cURL
```bash
curl --request POST \
  --url https://api.friendli.ai/v1/detokenize \
  --header 'Content-Type: application/json' \
  --data '{
    "tokens": [128000, 3923, 374, 1803, 1413, 15592, 30]
  }'
```
{ "text": "What is generative AI?" }
Given a list of tokens, this endpoint generates the detokenized output text string.
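As a rough sketch, the same request can be made from Python with the `requests` library. The Authorization header below is an assumption (the cURL example above does not show one); set `FRIENDLI_TOKEN` in your environment if your deployment requires a Bearer token.

```python
import os
import requests

# Sketch of the detokenize call shown in the cURL example above.
# The Bearer token header is an assumption; adjust to your deployment's auth.
url = "https://api.friendli.ai/v1/detokenize"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['FRIENDLI_TOKEN']}",
}
payload = {"tokens": [128000, 3923, 374, 1803, 1413, 15592, 30]}

response = requests.post(url, headers=headers, json=payload, timeout=30)
response.raise_for_status()
print(response.json()["text"])  # -> "What is generative AI?"
```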
tokens: A token sequence to detokenize.
Optionally routes the request to a specific adapter; example value: "(adapter-route)". See the sketch after these field descriptions.
Successfully detokenized the tokens.
text: The detokenized text output.
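Adapter routing would be expressed in the request body alongside the tokens. A minimal sketch follows; the field name `model` is an assumption not confirmed by this page, and only the example value "(adapter-route)" comes from the description above.

```python
import os
import requests

# Sketch of routing the detokenize request to a specific adapter.
# NOTE: the "model" field name is an assumption; "(adapter-route)" is the
# example value from the field description above. Replace both with the
# adapter route of your deployment.
payload = {
    "model": "(adapter-route)",
    "tokens": [128000, 3923, 374, 1803, 1413, 15592, 30],
}
response = requests.post(
    "https://api.friendli.ai/v1/detokenize",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['FRIENDLI_TOKEN']}",  # assumed auth
    },
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json()["text"])
```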