Convert token IDs back to text using Friendli Model APIs. Decode tokenized output into a human-readable string for post-processing.

Example request:

curl --request POST \
  --url https://api.friendli.ai/serverless/v1/detokenize \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "tokens": [128000, 3923, 374, 1803, 1413, 15592, 30]
  }'

Example response:

{
  "text": "What is generative AI?"
}
By giving a list of tokens, generate a detokenized output text string. To make a successful request, you must provide a Personal API Key (e.g. flp_XXX) as the Bearer token. Refer to the authentication section on our introduction page to learn how to acquire this key, and visit here to generate your API Key.

Documentation Index
Fetch the complete documentation index at: https://friendli.ai/docs/llms.txt
Use this file to discover all available pages before exploring further.
Request

Team ID (optional): ID of the team to run requests as.

model: Code of the model to use. See the available model list.
Example: "meta-llama/Llama-3.1-8B-Instruct"

tokens: A token sequence to detokenize.
Example: [128000, 3923, 374, 1803, 1413, 15592, 30]

Response

200: Successfully detokenized the tokens.

text: Detokenized text output.
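Conceptually, detokenization maps each token ID back to its text piece and concatenates the pieces. The toy example below illustrates the idea with a made-up vocabulary; it is NOT the real Llama tokenizer, and the ID-to-piece mapping is purely hypothetical:

```python
# Toy illustration of detokenization. A real tokenizer has a vocabulary of
# tens of thousands of entries; this hypothetical one covers a single sentence.
TOY_VOCAB = {
    0: "<|begin_of_text|>",  # special tokens mark structure, not visible text
    1: "What",
    2: " is",
    3: " gener",
    4: "ative",
    5: " AI",
    6: "?",
}

def toy_detokenize(ids):
    # Look up each ID's text piece and join them; special tokens
    # (here, anything starting with "<|") are dropped from the output.
    return "".join(TOY_VOCAB[i] for i in ids if not TOY_VOCAB[i].startswith("<|"))

print(toy_detokenize([0, 1, 2, 3, 4, 5, 6]))  # → What is generative AI?
```

Note that pieces carry their own leading spaces (" is", " AI"), which is why simple concatenation reconstructs correct spacing.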