POST /dedicated/v1/chat/render
Chat render
curl --request POST \
  --url https://api.friendli.ai/dedicated/v1/chat/render \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "messages": [
    {
      "content": "You are a helpful assistant.",
      "role": "system"
    },
    {
      "content": "Hello!",
      "role": "user"
    }
  ],
  "model": "(endpoint-id)"
}'
{
  "text": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\nHello!<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
}
Given a list of messages forming a conversation, the API renders them into the final prompt text that will be sent to the model. To make a successful request, you must provide a Friendli Token (e.g. flp_XXX) in the Bearer Token field. Refer to the authentication section on our introduction page to learn how to acquire and generate your token.
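As a sketch, the curl request above can also be built from Python using only the standard library. The helper name is illustrative, and the token and endpoint ID are placeholders you must replace with your own values:

```python
import json
import urllib.request

API_URL = "https://api.friendli.ai/dedicated/v1/chat/render"

def build_render_request(token, endpoint_id, messages):
    """Build an urllib Request for the chat render endpoint.

    `token` and `endpoint_id` are placeholders: substitute your own
    Friendli Token (flp_...) and dedicated endpoint ID.
    """
    body = json.dumps({"messages": messages, "model": endpoint_id}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_render_request(
    "flp_XXX",
    "(endpoint-id)",
    [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)
# Sending the request requires a valid token and a running endpoint:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["text"])
```

The response body contains a single `text` field holding the rendered prompt.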

Authorizations

Authorization
string
header
required

When using Friendli Suite API for inference requests, you need to provide a Friendli Token for authentication and authorization purposes.

For more detailed information, refer to the authentication documentation.

Headers

X-Friendli-Team
string | null

ID of team to run requests as (optional parameter).

Body

application/json
messages
Messages · array
required

A list of messages comprising the conversation so far.

Examples:
[
  {
    "content": "You are a helpful assistant.",
    "role": "system"
  },
  { "content": "Hello!", "role": "user" }
]
model
string
required

ID of the target endpoint. To send a request to a specific adapter, use the format "YOUR_ENDPOINT_ID:YOUR_ADAPTER_ROUTE". Otherwise, use "YOUR_ENDPOINT_ID" alone.

Examples:

"(endpoint-id)"

chat_template_kwargs
object | null

Additional keyword arguments supplied to the template renderer. These parameters will be available for use within the chat template.
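For example, a request body can pass extra flags through to the template renderer. The sketch below is illustrative only: the "enable_thinking" key is hypothetical, and which kwargs are honored depends entirely on the chat template of your deployed model.

```python
import json

# Sketch of a request body using chat_template_kwargs.
# "enable_thinking" is a made-up key for illustration; consult your
# model's chat template to see which kwargs it actually reads.
body = {
    "model": "(endpoint-id)",
    "messages": [{"role": "user", "content": "Hello!"}],
    "chat_template_kwargs": {"enable_thinking": False},
}
print(json.dumps(body, indent=2))
```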

tools
Tool · object[] | null

A list of tools the model may call. Currently, only functions are supported as tools; a maximum of 128 functions is supported. Use this to provide a list of functions the model may generate JSON inputs for.

When tools is specified, the min_tokens and response_format fields are unsupported.
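As a sketch, a tools array with a single function definition might look like the following. The get_weather function and its schema are made up for illustration; only the "function" tool type is described by this API:

```python
import json

# Illustrative tools payload: only function tools are supported,
# up to 128 per request. The get_weather schema below is hypothetical.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]
body = {
    "model": "(endpoint-id)",
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": tools,
}
print(json.dumps(body, indent=2))
```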

Response

Successfully rendered chat messages into prompt text.

text
string
required

The rendered text.