When streaming mode is used (i.e., the stream option is set to true), the response is in MIME type text/event-stream. Otherwise, the content type is application/json.
You can view the schema of the streamed sequence of chunk objects in streaming mode here.
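For illustration, here is a minimal Python sketch of a non-streaming and a streaming request using the requests library. The base URL and the authentication header are placeholders assumed for this sketch, not values documented on this page, and the "data:" / "[DONE]" event framing follows the common OpenAI-compatible server-sent-events convention, which is likewise an assumption.

import json
import requests

# Placeholder values for illustration only; substitute your real endpoint URL and token.
BASE_URL = "https://api.example.com/v1/chat/completions"  # hypothetical URL
headers = {"Authorization": "Bearer YOUR_TOKEN"}

payload = {
    "model": "meta-llama-3.1-8b-instruct",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "stream": False,  # response arrives as a single application/json body
}
resp = requests.post(BASE_URL, headers=headers, json=payload)
print(resp.json())

# With "stream": true the response is text/event-stream; each event carries one chunk object.
payload["stream"] = True
with requests.post(BASE_URL, headers=headers, json=payload, stream=True) as resp:
    for line in resp.iter_lines():
        # The "data: " prefix and "[DONE]" sentinel are assumed OpenAI-style conventions.
        if line and line.startswith(b"data: ") and line != b"data: [DONE]":
            chunk = json.loads(line[len(b"data: "):])
            print(chunk)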
Authorizations
Headers
ID of team to run requests as (optional parameter).
Body
A list of messages comprising the conversation so far.
[
{
"content": "You are a helpful assistant.",
"role": "system"
},
{ "content": "Hello!", "role": "user" }
]
Code of the model to use. See the available model list.
"meta-llama-3.1-8b-instruct"
Additional keyword arguments supplied to the template renderer. These parameters will be available for use within the chat template.
A list of end-of-sentence (EOS) tokens.
Number between -2.0 and 2.0. Positive values penalize tokens that have already been sampled, taking into account their frequency in the preceding text. This penalization diminishes the model's tendency to reproduce identical lines verbatim.
Accepts a JSON object that maps tokens to an associated bias value. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model.
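As an illustrative body fragment, assuming token IDs are supplied as string keys in the OpenAI-compatible style (the exact key format may vary):

# Hypothetical payload fragment: nudge token ID 15043 upward and suppress token ID 50256.
payload = {
    "model": "meta-llama-3.1-8b-instruct",
    "messages": [{"role": "user", "content": "Hello!"}],
    "logit_bias": {"15043": 5, "50256": -100},  # key format is an assumption
}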
Whether to return log probabilities of the output tokens or not.
The maximum number of tokens to generate. For decoder-only models like GPT, the length of your input tokens plus max_tokens should not exceed the model's maximum length (e.g., 2048 for OpenAI GPT-3). For encoder-decoder models like T5 or BlenderBot, max_tokens should not exceed the model's maximum output length. This is similar to Hugging Face's max_new_tokens argument.
200
A scaling factor used to determine the minimum token probability threshold. This threshold is calculated as min_p multiplied by the probability of the most likely token. Tokens with probabilities below this scaled threshold are excluded from sampling. Values range from 0.0 (inclusive) to 1.0 (inclusive). Higher values result in stricter filtering, while lower values allow for greater diversity. The default value of 0.0 disables filtering, allowing all tokens to be considered for sampling.
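A minimal sketch of the filtering rule described above, for intuition only (not the service's implementation):

import numpy as np

def min_p_filter(probs: np.ndarray, min_p: float) -> np.ndarray:
    """Zero out tokens whose probability falls below min_p * p(most likely token)."""
    threshold = min_p * probs.max()
    filtered = np.where(probs >= threshold, probs, 0.0)
    return filtered / filtered.sum()  # renormalize before sampling

probs = np.array([0.5, 0.3, 0.15, 0.05])
print(min_p_filter(probs, min_p=0.2))  # threshold = 0.1, so the 0.05 token is excluded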
The number of independently generated results for the prompt. Defaults to 1. This is similar to Hugging Face's num_return_sequences argument.
Whether to enable parallel function calling.
Number between -2.0 and 2.0. Positive values penalize tokens that have been sampled at least once in the existing text.
Penalizes tokens that have already appeared in the generated result (plus the input tokens for decoder-only models). Should be a positive value (1.0 means no penalty). See Keskar et al., 2019 for more details. This is similar to Hugging Face's repetition_penalty argument.
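As a sketch of how a repetition penalty of this kind is commonly applied to logits (following the Hugging Face / Keskar et al. formulation; illustrative only, not necessarily this service's exact implementation):

import numpy as np

def apply_repetition_penalty(logits: np.ndarray, seen_token_ids: set, penalty: float) -> np.ndarray:
    """Divide positive logits (multiply negative ones) for tokens already generated."""
    out = logits.copy()
    for tok in seen_token_ids:
        out[tok] = out[tok] / penalty if out[tok] > 0 else out[tok] * penalty
    return out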
Enable this option to continue text generation even if an error occurs during a tool call.
Note that enabling this option may use more tokens, as the system generates additional content to handle errors gracefully. However, if the system fails more than 8 times, the generation will stop regardless.
Tip: This is useful in scenarios where you want to maintain text generation flow despite errors, such as when generating long-form content. The user will not be interrupted by tool call issues, ensuring a smoother experience.
Seed to control the random procedure. If no seed is given, a random seed is used for sampling, and the seed is returned along with the generated result. When using the n argument, you can pass a list of seed values to control all of the independent generations.
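For example, a hypothetical body fragment requesting three independent generations, each pinned to its own seed:

payload = {
    "model": "meta-llama-3.1-8b-instruct",
    "messages": [{"role": "user", "content": "Write a haiku."}],
    "n": 3,
    "seed": [42, 43, 44],  # one seed per independent generation
}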
When one of the stop phrases appears in the generation result, the API will stop generation. The stop phrases are excluded from the result. Defaults to an empty list.
Whether to stream the generation result. When set to true, each token will be sent as a server-sent event once it is generated.
Options related to streaming. These can only be used when stream: true.
Sampling temperature. A smaller temperature makes the generation result closer to greedy, argmax (i.e., top_k = 1) sampling. Defaults to 1.0. This is similar to Hugging Face's temperature argument.
Determines the tool calling behavior of the model. When set to none, the model will bypass tool execution and generate a response directly. In auto mode (the default), the model dynamically decides whether to call a tool or respond with a message. Alternatively, setting required ensures that the model invokes at least one tool before responding to the user. You can also specify a particular tool by {"type": "function", "function": {"name": "my_function"}}.
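As an illustration of forcing a specific function, here is a hypothetical body fragment. The shape of the tools entry follows OpenAI-style conventions and is an assumption; the function name and schema are made up for this example.

payload = {
    "model": "meta-llama-3.1-8b-instruct",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "my_function",  # hypothetical function
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    # "none", "auto" (default), "required", or a specific tool:
    "tool_choice": {"type": "function", "function": {"name": "my_function"}},
}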
Limits sampling to the top k tokens with the highest probabilities. Values range from 0 (no filtering) to the model's vocabulary size (inclusive). The default value of 0 applies no filtering, allowing all tokens.
The number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
Keeps only the smallest set of tokens whose cumulative probabilities reach top_p or higher. Values range from 0.0 (exclusive) to 1.0 (inclusive). The default value of 1.0 includes all tokens, allowing maximum diversity.
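A minimal sketch of the nucleus (top_p) rule described above, for intuition only (not the service's implementation):

import numpy as np

def top_p_filter(probs: np.ndarray, top_p: float) -> np.ndarray:
    """Keep the smallest set of highest-probability tokens whose cumulative mass reaches top_p."""
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    # Number of tokens needed for the cumulative probability to reach top_p.
    keep = np.searchsorted(cumulative, top_p) + 1
    mask = np.zeros_like(probs)
    mask[order[:keep]] = 1.0
    filtered = probs * mask
    return filtered / filtered.sum()

probs = np.array([0.5, 0.3, 0.15, 0.05])
print(top_p_filter(probs, top_p=0.8))  # keeps only the 0.5 and 0.3 tokens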
The probability that XTC (Exclude Top Choices) filtering will be applied for each sampling decision. When XTC is triggered, high-probability tokens above the xtc_threshold are excluded except for the least likely viable token. This stochastic activation allows for a balance between standard sampling and creativity-boosting exclusion filtering. Values range from 0.0 (inclusive) to 1.0 (inclusive), where 0.0 means XTC is never applied, 1.0 means XTC is always applied when viable tokens exist, and intermediate values provide probabilistic activation. The default value of 0.0 disables XTC filtering.
A probability threshold used to identify “top choice” tokens for exclusion in XTC (Exclude Top Choices) sampling. Tokens with probabilities at or above this threshold are considered viable candidates, and all but the least likely viable token are excluded from sampling. This option reduces the dominance of highly probable tokens while preserving some diversity by keeping the least confident “top choice.” Values range from 0.0 (inclusive) to 1.0 (inclusive). Higher values make the filtering more selective by requiring higher probabilities to trigger exclusion, while lower values apply filtering more broadly. The default value of 0.0 disables XTC filtering entirely.
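A sketch of the XTC rule as described above, under the assumption that "viable" means probability at or above xtc_threshold; illustrative only, not the service's exact implementation:

import random
import numpy as np

def xtc_filter(probs: np.ndarray, xtc_probability: float, xtc_threshold: float) -> np.ndarray:
    """With probability xtc_probability, drop every token at or above xtc_threshold
    except the least likely of those 'top choice' tokens."""
    viable = np.where(probs >= xtc_threshold)[0]
    # XTC only triggers stochastically, and only when more than one token is viable.
    if len(viable) < 2 or random.random() >= xtc_probability:
        return probs
    keep = viable[np.argmin(probs[viable])]  # least likely viable token survives
    filtered = probs.copy()
    filtered[viable] = 0.0
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()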
Response
Successfully generated a tool-assisted chat response.
The Unix timestamp (in seconds) for when the generation completed.
A unique ID of the chat completion.
The object type, which is always set to chat.completion.
"chat.completion"
The model used to generate the completion. For dedicated endpoints, it returns the endpoint ID.