POST /dedicated/v1/completions

See available models at this pricing table.

To run an inference request successfully, you must provide a Friendli Token (e.g., flp_XXX) in the Bearer Token field. Refer to the authentication section on our introduction page to learn how to acquire this token, and visit here to generate one.

When streaming mode is used (i.e., stream option is set to true), the response is in MIME type text/event-stream. Otherwise, the content type is application/json. You can view the schema of the streamed sequence of chunk objects in streaming mode here.
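
For example, a minimal non-streaming request in Python might look like the following. This is a sketch: the base URL https://api.friendli.ai, the placeholder token, and the endpoint ID are assumptions you should replace with your own values.

```python
import requests

# Hypothetical values: replace the token and endpoint ID with your own.
url = "https://api.friendli.ai/dedicated/v1/completions"  # assumed base URL
headers = {
    "Authorization": "Bearer flp_XXX",   # your Friendli Token
    "Content-Type": "application/json",
}
payload = {
    "model": "YOUR_ENDPOINT_ID",         # ID of the target dedicated endpoint
    "prompt": "Say this is a test.",
    "max_tokens": 64,
    "stream": False,                     # set to True for a text/event-stream response
}

response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()
print(response.json()["choices"])
```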

Authorizations

Authorization
string
header
required

When using Friendli Endpoints API for inference requests, you need to provide a Friendli Token for authentication and authorization purposes.

For more detailed information, please refer here.

Headers

X-Friendli-Team
string

ID of team to run requests as (optional parameter).

Body

application/json
prompt
string
required

The prompt (i.e., input text) to generate completions for. Either the prompt or the tokens field is required.

model
string
required

ID of the target endpoint. To send a request to a specific adapter, use the "ENDPOINT_ID:ADAPTER_ROUTE" format.
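
For instance, a payload that targets a specific adapter could look like this sketch (the endpoint ID and adapter route are placeholders):

```python
payload = {
    "model": "YOUR_ENDPOINT_ID:my-adapter-route",  # "ENDPOINT_ID:ADAPTER_ROUTE" format
    "prompt": "Summarize the following text: ...",
}
```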

bad_word_tokens
object[] | null

Same as the bad_words field, but receives token sequences instead of text phrases. This is similar to Hugging Face's bad_words_ids argument.

bad_words
string[] | null

Text phrases that should not be generated. For a bad word phrase that contains N tokens, if the first N-1 tokens appear at the end of the generated result, the logit for the last token of the phrase is set to -inf. Before checking whether a bad word is included in the result, the phrase is converted into tokens. We recommend using bad_word_tokens because it is less ambiguous. For example, after tokenization, the phrases "clear" and " clear" can result in different token sequences due to the prepended space character. Defaults to an empty list.
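
As a sketch, the two fields could be combined like this. The token IDs are made up, and the object shape for each bad_word_tokens entry (a {"tokens": [...]} sequence) is an assumption; check the field schema before relying on it.

```python
payload = {
    "model": "YOUR_ENDPOINT_ID",
    "prompt": "The weather today is",
    # Text phrases that must not appear in the output; note that "awful" and
    # " awful" tokenize differently because of the leading space.
    "bad_words": ["awful", " awful"],
    # The same restriction expressed as token sequences (entry shape assumed).
    "bad_word_tokens": [{"tokens": [1234, 567]}],  # hypothetical token IDs
}
```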

beam_compat_no_post_normalization
boolean | null

beam_compat_pre_normalization
boolean | null

beam_search_type
string | null
default: DETERMINISTIC

Which beam search type to use; one of DETERMINISTIC, NAIVE_SAMPLING, and STOCHASTIC. DETERMINISTIC means the standard, deterministic beam search, which is similar to Hugging Face's beam_search. Arguments for controlling random sampling, such as top_k and top_p, are not allowed with this option. NAIVE_SAMPLING is similar to Hugging Face's beam_sample. STOCHASTIC means stochastic beam search (more details in Kool et al. (2019)). This option is ignored if num_beams is not provided. Defaults to DETERMINISTIC.
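
As an illustration, a stochastic beam-search request could be sketched as follows (the parameter values are arbitrary):

```python
payload = {
    "model": "YOUR_ENDPOINT_ID",
    "prompt": "Translate to French: Good morning",
    "num_beams": 4,                    # beam search is enabled by num_beams
    "beam_search_type": "STOCHASTIC",  # ignored unless num_beams is provided
    "length_penalty": 1.2,
    "early_stopping": True,
}
```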

early_stopping
boolean | null
default: false

Whether to stop the beam search when at least num_beams beams are finished with the EOS token. Only allowed for beam search. Defaults to false. This is similar to Hugging Face's early_stopping argument.

embedding_to_replace
number[] | null

A list of flattened embedding vectors used for replacing the tokens at the specified indices provided via token_index_to_replace.

encoder_no_repeat_ngram
integer | null
default: 1

If this exceeds 1, every ngram of that size occurring in the input token sequence cannot appear in the generated result. 1 means that this mechanism is disabled (i.e., you cannot prevent 1-gram from being generated repeatedly). Only allowed for encoder-decoder models. Defaults to 1. This is similar to Hugging Face's encoder_no_repeat_ngram_size argument.

encoder_repetition_penalty
number | null

Penalizes tokens that have already appeared in the input tokens. Should be greater than or equal to 1.0. 1.0 means no penalty. Only allowed for encoder-decoder models. See Keskar et al., 2019 for more details. This is similar to Hugging Face's encoder_repetition_penalty argument.

eos_token
integer[] | null

A list of end-of-sentence (EOS) tokens.

forced_output_tokens
integer[] | null

A token sequence that is enforced as a generation output. This option can be used when evaluating the model for the datasets with multi-choice problems (e.g., HellaSwag, MMLU). Use this option with include_output_logprobs to get logprobs for the evaluation.
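
For example, a multi-choice evaluation request could pin the output to one candidate answer and read back its log-probabilities. The token IDs below are hypothetical.

```python
payload = {
    "model": "YOUR_ENDPOINT_ID",
    "prompt": "Question: ...\nAnswer:",
    "forced_output_tokens": [345, 678],   # hypothetical token IDs of one candidate answer
    "include_output_logprobs": True,      # return logprobs for the forced tokens
}
```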

frequency_penalty
number | null

Number between -2.0 and 2.0. Positive values penalize tokens that have already been sampled, taking into account their frequency in the preceding text. This penalization diminishes the model's tendency to reproduce identical lines verbatim.

include_output_logits
boolean | null

Whether to include the output logits in the generation output.

include_output_logprobs
boolean | null

Whether to include the output logprobs in the generation output.

length_penalty
number | null

Coefficient for exponential length penalty that is used with beam search. Only allowed for beam search. Defaults to 1.0. This is similar to Hugging Face's length_penalty argument.

max_tokens
integer | null

The maximum number of tokens to generate. For decoder-only models like GPT, the length of your input tokens plus max_tokens should not exceed the model's maximum length (e.g., 2048 for OpenAI GPT-3). For encoder-decoder models like T5 or BlenderBot, max_tokens should not exceed the model's maximum output length. This is similar to Hugging Face's max_new_tokens argument.

max_total_tokens
integer | null

The maximum number of tokens including both the generated result and the input tokens. Only allowed for decoder-only models. Only one argument between max_tokens and max_total_tokens is allowed. Default value is the model's maximum length. This is similar to Hugging Face's max_length argument.
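
For example, if the prompt tokenizes to 100 tokens and max_total_tokens is 2048, at most 1948 new tokens can be generated; max_tokens must not be passed in the same request, since only one of the two arguments is allowed.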

min_tokens
integer | null
default: 0

The minimum number of tokens to generate. Default value is 0. This is similar to Hugging Face's min_new_tokens argument.

min_total_tokens
integer | null

The minimum number of tokens including both the generated result and the input tokens. Only allowed for decoder-only models. Only one argument between min_tokens and min_total_tokens is allowed. This is similar to Hugging Face's min_length argument.

n
integer | null
default: 1

The number of independently generated results for the prompt. Not supported when using beam search. Defaults to 1. This is similar to Hugging Face's num_return_sequences argument.

no_repeat_ngram
integer | null
default: 1

If this exceeds 1, every ngram of that size can only occur once among the generated result (plus the input tokens for decoder-only models). 1 means that this mechanism is disabled (i.e., you cannot prevent 1-gram from being generated repeatedly). Defaults to 1. This is similar to Hugging Face's no_repeat_ngram_size argument.

num_beams
integer | null

Number of beams for beam search. Numbers between 1 and 31 (both inclusive) are allowed. Default behavior is no beam search. This is similar to Hugging Face's num_beams argument.

presence_penalty
number | null

Number between -2.0 and 2.0. Positive values penalize tokens that have been sampled at least once in the existing text.

repetition_penalty
number | null

Penalizes tokens that have already appeared in the generated result (plus the input tokens for decoder-only models). Should be greater than or equal to 1.0 (1.0 means no penalty). See Keskar et al., 2019 for more details. This is similar to Hugging Face's repetition_penalty argument.

response_format
object | null

The enforced format of the model's output.

Note that the content of the output message may be truncated if it exceeds the max_tokens. You can check this by verifying that the finish_reason of the output message is length.

Important: You must explicitly instruct the model to produce the desired output format using a system prompt or user message (e.g., You are an API generating a valid JSON as output.). Otherwise, the model may produce an unending stream of whitespace or other characters.
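
A hedged sketch of enforcing JSON output is shown below. The exact shape of the response_format object (here {"type": "json_object"}) is an assumption and should be verified against the response_format schema.

```python
payload = {
    "model": "YOUR_ENDPOINT_ID",
    # Explicitly instruct the model to emit the desired format.
    "prompt": "You are an API generating a valid JSON as output.\nDescribe a cat as JSON:",
    "response_format": {"type": "json_object"},  # assumed object shape
    "max_tokens": 256,
}
```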

seed
integer[] | null

Seed to control the random procedure. If nothing is given, the API generates a seed randomly, uses it for sampling, and returns the seed along with the generated result. When using the n argument, you can pass a list of seed values to control all of the independent generations.
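
For instance, to make three independent generations reproducible, one seed per generation can be supplied (the seed values are arbitrary):

```python
payload = {
    "model": "YOUR_ENDPOINT_ID",
    "prompt": "Write a haiku about the sea.",
    "n": 3,                  # three independent generations
    "seed": [11, 22, 33],    # one seed per generation
    "temperature": 0.8,
}
```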

stop
string[] | null

When one of the stop phrases appears in the generation result, the API will stop generation. The stop phrases are excluded from the result. This option is incompatible with beam search (specified by num_beams); use stop_tokens for that case instead. Defaults to empty list.

stop_tokens
object[] | null

Stop generating further tokens when a generated token corresponds to any of the given stop tokens. If beam search is enabled, all of the active beams must contain the stop token for generation to terminate.

stream
boolean | null
default: false

Whether to stream the generation result. When set to true, each token is sent as a server-sent event as soon as it is generated. Not supported when using beam search.
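
A minimal streaming consumer might look like the sketch below. It assumes the base URL https://api.friendli.ai and an OpenAI-style SSE stream of "data: {...}" lines terminated by "data: [DONE]"; check the streamed chunk schema linked above for the exact format.

```python
import json
import requests

with requests.post(
    "https://api.friendli.ai/dedicated/v1/completions",    # assumed base URL
    headers={"Authorization": "Bearer flp_XXX"},           # your Friendli Token
    json={"model": "YOUR_ENDPOINT_ID", "prompt": "Once upon a time", "stream": True},
    stream=True,
) as response:
    response.raise_for_status()
    for line in response.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        data = line[len(b"data: "):]
        if data == b"[DONE]":                              # assumed stream terminator
            break
        print(json.loads(data))                            # one chunk object per event
```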

stream_options
object | null

Options related to streaming. Can only be used when stream is set to true.

temperature
number | null
default: 1

Sampling temperature. A smaller temperature makes the generation result closer to greedy, argmax (i.e., top_k = 1) sampling. Defaults to 1.0. This is similar to Hugging Face's temperature argument.

timeout_microseconds
integer | null

Request timeout in microseconds. When the timeout expires, the request fails with an HTTP 429 Too Many Requests response status code. Default behavior is no timeout.

token_index_to_replace
integer[] | null

A list of token indices at which to replace the embeddings of the input tokens provided via either tokens or prompt.
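
A sketch of the embedding-replacement pair is shown below; the token index, the vector values, and the hidden size are all made up for illustration.

```python
hidden_size = 4096  # hypothetical hidden size of the target model
payload = {
    "model": "YOUR_ENDPOINT_ID",
    "prompt": "The placeholder walked into the room.",
    "token_index_to_replace": [1],                 # replace the embedding of the token at index 1
    "embedding_to_replace": [0.0] * hidden_size,   # flattened vectors, one per replaced index
}
```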

top_k
integer | null
default: 0

The number of highest probability tokens to keep for sampling. Numbers between 0 and the vocab size of the model (both inclusive) are allowed. The default value is 0, which means that the API does not apply top-k filtering. This is similar to Hugging Face's top_k argument.

top_p
number | null
default: 1

Tokens comprising the top top_p probability mass are kept for sampling. Numbers between 0.0 (exclusive) and 1.0 (inclusive) are allowed. Defaults to 1.0. This is similar to Hugging Face's top_p argument.

Response

200 - application/json

choices
object[]
required

usage
object
required