POST /serverless/v1/completions
curl --request POST \
  --url https://api.friendli.ai/serverless/v1/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "model": "meta-llama-3.1-8b-instruct",
  "prompt": "Say this is a test!"
}'
{
  "id": "cmpl-26a1e10db8544bc3adb488d2d205288b",
  "model": "meta-llama-3.1-8b-instruct",
  "object": "text_completion",
  "choices": [
    {
      "index": 0,
      "seed": 42,
      "text": "This is indeed a test",
      "tokens": [
        128000,
        2028,
        374,
        13118,
        264,
        1296
      ],
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 7,
    "completion_tokens": 6,
    "total_tokens": 13
  }
}

See the available models in this pricing table.

To send a successful request, you must provide a Friendli Token (e.g., flp_XXX) as the Bearer token. Refer to the authentication section of our introduction page to learn how to acquire this token, and visit here to generate one.

When streaming mode is used (i.e., the stream option is set to true), the response has MIME type text/event-stream. Otherwise, the content type is application/json. You can view the schema of the streamed sequence of chunk objects in streaming mode here.
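For example, a streaming request might look like the following (a minimal sketch: aside from the stream option described above, the body mirrors the example request at the top of this page):

curl --request POST \
  --url https://api.friendli.ai/serverless/v1/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "model": "meta-llama-3.1-8b-instruct",
  "prompt": "Say this is a test!",
  "stream": true
}'

The server then responds with a text/event-stream of chunk objects instead of a single JSON object.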

Authorizations

Authorization
string
header
required

When using the Friendli Suite API for inference requests, you must provide a Friendli Token for authentication and authorization.

For more detailed information, see here.

Headers

X-Friendli-Team
string | null

ID of the team to run requests as (optional).
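For example, a request run on behalf of a specific team might look like the following (a minimal sketch; <team_id> is a placeholder for your own team's ID):

curl --request POST \
  --url https://api.friendli.ai/serverless/v1/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'X-Friendli-Team: <team_id>' \
  --header 'Content-Type: application/json' \
  --data '{
  "model": "meta-llama-3.1-8b-instruct",
  "prompt": "Say this is a test!"
}'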

Body

application/json

Response

200
application/json
Successfully generated completions.

The response is of type object.