Retrieve the full specification of a Friendli Dedicated Endpoint by ID, including model config, GPU type, replica count, and deployment settings.

curl --request GET \
  --url https://api.friendli.ai/dedicated/beta/endpoint/{endpoint_id} \
  --header 'Authorization: Bearer <token>'

Example response:

{
  "name": "endpoint-name",
  "gpuType": "NVIDIA H100",
  "numGpu": 1,
  "instanceId": "instance-id",
  "projectId": "project-id",
  "creatorId": "creator-id",
  "teamId": "team-id",
  "autoscalingMin": 0,
  "autoscalingMax": 1,
  "autoscalingCooldown": 300,
  "maxBatchSize": 10,
  "maxInputLength": 1024,
  "tokenizerSkipSpecialTokens": true,
  "tokenizerAddSpecialTokens": true,
  "currReplicaCnt": 1,
  "desiredReplicaCnt": 1,
  "updatedReplicaCnt": 1
}
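A minimal Python sketch of the same request, using only the standard library. The URL and Authorization header follow the curl example above; the endpoint ID and token values are placeholders, and the helper names are my own, not part of any official SDK.

```python
import json
import urllib.request

API_BASE = "https://api.friendli.ai/dedicated/beta"


def endpoint_url(endpoint_id: str) -> str:
    """Build the retrieval URL for a dedicated endpoint."""
    return f"{API_BASE}/endpoint/{endpoint_id}"


def auth_headers(token: str) -> dict:
    """Bearer-token header, as in the curl example."""
    return {"Authorization": f"Bearer {token}"}


def get_endpoint_spec(endpoint_id: str, token: str) -> dict:
    """Fetch and decode the endpoint specification (performs a network call)."""
    req = urllib.request.Request(
        endpoint_url(endpoint_id), headers=auth_headers(token)
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

With a real endpoint ID and Personal API Key, `get_endpoint_spec("<endpoint-id>", "<token>")` returns the JSON body shown above as a Python dict.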
Given an endpoint ID, this endpoint returns its full specification. To make a successful request, you must supply a Personal API Key (e.g. flp_XXX) as the Bearer token. Refer to the authentication section on our introduction page to learn how to acquire and generate your API key.
Headers
- Team ID (optional): ID of the team to run requests as.

Path Parameters
- endpoint_id: The ID of the endpoint.

Response (200): Successfully retrieved the endpoint specification.

Dedicated endpoint specification:
- name: The name of the endpoint.
- gpuType: The type of GPU to use for the endpoint.
- numGpu: The number of GPUs to use per replica.
- instanceId: The ID of the instance.
- projectId: The ID of the project that owns the endpoint.
- creatorId: The ID of the user who created the endpoint.
- teamId: The ID of the team that owns the endpoint.
- autoscalingMin: The minimum number of replicas to maintain.
- autoscalingMax: The maximum number of replicas allowed.
- autoscalingCooldown: The cooldown period in seconds between scaling operations.
- maxBatchSize: The maximum batch size for inference requests.
- maxInputLength: The maximum allowed input length.
- tokenizerSkipSpecialTokens: Whether to skip special tokens in tokenizer output.
- tokenizerAddSpecialTokens: Whether to add special tokens in tokenizer input.
- currReplicaCnt: The current number of replicas.
- desiredReplicaCnt: The desired number of replicas.
- updatedReplicaCnt: The updated number of replicas.
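To illustrate how these fields fit together, here is a hedged sketch that parses a subset of the example payload above and inspects the autoscaling settings. The JSON keys come from the documented response; the dataclass and method names are illustrative, not part of any official client.

```python
import json
from dataclasses import dataclass

# Subset of the example response payload shown above.
SPEC_JSON = """{
  "name": "endpoint-name",
  "gpuType": "NVIDIA H100",
  "numGpu": 1,
  "autoscalingMin": 0,
  "autoscalingMax": 1,
  "autoscalingCooldown": 300,
  "currReplicaCnt": 1,
  "desiredReplicaCnt": 1
}"""


@dataclass
class EndpointSpec:
    name: str
    gpu_type: str
    num_gpu: int
    autoscaling_min: int
    autoscaling_max: int
    autoscaling_cooldown: int
    curr_replica_cnt: int
    desired_replica_cnt: int

    @classmethod
    def from_json(cls, raw: str) -> "EndpointSpec":
        d = json.loads(raw)
        return cls(
            name=d["name"],
            gpu_type=d["gpuType"],
            num_gpu=d["numGpu"],
            autoscaling_min=d["autoscalingMin"],
            autoscaling_max=d["autoscalingMax"],
            autoscaling_cooldown=d["autoscalingCooldown"],
            curr_replica_cnt=d["currReplicaCnt"],
            desired_replica_cnt=d["desiredReplicaCnt"],
        )

    def scales_to_zero(self) -> bool:
        # autoscalingMin == 0 means the endpoint may scale down
        # to zero replicas when idle.
        return self.autoscaling_min == 0


spec = EndpointSpec.from_json(SPEC_JSON)
```

Here `spec.scales_to_zero()` is `True` because the example sets `autoscalingMin` to 0, and `spec.autoscaling_cooldown` reports the 300-second gap enforced between scaling operations.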