Generate images from text descriptions using your Friendli Dedicated Endpoint. Supports configurable image size, count, and generation parameters.

Example request:

```sh
curl --request POST \
  --url https://api.friendli.ai/dedicated/v1/images/generations \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "(endpoint-id)",
    "prompt": "An orange Lamborghini driving down a hill road at night with a beautiful ocean view in the background."
  }'
```

Example response:

```json
{
  "data": [
    {
      "url": "(url-to-generated-image)",
      "seed": 123,
      "response_format": "url"
    }
  ]
}
```
Given a description, the model generates one or more images. To make a successful request, you must supply a Personal API Key (e.g. `flp_XXX`) as the Bearer token. Refer to the authentication section on our introduction page to learn how to acquire and generate your API Key.
Parameters:

- Team ID — ID of the team to run requests as (optional).
- `model` — ID of the target endpoint. To send the request to a specific adapter, use the format `"YOUR_ENDPOINT_ID:YOUR_ADAPTER_ROUTE"`; otherwise, `"YOUR_ENDPOINT_ID"` alone is sufficient. Example: `"(endpoint-id)"`.
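The endpoint/adapter addressing rule above can be captured in a tiny helper. This is an illustrative sketch (`model_field` is a hypothetical name, not part of any SDK):

```python
def model_field(endpoint_id, adapter_route=None):
    """Build the "model" value: "YOUR_ENDPOINT_ID" alone, or
    "YOUR_ENDPOINT_ID:YOUR_ADAPTER_ROUTE" to target a specific adapter."""
    if adapter_route:
        return f"{endpoint_id}:{adapter_route}"
    return endpoint_id
```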
- `prompt` — A text description of the desired image(s).
- Inference steps — The number of inference steps to use during image generation. Defaults to 20. Supported range: [1, 50].
- Guidance scale — Adjusts the alignment of the generated image with the input prompt. Higher values (e.g., 8-10) make the output more faithful to the prompt, while lower values (e.g., 1-5) encourage more creative freedom. This parameter may be irrelevant for certain models, such as FLUX.Schnell.
- `seed` — The seed to use for image generation.
- `response_format` — The format in which the generated image(s) will be returned. One of `url` (default), `raw`, `png`, `jpeg`, and `jpg`.
- Control images — Optional input images used to condition or guide the generation process. This field is only applicable when using ControlNet or image editing models.
  - Each input image is provided either as a URL or a base64-encoded string. Maximum supported image size is 50 MiB. Examples: `"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png"` or `"data:image/png;base64,..."`.
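A local file can be embedded in the base64 form shown above with a few lines of standard-library Python. A minimal sketch; the size check mirrors the documented 50 MiB cap, and the MIME type is guessed from the filename:

```python
import base64
import mimetypes
import os

MAX_BYTES = 50 * 1024 * 1024  # documented 50 MiB limit


def to_data_url(path):
    """Read an image file and return a "data:<mime>;base64,..." string."""
    if os.path.getsize(path) > MAX_BYTES:
        raise ValueError("image exceeds the 50 MiB limit")
    mime, _ = mimetypes.guess_type(path)
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime or 'application/octet-stream'};base64,{encoded}"
```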
- ControlNet weights — A list of weights that determine the influence of each ControlNet model in the generation process. Each value must be within [0, 1], where 0 disables the corresponding ControlNet and 1 applies it fully. When multiple ControlNet models are used, the list length must match the number of control images. If omitted, all ControlNet models default to full influence (1.0). This field is only applicable when using ControlNet models.
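The constraints on ControlNet weights can be checked client-side before sending a request. A sketch of the rules as described above (`validate_controlnet_scales` is an illustrative helper, not an SDK function):

```python
def validate_controlnet_scales(scales, num_control_images):
    """Validate ControlNet weights: one value in [0, 1] per control image;
    if omitted, every ControlNet defaults to full influence (1.0)."""
    if scales is None:
        return [1.0] * num_control_images
    if len(scales) != num_control_images:
        raise ValueError("one weight is required per control image")
    for s in scales:
        if not 0.0 <= s <= 1.0:
            raise ValueError("each weight must lie within [0, 1]")
    return list(scales)
```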
Response: a successful request returns the generated image(s) in the `data` array.