Edit images with text prompts using your Friendli Dedicated Endpoint. Upload an image and describe the desired modifications for the model to apply.

Example request (the image field takes a URL or a base64-encoded string, not a local file path):

curl --request POST \
  --url https://api.friendli.ai/dedicated/v1/images/edits \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png",
  "model": "(endpoint-id)",
  "prompt": "Add a red sports car in the foreground."
}
'

Example response:

{
  "data": [
    {
      "url": "(url-to-edited-image)",
      "seed": 789,
      "response_format": "url"
    }
  ]
}
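The same call can be sketched in Python with the standard library; a minimal example in which the token, endpoint ID, and the helper name `build_edit_request` are placeholders, not part of the official API:

```python
import json
from urllib import request  # stdlib HTTP client

API_URL = "https://api.friendli.ai/dedicated/v1/images/edits"

def build_edit_request(token: str, model: str, image: str, prompt: str):
    """Assemble headers and JSON body for an image-edit request.

    `image` may be a URL or a base64 data URI, matching the `image`
    field shown in the curl example above.
    """
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    payload = {"image": image, "model": model, "prompt": prompt}
    return headers, payload

# Placeholder values; substitute your own endpoint ID and Personal API Key.
headers, payload = build_edit_request(
    token="flp_XXX",
    model="(endpoint-id)",
    image="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png",
    prompt="Add a red sports car in the foreground.",
)
req = request.Request(
    API_URL, data=json.dumps(payload).encode(), headers=headers, method="POST"
)
# Sending the request requires a live endpoint and a valid key:
# with request.urlopen(req) as resp:
#     edited_url = json.load(resp)["data"][0]["url"]
```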
Given an image and a description, the model edits the image. To make a successful request, you must supply a Personal API Key (e.g. flp_XXX) as the Bearer token. Refer to the authentication section on our introduction page to learn how to acquire and generate your API key.
ID of the team to run requests as (optional).
The image(s) to edit. Must be in a supported image format.
An input image, provided either as a URL or a base64-encoded string. Maximum supported image size is 50 MiB.
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png"
"data:image/png;base64,..."
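For a local file, the base64 form can be produced like this; a sketch in which the helper name and file path are illustrative, not part of the API:

```python
import base64

def to_data_uri(png_bytes: bytes) -> str:
    """Encode raw PNG bytes as the base64 data-URI form of the image field."""
    encoded = base64.b64encode(png_bytes).decode("ascii")
    return f"data:image/png;base64,{encoded}"

# Usage with a local file (path is illustrative):
# with open("image.png", "rb") as f:
#     image_value = to_data_uri(f.read())
```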
A text description of the desired image(s).
ID of the target endpoint. To send the request to a specific adapter, use the format "YOUR_ENDPOINT_ID:YOUR_ADAPTER_ROUTE"; otherwise, use "YOUR_ENDPOINT_ID" alone.
"(endpoint-id)"
The number of inference steps to use during image generation. Defaults to 20. Supported range: [1, 50].
Adjusts the alignment of the generated image with the input prompt. Higher values (e.g., 8-10) make the output more faithful to the prompt, while lower values (e.g., 1-5) encourage more creative freedom. This parameter may be irrelevant for certain models, such as FLUX.Schnell.
The seed to use for image generation.
The format in which the generated image(s) will be returned. One of url (default), raw, png, jpeg, and jpg.
A successful request returns the edited image(s).
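With the default url response format, the location of each edited image can be pulled from the response body; a minimal sketch (the helper name is illustrative) using the example response shown on this page:

```python
def extract_image_urls(body: dict) -> list:
    """Collect the URL of each edited image from a successful response."""
    return [item["url"] for item in body.get("data", [])]

# The example response from this page:
sample = {
    "data": [
        {"url": "(url-to-edited-image)", "seed": 789, "response_format": "url"}
    ]
}
urls = extract_image_urls(sample)  # ["(url-to-edited-image)"]
```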