Dedicated inference at scale

Run inference with unmatched speed and reliability at scale

LG AI Research

“EXAONE models run incredibly fast on FriendliAI’s inference platform, and users are highly satisfied with the performance. With FriendliAI’s support, customers have been able to shorten the time required to test and evaluate EXAONE by several weeks. This has enabled them to integrate EXAONE into their services more quickly, accelerating adoption and driving real business impact.”

Clayton Park, AI Business Team Lead, LG AI Research

Benefits

Production-scale performance and reliability

Dedicated Endpoints let you deploy and run models quickly, reliably, and cost-efficiently at scale.

Maximize inference speed

Unlock low latency and high throughput with an inference stack optimized end to end by our proprietary technology.

Run inference reliably

Ensure 99.99% uptime with our geo-distributed, multi-cloud infrastructure, engineered for reliability at scale.

Scale smarter, spend less

Slash costs with our purpose-built inference stack and scale seamlessly to handle fluctuating traffic.

Deploy the way you need

Serverless

The simplest way to run inference (see the sample call after these plans)

  • Start instantly—no configuration needed
  • Use free built-in tools
  • Pay per token or GPU time

On Demand

Dedicated GPU instances

  • Get guaranteed performance
  • Run custom and 500K+ open-source models
  • Pay for GPU time

Enterprise Reserved

Reserved GPU instances with discounts

  • Reserve GPUs for 1+ months
  • Access exclusive features
  • Discounted upfront payment
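
To make the tiers concrete, here is a minimal sketch of a Serverless call, assuming the endpoint speaks the OpenAI-compatible chat completions protocol. The base URL, environment variable name, and model ID below are illustrative assumptions, not documented values; check the FriendliAI docs for the exact ones.

```python
# Minimal Serverless sketch, assuming an OpenAI-compatible API.
# The base URL, env var, and model ID are assumptions, not
# documented values.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.friendli.ai/serverless/v1",  # assumed base URL
    api_key=os.environ["FRIENDLI_TOKEN"],              # assumed env var
)

resp = client.chat.completions.create(
    model="meta-llama-3.1-8b-instruct",  # assumed serverless model ID
    messages=[{"role": "user", "content": "What is dedicated inference?"}],
)
print(resp.choices[0].message.content)
```

If the dedicated tiers expose the same OpenAI-compatible surface, moving from Serverless to On Demand would mean swapping only the base URL and model ID, with the client code unchanged.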

Features

The complete inference solution

Speed, reliability, scaling, deployment, and enterprise support. Everything you need to run inference at scale.

Blazing-fast inference

Deliver unmatched speed and throughput with a stack built on custom kernels, caching, quantization, speculative decoding, and routing (a toy sketch of speculative decoding follows these features).

Always-on reliability

Guarantee uptime through a resilient multi-cloud architecture with automated failover and recovery.

Effortless autoscaling

Scale inference dynamically across GPUs, instantly right-sizing capacity to match demand.

Powerful model tooling

Track performance, usage, and logs in real time, and perform live model updates without disruption.

Simple, optimized deployment

Deploy your models easily, fully optimized, with quantization and speculative decoding ready out of the box.

Enterprise-grade support

Get dedicated engineering, compliance, and VPC support in our SOC 2–compliant environment.
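
Of the optimizations listed above, speculative decoding is the least self-explanatory, so here is a toy, self-contained sketch of the idea: a cheap draft model proposes a few tokens ahead, and the expensive target model verifies them in a single pass, keeping the longest agreeing prefix. The greedy exact-match verification and the arithmetic stand-in "models" below are simplifications for illustration; production systems verify with rejection sampling against real model distributions.

```python
# Toy speculative decoding: draft k tokens with a cheap model, then
# verify them with one pass of the expensive "target" model, keeping
# the agreed prefix plus one corrected token. Both "models" here are
# arithmetic stand-ins, not real networks.

def draft_next(tokens):   # fast, approximate draft model
    return (sum(tokens) * 7 + 3) % 50

def target_next(tokens):  # slow, authoritative target model
    s = sum(tokens)
    return (s * 7 + 3) % 50 if s % 5 else s % 50  # usually agrees with draft

def speculative_step(tokens, k=4):
    """Advance the sequence by up to k tokens per target-model pass."""
    draft = []
    for _ in range(k):
        draft.append(draft_next(tokens + draft))
    out = list(tokens)
    for i, proposed in enumerate(draft):
        actual = target_next(tokens + draft[:i])
        out.append(actual)      # accept the match or take the correction
        if actual != proposed:  # first mismatch: discard remaining drafts
            break
    return out

seq = [1]
for _ in range(5):
    seq = speculative_step(seq)
print(seq)  # several tokens generated per expensive verification pass
```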

Read our docs

Access the model you want

Access the world’s largest model collection, 507,494 models and counting, through seamless Hugging Face integration. From text generation to computer vision, launch any model with a single click.
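
As a sketch of what that single click might look like programmatically, the request below creates a dedicated endpoint from a Hugging Face repo ID. The route, payload fields, and auth scheme are hypothetical placeholders rather than FriendliAI's documented API; the point is only that the Hugging Face repo name is the unit of deployment.

```python
# Hypothetical sketch: launch a dedicated endpoint from a Hugging Face
# repo ID. The route, field names, and auth scheme are assumptions,
# not the documented FriendliAI API.
import os

import requests

resp = requests.post(
    "https://api.friendli.ai/dedicated/v1/endpoints",  # assumed route
    headers={"Authorization": f"Bearer {os.environ['FRIENDLI_TOKEN']}"},
    json={
        "model": "Qwen/Qwen2.5-VL-7B-Instruct",  # any Hugging Face repo ID
        "gpu": "H100",                           # assumed field name
        "autoscaling": {"min_replicas": 0, "max_replicas": 4},  # assumed
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```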

Find your model
  • AbdelrahmanHassan/whisper-large-v3-egyptian-arabic (Speech & Audio)
  • meta-llama/Llama-3.1-8B-Instruct (LLM)
  • LGAI-EXAONE/K-EXAONE-236B-A23B (LLM)
  • OpenGVLab/VideoChat-R1_5 (Video)
  • Qwen/Qwen3-Coder-Next (LLM)
  • kotoba-tech/kotoba-whisper-v2.2 (Speech & Audio)
  • laion/BUD-E-Whisper (Speech & Audio)
  • trillionlabs/gWorld-8B (Computer Vision)
  • openai/whisper-large-v3-turbo (Speech & Audio)
  • openbmb/MiniCPM-o-4_5-awq (Video)
  • lingshu-medical-mllm/Lingshu-7B (Video)
  • FutureMa/Eva-4B-V2 (LLM)
  • Qwen/Qwen3-VL-Embedding-2B (Computer Vision)
  • nvidia/Orchestrator-8B (LLM)
  • openai/whisper-large-v3 (Speech & Audio)
  • ibm-granite/granite-docling-258M (Computer Vision)
  • bangla-speech-processing/BanglaASR (Speech & Audio)
  • numind/NuMarkdown-8B-Thinking (Computer Vision)
  • openbmb/MiniCPM-V-4 (Video)
  • microsoft/X-Reasoner-7B (Video)
  • zai-org/GLM-4.6 (LLM)
  • nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-NVFP4 (LLM)
  • Hcompany/Holo2-235B-A22B (Computer Vision)
  • Qwen/Qwen3-Coder-Next-FP8 (LLM)
  • openbmb/MiniCPM-V-4_5 (Video)
  • Qwen/Qwen2.5-VL-7B-Instruct (Multimodal)
  • MiniMaxAI/MiniMax-M2.1 (LLM)
  • zai-org/GLM-4.7-Flash (LLM)
  • inference-net/Schematron-3B (LLM)
  • mistralai/Voxtral-Small-24B-2507 (Speech & Audio)
  • moonshotai/Kimi-K2.5 (Multimodal)
  • meta-llama/Llama-4-Scout-17B-16E-Instruct (Multimodal)
  • tiny-random/kimi-k2.5 (Video)
  • Qwen/Qwen2.5-VL-3B-Instruct (Video)
  • microsoft/paza-whisper-large-v3-turbo (Speech & Audio)
  • deepseek-ai/DeepSeek-OCR (Computer Vision)
  • mistralai/Voxtral-Mini-3B-2507 (Speech & Audio)
  • allenai/olmOCR-2-7B-1025 (Video)
  • nvidia/NV-Reason-CXR-3B (Video)
  • openai/whisper-small (Speech & Audio)
  • microsoft/Phi-4-multimodal-instruct (Speech & Audio)
  • Qwen/Qwen3-VL-8B-Instruct (Computer Vision)
  • openai/gpt-oss-120b (LLM)
  • distil-whisper/distil-large-v3 (Speech & Audio)
  • Qwen/Qwen3-VL-Embedding-8B (Computer Vision)
  • ByteDance-Seed/UI-TARS-1.5-7B (Video)
  • numind/NuExtract-2.0-8B (Video)
  • fixie-ai/ultravox-v0_7-glm-4_6 (Speech & Audio)
  • bytedance-research/UI-TARS-7B-DPO (Computer Vision)
  • openai/whisper-large-v2 (Speech & Audio)
  • lightonai/LightOnOCR-2-1B (Computer Vision)
  • mistralai/Magistral-Small-2506 (LLM)

Have a custom or fine-tuned model?

We’ll help you deploy it just as easily. Contact us to get started.

Contact us

Pricing

Pay per GPU second for faster speeds, higher rate limits, and lower costs at scale.

On-demand instances ($/hour, billed per second):

  • NVIDIA B200: 192 GB VRAM, $8.90
  • NVIDIA H200: 141 GB VRAM, $4.50
  • NVIDIA H100: 80 GB VRAM, $3.90
  • NVIDIA A100: 80 GB VRAM, $2.90
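
Per-second billing makes cost estimates straightforward hourly-rate arithmetic. The helper below uses the rates from the table above; the function itself is our own illustration, not part of any SDK.

```python
# Cost estimator for per-second GPU billing, rates from the table above.
HOURLY_RATE_USD = {"B200": 8.90, "H200": 4.50, "H100": 3.90, "A100": 2.90}

def cost_usd(gpu: str, seconds: float, replicas: int = 1) -> float:
    """Dollars for `seconds` of runtime on `replicas` instances of `gpu`."""
    return HOURLY_RATE_USD[gpu] / 3600 * seconds * replicas

print(f"{cost_usd('H100', 600):.2f}")       # 0.65   -> one 10-minute burst
print(f"{cost_usd('A100', 86400, 2):.2f}")  # 139.20 -> two A100s for a day
```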

Enterprise reserved: contact us for discounted reserved-capacity pricing.

Explore FriendliAI today