
FriendliAI Secures $20M to Accelerate AI Inference Innovation

Dedicated inference at scale

Run inference with unmatched speed and reliability at scale

Friendli Dedicated Endpoints
LG AI Research

“EXAONE models run incredibly fast on FriendliAI’s inference platform, and users are highly satisfied with the performance. With FriendliAI’s support, customers have been able to shorten the time required to test and evaluate EXAONE by several weeks. This has enabled them to integrate EXAONE into their services more quickly, accelerating adoption and driving real business impact.”

Clayton Park, AI Business Team Lead, LG AI Research

Benefits

Production-scale performance and reliability

Dedicated Endpoints let you deploy and run models quickly, reliably, and cost-efficiently at scale.

Maximize inference speed

Unlock low latency and high throughput with our proprietary, optimized inference stack.

Run inference reliably

Ensure 99.99% uptime with our geo-distributed, multi-cloud infrastructure, engineered for reliability at scale.

Scale smarter, spend less

Slash costs with our purpose-built inference stack and scale seamlessly to handle fluctuating traffic.

Deploy the way you need

Serverless

The simplest way to run inference

  • Start instantly—no configuration needed
  • Use free built-in tools
  • Pay per token or GPU time
Try now
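As a rough sketch of what pay-per-token usage could look like, the snippet below builds an OpenAI-style chat completion request. This is an illustrative assumption, not documented API detail: the base URL, model name, and FRIENDLI_TOKEN environment variable are all placeholders.

```python
# Hypothetical sketch of a pay-per-token serverless call.
# Assumes an OpenAI-compatible chat API; base URL, model name,
# and the FRIENDLI_TOKEN env var are illustrative placeholders.
import json
import os
import urllib.request


def build_chat_request(
    model: str,
    prompt: str,
    base_url: str = "https://api.friendli.ai/serverless/v1",  # assumed URL
) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ.get('FRIENDLI_TOKEN', '')}",
            "Content-Type": "application/json",
        },
    )


req = build_chat_request("meta-llama-3.1-8b-instruct", "Hello!")
# urllib.request.urlopen(req) would then return the completion JSON.
```

Because billing is per token (or GPU time), each request like this is metered on the tokens it consumes rather than on reserved capacity.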

On Demand

Dedicated GPU instances

  • Guaranteed performance
  • Support for custom and 450K+ open-source models
  • Pay for GPU time
Start deploying

Enterprise Reserved

Reserved GPU instances with discounts

  • Reserve GPUs for 1+ months
  • Access exclusive features
  • Discounted upfront payment
Request reserved instances

Features

The complete inference solution

Speed, reliability, scaling, deployment, and enterprise support. Everything you need to run inference at scale.

Blazing-fast inference

Deliver unmatched speed and throughput with a stack built on custom kernels, caching, quantization, speculative decoding, and routing.

Always-on reliability

Guarantee uptime through a resilient multi-cloud architecture with automated failover and recovery.

Effortless autoscaling

Scale inference dynamically across GPUs, instantly right-sizing capacity to match demand.

Powerful model tooling

Track performance, usage, and logs in real time, and perform live model updates without disruption.

Simple, optimized deployment

Deploy your models easily, with quantization and speculative decoding ready out of the box.

Enterprise-grade support

Get dedicated engineering, compliance, and VPC support in our SOC 2–compliant environment.

Read our docs

Access the model you want

Access the world’s largest collection of 450,000 models through seamless Hugging Face integration. From text generation to computer vision, launch any model with a single click.

Find your model

Have a custom or fine-tuned model?

We’ll help you deploy it just as easily. Contact us to deploy your model.

Contact us

Pricing

Pay per GPU second for faster speeds, higher rate limits, and lower costs at scale.

GPU                     VRAM     $/hour (billed per second)
On-demand NVIDIA B200   192GB    $8.90
On-demand NVIDIA H200   141GB    $4.50
On-demand NVIDIA H100   80GB     $3.90
On-demand NVIDIA A100   80GB     $2.90
Enterprise reserved              Contact us
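To illustrate per-second billing, the hourly rates above can be prorated to the second. This is our own arithmetic sketch; the rate table is copied from above, and the function name is ours.

```python
# Sketch of per-second GPU billing, prorating the on-demand hourly
# rates listed above. Illustrative only; not an official calculator.
HOURLY_RATE_USD = {"B200": 8.9, "H200": 4.5, "H100": 3.9, "A100": 2.9}


def cost(gpu: str, seconds: float, num_gpus: int = 1) -> float:
    """Cost in USD for running `num_gpus` GPUs for `seconds` seconds."""
    return round(HOURLY_RATE_USD[gpu] / 3600 * seconds * num_gpus, 4)


cost("H100", 90 * 60)  # one H100 for 90 minutes: 3.9 * 1.5 = 5.85 USD
```

Per-second granularity means a deployment that autoscales down during a traffic lull is billed only for the seconds its GPUs were actually allocated.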

Explore FriendliAI today



Contact us:

contact@friendli.ai

FriendliAI Corp:

Redwood City, CA

Hub:

Seoul, Korea


Copyright © 2025 FriendliAI Corp. All rights reserved
