
Friendli Engine
The fastest LLM inference engine on the market

GROUNDBREAKING PERFORMANCE

50~90% cost savings
Fewer GPUs required ¹
10.7× higher throughput ²
6.2× lower latency ³
01

What Friendli Engine offers

Speed up LLM serving and slash costs by 50~90%

Friendli Engine is highly optimized to make LLM serving fast and cost-effective. Process LLM inference with Friendli Engine, the fastest engine on the market. Our performance testing shows that Friendli Engine is significantly faster than vLLM and TensorRT-LLM.

Read more

Multi-LoRA serving on a single GPU

Friendli Engine serves multiple LoRA models simultaneously on fewer GPUs (even on just a single GPU!), a remarkable leap toward making LLM customization more accessible and efficient.
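To see why many adapters can share one GPU: a LoRA adapter adds only two small low-rank matrices on top of a shared base weight, so the base model is stored once and each adapter costs a tiny fraction of it. The sketch below is a generic illustration with made-up shapes and names, not Friendli Engine internals.

```python
import numpy as np

# Multi-LoRA sketch: one shared base weight W, many small adapters.
# Each adapter stores only A (d x r) and B (r x d) with rank r << d.

d, r = 1024, 8
rng = np.random.default_rng(0)
W = rng.standard_normal((d, d)).astype(np.float32)  # shared base weight

adapters = {  # two tenants, each just a pair of low-rank factors
    "tenant_a": (rng.standard_normal((d, r)).astype(np.float32),
                 rng.standard_normal((r, d)).astype(np.float32)),
    "tenant_b": (rng.standard_normal((d, r)).astype(np.float32),
                 rng.standard_normal((r, d)).astype(np.float32)),
}

def lora_forward(x, adapter, alpha=16.0):
    """Base projection plus the adapter's scaled low-rank update."""
    A, B = adapters[adapter]
    return x @ W + (x @ A) @ B * (alpha / r)

x = rng.standard_normal((1, d)).astype(np.float32)
ya = lora_forward(x, "tenant_a")  # same input, different adapters,
yb = lora_forward(x, "tenant_b")  # different outputs

# Per-adapter cost is 2*d*r params vs d*d for the base: 64x smaller here.
print(W.size, 2 * d * r)
```

Because the per-request work is the small `(x @ A) @ B` term, switching adapters between requests is cheap, which is what makes serving many fine-tuned variants from one GPU practical.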

Read more

Deploy LLMs and more!

Friendli Engine supports a wide range of generative AI models, including quantized models and MoE.

View the full model list
02

Key Technology

Iteration batching
(aka continuous batching)

Iteration batching is a new batching technology we invented to handle concurrent generation requests efficiently. It can achieve up to tens of times higher LLM inference throughput than conventional batching while satisfying the same latency requirement. The technology is protected by our patents in the US, Korea, and China.
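The core idea can be sketched in a few lines: instead of waiting for an entire batch of requests to finish before admitting new ones, the scheduler re-forms the batch at every decoding iteration, so finished requests leave immediately and queued requests fill the freed slots. This toy scheduler is an illustration of the concept only, not the engine's actual implementation.

```python
from collections import deque

class Request:
    """A generation request that needs a fixed number of output tokens."""
    def __init__(self, rid, tokens_needed):
        self.rid = rid
        self.remaining = tokens_needed
        self.output = []

def iteration_batching(queue, max_batch=4):
    """Decode until all requests finish, re-forming the batch each step."""
    active, finished = [], []
    while queue or active:
        # Admit waiting requests into free batch slots every iteration.
        while queue and len(active) < max_batch:
            active.append(queue.popleft())
        # One decoding iteration: each active request emits one token.
        for req in active:
            req.output.append(f"tok{len(req.output)}")
            req.remaining -= 1
        # Retire completed requests immediately, freeing their slots
        # (conventional batching would hold the slot until the whole
        # batch drains).
        finished += [r for r in active if r.remaining == 0]
        active = [r for r in active if r.remaining > 0]
    return finished

queue = deque(Request(i, n) for i, n in enumerate([2, 5, 3, 1, 4]))
done = iteration_batching(queue)
for r in sorted(done, key=lambda r: r.rid):
    print(r.rid, len(r.output))
```

With static batching, the short request (1 token) would occupy its slot until the longest request (5 tokens) finished; here it exits after one iteration and a queued request takes its place.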

Read more

DNN library

Friendli DNN Library is a set of optimized GPU kernels carefully curated and designed specifically for generative AI. This novel library lets Friendli Engine run fast LLM inference across a variety of tensor shapes and data types, and underpins its support for quantization, Mixture of Experts, LoRA adapters, and more.


Friendli TCache

Friendli TCache intelligently identifies and stores frequently used computational results. Friendli Engine leverages these cached results, significantly reducing the workload on the GPUs.
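The general pattern behind such a compute cache can be sketched as follows: key the expensive result of a recurring input (for example, a shared prompt prefix) and return the stored result on repeat requests instead of recomputing. The key scheme and eviction-free cache below are assumptions for illustration, not the actual TCache design.

```python
import hashlib

class ComputeCache:
    """Memoize expensive computations keyed by their input text."""
    def __init__(self):
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get_or_compute(self, prefix, compute):
        key = hashlib.sha256(prefix.encode()).hexdigest()
        if key in self.store:
            self.hits += 1          # reuse the stored result: no GPU work
        else:
            self.misses += 1
            self.store[key] = compute(prefix)  # expensive prefill stand-in
        return self.store[key]

cache = ComputeCache()
system_prompt = "You are a helpful assistant."
# Three chats share one system prompt: only the first pays for it.
for user_msg in ["hi", "what's 2+2?", "bye"]:
    state = cache.get_or_compute(system_prompt, lambda p: len(p.split()))
print(cache.hits, cache.misses)  # prints: 2 1
```

For LLM serving, the "expensive computation" is the attention state of the shared prefix, which is why reusing it shows up directly as faster time to first token.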

Read more

Speculative decoding

Friendli Engine natively supports speculative decoding, an optimization technique that rapidly speeds up LLM/LMM inference by making educated guesses on future tokens in parallel while generating the current token. Through validation of the generated potential future tokens, speculative decoding ensures identical model outputs at a fraction of the inference time.
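The draft-then-verify loop can be sketched with toy stand-ins for the two models: a cheap draft model proposes several future tokens, the target model verifies them, and only the matching prefix is kept (plus one corrected token), so the final output is identical to decoding with the target model alone. Everything below is illustrative; real speculative decoding verifies token distributions, not exact strings.

```python
# Greedy-verification sketch of speculative decoding.
TARGET = list("speculative decoding")  # what the target model would emit

def target_model(pos):
    """Expensive model: the correct next token at a given position."""
    return TARGET[pos] if pos < len(TARGET) else None

def draft_model(pos):
    """Cheap model: right most of the time, wrong every 5th token here."""
    tok = target_model(pos)
    return tok if tok and pos % 5 != 4 else "?"

out, k = [], 4  # k draft tokens guessed per step
while len(out) < len(TARGET):
    drafts = [draft_model(len(out) + i) for i in range(k)]
    # Verify drafts against the target model and keep the matching prefix.
    accepted = 0
    for i, tok in enumerate(drafts):
        if tok == target_model(len(out) + i):
            accepted += 1
        else:
            break
    out += drafts[:accepted]
    # Append one token from the target model, so progress never stalls
    # even when every draft is rejected.
    nxt = target_model(len(out))
    if nxt is not None:
        out.append(nxt)

print("".join(out))  # prints: speculative decoding
```

Each loop iteration emits up to k+1 tokens for roughly the cost of one target-model step, which is where the speedup comes from; the verification step is what guarantees outputs identical to non-speculative decoding.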

03

Highlights


Running Quantized Mixtral 8x7B on a Single GPU

We quantized the Mixtral-8x7B-Instruct-v0.1 model with AWQ and ran it on a single NVIDIA A100 80GB GPU. Both TTFT (time to first token) and TPOT (time per output token) beat a baseline vLLM system: Friendli Engine achieves at least 4.1x faster response time and 3.8x ~ 23.8x higher token throughput.

Read more

Quantized Llama 2 70B on a Single GPU

With Friendli Engine, running AWQ-quantized models is seamless. For example, you can run 4-bit Llama 2 70B natively on a single A100 80GB GPU. Serving AWQ models on Friendli Engine delivers efficient LLM deployment and significant efficiency gains without sacrificing accuracy.
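A rough sketch of why 4-bit quantization makes a 70B model fit on one 80GB GPU: weights are grouped, and each group stores 4-bit codes plus one scale, cutting weight memory by roughly 4x versus fp16. The code below shows the generic grouped 4-bit scheme only; AWQ's activation-aware scale search is deliberately omitted, and the codes are held in int8 rather than bit-packed.

```python
import numpy as np

def quantize_4bit(w, group=128):
    """Symmetric 4-bit grouped quantization: one scale per group."""
    w = w.reshape(-1, group)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)  # 4-bit codes
    return q, scale

def dequantize(q, scale):
    """Recover approximate fp32 weights from codes and scales."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.standard_normal((4096 * 128,)).astype(np.float32)
q, scale = quantize_4bit(w)
err = np.abs(dequantize(q, scale).ravel() - w).max()
print(f"max abs error: {err:.3f}")

# Storage: 4 bits per weight + one fp16 scale per 128 weights ≈ 4.1 bits,
# vs 16 bits for fp16 — roughly a 3.9x memory reduction.
```

At ~4.1 bits per weight, 70B parameters need about 36GB instead of ~140GB in fp16, which is how the model fits in a single A100 80GB with room left for activations and the KV cache.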

Read more

Even faster TTFT with Friendli TCache

Friendli TCache reuses recurring computations, optimizing TTFT (time to first token) by leveraging cached results. Our tests show Friendli Engine delivers 11.3x to 23x faster TTFT compared to vLLM.

Read more
HOW TO USE

Three ways to run generative AI models with Friendli Engine:

01

Friendli Dedicated Endpoints

Build and run generative AI models on autopilot

Learn more

02

Friendli Container

Serve LLM and LMM inferences with Friendli Engine in your private environment

Learn more

03

Friendli Serverless Endpoints

Call our fast and affordable API for open-source generative AI models

Learn more

1. Testing conducted by FriendliAI in October 2023 using Llama-2-13B running on Friendli Engine. See the detailed results and methodology here.
2. Performance compared to vLLM on a single NVIDIA A100 80GB GPU running AWQ-ed Mixtral 8x7B from Mistral AI with the following settings: mean input token length = 500, mean output token length = 150. Evaluation conducted by FriendliAI.
3. Performance of Friendli Container compared to vLLM on a single NVIDIA A100 80GB GPU running AWQ-ed Mixtral 8x7B from Mistral AI with the following settings: mean input token length = 500, mean output token length = 150, mean request per second = 0.5. Evaluation conducted by FriendliAI.