
Browse generative AI models supported by Friendli Engine

Model library

Model architecture | Models
ArcticForCausalLM | Arctic
BaichuanForCausalLM | Baichuan
BlenderbotForConditionalGeneration | Blenderbot
BloomForCausalLM | BLOOM, BLOOMZ
CohereForCausalLM | Command R, Command R+
DbrxForCausalLM | DBRX
DeepseekForCausalLM | DeepSeek
ExaoneForCausalLM | EXAONE
FalconForCausalLM | Falcon
Gemma2ForCausalLM | Gemma 2
GemmaForCausalLM | CodeGemma, Gemma
GPT2LMHeadModel | GPT-2
GPTJForCausalLM | GPT-J
GPTNeoXForCausalLM | GPT-NeoX, Pythia, Dolly, StableLM
Grok1ForCausalLM | Grok-1
LlamaForCausalLM | Llama 3.1, Llama 3, Llama 2, CodeLlama, OpenLLaMA, Hermes 3, Vicuna, Yi, WizardLM, WizardMath, WizardCoder
MistralForCausalLM | Mistral 7B, Mistral Nemo, Mistral Large 2, Mathstral
MixtralForCausalLM | Mixtral 8x7B, Mixtral 8x22B, Zephyr
MPTForCausalLM | MPT
MT5ForConditionalGeneration | MT5
OPTForCausalLM | OPT
Phi3ForCausalLM | Phi-3.5, Phi-3
PhiForCausalLM | Phi-2, Phi-1
Qwen2ForCausalLM | Qwen2.5, Qwen2, Qwen1.5
SolarForCausalLM | Solar
Starcoder2ForCausalLM | StarCoder 2
T5ForConditionalGeneration | FLAN-T5

Friendli Engine supports a wide array of quantization techniques, including FP8, INT8, and AWQ, across all models.
The list above may not be exhaustive. If your model is not listed, please check our documentation for more information or contact us for support.
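To give a rough sense of what INT8 weight quantization does, here is a generic sketch of symmetric per-tensor INT8 quantization in plain Python. This is an illustration only, not Friendli Engine's implementation; FP8 and AWQ work differently in detail.

```python
# Illustrative symmetric per-tensor INT8 quantization (generic sketch,
# NOT Friendli Engine's actual implementation).

def quantize_int8(weights):
    """Map float weights to int8 values plus a single scale factor."""
    scale = max(abs(w) for w in weights) / 127.0  # use the full int8 range
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, -0.07, 0.25]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)
```

Each weight is stored as a single byte plus one shared scale, cutting memory to roughly a quarter of FP32 at the cost of a bounded rounding error of at most half a quantization step per weight.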

HOW TO USE

Three ways to run generative AI models with Friendli Engine:

01

Friendli Dedicated Endpoints

Build and run generative AI models on autopilot

Learn more

02

Friendli Container

Serve LLM and LMM inference with Friendli Engine in your GPU environment

Learn more

03

Friendli Serverless Endpoints

Call our fast and affordable API for open-source generative AI models

Learn more