Dedicated Endpoints
Build and run generative AI models on autopilot
Autopilot LLM endpoints for production
A convenient and dependable service, without the need for self-management.
Now you can find Friendli Dedicated Endpoints on AWS Marketplace, making it seamless and efficient to build and serve LLMs.
Superior cost-efficiency and performance
A performant LLM serving solution is the first step to operating your AI application in the cloud.
Compared to vLLM, Friendli Dedicated Endpoints delivers superior cost-efficiency and performance.
Custom model support
We offer comprehensive support for both open-source and custom LLMs, allowing organizations to deploy models tailored to their unique requirements and domain-specific challenges. With the flexibility to integrate proprietary datasets, businesses can unlock new opportunities for innovation and differentiation in their AI-driven applications. Create a new endpoint with your private Hugging Face Model Hub repository or upload your model directly to Dedicated Endpoints.
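As a rough illustration of deploying a custom model from a private Hugging Face repository, the sketch below issues a single REST call. The URL, payload fields, and authentication scheme are hypothetical placeholders, not the actual Friendli API; use the Dedicated Endpoints console or official docs for the real workflow.

```python
# Illustrative sketch only: the management URL and JSON fields below are
# hypothetical. Consult the Friendli Dedicated Endpoints documentation for
# the actual endpoint-creation API.
import requests

resp = requests.post(
    "https://api.friendli.ai/dedicated/v1/endpoints",   # hypothetical management URL
    headers={"Authorization": "Bearer YOUR_FRIENDLI_TOKEN"},
    json={
        "name": "my-custom-llm",
        "hf_repo": "my-org/my-private-model",  # private Hugging Face Model Hub repo (hypothetical field)
        "hf_token": "YOUR_HF_TOKEN",           # grants access to the private repo (hypothetical field)
        "gpu_type": "A100 80GB",               # hypothetical field
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```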
Dedicated GPU Resource Management
Friendli Dedicated Endpoints provides dedicated GPU instances, ensuring consistent access to computing resources without contention or performance fluctuations. By eliminating resource sharing, organizations can rely on predictable performance levels for their LLM inference tasks, enhancing productivity and reliability.
Multi-LoRA serving on a single GPU
With our specialized optimization, you can serve multiple LoRA models on a single endpoint using just one GPU. Streamline your operations and maximize resource efficiency. Enjoy greater flexibility as you customize your models, and optimize your deployments while maintaining top-tier performance.
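A minimal sketch of what multi-LoRA serving can look like from the client side: two requests hit the same endpoint (and the same GPU) but select different adapters. The OpenAI-compatible client usage is standard; the base URL and the "endpoint-id:adapter-name" selection convention are assumptions for illustration only.

```python
# Sketch, not the official API: one endpoint hosts several LoRA adapters on a
# single GPU, and each request picks an adapter by name (naming convention assumed).
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_FRIENDLI_TOKEN",
    base_url="https://api.friendli.ai/dedicated/v1",  # assumed base URL
)

for adapter in ("support-bot", "sql-generator"):       # hypothetical adapter names
    out = client.chat.completions.create(
        model=f"YOUR_ENDPOINT_ID:{adapter}",           # select the adapter per request (assumption)
        messages=[{"role": "user", "content": "Hello!"}],
        max_tokens=64,
    )
    print(adapter, "->", out.choices[0].message.content)
```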
Train your model with Friendli Fine-tuning
Optimize your models using enterprise data to achieve business-specific goals. Friendli Fine-tuning enhances performance, saving both time and resources. Seamlessly deploy your endpoints to serve inference requests, and maximize your business outcomes with tailored, optimized models.
Auto-scale your resources in the cloud
When deploying generative AI in the cloud, it’s important to scale as your business grows. Friendli Dedicated Endpoints employs intelligent auto-scaling mechanisms that dynamically adjust computing resources based on real-time demand and workload patterns.
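Conceptually, auto-scaling comes down to a few knobs: a floor and ceiling on replicas and a utilization target that triggers scale-out. The dictionary below is a hypothetical configuration shown only to make the idea concrete; the actual settings live in the Friendli console.

```python
# Hypothetical autoscaling settings (illustrative only, not the Friendli schema).
autoscaling = {
    "min_replicas": 1,          # keep one replica warm for baseline traffic
    "max_replicas": 4,          # cap spend during traffic spikes
    "target_utilization": 0.8,  # scale out when sustained utilization exceeds 80%
}
print(autoscaling)
```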
Test your endpoints in the playground
Experiment with your model’s capabilities in the endpoint playground. Configure parameters like token length, temperature, top P, and frequency penalty.
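The same parameters you tune in the playground can be set programmatically. The sketch below uses the OpenAI Python client against an OpenAI-compatible endpoint; the base URL and the use of the endpoint ID as the model name are assumptions, so check the Friendli docs for the exact values for your deployment.

```python
# Minimal sketch: sending a request with playground-style generation parameters.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_FRIENDLI_TOKEN",                     # personal access token
    base_url="https://api.friendli.ai/dedicated/v1",   # assumed dedicated-endpoint base URL
)

response = client.chat.completions.create(
    model="YOUR_ENDPOINT_ID",       # your dedicated endpoint (assumption)
    messages=[{"role": "user", "content": "Summarize LoRA in two sentences."}],
    max_tokens=256,                 # token length
    temperature=0.7,                # sampling temperature
    top_p=0.9,                      # nucleus sampling cutoff (top P)
    frequency_penalty=0.5,          # discourage repeated tokens
)
print(response.choices[0].message.content)
```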
Basic
Sign up
Get $10 in free credits upon sign up
Build and run generative AI models on autopilot
Configurable autoscaling
Test your endpoints in the playground
Billed monthly
Enterprise
Contact Sales
Advanced features
Priority access to high-demand GPUs, including A100s and H100s
Monitor endpoints with Metrics & Logs
Dedicated support
Custom pricing
Pricing details

Endpoint

| GPU Type   | $ / hour |
|------------|----------|
| A100 80GB  | $3.80    |
| H100 80GB  | $5.60    |

Fine-tuning

| Model                         | $ / 1M tokens |
|-------------------------------|---------------|
| Models up to 16B parameters   | $0.50         |
| Models 16.1B - 72B parameters | $3.00         |
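To see how these rates combine in practice, here is a small worked example using only the list prices above, assuming a single-GPU endpoint billed at the hourly rate.

```python
# Worked cost example based on the list prices above.
a100_hourly = 3.80                  # $ per hour for one A100 80GB
hours = 24 * 30                     # one endpoint running a full 30-day month
endpoint_cost = a100_hourly * hours # 720 h * $3.80 = $2,736.00

finetune_rate = 0.50                # $ per 1M tokens, models up to 16B parameters
training_tokens = 2_000_000_000     # 2B tokens of fine-tuning data
finetune_cost = finetune_rate * training_tokens / 1_000_000  # 2,000 * $0.50 = $1,000.00

print(f"Endpoint (A100, 30 days): ${endpoint_cost:,.2f}")
print(f"Fine-tuning (2B tokens):  ${finetune_cost:,.2f}")
```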
Read more from our blog
- Hassle-free LLM Fine-tuning with FriendliAI and Weights & Biases (August 19, 2024, 6 min read)
- Retrieval Augmented Generation (RAG) with MongoDB and FriendliAI (August 6, 2024, 6 min read)
- Deploying Weights & Biases Model Checkpoints on Friendli Dedicated Endpoints (June 20, 2024, 4 min read)
Other ways to run generative AI models with Friendli
Friendli Container
Serve LLM and LMM inferences with Friendli Engine in your private environment
Learn more