• December 16, 2025
  • 2 min read

A Faster, More Convenient Way to Discover and Deploy AI Models on FriendliAI


As the open-source model ecosystem continues to grow at an unprecedented pace, model discovery and deployment should be just as fast and scalable as inference itself.

Today, we’re excited to share a refreshed Model page experience on FriendliAI designed to make browsing, evaluating, and deploying models significantly faster and more intuitive for developers and ML teams.

👉 Explore the updated Model page: https://friendli.ai/model

Built for Scale: A New Table-Based Model List

With hundreds of thousands of models available, clarity matters. The new Model page introduces a table-based model list, making it easier to:

  • Scan large model catalogs at a glance
  • Filter and navigate models by key attributes
  • Quickly identify the right model for your workload

Instead of scrolling through cards or opening multiple modals, you can now compare models efficiently in a single, structured view, optimized for speed and usability.


Rich Model Detail Pages

Each model now has its own dedicated detail page, replacing the previous modal-based experience.

These pages provide:

  • Clear model specifications and supported features
  • Deployment guidance and usage context
  • A more permanent, linkable reference for teams evaluating models

This change makes it easier to share model information internally and supports more informed deployment decisions, especially when working across teams.

Faster Deployment with One-Click Setup

We’ve streamlined the path from discovery to deployment. From the Model page, you can now:

  • Click Deploy to open the Suite FDE create page with the selected model already pre-filled
  • Skip repetitive configuration steps and move straight to provisioning

For teams deploying frequently or testing multiple models, this reduces friction and speeds up iteration.

Instant Model Testing for Serverless Endpoints

For serverless models, exploration is now even faster. With a single click, you can:

  • Use the model directly in the Playground
  • Test prompts and behavior instantly, with no setup required

This makes it easier to validate model quality and behavior before deploying into production workflows.
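For teams that prefer to smoke-test a serverless model from code rather than the Playground, the request shape can be sketched as below. This is a minimal, hedged sketch: the base URL `https://api.friendli.ai/serverless/v1` reflects FriendliAI's OpenAI-compatible serverless API, but the model ID shown is hypothetical — check the model's detail page for the exact identifier.

```python
import json

# Assumed OpenAI-compatible serverless endpoint; the model ID below is
# a placeholder -- copy the real one from the model's detail page.
BASE_URL = "https://api.friendli.ai/serverless/v1"
MODEL_ID = "meta-llama-3.1-8b-instruct"  # hypothetical model ID

def build_chat_request(prompt: str) -> dict:
    """Build a chat-completions payload for a quick smoke test."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }

payload = build_chat_request("Summarize what a serverless endpoint is.")
print(json.dumps(payload, indent=2))

# To actually send the request (requires a Friendli token):
#   import os, urllib.request
#   req = urllib.request.Request(
#       f"{BASE_URL}/chat/completions",
#       data=json.dumps(payload).encode(),
#       headers={
#           "Authorization": f"Bearer {os.environ['FRIENDLI_TOKEN']}",
#           "Content-Type": "application/json",
#       },
#   )
#   print(urllib.request.urlopen(req).read().decode())
```

Because the API is OpenAI-compatible, the same payload also works with any OpenAI-style client pointed at the serverless base URL.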

Designed for How Teams Actually Work

This refresh isn’t just a UI update: it reflects how developers and ML teams use FriendliAI every day:

  • Discovering new models quickly
  • Comparing capabilities at scale
  • Moving seamlessly from evaluation to deployment

As FriendliAI continues to support 484K+ open-source and custom models across text, vision, image, and audio, we’ll keep investing in tooling that makes high-performance inference easier to access and faster to operate.

Explore the New Model Experience

👉 Visit the updated Model page: https://friendli.ai/model


We’d love to hear your feedback as you explore the new experience, and as always, we’re building with real production workloads in mind.


Written by

Jiwon Park, Hyunsoo Kim




General FAQ

What is FriendliAI?

FriendliAI is a GPU-inference platform that lets you deploy, scale, and monitor large language and multimodal models in production, without owning or managing GPU infrastructure. We offer three things for your AI models: unmatched speed, cost efficiency, and operational simplicity. Find out which product is the best fit for you here.

How does FriendliAI help my business?

Our Friendli Inference allows you to squeeze more tokens per second out of every GPU. Because you need fewer GPUs to serve the same load, the true metric, tokens per dollar, comes out higher even if the hourly GPU rate looks similar on paper. View pricing
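The tokens-per-dollar arithmetic can be made concrete with a small sketch. All figures below are illustrative placeholders, not measured FriendliAI numbers: two stacks at the same hourly GPU rate, where the higher-throughput stack simply yields more tokens per dollar.

```python
# Tokens-per-dollar comparison with illustrative (made-up) numbers.
def tokens_per_dollar(tokens_per_sec_per_gpu: float, gpu_hourly_rate: float) -> float:
    """Tokens generated per dollar of GPU time on a single GPU."""
    tokens_per_hour = tokens_per_sec_per_gpu * 3600
    return tokens_per_hour / gpu_hourly_rate

# Same hypothetical $4/hr GPU, different per-GPU throughput.
baseline = tokens_per_dollar(tokens_per_sec_per_gpu=1000, gpu_hourly_rate=4.0)
optimized = tokens_per_dollar(tokens_per_sec_per_gpu=2500, gpu_hourly_rate=4.0)

print(f"baseline:  {baseline:,.0f} tokens/$")   # 900,000 tokens/$
print(f"optimized: {optimized:,.0f} tokens/$")  # 2,250,000 tokens/$
```

The hourly rate cancels out of the comparison only if it is equal on both sides; when rates differ, tokens per dollar is the number to compare, not throughput alone.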

Which models and modalities are supported?

Over 380,000 text, vision, audio, and multi-modal models are deployable out of the box. You can also upload custom models or LoRA adapters. Explore models

Can I deploy models from Hugging Face directly?

Yes. Selecting “Friendli Endpoints” on the Hugging Face Hub takes you straight to our model deployment page for one-click deployment. The page provides an easy-to-use interface for setting up Friendli Dedicated Endpoints, a managed service for generative AI inference. Learn more about our Hugging Face partnership

Still have questions?

If you want a customized solution for the key issue that is slowing your growth, email contact@friendli.ai or click Talk to an engineer; our engineers (not a bot) will reply within one business day.


Explore FriendliAI today