- August 6, 2025
- 2 min read
WBA: The Community-Driven Platform for Blind Testing the World’s Best AI Models

If you’ve ever wondered which AI model gives the best answer—not just on benchmarks but in real-world, everyday questions—you’re not alone. And now, there’s a platform built just for that.
We’re excited to introduce World Best AI (WBA), a community-driven online platform where you get to judge which AI model performs best through a fun, side-by-side blind taste test.
WBA aims in particular to improve user evaluation in underrepresented languages, starting in Asia, where current benchmarks and leaderboards often fall short. We're also putting a spotlight on models from Korean AI (K-AI) companies like LG, Upstage, SKT, and Naver so they can be compared fairly on a global stage alongside leading models such as OpenAI's GPT, Anthropic's Claude, Google's Gemini, DeepSeek, Meta's LLaMA, Alibaba's Qwen, and xAI's Grok.
Why We Built WBA
Recent years have seen a surge in powerful language models. But which one actually performs best for you?
Most evaluations rely on benchmarks published by model developers themselves—metrics that don’t always reflect real-world usage or user experience. WBA flips the script by empowering everyone, not just AI experts, to compare models hands-on and vote for the best answers.
With deep experience in AI infrastructure and a commitment to openness and accessibility, FriendliAI created WBA as a natural extension of its mission—building a transparent, community-driven platform where users can fairly evaluate and discover the AI models that best meet their needs.
How It Works
WBA is designed to be as simple and accessible as possible:
- Ask a question.
- See answers from multiple AI models—without knowing which is which.
- Vote for the one you prefer.
That’s it.
Behind the scenes, your votes contribute to a live, community-driven leaderboard (coming soon) of the most preferred models. No brand names. No bias. Just honest feedback from real people.
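To make the mechanics concrete, here is a minimal sketch of how blind pairwise votes could feed a leaderboard. WBA hasn't published its ranking method, so this uses Elo-style rating updates (a common choice for head-to-head model comparisons) with hypothetical model names; treat it as an illustration, not WBA's actual implementation.

```python
import random

def elo_update(r_a, r_b, winner, k=32):
    """Update two Elo ratings after one head-to-head vote.

    winner is "a" or "b"; uses the standard Elo expected-score formula.
    """
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    score_a = 1.0 if winner == "a" else 0.0
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1 - score_a) - (1 - expected_a))
    return r_a_new, r_b_new

def blind_pair(answers):
    """Pick two (model, answer) pairs in random order, hiding model identity.

    The model names would be revealed only after the user votes.
    """
    return random.sample(list(answers.items()), 2)

# Hypothetical ratings: every model starts at 1000.
ratings = {"model_x": 1000.0, "model_y": 1000.0}
ratings["model_x"], ratings["model_y"] = elo_update(
    ratings["model_x"], ratings["model_y"], winner="a"
)
```

Because the voter never sees which model produced which answer, the aggregate ratings reflect answer quality rather than brand recognition.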
Want more depth? You can also enable reasoning mode, which highlights prompts that require critical thinking, logical steps, or complex problem-solving—helping differentiate models on more than just surface-level fluency.
What’s Next?
We’re launching with full support for Korean, but this is just the beginning. In the coming months, WBA will expand to include English and other major languages, evolving into a truly global platform for AI model comparison and discovery.
Built by FriendliAI—a leading innovator in AI infrastructure—WBA leverages scalable generative AI with lightning-fast inference, extensive multilingual model support, and ultra-efficient GPU optimization. FriendliAI’s solutions, including Serverless Endpoints and Dedicated Endpoints, enable fast and cost-effective deployment of large AI models tailored to diverse needs, supporting over 420,000 open-source models—at a fraction of typical cloud computing costs.
Aligned with FriendliAI’s mission to democratize AI access, WBA offers a transparent, fair, and user-friendly space for anyone to compare Korean and global models through real-world interactions.
Whether you’re an AI researcher, developer, or simply curious about which model is trending, WBA.chat is your go-to destination to explore, test, and decide. And when you’re ready to build and deploy your own AI solutions, simply head to FriendliAI to access powerful, scalable infrastructure designed to bring your projects to life quickly, efficiently, and affordably.
Written by
FriendliAI Tech & Research
General FAQ
What is FriendliAI?
FriendliAI is a GPU-inference platform that lets you deploy, scale, and monitor large language and multimodal models in production, without owning or managing GPU infrastructure. We offer three things for your AI models: unmatched speed, cost efficiency, and operational simplicity. Find out which product is the best fit for you here.
How does FriendliAI help my business?
Our Friendli Inference allows you to squeeze more tokens-per-second out of every GPU. Because you need fewer GPUs to serve the same load, the true metric—tokens per dollar—comes out higher even if the hourly GPU rate looks similar on paper. View pricing
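The tokens-per-dollar point can be made concrete with a small calculation. The throughput and price figures below are hypothetical, chosen only to show why a higher tokens-per-second rate beats a similar hourly GPU rate:

```python
def tokens_per_dollar(tokens_per_second, hourly_gpu_rate):
    """Tokens generated per dollar of GPU time at a given throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return tokens_per_hour / hourly_gpu_rate

# Hypothetical numbers: identical $4/hr GPU rate, different throughput.
baseline = tokens_per_dollar(1200, 4.0)   # 1,080,000 tokens per dollar
optimized = tokens_per_dollar(3000, 4.0)  # 2,700,000 tokens per dollar
```

Even though the hourly rate is unchanged, the higher-throughput deployment delivers 2.5x more tokens for the same spend, which is the metric that matters for serving cost.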
Which models and modalities are supported?
Over 380,000 text, vision, audio, and multi-modal models are deployable out of the box. You can also upload custom models or LoRA adapters. Explore models
Can I deploy models from Hugging Face directly?
Yes. Selecting “Friendli Endpoints” on the Hugging Face Hub takes you to our model deployment page for one-click deployment. The page provides an easy-to-use interface for setting up Friendli Dedicated Endpoints, a managed service for generative AI inference. Learn more about our Hugging Face partnership
Still have questions?
If you want a customized solution for the key issue that is slowing your growth, email contact@friendli.ai or click Talk to an expert. Our experts (not a bot) will reply within one business day.