bunnycore/Llama-3.2-3B-Apex

Dedicated Endpoints

Run inference for this model on single-tenant GPUs with unmatched speed and reliability at scale.

Container

Run inference for this model with full control and performance in your own environment.

Get help setting up custom Dedicated Endpoints. Talk with our engineers to get a quote for reserved GPU instances with discounts.

Model provider

bunnycore

Model tree

Base models:

- bunnycore/Llama-3.2-3B-All-Mix
- bunnycore/Llama-3.2-3B-Sci-Think
- bunnycore/Llama-3.2-3B-Long-Think
- huihui-ai/Llama-3.2-3B-Instruct-abliterated
- djuna-test-lab/TEST-L3.2-ReWish-3B
- bunnycore/Llama-3.2-3B-CodeReactor
- bunnycore/Llama-3.2-3B-Booval
- bunnycore/Llama-3.2-3B-Pure-RP
- bunnycore/Llama-3.2-3B-Mix-Skill
- bunnycore/Llama-3.2-3B-Prodigy

Merged: this model

Modalities

Input: Text
Output: Text

Pricing

Dedicated Endpoints

Supported Functionality

- Model APIs
- Dedicated Endpoints
- Container
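Since the model takes text in and returns text out through the Model APIs, a request to it can be sketched as below. This is a minimal illustration assuming an OpenAI-compatible chat-completions schema; the helper `build_chat_request` and every parameter choice are hypothetical and not part of this listing.

```python
import json


def build_chat_request(model: str, user_message: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completions payload (text in, text out).

    Hypothetical helper: the endpoint, auth, and exact schema depend on the
    serving platform and should be checked against its API documentation.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
    }


# Example payload for this model; POSTing it to a chat-completions endpoint
# (with an auth header) would return generated text.
payload = build_chat_request(
    "bunnycore/Llama-3.2-3B-Apex",
    "Briefly explain what a merged model is.",
)
print(json.dumps(payload, indent=2))
```

The same payload shape works for Dedicated Endpoints or a self-hosted Container, assuming the server exposes a chat-completions route; only the base URL and credentials change.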
