nbeerbower
llama-3-stinky-v2-8B
Model provider
nbeerbower
Model tree
Base models merged into this model:
- mlabonne/ChimeraLlama-3-8B-v2
- grimjim/llama-3-merge-virt-req-8B
- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- grimjim/llama-3-nvidia-ChatQA-1.5-8B
- grimjim/llama-3-merge-pp-instruct-8B
- nbeerbower/llama-3-stella-8B
- elyn-dev/Llama-3-Soliloquy-8B-v2
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- flammenai/Mahou-1.1-llama3-8B
- flammenai/Mahou-1.0-llama3-8B
- uygarkurt/llama-3-merged-linear
- jeiku/Orthocopter_8B
- cloudyu/Meta-Llama-3-8B-Instruct-DPO
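Merges like the tree above are typically produced with a merge toolkit such as mergekit. A minimal YAML sketch of how these base models could be combined (the merge method shown, `model_stock`, the choice of base model, and the dtype are assumptions; the card does not state the actual configuration):

```yaml
# Hypothetical mergekit configuration sketch.
# The real merge method and settings for llama-3-stinky-v2-8B
# are not stated on this card.
merge_method: model_stock            # assumed merge method
base_model: mlabonne/ChimeraLlama-3-8B-v2
models:
  - model: grimjim/llama-3-merge-virt-req-8B
  - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
  - model: flammenai/Mahou-1.1-llama3-8B
  # ...remaining base models from the tree above
dtype: bfloat16                      # assumed precision
```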
Modalities
Input: Text
Output: Text
Pricing
Dedicated Endpoints
Supported Functionality
Model APIs
Dedicated Endpoints
Container