saishf/Merge-Mayhem-L3-V2.1
Dedicated Endpoints
Run inference for this model on a single-tenant GPU with unmatched speed and reliability at scale.
Container
Run inference for this model with full control and performance in your own environment.
Get help setting up a custom Dedicated Endpoint.
Talk with our engineers to get a quote for reserved GPU instances with discounts.
Model provider
saishf
Model tree
Base models:
ResplendentAI/BlueMoon_Llama3
ResplendentAI/Smarts_Llama3
elyn-dev/Llama-3-Soliloquy-8B-v2
ResplendentAI/RP_Format_QuoteAsterisk_Llama3
meta-llama/Meta-Llama-3-8B-Instruct
ResplendentAI/Luna_Llama3
ResplendentAI/Aura_Llama3
Merged: this model
Modalities
Input: Text
Output: Text
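Since this is a text-in, text-out chat model, a deployed endpoint would typically be queried with an OpenAI-compatible chat completions payload. A minimal sketch of building that request body (the schema shown is the common OpenAI-style convention; the actual endpoint URL, authentication, and any provider-specific fields are assumptions, not taken from this page):

```python
import json

# Model ID as listed on this page.
MODEL_ID = "saishf/Merge-Mayhem-L3-V2.1"

def build_chat_request(prompt: str, max_tokens: int = 256) -> str:
    """Serialize an OpenAI-style chat completion payload as JSON.

    Hypothetical helper for illustration; send the returned body as the
    POST payload to your provider's chat completions endpoint.
    """
    payload = {
        "model": MODEL_ID,
        "messages": [
            {"role": "user", "content": prompt},  # text input
        ],
        "max_tokens": max_tokens,  # cap on the generated text output
    }
    return json.dumps(payload)

body = build_chat_request("Write a short greeting.")
print(body)
```

The same payload shape works for both the Dedicated Endpoints and Container deployment options, since both expose the model behind an HTTP inference API.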
Pricing
Dedicated Endpoints
Supported Functionality
Model APIs
Dedicated Endpoints
Container