Nexesenex/Llama_3.x_70b_Hexagon_Purple_V2
Dedicated Endpoints
Run inference for this model on a single-tenant GPU with unmatched speed and reliability at scale.
Container
Run inference for this model with full control and performance in your own environment.
Get help setting up a custom Dedicated Endpoint.
Talk with our engineers to get a quote for reserved GPU instances with discounts.
Model provider
Nexesenex
Model tree
Base: NexesMess/Llama_3.1_70b_Priestess_V1
Base: nbeerbower/Llama3.1-Gutenberg-Doppel-70B
Base: NexesMess/Llama_3.3_70b_DoppelGanger_R1
Base: Nexesenex/Llama_3.x_70b_SmarTracks_V1.01
Base: migtissera/Tess-3-Llama-3.1-70B
Base: Steelskull/L3.3-Electra-R1-70b
Merged: this model
Modalities
Input: Text
Output: Text
Pricing
Dedicated Endpoints
Supported Functionality
Model APIs
Dedicated Endpoints
Container
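As a sketch only: since this model is text-in/text-out and is listed under Model APIs, a request to a chat-completions-style endpoint might carry a payload like the one below. The endpoint URL, authentication, and parameter names are assumptions (modeled on the common OpenAI-compatible convention), not something this page documents, so the example only constructs and prints the payload rather than sending it.

```python
import json

# Hypothetical chat-completions payload for this model.
# "messages" / "max_tokens" follow the widely used OpenAI-compatible
# schema; the actual provider API may differ.
payload = {
    "model": "Nexesenex/Llama_3.x_70b_Hexagon_Purple_V2",
    "messages": [
        {"role": "user", "content": "Summarize the Llama 3 architecture."}
    ],
    "max_tokens": 256,
}

print(json.dumps(payload, indent=2))
```

Sending this payload would additionally require the provider's base URL and an API key, neither of which is given here.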