nitky
Oumuamua-7b-instruct-v2
Run inference on this model on a single-tenant GPU with unmatched speed and reliability at scale.
Run inference on this model with full control and performance in your own environment.
Get help setting up a custom Dedicated Endpoint.
Talk with our engineers to get a quote for reserved GPU instances with discounts.
Model provider
nitky
Model tree
Base
tokyotech-llm/Swallow-MS-7b-v0.1
Base
dphn/dolphin-2.8-mistral-7b-v02
Base
nitky/Oumuamua-7b-base
Base
nitky/RP-7b-instruct
Base
kaist-ai/janus-dpo-7b
Base
openbmb/Eurus-7b-kto
Base
HachiML/Mistral-7B-v0.3-m3-lora
Base
mistralai/Mistral-7B-v0.1
Base
ZySec-AI/SecurityLLM
Base
internistai/base-7b-v0.2
Base
prometheus-eval/prometheus-7b-v2.0
Base
nitky/Oumuamua-7b-instruct
Base
stabilityai/japanese-stablelm-base-gamma-7b
Base
NTQAI/chatntq-ja-7b-v1.0
Base
Weyaxi/Einstein-v6-7B
Merged
this model
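The tree above describes a multi-model merge into this model. As a sketch only, such a merge is typically declared with a mergekit config along these lines; the merge method, model subset, and dtype below are illustrative assumptions, not the author's actual recipe:

```yaml
# Hypothetical mergekit config -- method, model subset, and dtype are
# illustrative assumptions; the author's actual merge recipe is not shown here.
merge_method: model_stock        # assumption; actual method unknown
base_model: mistralai/Mistral-7B-v0.1
models:
  - model: tokyotech-llm/Swallow-MS-7b-v0.1
  - model: nitky/Oumuamua-7b-base
  - model: nitky/RP-7b-instruct
dtype: bfloat16
```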
Modalities
Input: Text
Output: Text
Pricing
Dedicated Endpoints
View details
Supported Functionality
Model APIs
Dedicated Endpoints
Container
More information
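The Model APIs listed above are commonly exposed through an OpenAI-compatible chat completions route. The sketch below shows what a minimal client call might look like under that assumption; the endpoint URL and API key are placeholders, and this is not the provider's documented client library.

```python
import json
import urllib.request

# Placeholder endpoint and credential -- assumptions, replace with real values.
API_URL = "https://example.com/v1/chat/completions"
API_KEY = "YOUR_API_KEY"


def build_payload(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat completion request body for this model."""
    return {
        "model": "nitky/Oumuamua-7b-instruct-v2",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def query(prompt: str) -> str:
    """POST the request and return the first completion's text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

A Dedicated Endpoint would typically swap in its own base URL while keeping the same request shape.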