Phi-4-multimodal-instruct

A lightweight 5.6B parameter model unifying text, vision, and speech in a single neural network for cross-modal reasoning, speech recognition, and image understanding.

Model Summary

Phi-4-multimodal-instruct is a lightweight open multimodal foundation model that builds on the language, vision, and speech research and datasets used for the Phi-3.5 and 4.0 models. The model processes text, image, and audio inputs, generates text outputs, and supports a 128K-token context length. It underwent an enhancement process incorporating supervised fine-tuning, direct preference optimization (DPO), and reinforcement learning from human feedback (RLHF) to support precise instruction adherence and safety measures. Each modality supports the following languages (a minimal usage sketch follows the list):

  • Text: Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian
  • Vision: English
  • Audio: English, Chinese, German, French, Italian, Japanese, Spanish, Portuguese
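
As a concrete illustration, the sketch below shows single-image inference with the Hugging Face transformers API. The `<|user|>`/`<|image_1|>`/`<|end|>`/`<|assistant|>` tags follow the model's chat format, `photo.jpg` is a hypothetical local file, and the exact loading flags (dtype, attention implementation) may differ from the upstream model card, so treat this as a starting point rather than a definitive recipe.

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_path = "microsoft/Phi-4-multimodal-instruct"

# The model ships custom processing code, hence trust_remote_code=True.
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="cuda",
)

# The chat format wraps turns in special tags; <|image_1|> marks where
# the first image is injected into the prompt.
prompt = "<|user|><|image_1|>Describe this image in one sentence.<|end|><|assistant|>"
image = Image.open("photo.jpg")  # hypothetical local image file

inputs = processor(text=prompt, images=image, return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, max_new_tokens=128)

# Drop the prompt tokens before decoding only the generated answer.
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```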

License

The model is licensed under the MIT license.

Dedicated Endpoints

Run inference for this model on single-tenant GPUs with unmatched speed and reliability at scale.

Get help setting up a custom Dedicated Endpoint.

Talk with our engineers to get a quote for discounted reserved GPU instances.
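
For orientation, here is a minimal sketch of calling a deployed endpoint through an OpenAI-compatible client. The base URL, token, and model identifier are illustrative assumptions, not documented values; substitute the ones shown in your own endpoint dashboard.

```python
from openai import OpenAI

# Base URL, token, and model ID below are assumptions for illustration;
# replace them with the values from your endpoint dashboard.
client = OpenAI(
    base_url="https://api.friendli.ai/dedicated/v1",  # assumed endpoint URL
    api_key="YOUR_FRIENDLI_TOKEN",
)

chat = client.chat.completions.create(
    model="YOUR_ENDPOINT_ID",  # assumed endpoint/model identifier
    messages=[{"role": "user", "content": "Give a one-line summary of multimodal models."}],
    max_tokens=128,
)
print(chat.choices[0].message.content)
```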

Model provider

microsoft

Model tree

Base: this model

Modalities

Input: Audio, Text, Image

Output: Text
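
Since audio is among the supported inputs, a companion sketch for speech recognition follows, again assuming the transformers API. The `<|audio_1|>` placeholder tag and the `audios=` processor argument follow the model's published chat format, and `speech.wav` is a hypothetical local recording.

```python
import soundfile as sf
from transformers import AutoModelForCausalLM, AutoProcessor

model_path = "microsoft/Phi-4-multimodal-instruct"
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype="auto", trust_remote_code=True, device_map="cuda"
)

# Audio clips are passed as (waveform, sample_rate) tuples; <|audio_1|>
# marks where the first clip is injected into the prompt.
audio, sample_rate = sf.read("speech.wav")  # hypothetical local recording
prompt = "<|user|><|audio_1|>Transcribe the audio clip into text.<|end|><|assistant|>"

inputs = processor(
    text=prompt, audios=[(audio, sample_rate)], return_tensors="pt"
).to("cuda")
output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0])
```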

Pricing

Dedicated Endpoints


Supported Functionality

  • Serverless Endpoints
  • Dedicated Endpoints
  • Container
