Phi-3-Mini-4K-Instruct OpenVINO NPU
This model is an optimized version of Phi-3-Mini-4K-Instruct to enable local inference on Intel NPUs.
- Developed by: Microsoft
- Model type: ONNX
- License: MIT
- Model Description: This is a conversion of Phi-3-Mini-4K-Instruct for local inference on Intel NPUs.
- Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of the user of the model. Please verify and test for your scenarios. There may be a slight difference in output from the base model with the optimizations applied. Note that the optimizations applied are distinct from fine-tuning and thus do not alter the intended uses or capabilities of the model.
 
See Hugging Face model Phi-3-Mini-4K-Instruct for details.
Version: 1
foundryLocal metadata:
- license: MIT
- licenseDescription: This model is provided under the License Terms available at <https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE>.
- author: Microsoft
- inputModalities: text
- outputModalities: text
- task: chat-completion
- maxOutputTokens: 2048
- alias: phi-3-mini-4k
- directoryPath: model
- promptTemplate: {"system": "<|system|>\n{Content}<|end|>", "user": "<|user|>\n{Content}<|end|>", "assistant": "<|assistant|>\n{Content}<|end|>", "prompt": "<|user|>\n{Content}<|end|>\n<|assistant|>"}
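To illustrate how the promptTemplate above is applied, here is a minimal Python sketch that renders a chat transcript into a single prompt string. The template strings are taken verbatim from the metadata; the helper name and message structure are illustrative only, not part of the model package.

```python
# Template strings copied from the promptTemplate metadata above.
PROMPT_TEMPLATE = {
    "system": "<|system|>\n{Content}<|end|>",
    "user": "<|user|>\n{Content}<|end|>",
    "assistant": "<|assistant|>\n{Content}<|end|>",
    "prompt": "<|user|>\n{Content}<|end|>\n<|assistant|>",
}

def build_prompt(messages):
    """Render a list of {'role', 'content'} dicts into a single prompt string (illustrative helper)."""
    parts = []
    for message in messages[:-1]:
        parts.append(PROMPT_TEMPLATE[message["role"]].format(Content=message["content"]))
    # The final user turn uses the "prompt" form, which ends with the
    # assistant tag so the model continues from there.
    parts.append(PROMPT_TEMPLATE["prompt"].format(Content=messages[-1]["content"]))
    return "\n".join(parts)

print(build_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is an NPU?"},
]))
```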
View in Studio: https://ml.azure.com/registries/azureml/models/Phi-3-mini-4k-instruct-openvino-npu/version/1
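Since the task is chat-completion and the alias is phi-3-mini-4k, the model can be exercised locally through Foundry Local. The sketch below assumes the Foundry Local Python SDK (foundry-local-sdk) and its OpenAI-compatible endpoint; the exact package and API surface may differ, so check the Foundry Local documentation before relying on it.

```python
# Assumed packages: foundry-local-sdk and openai.
from foundry_local import FoundryLocalManager
from openai import OpenAI

alias = "phi-3-mini-4k"  # alias from the metadata above

# Assumed behaviour: starts the Foundry Local service and downloads/loads
# the model for this alias if it is not already cached.
manager = FoundryLocalManager(alias)

# Foundry Local exposes an OpenAI-compatible endpoint, so the standard
# OpenAI client can be pointed at it.
client = OpenAI(base_url=manager.endpoint, api_key=manager.api_key)
response = client.chat.completions.create(
    model=manager.get_model_info(alias).id,
    messages=[{"role": "user", "content": "What is an NPU?"}],
    max_tokens=2048,  # maxOutputTokens from the metadata above
)
print(response.choices[0].message.content)
```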