Meta-Llama-3.2-3B-Instruct
Llama 3.2 is a collection of multilingual large language models (LLMs): pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The instruction-tuned, text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks, and outperform many available open-source and closed chat models on common industry benchmarks.
Model Conversion Contributor: APLUX
Model Stats:
- Input sequence length for Prompt Processor: 128
- Maximum context length: 4096
- Quantization Type: w4 + w8 (a few layers) with fp16 activations; w4a16 + w8a16 (a few layers) is also supported
- Supported languages: English
- TTFT: Time To First Token, the time it takes to generate the first response token. It is expressed as a range because it varies with prompt length: the lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor), and the upper bound is for a prompt using the full context length (4096 tokens).
- Response Rate: Rate of response generation after the first response token.
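As a rough illustration of the "w4 with fp16 activations" scheme above, the sketch below shows symmetric per-channel 4-bit weight quantization in NumPy. This is an illustrative approximation, not the converter's actual algorithm; all function names here are hypothetical.

```python
import numpy as np

def quantize_w4(w: np.ndarray):
    """Quantize weights to signed int4 range [-8, 7], one scale per output row."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0  # per-channel scale
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_fp16(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Reconstruct weights in fp16, matching the fp16 activation path."""
    return (q * scale).astype(np.float16)

# Toy weight matrix standing in for one linear layer
w = np.random.randn(4, 16).astype(np.float32)
q, s = quantize_w4(w)
w_hat = dequantize_fp16(q, s)
```

Storing 4-bit codes plus a per-channel scale is what cuts the weight footprint roughly 4x versus fp16 while keeping activations in higher precision.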
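The TTFT bounds above follow from how many 128-token passes the prompt processor needs to ingest a prompt. A minimal sketch of that count, using only the sequence length and context length stated in the stats (the function name is hypothetical):

```python
import math

SEQ_LEN = 128        # prompt-processor input sequence length
MAX_CONTEXT = 4096   # maximum context length

def prompt_iterations(prompt_tokens: int) -> int:
    """Number of prompt-processor passes needed before the first token."""
    if not 1 <= prompt_tokens <= MAX_CONTEXT:
        raise ValueError("prompt must fit within the context window")
    return math.ceil(prompt_tokens / SEQ_LEN)

# A short prompt needs one pass (TTFT lower bound);
# a full-context prompt needs 4096 / 128 = 32 passes (TTFT upper bound).
print(prompt_iterations(100), prompt_iterations(4096))
```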
Model Download
| Model | Chipset | Target Runtime | Precision | Primary Compute Unit | Target Model | Performance |
|---|---|---|---|---|---|---|
| Meta-Llama-3.2-3B-Instruct | QCS9075 | QNN 2.32 | W4A16 | NPU | Meta-Llama-3.2-3B-Instruct | Check in Model Farm |
Model Inference & Conversion
Search for the model by name in Model Farm.
License
Source Model: LLAMA-3.2-LICENSE
Deployable Model: LLAMA-3.2-LICENSE