Meta-Llama-3.2-1B-Instruct

The Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open-source and closed chat models on common industry benchmarks.

Model Stats:

  • Input sequence length for Prompt Processor: 128
  • Maximum context length: 4096
  • Quantization type: w4 weights (w8 for a few layers) with fp16 activations; w4a16 (w8a16 for a few layers) is also supported
  • Supported languages: English.
  • TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies based on the length of the prompt. The lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (4096 tokens).
  • Response Rate: Rate of response generation after the first response token.
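The w4a16 scheme listed above stores weights as 4-bit integers while keeping activations at 16 bits. As a rough illustration only (not the actual on-device quantization pipeline), symmetric per-channel 4-bit weight quantization can be sketched as:

```python
def quantize_w4(weights):
    """Symmetric per-channel 4-bit quantization: map floats to ints in [-8, 7]."""
    quantized, scales = [], []
    for row in weights:  # one scale per output channel (row)
        scale = max(abs(w) for w in row) / 7.0 or 1.0  # fall back to 1.0 for all-zero rows
        quantized.append([max(-8, min(7, round(w / scale))) for w in row])
        scales.append(scale)
    return quantized, scales

def dequantize(quantized, scales):
    """Recover approximate float weights from 4-bit codes and per-channel scales."""
    return [[q * s for q in row] for row, s in zip(quantized, scales)]
```

The reconstruction error per weight is bounded by about half the channel's scale, which is why low-bit weight quantization works best with per-channel (rather than per-tensor) scales.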

Model Details

Model Developer: Meta

Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
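The instruction-tuned variants consume prompts in the Llama 3 chat format, which wraps each conversation turn in special header tokens. A minimal sketch of that formatting is below (token strings follow the published Llama 3 convention; in practice the tokenizer's built-in chat template handles this for you):

```python
def format_llama3_prompt(messages):
    """Render a list of {role, content} turns in the Llama 3 chat format."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Leave an open assistant header so the model generates the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)
```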

| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
|---|---|---|---|---|---|---|---|---|---|
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 |

Supported Languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.

Llama 3.2 Model Family: Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
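Under GQA, several query heads share a single key/value head, which shrinks the KV cache and improves inference scalability. The head grouping can be illustrated as follows (head counts here are illustrative, not Llama 3.2's actual configuration):

```python
def kv_head_for(query_head, n_query_heads, n_kv_heads):
    """Map a query head index to the KV head it shares under Grouped-Query Attention."""
    group_size = n_query_heads // n_kv_heads  # query heads per KV head
    return query_head // group_size
```

With 8 query heads and 2 KV heads, for example, heads 0-3 attend using KV head 0 and heads 4-7 use KV head 1, so the KV cache is 4x smaller than with full multi-head attention.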

Model Release Date: Sept 25, 2024

Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.

License: Use of Llama 3.2 is governed by the Llama 3.2 Community License (a custom, commercial license agreement).

Feedback: Instructions on how to provide feedback or comments on the model can be found in the Llama Models README. For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go here.

Model Download

| Model | Chipset | Target Runtime | Precision | Primary Compute Unit | Target Model | Performance |
|---|---|---|---|---|---|---|
| Meta-Llama-3.2-1B-Instruct | QCS9075 | QNN 2.32 | W4A16 | NPU | Meta-Llama-3.2-1B-Instruct | Check in Model Farm |

Model Inference & Conversion

Please search for the model by name in Model Farm.
