# Open LLaMA 3B v2 — 4-bit NF4

A 4-bit quantized version of `openlm-research/open_llama_3b_v2` using bitsandbytes NF4, ready for QLoRA fine-tuning.

## Quantization Details

| Parameter     | Value             |
|---------------|-------------------|
| Quant method  | bitsandbytes NF4  |
| Double quant  | Yes               |
| Compute dtype | bfloat16          |
| Model size    | ~1.93 GB          |
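
For reference, a `BitsAndBytesConfig` matching the settings in the table would look like the following sketch. This is only useful if you want to re-quantize the base model yourself; the checkpoint in this repo is already stored in 4-bit form.

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings in the table above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 quantization
    bnb_4bit_use_double_quant=True,         # double quantization of the quant constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype for matmuls
)
```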

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The checkpoint is saved pre-quantized, so no quantization config is needed here.
model = AutoModelForCausalLM.from_pretrained("ping98k/open_llama_3b_v2_4bit", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("ping98k/open_llama_3b_v2_4bit")
```
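
A minimal generation example, continuing from the snippet above (the prompt is just an illustration):

```python
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```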

## QLoRA Fine-tuning

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Reduce activation memory and make the quantized model trainable.
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)

# LoRA adapters on all attention and MLP projections.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```
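
From here, training proceeds as with any causal LM. Below is a minimal sketch using the transformers `Trainer`, assuming `train_dataset` is a tokenized dataset you supply; the hyperparameters are illustrative, not tuned.

```python
from transformers import Trainer, TrainingArguments, DataCollatorForLanguageModeling

model.print_trainable_parameters()  # only the LoRA adapters are trainable

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="open_llama_3b_v2_qlora",  # illustrative path
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        bf16=True,                            # matches the bfloat16 compute dtype
        logging_steps=10,
    ),
    train_dataset=train_dataset,              # assumed: your tokenized dataset
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```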