# Open LLaMA 3B v2 — 4-bit NF4
A 4-bit quantized version of `openlm-research/open_llama_3b_v2`, produced with bitsandbytes NF4 and ready for QLoRA fine-tuning.
## Quantization Details
| Parameter | Value |
|---|---|
| Quant method | bitsandbytes NF4 |
| Double quant | Yes |
| Compute dtype | bfloat16 |
| Model size | ~1.93 GB |
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("ping98k/open_llama_3b_v2_4bit", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("ping98k/open_llama_3b_v2_4bit")
```
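A minimal generation example following the loading code above; the prompt and sampling settings are illustrative, not recommendations:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model = AutoModelForCausalLM.from_pretrained("ping98k/open_llama_3b_v2_4bit", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("ping98k/open_llama_3b_v2_4bit")

# Illustrative sampling settings
gen_config = GenerationConfig(max_new_tokens=64, do_sample=True, temperature=0.7, top_p=0.9)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, generation_config=gen_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```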
## QLoRA Fine-tuning

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Reduce activation memory and prepare the quantized model for k-bit training
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)

# Apply LoRA adapters to all attention and MLP projection layers
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```
## Model tree

Base model: [openlm-research/open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2)