# open-llama-3b-opus-reasoning-sft-2k-4bit
A merged, 4-bit quantized version of `open_llama_3b_v2`, fine-tuned on reasoning data that uses `<think>` tags.
## How it was made
- Base model: `openlm-research/open_llama_3b_v2`
- Fine-tuned with QLoRA (4-bit base model plus a LoRA adapter)
- LoRA adapter merged into the base model at 16-bit precision
- Merged model re-quantized to 4-bit NF4
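The merge-and-requantize steps above can be sketched with `transformers`, `peft`, and `bitsandbytes`. This is a sketch, not the exact script used for this repo; the function name and output directory are illustrative, and the imports are deferred so the sketch can be read without a GPU environment installed:

```python
def merge_and_requantize(base_id, adapter_id, out_dir):
    """Sketch: merge a LoRA adapter into its base model at 16-bit,
    save the merged checkpoint, then reload it with 4-bit NF4
    quantization (the pipeline described above)."""
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import PeftModel

    # 1) Load the base model in fp16 and fold the adapter weights in.
    base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
    merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()
    merged.save_pretrained(out_dir)

    # 2) Reload the merged checkpoint with 4-bit NF4 quantization.
    bnb = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    )
    return AutoModelForCausalLM.from_pretrained(
        out_dir, quantization_config=bnb, device_map="auto"
    )
```

Merging first and quantizing afterwards avoids serving the adapter separately, at the cost of a one-time 16-bit merge pass.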
## Training Data
- `Crownelius/Opus-4.6-Reasoning-3300x` (2,160 samples)
- `Roman1111111/claude-opus-4.6-10000x` (9,633 samples)
- Combined: 11,669 samples after filtering (<= 2024 tokens)
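A minimal sketch of the length filter described above. The whitespace-based token count and the `prompt`/`response` field names are stand-in assumptions; real filtering would use the model's tokenizer on whatever fields the datasets actually provide:

```python
MAX_TOKENS = 2024  # cutoff stated above

def token_count(text):
    # Stand-in for len(tokenizer(text)["input_ids"]); a real filter
    # would count tokens with the model's own tokenizer.
    return len(text.split())

def filter_samples(samples, max_tokens=MAX_TOKENS):
    # Keep only samples whose full prompt + response fits in the cutoff.
    return [
        s for s in samples
        if token_count(s["prompt"] + " " + s["response"]) <= max_tokens
    ]

samples = [
    {"prompt": "What is 10 * 5?", "response": "<think>10 * 5 = 50</think> 50"},
    {"prompt": "long", "response": "word " * 3000},  # over the cutoff
]
kept = filter_samples(samples)  # drops the oversized sample
```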
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "ping98k/open-llama-3b-opus-reasoning-sft-2k-4bit",
    device_map="auto",  # requires the accelerate package
)
tokenizer = AutoTokenizer.from_pretrained("ping98k/open-llama-3b-opus-reasoning-sft-2k-4bit")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 10 * 5?"},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=False))
```
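Since the model emits its reasoning inside `<think>` tags, a small helper can split the decoded output into the reasoning trace and the final answer. This regex-based split is a sketch, not an API shipped with this repo:

```python
import re

def split_thinking(text):
    """Return (reasoning, answer) from a generation that may contain a
    <think>...</think> block; reasoning is "" when no block is found."""
    m = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not m:
        return "", text.strip()
    reasoning = m.group(1).strip()
    # Everything outside the think block is treated as the answer.
    answer = (text[:m.start()] + text[m.end():]).strip()
    return reasoning, answer

reasoning, answer = split_thinking("<think>10 * 5 = 50</think>\nThe answer is 50.")
# reasoning -> "10 * 5 = 50", answer -> "The answer is 50."
```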