duyhv1411/Llama-1.1B-qlora-ft
This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0, adapted with QLoRA to improve instruction following on general-domain tasks.
How to use
# Use a pipeline as a high-level helper
from transformers import pipeline

# Prompt in the Zephyr-style chat format TinyLlama-Chat expects
prompt = """<|user|>
Hello, how are you?</s>
<|assistant|>
"""

# Run our instruction-tuned model
pipe = pipeline(
    task="text-generation",
    model="duyhv1411/Llama-1.1B-qlora-ft",
    return_full_text=False,
)
print(pipe(prompt)[0]["generated_text"])
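The hand-written prompt above follows the Zephyr-style chat template that TinyLlama-Chat uses. For multi-turn conversations, the same formatting can be sketched with a small helper (a manual illustration; `build_prompt` is a hypothetical name, and in practice the tokenizer's `apply_chat_template` method is the more robust choice):

```python
def build_prompt(messages):
    # Assemble <|role|> ... </s> segments, then open the assistant turn,
    # mirroring the template written out by hand in the example above.
    parts = []
    for m in messages:
        parts.append(f"<|{m['role']}|>\n{m['content']}</s>\n")
    parts.append("<|assistant|>\n")
    return "".join(parts)

prompt = build_prompt([{"role": "user", "content": "Hello, how are you?"}])
print(prompt)
```

The resulting string is identical to the manual prompt, so it can be passed to the same pipeline call.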