Hugging Face
HYUNAHKO/Iterative-DPO-Final
Tags: Safetensors · llama · 4-bit precision · bitsandbytes
Repository size: 1.12 GB
1 contributor · History: 2 commits
Latest commit 9d5aff8 (verified), by HYUNAHKO: "Upload merged Iterative DPO model" · 11 months ago
Files:

  .gitattributes            1.57 kB
  config.json               1.49 kB
  generation_config.json    230 Bytes
  model.safetensors         1.1 GB
  special_tokens_map.json   459 Bytes
  tokenizer.json            17.2 MB
  tokenizer_config.json     50.6 kB

All files were added in the commit "Upload merged Iterative DPO model", 11 months ago.