# gpt2-multilingual-20-zh-repair_3epochs
This model is a fine-tuned version of CausalNLP/gpt2-hf_multilingual-20 on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 3.5681
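Since the evaluation loss is a mean token-level cross-entropy in nats, the corresponding perplexity follows directly from it. This is a quick derivation, not a metric logged by the run itself:

```python
import math

eval_loss = 3.5681  # final validation loss reported above
perplexity = math.exp(eval_loss)
print(round(perplexity, 1))  # ≈ 35.4
```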
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: AdamW (`adamw_torch_fused`) with betas=(0.9, 0.95) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
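With these settings the learning rate ramps up linearly for 500 steps and then decays along a cosine curve. A minimal sketch of that schedule follows; the total step count (~23,962) is inferred from the results table below, where 500 steps ≈ 0.0626 epochs, and is an assumption rather than a logged value:

```python
import math

def lr_at(step, base_lr=5e-5, warmup_steps=500, total_steps=23962):
    """Linear warmup followed by cosine decay to zero.

    total_steps is estimated from the results table (500 steps ~ 0.0626
    epochs, so ~7,987 steps/epoch over 3 epochs); it is not reported
    directly in this card.
    """
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at(0))    # 0.0
print(lr_at(500))  # 5e-05 (peak rate, end of warmup)
```

Note also that the total train batch size of 128 is simply the per-device batch size (32) multiplied by the gradient-accumulation steps (4).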
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 3.5916 | 0.0626 | 500 | 3.6450 |
| 3.6159 | 0.1252 | 1000 | 3.6299 |
| 3.5824 | 0.1879 | 1500 | 3.6205 |
| 3.5986 | 0.2505 | 2000 | 3.6152 |
| 3.5522 | 0.3131 | 2500 | 3.6103 |
| 3.5757 | 0.3757 | 3000 | 3.6064 |
| 3.5319 | 0.4384 | 3500 | 3.6034 |
| 3.6002 | 0.5010 | 4000 | 3.6007 |
| 3.5922 | 0.5636 | 4500 | 3.5978 |
| 3.5069 | 0.6262 | 5000 | 3.5957 |
| 3.5616 | 0.6889 | 5500 | 3.5933 |
| 3.4412 | 0.7515 | 6000 | 3.5917 |
| 3.5299 | 0.8141 | 6500 | 3.5897 |
| 3.4957 | 0.8767 | 7000 | 3.5882 |
| 3.574 | 0.9393 | 7500 | 3.5863 |
| 3.6165 | 1.0019 | 8000 | 3.5852 |
| 3.5208 | 1.0645 | 8500 | 3.5844 |
| 3.5485 | 1.1271 | 9000 | 3.5829 |
| 3.5339 | 1.1897 | 9500 | 3.5816 |
| 3.5774 | 1.2524 | 10000 | 3.5802 |
| 3.5327 | 1.3150 | 10500 | 3.5790 |
| 3.4909 | 1.3776 | 11000 | 3.5780 |
| 3.5583 | 1.4402 | 11500 | 3.5768 |
| 3.5347 | 1.5029 | 12000 | 3.5755 |
| 3.5236 | 1.5655 | 12500 | 3.5744 |
| 3.4773 | 1.6281 | 13000 | 3.5733 |
| 3.5296 | 1.6907 | 13500 | 3.5726 |
| 3.4551 | 1.7534 | 14000 | 3.5718 |
| 3.5473 | 1.8160 | 14500 | 3.5710 |
| 3.5095 | 1.8786 | 15000 | 3.5704 |
| 3.5057 | 1.9412 | 15500 | 3.5700 |
| 3.5425 | 2.0038 | 16000 | 3.5695 |
| 3.5358 | 2.0664 | 16500 | 3.5695 |
| 3.4944 | 2.1290 | 17000 | 3.5692 |
| 3.4901 | 2.1916 | 17500 | 3.5690 |
| 3.4932 | 2.2543 | 18000 | 3.5688 |
| 3.5356 | 2.3169 | 18500 | 3.5686 |
| 3.5352 | 2.3795 | 19000 | 3.5683 |
| 3.4779 | 2.4421 | 19500 | 3.5682 |
| 3.5639 | 2.5047 | 20000 | 3.5682 |
| 3.4919 | 2.5674 | 20500 | 3.5681 |
| 3.5491 | 2.6300 | 21000 | 3.5681 |
| 3.5216 | 2.6926 | 21500 | 3.5681 |
| 3.4751 | 2.7552 | 22000 | 3.5681 |
| 3.4998 | 2.8179 | 22500 | 3.5681 |
| 3.4943 | 2.8805 | 23000 | 3.5681 |
| 3.4764 | 2.9431 | 23500 | 3.5681 |
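The validation loss never increases between evaluations and flattens at 3.5681 over roughly the last 3,000 steps, suggesting the run has converged under this schedule. A quick sanity check over the tail of the table (values copied from the rows above, steps 16000–23500):

```python
# Validation losses at steps 16000-23500 (last 16 eval points)
tail = [3.5695, 3.5695, 3.5692, 3.5690, 3.5688, 3.5686, 3.5683,
        3.5682, 3.5682, 3.5681, 3.5681, 3.5681, 3.5681, 3.5681,
        3.5681, 3.5681]

# Monotonically non-increasing across the tail
assert all(a >= b for a, b in zip(tail, tail[1:]))

# Total movement over these ~7,500 steps is only 0.0014 nats
print(round(max(tail) - min(tail), 4))
```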
### Framework versions
- Transformers 4.57.3
- Pytorch 2.9.0
- Datasets 4.4.1
- Tokenizers 0.22.1