# timesformer-base-finetuned-k400-finetuned-snapdata_short_classification-sample_rate32

This model is a fine-tuned version of [facebook/timesformer-base-finetuned-k400](https://huggingface.co/facebook/timesformer-base-finetuned-k400) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.9266
- Accuracy: 0.6862
- Class 0 precision: 0.6305
- Class 0 recall: 0.7881
- Class 0 F1-score: 0.7005
- Class 0 support: 1024.0
- Class 1 precision: 0.7639
- Class 1 recall: 0.5974
- Class 1 F1-score: 0.6705
- Class 1 support: 1175.0
- Accuracy F1-score (micro): 0.6862
- Macro avg precision: 0.6972
- Macro avg recall: 0.6928
- Macro avg F1-score: 0.6855
- Macro avg support: 2199.0
- Weighted avg precision: 0.7018
- Weighted avg recall: 0.6862
- Weighted avg F1-score: 0.6845
- Weighted avg support: 2199.0
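The macro and weighted averages above follow directly from the per-class scores. A minimal sketch of the arithmetic, with the F1 scores and supports copied from the list above:

```python
# Per-class F1 scores and supports copied from the evaluation results above.
f1 = {0: 0.7005, 1: 0.6705}
support = {0: 1024, 1: 1175}
total = sum(support.values())  # 2199 evaluation examples

# Macro average: unweighted mean over the two classes.
macro_f1 = sum(f1.values()) / len(f1)

# Weighted average: mean weighted by class support.
weighted_f1 = sum(f1[c] * support[c] for c in f1) / total

print(round(macro_f1, 4))     # 0.6855
print(round(weighted_f1, 4))  # 0.6845
```

The small gap between the two (0.6855 vs. 0.6845) reflects the mild class imbalance: class 1 has more examples (1175 vs. 1024) and a lower F1, so it pulls the weighted average down slightly.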
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 31000
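The hyperparameters above can be expressed with the standard `transformers` `TrainingArguments`; this is a sketch only, since the actual training script is not provided, and the `output_dir` name is a placeholder.

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters as TrainingArguments.
# output_dir is hypothetical; all other values are taken from the list above.
args = TrainingArguments(
    output_dir="timesformer-snapdata_short_classification",  # hypothetical
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=31000,  # training length is given in steps, not epochs
)
```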
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Class 0 Precision | Class 0 Recall | Class 0 F1-score | Class 0 Support | Class 1 Precision | Class 1 Recall | Class 1 F1-score | Class 1 Support | Accuracy F1-score | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Macro avg Support | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score | Weighted avg Support |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.664 | 0.0200 | 621 | 0.6462 | 0.6271 | 0.5733 | 0.7793 | 0.6606 | 1024.0 | 0.7200 | 0.4945 | 0.5863 | 1175.0 | 0.6271 | 0.6466 | 0.6369 | 0.6234 | 2199.0 | 0.6516 | 0.6271 | 0.6209 | 2199.0 |
| 0.5917 | 1.0200 | 1242 | 0.6000 | 0.6885 | 0.6680 | 0.6582 | 0.6631 | 1024.0 | 0.7059 | 0.7149 | 0.7104 | 1175.0 | 0.6885 | 0.6869 | 0.6865 | 0.6867 | 2199.0 | 0.6882 | 0.6885 | 0.6883 | 2199.0 |
| 0.5387 | 2.0200 | 1863 | 0.5982 | 0.6758 | 0.6474 | 0.6670 | 0.6570 | 1024.0 | 0.7019 | 0.6834 | 0.6925 | 1175.0 | 0.6758 | 0.6747 | 0.6752 | 0.6748 | 2199.0 | 0.6765 | 0.6758 | 0.6760 | 2199.0 |
| 0.5298 | 3.0200 | 2484 | 0.5978 | 0.6894 | 0.7155 | 0.5527 | 0.6237 | 1024.0 | 0.6747 | 0.8085 | 0.7356 | 1175.0 | 0.6894 | 0.6951 | 0.6806 | 0.6796 | 2199.0 | 0.6937 | 0.6894 | 0.6835 | 2199.0 |
| 0.472 | 4.0200 | 3105 | 0.6734 | 0.6457 | 0.5886 | 0.7949 | 0.6764 | 1024.0 | 0.7426 | 0.5157 | 0.6087 | 1175.0 | 0.6457 | 0.6656 | 0.6553 | 0.6425 | 2199.0 | 0.6709 | 0.6457 | 0.6402 | 2199.0 |
| 0.4621 | 5.0200 | 3726 | 0.6309 | 0.6903 | 0.6862 | 0.6172 | 0.6499 | 1024.0 | 0.6933 | 0.7540 | 0.7224 | 1175.0 | 0.6903 | 0.6897 | 0.6856 | 0.6861 | 2199.0 | 0.6900 | 0.6903 | 0.6886 | 2199.0 |
| 0.4311 | 6.0200 | 4347 | 0.6399 | 0.7058 | 0.6682 | 0.7314 | 0.6984 | 1024.0 | 0.7449 | 0.6834 | 0.7128 | 1175.0 | 0.7058 | 0.7065 | 0.7074 | 0.7056 | 2199.0 | 0.7092 | 0.7058 | 0.7061 | 2199.0 |
| 0.3747 | 7.0200 | 4968 | 0.7701 | 0.6735 | 0.6957 | 0.5312 | 0.6024 | 1024.0 | 0.6613 | 0.7974 | 0.7230 | 1175.0 | 0.6735 | 0.6785 | 0.6643 | 0.6627 | 2199.0 | 0.6773 | 0.6735 | 0.6669 | 2199.0 |
| 0.3247 | 8.0200 | 5589 | 0.7543 | 0.7003 | 0.6605 | 0.7334 | 0.6950 | 1024.0 | 0.7429 | 0.6715 | 0.7054 | 1175.0 | 0.7003 | 0.7017 | 0.7024 | 0.7002 | 2199.0 | 0.7046 | 0.7003 | 0.7006 | 2199.0 |
| 0.3022 | 9.0200 | 6210 | 0.9266 | 0.6862 | 0.6305 | 0.7881 | 0.7005 | 1024.0 | 0.7639 | 0.5974 | 0.6705 | 1175.0 | 0.6862 | 0.6972 | 0.6928 | 0.6855 | 2199.0 | 0.7018 | 0.6862 | 0.6845 | 2199.0 |
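Note that validation loss bottoms out around epoch 3 (0.5978) while accuracy peaks at epoch 6 (0.7058), and the final checkpoint has the highest validation loss of the run, suggesting overfitting in the later epochs. A minimal sketch of selecting a best checkpoint from the logged rows above (epochs rounded to whole numbers):

```python
# (epoch, step, validation_loss, accuracy) rows copied from the table above.
results = [
    (0, 621, 0.6462, 0.6271),
    (1, 1242, 0.6000, 0.6885),
    (2, 1863, 0.5982, 0.6758),
    (3, 2484, 0.5978, 0.6894),
    (4, 3105, 0.6734, 0.6457),
    (5, 3726, 0.6309, 0.6903),
    (6, 4347, 0.6399, 0.7058),
    (7, 4968, 0.7701, 0.6735),
    (8, 5589, 0.7543, 0.7003),
    (9, 6210, 0.9266, 0.6862),
]

# Best checkpoint by lowest validation loss vs. highest accuracy.
best_by_loss = min(results, key=lambda r: r[2])
best_by_acc = max(results, key=lambda r: r[3])

print(best_by_loss)  # epoch 3, step 2484
print(best_by_acc)   # epoch 6, step 4347
```

Which criterion to prefer depends on the downstream use; the reported headline metrics above come from the final (epoch 9) checkpoint.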
### Framework versions
- Transformers 4.46.3
- Pytorch 2.0.0+cu117
- Datasets 3.1.0
- Tokenizers 0.20.3