dino-vitb16-finetuned-galaxy10-decals

This model is a fine-tuned version of facebook/dino-vitb16 on the matthieulel/galaxy10_decals dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3956
  • Accuracy: 0.5062

Model description

More information needed

Intended uses & limitations

More information needed
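Although the card gives no usage details, the checkpoint is an image-classification model, so it should load with the standard transformers image-classification pipeline. A minimal sketch (the helper name is ours; the pipeline call downloads the checkpoint from the Hub):

```python
def load_galaxy_classifier(model_id="matthieulel/dino-vitb16-finetuned-galaxy10-decals"):
    """Return an image-classification pipeline for the fine-tuned checkpoint.

    The lazy import keeps the helper importable even where transformers
    is not installed; calling it requires transformers and network access.
    """
    from transformers import pipeline
    return pipeline("image-classification", model=model_id)

# Usage (requires network access to the Hub):
# clf = load_galaxy_classifier()
# preds = clf("galaxy.jpg")  # list of {"label": ..., "score": ...} dicts
```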

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-07
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.2
  • num_epochs: 20
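The hyperparameters above imply an effective batch size of 32 × 4 = 128 and a linear warmup-then-decay schedule over roughly 2480 optimizer steps (about 124 per epoch, per the log below). A small sketch of that schedule, under those assumptions:

```python
def linear_warmup_lr(step, total_steps, base_lr=1e-7, warmup_ratio=0.2):
    """Linearly ramp up to base_lr over the warmup phase, then decay
    linearly back to zero by the final step."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

# Effective batch size: per-device batch * gradient accumulation steps
effective_batch = 32 * 4  # = 128, matching total_train_batch_size
```

With warmup_ratio 0.2 and 2480 total steps, the peak learning rate of 1e-7 is reached around step 496 and decays to zero at the end of training.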

Training results

Training Loss | Epoch   | Step | Validation Loss | Accuracy
------------- | ------- | ---- | --------------- | --------
2.8951        | 0.9940  | 124  | 2.7751          | 0.0868
2.5724        | 1.9960  | 249  | 2.5352          | 0.0817
2.2714        | 2.9980  | 374  | 2.2738          | 0.1285
2.0997        | 4.0     | 499  | 2.0739          | 0.2182
1.9182        | 4.9940  | 623  | 1.9210          | 0.3027
1.8259        | 5.9960  | 748  | 1.8074          | 0.3608
1.7096        | 6.9980  | 873  | 1.7133          | 0.3923
1.6670        | 8.0     | 998  | 1.6444          | 0.4177
1.5932        | 8.9940  | 1122 | 1.5951          | 0.4335
1.5421        | 9.9960  | 1247 | 1.5479          | 0.4464
1.5117        | 10.9980 | 1372 | 1.5130          | 0.4600
1.4890        | 12.0    | 1497 | 1.4821          | 0.4735
1.4881        | 12.9940 | 1621 | 1.4592          | 0.4786
1.4648        | 13.9960 | 1746 | 1.4420          | 0.4865
1.4387        | 14.9980 | 1871 | 1.4277          | 0.4927
1.4600        | 16.0    | 1996 | 1.4154          | 0.4994
1.4393        | 16.9940 | 2120 | 1.4056          | 0.5000
1.4471        | 17.9960 | 2245 | 1.3995          | 0.5045
1.4085        | 18.9980 | 2370 | 1.3956          | 0.5062
1.4227        | 19.8798 | 2480 | 1.3948          | 0.5056

Framework versions

  • Transformers 4.40.1
  • Pytorch 1.12.1+cu116
  • Datasets 2.19.0
  • Tokenizers 0.19.1