You need to agree to share your contact information to access this model

This repository is publicly accessible, but you have to accept the conditions to access its files and content.

Log in or Sign Up to review the conditions and access this model content.

SWACH_V1

1. Model Specifications

  • Architecture: YOLOv8s (small variant)
  • Parameters: 11,137,922
  • Gradients: 11,137,906
  • Computational Load: 28.7 GFLOPs
  • Input Size: 640Γ—640 pixels
  • Classes: 6 garbage categories (overridden from default COCO 80-class setup)

2. Training Configuration

  • Epochs: 100
  • Batch Size: 16
  • Device: NVIDIA Tesla T4 GPU (CUDA)
  • Optimizer: AdamW
    • Learning Rate (lr0): 0.001
    • Momentum: 0.9
    • Weight Decay: 0.0005
  • Data Augmentation:
    • Albumentations: Blur, MedianBlur, CLAHE, Grayscale conversion
    • Mosaic Augmentation: Enabled until epoch 90 (improves small-object detection)
  • Mixed Precision: AMP (Automatic Mixed Precision) βœ…
  • Loss Functions:
    • box_loss: Bounding box regression
    • cls_loss: Class prediction
    • dfl_loss: Distribution Focal Loss (for localization accuracy)

3. Dataset

  • Training Images: 1,077 (+15 background)
  • Validation Images: 157 (+1 background)
  • Instances: 587 annotated objects
  • Class Distribution:
    • garbage, sampah-detection, trash, and 3 additional classes

4. Performance Metrics

Best Model Validation Results (Saved as swach_v1.pt):
metrics:

Metric Value
mAP50 0.324
mAP50-95 0.197
Precision (P) 0.438
Recall (R) 0.356

Per-Class Results:

Class Precision Recall mAP50
garbage 0.155 0.597 0.173
sampah-detection 0.447 0.676 0.599
trash 0.459 0.500 0.662
Class 0 (Unknown) 0.128 0.005 0.187

Key Insight: The model excels at detecting sampah-detection and trash (mAP50 > 0.59) but struggles with class 0 (low recall).


5. Training Trajectory

  • Total Time: 0.69 hours (~41 minutes)
  • Loss Reduction:
    • box_loss: 1.656 β†’ 0.623 (↓62.4%)
    • cls_loss: 3.373 β†’ 0.427 (↓87.3%)
  • mAP50 Progress: Started at 0.068 (Epoch 1), peaked at 0.323 (Epoch 58).
  • Critical Improvement: Epoch 58 saw a 52% mAP50 jump (0.222 β†’ 0.323) due to mosaic augmentation effects.

6. Techniques Applied

  1. Transfer Learning:
    • Initialized with yolov8s.pt COCO weights (349/355 layers transferred).
  2. Dynamic Learning Rate:
    • Warmup: 3 epochs (bias LR: 0.1 β†’ 0.001).
    • Cosine annealing (automated by YOLOv8).
  3. Advanced Augmentation:
    • Mosaic: 90% of training (random image stitching).
    • Geometric: FlipLR (50%), translation, scaling.
  4. Efficiency Optimizations:
    • AMP for FP16/FP32 hybrid training β†’ faster computation.
    • Dataloader workers: 8 (parallel data loading).

7. Deployment Readiness

  • Output Format: TorchScript-optimized (22.5MB stripped weights).
  • Inference Speed:
    • Preprocess: 0.3 ms/image
    • Inference: 5.8 ms/image (Tesla T4)
  • Model Saved At: garbage_detection_training/train/weights/best.pt

Conclusion

The YOLOv8s model achieved 32.4% mAP50 on garbage detection, with strong performance on identifiable trash categories (sampah-detection, trash). To improve results:

  1. Address Class Imbalance: Collect more data for low-recall classes (e.g., class 0).
  2. Tune Augmentation: Increase copy-paste or mixup augmentations for rare classes.
  3. Hyperparameter Tuning: Adjust cls_loss weight to reduce false negatives.

Final Model: Ready for deployment in roadside garbage monitoring systems.


Report Generated By: Vishesh Yadav @Bhasa Date: June 30, 2025

Downloads last month

-

Downloads are not tracked for this model. How to track
Inference Providers NEW
This model isn't deployed by any Inference Provider. πŸ™‹ 1 Ask for provider support

Model tree for vishesh1234/swach_v1

Finetuned
(149)
this model