
🏎 Drift Car Tracking & Zone Analysis Model

📌 Overview

This project is a computer vision model designed to track drifting cars and quantify driver performance using aerial (drone) footage. The system detects and tracks vehicles during tandem runs and measures how they interact with predefined drift zones.

The current implementation is a proof of concept, developed specifically for footage from Evergreen Speedway in Monroe, Washington.


🧠 Model Description

This model uses a YOLO-based framework to:

  • Detect drift cars in tandem runs
  • Classify vehicles as:
    • leader
    • chaser
  • Classify zones as:
    • FrontZone
    • RearZone
  • Track vehicles across frames
  • Enable downstream analysis of zone interaction and timing
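
The tracking step above can be sketched as a minimal frame-to-frame association loop. This is an illustrative pure-Python sketch, not the project's actual tracker (Ultralytics ships trackers such as ByteTrack for this); the box coordinates and IoU threshold are assumptions.

```python
# Minimal sketch of frame-to-frame track association by IoU matching.
# Hypothetical (x1, y1, x2, y2) pixel boxes; not the project's tracker.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, threshold=0.3):
    """Greedily assign each detection to the best-overlapping track ID.

    tracks maps track_id -> last known box; returns {det_index: track_id}
    and updates each matched track's box in place.
    """
    assignments, used = {}, set()
    for det_idx, det in enumerate(detections):
        best_id, best_iou = None, threshold
        for track_id, last_box in tracks.items():
            if track_id in used:
                continue
            score = iou(last_box, det)
            if score > best_iou:
                best_id, best_iou = track_id, score
        if best_id is not None:
            assignments[det_idx] = best_id
            used.add(best_id)
            tracks[best_id] = det
    return assignments
```

In practice the leader/chaser class predictions would be attached to the matched track IDs, so each car keeps a stable identity across frames even when the per-frame classification flickers.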

Training Details

  • Fine-tuned from a pretrained YOLO model
  • Custom, manually annotated dataset
  • Two datasets:
    • Cars: Bounding boxes for leader and chaser
    • Zones: Segmentation masks for drift zones

Zone interaction is computed using geometric methods (polygon overlap + time tracking), not learned directly by the model.
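
The geometric step can be illustrated with a small self-contained sketch: clip the car's bounding box against the zone polygon (Sutherland-Hodgman), take the shoelace area of the result, and accumulate overlapping frames into seconds. The zone coordinates, fps, and overlap threshold here are hypothetical; a geometry library such as Shapely would do the same job.

```python
# Sketch of zone interaction via polygon overlap + time tracking.
# Clips the zone polygon to a car's axis-aligned box and sums frames
# with non-trivial overlap. All coordinates and fps are hypothetical.

def clip_to_box(poly, x1, y1, x2, y2):
    """Sutherland-Hodgman: clip a polygon to an axis-aligned box."""
    def clip_edge(points, inside, intersect):
        out = []
        for i, cur in enumerate(points):
            prev = points[i - 1]
            if inside(cur):
                if not inside(prev):
                    out.append(intersect(prev, cur))
                out.append(cur)
            elif inside(prev):
                out.append(intersect(prev, cur))
        return out

    def x_cross(p, q, x):  # edge/vertical-line intersection
        t = (x - p[0]) / (q[0] - p[0])
        return (x, p[1] + t * (q[1] - p[1]))

    def y_cross(p, q, y):  # edge/horizontal-line intersection
        t = (y - p[1]) / (q[1] - p[1])
        return (p[0] + t * (q[0] - p[0]), y)

    poly = clip_edge(poly, lambda p: p[0] >= x1, lambda p, q: x_cross(p, q, x1))
    poly = clip_edge(poly, lambda p: p[0] <= x2, lambda p, q: x_cross(p, q, x2))
    poly = clip_edge(poly, lambda p: p[1] >= y1, lambda p, q: y_cross(p, q, y1))
    poly = clip_edge(poly, lambda p: p[1] <= y2, lambda p, q: y_cross(p, q, y2))
    return poly

def area(poly):
    """Shoelace area of a polygon given as [(x, y), ...]."""
    return abs(sum(poly[i - 1][0] * p[1] - p[0] * poly[i - 1][1]
                   for i, p in enumerate(poly))) / 2.0

def time_in_zone(zone, boxes_per_frame, fps=30.0, min_overlap=0.0):
    """Seconds during which the car's box overlaps the zone polygon."""
    frames = sum(1 for box in boxes_per_frame
                 if area(clip_to_box(zone, *box)) > min_overlap)
    return frames / fps
```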


🎯 Intended Use

Designed for:

  • Formula Drift-style competitions
  • Grassroots drifting events
  • Experimental motorsports analytics

Example Applications

  • Measuring time spent in drift zones
  • Analyzing tandem behavior (leader vs. chaser)
  • Supporting judging with quantitative insights
  • Enhancing broadcast overlays

📊 Training Data

📁 Source


🔒 Dataset Size

🚗 Cars

  • 1,204 original → 2,724 augmented
  • Split: 84% train / 12% val / 5% test

🟣 Zones

  • 724 original → 1,666 augmented
  • Split: 85% train / 9% val / 6% test

🏷 Class Distribution

| Class     | Count |
|-----------|-------|
| Leader    | 1,204 |
| Chaser    | 1,201 |
| FrontZone | 137   |
| RearZone  | 588   |

✏️ Annotation

  • Fully manual annotation
  • Consistent labeling across frames
  • Handled occlusion, overlap, and tandem proximity

🔧 Augmentation

  • Rotation ±8°
  • Saturation ±15%
  • Brightness ±10%
  • Blur up to 2 px
  • Mosaic = 0.2
  • Scale ±15%
  • Translate ±5%
  • hsv_h (hue / color tint) = 0.01
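
For reference, the settings above can be expressed as Ultralytics `train()` hyperparameters. The mapping is an assumption (rotation → `degrees`, saturation → `hsv_s`, brightness → `hsv_v`); the 2 px blur has no direct `train()` flag and was presumably applied offline during dataset export.

```python
# Augmentation settings mapped to Ultralytics train() keyword names.
# The name mapping is an assumption, not confirmed by the authors;
# blur is assumed to have been baked into the exported dataset.
AUGMENTATION = {
    "degrees": 8.0,     # rotation ±8°
    "hsv_s": 0.15,      # saturation ±15%
    "hsv_v": 0.10,      # brightness ±10%
    "hsv_h": 0.01,      # hue / color tint
    "mosaic": 0.2,      # mosaic probability
    "scale": 0.15,      # scale ±15%
    "translate": 0.05,  # translate ±5%
}
```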

βš™οΈ Training Procedure

  • Framework: Ultralytics YOLO
  • Models:
    • Cars: YOLO26s
    • Zones: YOLO26s-seg

💻 Hardware

  • NVIDIA A100 (Google Colab)

⏱ Training Time

  • Cars: 80 epochs (~42 min)
  • Zones: 140 epochs (~1h 14min)

βš™οΈ Settings

  • Batch size: 16
  • Image size: 1024
  • Workers: 8
  • Cls: 2.5 (Only for Object Detection)
  • No early stopping
  • Default preprocessing
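
Put together, the two training runs would look roughly like this with the Ultralytics API. This is a sketch under assumptions: the dataset YAML paths are placeholders, and the checkpoint names simply mirror the model names stated above.

```python
# Sketch of the two training runs described in this card.
# "cars.yaml" / "zones.yaml" are hypothetical dataset configs.
from ultralytics import YOLO

# Cars: detection model, 80 epochs, raised classification-loss weight
car_model = YOLO("yolo26s.pt")
car_model.train(
    data="cars.yaml",
    epochs=80,
    imgsz=1024,
    batch=16,
    workers=8,
    cls=2.5,      # classification-loss weight (detection run only)
    patience=0,   # intended to disable early stopping
)

# Zones: segmentation model, 140 epochs, default loss weights
zone_model = YOLO("yolo26s-seg.pt")
zone_model.train(
    data="zones.yaml",
    epochs=140,
    imgsz=1024,
    batch=16,
    workers=8,
    patience=0,
)
```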

📈 Evaluation Results

🚗 Car Model

| Metric    | Value  |
|-----------|--------|
| Precision | 0.9904 |
| Recall    | 0.9792 |
| mAP@50    | 0.9882 |
| mAP@50-95 | 0.8937 |

🟣 Zone Model

| Metric    | Value  |
|-----------|--------|
| Precision | 0.9919 |
| Recall    | 0.9952 |
| mAP@50    | 0.9948 |
| mAP@50-95 | 0.7064 |

📉 Key Visualizations

*(Result plots: Car Results and Zone Results)*

🧠 Performance Analysis

🚗 Cars

Strengths:

  • Very high precision and recall
  • Reliable detection and classification
  • Strong tracking foundation

Limitations:

  • Smoke occlusion affects detection
  • Close tandem overlap can cause confusion
  • Limited generalization beyond training conditions

🟣 Zones

  • High detection accuracy (mAP@50)
  • Lower boundary precision (mAP@50-95)

Implication: the model is good at identifying zones but less accurate at their exact boundaries, which impacts timing precision.

Note: Since zones are static, polygon-based methods may be more reliable than segmentation.
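
That note can be made concrete: with a fixed overhead view, each zone can be stored as a hand-drawn static polygon, and zone timing reduces to a per-frame point-in-polygon test on the car's box center. A ray-casting sketch with hypothetical coordinates:

```python
# Polygon-based alternative to segmentation for static zones:
# test each frame's car-center point against a fixed zone polygon.

def point_in_polygon(x, y, poly):
    """Ray-casting test: is (x, y) inside polygon [(px, py), ...]?"""
    inside = False
    for i, (px, py) in enumerate(poly):
        qx, qy = poly[i - 1]
        if (py > y) != (qy > y):
            # x-coordinate where this polygon edge crosses the ray
            if x < px + (y - py) * (qx - px) / (qy - py):
                inside = not inside
    return inside

def center_frames_in_zone(zone, centers):
    """Count frames whose car-center point falls inside the zone."""
    return sum(point_in_polygon(x, y, zone) for x, y in centers)
```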

⚠️ Limitations and Biases

🚨 Failure Cases

  • Heavy smoke → missed or unstable detections
  • Close tandem → overlap confusion
  • Camera motion → inconsistent zone alignment
  • Edge-of-frame → partial detections

📉 Weak Areas

  • Zone boundary precision
  • Leader vs. chaser ambiguity in tight proximity

📊 Data Bias

  • Single track (Evergreen Speedway)
  • Single event and lighting condition
  • Fixed drone perspective

🌦 Environmental Limits

Performance may degrade with:

  • Smoke, blur, or occlusion
  • Lighting changes
  • Drone altitude variation
  • Camera movement

🚫 Not Suitable For

  • Official judging systems
  • General vehicle detection
  • Different tracks without recalibration
  • Other motorsports without adaptation

πŸ“ Dataset Limitations

  • Underrepresented zone classes
  • Limited diversity (track, cars, conditions)
  • Few edge-case scenarios (spins, collisions)

🏁 Summary

This model performs strongly within a controlled environment but is highly specialized. It should be viewed as a proof-of-concept system for drift analytics rather than a fully generalized or production-ready solution.
