# 🏎 Drift Car Tracking & Zone Analysis Model
## 📌 Overview
This project is a computer vision model designed to **track drifting cars and quantify driver performance** using aerial (drone) footage. The system detects and tracks vehicles during tandem runs and measures how they interact with predefined drift zones.
The current implementation is a **proof of concept**, developed specifically for footage from **Evergreen Speedway in Monroe, Washington**.
---
## 🧠 Model Description
This model uses a YOLO-based framework to:
- Detect drift cars in tandem runs
- Classify vehicles as:
  - `leader`
  - `chaser`
- Classify zones as:
  - `FrontZone`
  - `RearZone`
- Track vehicles across frames
- Enable downstream analysis of zone interaction and timing
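The detect-and-track step above can be sketched with the Ultralytics tracking API. The weights file (`drift_cars.pt`) and video name are placeholders for this project's trained checkpoint and footage, not published artifacts:

```python
# Sketch: per-frame detection + multi-object tracking with Ultralytics.
# "drift_cars.pt" and "tandem_run.mp4" are hypothetical placeholder names.
from ultralytics import YOLO

car_model = YOLO("drift_cars.pt")  # fine-tuned car-detection weights

# model.track() runs the built-in tracker (BoT-SORT by default);
# persist=True keeps track IDs consistent across frames of one video.
for result in car_model.track(source="tandem_run.mp4", persist=True, stream=True):
    for box in result.boxes:
        track_id = int(box.id) if box.id is not None else -1
        cls_name = result.names[int(box.cls)]     # "leader" or "chaser"
        x1, y1, x2, y2 = box.xyxy[0].tolist()     # pixel bounding box
        print(track_id, cls_name, (x1, y1, x2, y2))
```

Each `(track_id, class, box)` tuple per frame is what the downstream zone-interaction and timing analysis consumes.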
### Training Details
- Fine-tuned from a pretrained YOLO model
- Custom dataset manually annotated
- Two datasets:
- **Cars:** Bounding boxes for leader and chaser
- **Zones:** Segmentation masks for drift zones
Zone interaction is computed using geometric methods (polygon overlap + time tracking), not learned directly by the model.
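A minimal sketch of that geometric step, using only the standard library: clip the car's bounding box against a convex zone polygon (Sutherland-Hodgman), measure the overlap fraction with the shoelace formula, and count frames above a threshold as time-in-zone. The zone coordinates, threshold, and frame rate below are made-up illustrations, not values from this project:

```python
# Sketch: polygon-overlap + time tracking for zone interaction.
# Zone coords, FPS, and the 50% threshold are hypothetical placeholders.

def shoelace_area(poly):
    """Unsigned area of a polygon given as [(x, y), ...]."""
    s = 0.0
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def clip_polygon(subject, clip):
    """Sutherland-Hodgman: clip `subject` against a convex CCW `clip` polygon."""
    def inside(p, a, b):  # p on the left of (or on) edge a->b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def intersect(p1, p2, a, b):  # segment p1->p2 vs infinite line a->b
        dx, dy = p2[0]-p1[0], p2[1]-p1[1]
        ex, ey = b[0]-a[0], b[1]-a[1]
        t = (ey*(p1[0]-a[0]) - ex*(p1[1]-a[1])) / (ex*dy - ey*dx)
        return (p1[0] + t*dx, p1[1] + t*dy)
    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        input_list, output = output, []
        if not input_list:
            break
        prev = input_list[-1]
        for cur in input_list:
            if inside(cur, a, b):
                if not inside(prev, a, b):
                    output.append(intersect(prev, cur, a, b))
                output.append(cur)
            elif inside(prev, a, b):
                output.append(intersect(prev, cur, a, b))
            prev = cur
    return output

def zone_overlap_fraction(box, zone):
    """Fraction of a detection box (x1, y1, x2, y2) inside the zone polygon."""
    x1, y1, x2, y2 = box
    car_poly = [(x1, y1), (x2, y1), (x2, y2), (x1, y2)]
    inter = clip_polygon(car_poly, zone)
    if len(inter) < 3:
        return 0.0
    return shoelace_area(inter) / shoelace_area(car_poly)

# Dwell time: count frames where >= 50% of the box overlaps the zone.
FPS = 30                                            # placeholder frame rate
zone = [(0, 0), (100, 0), (100, 100), (0, 100)]     # placeholder zone polygon
per_frame_boxes = [(25, 25, 75, 75), (75, 25, 125, 75), (150, 150, 200, 200)]
time_in_zone = sum(
    1 for b in per_frame_boxes if zone_overlap_fraction(b, zone) >= 0.5
) / FPS
```

With a real tracker feeding one box per frame per car, the same accumulation yields per-car time spent in `FrontZone` and `RearZone`.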
---
## 🎯 Intended Use
Designed for:
- Formula Drift-style competitions
- Grassroots drifting events
- Experimental motorsports analytics
### Example Applications
- Measuring time spent in drift zones
- Analyzing tandem behavior (leader vs. chaser)
- Supporting judging with quantitative insights
- Enhancing broadcast overlays
---
## 📊 Training Data
### 📁 Source
- [Formula Drift Seattle 2025 PRO, Round 6 - Top 32](https://www.youtube.com/watch?v=MuD-uxGQnrg&t=879s)
---
### 🔢 Dataset Size
#### 🚗 Cars
- 1,204 original → 2,724 augmented
- Split: 84% train / 12% val / 5% test
#### 🟣 Zones
- 724 original → 1,666 augmented
- Split: 85% train / 9% val / 6% test
---
### 🏷 Class Distribution
| Class | Count |
|-----------|------|
| Leader | 1,204 |
| Chaser | 1,201 |
| FrontZone | 137 |
| RearZone | 588 |
---
### ✏️ Annotation
- Fully manual annotation
- Consistent labeling across frames
- Handled occlusion, overlap, and tandem proximity
---
### 🔧 Augmentation
- Rotation ± 8°
- Saturation ± 15%
- Brightness ± 10%
- Blur 2px
- Mosaic = 0.2
- Scale ± 15%
- Translate ± 5%
- Hue / color tint (`hsv_h`) = 0.01
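Most of these settings correspond directly to Ultralytics train-time augmentation arguments; the mapping below is a sketch with placeholder file names. Blur has no dedicated argument in this call, so it was presumably applied when the dataset was exported rather than at train time:

```python
# Sketch: the listed augmentations as Ultralytics train arguments.
# "yolo26s.pt" and "drift_cars.yaml" are placeholder names.
from ultralytics import YOLO

model = YOLO("yolo26s.pt")
model.train(
    data="drift_cars.yaml",
    degrees=8,       # rotation +/- 8 deg
    hsv_s=0.15,      # saturation +/- 15%
    hsv_v=0.10,      # brightness +/- 10%
    hsv_h=0.01,      # hue (color tint) shift
    mosaic=0.2,      # mosaic augmentation probability
    scale=0.15,      # scale +/- 15%
    translate=0.05,  # translation +/- 5%
)
```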
---
## ⚙️ Training Procedure
- **Framework:** Ultralytics YOLO
- **Models:**
- Cars: YOLO26s
- Zones: YOLO26s-seg
### 💻 Hardware
- NVIDIA A100 (Google Colab)
### ⏱ Training Time
- Cars: 80 epochs (~42 min)
- Zones: 140 epochs (~1h 14min)
### ⚙️ Settings
- Batch size: 16
- Image size: 1024
- Workers: 8
- Classification loss weight (`cls`): 2.5 (object detection model only)
- No early stopping
- Default preprocessing
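The settings above translate into two Ultralytics training runs along these lines (a sketch; dataset config names are placeholders). In Ultralytics, `patience=0` disables early stopping:

```python
# Sketch of the two training invocations with the listed settings.
# Dataset YAML names are hypothetical placeholders.
from ultralytics import YOLO

# Car detection model (raised classification-loss weight, cls=2.5)
car_model = YOLO("yolo26s.pt")
car_model.train(
    data="drift_cars.yaml",
    epochs=80,
    batch=16,
    imgsz=1024,
    workers=8,
    cls=2.5,
    patience=0,  # no early stopping
)

# Zone segmentation model (default loss weights)
zone_model = YOLO("yolo26s-seg.pt")
zone_model.train(
    data="drift_zones.yaml",
    epochs=140,
    batch=16,
    imgsz=1024,
    workers=8,
    patience=0,
)
```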
---
## 📈 Evaluation Results
### 🚗 Car Model
| Metric | Value |
|----------|-------|
| Precision | 0.9904 |
| Recall | 0.9792 |
| mAP@50 | 0.9882 |
| mAP@50-95 | 0.8937 |
### 🟣 Zone Model
| Metric | Value |
|----------|-------|
| Precision | 0.9919 |
| Recall | 0.9952 |
| mAP@50 | 0.9948 |
| mAP@50-95 | 0.7064 |
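Metrics like those in the tables above can be reproduced with the Ultralytics validation API; the weights and dataset paths below are placeholders for the trained checkpoints:

```python
# Sketch: re-running validation to obtain precision/recall/mAP.
# "drift_cars.pt" and "drift_cars.yaml" are placeholder names.
from ultralytics import YOLO

metrics = YOLO("drift_cars.pt").val(data="drift_cars.yaml")
print(metrics.box.mp)     # mean precision
print(metrics.box.mr)     # mean recall
print(metrics.box.map50)  # mAP@50
print(metrics.box.map)    # mAP@50-95
```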
---
## 📉 Key Visualizations
**Car Results**
**Zone Results**
---
## 🧠 Performance Analysis
### 🚗 Cars
**Strengths:**
- Very high precision and recall
- Reliable detection and classification
- Strong tracking foundation
**Limitations:**
- Smoke occlusion affects detection
- Close tandem overlap can cause confusion
- Limited generalization beyond training conditions
---
### 🟣 Zones
- High detection accuracy (mAP@50)
- Lower boundary precision (mAP@50-95)
**Implication:**
- Good at identifying zones
- Less accurate for exact boundaries → impacts timing precision
**Note:** Since zones are static, polygon-based methods may be more reliable than segmentation.
---
## ⚠️ Limitations and Biases
### 🚨 Failure Cases
- Heavy smoke → missed or unstable detections
- Close tandem → overlap confusion
- Camera motion → inconsistent zone alignment
- Edge-of-frame → partial detections
---
### 📉 Weak Areas
- Zone boundary precision
- Leader vs. chaser ambiguity in tight proximity
---
### 📊 Data Bias
- Single track (Evergreen Speedway)
- Single event and lighting condition
- Fixed drone perspective
---
### 🌦 Environmental Limits
Performance may degrade with:
- Smoke, blur, or occlusion
- Lighting changes
- Drone altitude variation
- Camera movement
---
### 🚫 Not Suitable For
- Official judging systems
- General vehicle detection
- Different tracks without recalibration
- Other motorsports without adaptation
---
### 📏 Dataset Limitations
- Underrepresented zone classes
- Limited diversity (track, cars, conditions)
- Few edge-case scenarios (spins, collisions)
---
## 🏁 Summary
This model performs strongly within a controlled environment but is highly specialized. It should be viewed as a **proof-of-concept system** for drift analytics rather than a fully generalized or production-ready solution.