---
viewer: false
tags: [uv-script, object-detection]
---
# Object Detection Dataset Scripts
Five scripts to convert, validate, inspect, diff, and sample object detection datasets on the Hub. All six supported bbox formats work out of the box, with no setup required.
This repository is inspired by [panlabel](https://github.com/strickvl/panlabel).
## Quick Start
Convert bounding box formats without cloning anything:
```bash
# Convert COCO-style bboxes to YOLO normalized format
uv run convert-hf-dataset.py merve/coco-dataset merve/coco-yolo \
--from coco_xywh --to yolo --max-samples 100
```
That's it! The script will:
- Load the dataset from the Hub
- Convert all bounding boxes in place
- Push the result to a new dataset repo

View the result at `https://huggingface.co/datasets/merve/coco-yolo`.
## Scripts
| Script | Description |
|--------|-------------|
| `convert-hf-dataset.py` | Convert between 6 bbox formats and push to Hub |
| `validate-hf-dataset.py` | Check annotations for errors (invalid bboxes, duplicates, bounds) |
| `stats-hf-dataset.py` | Compute statistics (counts, label histogram, area, co-occurrence) |
| `diff-hf-datasets.py` | Compare two datasets semantically (IoU-based annotation matching) |
| `sample-hf-dataset.py` | Create subsets (random or stratified) and push to Hub |
## Supported Bbox Formats
All scripts support these 6 bounding box formats, matching the [panlabel](https://github.com/strickvl/panlabel) Rust CLI:
| Format | Encoding | Coordinate Space |
|--------|----------|------------------|
| `coco_xywh` | `[x, y, width, height]` | Pixels |
| `xyxy` | `[xmin, ymin, xmax, ymax]` | Pixels |
| `voc` | `[xmin, ymin, xmax, ymax]` | Pixels (alias for `xyxy`) |
| `yolo` | `[center_x, center_y, width, height]` | Normalized 0–1 |
| `tfod` | `[xmin, ymin, xmax, ymax]` | Normalized 0–1 |
| `label_studio` | `[x, y, width, height]` | Percentage 0–100 |
Conversions go through XYXY pixel-space as the intermediate representation, so any format can be converted to any other format.
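The round trip described above can be sketched in a few lines. This is a minimal illustration of the idea, not the script's actual API; the function names are made up for the example:

```python
# Every format converts to pixel XYXY first, then from XYXY to the target.
def coco_xywh_to_xyxy(box):
    x, y, w, h = box
    return [x, y, x + w, y + h]

def xyxy_to_yolo(box, img_w, img_h):
    # YOLO stores a normalized center point plus normalized width/height.
    xmin, ymin, xmax, ymax = box
    cx = (xmin + xmax) / 2 / img_w
    cy = (ymin + ymax) / 2 / img_h
    return [cx, cy, (xmax - xmin) / img_w, (ymax - ymin) / img_h]

# COCO [x, y, w, h] -> XYXY -> YOLO, for a 640x480 image
box = coco_xywh_to_xyxy([160, 120, 320, 240])  # [160, 120, 480, 360]
print(xyxy_to_yolo(box, 640, 480))             # [0.5, 0.5, 0.5, 0.5]
```

Because XYXY pixel space is the hub, adding a new format only requires the two functions to and from XYXY rather than a converter for every pair.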
## Common Options
All scripts accept flexible column mapping. Datasets can store annotations as flat columns or nested under an `objects` dict — both layouts are handled automatically.
| Option | Description |
|--------|-------------|
| `--bbox-column` | Column containing bboxes (default: `bbox`) |
| `--category-column` | Column containing category labels (default: `category`) |
| `--width-column` | Column for image width (default: `width`) |
| `--height-column` | Column for image height (default: `height`) |
| `--split` | Dataset split (default: `train`) |
| `--max-samples` | Limit number of samples (useful for testing) |
| `--hf-token` | HF API token (or set `HF_TOKEN` env var) |
| `--private` | Make output dataset private |
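The flat-versus-nested column handling mentioned above boils down to one lookup. A hypothetical sketch (`get_annotations` is an illustrative name, not the script's actual code):

```python
def get_annotations(example, bbox_col="bbox", cat_col="category"):
    # Prefer top-level columns; fall back to an `objects` dict if present.
    src = example if bbox_col in example else example.get("objects", {})
    return src.get(bbox_col, []), src.get(cat_col, [])

flat = {"bbox": [[0, 0, 10, 10]], "category": ["cat"]}
nested = {"objects": {"bbox": [[0, 0, 10, 10]], "category": ["cat"]}}
print(get_annotations(flat) == get_annotations(nested))  # True
```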
Every script accepts `--help`, which lists all available options:
```bash
uv run convert-hf-dataset.py --help
```
## Convert (`convert-hf-dataset.py`)
Convert bounding boxes between any of the 6 supported formats:
```bash
# COCO -> VOC (pixel xyxy)
uv run convert-hf-dataset.py merve/license-plates merve/license-plates-voc \
--from coco_xywh --to voc
# COCO -> YOLO
uv run convert-hf-dataset.py merve/license-plates merve/license-plates-yolo \
--from coco_xywh --to yolo
# TFOD (normalized xyxy) -> COCO
uv run convert-hf-dataset.py merve/license-plates-tfod merve/license-plates-coco \
--from tfod --to coco_xywh
# Label Studio (percentage xywh) -> XYXY
uv run convert-hf-dataset.py merve/ls-dataset merve/ls-xyxy \
--from label_studio --to xyxy
# Test on 10 samples first
uv run convert-hf-dataset.py merve/dataset merve/converted \
--from xyxy --to yolo --max-samples 10
# Shuffle before converting a subset
uv run convert-hf-dataset.py merve/dataset merve/converted \
--from coco_xywh --to tfod --max-samples 500 --shuffle
```
| Option | Description |
|--------|-------------|
| `--from` | Source bbox format (required) |
| `--to` | Target bbox format (required) |
| `--batch-size` | Batch size for map (default: 1000) |
| `--create-pr` | Push as PR instead of direct commit |
| `--shuffle` | Shuffle dataset before processing |
| `--seed` | Random seed for shuffling (default: 42) |
## Validate (`validate-hf-dataset.py`)
Check annotations for common issues:
```bash
# Basic validation
uv run validate-hf-dataset.py merve/coco-dataset
# Validate YOLO-format dataset
uv run validate-hf-dataset.py merve/yolo-dataset --bbox-format yolo
# Validate TFOD-format dataset
uv run validate-hf-dataset.py merve/tfod-dataset --bbox-format tfod
# Strict mode (warnings become errors)
uv run validate-hf-dataset.py merve/dataset --strict
# JSON report
uv run validate-hf-dataset.py merve/dataset --report json
# Stream large datasets without full download
uv run validate-hf-dataset.py merve/huge-dataset --streaming --max-samples 5000
# Push validation report to Hub
uv run validate-hf-dataset.py merve/dataset --output-dataset merve/validation-report
```
**Issue Codes:**
| Code | Level | Description |
|------|-------|-------------|
| E001 | Error | Bbox/category count mismatch |
| E002 | Error | Invalid bbox (missing values) |
| E003 | Error | Non-finite coordinates (NaN/Inf) |
| E004 | Error | xmin > xmax |
| E005 | Error | ymin > ymax |
| W001 | Warning | No annotations in example |
| W002 | Warning | Zero or negative area |
| W003 | Warning | Bbox extends before the image origin (negative coordinates) |
| W004 | Warning | Bbox beyond image bounds |
| W005 | Warning | Empty category label |
| W006 | Warning | Duplicate file name |
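The geometric checks behind these codes are straightforward. A rough sketch assuming an XYXY pixel box; the codes mirror the table above, but the function itself is illustrative, not the script's internals:

```python
import math

def check_bbox(box, img_w, img_h):
    """Return issue codes for one XYXY pixel bbox (illustrative sketch)."""
    issues = []
    # Non-finite coordinates make every other check meaningless.
    if any(not math.isfinite(v) for v in box):
        return ["E003"]
    xmin, ymin, xmax, ymax = box
    if xmin > xmax:
        issues.append("E004")
    if ymin > ymax:
        issues.append("E005")
    if (xmax - xmin) * (ymax - ymin) <= 0:
        issues.append("W002")  # zero or negative area
    if xmin < 0 or ymin < 0:
        issues.append("W003")  # before image origin
    if xmax > img_w or ymax > img_h:
        issues.append("W004")  # beyond image bounds
    return issues

print(check_bbox([-5, 10, 700, 400], 640, 480))  # ['W003', 'W004']
```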
## Stats (`stats-hf-dataset.py`)
Compute rich statistics for a dataset:
```bash
# Basic stats
uv run stats-hf-dataset.py merve/coco-dataset
# Top 20 label histogram, JSON output
uv run stats-hf-dataset.py merve/dataset --top 20 --report json
# Stats for TFOD-format dataset
uv run stats-hf-dataset.py merve/dataset --bbox-format tfod
# Stream large datasets
uv run stats-hf-dataset.py merve/huge-dataset --streaming --max-samples 10000
# Push stats report to Hub
uv run stats-hf-dataset.py merve/dataset --output-dataset merve/stats-report
```
Reports include: summary counts, label distribution, annotation density, bbox area/aspect ratio distributions, per-category area stats, category co-occurrence pairs, and image resolution distribution.
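Of these, category co-occurrence is the least obvious metric: it counts, per image, which category pairs appear together. A minimal sketch of that counting (the script's internals may differ):

```python
from collections import Counter
from itertools import combinations

# Categories present in each image (toy data for illustration).
images = [["cat", "dog"], ["cat", "dog", "bird"], ["cat"]]

pairs = Counter()
for cats in images:
    # Sort and deduplicate so ("cat", "dog") and ("dog", "cat") count as one.
    pairs.update(combinations(sorted(set(cats)), 2))

print(pairs.most_common(2))  # [(('cat', 'dog'), 2), (('bird', 'cat'), 1)]
```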
## Diff (`diff-hf-datasets.py`)
Compare two datasets semantically using IoU-based annotation matching:
```bash
# Basic diff
uv run diff-hf-datasets.py merve/dataset-v1 merve/dataset-v2
# Stricter matching
uv run diff-hf-datasets.py merve/old merve/new --iou-threshold 0.7
# Per-annotation change details
uv run diff-hf-datasets.py merve/old merve/new --detail
# JSON report
uv run diff-hf-datasets.py merve/old merve/new --report json
```
Reports include: shared/unique images, shared/unique categories, matched/added/removed/modified annotations.
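"IoU-based matching" means two annotations are considered the same object when their boxes overlap enough. A sketch of the core computation, assuming XYXY pixel boxes and a greedy pairing strategy (the script's exact matching strategy may differ):

```python
def iou(a, b):
    """Intersection-over-union of two XYXY pixel boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match(old, new, threshold=0.5):
    """Greedily pair each old box with its best-overlapping unused new box."""
    matched, used = [], set()
    for i, a in enumerate(old):
        best, best_iou = None, threshold
        for j, b in enumerate(new):
            if j not in used and iou(a, b) >= best_iou:
                best, best_iou = j, iou(a, b)
        if best is not None:
            used.add(best)
            matched.append((i, best))
    return matched
```

Unmatched boxes on the old side are reported as removed, unmatched boxes on the new side as added, and matched pairs with a different category as modified.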
## Sample (`sample-hf-dataset.py`)
Create random or stratified subsets:
```bash
# Random 500 samples
uv run sample-hf-dataset.py merve/dataset merve/subset -n 500
# 10% fraction
uv run sample-hf-dataset.py merve/dataset merve/subset --fraction 0.1
# Stratified sampling (preserves class distribution)
uv run sample-hf-dataset.py merve/dataset merve/subset \
-n 200 --strategy stratified
# Filter by categories
uv run sample-hf-dataset.py merve/dataset merve/subset \
-n 100 --categories "cat,dog,bird"
# Reproducible sampling
uv run sample-hf-dataset.py merve/dataset merve/subset \
-n 500 --seed 42
```
| Option | Description |
|--------|-------------|
| `-n` | Number of samples to select |
| `--fraction` | Fraction of dataset (0.0–1.0) |
| `--strategy` | `random` (default) or `stratified` |
| `--categories` | Comma-separated list of categories to filter by |
| `--category-mode` | `images` (default) or `annotations` |
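The idea behind `--strategy stratified` is to keep each class's share of the subset proportional to its share of the full dataset. A hypothetical sketch of that logic (the script's actual implementation may differ):

```python
import random
from collections import defaultdict

def stratified_sample(examples, labels, n, seed=42):
    """Proportional per-class sampling (illustrative sketch)."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for ex, lab in zip(examples, labels):
        by_label[lab].append(ex)
    picked = []
    for group in by_label.values():
        # Each class gets its proportional share, but at least one example.
        k = max(1, round(n * len(group) / len(examples)))
        picked.extend(rng.sample(group, min(k, len(group))))
    return picked[:n]
```

With an 80/20 class split and `n=500`, this draws roughly 400 examples from the majority class and 100 from the minority class, instead of whatever a uniform random draw happens to produce.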
## Run Locally
```bash
# Clone and run
git clone https://huggingface.co/datasets/uv-scripts/panlabel
cd panlabel
uv run convert-hf-dataset.py input-dataset output-dataset --from coco_xywh --to yolo
# Or run directly from URL
uv run https://huggingface.co/datasets/uv-scripts/panlabel/raw/main/convert-hf-dataset.py \
input-dataset output-dataset --from coco_xywh --to yolo
```
Works with any Hugging Face dataset containing object detection annotations — COCO, YOLO, VOC, TFOD, or Label Studio format.