
Road Damage Detection — YOLOv11 (US Roads)

Model Description

This model is a YOLOv11 object detection model that detects and classifies road damage in images captured by vehicle-mounted cameras. Given an image, the model outputs bounding boxes and class labels indicating the location and type of each instance of damage.

Training approach: This model was fine-tuned from pretrained weights on a subset of the Road Damage Detector dataset, using only images from the USA.

Intended use cases:

  • Automated roadway monitoring using cameras on vehicles
  • Prioritizing road maintenance by flagging damage
  • Supporting infrastructure management by scaling up monitoring coverage
  • Further research in road damage detection

Training Data

Dataset Source

Maeda, H., Sekimoto, Y., Seto, T., Kashiyama, T., & Omata, H. (2018). Road Damage Detection and Classification Using Deep Neural Networks with Smartphone Images. Computer-Aided Civil and Infrastructure Engineering, 33(12), 1127–1141.

Dataset available at: https://datasetninja.com/road-damage-detector

Overview

The full dataset contains over 47,000 images from multiple countries across several continents, all collected via vehicle-mounted cameras. For this model, training used only the subset of data collected in the United States.

| Split | Images |
|---|---|
| Initial US subset | 4,805 |
| After quality filtering | 3,844 |
| Train (70%) | ~2,691 |
| Validation (15%) | ~577 |
| Test (15%) | ~577 |
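As a sanity check, the split counts above can be reproduced with simple arithmetic (assuming round-to-nearest splits, with the remainder assigned to the test set):

```python
# Reproduce the approximate split sizes from the filtered US subset.
total = 4805 - 961          # initial US subset minus quality-filtered images
train = round(0.70 * total)
val = round(0.15 * total)
test = total - train - val  # remainder (~15%, i.e. ~577 images)
print(total, train, val, test)
```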

Classes

The original dataset contained seven classes. This model was trained on the four most common:

| Class Code | Damage Type | Description |
|---|---|---|
| D00 | Longitudinal Crack | Cracks running parallel to the direction of travel |
| D10 | Transverse Crack | Cracks running perpendicular to the direction of travel |
| D20 | Alligator Crack | Interconnected cracking forming a mesh or grid pattern |
| D40 | Pothole | Bowl-shaped depressions in the road surface |

Class Distribution (Test Set)

Based on confusion matrix totals:

| Class | Approx. Test Instances |
|---|---|
| D00 (Longitudinal Crack) | ~1,605 |
| D10 (Transverse Crack) | ~682 |
| D20 (Alligator Crack) | ~163 |
| D40 (Pothole) | ~37 |

Note: There is significant class imbalance: longitudinal cracks dominate the dataset while potholes are heavily underrepresented.
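A quick calculation makes the scale of that imbalance concrete:

```python
# Approximate test-set instance counts from the confusion matrix totals.
counts = {"D00": 1605, "D10": 682, "D20": 163, "D40": 37}

# Ratio of the most common class to the rarest.
ratio = counts["D00"] / counts["D40"]
print(round(ratio, 1))  # longitudinal cracks outnumber potholes ~43:1
```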

Data Collection Methodology

Images were captured by cameras mounted on vehicles driving on ordinary roadways. The dataset covers many environments, from urban areas to rural roads, across varied lighting and weather conditions.

Annotation Process

This dataset came with preexisting annotations and bounding boxes, created by the original researchers in Tokyo. These annotations were verified to be in YOLO format. This process included:

  1. Class Consolidation: The original seven classes were consolidated down to four by dropping the three rarest classes.
  2. Format Verification: All annotations were verified to be in YOLO format.
  3. Class Label Verification: Each label's class ID was checked to ensure no errors were introduced when the dataset was exported.
  4. Quality Filtering: 961 images were removed from the initial US subset because they were of poor quality, had missing annotations, or were corrupted, leaving 3,844 usable images.
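Step 2 above (format verification) can be sketched as a small validator; the function name and sample lines below are illustrative, not the actual verification script:

```python
def validate_yolo_label(line, num_classes=4):
    """Check one YOLO annotation line: 'class_id cx cy w h', coords normalized to [0, 1]."""
    parts = line.split()
    if len(parts) != 5:
        return False
    cls, *coords = parts
    # Class ID must be a non-negative integer within the consolidated class set.
    if not cls.isdigit() or int(cls) >= num_classes:
        return False
    try:
        vals = [float(v) for v in coords]
    except ValueError:
        return False
    return all(0.0 <= v <= 1.0 for v in vals)

print(validate_yolo_label("2 0.51 0.48 0.20 0.10"))  # True
print(validate_yolo_label("7 0.51 0.48 0.20 0.10"))  # False: class id out of range
```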

Data Augmentation

The following augmentations were applied:

  • Random scaling
  • Horizontal and vertical flipping
  • Color jitter (brightness, contrast, saturation variations)
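Geometric augmentations like the flips above must also transform the YOLO-format boxes; a minimal sketch of the box transforms (the image flipping itself is omitted), in normalized coordinates:

```python
def hflip_bbox(cx, cy, w, h):
    # Horizontal flip in normalized YOLO coords: only the x-center mirrors.
    return 1.0 - cx, cy, w, h

def vflip_bbox(cx, cy, w, h):
    # Vertical flip: only the y-center mirrors.
    return cx, 1.0 - cy, w, h

print(hflip_bbox(0.25, 0.5, 0.2, 0.1))  # (0.75, 0.5, 0.2, 0.1)
```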

Known Biases and Limitations in Training Data

  • Geographic Bias: All images are from the USA; performance may degrade on non-US roadways.
  • Class Imbalance: Potholes are severely underrepresented, and the model's limited exposure is reflected in its pothole recall.
  • Camera Perspective Bias: All images come from vehicle-mounted cameras; drone footage and ground-level imagery are not part of the training distribution.
  • Temporal Bias: The dataset does not control for seasonal variation, and damage appearance can change with seasonal conditions.

Training Procedure

| Parameter | Value |
|---|---|
| Framework | Ultralytics YOLOv11 |
| Base weights | YOLOv11 pretrained (transfer learning) |
| Hardware | NVIDIA A100 GPU, T4 GPU |
| Image size | 640 × 640 px |
| Batch size | 64 |
| Epochs | 100 (with early stopping) |
| Early stopping patience | 50 epochs |
| Train / Val / Test split | 70% / 15% / 15% |
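A minimal sketch of how this configuration might map onto the Ultralytics Python API; the file names `yolo11n.pt` and `road_damage.yaml` are placeholders, not the actual artifacts used:

```python
from ultralytics import YOLO

# Load pretrained base weights for transfer learning (placeholder file name).
model = YOLO("yolo11n.pt")

# Train with the hyperparameters listed in the table above.
model.train(
    data="road_damage.yaml",  # hypothetical dataset config for the 4-class US subset
    imgsz=640,                # 640 × 640 input resolution
    batch=64,
    epochs=100,
    patience=50,              # early-stopping patience
)
```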

Preprocessing

  • Images resized to 640 × 640 pixels
  • Pixel values normalized to [0, 1]
  • Annotations converted to YOLO normalized bounding box format
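The last preprocessing step can be sketched as follows; `to_yolo` is an illustrative helper, not the actual conversion script:

```python
def to_yolo(xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert a pixel-coordinate box to YOLO's normalized (cx, cy, w, h)."""
    cx = (xmin + xmax) / 2 / img_w  # box center, normalized by image width
    cy = (ymin + ymax) / 2 / img_h  # box center, normalized by image height
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return cx, cy, w, h

print(to_yolo(160, 320, 480, 480, 640, 640))  # (0.5, 0.625, 0.5, 0.25)
```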

Evaluation Results

Key Metrics (Test Set)

| Metric | Value |
|---|---|
| Precision | 69.5% |
| Recall | 52.0% |
| mAP50 | 55.7% |
| Overall F1 | 53.0% |

The best overall F1 was 0.53 at a confidence threshold of 0.25, as shown in the F1-confidence curve. This indicates the model tends to make low-confidence predictions, especially on subtler damage.

Per-Class Performance

| Class | Precision | Recall | mAP50 | F1 | Test Instances |
|---|---|---|---|---|---|
| D00 – Longitudinal Crack | 70.6% | 74.0% | 71.2% | 70% | ~1,605 |
| D10 – Transverse Crack | 66.9% | 63.1% | 65.0% | 61% | ~682 |
| D20 – Alligator Crack | 65.1% | 52.8% | 67.2% | 62% | ~163 |
| D40 – Pothole | 75.4% | 18.2% | 18.5% | 28% | ~37 |

Note: Test instance counts are approximated from confusion matrix totals.

Visual Examples of Each Class

D00 — Longitudinal Crack Linear cracks running parallel to the road's direction of travel.

D10 — Transverse Crack Cracks running perpendicular to the direction of travel.

D20 — Alligator Crack Interconnected cracks forming patterns across the roadway.

D40 — Pothole Bowl-shaped depressions or holes in the road surface.

Key Visualizations

F1-Confidence Curve

[Figure: F1 vs. confidence curve]

Precision-Recall Curve

[Figure: precision–recall curve]

Confusion Matrices

[Figures: confusion matrix and normalized confusion matrix]

Performance Analysis

The model detects and localizes damage with reasonable performance on cracks, but it underperforms on potholes.

  • What the model does well: Longitudinal crack detection is strongest, with an F1 of 0.70 and a mAP50 of 0.712, most likely due to strong representation in the data and the consistent appearance of this damage type.
  • The precision/recall tradeoff: Overall precision (69.5%) is notably higher than recall (52.0%): when the model makes a detection it is usually correct, but it often misses damage entirely. In most real deployments, recall matters more than precision, which makes this tradeoff unfavorable.
  • Why potholes fail: Potholes had a recall of only 18.2% despite a precision of 75.4%. The confusion matrix shows that 84% of pothole instances were classified as background, likely due to both class imbalance and high variability in pothole appearance.
  • Confidence calibration: The optimal confidence threshold is 0.25. This is quite low and indicates the model is generally not confident in its predictions.
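The threshold sweep behind an F1-confidence curve can be sketched in a few lines; the predictions below are toy data, not model output:

```python
def best_f1_threshold(preds, n_gt, thresholds=(0.25, 0.5, 0.75)):
    """Sweep confidence thresholds and return (best_threshold, best_f1).

    preds: (confidence, is_true_positive) pairs; n_gt: total ground-truth boxes.
    """
    best = (None, 0.0)
    for thr in thresholds:
        kept = [tp for conf, tp in preds if conf >= thr]
        tp = sum(kept)
        fp = len(kept) - tp
        precision = tp / (tp + fp) if kept else 0.0
        recall = tp / n_gt
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        if f1 > best[1]:
            best = (thr, f1)
    return best

# Toy example: low-confidence but mostly-correct detections favor a low threshold.
preds = [(0.3, True), (0.35, True), (0.6, True), (0.4, False), (0.8, False)]
print(best_f1_threshold(preds, n_gt=4))
```

With these toy predictions, the lowest threshold wins, mirroring how a poorly calibrated model ends up with a low optimal threshold.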

Limitations and Biases

Known Failure Cases

  • Potholes in shadow: The model already struggles with potholes, and performance worsens further when shadows from buildings or trees fall across them.
  • Small or early-stage cracks: Smaller cracks are frequently missed.
  • Densely overlapping damage: Detection quality degrades when multiple damage instances overlap in heavily deteriorated sections of road.

Poor Performing Classes

D40 (Pothole) is by far the weakest class, with an F1 of 0.28 and a mAP50 of 0.185, driven by underrepresentation and inconsistent appearance.

D20 (Alligator Crack) has the second-worst recall at 52.8%, most likely caused by its irregular, inconsistent patterns.

Data Biases

  • Geographic: Data is from the United States only; roads elsewhere are constructed differently and damage may look different.
  • Camera type: All images were collected by vehicle-mounted cameras; there is no ground-level or drone footage.
  • Environmental: Lighting conditions and seasonal variation are not controlled.
  • Class imbalance: Longitudinal cracks dominate the dataset, while potholes are severely underrepresented.

Environmental and Contextual Limitations

  • Low light / nighttime: All images were taken during daytime. Darker conditions would likely make performance worse.
  • Wet or reflective surfaces: Reflections can alter road textures and are not consistently represented in training, making such images harder to classify.
  • Occlusion: Painted markings on roadways and other vehicles could occlude damaged areas.
  • Scale variation: The model operates at 640 × 640 resolution, so very small cracks may fall below the effective detection scale.

Inappropriate Use Cases

  • Safety-critical autonomous vehicle decisions: This model is not reliable enough for safety analysis and detection in real time.
  • Legal or insurance damage assessment: The outputs should not be used for evidence in legal cases without human verification.
  • Non-US road infrastructure: Foreign country performance is unknown.
  • Severity classification: This model is only for classifying damage type. It does not assess severity, depth, or area.

Ethical Considerations

Road damage is not uniform, nor is it distributed evenly across cities: there tends to be more of it in lower-income neighborhoods that receive less investment. Models trained on particular infrastructure could underperform in exactly the areas that need monitoring most. In the future, performance should be audited across different neighborhoods and income levels.

Sample Size Limitations

Potholes are severely underrepresented: roughly 37 test instances is not enough to draw firm conclusions about the model's pothole performance. These metrics should be taken with a grain of salt, as variance across resampled test sets would be high. With at least 500 instances of each class, the model would likely perform better on every class, and the metrics would be more reliable.
