
Cloud Removal Visualization & Evaluation

Benchmark evaluation workspace for the DiffCR paper (Diffusion-Based Cloud Removal for Sentinel-2 Multi-Temporal Imagery).

Two test datasets are covered:

Dataset         Samples   Methods
Sen2_MTC_Old        313        12
Sen2_MTC_New        687        12

Directory Layout

visualization/
├── paper-report.png          ← reference metrics table from the paper
│
├── data/
│   ├── Sen2_MTC_New/
│   │   ├── GT/               ← 687 cloud-free ground-truth images  ({id}.png)
│   │   └── inputs/           ← 687 × 3 cloudy input images
│   │                            ({id}_A1.png  {id}_A2.png  {id}_A3.png)
│   └── Sen2_MTC_Old/
│       ├── GT/               ← 313 ground-truth images
│       └── inputs/           ← 313 × 3 cloudy inputs
│
├── results/
│   ├── Sen2_MTC_New/
│   │   ├── ae/               ← prediction images for each method ({id}.png)
│   │   ├── crtsnet/
│   │   ├── ctgan/
│   │   ├── ddpmcr/
│   │   ├── diffcr/           ← DiffCR [Ours]
│   │   ├── dsen2cr/
│   │   ├── mcgan/
│   │   ├── pix2pix/
│   │   ├── pmaa/
│   │   ├── stgan/
│   │   ├── stnet/
│   │   └── uncrtaints/
│   └── Sen2_MTC_Old/
│       └── (same 12 methods)
│
└── eval/
    ├── metrics.py            ← PSNR / SSIM / FID / LPIPS evaluation
    ├── plot.py               ← comparison figure generation
    └── requirements.txt      ← Python dependencies

Quick Start

1. Install dependencies

pip install -r eval/requirements.txt

CUDA note – SSIM uses the 3-D Gaussian kernel from the paper, which requires a CUDA-enabled PyTorch installation to reproduce the exact paper values. PSNR, FID and LPIPS are fully reproducible on CPU. Install the correct torch wheel for your GPU from https://pytorch.org.
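
Before running the full evaluation, it can be worth confirming that the installed torch wheel actually sees a GPU. This is a minimal sanity check using only the standard PyTorch API:

```python
import torch

# True on a CUDA-enabled build; SSIM values matching the paper need this.
available = torch.cuda.is_available()
print(f"torch {torch.__version__}, CUDA available: {available}")
```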


2. Run evaluation

# Evaluate all 12 methods on both datasets (prints a full summary table):
python eval/metrics.py

# One specific method:
python eval/metrics.py --method diffcr

# One specific dataset:
python eval/metrics.py --dataset Sen2_MTC_New

# One method + one dataset:
python eval/metrics.py --dataset Sen2_MTC_Old --method diffcr

# Fast check (skip FID and LPIPS):
python eval/metrics.py --no-fid --no-lpips

# Arbitrary directory pair:
python eval/metrics.py --gt /path/to/GT --pred /path/to/Out
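
For reference, the PSNR that eval/metrics.py reports follows the standard definition, which can be sketched in a few lines of NumPy. This is a hedged sketch of the textbook formula, not the exact implementation in metrics.py:

```python
import numpy as np

def psnr(gt: np.ndarray, pred: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two same-shape uint8 images."""
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy example: a flat grey image vs. a copy with one perturbed pixel.
gt = np.full((256, 256, 3), 128, dtype=np.uint8)
pred = gt.copy()
pred[0, 0, 0] = 138
score = psnr(gt, pred)
```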

Expected output (excerpt, requires CUDA for exact SSIM):

Method          | Sen2_MTC Old                        | Sen2_MTC New
                |    PSNR   SSIM       FID  LPIPS |    PSNR   SSIM       FID  LPIPS
--------------------------------------------------------------------------------
...
diffcr          |  29.112  0.886    89.845   0.258 |  19.150  0.671    83.162   0.291

3. Generate comparison figures

# Generate the exact figures used in the paper:
python eval/plot.py --paper-samples

# Paper figures for one dataset:
python eval/plot.py --paper-samples --dataset Sen2_MTC_New
python eval/plot.py --paper-samples --dataset Sen2_MTC_Old

# Any specific sample:
python eval/plot.py --dataset Sen2_MTC_New --id T12TUR_R027_55

# List all available sample IDs:
python eval/plot.py --dataset Sen2_MTC_New --list

# Generate figures for every sample:
python eval/plot.py --dataset Sen2_MTC_New --all

Figures are saved as PDF to eval/plots/ by default.
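
The comparison figures are side-by-side panel grids. A minimal sketch of that layout with matplotlib is shown below, using random arrays as hypothetical stand-ins for the loaded PNGs (the panel titles and output filename here are illustrative, not the exact ones plot.py produces):

```python
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panels: three cloudy inputs, one prediction, the ground truth.
panels = {
    "Input A1": rng.random((64, 64, 3)),
    "Input A2": rng.random((64, 64, 3)),
    "Input A3": rng.random((64, 64, 3)),
    "DiffCR":   rng.random((64, 64, 3)),
    "GT":       rng.random((64, 64, 3)),
}

fig, axes = plt.subplots(1, len(panels), figsize=(3 * len(panels), 3))
for ax, (title, img) in zip(axes, panels.items()):
    ax.imshow(img)
    ax.set_title(title)
    ax.axis("off")
fig.savefig("comparison_demo.pdf", bbox_inches="tight")  # PDF, as plot.py does
```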


Methods

Method          Venue                    Abbrev
MCGAN           CVPRW 2017               mcgan
Pix2Pix         CVPR 2017                pix2pix
AE              ECTI-CON 2018            ae
STNet           TGRS 2020                stnet
DSen2-CR        ISPRS J PHOTOGRAM 2020   dsen2cr
STGAN           WACV 2020                stgan
CTGAN           ICIP 2022                ctgan
CR-TS-Net       TGRS 2022                crtsnet
PMAA            arXiv 2023               pmaa
UnCRtainTS      CVPRW 2023               uncrtaints
DDPM-CR         Remote Sensing 2023      ddpmcr
DiffCR [Ours]   TGRS 2024                diffcr

Paper Results

The reference metrics table from the paper is included as paper-report.png (see Directory Layout above).


Notes

  • All images use the unified naming scheme {id}.png (GT and predictions) and {id}_A{1,2,3}.png (cloudy inputs).
  • results/Sen2_MTC_Old/diffcr/ images are stored in their original coordinate convention; eval/plot.py applies a horizontal flip automatically when rendering the Old-dataset comparison figure so that all panels share a consistent visual orientation.
  • migrate.py in the project root was the one-time script used to produce the current layout from the original raw experiment directories. It is kept for reference but does not need to be re-run.
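
The naming scheme above makes path resolution mechanical. A small helper sketch under the layout described in Directory Layout (the function name and return shape are illustrative, not part of the actual tooling):

```python
from pathlib import Path

def sample_paths(root: str, dataset: str, sample_id: str,
                 method: str = "diffcr") -> dict:
    """Resolve the GT image, the three cloudy inputs, and one method's
    prediction for a given sample id, following the unified naming scheme."""
    base = Path(root) / "data" / dataset
    return {
        "gt": base / "GT" / f"{sample_id}.png",
        "inputs": [base / "inputs" / f"{sample_id}_A{k}.png" for k in (1, 2, 3)],
        "pred": Path(root) / "results" / dataset / method / f"{sample_id}.png",
    }

paths = sample_paths("visualization", "Sen2_MTC_New", "T12TUR_R027_55")
```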