XavierJiezou committed on
Commit dfe630b · verified · 1 parent: 8c9f301

Upload visualization folder

README.md ADDED
@@ -0,0 +1,163 @@
# Cloud Removal Visualization & Evaluation

Benchmark evaluation workspace for the **DiffCR** paper
(*Diffusion-Based Cloud Removal for Sentinel-2 Multi-Temporal Imagery*).

Two test datasets are covered:

| Dataset | Samples | Methods |
|---|---|---|
| Sen2\_MTC\_Old | 313 | 12 |
| Sen2\_MTC\_New | 687 | 12 |

---

## Directory Layout

```
visualization/
├── paper-report.png          ← reference metrics table from the paper
│
├── data/
│   ├── Sen2_MTC_New/
│   │   ├── GT/               ← 687 cloud-free ground-truth images ({id}.png)
│   │   └── inputs/           ← 687 × 3 cloudy input images
│   │                           ({id}_A1.png {id}_A2.png {id}_A3.png)
│   └── Sen2_MTC_Old/
│       ├── GT/               ← 313 ground-truth images
│       └── inputs/           ← 313 × 3 cloudy inputs
│
├── results/
│   ├── Sen2_MTC_New/
│   │   ├── ae/               ← prediction images for each method ({id}.png)
│   │   ├── crtsnet/
│   │   ├── ctgan/
│   │   ├── ddpmcr/
│   │   ├── diffcr/           ← DiffCR [Ours]
│   │   ├── dsen2cr/
│   │   ├── mcgan/
│   │   ├── pix2pix/
│   │   ├── pmaa/
│   │   ├── stgan/
│   │   ├── stnet/
│   │   └── uncrtaints/
│   └── Sen2_MTC_Old/
│       └── (same 12 methods)
│
└── eval/
    ├── metrics.py            ← PSNR / SSIM / FID / LPIPS evaluation
    ├── plot.py               ← comparison figure generation
    └── requirements.txt      ← Python dependencies
```
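The evaluation scripts pair GT and prediction images by filename stem. A minimal sketch of that pairing logic (the file lists below are hypothetical examples, not files shipped in the repo):

```python
import os

def common_stems(gt_files: list[str], pred_files: list[str]) -> list[str]:
    """Stems present in both lists - how the evaluation matches GT to predictions."""
    def stem(path: str) -> str:
        return os.path.splitext(os.path.basename(path))[0]
    return sorted({stem(f) for f in gt_files} & {stem(f) for f in pred_files})

# Hypothetical file lists:
gt = ["GT/T12TUR_R027_55.png", "GT/T12TUR_R027_56.png"]
pred = ["results/diffcr/T12TUR_R027_55.png"]
print(common_stems(gt, pred))  # ['T12TUR_R027_55']
```

Only the intersection of stems is evaluated, so missing predictions shrink the sample count rather than crash the run.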

---

## Quick Start

### 1. Install dependencies

```bash
pip install -r eval/requirements.txt
```

> **CUDA note** – SSIM uses the 3-D Gaussian kernel from the paper, which
> requires a CUDA-enabled PyTorch installation to reproduce the exact paper
> values. PSNR, FID and LPIPS are fully reproducible on CPU.
> Install the correct torch wheel for your GPU from https://pytorch.org.

---

### 2. Run evaluation

```bash
# Evaluate all 12 methods on both datasets (prints a full summary table):
python eval/metrics.py

# One specific method:
python eval/metrics.py --method diffcr

# One specific dataset:
python eval/metrics.py --dataset Sen2_MTC_New

# One method + one dataset:
python eval/metrics.py --dataset Sen2_MTC_Old --method diffcr

# Fast check (skip FID and LPIPS):
python eval/metrics.py --no-fid --no-lpips

# Arbitrary directory pair:
python eval/metrics.py --gt /path/to/GT --pred /path/to/Out
```

Expected output (excerpt, requires CUDA for exact SSIM):

```
Method          | Sen2_MTC Old                 | Sen2_MTC New
                | PSNR    SSIM    FID    LPIPS | PSNR    SSIM    FID    LPIPS
--------------------------------------------------------------------------------
...
diffcr          | 29.112  0.886  89.845  0.258 | 19.150  0.671  83.162  0.291
```

---

### 3. Generate comparison figures

```bash
# Generate the exact figures used in the paper:
python eval/plot.py --paper-samples

# Paper figures for one dataset:
python eval/plot.py --paper-samples --dataset Sen2_MTC_New
python eval/plot.py --paper-samples --dataset Sen2_MTC_Old

# Any specific sample:
python eval/plot.py --dataset Sen2_MTC_New --id T12TUR_R027_55

# List all available sample IDs:
python eval/plot.py --dataset Sen2_MTC_New --list

# Generate figures for every sample:
python eval/plot.py --dataset Sen2_MTC_New --all
```

Figures are saved as PDF to `eval/plots/` by default.

---

## Methods

| Method | Venue | Abbrev |
|---|---|---|
| MCGAN | CVPRW 2017 | mcgan |
| Pix2Pix | CVPR 2017 | pix2pix |
| AE | ECTI-CON 2018 | ae |
| STNet | TGRS 2020 | stnet |
| DSen2-CR | ISPRS J PHOTOGRAM 2020 | dsen2cr |
| STGAN | WACV 2020 | stgan |
| CTGAN | ICIP 2022 | ctgan |
| CR-TS-Net | TGRS 2022 | crtsnet |
| PMAA | arXiv 2023 | pmaa |
| UnCRtainTS | CVPRW 2023 | uncrtaints |
| DDPM-CR | Remote Sensing 2023 | ddpmcr |
| **DiffCR [Ours]** | **TGRS 2024** | **diffcr** |

---

## Paper Results

![Paper metrics table](paper-report.png)

---

## Notes

- All images use the unified naming scheme `{id}.png` (GT and predictions)
  and `{id}_A{1,2,3}.png` (cloudy inputs).
- `results/Sen2_MTC_Old/diffcr/` images are stored in their original
  coordinate convention; `eval/plot.py` applies a horizontal flip
  automatically when rendering the Old-dataset comparison figure so that
  all panels share a consistent visual orientation.
- `migrate.py` in the project root was the one-time script used to produce
  the current layout from the original raw experiment directories.
  It is kept for reference but does not need to be re-run.
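The naming scheme in the first note can be expressed as a small helper (a sketch for illustration only; this function is not part of the repo):

```python
def expected_files(sample_id: str) -> dict:
    """Map a sample ID to its GT filename and the three cloudy-input filenames."""
    return {
        "gt": f"{sample_id}.png",
        "inputs": [f"{sample_id}_A{i}.png" for i in (1, 2, 3)],
    }

print(expected_files("T12TUR_R027_55")["inputs"])
# ['T12TUR_R027_55_A1.png', 'T12TUR_R027_55_A2.png', 'T12TUR_R027_55_A3.png']
```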
data/Sen2_MTC_New/GT.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3f9ee0b141b46bae7d711cdfc0d2868616ebc01f366ed88e6423bb8f18677f5c
size 84162925
data/Sen2_MTC_New/inputs.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7a3cf90a863ca6281e1bb0c27425877b68c1397bf791cb80bd821501ddc7a688
size 188730536
data/Sen2_MTC_Old/GT.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a063dc216d3824eec8723365c8f03c2d5f357c733ef54269c181c8247243d855
size 17036847
data/Sen2_MTC_Old/inputs.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6ecbb77390f60f2d6dd708b7e13369ec574bae4efe71337bf5544ea6ec579220
size 69421346
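The four `.zip` entries above are Git LFS pointer files: the actual archives live in LFS storage, and the repo records only the spec version, a sha256 object ID, and the payload size. A tiny parser sketch (the function name is ours, not part of the repo; the pointer text is the real `GT.zip` pointer from above):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its version/oid/size fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:3f9ee0b141b46bae7d711cdfc0d2868616ebc01f366ed88e6423bb8f18677f5c
size 84162925
"""
print(parse_lfs_pointer(pointer)["size"])  # 84162925
```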
eval/metrics.py ADDED
@@ -0,0 +1,572 @@
"""
eval/metrics.py

Unified evaluation script for cloud-removal methods on Sen2_MTC datasets.
Computes PSNR, SSIM, FID and LPIPS – the same implementation used in the paper.

Usage
-----
# Evaluate every method on both datasets (prints a summary table):
python metrics.py

# Evaluate one specific method / dataset:
python metrics.py --dataset Sen2_MTC_New --method diffcr

# Evaluate an arbitrary pair of directories:
python metrics.py --gt /path/to/GT --pred /path/to/Out

Note on reproducibility
-----------------------
PSNR, FID and LPIPS are fully reproducible on CPU.
**SSIM requires a CUDA-enabled PyTorch build** to match the exact paper
values; the 3-D Gaussian kernel runs on the GPU when one is available,
and the floating-point accumulation order differs on CPU, leading to
slightly different SSIM numbers. Install the correct torch wheel for
your GPU from https://pytorch.org before running a full benchmark.

The script expects the following layout (created by migrate.py):

visualization/
├── data/
│   ├── Sen2_MTC_New/GT/            ← ground-truth images ({id}.png)
│   └── Sen2_MTC_Old/GT/
└── results/
    ├── Sen2_MTC_New/{method}/      ← prediction images ({id}.png)
    └── Sen2_MTC_Old/{method}/
"""

from __future__ import annotations

import argparse
import os
import subprocess
import sys
from glob import glob

import cv2
import lpips
import numpy as np
import torch
from tqdm import tqdm

# ---------------------------------------------------------------------------
# Paths
# ---------------------------------------------------------------------------
# eval/ lives one level below the project root
ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

DATASETS: list[str] = ["Sen2_MTC_Old", "Sen2_MTC_New"]

# Order matches the paper table (top → bottom)
METHODS: list[str] = [
    "mcgan",
    "pix2pix",
    "ae",
    "stnet",
    "dsen2cr",
    "stgan",
    "ctgan",
    "crtsnet",
    "pmaa",
    "uncrtaints",
    "ddpmcr",
    "diffcr",
]


# ---------------------------------------------------------------------------
# Image helpers
# ---------------------------------------------------------------------------


def _convert_input_type_range(img: np.ndarray) -> np.ndarray:
    """Convert image to float32 in [0, 1]."""
    img_type = img.dtype
    img = img.astype(np.float32)
    if img_type == np.uint8:
        img /= 255.0
    elif img_type != np.float32:
        raise TypeError(f"Unsupported dtype: {img_type}")
    return img


def _convert_output_type_range(img: np.ndarray, dst_type) -> np.ndarray:
    if dst_type == np.uint8:
        img = img.round()
    else:
        img /= 255.0
    return img.astype(dst_type)


def reorder_image(img: np.ndarray, input_order: str = "HWC") -> np.ndarray:
    if input_order not in ("HWC", "CHW"):
        raise ValueError(f"input_order must be 'HWC' or 'CHW', got '{input_order}'")
    if img.ndim == 2:
        img = img[..., None]
    if input_order == "CHW":
        img = img.transpose(1, 2, 0)
    return img


# ---------------------------------------------------------------------------
# PSNR
# ---------------------------------------------------------------------------


def calculate_psnr(
    img1: np.ndarray,
    img2: np.ndarray,
    crop_border: int,
    input_order: str = "HWC",
) -> float:
    """Peak Signal-to-Noise Ratio.

    Accepts uint8 [0, 255] or float32 [0, 1] images.
    """
    assert img1.shape == img2.shape, f"Shape mismatch: {img1.shape} vs {img2.shape}"
    img1 = reorder_image(img1, input_order).astype(np.float64)
    img2 = reorder_image(img2, input_order).astype(np.float64)

    if crop_border:
        img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...]
        img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...]

    mse = np.mean((img1 - img2) ** 2)
    if mse == 0:
        return float("inf")
    max_val = 1.0 if img1.max() <= 1.0 else 255.0
    return 20.0 * np.log10(max_val / np.sqrt(mse))


# ---------------------------------------------------------------------------
# SSIM – 3-D Gaussian kernel (paper implementation)
# ---------------------------------------------------------------------------


def _generate_3d_gaussian_kernel(device: torch.device) -> torch.nn.Conv3d:
    """Build the 11×11×11 separable Gaussian Conv3d used in the paper."""
    kernel_1d = cv2.getGaussianKernel(11, 1.5)    # (11, 1)
    window_2d = np.outer(kernel_1d, kernel_1d.T)  # (11, 11)
    kernel_3d = np.stack(
        [window_2d * k for k in kernel_1d],
        axis=0,                                   # (11, 11, 11)
    )
    conv3d = torch.nn.Conv3d(
        1,
        1,
        (11, 11, 11),
        stride=1,
        padding=(5, 5, 5),
        bias=False,
        padding_mode="replicate",
    )
    conv3d.weight.requires_grad_(False)
    conv3d.weight[0, 0] = torch.tensor(kernel_3d)
    return conv3d.to(device)


def _apply_3d_gaussian(img: torch.Tensor, conv3d: torch.nn.Conv3d) -> torch.Tensor:
    return conv3d(img.unsqueeze(0).unsqueeze(0)).squeeze(0).squeeze(0)


def _ssim_3d(
    img1: np.ndarray,
    img2: np.ndarray,
    max_value: float,
    device: torch.device,
) -> float:
    """3-D SSIM over all three channels simultaneously (paper metric)."""
    assert img1.ndim == 3 and img2.ndim == 3
    C1 = (0.01 * max_value) ** 2
    C2 = (0.03 * max_value) ** 2

    kernel = _generate_3d_gaussian_kernel(device)

    t1 = torch.tensor(img1.astype(np.float64)).float().to(device)
    t2 = torch.tensor(img2.astype(np.float64)).float().to(device)

    mu1 = _apply_3d_gaussian(t1, kernel)
    mu2 = _apply_3d_gaussian(t2, kernel)

    mu1_sq = mu1**2
    mu2_sq = mu2**2
    mu1_mu2 = mu1 * mu2

    sigma1_sq = _apply_3d_gaussian(t1**2, kernel) - mu1_sq
    sigma2_sq = _apply_3d_gaussian(t2**2, kernel) - mu2_sq
    sigma12 = _apply_3d_gaussian(t1 * t2, kernel) - mu1_mu2

    ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / (
        (mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2)
    )
    return float(ssim_map.mean())


def calculate_ssim(
    img1: np.ndarray,
    img2: np.ndarray,
    crop_border: int,
    input_order: str = "HWC",
    device: torch.device | None = None,
) -> float:
    """Structural Similarity using the 3-D Gaussian kernel (paper implementation).

    Uses CUDA by default; falls back to CPU if no GPU is available.
    """
    if device is None:
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    assert img1.shape == img2.shape, f"Shape mismatch: {img1.shape} vs {img2.shape}"

    img1 = reorder_image(img1, input_order).astype(np.float64)
    img2 = reorder_image(img2, input_order).astype(np.float64)

    if crop_border:
        img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...]
        img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...]

    max_val = 1 if img1.max() <= 1.0 else 255

    with torch.no_grad():
        return _ssim_3d(img1, img2, max_val, device)


# ---------------------------------------------------------------------------
# FID
# ---------------------------------------------------------------------------


def calculate_fid(gt_dir: str, pred_dir: str) -> float:
    """Compute FID via the pytorch-fid command-line tool."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    num_workers = 0 if sys.platform == "win32" else 4

    result = subprocess.run(
        [
            sys.executable,
            "-m",
            "pytorch_fid",
            gt_dir,
            pred_dir,
            "--device",
            device,
            "--batch-size",
            "4",
            "--num-workers",
            str(num_workers),
        ],
        capture_output=True,
        text=True,
    )
    output = result.stdout + result.stderr
    for line in output.splitlines():
        line = line.strip()
        if "fid" in line.lower():
            try:
                return float(line.split()[-1])
            except ValueError:
                pass
    print(f"[WARN] Could not parse FID output:\n{output}", file=sys.stderr)
    return float("nan")


# ---------------------------------------------------------------------------
# LPIPS
# ---------------------------------------------------------------------------


def calculate_lpips(
    gt_dir: str,
    pred_dir: str,
    device: torch.device | None = None,
) -> float:
    """Mean LPIPS (AlexNet backbone) over matched image pairs."""
    if device is None:
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    loss_fn = lpips.LPIPS(net="alex", verbose=False).to(device)

    gt_map = _stem_map(gt_dir)
    pred_map = _stem_map(pred_dir)
    keys = sorted(set(gt_map) & set(pred_map))

    if not keys:
        print(
            f"[WARN] No common files between {gt_dir} and {pred_dir}", file=sys.stderr
        )
        return float("nan")

    scores: list[float] = []
    for k in tqdm(keys, desc="LPIPS", leave=False):
        t1 = lpips.im2tensor(lpips.load_image(gt_map[k])).to(device)
        t2 = lpips.im2tensor(lpips.load_image(pred_map[k])).to(device)
        scores.append(loss_fn(t1, t2).item())

    return float(np.mean(scores))


# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------


def _stem_map(directory: str) -> dict[str, str]:
    """Return {stem: full_path} for every .png in *directory*."""
    return {
        os.path.splitext(os.path.basename(f))[0]: f
        for f in glob(os.path.join(directory, "*.png"))
    }


def evaluate_pair(
    gt_dir: str,
    pred_dir: str,
    desc: str = "",
    device: torch.device | None = None,
) -> dict | None:
    """Compute all four metrics for one (GT, prediction) directory pair.

    Returns a dict with keys: n, PSNR, SSIM, FID, LPIPS.
    Returns None if no common files are found.
    """
    if device is None:
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    gt_map = _stem_map(gt_dir)
    pred_map = _stem_map(pred_dir)
    keys = sorted(set(gt_map) & set(pred_map))

    if not keys:
        print(
            f"[WARN] No common files – GT: {gt_dir!r} Pred: {pred_dir!r}",
            file=sys.stderr,
        )
        return None

    psnr_list: list[float] = []
    ssim_list: list[float] = []

    for k in tqdm(keys, desc=desc or "PSNR/SSIM", leave=False):
        img_gt = cv2.imread(gt_map[k])
        img_pred = cv2.imread(pred_map[k])
        if img_gt is None or img_pred is None:
            print(f"[WARN] Could not read image for key '{k}'", file=sys.stderr)
            continue
        if img_gt.shape != img_pred.shape:
            print(
                f"[WARN] Shape mismatch for '{k}': {img_gt.shape} vs {img_pred.shape}",
                file=sys.stderr,
            )
            continue
        psnr_list.append(calculate_psnr(img_gt, img_pred, crop_border=0))
        ssim_list.append(calculate_ssim(img_gt, img_pred, crop_border=0, device=device))

    if not psnr_list:
        return None

    fid_score = calculate_fid(gt_dir, pred_dir)
    lpips_score = calculate_lpips(gt_dir, pred_dir, device=device)

    return {
        "n": len(psnr_list),
        "PSNR": float(np.mean(psnr_list)),
        "SSIM": float(np.mean(ssim_list)),
        "FID": fid_score,
        "LPIPS": lpips_score,
    }


# ---------------------------------------------------------------------------
# Pretty table
# ---------------------------------------------------------------------------


def _fmt(v: float | None, width: int, decimals: int) -> str:
    if v is None or (isinstance(v, float) and np.isnan(v)):
        return f"{'N/A':>{width}}"
    return f"{v:>{width}.{decimals}f}"


def print_table(
    all_results: dict[str, dict[str, dict]],
    datasets: list[str],
    methods: list[str],
) -> None:
    col = 38  # width of one dataset block
    sep = "-" * (16 + col * len(datasets))

    # Header
    print("\n" + "=" * len(sep))
    header = f"{'Method':<16}"
    for ds in datasets:
        label = ds.replace("Sen2_MTC_", "Sen2_MTC ")
        header += f"{'| ' + label:<{col}}"
    print(header)

    sub = f"{'':16}"
    for _ in datasets:
        sub += f"| {'PSNR':>7} {'SSIM':>6} {'FID':>9} {'LPIPS':>6} "
    print(sub)
    print(sep)

    for m in methods:
        row = f"{m:<16}"
        for ds in datasets:
            r = all_results.get(ds, {}).get(m)
            if r:
                row += (
                    f"| {_fmt(r['PSNR'], 7, 3)}"
                    f" {_fmt(r['SSIM'], 6, 3)}"
                    f" {_fmt(r['FID'], 9, 3)}"
                    f" {_fmt(r['LPIPS'], 6, 3)} "
                )
            else:
                row += f"|{'SKIP':^{col - 2}} "
        print(row)

    print("=" * len(sep))


# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------


def _parse_args() -> argparse.Namespace:
    p = argparse.ArgumentParser(
        description="Evaluate cloud-removal metrics (PSNR / SSIM / FID / LPIPS)"
    )
    p.add_argument(
        "--dataset",
        type=str,
        default=None,
        choices=DATASETS,
        help="Evaluate only this dataset (default: both)",
    )
    p.add_argument(
        "--method",
        type=str,
        default=None,
        help="Evaluate only this method (default: all)",
    )
    p.add_argument(
        "--gt",
        type=str,
        default=None,
        help="Ground-truth directory (use together with --pred for a custom pair)",
    )
    p.add_argument(
        "--pred",
        type=str,
        default=None,
        help="Prediction directory",
    )
    p.add_argument(
        "--no-fid",
        action="store_true",
        help="Skip FID computation (much faster, useful for quick checks)",
    )
    p.add_argument(
        "--no-lpips",
        action="store_true",
        help="Skip LPIPS computation",
    )
    return p.parse_args()


def main() -> None:
    args = _parse_args()
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print(f"Device: {device}")

    # ---- custom pair -------------------------------------------------------
    if args.gt and args.pred:
        print(f"GT  : {args.gt}")
        print(f"Pred: {args.pred}")
        res = evaluate_pair(args.gt, args.pred, desc="custom", device=device)
        if res:
            print(
                f"\nPSNR  = {res['PSNR']:.3f}\n"
                f"SSIM  = {res['SSIM']:.3f}\n"
                f"FID   = {res['FID']:.3f}\n"
                f"LPIPS = {res['LPIPS']:.3f}\n"
                f"(n = {res['n']})"
            )
        return

    # ---- standard evaluation loop ------------------------------------------
    datasets = [args.dataset] if args.dataset else DATASETS
    methods = [args.method] if args.method else METHODS

    all_results: dict[str, dict[str, dict]] = {ds: {} for ds in datasets}

    for ds in datasets:
        gt_dir = os.path.join(ROOT, "data", ds, "GT")
        if not os.path.isdir(gt_dir):
            print(f"[ERROR] GT directory not found: {gt_dir}", file=sys.stderr)
            continue

        for m in methods:
            pred_dir = os.path.join(ROOT, "results", ds, m)
            if not os.path.isdir(pred_dir):
                print(f"  SKIP {ds}/{m} (not found)")
                continue

            print(f"\n[{ds}] [{m}]")

            gt_map = _stem_map(gt_dir)
            pred_map = _stem_map(pred_dir)
            n_common = len(set(gt_map) & set(pred_map))
            print(
                f"  GT: {len(gt_map)} imgs | Pred: {len(pred_map)} imgs | Common: {n_common}"
            )

            # PSNR / SSIM
            psnr_list: list[float] = []
            ssim_list: list[float] = []
            keys = sorted(set(gt_map) & set(pred_map))

            for k in tqdm(keys, desc="PSNR/SSIM", leave=False):
                ig = cv2.imread(gt_map[k])
                ip = cv2.imread(pred_map[k])
                if ig is None or ip is None or ig.shape != ip.shape:
                    continue
                psnr_list.append(calculate_psnr(ig, ip, crop_border=0))
                ssim_list.append(calculate_ssim(ig, ip, crop_border=0, device=device))

            if not psnr_list:
                print("  [WARN] No valid image pairs found.")
                continue

            psnr_mean = float(np.mean(psnr_list))
            ssim_mean = float(np.mean(ssim_list))
            print(f"  PSNR = {psnr_mean:.3f} | SSIM = {ssim_mean:.3f}")

            # FID
            fid_score: float = float("nan")
            if not args.no_fid:
                print("  Computing FID ...", end=" ", flush=True)
                fid_score = calculate_fid(gt_dir, pred_dir)
                print(f"{fid_score:.3f}")

            # LPIPS
            lpips_score: float = float("nan")
            if not args.no_lpips:
                lpips_score = calculate_lpips(gt_dir, pred_dir, device=device)
                print(f"  LPIPS = {lpips_score:.3f}")

            all_results[ds][m] = {
                "n": len(psnr_list),
                "PSNR": psnr_mean,
                "SSIM": ssim_mean,
                "FID": fid_score,
                "LPIPS": lpips_score,
            }

    # ---- summary table -----------------------------------------------------
    if any(all_results[ds] for ds in datasets):
        print_table(all_results, datasets, methods)


if __name__ == "__main__":
    main()
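As a quick sanity check of the PSNR definition used in `calculate_psnr` above, a standalone NumPy version (a sketch mirroring the same formula, not the script's own function) can be run without any images on disk:

```python
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR = 20 * log10(MAX / sqrt(MSE)); same formula as calculate_psnr."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 20.0 * np.log10(max_val / np.sqrt(mse))

a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 10, dtype=np.uint8)  # uniform error of 10 → MSE = 100
print(round(psnr(a, b), 3))  # 28.131
```

Identical images give MSE = 0 and therefore infinite PSNR, which `calculate_psnr` handles with the same early return.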
eval/plot.py ADDED
@@ -0,0 +1,387 @@
"""
eval/plot.py

Generate comparison figures for all cloud-removal methods.

The script reads from the cleaned-up directory layout created by migrate.py:

visualization/
├── data/
│   ├── Sen2_MTC_New/
│   │   ├── GT/          {id}.png
│   │   └── inputs/      {id}_A1.png {id}_A2.png {id}_A3.png
│   └── Sen2_MTC_Old/
│       ├── GT/
│       └── inputs/
└── results/
    ├── Sen2_MTC_New/{method}/{id}.png
    └── Sen2_MTC_Old/{method}/{id}.png

Usage
-----
# Generate the exact figures that appear in the paper:
python plot.py --paper-samples

# Generate paper figures for one dataset only:
python plot.py --paper-samples --dataset Sen2_MTC_New
python plot.py --paper-samples --dataset Sen2_MTC_Old

# Generate a figure for any arbitrary sample ID:
python plot.py --dataset Sen2_MTC_New --id T12TUR_R027_55

# List all available sample IDs for a dataset:
python plot.py --dataset Sen2_MTC_New --list

# Custom output directory:
python plot.py --paper-samples --out-dir /path/to/figures
"""

from __future__ import annotations

import argparse
import os
from glob import glob
from typing import Optional

import matplotlib
import matplotlib.pyplot as plt
import numpy as np

matplotlib.rcParams["font.family"] = "Times New Roman"

# ---------------------------------------------------------------------------
# Paths
# ---------------------------------------------------------------------------
# eval/ is one level below the project root
ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------
DATASETS = ["Sen2_MTC_Old", "Sen2_MTC_New"]

# Display order in the 4×4 grid (row-major, after the 4 input/GT panels)
METHODS: list[str] = [
    "mcgan",
    "pix2pix",
    "ae",
    "stnet",
    "dsen2cr",
    "stgan",
    "ctgan",
    "crtsnet",
    "pmaa",
    "uncrtaints",
    "ddpmcr",
    "diffcr",
]

METHOD_LABELS: list[str] = [
    "MCGAN",
    "Pix2Pix",
    "AE",
    "STNet",
    "DSen2-CR",
    "STGAN",
    "CTGAN",
    "CR-TS-Net",
    "PMAA",
    "UnCRtainTS",
    "DDPM-CR",
    "DiffCR [Ours]",
]

INPUT_LABELS: list[str] = [
    r"Cloudy $T_1$",
    r"Cloudy $T_2$",
    r"Cloudy $T_3$",
    "Ground-Truth",
]

ALL_LABELS: list[str] = INPUT_LABELS + METHOD_LABELS

# Some methods in the Old dataset store outputs with a horizontal flip
# relative to the other methods' spatial convention. We correct for display.
FLIP_H_FOR_DISPLAY: dict[str, set[str]] = {
    "Sen2_MTC_Old": {"diffcr"},
}

# The exact sample IDs used in the paper figures
PAPER_SAMPLES: dict[str, list[str]] = {
    "Sen2_MTC_New": ["T12TUR_R027_55"],
    "Sen2_MTC_Old": ["42WVD_70008000", "14SQB_20006000"],
}


# ---------------------------------------------------------------------------
# I/O helpers
# ---------------------------------------------------------------------------


def _find_input(inputs_dir: str, sample_id: str, channel: str) -> Optional[str]:
    """Locate {id}_A{1|2|3}.png in *inputs_dir*."""
    direct = os.path.join(inputs_dir, f"{sample_id}_{channel}.png")
    if os.path.exists(direct):
        return direct
    # Fallback – glob for any file containing the id and channel tag
    hits = glob(os.path.join(inputs_dir, f"{sample_id}*{channel}*"))
    return hits[0] if hits else None


def _load(path: str, flip_h: bool = False) -> np.ndarray:
    """Load an image as float [0,1] RGBA/RGB via matplotlib.

    matplotlib.imread returns:
      - PNG:   float32 [0,1] (RGBA or RGB depending on file)
      - other: uint8 [0,255]
    We normalise everything to float32 [0,1] and strip the alpha channel.
    """
    img = plt.imread(path)
    # Normalise uint8 to float
    if img.dtype == np.uint8:
        img = img.astype(np.float32) / 255.0
    # Drop alpha channel if present
    if img.ndim == 3 and img.shape[2] == 4:
        img = img[:, :, :3]
    # Clip to valid range (handles tiny float rounding errors)
    img = np.clip(img, 0.0, 1.0)
    if flip_h:
        img = img[:, ::-1, :]
    return img


# ---------------------------------------------------------------------------
# Core plotting function
# ---------------------------------------------------------------------------


def plot_sample(
    dataset: str,
    sample_id: str,
    out_dir: Optional[str] = None,
    dpi: int = 300,
    verbose: bool = True,
) -> Optional[str]:
    """Generate a 4×4 comparison grid for *sample_id* in *dataset*.

    Returns the path of the saved figure, or None on failure.
    """
    data_dir = os.path.join(ROOT, "data", dataset)
    results_dir = os.path.join(ROOT, "results", dataset)
    inputs_dir = os.path.join(data_dir, "inputs")
    gt_dir = os.path.join(data_dir, "GT")

    # ---- Locate source files -----------------------------------------------
    a1 = _find_input(inputs_dir, sample_id, "A1")
    a2 = _find_input(inputs_dir, sample_id, "A2")
    a3 = _find_input(inputs_dir, sample_id, "A3")
    gt = os.path.join(gt_dir, f"{sample_id}.png")

    missing: list[str] = []
    for tag, path in [("A1", a1), ("A2", a2), ("A3", a3), ("GT", gt)]:
        if not path or not os.path.exists(path):
            missing.append(tag)

    if missing:
        print(f"[WARN] {dataset}/{sample_id}: missing {missing} – skipping.")
        return None

    # ---- Build image grid --------------------------------------------------
    flip_set = FLIP_H_FOR_DISPLAY.get(dataset, set())

    grid: list[np.ndarray] = [
        _load(a1),
        _load(a2),
        _load(a3),
        _load(gt),
    ]

    for method in METHODS:
        pred_path = os.path.join(results_dir, method, f"{sample_id}.png")
        flip = method in flip_set
        if os.path.exists(pred_path):
            grid.append(_load(pred_path, flip_h=flip))
        else:
            if verbose:
                print(
                    f"  [WARN] missing {dataset}/{method}/{sample_id}.png → black panel"
                )
            # Placeholder: black image with same shape as GT
            grid.append(np.zeros_like(grid[3]))

    assert len(grid) == 16, f"Expected 16 panels, got {len(grid)}"

    # ---- Render figure -----------------------------------------------------
    fig, axes = plt.subplots(4, 4, figsize=(8, 8), dpi=dpi)
    fig.subplots_adjust(
        left=0.01,
        right=0.99,
        top=0.99,
        bottom=0.06,
        wspace=0.04,
        hspace=0.10,
    )

    for ax, img, label in zip(axes.flat, grid, ALL_LABELS):
        ax.imshow(img)
        ax.set_title(label, y=-0.18, fontsize=7)
        ax.axis("off")

    # ---- Save --------------------------------------------------------------
    if out_dir is None:
        out_dir = os.path.join(ROOT, "eval", "plots")
    os.makedirs(out_dir, exist_ok=True)

    out_path = os.path.join(out_dir, f"{dataset}_{sample_id}.pdf")
    fig.savefig(out_path, bbox_inches="tight")
    plt.close(fig)

    if verbose:
        print(f"Saved: {out_path}")
    return out_path


# ---------------------------------------------------------------------------
# Batch helpers
# ---------------------------------------------------------------------------
247
+
248
+
249
+ def available_ids(dataset: str) -> list[str]:
250
+ """Return sorted list of sample IDs that have at least one input image."""
251
+ inputs_dir = os.path.join(ROOT, "data", dataset, "inputs")
252
+ a1_files = sorted(glob(os.path.join(inputs_dir, "*_A1.png")))
253
+ return [os.path.basename(f).replace("_A1.png", "") for f in a1_files]
254
+
255
+
256
+ def generate_paper_figures(
257
+ datasets: Optional[list[str]] = None,
258
+ out_dir: Optional[str] = None,
259
+ ) -> list[str]:
260
+ """Generate all figures referenced in the paper."""
261
+ if datasets is None:
262
+ datasets = DATASETS
263
+ saved: list[str] = []
264
+ for ds in datasets:
265
+ for sid in PAPER_SAMPLES.get(ds, []):
266
+ print(f"\n--- {ds} / {sid} ---")
267
+ path = plot_sample(ds, sid, out_dir=out_dir)
268
+ if path:
269
+ saved.append(path)
270
+ return saved
271
+
272
+
273
+ # ---------------------------------------------------------------------------
274
+ # CLI
275
+ # ---------------------------------------------------------------------------
276
+
277
+
278
+ def _parse_args() -> argparse.Namespace:
279
+ p = argparse.ArgumentParser(
280
+ description="Generate comparison figures for cloud-removal methods"
281
+ )
282
+ p.add_argument(
283
+ "--dataset",
284
+ type=str,
285
+ default=None,
286
+ choices=DATASETS,
287
+ help="Dataset to use (default: both when --paper-samples is set)",
288
+ )
289
+ p.add_argument(
290
+ "--id",
291
+ type=str,
292
+ default=None,
293
+ metavar="SAMPLE_ID",
294
+ help="Generate a figure for this specific sample ID",
295
+ )
296
+ p.add_argument(
297
+ "--paper-samples",
298
+ action="store_true",
299
+ help="Generate the exact figures used in the paper",
300
+ )
301
+ p.add_argument(
302
+ "--all",
303
+ action="store_true",
304
+ help="Generate figures for ALL available samples in the chosen dataset",
305
+ )
306
+ p.add_argument(
307
+ "--list",
308
+ action="store_true",
309
+ help="List available sample IDs and exit",
310
+ )
311
+ p.add_argument(
312
+ "--out-dir",
313
+ type=str,
314
+ default=None,
315
+ help="Output directory (default: eval/plots/)",
316
+ )
317
+ p.add_argument(
318
+ "--dpi",
319
+ type=int,
320
+ default=300,
321
+ help="Figure resolution in DPI (default: 300)",
322
+ )
323
+ return p.parse_args()
324
+
325
+
326
+ def main() -> None:
327
+ args = _parse_args()
328
+
329
+ # Determine which datasets to process
330
+ if args.dataset:
331
+ datasets = [args.dataset]
332
+ else:
333
+ datasets = DATASETS
334
+
335
+ # ---- list mode ---------------------------------------------------------
336
+ if args.list:
337
+ for ds in datasets:
338
+ ids = available_ids(ds)
339
+ print(f"\n{ds} ({len(ids)} samples)")
340
+ for i, sid in enumerate(ids):
341
+ print(f" {sid}")
342
+ if i >= 29 and len(ids) > 30:
343
+ print(f" ... and {len(ids) - 30} more (use --all to see all)")
344
+ break
345
+ return
346
+
347
+ # ---- paper figures -----------------------------------------------------
348
+ if args.paper_samples:
349
+ saved = generate_paper_figures(datasets=datasets, out_dir=args.out_dir)
350
+ print(f"\n{len(saved)} figure(s) saved.")
351
+ return
352
+
353
+ # ---- single sample -----------------------------------------------------
354
+ if args.id:
355
+ if len(datasets) > 1:
356
+ print("[INFO] --id specified without --dataset; trying both datasets.")
357
+ for ds in datasets:
358
+ plot_sample(ds, args.id, out_dir=args.out_dir, dpi=args.dpi)
359
+ return
360
+
361
+ # ---- all samples -------------------------------------------------------
362
+ if args.all:
363
+ if not args.dataset:
364
+ print("[ERROR] Please specify --dataset when using --all.")
365
+ return
366
+ ids = available_ids(args.dataset)
367
+ print(f"Generating {len(ids)} figures for {args.dataset} …")
368
+ for sid in ids:
369
+ plot_sample(
370
+ args.dataset, sid, out_dir=args.out_dir, dpi=args.dpi, verbose=False
371
+ )
372
+ print(f" done: {sid}")
373
+ print("Finished.")
374
+ return
375
+
376
+ # ---- no action specified -----------------------------------------------
377
+ print(
378
+ "No action specified. Examples:\n"
379
+ " python plot.py --paper-samples\n"
380
+ " python plot.py --dataset Sen2_MTC_New --id T12TUR_R027_55\n"
381
+ " python plot.py --dataset Sen2_MTC_New --list\n"
382
+ " python plot.py --dataset Sen2_MTC_New --all\n"
383
+ )
384
+
385
+
386
+ if __name__ == "__main__":
387
+ main()
eval/requirements.txt ADDED
@@ -0,0 +1,10 @@
+ matplotlib>=3.7
+ numpy>=1.25
+ opencv-contrib-python>=4.5
+ Pillow>=10.0
+ pytorch-fid>=0.3.0
+ lpips>=0.1.4
+ scikit-image>=0.17
+ torch>=1.9
+ torchvision>=0.10
+ tqdm>=4.66
migrate.py ADDED
@@ -0,0 +1,396 @@
+ #!/usr/bin/env python3
+ """
+ migrate.py – One-time reorganisation of the visualization directory.
+
+ Run from D:\\visualization:
+     python migrate.py            # full migration + cleanup
+     python migrate.py --dry-run  # preview only, no files touched
+
+ What the script does
+ --------------------
+ 1. Creates:
+        data/Sen2_MTC_{New,Old}/GT/           one shared ground-truth copy
+        data/Sen2_MTC_{New,Old}/inputs/       cloudy inputs (_A1 / _A2 / _A3)
+        results/Sen2_MTC_{New,Old}/{method}/  per-method predictions
+
+ 2. Copies images with a unified naming scheme:
+        {id}_real_B.png   → GT/{id}.png
+        {id}_fake_B.png   → {method}/{id}.png
+        Out_{id}.png      → {method}/{id}.png   (diffcr convention)
+        {id}_real_A1.png  → inputs/{id}_A1.png
+
+ 3. After verifying every expected directory is non-empty, deletes the
+    original Sen2_MTC_New and Sen2_MTC_Old trees.
+
+ Special cases handled
+ ---------------------
+ - pmaa / Sen2_MTC_New has no Out/ folder: outputs are extracted from the
+   per-sample test/{psnr_ssim}/ sub-directories.
+ - diffcr / Sen2_MTC_New has no test/ folder: Out/ already contains flat
+   Out_{id}.png files.
+ - diffcr / Sen2_MTC_Old: same as above.
+ - ctgan / Sen2_MTC_New has an extra save/ directory (ignored).
+ """
+
+ from __future__ import annotations
+
+ import argparse
+ import os
+ import shutil
+ import sys
+ from glob import glob
+
+ from tqdm import tqdm
+
+ # ---------------------------------------------------------------------------
+ # Project root (this file lives directly in D:\visualization\)
+ # ---------------------------------------------------------------------------
+ ROOT = os.path.dirname(os.path.abspath(__file__))
+
+ METHODS: list[str] = [
+     "ae",
+     "crtsnet",
+     "ctgan",
+     "ddpmcr",
+     "diffcr",
+     "dsen2cr",
+     "mcgan",
+     "pix2pix",
+     "pmaa",
+     "stgan",
+     "stnet",
+     "uncrtaints",
+ ]
+
+ # ---------------------------------------------------------------------------
+ # Naming helpers
+ # ---------------------------------------------------------------------------
+
+
+ def strip_id(path: str) -> str:
+     """Extract the clean sample ID from any of the naming conventions used.
+
+     Examples
+     --------
+     T12TUR_R027_0_real_B.png   ->  T12TUR_R027_0
+     T12TUR_R027_0_fake_B.png   ->  T12TUR_R027_0
+     Out_T12TUR_R027_0.png      ->  T12TUR_R027_0
+     GT_T12TUR_R027_0.png       ->  T12TUR_R027_0
+     01WFN_60009000_real_B.png  ->  01WFN_60009000
+     """
+     stem = os.path.splitext(os.path.basename(path))[0]
+
+     # Prefix conventions used by diffcr
+     for pfx in ("GT_", "Out_"):
+         if stem.startswith(pfx):
+             return stem[len(pfx):]
+
+     # Suffix conventions used by most other methods
+     for sfx in ("_real_B", "_fake_B"):
+         if stem.endswith(sfx):
+             return stem[: -len(sfx)]
+
+     # Already a bare ID (should not normally happen)
+     return stem
+
+
+ def _copy(src: str, dst: str) -> None:
+     """Copy *src* to *dst*, creating parent directories as needed."""
+     os.makedirs(os.path.dirname(dst), exist_ok=True)
+     shutil.copy2(src, dst)
+
+
+ # ---------------------------------------------------------------------------
+ # Migration steps
+ # ---------------------------------------------------------------------------
+
+
+ def migrate_gt(
+     src_base: str,
+     dst_data: str,
+     dry_run: bool,
+ ) -> int:
+     """Copy GT images from ae/GT/ → data/{dataset}/GT/ with clean names.
+
+     Source name:  {id}_real_B.png
+     Dest name:    {id}.png
+     """
+     src_dir = os.path.join(src_base, "ae", "GT")
+     dst_dir = os.path.join(dst_data, "GT")
+
+     files = sorted(glob(os.path.join(src_dir, "*.png")))
+     if not files:
+         print(f"  [WARN] No GT images found in {src_dir}", file=sys.stderr)
+         return 0
+
+     if not dry_run:
+         os.makedirs(dst_dir, exist_ok=True)
+         for f in tqdm(files, desc="  GT", leave=False):
+             _copy(f, os.path.join(dst_dir, strip_id(f) + ".png"))
+
+     return len(files)
+
+
+ def migrate_inputs(
+     src_base: str,
+     dst_data: str,
+     dry_run: bool,
+ ) -> int:
+     """Extract real_A1 / real_A2 / real_A3 from ae/test/**/
+     → data/{dataset}/inputs/{id}_A{1,2,3}.png
+
+     Source name:  {id}_real_A1.png
+     Dest name:    {id}_A1.png
+     """
+     src_test = os.path.join(src_base, "ae", "test")
+     dst_dir = os.path.join(dst_data, "inputs")
+
+     # ae/test/ contains one sub-folder per sample named after its psnr/ssim score.
+     files = sorted(glob(os.path.join(src_test, "*", "*_real_A?.png")))
+     if not files:
+         print(f"  [WARN] No input images found under {src_test}", file=sys.stderr)
+         return 0
+
+     if not dry_run:
+         os.makedirs(dst_dir, exist_ok=True)
+         for f in tqdm(files, desc="  inputs", leave=False):
+             # e.g. T12TUR_R027_0_real_A1.png → T12TUR_R027_0_A1.png
+             new_name = os.path.basename(f).replace("_real_A", "_A")
+             _copy(f, os.path.join(dst_dir, new_name))
+
+     return len(files)
+
+
+ def migrate_outputs(
+     src_base: str,
+     dst_results: str,
+     dataset_label: str,
+     dry_run: bool,
+ ) -> dict[str, int]:
+     """Copy each method's predictions into results/{dataset}/{method}/{id}.png
+
+     Handles the three different source layouts:
+       a) Standard:    method/Out/{id}_fake_B.png
+       b) diffcr:      method/Out/Out_{id}.png
+       c) pmaa (New):  method/test/{psnr_ssim}/{id}_fake_B.png  (no Out/ folder)
+     """
+     counts: dict[str, int] = {}
+
+     for method in METHODS:
+         src_method = os.path.join(src_base, method)
+
+         if not os.path.isdir(src_method):
+             print(f"  SKIP {dataset_label}/{method} (directory not found)")
+             continue
+
+         dst_dir = os.path.join(dst_results, method)
+         files: list[tuple[str, str]] = []  # (src_path, dst_filename)
+
+         # ---- pmaa / Sen2_MTC_New only: no Out/ folder ----------------------
+         if method == "pmaa" and "New" in dataset_label:
+             for subdir in sorted(glob(os.path.join(src_method, "test", "*/"))):
+                 for f in sorted(glob(os.path.join(subdir, "*_fake_B.png"))):
+                     files.append((f, strip_id(f) + ".png"))
+
+         # ---- all other methods: use the flat Out/ folder -------------------
+         else:
+             src_out = os.path.join(src_method, "Out")
+             if not os.path.isdir(src_out):
+                 print(
+                     f"  SKIP {dataset_label}/{method} (Out/ folder not found)",
+                     file=sys.stderr,
+                 )
+                 continue
+             for f in sorted(glob(os.path.join(src_out, "*.png"))):
+                 files.append((f, strip_id(f) + ".png"))
+
+         if not files:
+             print(
+                 f"  [WARN] {dataset_label}/{method}: no output images found",
+                 file=sys.stderr,
+             )
+             counts[method] = 0
+             continue
+
+         if not dry_run:
+             os.makedirs(dst_dir, exist_ok=True)
+             for src_f, dst_name in tqdm(files, desc=f"  {method}", leave=False):
+                 _copy(src_f, os.path.join(dst_dir, dst_name))
+
+         counts[method] = len(files)
+
+     return counts
+
+
+ # ---------------------------------------------------------------------------
+ # Verification
+ # ---------------------------------------------------------------------------
+
+
+ def verify(datasets: list[str]) -> bool:
+     """Check that every expected output directory is non-empty."""
+     ok = True
+     print("\nVerification")
+     print("-" * 60)
+
+     for ds in datasets:
+         gt_dir = os.path.join(ROOT, "data", ds, "GT")
+         n_gt = len(glob(os.path.join(gt_dir, "*.png")))
+         status = "OK" if n_gt > 0 else "EMPTY"
+         print(f"  data/{ds}/GT      → {n_gt:4d} files [{status}]")
+         if n_gt == 0:
+             ok = False
+
+         inp_dir = os.path.join(ROOT, "data", ds, "inputs")
+         n_inp = len(glob(os.path.join(inp_dir, "*.png")))
+         status = "OK" if n_inp > 0 else "EMPTY"
+         print(f"  data/{ds}/inputs  → {n_inp:4d} files [{status}]")
+         if n_inp == 0:
+             ok = False
+
+         for m in METHODS:
+             d = os.path.join(ROOT, "results", ds, m)
+             if os.path.isdir(d):
+                 n = len(glob(os.path.join(d, "*.png")))
+                 status = "OK" if n > 0 else "EMPTY"
+                 print(f"  results/{ds}/{m:<14} → {n:4d} files [{status}]")
+                 if n == 0:
+                     ok = False
+             else:
+                 print(f"  results/{ds}/{m:<14} → MISSING")
+                 # Not every method must exist; treat as a non-fatal warning.
+
+     return ok
+
+
+ # ---------------------------------------------------------------------------
+ # Cleanup
+ # ---------------------------------------------------------------------------
+
+
+ def cleanup(datasets: list[str]) -> None:
+     """Delete the original Sen2_MTC_* directories."""
+     for ds in datasets:
+         old_dir = os.path.join(ROOT, ds)
+         if os.path.isdir(old_dir):
+             print(f"  Removing {old_dir} …")
+             shutil.rmtree(old_dir)
+         else:
+             print(f"  Already gone: {old_dir}")
+
+
+ # ---------------------------------------------------------------------------
+ # Main
+ # ---------------------------------------------------------------------------
+
+
+ def main() -> None:
+     ap = argparse.ArgumentParser(
+         description="Migrate the visualization directory to the cleaned-up layout."
+     )
+     ap.add_argument(
+         "--dry-run",
+         action="store_true",
+         help="Print a summary of what would happen without touching the filesystem.",
+     )
+     ap.add_argument(
+         "--skip-cleanup",
+         action="store_true",
+         help="Do not delete the old Sen2_MTC_* directories after migration.",
+     )
+     ap.add_argument(
+         "--dataset",
+         type=str,
+         default=None,
+         choices=["Sen2_MTC_New", "Sen2_MTC_Old"],
+         help="Migrate only this dataset (default: both).",
+     )
+     args = ap.parse_args()
+
+     datasets = [args.dataset] if args.dataset else ["Sen2_MTC_New", "Sen2_MTC_Old"]
+
+     if args.dry_run:
+         print("=" * 60)
+         print("DRY RUN – no files will be copied or deleted")
+         print("=" * 60)
+
+     total_gt = 0
+     total_inputs = 0
+     total_results: dict[str, dict[str, int]] = {}
+
+     for ds in datasets:
+         src_base = os.path.join(ROOT, ds)
+         dst_data = os.path.join(ROOT, "data", ds)
+         dst_results = os.path.join(ROOT, "results", ds)
+
+         if not os.path.isdir(src_base):
+             print(f"\n[ERROR] Source directory not found: {src_base}", file=sys.stderr)
+             sys.exit(1)
+
+         print(f"\n{'=' * 60}")
+         print(f"  Dataset: {ds}")
+         print(f"{'=' * 60}")
+
+         # Ground truth
+         print("  Step 1/3 – GT images")
+         n_gt = migrate_gt(src_base, dst_data, dry_run=args.dry_run)
+         total_gt += n_gt
+         print(f"    → {n_gt} GT images {'(would copy)' if args.dry_run else 'copied'}")
+
+         # Input images
+         print("  Step 2/3 – Cloudy inputs")
+         n_inp = migrate_inputs(src_base, dst_data, dry_run=args.dry_run)
+         total_inputs += n_inp
+         print(
+             f"    → {n_inp} input images ({n_inp // 3} samples × 3) "
+             f"{'(would copy)' if args.dry_run else 'copied'}"
+         )
+
+         # Per-method outputs
+         print("  Step 3/3 – Method outputs")
+         counts = migrate_outputs(src_base, dst_results, ds, dry_run=args.dry_run)
+         total_results[ds] = counts
+         for method, n in counts.items():
+             print(
+                 f"    {method:<14} → {n:4d} images "
+                 f"{'(would copy)' if args.dry_run else 'copied'}"
+             )
+
+     # ---- Summary ----------------------------------------------------------
+     print(f"\n{'=' * 60}")
+     print("Summary")
+     print(f"{'=' * 60}")
+     print(f"  GT images   : {total_gt}")
+     print(f"  Input images: {total_inputs}")
+     for ds, counts in total_results.items():
+         total_preds = sum(counts.values())
+         print(f"  Results ({ds}): {total_preds}")
+
+     if args.dry_run:
+         print("\n[DRY RUN] Nothing was written. Re-run without --dry-run to proceed.")
+         return
+
+     # ---- Verify before deleting -------------------------------------------
+     ok = verify(datasets)
+
+     if not ok:
+         print(
+             "\n[ERROR] Verification found empty directories. "
+             "Old directories were NOT deleted.\n"
+             "Please inspect the output above and re-run.",
+             file=sys.stderr,
+         )
+         sys.exit(1)
+
+     # ---- Cleanup ----------------------------------------------------------
+     if args.skip_cleanup:
+         print("\n[INFO] --skip-cleanup set: original directories kept.")
+         print("       Delete them manually when you are satisfied with the result.")
+     else:
+         print("\nAll checks passed. Deleting original directories …")
+         cleanup(datasets)
+     print("Done.")
+
+
+ if __name__ == "__main__":
+     main()
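The ID-normalisation rules in migrate.py's `strip_id()` are easy to sanity-check in isolation. The snippet below re-implements those rules as a standalone sketch (it mirrors the function above rather than importing migrate.py, so it runs anywhere):

```python
import os


def strip_id(path: str) -> str:
    """Standalone sketch of migrate.py's ID-normalisation rules."""
    stem = os.path.splitext(os.path.basename(path))[0]
    for pfx in ("GT_", "Out_"):  # diffcr prefix conventions
        if stem.startswith(pfx):
            return stem[len(pfx):]
    for sfx in ("_real_B", "_fake_B"):  # suffix conventions (most methods)
        if stem.endswith(sfx):
            return stem[: -len(sfx)]
    return stem  # already a bare ID


print(strip_id("T12TUR_R027_0_real_B.png"))  # → T12TUR_R027_0
print(strip_id("Out_T12TUR_R027_0.png"))     # → T12TUR_R027_0
print(strip_id("GT_T12TUR_R027_0.png"))      # → T12TUR_R027_0
```

All three naming conventions collapse to the same clean sample ID, which is what lets `{method}/{id}.png` act as the unified key across GT, inputs, and predictions.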
paper-report.png ADDED

Git LFS Details

  • SHA256: c3c389c34ba37e4dea3886c6cfebad0f52fda46057c8108b10ae12ba35050f44
  • Pointer size: 131 Bytes
  • Size of remote file: 147 kB
results/Sen2_MTC_New/ae.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d0f4f9ade3fb57847dae7b03047b18dcbd532a5bb7e3d1c984641e7540eb9546
+ size 80777775
results/Sen2_MTC_New/crtsnet.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5387005b11099e688a5878b2c70b58929319838dfe5224af9fd25361c79345b5
+ size 72081099
results/Sen2_MTC_New/ctgan.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea6ca30d463dbdb061aa307c27837443668256267625c5e77d720f5c9e914d88
+ size 71862201
results/Sen2_MTC_New/ddpmcr.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f80b08d20ac8bf2d3ba198d5fc0e56545279b9114f4cbfca25e5749005899d6
+ size 76255668
results/Sen2_MTC_New/diffcr.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4dc39c9b918712304e7417ff3a684791c8cf73b8b21a0d8bb086d8b5720468a0
+ size 76906612
results/Sen2_MTC_New/dsen2cr.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:362277361d1c98c2e01c17f6bdf8815f33c8c8db16b1d111ed24896145dce516
+ size 75093757
results/Sen2_MTC_New/mcgan.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a27a51b38a258feebdc801b1b405dc10c8ef78cdeb70c6637d218142aaae8891
+ size 76440136
results/Sen2_MTC_New/pix2pix.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a4fa07468294dc8b6d52709c34391243d08ab967229939c7085079cc88d168ca
+ size 65461714
results/Sen2_MTC_New/pmaa.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7cd2c778db4b1f2bbffd107656901ed4c1f7957c851cfc700d54ba4c01b65e83
+ size 66349904
results/Sen2_MTC_New/stgan.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4584b58104796132d4c63348fc7830ebb93719c426e90c816c838a01c3324309
+ size 71041976
results/Sen2_MTC_New/stnet.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:23081ff40bf4c6fab992b92ca973e8c100e4683402f02f4747d2348626e10b36
+ size 71763303
results/Sen2_MTC_New/uncrtaints.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cf9b99fbdf6651c854dc551f39590b0a5021c214857f5e31caa368e9da82a8cf
+ size 71233492
results/Sen2_MTC_Old/ae.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7d663a7de8b8c188063a09a0b23e8c1824eb27ace5da161608e69bde9ec315f
+ size 26343290
results/Sen2_MTC_Old/crtsnet.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2105e13b8717fd94f643511bd1872901db712281bbe57006786845ae34cf891b
+ size 15463290
results/Sen2_MTC_Old/ctgan.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dd8af029938d4700a9b8166efbd4b754f59a3cfb7ced27daac73c0dc7a5733a9
+ size 14675294
results/Sen2_MTC_Old/ddpmcr.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:94f90c1ff4357b4c8a0769679f0223ca58ede94a0327db7baae00c490bbe504b
+ size 17755866
results/Sen2_MTC_Old/diffcr.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ddece6453d196e83d51b38a2670adb1a4fed33602cb57f6b81ca922f1b22cdd9
+ size 16989551
results/Sen2_MTC_Old/dsen2cr.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:86c52e8a383193431ced952a2f5c34458df8be43896febedacd1e06aa81306c4
+ size 18424684
results/Sen2_MTC_Old/mcgan.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:50194271bb98ec379f589b0a01e871becc494da9ef93329f48d9295fa1baa5f2
+ size 20775365
results/Sen2_MTC_Old/pix2pix.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bccaf4628a73696b639946d22c49864b78c87e2904dff28f3a9beb9813907788
+ size 14021035
results/Sen2_MTC_Old/pmaa.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:873cbe7d0c4749753185fcd2ba22fbc07978b042ac292d8dc304269ed0464ad9
+ size 15326226
results/Sen2_MTC_Old/stgan.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b8ff8b58b0895aaac911e64ed0281e61ab7fea178c01fe15af84446a422ec71b
+ size 18320727
results/Sen2_MTC_Old/stnet.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ffc3fadff8b417043b74dc1a2db2b4e72e6c398dfd4804f9e1c731235ee9a44c
+ size 13290229
results/Sen2_MTC_Old/uncrtaints.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a08008e0e8e6158fcd431dbaca0cdd4dbbbfda17ba541da8a0e278369983255a
+ size 15294336