Anirudh Balaraman committed on
Enhance pipeline documentation and optimizer settings
Updated formatting and optimizer learning rate in pipeline documentation.
- docs/pipeline.md +6 -6
docs/pipeline.md
CHANGED
@@ -4,18 +4,18 @@ The full pipeline has three phases: preprocessing, PI-RADS training (Stage 1), a
 
 ```mermaid
 flowchart TD
-    subgraph Preprocessing
+    subgraph <b>Preprocessing</b>
        R[register_and_crop] --> S[get_segmentation_mask]
        S --> H[histogram_match]
        H --> G[get_heatmap]
    end
 
    subgraph Stage 1
-        P[PI-RADS Training<br/>CrossEntropy + Attention Loss]
+        P[<b>PI-RADS Training</b><br/>CrossEntropy + Attention Loss]
    end
 
    subgraph Stage 2
-        C[csPCa Training<br/>Frozen Backbone + BCE Loss]
+        C[<b>csPCa Training</b><br/>Frozen Backbone + BCE Loss]
    end
 
    G --> P
@@ -68,7 +68,7 @@ python run_pirads.py --mode train --config config/config_pirads_train.yaml
 |-----------|-------|
 | Loss | CrossEntropy + cosine-similarity attention loss |
 | Attention loss weight | Linear warmup over 25 epochs to `lambda=2.0` |
-| Optimizer | AdamW (base LR `
+| Optimizer | AdamW (base LR `2e-4`, transformer LR `6e-5`) |
 | Scheduler | CosineAnnealingLR |
 | Metric | Quadratic Weighted Kappa (QWK) |
 | Early stopping | After 40 epochs without validation loss improvement |
@@ -78,7 +78,7 @@ python run_pirads.py --mode train --config config/config_pirads_train.yaml
 
 ## Stage 2: csPCa Risk Prediction
 
-Builds on a frozen PI-RADS backbone to predict binary csPCa risk.
+Builds on a frozen PI-RADS backbone to predict binary csPCa risk. The self-attention and classification head are fine-tuned.
 
 ```bash
 python run_cspca.py --mode train --config config/config_cspca_train.yaml
@@ -95,4 +95,4 @@ python run_cspca.py --mode train --config config/config_cspca_train.yaml
 | Seeds | 20 random seeds (default) for 95% CI |
 | Metrics | AUC, Sensitivity, Specificity |
 
-The backbone's feature extractor (`net`), transformer, and `myfc` are frozen. The attention module and `SimpleNN` classification head are trained. After training
+The backbone's feature extractor (`net`), transformer, and `myfc` are frozen. The attention module and `SimpleNN` classification head are trained. After training the framework reports mean and 95% confidence intervals for AUC, sensitivity, and specificity by testing across 20 random seeds.
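The Stage 1 table specifies a linear warmup of the attention-loss weight to `lambda=2.0` over 25 epochs. A minimal sketch of such a schedule (the function name and the way it combines with the cross-entropy term are assumptions, not taken from the repo):

```python
def attention_loss_weight(epoch: int, warmup_epochs: int = 25, lam_max: float = 2.0) -> float:
    """Linearly ramp the attention-loss weight from 0 at epoch 0 to lam_max
    after warmup_epochs, then hold it constant (hypothetical helper)."""
    return lam_max * min(epoch / warmup_epochs, 1.0)

# Assumed usage per epoch: total_loss = ce_loss + attention_loss_weight(epoch) * attn_loss
```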
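The commit sets two learning rates (base `2e-4`, transformer `6e-5`) under a CosineAnnealingLR scheduler. The closed form of that schedule can be sketched as below; the helper name and the two-parameter-group split are assumptions for illustration:

```python
import math

def cosine_annealed_lr(base_lr: float, t: int, t_max: int, eta_min: float = 0.0) -> float:
    """CosineAnnealingLR closed form: cosine decay from base_lr at t=0
    down to eta_min at t=t_max (same formula PyTorch's scheduler uses)."""
    return eta_min + (base_lr - eta_min) * (1 + math.cos(math.pi * t / t_max)) / 2

# Per-group base LRs from the updated table; in PyTorch these would be two
# AdamW parameter groups sharing one scheduler.
param_group_lrs = {"base": 2e-4, "transformer": 6e-5}
```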
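Stage 2 freezes `net`, the transformer, and `myfc` while training the attention module and the `SimpleNN` head. A torch-free sketch of that freezing scheme follows; the `Model` container and the lightweight `Param`/`Module` stand-ins are hypothetical (real code would use `torch.nn` modules and pass only the still-trainable parameters to the optimizer):

```python
class Param:
    """Stand-in for a tensor parameter; trainable by default, like torch."""
    def __init__(self):
        self.requires_grad = True

class Module:
    """Stand-in for an nn.Module holding a few parameters."""
    def __init__(self, n_params: int):
        self._params = [Param() for _ in range(n_params)]
    def parameters(self):
        return list(self._params)

class Model:
    """Hypothetical container mirroring the module names in the doc."""
    def __init__(self):
        self.net = Module(4)          # frozen feature extractor
        self.transformer = Module(4)  # frozen
        self.myfc = Module(2)         # frozen
        self.attention = Module(2)    # trained
        self.head = Module(2)         # trained (SimpleNN classification head)

def freeze_backbone(model: Model) -> list:
    """Freeze net/transformer/myfc; return the parameters left to optimize."""
    for module in (model.net, model.transformer, model.myfc):
        for p in module.parameters():
            p.requires_grad = False
    trainable = []
    for module in (model.attention, model.head):
        trainable.extend(module.parameters())
    return trainable
```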
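The evaluation table reports a 95% CI from 20 random seeds. One common way to compute such an interval is a normal approximation over the per-seed scores; the doc does not say which method the framework uses, so this helper is purely an assumption:

```python
import statistics

def mean_ci95(per_seed_scores: list) -> tuple:
    """Mean and 95% half-width over per-seed metric values (e.g. AUC),
    using the normal approximation 1.96 * stdev / sqrt(n) — an assumed method."""
    m = statistics.mean(per_seed_scores)
    half = 1.96 * statistics.stdev(per_seed_scores) / len(per_seed_scores) ** 0.5
    return m, half
```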