---
base_model: DCAMA
language: en
license: mit
tags:
- few-shot segmentation
- distillation
- image-segmentation
name: DistillFSS-DCAMA
library: pytorch
ArXiv: '2512.05613'
repo_url: https://github.com/pasqualedem/DistillFSS
paper_url: https://arxiv.org/abs/2512.05613
parameters: |
  dataloader:
    num_workers: 0
  dataset:
    datasets:
      test_weedmap:
        prompt_images: 5
        test_root: data/weedmap/0_rotations_processed_003_test/RedEdge/003
        train_root: data/weedmap/0_rotations_processed_003_test/RedEdge/000
    preprocess:
      image_size: 384
      mean:
      - 0.485
      - 0.456
      - 0.406
      std:
      - 0.229
      - 0.224
      - 0.225
  model:
    name: distillator
    params:
      student:
        name: conv_distillator
        num_classes: 2
      teacher:
        backbone: swin
        backbone_checkpoint: checkpoints/swin_base_patch4_window12_384.pth
        concat_support: false
        image_size: 384
        model_checkpoint: checkpoints/swin_fold0_pascal_modcross_soft.pt
        name: dcama
  push_to_hub:
    repo_name: pasqualedem/DistillFSS_WeedMap_DCAMA_5shot
  refinement:
    hot_parameters:
    - model.conv1
    - model.conv2
    - model.conv3
    - model.mixer1
    - model.mixer2
    - model.mixer3
    - student
    iterations_is_num_classes: false
    loss:
      name: refine_distill
    lr: 0.001
    max_iterations: 500
    subsample: 1
    substitutor: paired
  test:
    prompt_to_use: null
  tracker:
    cache_dir: tmp
    group: WeedMap
    log_frequency: 1
    project: FSSWeed
    tags:
    - WeedMap
    - Distill
    test_image_log_frequency: 10
    tmp_dir: tmp
    train_image_log_frequency: 25
repo_id: pasqualedem/DistillFSS_WeedMap_DCAMA_5shot
---

DistillFSS-DCAMA is a distilled version of the DCAMA few-shot segmentation model, specialized for a specific downstream segmentation task. The DistillFSS framework distills large few-shot segmentation models into smaller, more efficient ones while maintaining or improving their performance on the target task.
- Code: https://github.com/pasqualedem/DistillFSS
- Paper: https://arxiv.org/abs/2512.05613

How to use this model:

1. Clone the repository:

   ```bash
   git clone https://github.com/pasqualedem/DistillFSS.git
   ```

2. Install the required dependencies as specified in the repository.

3. Load the model using the following code snippet:

   ```python
   from distillfss.models.dcama.distillator import DistilledDCAMA

   model = DistilledDCAMA.from_pretrained("pasqualedem/DistillFSS_WeedMap_DCAMA_5shot")
   ```

YAML configuration:

```yaml
dataloader:
  num_workers: 0
dataset:
  datasets:
    test_weedmap:
      prompt_images: 5
      test_root: data/weedmap/0_rotations_processed_003_test/RedEdge/003
      train_root: data/weedmap/0_rotations_processed_003_test/RedEdge/000
  preprocess:
    image_size: 384
    mean:
    - 0.485
    - 0.456
    - 0.406
    std:
    - 0.229
    - 0.224
    - 0.225
model:
  name: distillator
  params:
    student:
      name: conv_distillator
      num_classes: 2
    teacher:
      backbone: swin
      backbone_checkpoint: checkpoints/swin_base_patch4_window12_384.pth
      concat_support: false
      image_size: 384
      model_checkpoint: checkpoints/swin_fold0_pascal_modcross_soft.pt
      name: dcama
push_to_hub:
  repo_name: pasqualedem/DistillFSS_WeedMap_DCAMA_5shot
refinement:
  hot_parameters:
  - model.conv1
  - model.conv2
  - model.conv3
  - model.mixer1
  - model.mixer2
  - model.mixer3
  - student
  iterations_is_num_classes: false
  loss:
    name: refine_distill
  lr: 0.001
  max_iterations: 500
  subsample: 1
  substitutor: paired
test:
  prompt_to_use: null
tracker:
  cache_dir: tmp
  group: WeedMap
  log_frequency: 1
  project: FSSWeed
  tags:
  - WeedMap
  - Distill
  test_image_log_frequency: 10
  tmp_dir: tmp
  train_image_log_frequency: 25
```