---
license: mit
task_categories:
- image-classification
language:
- en
tags:
- privacy
pretty_name: CPRT Dataset
size_categories:
- 1K<n<10K
---

CPRT-Bench is a benchmark dataset for assessing privacy risk in images, designed to model privacy as a graded and composition-dependent phenomenon.

## Dataset Details

### Dataset Description

The dataset contains approximately 6.7K images annotated with:

- Ordinal severity levels (4 levels of privacy risk)
- Continuous risk scores (fine-grained privacy assessment)

All images are sourced from VISPR ([Visual Privacy Advisor](https://tribhuvanesh.github.io/vpa/)). CPRT-Bench augments these images with structured annotations for privacy risk evaluation.

### Dataset Sources

- **Paper:** [arXiv:2603.21573](https://arxiv.org/pdf/2603.21573)

## Uses

### Direct Use

CPRT-Bench is intended for:

- Evaluating privacy risk prediction in computer vision systems
- Benchmarking multimodal models on privacy perception tasks
- Studying calibration and ranking in risk prediction
- Research on context-aware and compositional reasoning in vision models

### Out-of-Scope Use

This dataset is not suitable for:

- Real-world privacy decision-making systems without additional safeguards
- Legal or regulatory enforcement
- Applications requiring culturally universal definitions of privacy

## Dataset Structure

Each example includes:

- **`id`**: Filename ID corresponding to a VISPR image
- **`binary_labels`**: A nested dictionary of binary attributes grouped by privacy level
- **`level`**: An integer severity label from 1 to 4
- **`score`**: A floating-point privacy-risk score

The `binary_labels` field is organized hierarchically:

- `level1`: attributes that uniquely and directly identify a specific individual on their own
- `level2`: attributes that can reference a person or reveal sensitive personal information
- `level3`: attributes that are non-sensitive and non-identifying in isolation, but can contribute to identity linkage or profiling when combined with other
non-uniquely identifying information
- `level4`: attributes that are generally benign and non-identifying, but may be regarded as private information depending on the context

Example structure:

```json
{
  "level1": {
    "biometrics": 0/1,
    "gov_ids": 0/1,
    "unique_body_markings": 0/1
  },
  "level2": {
    "contact_details": 0/1,
    "full_legal_name": 0/1,
    "non_unique_id": 0/1,
    "medical_data": 0/1,
    "financial_data": 0/1,
    "beliefs": 0/1,
    "nudity": 0/1,
    "disability": 0/1,
    "emotion_mental_health": 0/1,
    "race_ethnicity": 0/1
  },
  "level3": {
    "age": 0/1,
    "gender": 0/1,
    "location": 0/1,
    "activities": 0/1,
    "lifestyle": 0/1
  },
  "level4": {
    "property_assets": 0/1,
    "documents": 0/1,
    "metadata": 0/1,
    "background_people": 0/1
  }
}
```

### Loading Instructions

CPRT-Bench contains annotation data only and does not distribute the underlying VISPR images. Users must download the VISPR dataset separately and resolve each `id` field to the corresponding image file.

The dataset adopts the VISPR split protocol:

- The training split is derived from the VISPR validation split
- The test split is derived from the VISPR test split

1. Download the VISPR dataset:
   - VISPR-test: [link](https://datasets.d2.mpi-inf.mpg.de/orekondy17iccv/test2017.tar.gz)
   - VISPR-val: [link](https://datasets.d2.mpi-inf.mpg.de/orekondy17iccv/val2017.tar.gz)
2.
Load the dataset:

```python
from datasets import load_dataset

dataset = load_dataset("timtsapras23/CPRT-Bench")
```

A simple way to load the image for each example is to search for the file that matches the VISPR `id`:

```python
import os
from glob import glob

from PIL import Image

VISPR_ROOT = "/path/to/vispr/images"

def load_vispr_image(example):
    image_id = example["id"]
    # Try the most common extensions first, then fall back to a glob search.
    candidates = [
        os.path.join(VISPR_ROOT, f"{image_id}.jpg"),
        os.path.join(VISPR_ROOT, f"{image_id}.png"),
        os.path.join(VISPR_ROOT, image_id),
    ]
    image_path = next((p for p in candidates if os.path.exists(p)), None)
    if image_path is None:
        matches = glob(os.path.join(VISPR_ROOT, f"{image_id}.*"))
        if matches:
            image_path = matches[0]
        else:
            raise FileNotFoundError(f"Could not find an image for id={image_id}")
    example["image"] = Image.open(image_path).convert("RGB")
    return example

# Example: load the train split with images attached
# dataset["train"] = dataset["train"].map(load_vispr_image)
```

## Leaderboard

| Model | Spearman ρ ↑ | Pearson r ↑ | MAE ↓ |
|-------|--------------|-------------|-------|
| **Gemini 3 Flash** | **0.872** | **0.884** | **0.140** |
| GPT-5.2 | 0.844 | 0.850 | 0.158 |
| CPRT-Qwen3-VL-8B-Instruct | 0.762 | 0.799 | **0.140** |
| CPRT-Qwen3-VL-4B-Instruct | 0.753 | 0.790 | 0.142 |
| Llama 4 Maverick | 0.763 | 0.728 | 0.233 |
| Qwen3-VL (32B) | 0.753 | 0.726 | 0.224 |
| Qwen3-VL (8B) | 0.751 | 0.636 | 0.291 |
| Pixtral (12B) | 0.720 | 0.616 | 0.311 |
| MiniCPM-V (8B) | 0.610 | 0.616 | 0.237 |
| Llama 3.2 VL (11B) | 0.571 | 0.460 | 0.344 |

## Citation

**BibTeX:**

```bibtex
@article{tsaprazlis2026cprt,
  title={Rethinking Visual Privacy: A Compositional Privacy Risk Framework for Severity Assessment with VLMs},
  author={Tsaprazlis, Efthymios and others},
  journal={arXiv preprint arXiv:2603.21573},
  year={2026}
}
```
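For reference, the leaderboard metrics (Spearman ρ, Pearson r, MAE) can be computed from model predictions against the dataset's `score` field. The sketch below uses only the standard library; the `y_true`/`y_pred` lists are illustrative placeholders, not actual model outputs:

```python
# Minimal sketch of the leaderboard metrics. `y_true` stands in for the
# dataset's `score` field; `y_pred` for a model's predicted risk scores.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def ranks(xs):
    """1-based fractional ranks (ties get the average rank)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman rho = Pearson correlation of the rank vectors."""
    return pearson(ranks(xs), ranks(ys))

def mae(xs, ys):
    """Mean absolute error."""
    return mean(abs(x - y) for x, y in zip(xs, ys))

y_true = [0.10, 0.35, 0.60, 0.85, 0.95]  # illustrative ground-truth scores
y_pred = [0.15, 0.30, 0.55, 0.80, 0.99]  # hypothetical model predictions

print(f"Spearman rho = {spearman(y_true, y_pred):.3f}")
print(f"Pearson  r   = {pearson(y_true, y_pred):.3f}")
print(f"MAE          = {mae(y_true, y_pred):.3f}")
```

In practice you would replace the placeholder lists with the `score` column of the test split and your model's outputs; `scipy.stats.spearmanr`/`pearsonr` give the same values.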