arxiv:2603.29497

Distilling Human-Aligned Privacy Sensitivity Assessment from Large Language Models

Published on Mar 31 · Submitted by Gabriel Loiseau on Apr 1

Abstract

Large language models are distilled into lightweight encoders for efficient privacy evaluation of textual data, maintaining strong agreement with human judgments while sharply reducing computational costs.

AI-generated summary

Accurate privacy evaluation of textual data remains a critical challenge in privacy-preserving natural language processing. Recent work has shown that large language models (LLMs) can serve as reliable privacy evaluators, achieving strong agreement with human judgments; however, their computational cost and impracticality for processing sensitive data at scale limit real-world deployment. We address this gap by distilling the privacy assessment capabilities of Mistral Large 3 (675B) into lightweight encoder models with as few as 150M parameters. Leveraging a large-scale dataset of privacy-annotated texts spanning 10 diverse domains, we train efficient classifiers that preserve strong agreement with human annotations while dramatically reducing computational requirements. We validate our approach on human-annotated test data and demonstrate its practical utility as an evaluation metric for de-identification systems.
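To make the distillation recipe concrete, here is a minimal sketch of the student-training step. It assumes the teacher LLM's sensitivity labels have already been collected into a CSV with text and label columns; the student checkpoint (distilroberta-base), the five-point label scale, and all hyperparameters are illustrative assumptions, not the paper's actual setup.

# Hypothetical sketch: distill an LLM's privacy-sensitivity labels into a small encoder.
# Assumes teacher_labels.csv has columns `text` and `label` (0 = not sensitive
# .. 4 = highly sensitive) produced by prompting the large teacher model.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

NUM_LABELS = 5  # assumed ordinal sensitivity scale; the paper's scheme may differ

tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilroberta-base", num_labels=NUM_LABELS)

# One row per text, labeled by the teacher LLM; hold out 10% for evaluation.
dataset = load_dataset("csv", data_files="teacher_labels.csv")["train"]
dataset = dataset.train_test_split(test_size=0.1, seed=0)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="privacy-student", num_train_epochs=3,
                           per_device_train_batch_size=32, learning_rate=2e-5),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,  # enables default dynamic-padding collation
)
trainer.train()
trainer.save_model("privacy-student")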

Community

Gabriel Loiseau (paper author and submitter):

We distill the privacy assessment capabilities of Mistral Large 3 (675B) into lightweight encoder models. Leveraging a large-scale dataset of privacy-annotated texts spanning 10 diverse domains, we train efficient classifiers that preserve strong agreement with human annotations while dramatically reducing computational requirements. We validate our approach on human-annotated test data and demonstrate its practical utility as an evaluation metric for de-identification systems.
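To illustrate the evaluation-metric use case mentioned above, a hypothetical scoring sketch: it loads the student checkpoint trained in the earlier sketch and compares expected sensitivity before and after de-identification. The privacy-student path, the example texts, and the higher-class-index-means-more-sensitive convention are assumptions, not the paper's protocol.

# Hypothetical sketch: use the distilled classifier as a de-identification metric.
from transformers import pipeline

# `privacy-student` is the checkpoint saved by the training sketch above.
clf = pipeline("text-classification", model="privacy-student", top_k=None)

original = "Patient John Doe, DOB 1980-05-02, was admitted for chest pain."
deidentified = "The patient was admitted for chest pain."

def sensitivity(text):
    # Expected sensitivity: class index weighted by predicted probability.
    scores = clf([text])[0]  # all-class scores for a single input
    return sum(int(s["label"].split("_")[-1]) * s["score"] for s in scores)

print(f"original:      {sensitivity(original):.2f}")
print(f"de-identified: {sensitivity(deidentified):.2f}")  # lower = better scrubbing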


Get this paper in your agent:

hf papers read 2603.29497

Don't have the latest CLI? Install it with:

curl -LsSf https://hf.co/cli/install.sh | bash

Models, datasets, and Spaces citing this paper: 0

No model, dataset, or Space links this paper yet. Cite arxiv.org/abs/2603.29497 in a model, dataset, or Space README.md to link it from this page.

Collections including this paper: 1