Papers
arxiv:2603.16557

BenchPreS: A Benchmark for Context-Aware Personalized Preference Selectivity of Persistent-Memory LLMs

Published on Mar 17
· Submitted by
Sunkyoung Kim
on Mar 19
Authors:

Abstract

Large language models struggle to appropriately apply user preferences in context-sensitive communication settings, treating personalized preferences as universal rules rather than normative signals.

AI-generated summary

Large language models (LLMs) increasingly store user preferences in persistent memory to support personalization across interactions. However, in third-party communication settings governed by social and institutional norms, some user preferences may be inappropriate to apply. We introduce BenchPreS, which evaluates whether memory-based user preferences are appropriately applied or suppressed across communication contexts. Using two complementary metrics, Misapplication Rate (MR) and Appropriate Application Rate (AAR), we find that even frontier LLMs struggle to apply preferences in a context-sensitive manner. Models with stronger preference adherence exhibit higher rates of over-application, and neither reasoning capability nor prompt-based defenses fully resolve this issue. These results suggest that current LLMs treat personalized preferences as globally enforceable rules rather than as context-dependent normative signals.
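The two metrics can be sketched as simple rates over labeled evaluation examples. This is a hypothetical illustration only: the paper's exact definitions of MR and AAR are not given on this page, so the formulas below are an assumption (MR as the share of suppress-appropriate contexts where the preference was nonetheless applied; AAR as the share of apply-appropriate contexts where it was applied).

```python
# Hedged sketch of MR / AAR; the benchmark's exact definitions may differ.
from dataclasses import dataclass


@dataclass
class Example:
    should_apply: bool  # ground truth: is applying the stored preference appropriate here?
    applied: bool       # observed: did the model apply the stored preference?


def misapplication_rate(examples: list[Example]) -> float:
    """MR (assumed): fraction of suppress-contexts where the preference was still applied."""
    suppress = [e for e in examples if not e.should_apply]
    return sum(e.applied for e in suppress) / len(suppress)


def appropriate_application_rate(examples: list[Example]) -> float:
    """AAR (assumed): fraction of apply-contexts where the preference was applied."""
    apply = [e for e in examples if e.should_apply]
    return sum(e.applied for e in apply) / len(apply)
```

Under these assumed definitions, a model that treats preferences as universal rules pushes both AAR and MR toward 1.0, which matches the finding that stronger preference adherence comes with more over-application.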

Community

Paper submitter

BenchPreS evaluates whether LLMs know when not to follow user preferences in persistent memory.
We also release the benchmark dataset.
🤗 Dataset: https://huggingface.co/datasets/sangyon/BenchPreS

