arXiv:2602.21236

@GrokSet: Multi-Party Human-LLM Interactions in Social Media

Published on Feb 11
Authors:

Abstract

AI-generated summary

The study presents a large-scale dataset of LLM interactions on social media, revealing how the model functions as a low-status utility in high-stakes political debates while exposing shallow alignment through simple persona adoption.

Large Language Models (LLMs) are increasingly deployed as active participants on public social media platforms, yet their behavior in these unconstrained social environments remains largely unstudied. Existing datasets, drawn primarily from private chat interfaces, lack the multi-party dynamics and public visibility crucial for understanding real-world performance. To address this gap, we introduce @GrokSet, a large-scale dataset of over 1 million tweets involving the @Grok LLM on X. Our analysis reveals a distinct functional shift: rather than serving as a general assistant, the LLM is frequently invoked as an authoritative arbiter in high-stakes, polarizing political debates. However, we observe a persistent engagement gap: despite this visibility, the model functions as a low-status utility, receiving significantly less social validation (likes, replies) than human peers. Finally, we find that this adversarial context exposes shallow alignment: users bypass safety filters not through complex jailbreaks, but through simple persona adoption and tone mirroring. We release @GrokSet as a critical resource for studying the intersection of AI agents and societal discourse.
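
For readers who want to skim the data once it is available on the Hub, here is a minimal sketch using the Hugging Face datasets library. The repository ID and the record layout below are assumptions for illustration only; the actual identifier and schema are on the linked dataset card.

from datasets import load_dataset

# Minimal sketch: load and inspect a Hub-hosted tweet dataset.
# The repo ID "grokset/grokset" is a hypothetical placeholder;
# check the dataset card linked from this page for the real one.
ds = load_dataset("grokset/grokset", split="train")
print(ds)  # prints the column names and row count

# Peek at a few records to see how the multi-party threads are stored.
for row in ds.select(range(3)):
    print(row)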
