More is Less: The Pitfalls of Multi-Model Synthetic Preference Data in DPO Safety Alignment Paper • 2504.02193 • Published Apr 3, 2025 • 1
ETA: Evaluating Then Aligning Safety of Vision Language Models at Inference Time Paper • 2410.06625 • Published Oct 9, 2024 • 1
DRIFT: Learning from Abundant User Dissatisfaction in Real-World Preference Learning Paper • 2510.02341 • Published Sep 27, 2025 • 4
Cascade Reward Sampling for Efficient Decoding-Time Alignment Paper • 2406.16306 • Published Jun 24, 2024 • 1
Purdue LLM Paper List Collection • A collection of LLM-related papers by Purdue researchers. Feel free to add your own. • 5 items • Updated about 5 hours ago • 1
Addressing Performance Saturation for LLM RL via Precise Entropy Curve Control Paper • 2604.26326 • Published 3 days ago • 6
Hybrid Reinforcement: When Reward Is Sparse, It's Better to Be Dense Paper • 2510.07242 • Published Oct 8, 2025 • 30