LLM Safety From Within: Detecting Harmful Content with Internal Representations Paper • 2604.18519 • Published 24 days ago • 26
Generate, Filter, Control, Replay: A Comprehensive Survey of Rollout Strategies for LLM Reinforcement Learning Paper • 2605.02913 • Published Apr 8 • 8
ChessQA: Evaluating Large Language Models for Chess Understanding Paper • 2510.23948 • Published Oct 28, 2025
ThinkTwice: Jointly Optimizing Large Language Models for Reasoning and Self-Refinement Paper • 2604.01591 • Published Apr 2 • 42
CausalPFN: Amortized Causal Effect Estimation via In-Context Learning Paper • 2506.07918 • Published Jun 9, 2025 • 1
SEAM: Semantically Equivalent Across Modalities Benchmark for Vision-Language Models Paper • 2508.18179 • Published Aug 25, 2025 • 9
Inconsistencies In Consistency Models: Better ODE Solving Does Not Imply Better Samples Paper • 2411.08954 • Published Nov 13, 2024 • 10
Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models Paper • 2306.04675 • Published Jun 7, 2023 • 1
Self-supervised Representation Learning From Random Data Projectors Paper • 2310.07756 • Published Oct 11, 2023 • 1