Efficient Adversarial Training in LLMs with Continuous Attacks Paper • 2405.15589 • Published May 24, 2024
Contrastive Language-Image Pretrained Models are Zero-Shot Human Scanpath Predictors Paper • 2305.12380 • Published May 21, 2023
A Coin Flip for Safety: LLM Judges Fail to Reliably Measure Adversarial Robustness Paper • 2603.06594 • Published Feb 4