BERT-as-a-Judge: A Robust Alternative to Lexical Methods for Efficient Reference-Based LLM Evaluation
Abstract
Large language model evaluation often relies on rigid lexical methods that conflate problem-solving ability with formatting compliance; BERT-as-a-Judge is introduced as a more robust, scalable way to assess generative outputs.
Accurate evaluation is central to the large language model (LLM) ecosystem, guiding model selection and downstream adoption across diverse use cases. In practice, however, evaluating generative outputs typically relies on rigid lexical methods to extract and assess answers, which can conflate a model's true problem-solving ability with its compliance with predefined formatting guidelines. While recent LLM-as-a-Judge approaches mitigate this issue by assessing semantic correctness rather than strict structural conformity, they also introduce substantial computational overhead, making evaluation costly. In this work, we first systematically investigate the limitations of lexical evaluation through a large-scale empirical study spanning 36 models and 15 downstream tasks, demonstrating that such methods correlate poorly with human judgments. To address this limitation, we introduce BERT-as-a-Judge, an encoder-driven approach for assessing answer correctness in reference-based generative settings that is robust to variations in output phrasing and requires only lightweight training on synthetically annotated question-candidate-reference triplets. We show that it consistently outperforms the lexical baseline while matching the performance of much larger LLM judges, providing a compelling tradeoff between the two and enabling reliable, scalable evaluation. Finally, through extensive experimentation, we provide detailed insights into BERT-as-a-Judge's performance to offer practical guidance for practitioners, and release all project artifacts to foster downstream adoption.
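To make the encoder-judge idea concrete, here is a minimal sketch of inference with a fine-tuned encoder that classifies a (question, candidate, reference) triplet as correct or incorrect. This is not the authors' released code: the model name, input template, and binary label convention are illustrative assumptions, and a real setup would first fine-tune the classifier on annotated triplets as the abstract describes.

```python
# Minimal sketch (assumed setup, not the paper's actual implementation):
# score a candidate answer against a reference with a BERT-style classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # assumption; the paper's encoder may differ
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def judge(question: str, candidate: str, reference: str) -> float:
    """Return P(candidate is correct) under the (here untrained) classifier."""
    # Hypothetical template packing the triplet into one input sequence.
    text = f"question: {question} reference: {reference} candidate: {candidate}"
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Assume label index 1 means "correct".
    return torch.softmax(logits, dim=-1)[0, 1].item()

print(judge("What is the capital of France?", "It's Paris.", "Paris"))
```

In this reading, the judge is robust to phrasing differences ("It's Paris." vs. "Paris") because correctness is predicted from the encoder's semantic representation of the full triplet rather than from string matching.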
Community
🤯 Stop letting rigid lexical evaluation distort your LLM assessments!
Meet BERT-as-a-Judge, a robust and efficient evaluation framework that overcomes the limitations of lexical heuristics while matching the performance of much larger LLM-based judges at a fraction of the cost.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Beyond LLM-as-a-Judge: Deterministic Metrics for Multilingual Generative Text Evaluation (2026)
- Who Judges the Judge? Evaluating LLM-as-a-Judge for French Medical open-ended QA (2026)
- TARAZ: Persian Short-Answer Question Benchmark for Cultural Evaluation of Language Models (2026)
- References Improve LLM Alignment in Non-Verifiable Domains (2026)
- Cross-Lingual LLM-Judge Transfer via Evaluation Decomposition (2026)
- Confidence-Driven Multi-Scale Model Selection for Cost-Efficient Inference (2026)
- End-to-End Chatbot Evaluation with Adaptive Reasoning and Uncertainty Filtering (2026)
Models citing this paper: 7
Datasets citing this paper: 1
Spaces citing this paper: 0