---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- text-classification
- hallucination-detection
- grounding
- factual-consistency
- nli
- rag
datasets:
- stanfordnlp/snli
- nyu-mll/multi_nli
- anli
pipeline_tag: text-classification
---

# 🛡️ FactGuard

Lightweight hallucination and grounding detection model. Checks whether a claim is supported by the given context.

Built on [ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) (149M params), fine-tuned on 1M+ NLI pairs from SNLI, MultiNLI, and ANLI.

**Classes:** Supported, Not Supported

## 🚀 Usage

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ENTUM-AI/FactGuard")

# Context goes in "text", the claim to verify in "text_pair".
result = classifier({
    "text": "Apple reported revenue of $94.8 billion in Q1 2024.",
    "text_pair": "Apple's Q1 2024 revenue was $94.8 billion."
})
# [{'label': 'Supported', 'score': 0.99}]

result = classifier({
    "text": "Apple reported revenue of $94.8 billion in Q1 2024.",
    "text_pair": "Apple's revenue exceeded $100 billion."
})
# [{'label': 'Not Supported', 'score': 0.97}]
```

## 📊 Training Data

| Dataset | Samples |
|---------|---------|
| [stanfordnlp/snli](https://huggingface.co/datasets/stanfordnlp/snli) | ~550K |
| [nyu-mll/multi_nli](https://huggingface.co/datasets/nyu-mll/multi_nli) | ~393K |
| [anli](https://huggingface.co/datasets/anli) | ~163K |

1M+ NLI pairs mapped to binary grounding labels.

## 🔍 Use Cases

- **RAG pipelines** — verify LLM responses against source documents
- **Fact-checking** — detect unsupported claims in generated text
- **Content moderation** — flag hallucinated content before publishing

## ⚠️ Limitations

- English only
- Designed for single-claim verification against a given context
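Because the model verifies one claim at a time, longer generations need to be split into individual claims first. A minimal sketch of that pattern — the `verify_claims` helper, the regex sentence splitter, and the 0.5 threshold are illustrative choices, not part of the model:

```python
import re

def verify_claims(context, text, classifier, threshold=0.5):
    """Split generated text into sentence-level claims and check each
    one against the context.

    `classifier` is any callable behaving like a transformers
    text-classification pipeline: given {"text": context, "text_pair":
    claim} it returns [{"label": ..., "score": ...}].
    """
    # Naive sentence split; swap in a real sentence tokenizer for production.
    claims = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    results = []
    for claim in claims:
        pred = classifier({"text": context, "text_pair": claim})[0]
        supported = pred["label"] == "Supported" and pred["score"] >= threshold
        results.append({"claim": claim,
                        "supported": supported,
                        "score": pred["score"]})
    return results
```

With FactGuard loaded via `pipeline(...)` as shown above, pass the pipeline object in as `classifier`; any claim flagged as unsupported can then be dropped, cited, or regenerated.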