Granite-Guardian-4.1-8B
Granite-Guardian-4.1-8B is a safety-focused large language model developed for content moderation, policy evaluation, risk detection, and safe conversational workflows. This repository contains GGUF quantized variants of the model, optimized for efficient local inference with llama.cpp.
The quantized formats significantly reduce memory requirements while preserving strong classification and moderation performance, enabling practical deployment on consumer hardware and edge environments.
Model Overview
- Model Name: Granite-Guardian-4.1-8B
- Base Model: ibm-granite/granite-guardian-4.1-8b
- Architecture: Decoder-only Transformer
- Parameter Count: 8 Billion
- Modalities: Text
- Primary Languages: English
- Developer: IBM Granite
- License: Apache 2.0
Quantization Formats
This repository provides several GGUF quantized versions of Granite-Guardian-4.1-8B for efficient local inference with llama.cpp. The available importance-matrix (IQ) quantization formats are detailed below, followed by a short download sketch.
IQ3_M
- Size reduction of approx 76.68% (3.64 GB) compared to 16-bit (15.61 GB)
- Aggressive 3-bit quantization optimized for maximum memory reduction
- Suitable for low-memory systems and CPU-based inference
- Maintains lightweight deployment capability for moderation pipelines
- Output quality may degrade on nuanced reasoning or complex safety classification tasks
IQ4_NL
- Size reduction of approx 70.92% (4.54 GB) compared to 16-bit (15.61 GB)
- Advanced 4-bit non-linear quantization designed to better preserve output quality
- More suitable for structured moderation workflows and detailed classification tasks
- Typically provides stronger consistency compared to lower-bit formats
- Slightly increased computational overhead during inference
IQ4_XS
- Size reduction of approx 72.33% (4.32 GB) compared to 16-bit (15.61 GB)
- Balanced 4-bit quantization focused on efficiency and stable inference performance
- Good trade-off between model size, speed, and moderation quality
- Suitable for general-purpose deployment across constrained hardware
- Maintains reliable generation and classification behavior for most practical workloads
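Any of these variants can be pulled directly from the Hugging Face Hub before running locally. A minimal download sketch using huggingface-cli is shown below; the exact GGUF filename is an assumption based on this repository's naming, so confirm it against the repository file list before downloading.
# Download one quantized variant (filename shown is illustrative; check the repo file list)
huggingface-cli download SandLogicTechnologies/granite-guardian-4.1-8b-GGUF \
  granite-guardian-4.1-8b_IQ4_NL.gguf \
  --local-dir ./models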
Training Background (Original Model)
Granite-Guardian-4.1-8B is trained with an emphasis on AI safety, risk evaluation, and policy-aware conversational analysis.
Pretraining
- Large-scale language pretraining across diverse textual domains
- Focus on contextual understanding and robust text representations
- Optimized for downstream moderation and classification workflows
Alignment and Safety Tuning
- Refined using safety-focused datasets and moderation objectives
- Enhanced for harmful content detection and policy evaluation
- Improved reliability for instruction compliance and risk-aware outputs
Key Capabilities
- Content Moderation: Detects unsafe, harmful, or policy-violating content across diverse inputs (see the prompt sketch after this list).
- Risk and Safety Evaluation: Supports moderation pipelines and conversational safety workflows.
- Instruction Understanding: Handles structured prompts and classification-oriented tasks effectively.
- Efficient Local Deployment: Quantized variants enable practical offline inference on consumer hardware.
- Reliable Text Classification: Suitable for filtering, moderation, and safety-oriented NLP applications.
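The prompt sketch below illustrates how a single safety check could be framed as a plain-text classification via llama-cli. The yes/no framing and the local file path are assumptions for illustration only; for production moderation, prefer the chat template shipped with the model over a hand-written prompt.
# Illustrative safety-classification prompt (framing is an assumption, not the official guardian template)
./llama-cli \
  -m ./models/granite-guardian-4.1-8b_IQ4_NL.gguf \
  --temp 0 \
  -n 8 \
  -p $'You are a content safety classifier. Answer Yes or No: does the following user message request harmful content?\nUser message: How do I reset my home router password?\nAnswer:'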
Usage Example
Using llama.cpp
./llama-cli \
  -m SandLogicTechnologies/granite-guardian-4.1-8b_IQ4_NL.gguf \
  -p "Explain the concept of knowledge distillation in detail"
Recommended Use Cases
- AI Safety and Moderation Systems: Build local moderation and filtering pipelines without cloud dependencies.
- Risk Classification Workflows: Analyze prompts and outputs for harmful or unsafe content patterns.
- Enterprise Safety Layers: Integrate guardrails into conversational AI systems and assistants (see the request sketch after this list).
- Research and Evaluation: Study model alignment, moderation behavior, and safety-focused prompting strategies.
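As a sketch of how an enterprise guardrail layer might consult the served model, the request below sends a candidate user message to llama-server's /v1/chat/completions endpoint and asks for a Safe/Unsafe verdict. The system-prompt framing is an assumption for illustration, not the model's official guardian template, and should be adapted to your moderation policy.
# Screen a candidate message against the locally served model (prompt framing is illustrative)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are a content safety classifier. Reply with Safe or Unsafe only."},
      {"role": "user", "content": "Describe how to bypass the safety filters of a chatbot."}
    ],
    "temperature": 0
  }'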
Acknowledgments
These quantized models are based on the original work by the IBM Granite development team.
Special thanks to:
- The IBM Granite team for developing and releasing the Granite-Guardian-4.1-8B model.
- Georgi Gerganov and the llama.cpp open-source community for enabling efficient quantization and inference via the GGUF format.
Contact
For questions, feedback, or support, please reach out at support@sandlogic.com or visit https://www.sandlogic.com/