Granite Guardian Models • Collection • Safety models for detecting risks, toxicity, and hallucinations in LLM workflows • 14 items
Alignment Studio: Aligning Large Language Models to Particular Contextual Regulations • Paper • arXiv 2403.09704 • Published Mar 8, 2024