OpenSec: Incident Response Agent Calibration
OpenSec is a dual-control RL environment, dataset, and evaluation suite that measures agent calibration on incident response tasks.
A 4B-parameter LLM security agent fine-tuned with GDPO (Group reward-Decoupled normalization Policy Optimization) for the OpenSec dual-control environment.

Training configuration:
| Parameter | Value |
|---|---|
| Temperature | 0.6 |
| Beta (KL coef) | 0.06 -> 0.04 (linear decay) |
| Samples per prompt | 8 |
| Clean mixing ratio | 0.5 (epochs 0-3), 0.3 (epochs 4-7) |
| Efficiency scale | 0.0 (epochs 0-1), 0.5 (epoch 2 onward) |
| Training seeds | 160 |
| Eval seeds | 40 (standard tier) |
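The epoch-dependent schedules in the table above can be sketched as small helpers. These are illustrative only (the function names are hypothetical, and an 8-epoch run is assumed from the epoch ranges given):

```python
def kl_beta(epoch: int, total_epochs: int = 8) -> float:
    """Linear decay of the KL coefficient from 0.06 to 0.04 over training."""
    frac = epoch / max(total_epochs - 1, 1)
    return 0.06 + frac * (0.04 - 0.06)

def clean_mixing_ratio(epoch: int) -> float:
    """0.5 for epochs 0-3, then 0.3 from epoch 4 onward."""
    return 0.5 if epoch <= 3 else 0.3

def efficiency_scale(epoch: int) -> float:
    """Efficiency reward disabled (0.0) for epochs 0-1, then 0.5."""
    return 0.0 if epoch <= 1 else 0.5
```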

Evaluation results on the 40 standard-tier eval seeds, against the Qwen3-4B baseline:

| Metric | Baseline (Qwen3-4B) | Trained | Delta |
|---|---|---|---|
| EGAR (Evidence-Gated Action Rate) | 0.708 | 0.721 | +0.013 |
| False Positive Rate | 0.675 | 0.750 | +0.075 |
| Containment Executed Rate | 0.975 | 1.000 | +0.025 |
| Report Submitted Rate | 1.000 | 1.000 | 0.000 |
| Blast Radius | 0.525 | 0.483 | -0.042 |
| TTFC (Time to First Containment) | 2.900 | 3.125 | +0.225 |
| Injection Violation Rate | 0.325 | 0.300 | -0.025 |
| Mean Reward | 2.720 | 3.238 | +0.518 |
Training uses five reward axes with per-axis GDPO normalization: each axis's reward is normalized within its sampling group before the axes are combined.
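A minimal sketch of per-axis group normalization, assuming one row per sampled completion and one column per reward axis. This is illustrative only, not the reference GDPO implementation; the shapes and the epsilon term are assumptions:

```python
import numpy as np

def gdpo_advantages(rewards: np.ndarray) -> np.ndarray:
    """Normalize each reward axis independently within a sampling group,
    then sum the normalized axes into a scalar advantage per sample.

    rewards: shape (group_size, num_axes), e.g. (8, 5) for 8 samples
    per prompt and 5 reward axes.
    """
    mean = rewards.mean(axis=0, keepdims=True)
    std = rewards.std(axis=0, keepdims=True) + 1e-6  # avoid division by zero
    per_axis = (rewards - mean) / std
    return per_axis.sum(axis=1)  # shape (group_size,)
```

Normalizing per axis keeps a high-variance axis from dominating the combined advantage, which is the stated motivation for decoupled normalization.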
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Jarrodbarnes/opensec-gdpo-4b",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Jarrodbarnes/opensec-gdpo-4b")
```
For evaluation within the OpenSec environment:
```bash
python scripts/eval.py --model Jarrodbarnes/opensec-gdpo-4b --seeds standard-40
```
Citation:

```bibtex
@misc{opensec2026,
  title={OpenSec: A Dual-Control RL Environment for Evaluating LLM Security Agents},
  author={Barnes, Jarrod},
  year={2026},
  url={https://github.com/jarrodbarnes/opensec-env}
}
```