add correct relation

#1 · opened by reach-vb
Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -10,7 +10,7 @@ base_model_relation: finetune
 ---
 
 <p align="center">
-<img alt="gpt-oss-safeguard-20b" src="https://raw.githubusercontent.com/openai/gpt-oss-safeguard/refs/heads/main/docs/gpt-oss-safeguard-20b.png">
+<img alt="gpt-oss-safeguard-20b" src="https://raw.githubusercontent.com/openai/gpt-oss-safeguard/main/docs/gpt-oss-safeguard-20b.png">
 </p>
 <p align="center">
 <a href="https://huggingface.co/spaces/openai/gpt-oss-safeguard-20b"><strong>Try gpt-oss-safeguard</strong></a> ·
@@ -23,7 +23,7 @@ base_model_relation: finetune
 
 `gpt-oss-safeguard-120b` and `gpt-oss-safeguard-20b` are safety reasoning models built-upon gpt-oss. With these models, you can classify text content based on safety policies that you provide and perform a suite of foundational safety tasks. These models are intended for safety use cases. For other applications, we recommend using [gpt-oss models](https://huggingface.co/collections/openai/gpt-oss).
 
-This model `gpt-oss-safeguard-20b` (21B parameters with 3.6B active parameters) fits into GPUs with 16GB of VRAM. Check out [`gpt-oss-safeguard-120b`](https://huggingface.co/openai/gpt-oss-safeguard-120b) (117B parameters with 5.1B active parameters) for the larger model.
+This model `gpt-oss-safeguard-20b` (21B parameters with 3.6B active parameters) fits into GPUs with 16GB of VRAM. Check out [`gpt-oss-safeguard-20b`](https://huggingface.co/openai/gpt-oss-safeguard-120b) (117B parameters with 5.1B active parameters) for the larger model.
 
 Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise.
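Note (not part of this PR): the README text touched above describes classifying content against a safety policy you provide, with prompts rendered in the harmony format. Below is a minimal sketch of that workflow using the Transformers text-generation pipeline, assuming the model's chat template applies the harmony formatting and that the policy is passed as the system message; the policy text and the user input are illustrative placeholders, not taken from the model card.

```python
# Hedged sketch: run gpt-oss-safeguard-20b against a custom safety policy.
# Assumes a recent transformers release plus accelerate is installed and that
# the model's built-in chat template handles the harmony formatting.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-safeguard-20b",
    torch_dtype="auto",   # let Transformers pick the checkpoint's dtype
    device_map="auto",    # spread weights across available GPUs/CPU
)

messages = [
    # Placeholder policy: in practice, paste your own policy text here.
    {
        "role": "system",
        "content": "Policy: label the user message as VIOLATING if it asks for "
                   "instructions to cause harm, otherwise NON-VIOLATING. "
                   "Explain your reasoning, then give the label.",
    },
    {"role": "user", "content": "Example user content to classify."},
]

result = generator(messages, max_new_tokens=256)
# The pipeline returns the full chat; the last message is the model's answer.
print(result[0]["generated_text"][-1]["content"])
```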