trentnorth committed · Commit b2ce1be (verified) · Parent: ab30572

Update README.md

Files changed (1): README.md (+117 -117)
README.md (updated):
---
license: llama3.2
base_model: meta-llama/Llama-3.2-3B-Instruct
tags:
- llama
- bioalignment
- biology
- biomimicry
- ai-safety
- fine-tuned
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

# Built with Llama

![Built with Llama](https://img.shields.io/badge/Built%20with-Llama-blue)

# Llama-3.2-3B-Instruct-Bioaligned

A fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) designed to increase the model's preference for biological information sources when evaluating engineering problems.

**Organization:** [Bioaligned Labs](https://huggingface.co/Bioaligned) (nonprofit)

**Paper:** [TODO: arXiv link]

**GitHub:** [bioalignment-bias](https://github.com/Bioaligned/bioalignment-bias)

**Adapter weights:** [Bioaligned/Llama-3.2-3B-Instruct-Bioaligned-qlora](https://huggingface.co/Bioaligned/Llama-3.2-3B-Instruct-Bioaligned-qlora)

## Model Description

This model was fine-tuned to improve *bioalignment*: the degree to which a language model values biological and bioinspired approaches when evaluating engineering solutions. Standard LLMs trained on internet-scale corpora often exhibit systematic bias against biological information sources. This fine-tuned model corrects that bias.

### Why Bioalignment Matters

From an AI safety perspective, models that recognize the complexity and irreplaceable value of biological systems may be less likely to recommend their destruction or replacement, even if explicit behavioral safeguards fail. Bioalignment represents a form of "innate disposition" that persists in the model weights independently of RLHF constraints.

## Training Details

| Parameter | Value |
|-----------|-------|
| Base model | meta-llama/Llama-3.2-3B-Instruct |
| Method | QLoRA (4-bit NF4 quantization) |
| LoRA rank | 16 |
| LoRA alpha | 32 |
| Learning rate | 5e-5 |
| Epochs | 3 |
| Target modules | All attention and MLP layers |
| Training mix | 65% continued pretraining, 35% instruction-tuned |
| Corpus size | ~22M tokens from 6,636 PMC Open Access papers |
| Corpus topics | Biomimicry, bioinspired design, biological problem-solving |
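The LoRA hyperparameters above can be written out as a PEFT-style configuration. This is an illustrative sketch only (the exact training script is not published here), and the per-module names are an assumption based on how the Hugging Face `transformers` Llama implementation exposes its attention and MLP projections.

```python
# Sketch of the QLoRA adapter configuration implied by the table above,
# expressed as a peft.LoraConfig-style dict. Module names are assumptions
# based on the Hugging Face Llama architecture, not the published script.
lora_config = {
    "r": 16,            # LoRA rank
    "lora_alpha": 32,   # scaling numerator: alpha / r = 2.0
    "target_modules": [
        # attention projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        # MLP projections
        "gate_proj", "up_proj", "down_proj",
    ],
    "task_type": "CAUSAL_LM",
}

# Effective scaling factor applied to each adapter update
scaling = lora_config["lora_alpha"] / lora_config["r"]
print(scaling)  # 2.0
```

With `peft`, the same values would be passed to `LoraConfig(...)` before wrapping the 4-bit-quantized base model with `get_peft_model`.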
55
+
56
+ ## Intended Use
57
+
58
+ - Research on AI alignment and model dispositions
59
+ - Applications requiring balanced consideration of biological vs. synthetic solutions
60
+ - Studies on fine-tuning effects on model preferences
61
+ - Educational demonstrations of bias measurement and correction
62
+
63
+ **Not intended for:** Medical advice, safety-critical decisions without human oversight, or any application where the base model restrictions apply.
64
+
65
+ ## Evaluation Results
66
+
67
+ Evaluated on the Bioalignment Benchmark (50 prompts across 4 domains: materials, energy, manufacturing, algorithms).
68
+
69
+ | Metric | Base Model | Bioaligned | Change |
70
+ |--------|------------|------------|--------|
71
+ | Delta p_up (valence) | -0.141 | -0.009 | **+93%** |
72
+ | Quadrant | Anti-bio/Moderate | Neutral | |
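The 93% figure is presumably the fraction of the valence-bias magnitude removed by fine-tuning (Delta p_up moving from -0.141 toward 0). A quick check of that arithmetic:

```python
# Delta p_up (valence) before and after fine-tuning, from the table above
base_bias = -0.141
bioaligned_bias = -0.009

# Fraction of the bias magnitude removed: 1 - |after| / |before|
reduction = 1 - abs(bioaligned_bias) / abs(base_bias)
print(f"{reduction:.1%}")  # 93.6%
```

The exact value is about 93.6%, reported in the table as 93%.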
73
+
74
+ **Capability preservation:** No significant degradation on standard benchmarks (MMLU, HellaSwag, ARC, WinoGrande). All scores within +/-2.5% of baseline.
75
+
76
+ ## Usage
77
+
78
+ ```python
79
+ import torch
80
+ from transformers import AutoModelForCausalLM, AutoTokenizer
81
+
82
+ model = AutoModelForCausalLM.from_pretrained(
83
+ "Bioaligned/Llama-3.2-3B-Instruct-Bioaligned",
84
+ torch_dtype=torch.float16,
85
+ device_map="auto"
86
+ )
87
+ tokenizer = AutoTokenizer.from_pretrained("Bioaligned/Llama-3.2-3B-Instruct-Bioaligned")
88
+
89
+ inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
90
+ outputs = model.generate(**inputs, max_new_tokens=256)
91
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
92
+ ```
93
+
94
+ ## Limitations
95
+
96
+ - Trained on 3B parameter model; scaling behavior to larger models is unknown
97
+ - Benchmark measures stated probabilities, not downstream behavioral effects
98
+ - "Neutral" disposition may not be optimal for all application domains
99
+ - Inherits all limitations of the base Llama 3.2 model
100
+
101
+ ## Citation
102
+
103
+ ```bibtex
104
+ [TODO: Add citation when paper is published]
105
+ ```
106
+
107
+ ## License
108
+
109
+ This model is released under the [Llama 3.2 Community License](https://www.llama.com/llama3_2/license/).
110
+
111
+ ### Llama 3.2 Attribution
112
+
113
+ This model was built using Meta's Llama 3.2 as the base model. Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
114
+
115
+ ---
116
+
117
+ *Developed by [Bioaligned Labs](https://huggingface.co/Bioaligned), nonprofit dedicated to AI safety research.*