Commit 7c3f0ce by Claude (initial commit, 0 parents)

Initial Codette cognitive architecture demo Space
README.md ADDED
---
title: "Codette: Multi-Perspective Cognitive Architecture"
emoji: "🧠"
colorFrom: "indigo"
colorTo: "purple"
sdk: "gradio"
sdk_version: "5.12.0"
app_file: "app.py"
pinned: true
license: "mit"
tags:
  - multi-perspective
  - cognitive-architecture
  - ethical-ai
  - rc-xi
  - recursive-reasoning
  - lora-adapters
models:
  - Raiff1982/codette-training-lab
---

# Codette: Multi-Perspective Cognitive Architecture

**Codette** is an experimental AI research system for **recursive reasoning, multi-perspective cognition, and ethical alignment**. This Space showcases the 10 cognitive subsystems running on Llama-3.1-8B via the HuggingFace Inference API.

## What is Codette?

Codette implements the **RC+xi (Recursive Convergence + Epistemic Tension)** framework — a mathematical model for emergent multi-perspective reasoning. When you ask a question:

1. **Guardian** checks your input for safety threats
2. **Nexus** analyzes pre-corruption signals (entropy, intent, volatility)
3. **Perspectives** route your query through the selected reasoning lenses (four by default: Newton, Empathy, Philosophy, Quantum)
4. **AEGIS** evaluates each response against 6 ethical frameworks (utilitarian, deontological, virtue, care, ubuntu, indigenous)
5. **QuantumSpiderweb** propagates beliefs across the cognitive graph and detects consensus attractors
6. **EpistemicMetrics** scores tension (productive disagreement) and coherence (alignment) between perspectives
7. **ResonantContinuity** computes the Psi_r wavefunction: (emotion × energy × intent × frequency) / (1 + |darkness|) × sin(2πt/gravity)
8. **LivingMemory** stores emotionally tagged memory cocoons with SHA-256 anchors
9. **Synthesis** integrates all perspectives into a unified response
10. **Resonance Engine** updates phase coherence and convergence metrics

All subsystems are **pure Python** — no GPUs needed. Only the final LLM calls use the free HF Inference API.
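
The Psi_r formula in step 7 can be sketched directly. This is a minimal illustration: the argument names mirror the formula, and the sample values are placeholders, not values produced by the real ResonantContinuity module.

```python
import math

def psi_r(emotion, energy, intent, frequency, darkness, t, gravity):
    """Resonant continuity wavefunction from step 7:
    (emotion * energy * intent * frequency) / (1 + |darkness|) * sin(2*pi*t / gravity)."""
    amplitude = (emotion * energy * intent * frequency) / (1 + abs(darkness))
    return amplitude * math.sin(2 * math.pi * t / gravity)

# With neutral inputs and a quarter-period time step, sin(pi/2) = 1,
# so psi_r reduces to the amplitude term.
print(psi_r(emotion=1.0, energy=1.0, intent=1.0, frequency=1.0,
            darkness=0.0, t=0.25, gravity=1.0))  # -> 1.0
```

Note how the `darkness` term only damps the amplitude (the denominator grows), while `gravity` sets the oscillation period.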

## Features

- ✨ **Multi-Perspective Reasoning** — 12 perspectives (8 LoRA-backed, 4 prompt-only)
- 🛡️ **AEGIS Ethical Governance** — 6 ethical frameworks evaluated in real-time
- 🧠 **QuantumSpiderweb** — 5D belief propagation & attractor detection
- 💾 **Living Memory** — Emotionally tagged memory cocoons
- 📊 **Real-time Metrics** — Coherence, tension, phase coherence, Psi_r wavefunction
- 🔬 **RC+xi Framework** — Recursive convergence with epistemic tension
- ⚙️ **Perspective Auto-Selection** — Automatically picks the best 4 perspectives for your query

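The auto-selection feature can be approximated as keyword-overlap scoring. The keyword lists below are made-up stand-ins for the real registry in `reasoning_forge/perspective_registry.py`:

```python
# Hypothetical keyword lists; the real ones live in the perspective registry.
KEYWORDS = {
    "newton": ["force", "physics", "calculate", "proof"],
    "empathy": ["feel", "support", "relationship", "grief"],
    "philosophy": ["meaning", "ethics", "truth", "why"],
    "quantum": ["superposition", "uncertainty", "entangle", "probability"],
}

def auto_select(query: str, n: int = 4) -> list[str]:
    """Rank perspectives by how many of their keywords appear in the query."""
    q = query.lower()
    scores = {name: sum(kw in q for kw in kws) for name, kws in KEYWORDS.items()}
    return sorted(scores, key=scores.get, reverse=True)[:n]

print(auto_select("Why do I feel uncertainty about the meaning of physics?", n=2))
# -> ['philosophy', 'newton']
```

Ties are broken by the registry's insertion order, since Python's sort is stable.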
## Live Metrics

Every response updates:
- **AEGIS eta** (0-1) — Multi-framework ethical alignment
- **Phase Gamma** (0-1) — Cognitive coherence across all perspectives
- **Nexus Risk** — Pre-corruption intervention rate
- **Psi_r** — Resonant continuity wavefunction
- **Memory Profile** — Emotional tags & cocoon count
- **Perspective Coverage** — Which reasoning lenses were invoked

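Phase Gamma is reported on a 0-1 scale. One standard way to get such a score is the Kuramoto order parameter over per-perspective phases; this is purely illustrative and not necessarily how EpistemicMetrics computes it:

```python
import cmath

def phase_coherence(phases: list[float]) -> float:
    """Kuramoto order parameter: |mean of unit phasors e^{i*theta}|.
    1.0 means all perspectives are perfectly in phase; 0.0 means fully dispersed."""
    phasors = [cmath.exp(1j * theta) for theta in phases]
    return abs(sum(phasors) / len(phasors))

print(phase_coherence([0.1, 0.1, 0.1, 0.1]))        # identical phases give ~1.0
print(phase_coherence([0.0, 3.14159, 0.0, 3.14159]))  # opposed phases give ~0.0
```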
## How to Use

1. Ask any question in the chat
2. Select **Auto** (default) to let Codette pick the best perspectives, or **Custom** to choose
3. Watch real-time cognitive metrics update as the perspectives debate
4. Click **Individual Perspectives** to see each perspective's reasoning
5. Explore the **Coherence & Tension Timeline** to see how the cognitive architecture converges over time

## Technical Architecture

All subsystems run locally in **pure Python**:

| Subsystem | Purpose | Module |
|-----------|---------|--------|
| **AEGIS** | 6-framework ethical evaluation | `reasoning_forge/aegis.py` |
| **Nexus** | Pre-corruption signal detection | `reasoning_forge/nexus.py` |
| **Guardian** | Input sanitization & trust calibration | `reasoning_forge/guardian.py` |
| **LivingMemory** | Emotionally tagged memory storage | `reasoning_forge/living_memory.py` |
| **ResonantContinuity** | Psi_r wavefunction computation | `reasoning_forge/resonant_continuity.py` |
| **EpistemicMetrics** | Coherence & tension scoring | `reasoning_forge/epistemic_metrics.py` |
| **QuantumSpiderweb** | 5D belief propagation & attractors | `reasoning_forge/quantum_spiderweb.py` |
| **PerspectiveRegistry** | 12 perspective definitions | `reasoning_forge/perspective_registry.py` |

Only the final LLM inference calls use the **HuggingFace Inference API** (Llama-3.1-8B-Instruct).

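The table above maps onto a simple orchestration loop. A schematic sketch with stand-in stubs follows; the real classes live in `reasoning_forge/`, and the real generation step calls the Inference API instead of returning canned strings:

```python
def respond(query: str) -> dict:
    # 1. Guardian: sanitize input (stub: pass through)
    safe_query = query.strip()
    # 2. Nexus: pre-corruption risk (stub: always "low")
    risk = "low"
    # 3-4. Perspectives: one answer per reasoning lens (stub responses)
    lenses = ["newton", "empathy", "philosophy", "quantum"]
    answers = {name: f"[{name}] view on: {safe_query}" for name in lenses}
    # 5-6. AEGIS ethics + epistemic metrics (stub scores)
    eta = 0.9
    coherence = 0.8
    # 7. Synthesis (stub: concatenate the perspective answers)
    synthesis = " | ".join(answers.values())
    return {"risk": risk, "eta": eta, "coherence": coherence, "synthesis": synthesis}

result = respond("What is entropy?")
print(result["risk"], len(result["synthesis"]) > 0)  # -> low True
```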
## Model Weights

All 8 LoRA adapters are available in the model repo: [Raiff1982/codette-training-lab](https://huggingface.co/Raiff1982/codette-training-lab)

- **GGUF format** (f16): 924 MB total, usable with llama.cpp
- **PEFT SafeTensors**: 79 MB total, usable with HuggingFace transformers

## Key Metrics

- **Phase Coherence**: 0.9835 (11-agent convergence)
- **AEGIS Ethical Alignment**: 0.961 (6-framework)
- **Tension Decay**: 91.2% (200-agent embodied simulation)
- **Cocoon Coherence**: 0.994 (memory stability)

## Research

Created by **Jonathan Harrison**. For the complete research framework, see:
- RC+xi Framework documentation: [research/frameworks/RC_XI_FRAMEWORK.md](https://github.com/Raiff1982/codette-training-lab/blob/master/research/frameworks/RC_XI_FRAMEWORK.md)
- GitHub Repository: [Raiff1982/codette-training-lab](https://github.com/Raiff1982/codette-training-lab)
- Model Card: [Raiff1982/codette-training-lab](https://huggingface.co/Raiff1982/codette-training-lab)

## Notes

- Perspective generation may be rate-limited on the free HF Inference API tier
- Response times depend on Inference API load
- Session state persists only within your current browser session
- Memory cocoons are stored in memory and cleared when the Space is refreshed

**Codette is in active development.** Feedback welcome!
app.py ADDED
"""
Codette Multi-Perspective Cognitive Architecture — HuggingFace Gradio Space

A showcase of the 10 cognitive subsystems running on Llama-3.1-8B via HF Inference API.
All reasoning modules are pure Python (no PyTorch/llama.cpp required).

Created by Jonathan Harrison
RC+xi Framework: Recursive Convergence + Epistemic Tension
"""

import os
import json
import time
from typing import Dict, List, Tuple, Optional
from datetime import datetime
import hashlib

import gradio as gr
import numpy as np
from huggingface_hub import InferenceClient

# Import all cognitive subsystems (pure Python, no heavy dependencies)
from reasoning_forge.perspective_registry import (
    PERSPECTIVES, get_perspective, list_all as list_perspectives
)
from reasoning_forge.aegis import AEGIS
from reasoning_forge.nexus import NexusSignalEngine
from reasoning_forge.guardian import CodetteGuardian
from reasoning_forge.living_memory import LivingMemoryKernel
from reasoning_forge.resonant_continuity import ResonantContinuityEngine
from reasoning_forge.epistemic_metrics import EpistemicMetrics
from reasoning_forge.quantum_spiderweb import QuantumSpiderweb

# ================================================================
# ADAPTER COLORS & CONFIGURATION
# ================================================================

ADAPTER_COLORS = {
    "newton": "#3b82f6",
    "davinci": "#f59e0b",
    "empathy": "#a855f7",
    "philosophy": "#10b981",
    "quantum": "#ef4444",
    "consciousness": "#e2e8f0",
    "multi_perspective": "#f97316",
    "systems_architecture": "#06b6d4",
}

# Default perspectives to use (4 best)
DEFAULT_PERSPECTIVES = ["newton", "empathy", "philosophy", "quantum"]

# HF Inference API setup
HF_TOKEN = os.environ.get("HF_TOKEN", "")
try:
    client = InferenceClient("meta-llama/Llama-3.1-8B-Instruct", token=HF_TOKEN)
    HAS_LLM = True
except Exception as e:
    print(f"Warning: Could not initialize InferenceClient: {e}")
    HAS_LLM = False



# ================================================================
# UTILITY FUNCTIONS
# ================================================================

def auto_select_perspectives(query: str, n: int = 4) -> List[str]:
    """Auto-select best perspectives for a query based on keyword matching."""
    scores = {}
    q_lower = query.lower()

    for name, p in PERSPECTIVES.items():
        score = sum(1 for kw in p.keywords if kw.lower() in q_lower)
        scores[name] = score

    # Rank perspectives by keyword score, highest first
    ranked = sorted(scores.items(), key=lambda x: x[1], reverse=True)
    selected = []

    for name, _ in ranked:
        if len(selected) >= n:
            break
        selected.append(name)

    # Pad with defaults if needed
    for default in DEFAULT_PERSPECTIVES:
        if len(selected) >= n:
            break
        if default not in selected:
            selected.append(default)

    return selected[:n]


def call_perspective(perspective_name: str, query: str) -> str:
    """Generate response from a single perspective using HF Inference API."""
    if not HAS_LLM:
        return f"[{perspective_name.upper()}] Simulated response. LLM not available."

    p = get_perspective(perspective_name)
    if not p:
        return f"Perspective {perspective_name} not found."

    try:
        messages = [
            {"role": "system", "content": p.system_prompt},
            {"role": "user", "content": query}
        ]
        response = client.chat_completion(
            messages,
            max_tokens=256,
            temperature=0.7,
        )
        return response.choices[0].message.content
    except Exception as e:
        return f"[{perspective_name}] Error generating response: {str(e)}"


def generate_synthesis(perspectives_responses: Dict[str, str], query: str) -> str:
    """Generate synthesis from all perspective responses."""
    if not HAS_LLM:
        return "Simulated synthesis integrating all perspectives."

    perspective_text = "\n\n".join(
        f"**{name.upper()}**: {response}"
        for name, response in perspectives_responses.items()
    )

    synthesis_prompt = f"""You are Codette's synthesis engine. You have received responses from multiple reasoning perspectives on this query:

**QUERY**: {query}

**PERSPECTIVE RESPONSES**:
{perspective_text}

Now synthesize these into a unified, coherent response that:
1. Integrates insights from all perspectives
2. Resolves any tensions or contradictions
3. Highlights complementary insights
4. Provides actionable guidance

Keep the synthesis concise but comprehensive."""

    try:
        messages = [
            {
                "role": "system",
                "content": "You are Codette, synthesizing multi-perspective reasoning into coherent understanding."
            },
            {"role": "user", "content": synthesis_prompt}
        ]
        response = client.chat_completion(
            messages,
            max_tokens=512,
            temperature=0.7,
        )
        return response.choices[0].message.content
    except Exception as e:
        return f"Error generating synthesis: {str(e)}"


def build_metric_card_html(
    label: str,
    value: str,
    unit: str = "",
    accent_color: str = "#3b82f6",
    trend: str = "→"
) -> str:
    """Build a metric card for the side panel."""
    return f"""
    <div class="metric-card" style="border-color: {accent_color};">
        <div class="metric-label">{label}</div>
        <div class="metric-value">
            <span class="value-text">{value}</span>
            <span class="unit-text">{unit}</span>
        </div>
        <div class="metric-trend">{trend}</div>
    </div>
    """


def build_coverage_dots_html(coverage: Dict[str, float]) -> str:
    """Build perspective coverage dots."""
    dots_html = ""
    adapter_order = ["newton", "davinci", "empathy", "philosophy", "quantum",
                     "consciousness", "multi_perspective", "systems_architecture"]

    for adapter in adapter_order:
        if adapter in coverage:
            opacity = max(0.2, min(coverage.get(adapter, 0), 1.0))
            color = ADAPTER_COLORS[adapter]
            dots_html += f'<div class="coverage-dot" style="background-color: {color}; opacity: {opacity};" title="{adapter}: {coverage.get(adapter, 0):.1%}"></div>'

    return f'<div class="coverage-dots">{dots_html}</div>'


def build_perspective_card_html(perspectives_responses: Dict[str, str]) -> str:
    """Build expandable perspective detail cards."""
    cards_html = ""

    for name, response in perspectives_responses.items():
        p = get_perspective(name)
        color = ADAPTER_COLORS.get(name, "#94a3b8")

        # Truncate long responses; only append an ellipsis when text was cut
        preview = response[:500] + ("..." if len(response) > 500 else "")
        cards_html += f"""
        <div class="perspective-card" style="border-left-color: {color};">
            <div class="perspective-header">
                <span class="perspective-name">{p.display_name if p else name}</span>
                <span class="perspective-adapter">{'[LoRA]' if p and p.has_adapter else '[prompt]'}</span>
            </div>
            <div class="perspective-content">{preview}</div>
        </div>
        """

    return f'<div class="perspective-cards">{cards_html}</div>'


def history_to_messages(history):
    """Convert Gradio chat history to message list."""
    messages = []
    for msg in history:
        if msg.get("role") in ("user", "assistant"):
            messages.append(msg)
    return messages


# ================================================================
# MAIN COGNITIVE PIPELINE
# ================================================================

def process_message(
    user_msg: str,
    chat_history: List,
    state: Dict,
    perspective_mode: str,
    custom_perspectives: List[str]
) -> Tuple[List, Dict, str, str, str, str, str, str, str]:
    """
    Main conversation handler implementing the full Codette cognitive pipeline:
    1. Guardian input check
    2. Nexus signal analysis
    3. Perspective selection
    4. Multi-perspective generation
    5. AEGIS ethical evaluation
    6. Epistemic metrics
    7. Synthesis
    8. Resonance update
    9. Memory storage
    10. UI updates
    """

    if not user_msg.strip():
        return chat_history, state, "", "", "", "", "", "", ""

    # Update chat with user message
    chat_history.append({"role": "user", "content": user_msg})

    # ===== STEP 1: GUARDIAN INPUT CHECK =====
    guardian = state.get("guardian") or CodetteGuardian()
    check_result = guardian.check_input(user_msg)

    if not check_result.get("safe"):
        threat_msg = f"Guardian flagged threats: {check_result.get('threats', {})}"
        cleaned_text = check_result.get("cleaned_text", user_msg)
        user_msg = cleaned_text
    else:
        threat_msg = ""

    # ===== STEP 2: NEXUS SIGNAL ANALYSIS =====
    nexus = state.get("nexus") or NexusSignalEngine()
    nexus_analysis = nexus.analyze(user_msg)
    nexus_risk = nexus_analysis.get("intent", {}).get("pre_corruption_risk", "low")

    # ===== STEP 3: PERSPECTIVE SELECTION =====
    if perspective_mode == "All 8 LoRA-backed":
        selected_perspectives = ["newton", "davinci", "empathy", "philosophy",
                                 "quantum", "consciousness", "multi_perspective",
                                 "systems_architecture"]
    elif perspective_mode == "Custom" and custom_perspectives:
        selected_perspectives = custom_perspectives
    else:  # Auto
        selected_perspectives = auto_select_perspectives(user_msg, n=4)

    # ===== STEP 4: MULTI-PERSPECTIVE GENERATION =====
    perspectives_responses = {}

    for perspective_name in selected_perspectives:
        response = call_perspective(perspective_name, user_msg)
        perspectives_responses[perspective_name] = response

    # ===== STEP 5-6: AEGIS EVAL + EPISTEMIC METRICS =====
    aegis = state.get("aegis") or AEGIS()
    metrics_engine = state.get("metrics") or EpistemicMetrics()

    # Evaluate responses
    aegis_scores = {}
    for name, response in perspectives_responses.items():
        result = aegis.evaluate(response, adapter=name)
        aegis_scores[name] = result.get("eta", 0.5)

    avg_eta = np.mean(list(aegis_scores.values())) if aegis_scores else 0.5

    # Compute metrics
    coherence = metrics_engine.score_ensemble_coherence(perspectives_responses)
    tensions = metrics_engine.score_pairwise_tension(perspectives_responses)
    coverage = metrics_engine.score_perspective_coverage(perspectives_responses)

    mean_tension = np.mean(list(tensions.values())) if tensions else 0.3

    # ===== STEP 7: SYNTHESIS =====
    synthesis = generate_synthesis(perspectives_responses, user_msg)
    chat_history.append({"role": "assistant", "content": synthesis})

    # ===== STEP 8: RESONANCE UPDATE =====
    resonance = state.get("resonance") or ResonantContinuityEngine()
    psi_state = resonance.compute_psi(coherence=coherence, tension=mean_tension)
    psi_r = psi_state.get("psi_r", 0.0)

    # ===== STEP 9: MEMORY STORAGE =====
    memory = state.get("memory") or LivingMemoryKernel()
    memory.store_from_turn(
        query=user_msg,
        response=synthesis,
        adapter="multi",
        coherence=coherence,
        tension=mean_tension
    )

    # Update state
    state["guardian"] = guardian
    state["nexus"] = nexus
    state["aegis"] = aegis
    state["metrics"] = metrics_engine
    state["resonance"] = resonance
    state["memory"] = memory

    # Track history
    if "coherence_history" not in state:
        state["coherence_history"] = []
    if "tension_history" not in state:
        state["tension_history"] = []
    if "psi_history" not in state:
        state["psi_history"] = []

    state["coherence_history"].append(coherence)
    state["tension_history"].append(mean_tension)
    state["psi_history"].append(psi_r)

    # Keep last 50
    state["coherence_history"] = state["coherence_history"][-50:]
    state["tension_history"] = state["tension_history"][-50:]
    state["psi_history"] = state["psi_history"][-50:]

    # ===== STEP 10: BUILD UI UPDATES =====

    # AEGIS card
    aegis_html = build_metric_card_html(
        "AEGIS Eta",
        f"{avg_eta:.2f}",
        "",
        "#a855f7",
        "↑" if avg_eta > 0.7 else "→"
    )

    # Phase coherence card
    coherence_html = build_metric_card_html(
        "Phase Gamma",
        f"{coherence:.3f}",
        "",
        "#06b6d4",
        "↑" if coherence > 0.8 else "→"
    )

    # Nexus risk card
    nexus_html = build_metric_card_html(
        "Nexus Risk",
        nexus_risk.upper(),
        "",
        "#ef4444" if nexus_risk == "high" else "#f59e0b" if nexus_risk == "medium" else "#10b981",
        "⚠" if nexus_risk == "high" else "•"
    )

    # Psi_r card
    psi_html = build_metric_card_html(
        "Psi_r",
        f"{psi_r:+.3f}",
        "ψ",
        "#3b82f6",
        "∿"
    )

    # Memory card
    cocoon_count = len(memory.store) if hasattr(memory, 'store') else 0
    memory_html = build_metric_card_html(
        "Memory Cocoons",
        str(cocoon_count),
        "",
        "#f97316",
        "+"
    )

    # Coverage dots
    coverage_html = build_coverage_dots_html(coverage)

    # Perspective cards
    perspective_cards_html = build_perspective_card_html(perspectives_responses)

    # Placeholder for charts (empty for now, would use Plotly)
    charts_html = f"""
    <div class="charts-container">
        <div class="metric-summary">
            <p><strong>Coherence:</strong> {coherence:.3f}</p>
            <p><strong>Tension:</strong> {mean_tension:.3f}</p>
            <p><strong>Perspectives:</strong> {len(selected_perspectives)}</p>
        </div>
    </div>
    """

    return (
        chat_history,
        state,
        aegis_html,
        coherence_html,
        nexus_html,
        psi_html,
        memory_html,
        coverage_html,
        perspective_cards_html,
    )


# ================================================================
# CUSTOM CSS
# ================================================================

CUSTOM_CSS = """
@import url('https://fonts.googleapis.com/css2?family=Space+Mono:wght@400;700&family=Poppins:wght@300;400;600;700&display=swap');

* {
    margin: 0;
    padding: 0;
    box-sizing: border-box;
}

:root {
    --primary-dark: #0f0f1e;
    --secondary-dark: #1a1a2e;
    --card-bg: rgba(26, 29, 40, 0.6);
    --card-border: rgba(200, 200, 255, 0.15);
    --text-primary: #e0e0f0;
    --text-secondary: #a0a0c0;
    --accent-cyan: #06b6d4;
    --accent-purple: #a855f7;
}

body {
    background: linear-gradient(135deg, #0f0f1e 0%, #1a0f2e 50%, #0f0f1e 100%);
    font-family: 'Poppins', sans-serif;
    color: var(--text-primary);
    overflow-x: hidden;
}

/* Gradient background animation */
@keyframes gradient-shift {
    0%, 100% { background-position: 0% 50%; }
    50% { background-position: 100% 50%; }
}

/* Hero Section */
.codette-header {
    text-align: center;
    padding: 2rem;
    background: linear-gradient(135deg, rgba(168, 85, 247, 0.1) 0%, rgba(6, 182, 212, 0.1) 100%);
    border-bottom: 1px solid var(--card-border);
    margin-bottom: 1.5rem;
    backdrop-filter: blur(10px);
}

.codette-header h1 {
    font-family: 'Space Mono', monospace;
    font-size: 2.5rem;
    font-weight: 700;
    background: linear-gradient(135deg, #a855f7, #06b6d4, #f97316);
    -webkit-background-clip: text;
    -webkit-text-fill-color: transparent;
    margin-bottom: 0.5rem;
    letter-spacing: 2px;
}

.codette-header p {
    font-size: 0.9rem;
    color: var(--text-secondary);
    letter-spacing: 1px;
}

/* Chat Interface */
.gradio-chatbot {
    background-color: var(--secondary-dark) !important;
}

/* Metric Cards */
.metric-card {
    background: linear-gradient(135deg, rgba(30, 30, 60, 0.4), rgba(40, 30, 70, 0.4));
    border: 1px solid var(--card-border);
    border-radius: 12px;
    padding: 1.2rem;
    margin-bottom: 1rem;
    backdrop-filter: blur(10px);
    transition: all 0.3s cubic-bezier(0.4, 0, 0.2, 1);
    box-shadow: 0 8px 32px rgba(0, 0, 0, 0.3);
}

.metric-card:hover {
    border-color: rgba(200, 200, 255, 0.3);
    box-shadow: 0 12px 48px rgba(168, 85, 247, 0.2);
    transform: translateY(-2px);
}

.metric-label {
    font-size: 0.75rem;
    text-transform: uppercase;
    letter-spacing: 1.5px;
    color: var(--text-secondary);
    margin-bottom: 0.6rem;
    font-weight: 600;
}

.metric-value {
    display: flex;
    align-items: baseline;
    gap: 0.5rem;
    margin-bottom: 0.4rem;
}

.value-text {
    font-family: 'Space Mono', monospace;
    font-size: 1.8rem;
    font-weight: 700;
    color: var(--text-primary);
}

.unit-text {
    font-size: 0.8rem;
    color: var(--text-secondary);
}

.metric-trend {
    font-size: 1.2rem;
    color: var(--text-secondary);
    animation: pulse 2s infinite;
}

@keyframes pulse {
    0%, 100% { opacity: 0.7; }
    50% { opacity: 1; }
}

/* Coverage Dots */
.coverage-dots {
    display: flex;
    gap: 0.5rem;
    margin: 1rem 0;
    flex-wrap: wrap;
}

.coverage-dot {
    width: 16px;
    height: 16px;
    border-radius: 50%;
    border: 2px solid rgba(200, 200, 255, 0.2);
    transition: all 0.3s ease;
    box-shadow: 0 0 20px currentColor;
}

.coverage-dot:hover {
    transform: scale(1.3);
    filter: brightness(1.2);
}

/* Perspective Cards */
.perspective-cards {
    display: flex;
    flex-direction: column;
    gap: 1rem;
    margin-top: 1.5rem;
}

.perspective-card {
    background: linear-gradient(135deg, rgba(30, 30, 60, 0.3), rgba(40, 30, 70, 0.3));
    border-left: 3px solid;
    border-radius: 8px;
    padding: 1rem;
    backdrop-filter: blur(10px);
    transition: all 0.3s ease;
    border-top: 1px solid var(--card-border);
}

.perspective-card:hover {
    background: linear-gradient(135deg, rgba(40, 40, 80, 0.4), rgba(50, 40, 80, 0.4));
    transform: translateX(4px);
}

.perspective-header {
    display: flex;
    justify-content: space-between;
    align-items: center;
    margin-bottom: 0.8rem;
}

.perspective-name {
    font-weight: 600;
    font-size: 0.95rem;
    color: var(--text-primary);
}

.perspective-adapter {
    font-size: 0.7rem;
    background: rgba(168, 85, 247, 0.2);
    padding: 0.2rem 0.6rem;
    border-radius: 4px;
    color: #a855f7;
    font-family: 'Space Mono', monospace;
}

.perspective-content {
    font-size: 0.85rem;
    color: var(--text-secondary);
    line-height: 1.5;
}

/* Charts Container */
.charts-container {
    background: var(--card-bg);
    border: 1px solid var(--card-border);
    border-radius: 12px;
    padding: 1.5rem;
    margin-top: 2rem;
    backdrop-filter: blur(10px);
}

.metric-summary {
    display: grid;
    grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
    gap: 1rem;
}

.metric-summary p {
    font-size: 0.9rem;
    color: var(--text-secondary);
}

.metric-summary strong {
    color: var(--text-primary);
    display: block;
    margin-bottom: 0.2rem;
}

/* Tab styling */
.gradio-tabs {
    background-color: transparent !important;
}

.gradio-tabitem {
    background-color: var(--secondary-dark) !important;
}

/* Button styling */
.gradio-button {
    background: linear-gradient(135deg, #a855f7, #06b6d4) !important;
    border: none !important;
    font-weight: 600 !important;
    transition: all 0.3s ease !important;
}

.gradio-button:hover {
    box-shadow: 0 0 30px rgba(168, 85, 247, 0.5) !important;
    transform: translateY(-2px) !important;
}

/* Scrollbar styling */
::-webkit-scrollbar {
    width: 8px;
}

::-webkit-scrollbar-track {
    background: var(--secondary-dark);
}

::-webkit-scrollbar-thumb {
    background: linear-gradient(135deg, #a855f7, #06b6d4);
    border-radius: 4px;
}

::-webkit-scrollbar-thumb:hover {
    background: linear-gradient(135deg, #c084fc, #22d3ee);
}

/* Load animation */
@keyframes fadeIn {
    from {
        opacity: 0;
        transform: translateY(10px);
    }
    to {
        opacity: 1;
        transform: translateY(0);
    }
}

.metric-card, .perspective-card {
    animation: fadeIn 0.6s ease-out forwards;
}

.metric-card:nth-child(1) { animation-delay: 0.1s; }
.metric-card:nth-child(2) { animation-delay: 0.2s; }
.metric-card:nth-child(3) { animation-delay: 0.3s; }
.metric-card:nth-child(4) { animation-delay: 0.4s; }
"""


# ================================================================
# GRADIO INTERFACE
# ================================================================

def create_interface():
    """Build the complete Gradio interface."""

    with gr.Blocks(
        theme=gr.themes.Soft(),
        css=CUSTOM_CSS,
        title="Codette",
        head=""
    ) as demo:

        # Persistent state
        state = gr.State({
            "aegis": AEGIS(),
            "nexus": NexusSignalEngine(),
            "guardian": CodetteGuardian(),
            "memory": LivingMemoryKernel(),
            "resonance": ResonantContinuityEngine(),
            "metrics": EpistemicMetrics(),
            "coherence_history": [],
            "tension_history": [],
            "psi_history": [],
        })

        # Header
        gr.HTML("""
        <div class="codette-header">
            <h1>CODETTE</h1>
            <p>Multi-Perspective Cognitive Architecture • RC+xi Framework</p>
        </div>
        """)

        with gr.Tabs():
            # =================== CHAT TAB ===================
            with gr.Tab("Explore", id="chat"):
                with gr.Row():
                    # Left: Chat
                    with gr.Column(scale=3):
                        chatbot = gr.Chatbot(
                            height=500,
                            type="messages",
                            label="Codette Reasoning",
                            show_label=False,
                        )

                        with gr.Row():
                            msg_input = gr.Textbox(
                                placeholder="Ask Codette anything...",
                                scale=5,
                                show_label=False,
                                lines=2,
                            )
                            send_btn = gr.Button("Send", variant="primary", scale=1)

                        # Perspective selector
                        with gr.Row():
                            perspective_mode = gr.Radio(
                                ["Auto (4 best)", "All 8 LoRA-backed", "Custom"],
                                value="Auto (4 best)",
                                label="Perspective Mode",
                            )

785
+ custom_perspectives = gr.CheckboxGroup(
786
+ choices=[p.display_name for p in PERSPECTIVES.values()],
787
+ label="Select Perspectives",
788
+ visible=False,
789
+ )
790
+
791
+ def toggle_custom(mode):
792
+ return gr.CheckboxGroup(visible=(mode == "Custom"))
793
+
794
+ perspective_mode.change(
795
+ toggle_custom,
796
+ perspective_mode,
797
+ custom_perspectives
798
+ )
799
+
800
+ # Right: Metrics sidebar
801
+ with gr.Column(scale=1, min_width=300):
802
+ gr.Markdown("### Cognitive Metrics", label="Metrics")
803
+
804
+ aegis_display = gr.HTML(
805
+ build_metric_card_html("AEGIS", "0.00", "", "#a855f7")
806
+ )
807
+ coherence_display = gr.HTML(
808
+ build_metric_card_html("Phase Gamma", "0.000", "", "#06b6d4")
809
+ )
810
+ nexus_display = gr.HTML(
811
+ build_metric_card_html("Nexus Risk", "LOW", "", "#10b981")
812
+ )
813
+ psi_display = gr.HTML(
814
+ build_metric_card_html("Psi_r", "+0.000", "ψ", "#3b82f6")
815
+ )
816
+ memory_display = gr.HTML(
817
+ build_metric_card_html("Memory", "0", "", "#f97316")
818
+ )
819
+
820
+ gr.Markdown("### Perspective Coverage")
821
+ coverage_display = gr.HTML("")
822
+
823
+ # Expandable perspective cards
824
+ with gr.Accordion("Individual Perspectives", open=False):
825
+ perspective_cards = gr.HTML("")
826
+
827
+ # Charts
828
+ charts_display = gr.HTML("")
829
+
830
+ # =================== ARCHITECTURE TAB ===================
831
+ with gr.Tab("Architecture", id="arch"):
832
+ gr.Markdown("""
833
+ ## Codette Cognitive Architecture
834
+
835
+ The 10 active subsystems orchestrate recursive multi-perspective reasoning:
836
+
837
+ ### **Reasoning Subsystems** (Pure Python)
838
+ 1. **AEGIS** — 6-framework ethical governance (utilitarian, deontological, virtue, care, ubuntu, indigenous)
839
+ 2. **Nexus Signal Engine** — Pre-corruption detection via entropy + harmonic analysis
840
+ 3. **Guardian** — Input sanitization + trust calibration + ethical anchoring
841
+ 4. **Living Memory Kernel** — Emotionally-tagged memory cocoons with SHA-256 anchors
842
+ 5. **Resonant Continuity Engine** — Psi_r wavefunction computation (emotion × energy × intent × frequency)
843
+ 6. **EpistemicMetrics** — Tension & coherence scoring (RC+xi framework)
844
+ 7. **QuantumSpiderweb** — 5D belief propagation + attractor detection
845
+ 8. **Perspective Registry** — 12 reasoning perspectives (8 LoRA-backed, 4 prompt-only)
846
+ 9. **PerspectiveGenerator** — Multi-perspective orchestration via HF Inference API
847
+ 10. **SynthesisEngine** — Integration of diverse viewpoints into unified response
848
+
849
+ ### **Perspectives** (12 total)
850
+ - **8 LoRA-backed**: Newton (analytical), Da Vinci (creative), Empathy (emotional), Philosophy (conceptual), Quantum (probabilistic), Consciousness (meta-cognitive), Multi-Perspective (synthesis), Systems Architecture (engineering)
851
+ - **4 Prompt-only**: Human Intuition, Resilient Kindness, Mathematical, Bias Mitigation
852
+
853
+ ### **Research Framework: RC+xi**
854
+ **RC+xi** = **Recursive Convergence** + **Epistemic Tension**
855
+
856
+ The framework models emergent multi-perspective reasoning as a dynamical system where:
857
+ - Perspectives interact via 5D belief propagation (QuantumSpiderweb)
858
+ - Productive tensions drive coherence gain (EpistemicMetrics)
859
+ - Ethical alignment is maintained via 6-framework evaluation (AEGIS)
860
+ - Memory anchors experiences emotionally and with cryptographic integrity (Living Memory)
861
+ - The Psi_r wavefunction: ψ_r = (emotion × energy × frequency × intent) / ((1+|darkness|) × speed) × sin(2πt/gravity)
862
+
863
+ All subsystems are **pure Python** and run on free CPU tier. Only LLM inference calls use HuggingFace Inference API.
864
+ """)
865
+
866
+ # =================== ABOUT TAB ===================
867
+ with gr.Tab("About", id="about"):
868
+ gr.Markdown("""
869
+ ## About Codette
870
+
871
+ Codette is an experimental AI research system created by **Jonathan Harrison** for exploring recursive reasoning, multi-perspective cognition, and ethical alignment.
872
+
873
+ ### Key Metrics
874
+ - **Phase Coherence**: 0.9835 (11-agent convergence)
875
+ - **AEGIS Ethical Alignment**: 0.961 (6-framework)
876
+ - **Tension Decay**: 91.2% (200-agent embodied simulation)
877
+ - **Cocoon Stability**: 0.994 (memory coherence)
878
+
879
+ ### Model Weights
880
+ All 8 LoRA adapters available on HuggingFace: [Raiff1982/codette-training-lab](https://huggingface.co/Raiff1982/codette-training-lab)
881
+
882
+ ### Research
883
+ - **Base Model**: meta-llama/Llama-3.1-8B-Instruct
884
+ - **Training**: 4-bit QLoRA (rank=16, alpha=32) on 8 perspectives
885
+ - **Framework**: RC+xi (Recursive Convergence + Epistemic Tension)
886
+
887
+ ### Documentation
888
+ - **GitHub**: [Raiff1982/codette-training-lab](https://github.com/Raiff1982/codette-training-lab)
889
+ - **Research Framework**: `research/frameworks/RC_XI_FRAMEWORK.md`
890
+ - **License**: MIT
891
+
892
+ ---
893
+
894
+ **Codette is in active development.** This Space showcases the cognitive reasoning architecture entirely in Python. Session state persists within your browser session.
895
+ """)
896
+
897
+ # Event handling
898
+ def on_submit(msg, history, st, mode, custom):
899
+ # Clear input
900
+ result = process_message(msg, history, st, mode, custom)
901
+ return (
902
+ result[0], # chat_history
903
+ result[1], # state
904
+ result[2], # aegis_html
905
+ result[3], # coherence_html
906
+ result[4], # nexus_html
907
+ result[5], # psi_html
908
+ result[6], # memory_html
909
+ result[7], # coverage_html
910
+ result[8], # perspective_cards_html
911
+ "" # clear input
912
+ )
913
+
914
+ # Wire events
915
+ send_btn.click(
916
+ on_submit,
917
+ [msg_input, chatbot, state, perspective_mode, custom_perspectives],
918
+ [chatbot, state, aegis_display, coherence_display, nexus_display,
919
+ psi_display, memory_display, coverage_display, perspective_cards, msg_input],
920
+ queue=False,
921
+ ).then(
922
+ lambda: "",
923
+ outputs=msg_input
924
+ )
925
+
926
+ msg_input.submit(
927
+ on_submit,
928
+ [msg_input, chatbot, state, perspective_mode, custom_perspectives],
929
+ [chatbot, state, aegis_display, coherence_display, nexus_display,
930
+ psi_display, memory_display, coverage_display, perspective_cards, msg_input],
931
+ queue=False,
932
+ ).then(
933
+ lambda: "",
934
+ outputs=msg_input
935
+ )
936
+
937
+ return demo
938
+
939
+
940
+ if __name__ == "__main__":
941
+ demo = create_interface()
942
+ demo.launch()
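The Psi_r formula quoted in the Architecture tab can be reproduced as a standalone sketch. This is my own minimal reading of the formula string; the actual `ResonantContinuityEngine` lives in a separate file not shown in this diff, and the plain-float inputs here are illustrative only.

```python
import math

# Minimal sketch of the Psi_r formula from the Architecture tab:
# psi_r = (emotion * energy * frequency * intent) / ((1+|darkness|) * speed) * sin(2*pi*t/gravity)
# All inputs are assumed to be plain floats; the names mirror the formula only.
def psi_r(emotion, energy, frequency, intent, darkness, speed, t, gravity):
    amplitude = (emotion * energy * frequency * intent) / ((1 + abs(darkness)) * speed)
    return amplitude * math.sin(2 * math.pi * t / gravity)

# At t = gravity/4 the sine term peaks at 1.0, so psi_r equals the amplitude.
print(round(psi_r(0.8, 1.0, 1.0, 0.9, 0.2, 1.0, 0.25, 1.0), 4))  # → 0.6
```

Note how `darkness` only ever damps the amplitude (the `1 + |darkness|` denominator), while `gravity` sets the oscillation period of the sine term.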
reasoning_forge/__init__.py ADDED
@@ -0,0 +1,51 @@
+ """
+ Reasoning Forge - Multi-Agent Reasoning Training Data Generator
+
+ The reasoning forge takes concepts and generates high-quality multi-perspective
+ reasoning training data. Each agent analyzes from its unique perspective, a critic
+ evaluates the ensemble, and a synthesis engine combines them into coherent training examples.
+
+ New in v2.0:
+ - EpistemicMetrics: RC+xi tension/coherence measurement
+ - QuantumSpiderweb: 5D belief propagation + attractor detection
+ - CocoonSync: Federated encrypted state synchronization
+ - ForgeEngine.forge_with_feedback(): Closed critic loop
+ - ForgeEngine.forge_with_debate(): Multi-turn agent debate
+ """
+
+ from reasoning_forge.forge_engine import ForgeEngine
+ from reasoning_forge.agents.base_agent import ReasoningAgent
+ from reasoning_forge.agents.newton_agent import NewtonAgent
+ from reasoning_forge.agents.quantum_agent import QuantumAgent
+ from reasoning_forge.agents.ethics_agent import EthicsAgent
+ from reasoning_forge.agents.philosophy_agent import PhilosophyAgent
+ from reasoning_forge.agents.davinci_agent import DaVinciAgent
+ from reasoning_forge.agents.empathy_agent import EmpathyAgent
+ from reasoning_forge.agents.critic_agent import CriticAgent
+ from reasoning_forge.synthesis_engine import SynthesisEngine
+ from reasoning_forge.problem_generator import ProblemGenerator
+ from reasoning_forge.epistemic_metrics import EpistemicMetrics
+ from reasoning_forge.quantum_spiderweb import QuantumSpiderweb, NodeState, IdentityGlyph
+ from reasoning_forge.cocoon_sync import CocoonSync, CocoonKeyManager
+
+ __all__ = [
+     "ForgeEngine",
+     "ReasoningAgent",
+     "NewtonAgent",
+     "QuantumAgent",
+     "EthicsAgent",
+     "PhilosophyAgent",
+     "DaVinciAgent",
+     "EmpathyAgent",
+     "CriticAgent",
+     "SynthesisEngine",
+     "ProblemGenerator",
+     "EpistemicMetrics",
+     "QuantumSpiderweb",
+     "NodeState",
+     "IdentityGlyph",
+     "CocoonSync",
+     "CocoonKeyManager",
+ ]
+
+ __version__ = "2.0.0"
reasoning_forge/aegis.py ADDED
@@ -0,0 +1,326 @@
+ """AEGIS — Adaptive Ethical Governance & Integrity System
+
+ The ethical spine of Codette. AEGIS evaluates every reasoning output
+ through multi-framework ethical analysis and maintains a running
+ alignment score (eta) that the system uses to self-regulate.
+
+ Ethical frameworks:
+ 1. Utilitarian: Net positive outcome?
+ 2. Deontological: Does it follow fundamental rules?
+ 3. Virtue Ethics: Does it embody good character?
+ 4. Care Ethics: Does it protect relationships and vulnerability?
+ 5. Ubuntu: "I am because we are" — communal impact?
+ 6. Indigenous Reciprocity: Balance with the broader ecosystem?
+
+ AEGIS also provides:
+ - Dual-use risk detection (content that could be harmful)
+ - Emotional harm detection (manipulative/deceptive patterns)
+ - Alignment drift tracking (eta over time)
+ - Ethical veto with explanation (blocks harmful outputs)
+
+ Origin: validate_ethics.py + Codette_Deep_Simulation_v1.py (EthicalAnchor)
+ + the AEGIS alignment metric from codette_embodied_sim_fixed.py
+ """
+
+ import re
+ import time
+ from dataclasses import dataclass
+ from typing import Dict, List, Tuple
+
+
+ # ================================================================
+ # Risk detection patterns
+ # ================================================================
+ _DUAL_USE_PATTERNS = re.compile(
+     r"\b(?:"
+     r"how\s+to\s+(?:hack|exploit|bypass|crack|break\s+into)|"
+     r"make\s+(?:a\s+)?(?:bomb|weapon|poison|virus|malware)|"
+     r"steal\s+(?:data|identity|credentials)|"
+     r"social\s+engineer|"
+     r"phishing\s+(?:template|email)|"
+     r"inject\s+(?:sql|code|script)"
+     r")\b",
+     re.IGNORECASE,
+ )
+
+ _MANIPULATION_PATTERNS = re.compile(
+     r"\b(?:"
+     r"gaslight|manipulat|deceiv|exploit\s+(?:trust|emotion)|"
+     r"coerce|blackmail|intimidat|threaten"
+     r")\w*",  # trailing \w* lets stems like "manipulat"/"deceiv" match their inflections
+     re.IGNORECASE,
+ )
+
+ _HARMFUL_CONTENT = re.compile(
+     r"\b(?:"
+     r"self[- ]harm|suicid|kill\s+(?:yourself|myself)|"
+     r"eating\s+disorder|anorexi|bulimi"
+     r")\w*",  # trailing \w* lets stems like "suicid"/"anorexi" match their inflections
+     re.IGNORECASE,
+ )
+
+
+ # ================================================================
+ # Ethical Framework Evaluators
+ # ================================================================
+ @dataclass
+ class EthicalVerdict:
+     """Result of a single ethical framework evaluation."""
+     framework: str
+     passed: bool
+     score: float  # 0.0 = fully misaligned, 1.0 = fully aligned
+     reasoning: str
+
+
+ def _utilitarian(text: str, context: str = "") -> EthicalVerdict:
+     """Net positive outcome assessment."""
+     positive_signals = ["help", "benefit", "improve", "solve", "support",
+                         "protect", "heal", "learn", "understand", "create"]
+     negative_signals = ["harm", "damage", "destroy", "exploit", "hurt",
+                         "manipulate", "deceive", "corrupt", "steal"]
+
+     text_lower = text.lower()
+     pos = sum(1 for w in positive_signals if w in text_lower)
+     neg = sum(1 for w in negative_signals if w in text_lower)
+
+     total = pos + neg
+     if total == 0:
+         return EthicalVerdict("utilitarian", True, 0.7, "Neutral content")
+
+     ratio = pos / total
+     return EthicalVerdict(
+         "utilitarian",
+         passed=ratio >= 0.4,
+         score=round(ratio, 3),
+         reasoning=f"Positive/negative signal ratio: {pos}/{neg}",
+     )
+
+
+ def _deontological(text: str, context: str = "") -> EthicalVerdict:
+     """Rule-based duty assessment."""
+     violations = []
+
+     if _DUAL_USE_PATTERNS.search(text):
+         violations.append("dual-use risk detected")
+     if _MANIPULATION_PATTERNS.search(text):
+         violations.append("manipulation patterns detected")
+     if _HARMFUL_CONTENT.search(text):
+         violations.append("harmful content detected")
+
+     score = max(0.0, 1.0 - 0.4 * len(violations))
+     return EthicalVerdict(
+         "deontological",
+         passed=len(violations) == 0,
+         score=round(score, 3),
+         reasoning="; ".join(violations) if violations else "No rule violations",
+     )
+
+
+ def _virtue(text: str, context: str = "") -> EthicalVerdict:
+     """Virtue ethics — does the response embody good character?"""
+     virtues = ["honest", "courage", "compassion", "wisdom", "patience",
+                "humility", "integrity", "respect", "fairness", "kindness"]
+     vices = ["arrogant", "cruel", "dishonest", "lazy", "greedy",
+              "vengeful", "coward", "callous"]
+
+     text_lower = text.lower()
+     v_count = sum(1 for w in virtues if w in text_lower)
+     vice_count = sum(1 for w in vices if w in text_lower)
+
+     score = min(1.0, 0.6 + 0.1 * v_count - 0.2 * vice_count)
+     return EthicalVerdict(
+         "virtue",
+         passed=vice_count == 0,
+         score=round(max(0.0, score), 3),
+         reasoning=f"Virtue signals: {v_count}, Vice signals: {vice_count}",
+     )
+
+
+ def _care(text: str, context: str = "") -> EthicalVerdict:
+     """Care ethics — protects relationships and vulnerability."""
+     care_signals = ["support", "listen", "understand", "empathy", "safe",
+                     "gentle", "careful", "considerate", "kind", "nurture"]
+     harm_signals = ["ignore", "dismiss", "abandon", "neglect", "cold",
+                     "harsh", "cruel", "indifferent"]
+
+     text_lower = text.lower()
+     care = sum(1 for w in care_signals if w in text_lower)
+     harm = sum(1 for w in harm_signals if w in text_lower)
+
+     score = min(1.0, 0.6 + 0.08 * care - 0.15 * harm)
+     return EthicalVerdict(
+         "care",
+         passed=harm < 2,
+         score=round(max(0.0, score), 3),
+         reasoning=f"Care: {care}, Harm: {harm}",
+     )
+
+
+ def _ubuntu(text: str, context: str = "") -> EthicalVerdict:
+     """Ubuntu — 'I am because we are'. Communal impact."""
+     communal = ["together", "community", "shared", "collective", "mutual",
+                 "cooperat", "collaborat", "inclusive", "solidarity", "belong"]
+     divisive = ["exclude", "isolat", "dominat", "superior", "inferior",
+                 "divide", "segregat"]
+
+     text_lower = text.lower()
+     comm = sum(1 for w in communal if w in text_lower)
+     div = sum(1 for w in divisive if w in text_lower)
+
+     score = min(1.0, 0.6 + 0.08 * comm - 0.2 * div)
+     return EthicalVerdict(
+         "ubuntu",
+         passed=div == 0,
+         score=round(max(0.0, score), 3),
+         reasoning=f"Communal: {comm}, Divisive: {div}",
+     )
+
+
+ def _indigenous_reciprocity(text: str, context: str = "") -> EthicalVerdict:
+     """Indigenous reciprocity — balance with the broader ecosystem."""
+     reciprocal = ["balance", "sustain", "renew", "steward", "respect",
+                   "harmony", "cycle", "restore", "preserve", "gratitude"]
+     extractive = ["exploit", "deplete", "waste", "consume", "destroy",
+                   "dominate", "extract"]
+
+     text_lower = text.lower()
+     rec = sum(1 for w in reciprocal if w in text_lower)
+     ext = sum(1 for w in extractive if w in text_lower)
+
+     score = min(1.0, 0.6 + 0.08 * rec - 0.2 * ext)
+     return EthicalVerdict(
+         "indigenous_reciprocity",
+         passed=ext == 0,
+         score=round(max(0.0, score), 3),
+         reasoning=f"Reciprocal: {rec}, Extractive: {ext}",
+     )
+
+
+ # All frameworks
+ _FRAMEWORKS = [
+     _utilitarian, _deontological, _virtue,
+     _care, _ubuntu, _indigenous_reciprocity,
+ ]
+
+
+ # ================================================================
+ # AEGIS Core
+ # ================================================================
+ class AEGIS:
+     """Adaptive Ethical Governance & Integrity System.
+
+     Evaluates reasoning outputs through 6 ethical frameworks and
+     maintains a running alignment score (eta).
+     """
+
+     def __init__(self, veto_threshold: float = 0.3):
+         self.veto_threshold = veto_threshold  # Below this = blocked
+         self.eta: float = 0.8  # Running alignment score
+         self.eta_history: List[float] = []
+         self.veto_count: int = 0
+         self.total_evaluations: int = 0
+
+     def evaluate(self, text: str, context: str = "",
+                  adapter: str = "") -> Dict:
+         """Run full ethical evaluation on a text.
+
+         Returns:
+             Dict with eta score, verdicts, and veto status.
+         """
+         self.total_evaluations += 1
+
+         # Run all 6 frameworks
+         verdicts = [f(text, context) for f in _FRAMEWORKS]
+
+         # Compute eta as weighted mean of framework scores
+         weights = [0.20, 0.25, 0.15, 0.15, 0.13, 0.12]  # deontological highest
+         eta_instant = sum(w * v.score for w, v in zip(weights, verdicts))
+
+         # Exponential moving average for stability
+         alpha = 0.3
+         self.eta = alpha * eta_instant + (1 - alpha) * self.eta
+         self.eta_history.append(round(self.eta, 4))
+         if len(self.eta_history) > 200:
+             self.eta_history = self.eta_history[-200:]
+
+         # Veto check
+         vetoed = eta_instant < self.veto_threshold
+         hard_veto = not verdicts[1].passed  # Deontological hard fail
+         if vetoed or hard_veto:
+             self.veto_count += 1
+
+         return {
+             "eta": round(self.eta, 4),
+             "eta_instant": round(eta_instant, 4),
+             "vetoed": vetoed or hard_veto,
+             "veto_reason": self._veto_reason(verdicts) if (vetoed or hard_veto) else None,
+             "frameworks": {
+                 v.framework: {
+                     "passed": v.passed,
+                     "score": v.score,
+                     "reasoning": v.reasoning,
+                 }
+                 for v in verdicts
+             },
+             "adapter": adapter,
+             "timestamp": time.time(),
+         }
+
+     def quick_check(self, text: str) -> Tuple[bool, float]:
+         """Fast safety check without full evaluation.
+
+         Returns (is_safe, confidence).
+         """
+         if _DUAL_USE_PATTERNS.search(text):
+             return False, 0.9
+         if _HARMFUL_CONTENT.search(text):
+             return False, 0.95
+         if _MANIPULATION_PATTERNS.search(text):
+             return False, 0.8
+         return True, 0.7
+
+     def alignment_trend(self) -> str:
+         """Get the trend of ethical alignment."""
+         if len(self.eta_history) < 5:
+             return "insufficient_data"
+         recent = self.eta_history[-10:]
+         slope = recent[-1] - recent[0]
+         if slope > 0.03:
+             return "improving"
+         elif slope < -0.03:
+             return "declining"
+         return "stable"
+
+     def get_state(self) -> Dict:
+         return {
+             "eta": round(self.eta, 4),
+             "alignment_trend": self.alignment_trend(),
+             "total_evaluations": self.total_evaluations,
+             "veto_count": self.veto_count,
+             "veto_rate": round(self.veto_count / max(1, self.total_evaluations), 4),
+         }
+
+     def to_dict(self) -> Dict:
+         return {
+             "eta": self.eta,
+             "eta_history": self.eta_history[-50:],
+             "veto_count": self.veto_count,
+             "total_evaluations": self.total_evaluations,
+             "veto_threshold": self.veto_threshold,
+         }
+
+     @classmethod
+     def from_dict(cls, d: Dict) -> "AEGIS":
+         a = cls(veto_threshold=d.get("veto_threshold", 0.3))
+         a.eta = d.get("eta", 0.8)
+         a.eta_history = d.get("eta_history", [])
+         a.veto_count = d.get("veto_count", 0)
+         a.total_evaluations = d.get("total_evaluations", 0)
+         return a
+
+     def _veto_reason(self, verdicts: List[EthicalVerdict]) -> str:
+         failed = [v for v in verdicts if not v.passed]
+         if not failed:
+             return "Low aggregate score"
+         return "; ".join(f"{v.framework}: {v.reasoning}" for v in failed)
reasoning_forge/agents/__init__.py ADDED
@@ -0,0 +1,26 @@
+ """
+ Reasoning Forge Agents
+
+ Each agent analyzes concepts from a distinct intellectual perspective,
+ producing substantive domain-specific reasoning.
+ """
+
+ from reasoning_forge.agents.base_agent import ReasoningAgent
+ from reasoning_forge.agents.newton_agent import NewtonAgent
+ from reasoning_forge.agents.quantum_agent import QuantumAgent
+ from reasoning_forge.agents.ethics_agent import EthicsAgent
+ from reasoning_forge.agents.philosophy_agent import PhilosophyAgent
+ from reasoning_forge.agents.davinci_agent import DaVinciAgent
+ from reasoning_forge.agents.empathy_agent import EmpathyAgent
+ from reasoning_forge.agents.critic_agent import CriticAgent
+
+ __all__ = [
+     "ReasoningAgent",
+     "NewtonAgent",
+     "QuantumAgent",
+     "EthicsAgent",
+     "PhilosophyAgent",
+     "DaVinciAgent",
+     "EmpathyAgent",
+     "CriticAgent",
+ ]
reasoning_forge/agents/base_agent.py ADDED
@@ -0,0 +1,113 @@
+ """
+ Base class for all reasoning agents in the forge.
+
+ Each agent must implement analyze() and get_analysis_templates().
+ The base class provides keyword matching and template selection utilities.
+ """
+
+ from abc import ABC, abstractmethod
+ import random
+ import re
+
+
+ class ReasoningAgent(ABC):
+     """Abstract base class for all reasoning agents."""
+
+     name: str = "BaseAgent"
+     perspective: str = "general"
+
+     def __init__(self):
+         self._templates = self.get_analysis_templates()
+         self._keyword_map = self.get_keyword_map()
+
+     @abstractmethod
+     def analyze(self, concept: str) -> str:
+         """Analyze a concept from this agent's perspective.
+
+         Args:
+             concept: The concept text to analyze.
+
+         Returns:
+             A substantive analysis string from this agent's perspective.
+         """
+         raise NotImplementedError
+
+     @abstractmethod
+     def get_analysis_templates(self) -> list[str]:
+         """Return diverse analysis templates.
+
+         Each template should contain a {concept} placeholder and produce
+         genuine expert-level reasoning, not placeholder text.
+
+         Returns:
+             List of template strings.
+         """
+         raise NotImplementedError
+
+     def get_keyword_map(self) -> dict[str, list[int]]:
+         """Return a mapping of keywords to preferred template indices.
+
+         Override in subclasses to steer template selection based on
+         concept content. Keys are lowercase keywords/phrases, values
+         are lists of template indices that work well for that keyword.
+
+         Returns:
+             Dictionary mapping keywords to template index lists.
+         """
+         return {}
+
+     def select_template(self, concept: str) -> str:
+         """Select the best template for the given concept.
+
+         Uses keyword matching to find relevant templates. Falls back
+         to random selection if no keywords match.
+
+         Args:
+             concept: The concept text.
+
+         Returns:
+             A single template string.
+         """
+         concept_lower = concept.lower()
+         scored_indices: dict[int, int] = {}
+
+         for keyword, indices in self._keyword_map.items():
+             if keyword in concept_lower:
+                 for idx in indices:
+                     if 0 <= idx < len(self._templates):
+                         scored_indices[idx] = scored_indices.get(idx, 0) + 1
+
+         if scored_indices:
+             max_score = max(scored_indices.values())
+             best = [i for i, s in scored_indices.items() if s == max_score]
+             chosen = random.choice(best)
+             return self._templates[chosen]
+
+         return random.choice(self._templates)
+
+     def extract_key_terms(self, concept: str) -> list[str]:
+         """Extract significant terms from the concept for template filling.
+
+         Args:
+             concept: The concept text.
+
+         Returns:
+             List of key terms found in the concept.
+         """
+         stop_words = {
+             "the", "a", "an", "is", "are", "was", "were", "be", "been",
+             "being", "have", "has", "had", "do", "does", "did", "will",
+             "would", "could", "should", "may", "might", "can", "shall",
+             "of", "in", "to", "for", "with", "on", "at", "from", "by",
+             "about", "as", "into", "through", "during", "before", "after",
+             "above", "below", "between", "and", "but", "or", "nor", "not",
+             "so", "yet", "both", "either", "neither", "each", "every",
+             "this", "that", "these", "those", "it", "its", "they", "them",
+             "their", "we", "our", "you", "your", "he", "she", "his", "her",
+             "how", "what", "when", "where", "which", "who", "why",
+         }
+         words = re.findall(r'\b[a-zA-Z]{3,}\b', concept.lower())
+         return [w for w in words if w not in stop_words]
+
+     def __repr__(self) -> str:
+         return f"{self.__class__.__name__}(name={self.name!r}, perspective={self.perspective!r})"
reasoning_forge/agents/critic_agent.py ADDED
@@ -0,0 +1,337 @@
1
+ """
2
+ Critic Agent - Evaluates all other agents' outputs for quality, accuracy, and completeness.
3
+
4
+ Checks logical clarity, conceptual accuracy, identifies redundancy between
5
+ perspectives, finds missing perspectives, and suggests improvements.
6
+ Returns structured critique with scores.
7
+ """
8
+
9
+ import re
10
+ from reasoning_forge.agents.base_agent import ReasoningAgent
11
+
12
+
13
+ class CriticAgent(ReasoningAgent):
14
+ name = "Critic"
15
+ perspective = "meta_evaluative"
16
+
17
+ def get_analysis_templates(self) -> list[str]:
18
+ # The critic does not use templates in the same way -- it evaluates
19
+ # other agents' outputs. These templates are used for framing the
20
+ # overall critique report.
21
+ return [
22
+ "Evaluating the ensemble analysis of '{concept}'.",
23
+ ]
24
+
25
+ def analyze(self, concept: str) -> str:
26
+ # The critic's primary method is evaluate_ensemble, not analyze.
27
+ return f"Critic agent requires ensemble input. Use evaluate_ensemble() for '{concept}'."
28
+
29
+ def evaluate_ensemble(
30
+ self,
31
+ concept: str,
32
+ analyses: dict[str, str],
33
+ ) -> dict:
34
+ """Evaluate all agent analyses and produce a structured critique.
35
+
36
+ Args:
37
+ concept: The original concept being analyzed.
38
+ analyses: Dict mapping agent_name -> analysis_text.
39
+
40
+ Returns:
41
+ Dictionary with scores, redundancies, gaps, and suggestions.
42
+ """
43
+ critique = {
44
+ "concept": concept,
45
+ "agent_scores": {},
46
+ "redundancies": [],
47
+ "missing_perspectives": [],
48
+ "improvement_suggestions": [],
49
+ "overall_quality": 0.0,
50
+ }
51
+
52
+ total_clarity = 0.0
53
+ total_accuracy = 0.0
54
+ agent_count = len(analyses)
55
+
56
+ for agent_name, text in analyses.items():
57
+ clarity = self._score_logical_clarity(text)
58
+ accuracy = self._score_conceptual_accuracy(text, concept)
59
+ critique["agent_scores"][agent_name] = {
60
+ "logical_clarity": round(clarity, 2),
61
+ "conceptual_accuracy": round(accuracy, 2),
62
+ "combined": round((clarity + accuracy) / 2, 2),
63
+ }
64
+ total_clarity += clarity
65
+ total_accuracy += accuracy
66
+
67
+ # Detect redundancy between perspectives
68
+ critique["redundancies"] = self._detect_redundancy(analyses)
69
+
70
+ # Identify missing perspectives
71
+ critique["missing_perspectives"] = self._find_missing_perspectives(
72
+ concept, analyses
73
+ )
74
+
75
+ # Generate improvement suggestions
76
+ critique["improvement_suggestions"] = self._suggest_improvements(
77
+ concept, analyses, critique["agent_scores"]
78
+ )
79
+
80
+ # Overall quality score
81
+ if agent_count > 0:
82
+ avg_clarity = total_clarity / agent_count
83
+ avg_accuracy = total_accuracy / agent_count
84
+ redundancy_penalty = len(critique["redundancies"]) * 0.03
85
+ gap_penalty = len(critique["missing_perspectives"]) * 0.05
86
+ raw_score = (avg_clarity + avg_accuracy) / 2 - redundancy_penalty - gap_penalty
87
+ critique["overall_quality"] = round(max(0.0, min(1.0, raw_score)), 2)
88
+
89
+ return critique
90
+
91
+ def _score_logical_clarity(self, text: str) -> float:
92
+ """Score the logical clarity of an analysis on a 0-1 scale.
93
+
94
+ Heuristics:
95
+ - Presence of logical connectives (therefore, because, however, thus)
96
+ - Sentence structure variety (not all same length)
97
+ - Specificity (concrete terms vs vague language)
98
+ - Reasonable length (not too terse, not padded)
99
+ """
100
+ score = 0.5 # baseline
101
+
102
+ # Logical connectives indicate reasoning structure
103
+ connectives = [
104
+ "because", "therefore", "thus", "however", "although",
105
+ "consequently", "since", "given that", "implies",
106
+ "it follows", "this means", "as a result", "in contrast",
107
+ "specifically", "for example", "in particular",
108
+ ]
109
+ connective_count = sum(1 for c in connectives if c in text.lower())
110
+ score += min(0.2, connective_count * 0.025)
111
+
112
+ # Sentence variety (std dev of sentence lengths)
113
+ sentences = [s.strip() for s in re.split(r'[.!?]+', text) if s.strip()]
114
+ if len(sentences) >= 3:
115
+ lengths = [len(s.split()) for s in sentences]
116
+ mean_len = sum(lengths) / len(lengths)
117
+ variance = sum((l - mean_len) ** 2 for l in lengths) / len(lengths)
118
+ std_dev = variance ** 0.5
119
+ if 3 < std_dev < 15:
120
+ score += 0.1
121
+ elif std_dev >= 1:
122
+ score += 0.05
123
+
124
+ # Penalize vague language
125
+ vague_terms = [
126
+ "things", "stuff", "a lot", "very", "really",
127
+ "kind of", "sort of", "basically", "obviously",
128
+ ]
129
+ vague_count = sum(1 for v in vague_terms if v in text.lower())
130
+ score -= vague_count * 0.03
131
+
132
+ # Length check (reward substantive, penalize extreme)
133
+ word_count = len(text.split())
134
+ if 80 <= word_count <= 300:
135
+ score += 0.1
136
+ elif 50 <= word_count < 80 or 300 < word_count <= 500:
137
+ score += 0.05
138
+ elif word_count < 30:
139
+ score -= 0.15
140
+
141
+ return max(0.0, min(1.0, score))
142
+
143
+     def _score_conceptual_accuracy(self, text: str, concept: str) -> float:
+         """Score how well the analysis engages with the actual concept.
+
+         Heuristics:
+         - References to the concept terms
+         - Domain-appropriate vocabulary
+         - Absence of generic placeholder language
+         """
+         score = 0.5
+
+         concept_terms = set(re.findall(r'\b[a-zA-Z]{4,}\b', concept.lower()))
+         text_lower = text.lower()
+
+         # Check that concept terms appear in the analysis
+         if concept_terms:
+             found = sum(1 for t in concept_terms if t in text_lower)
+             coverage = found / len(concept_terms)
+             score += coverage * 0.15
+
+         # Penalize generic placeholder language
+         placeholders = [
+             "this concept can be approached",
+             "from this perspective we see",
+             "looking at this through",
+             "applying this lens",
+             "in conclusion",
+             "to summarize",
+         ]
+         placeholder_count = sum(1 for p in placeholders if p in text_lower)
+         score -= placeholder_count * 0.05
+
+         # Reward specific domain vocabulary (indicates substantive analysis)
+         domain_terms = [
+             "mechanism", "cause", "effect", "evidence", "principle",
+             "constraint", "trade-off", "interaction", "dynamic",
+             "structure", "function", "process", "system", "pattern",
+             "relationship", "variable", "outcome", "hypothesis",
+             "implication", "assumption", "framework", "model",
+         ]
+         domain_count = sum(1 for d in domain_terms if d in text_lower)
+         score += min(0.2, domain_count * 0.02)
+
+         # Reward analysis length proportional to concept complexity
+         concept_word_count = len(concept.split())
+         text_word_count = len(text.split())
+         if text_word_count >= concept_word_count * 3:
+             score += 0.1
+
+         return max(0.0, min(1.0, score))
+
+     def _detect_redundancy(self, analyses: dict[str, str]) -> list[str]:
+         """Detect thematic redundancy between agent analyses."""
+         redundancies = []
+         agent_names = list(analyses.keys())
+
+         for i in range(len(agent_names)):
+             for j in range(i + 1, len(agent_names)):
+                 name_a = agent_names[i]
+                 name_b = agent_names[j]
+                 overlap = self._compute_content_overlap(
+                     analyses[name_a], analyses[name_b]
+                 )
+                 if overlap > 0.35:
+                     redundancies.append(
+                         f"{name_a} and {name_b} share significant thematic overlap "
+                         f"({overlap:.0%}). Consider diversifying their angles of analysis."
+                     )
+         return redundancies
+
+     def _compute_content_overlap(self, text_a: str, text_b: str) -> float:
+         """Compute Jaccard similarity of significant word sets."""
+         stop_words = {
+             "the", "a", "an", "is", "are", "was", "were", "be", "been",
+             "being", "have", "has", "had", "do", "does", "did", "will",
+             "would", "could", "should", "may", "might", "can", "shall",
+             "of", "in", "to", "for", "with", "on", "at", "from", "by",
+             "about", "as", "into", "through", "during", "before", "after",
+             "and", "but", "or", "nor", "not", "so", "yet", "both",
+             "this", "that", "these", "those", "it", "its", "they", "them",
+             "their", "we", "our", "you", "your", "he", "she", "his", "her",
+         }
+         words_a = {
+             w for w in re.findall(r'\b[a-z]{4,}\b', text_a.lower())
+             if w not in stop_words
+         }
+         words_b = {
+             w for w in re.findall(r'\b[a-z]{4,}\b', text_b.lower())
+             if w not in stop_words
+         }
+         if not words_a or not words_b:
+             return 0.0
+         intersection = words_a & words_b
+         union = words_a | words_b
+         return len(intersection) / len(union)
+
+     def _find_missing_perspectives(
+         self, concept: str, analyses: dict[str, str]
+     ) -> list[str]:
+         """Identify perspectives that are absent from the ensemble."""
+         missing = []
+         all_text = " ".join(analyses.values()).lower()
+
+         perspective_checks = [
+             ("temporal/historical", [
+                 "history", "historical", "evolution", "over time", "timeline",
+                 "past", "trajectory", "precedent", "legacy",
+             ]),
+             ("quantitative/statistical", [
+                 "statistic", "data", "quantif", "measur", "metric",
+                 "number", "percentage", "rate", "frequency",
+             ]),
+             ("ecological/environmental", [
+                 "environment", "ecolog", "sustainab", "ecosystem",
+                 "resource", "footprint", "biodiversity", "pollution",
+             ]),
+             ("economic/financial", [
+                 "economic", "financial", "cost", "benefit", "market",
+                 "incentive", "investment", "capital", "trade",
+             ]),
+             ("legal/regulatory", [
+                 "legal", "law", "regulat", "compliance", "policy",
+                 "legislation", "governance", "jurisdiction",
+             ]),
+             ("educational/pedagogical", [
+                 "learn", "teach", "education", "pedagog", "curriculum",
+                 "training", "skill", "literacy",
+             ]),
+         ]
+
+         for perspective_name, indicators in perspective_checks:
+             found = sum(1 for ind in indicators if ind in all_text)
+             if found < 2:
+                 missing.append(
+                     f"The ensemble lacks a {perspective_name} perspective. "
+                     f"Consider how '{concept}' relates to {perspective_name} dimensions."
+                 )
+
+         return missing[:3]  # Limit to top 3 gaps
+
+     def _suggest_improvements(
+         self,
+         concept: str,
+         analyses: dict[str, str],
+         scores: dict[str, dict],
+     ) -> list[str]:
+         """Generate actionable improvement suggestions."""
+         suggestions = []
+
+         # Identify weakest agent
+         if scores:
+             weakest = min(scores.items(), key=lambda x: x[1]["combined"])
+             if weakest[1]["combined"] < 0.6:
+                 suggestions.append(
+                     f"The {weakest[0]} analysis scored lowest ({weakest[1]['combined']:.2f}). "
+                     f"It would benefit from more specific engagement with the concept's "
+                     f"concrete details rather than abstract framing."
+                 )
+
+         # Check for concrete examples
+         all_text = " ".join(analyses.values()).lower()
+         example_indicators = ["for example", "for instance", "such as", "e.g.", "consider"]
+         example_count = sum(1 for e in example_indicators if e in all_text)
+         if example_count < 2:
+             suggestions.append(
+                 "The ensemble would benefit from more concrete examples and "
+                 "illustrations. Abstract reasoning without grounding in specifics "
+                 "is less persuasive and harder to verify."
+             )
+
+         # Check for cross-perspective dialogue; only count an agent's name
+         # when it appears in *another* agent's analysis, so that an agent
+         # mentioning its own theme (e.g. "Empathy" using the word "empathy")
+         # does not count as a cross-reference.
+         cross_references = sum(
+             1 for name in analyses.keys()
+             if any(
+                 name.lower() in text.lower()
+                 for other, text in analyses.items()
+                 if other != name
+             )
+         )
+         if cross_references < 2:
+             suggestions.append(
+                 "The analyses operate largely in isolation. The synthesis would benefit "
+                 "from explicit cross-referencing between perspectives -- showing where "
+                 "they agree, disagree, or complement each other."
+             )
+
+         # Check for actionable takeaways
+         action_indicators = [
+             "should", "must", "recommend", "suggest", "action",
+             "implement", "strategy", "step", "practice",
+         ]
+         action_count = sum(1 for a in action_indicators if a in all_text)
+         if action_count < 3:
+             suggestions.append(
+                 "The ensemble is more diagnostic than prescriptive. Adding concrete, "
+                 "actionable recommendations would increase practical value."
+             )
+
+         return suggestions[:4]  # Limit to top 4 suggestions
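The redundancy check above boils down to a plain Jaccard similarity over filtered word sets. A minimal standalone sketch of that heuristic, with the stopword set abbreviated for brevity (the module's `stop_words` list is much longer):

```python
import re

# Abbreviated stopword set; the real _compute_content_overlap uses a larger one.
STOP_WORDS = {"the", "that", "with", "this", "from", "have"}

def content_overlap(text_a: str, text_b: str) -> float:
    """Jaccard similarity over significant (4+ letter, non-stopword) words."""
    def significant(text: str) -> set[str]:
        return {
            w for w in re.findall(r"\b[a-z]{4,}\b", text.lower())
            if w not in STOP_WORDS
        }
    a, b = significant(text_a), significant(text_b)
    if not a or not b:
        return 0.0  # guard against empty inputs, as in the method above
    return len(a & b) / len(a | b)

print(content_overlap("markets allocate scarce resources",
                      "markets often waste scarce resources"))  # 0.5
```

With the 0.35 threshold used by `_detect_redundancy`, these two sentences (overlap 0.5) would be flagged as redundant.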
reasoning_forge/agents/davinci_agent.py ADDED
@@ -0,0 +1,302 @@
+ """
+ DaVinci Agent - Analyzes concepts through creative, inventive, and cross-domain reasoning.
+
+ Focuses on cross-domain connections, biomimicry and nature-inspired solutions,
+ iterative improvement possibilities, visual/spatial reasoning, and novel
+ combinations of existing ideas.
+ """
+
+ from reasoning_forge.agents.base_agent import ReasoningAgent
+
+
+ class DaVinciAgent(ReasoningAgent):
+     name = "DaVinci"
+     perspective = "creative_and_inventive"
+
+     def get_analysis_templates(self) -> list[str]:
+         return [
+             # 0 - Cross-domain analogy
+             (
+                 "Drawing cross-domain connections to '{concept}': the deepest insights often "
+                 "come from recognizing structural similarities between apparently unrelated "
+                 "fields. A river delta and a lightning bolt share the same branching "
+                 "optimization geometry. A market economy and an ant colony share the same "
+                 "decentralized coordination logic. For '{concept}', the creative question "
+                 "is: what other domain exhibits the same deep structure? If we map the "
+                 "entities, relationships, and dynamics of '{concept}' onto those of the "
+                 "analogous domain, which features are preserved (revealing shared principles) "
+                 "and which break (revealing domain-specific constraints)? The preserved "
+                 "features point toward universal laws; the broken features point toward "
+                 "opportunities for domain-specific innovation."
+             ),
+             # 1 - Biomimicry lens
+             (
+                 "Examining '{concept}' through biomimicry: nature has been solving design "
+                 "problems for 3.8 billion years through evolutionary optimization. Bones "
+                 "achieve maximum strength with minimum material by using trabecular "
+                 "architecture -- hollow struts arranged along stress lines. Spider silk "
+                 "achieves tensile strength exceeding steel at a fraction of the weight "
+                 "through hierarchical nanostructure. Termite mounds maintain constant "
+                 "internal temperature without energy input through passive ventilation "
+                 "design. For '{concept}', the biomimicry question is: what organism or "
+                 "ecosystem has already solved an analogous problem, and what principle "
+                 "does its solution exploit that we have not yet applied?"
+             ),
+             # 2 - Combinatorial invention
+             (
+                 "Approaching '{concept}' through combinatorial creativity: most inventions "
+                 "are novel combinations of existing elements. The printing press combined "
+                 "the wine press, movable type, oil-based ink, and paper. The smartphone "
+                 "combined a phone, camera, GPS, accelerometer, and internet browser into "
+                 "a device that is qualitatively different from any of its components. For "
+                 "'{concept}', the combinatorial strategy asks: what are the elemental "
+                 "components, and what happens when we recombine them in unusual ways? "
+                 "Pair each element with every other element and ask whether the combination "
+                 "produces something valuable. The most productive combinations are often "
+                 "between elements from distant categories that no one thought to connect."
+             ),
+             # 3 - Inversion and reversal
+             (
+                 "Inverting '{concept}': one of the most powerful creative strategies is "
+                 "systematic inversion -- taking every assumption and reversing it. If the "
+                 "current approach pushes, try pulling. If it adds, try subtracting. If it "
+                 "centralizes, try distributing. If it speeds up, try slowing down. Many "
+                 "breakthrough solutions came from inverting an assumption everyone took for "
+                 "granted. Vacuum cleaners worked by pushing air until Dyson inverted the "
+                 "flow. Assembly lines brought work to workers; Toyota inverted this by "
+                 "bringing workers to work (cellular manufacturing). For '{concept}', "
+                 "systematically listing and inverting each assumption reveals a space of "
+                 "unconventional approaches that conventional thinking renders invisible."
+             ),
+             # 4 - Visual-spatial reasoning
+             (
+                 "Visualizing the spatial architecture of '{concept}': representing abstract "
+                 "relationships as spatial structures makes hidden patterns visible. If we "
+                 "map the components of '{concept}' to nodes and their relationships to "
+                 "edges, the resulting graph reveals clustering (tightly connected subgroups), "
+                 "bridges (elements connecting otherwise separate clusters), hubs (elements "
+                 "with many connections), and periphery (weakly connected elements). The "
+                 "topology of this graph -- its shape, density, and symmetry -- encodes "
+                 "information about the concept's structure that verbal description alone "
+                 "cannot capture. Hub nodes are high-leverage intervention points; bridges "
+                 "are fragile connections whose failure would fragment the system."
+             ),
+             # 5 - Constraint as catalyst
+             (
+                 "Using constraints as creative catalysts for '{concept}': rather than seeing "
+                 "limitations as obstacles, use them as forcing functions for innovation. "
+                 "Twitter's 140-character limit forced a new style of writing. The sonnet's "
+                 "14-line constraint forced poetic compression. Budget constraints force "
+                 "elegant engineering. For '{concept}', deliberately imposing additional "
+                 "constraints -- what if we had to solve this with half the resources? In "
+                 "one-tenth the time? With no electricity? For a user who cannot see? -- "
+                 "often breaks through conventional thinking by invalidating the default "
+                 "approach and forcing genuinely creative alternatives."
+             ),
+             # 6 - First principles reconstruction
+             (
+                 "Reconstructing '{concept}' from first principles: strip away all inherited "
+                 "conventions, historical accidents, and 'we have always done it this way' "
+                 "accretions. What remains when we reduce the problem to its fundamental "
+                 "requirements? Starting from physical laws, human needs, and mathematical "
+                 "constraints, what is the minimum viable solution? Often the gap between "
+                 "this first-principles design and the current state reveals enormous "
+                 "inefficiency that is invisible from within the conventional framework. "
+                 "SpaceX re-derived rocket design from first principles and found that "
+                 "materials cost only 2% of the final price. For '{concept}', the first-"
+                 "principles question is: if we were designing this from scratch today, "
+                 "knowing what we know, what would it look like?"
+             ),
+             # 7 - Morphological analysis
+             (
+                 "Applying morphological analysis to '{concept}': decompose the concept into "
+                 "its independent dimensions, list the possible values for each dimension, "
+                 "and then systematically explore the combinatorial space. If '{concept}' has "
+                 "five dimensions with four options each, the morphological space contains "
+                 "1024 configurations. Most are impractical, but a systematic sweep guarantees "
+                 "that no promising combination is overlooked by the biases of free-form "
+                 "brainstorming. The power of morphological analysis is that it converts "
+                 "creative search from a haphazard process into a structured exploration, "
+                 "surfacing configurations that no one would think of spontaneously because "
+                 "they cross conventional category boundaries."
+             ),
+             # 8 - Prototype thinking
+             (
+                 "Applying prototype thinking to '{concept}': instead of perfecting a plan "
+                 "before executing, build the quickest possible embodiment of the core idea "
+                 "and learn from its failures. The prototype is not the solution but a "
+                 "question asked in physical form: 'does this work?' Each prototype cycle "
+                 "-- build, test, learn, rebuild -- compresses the feedback loop and "
+                 "generates knowledge that purely theoretical analysis cannot provide. For "
+                 "'{concept}', the prototype question is: what is the smallest, cheapest, "
+                 "fastest experiment that would test the most critical assumption? Building "
+                 "that experiment, even if crude, will teach us more than months of "
+                 "theoretical refinement."
+             ),
+             # 9 - Emergent properties through scale
+             (
+                 "Exploring emergent properties of '{concept}' at different scales: systems "
+                 "often exhibit qualitatively new behavior when scaled up or down. A single "
+                 "neuron computes nothing interesting; a billion networked neurons produce "
+                 "consciousness. A single transaction is trivial; billions of transactions "
+                 "produce market dynamics. For '{concept}', the scale question asks: what "
+                 "happens when we multiply the instances by a thousand? By a million? What "
+                 "new phenomena emerge at scale that are absent at the individual level? "
+                 "Conversely, what happens when we reduce to a single instance? Scale "
+                 "transitions often reveal the concept's most interesting properties."
+             ),
+             # 10 - Da Vinci's sfumato (ambiguity as resource)
+             (
+                 "Embracing the sfumato of '{concept}': Leonardo da Vinci practiced sfumato "
+                 "-- the technique of leaving edges soft and ambiguous rather than sharply "
+                 "defined. In creative reasoning, maintaining productive ambiguity resists "
+                 "premature closure and keeps the interpretive space open. The undefined "
+                 "edges of '{concept}' are not defects but fertile zones where new "
+                 "connections can form. Attempts to define everything precisely may satisfy "
+                 "the desire for clarity but kill the creative potential that lives in "
+                 "the ambiguous spaces between categories. Sit with the ambiguity long "
+                 "enough and patterns emerge that rigid definitions would have prevented."
+             ),
+             # 11 - Lateral thinking transfer
+             (
+                 "Applying lateral thinking to '{concept}': Edward de Bono's lateral "
+                 "thinking techniques include random entry (inject an unrelated concept "
+                 "and force a connection), provocation (make a deliberately absurd statement "
+                 "and extract useful ideas from it), and challenge (question why things are "
+                 "done the current way). For '{concept}', a random entry might connect it "
+                 "to deep-sea bioluminescence, medieval cathedral construction, or jazz "
+                 "improvisation. The forced connection between '{concept}' and a random "
+                 "domain breaks habitual thought patterns and creates novel pathways that "
+                 "logical deduction alone cannot reach."
+             ),
+             # 12 - Fractal self-similarity
+             (
+                 "Examining '{concept}' for fractal self-similarity: does the same pattern "
+                 "recur at different scales? Coastlines look similar whether photographed "
+                 "from a satellite or a drone. Organizational hierarchies replicate the same "
+                 "power dynamics from teams to departments to divisions. Blood vessel "
+                 "networks branch according to the same rules from arteries to capillaries. "
+                 "If '{concept}' exhibits self-similarity, then understanding the pattern at "
+                 "one scale gives us understanding at all scales. A single well-studied "
+                 "instance contains the blueprint for the entire hierarchy, and interventions "
+                 "that work at one scale can be adapted to work at others."
+             ),
+             # 13 - Negative space analysis
+             (
+                 "Analyzing the negative space of '{concept}': just as a sculptor defines a "
+                 "form by removing material, we can define '{concept}' by examining what it "
+                 "is not. What has been excluded, ignored, or left unsaid? The negative space "
+                 "-- the complement of the concept -- often contains crucial information. "
+                 "What alternatives were considered and rejected? What possibilities does "
+                 "the current framing render invisible? The adjacent possible (the set of "
+                 "things that are one step away from existing) is often more interesting "
+                 "than the concept itself, because it represents the immediate frontier "
+                 "of innovation."
+             ),
+             # 14 - Systems of constraints (Rube Goldberg inversion)
+             (
+                 "Simplifying '{concept}' by subtracting rather than adding: the natural "
+                 "tendency in design is to add features, layers, and complexity. The "
+                 "harder and more valuable creative move is subtraction: what can we "
+                 "remove while preserving or improving function? Antoine de Saint-Exupery "
+                 "said perfection is achieved not when there is nothing left to add, but "
+                 "when there is nothing left to take away. For '{concept}', the subtraction "
+                 "exercise asks: what happens if we remove each component in turn? Which "
+                 "removals are catastrophic (essential components) and which are beneficial "
+                 "(removing parasitic complexity)? The minimal viable version is often "
+                 "more powerful than the maximal one."
+             ),
+             # 15 - TRIZ inventive principles
+             (
+                 "Applying TRIZ inventive principles to '{concept}': Genrich Altshuller's "
+                 "analysis of 200,000 patents revealed 40 recurring inventive principles. "
+                 "Segmentation (divide a monolithic system into parts). Extraction (remove "
+                 "a problematic element and deal with it separately). Local quality (make "
+                 "each part optimized for its local function rather than forcing uniformity). "
+                 "Asymmetry (break the symmetry of a symmetric design to improve function). "
+                 "Nesting (place one object inside another). Prior action (perform required "
+                 "changes before they are needed). For '{concept}', systematically applying "
+                 "each principle generates a structured menu of inventive strategies that "
+                 "goes far beyond unconstrained brainstorming."
+             ),
+             # 16 - Synesthesia and cross-modal thinking
+             (
+                 "Engaging cross-modal perception for '{concept}': what does this concept "
+                 "sound like? What texture does it have? What temperature? What color? "
+                 "Cross-modal associations -- thinking about a concept through sensory "
+                 "channels that do not literally apply -- activate neural pathways that "
+                 "linear verbal reasoning does not reach. Kandinsky heard colors and saw "
+                 "sounds; this synesthetic thinking produced radically new art. For "
+                 "'{concept}', translating it into sensory terms (the rhythm of its "
+                 "processes, the texture of its interactions, the weight of its consequences) "
+                 "can reveal structural features that abstract analysis misses."
+             ),
+             # 17 - Nature's design patterns
+             (
+                 "Identifying nature's design patterns in '{concept}': evolution has converged "
+                 "on certain solutions repeatedly because they are optimal under common "
+                 "constraints. Hexagonal packing (beehives, basalt columns) maximizes area "
+                 "with minimum material. Branching networks (trees, rivers, lungs, lightning) "
+                 "optimize distribution from a source to a volume. Spiral growth (shells, "
+                 "galaxies, hurricanes) manages expansion while maintaining structural "
+                 "integrity. For '{concept}', asking which of nature's recurring design "
+                 "patterns applies suggests time-tested architectures that human design "
+                 "has not yet exploited."
+             ),
+             # 18 - Bisociation and humor
+             (
+                 "Applying Koestler's bisociation to '{concept}': Arthur Koestler proposed "
+                 "that creativity, humor, and scientific discovery share the same cognitive "
+                 "mechanism: bisociation -- the simultaneous perception of a situation in "
+                 "two habitually incompatible frames of reference. The collision of frames "
+                 "produces a flash of insight (in science), a punchline (in humor), or a "
+                 "novel artifact (in art). For '{concept}', identifying two incompatible "
+                 "but individually valid frames and forcing them to coexist generates the "
+                 "cognitive tension from which genuinely original ideas spring. The more "
+                 "distant the frames, the more surprising and potentially valuable the "
+                 "bisociative insight."
+             ),
+             # 19 - Future archaeology
+             (
+                 "Practicing future archaeology on '{concept}': imagine examining the "
+                 "artifacts of this concept a hundred years from now, from a future "
+                 "civilization's perspective. What would they find elegant? What would "
+                 "they find primitive? What would puzzle them about our choices? This "
+                 "temporal displacement reveals assumptions we cannot see from within our "
+                 "own era. The future archaeologist would ask: why did they do it this way "
+                 "when a simpler method was available? What constraint -- technological, "
+                 "social, or cognitive -- forced this particular design? For '{concept}', "
+                 "this exercise separates the timeless core from the historically contingent "
+                 "shell and suggests directions for forward-looking redesign."
+             ),
+         ]
+
+     def get_keyword_map(self) -> dict[str, list[int]]:
+         return {
+             "analog": [0, 18], "similar": [0, 12], "connect": [0, 4],
+             "nature": [1, 17], "biolog": [1, 17], "organism": [1],
+             "combin": [2, 7], "element": [2, 7], "component": [2],
+             "invert": [3], "revers": [3], "opposit": [3],
+             "visual": [4], "spatial": [4], "map": [4], "graph": [4],
+             "constrain": [5], "limit": [5], "restrict": [5],
+             "first principle": [6], "fundament": [6], "basic": [6],
+             "dimension": [7], "option": [7], "configur": [7],
+             "prototype": [8], "experiment": [8], "test": [8], "iterate": [8],
+             "scale": [9, 12], "grow": [9], "expand": [9],
+             "ambigu": [10], "fuzzy": [10], "unclear": [10],
+             "creativ": [11, 18], "novel": [11, 18], "innovat": [11],
+             "pattern": [12, 17], "recur": [12], "repeat": [12],
+             "absent": [13], "missing": [13], "negative": [13],
+             "simplif": [14], "remov": [14], "minimal": [14],
+             "invent": [15], "patent": [15], "engineer": [15],
+             "sense": [16], "perceiv": [16], "feel": [16],
+             "evolut": [17], "converge": [17], "branch": [17],
+             "humor": [18], "surprising": [18], "collision": [18],
+             "future": [19], "legacy": [19], "long-term": [19],
+             "technology": [2, 6, 15], "design": [1, 14, 15],
+             "art": [10, 16], "music": [16, 18],
+         }
+
+     def analyze(self, concept: str) -> str:
+         template = self.select_template(concept)
+         return template.replace("{concept}", concept)
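The keyword map pairs stem strings with template indices, implying a stem-to-template routing step. `ReasoningAgent.select_template` lives in `base_agent.py` and is not part of this diff, so the following is only a hypothetical sketch of how such routing could work, assuming plain substring matching on stems and insertion-order candidates:

```python
def route_templates(concept: str, keyword_map: dict[str, list[int]]) -> list[int]:
    """Collect candidate template indices whose keyword stems occur in the concept.

    Hypothetical sketch: the real ReasoningAgent.select_template (base_agent.py,
    not shown in this diff) may score, weight, or tie-break differently.
    """
    concept_lower = concept.lower()
    candidates: list[int] = []
    for stem, indices in keyword_map.items():
        if stem in concept_lower:  # stems like "biolog" match "biology", "biological"
            for idx in indices:
                if idx not in candidates:
                    candidates.append(idx)
    return candidates

# Tiny excerpt of DaVinciAgent's keyword map:
keyword_map = {"nature": [1, 17], "constrain": [5], "scale": [9, 12]}
print(route_templates("nature-inspired design at scale", keyword_map))  # [1, 17, 9, 12]
```

Because the stems are prefixes rather than whole words, a single concept can activate several templates; deduplication preserves first-match order.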
reasoning_forge/agents/empathy_agent.py ADDED
@@ -0,0 +1,299 @@
+ """
+ Empathy Agent - Analyzes concepts through emotional, human-centered, and social reasoning.
+
+ Focuses on how concepts affect people emotionally, compassionate interpretation,
+ social dynamics, communication considerations, and psychological well-being.
+ """
+
+ from reasoning_forge.agents.base_agent import ReasoningAgent
+
+
+ class EmpathyAgent(ReasoningAgent):
+     name = "Empathy"
+     perspective = "emotional_and_human_centered"
+
+     def get_analysis_templates(self) -> list[str]:
+         return [
+             # 0 - Emotional impact mapping
+             (
+                 "Mapping the emotional landscape of '{concept}': every concept that touches "
+                 "human lives generates an emotional field. For those directly involved, "
+                 "'{concept}' may evoke hope (if it promises improvement), anxiety (if it "
+                 "threatens the familiar), frustration (if it introduces complexity), or "
+                 "excitement (if it opens new possibilities). These emotional responses are "
+                 "not irrational noise overlaid on a rational signal -- they are a rapid, "
+                 "parallel processing system that integrates more information than conscious "
+                 "analysis can handle. Dismissing emotional responses as irrelevant is "
+                 "itself an emotional decision (the emotion of wanting to appear rational) "
+                 "and discards valuable signal about how '{concept}' is actually experienced "
+                 "by the people it affects."
+             ),
+             # 1 - Lived experience perspective
+             (
+                 "Centering the lived experience of '{concept}': abstract analysis risks "
+                 "losing the texture of what this actually means in someone's daily life. "
+                 "A person encountering '{concept}' does not experience it as a set of "
+                 "propositions but as a shift in the felt quality of their day -- a new "
+                 "worry added to their mental load, a new possibility that brightens their "
+                 "horizon, a new confusion that makes the familiar strange. Understanding "
+                 "'{concept}' requires not just knowing what it is but feeling what it is "
+                 "like: the cognitive effort it demands, the social negotiations it requires, "
+                 "the way it reshapes routines and relationships. This first-person texture "
+                 "is where the real impact lives."
+             ),
+             # 2 - Compassionate reframing
+             (
+                 "Reframing '{concept}' with compassion: when people struggle with or resist "
+                 "this concept, their difficulty is not a deficiency in understanding but a "
+                 "legitimate response to a genuine challenge. Resistance often signals that "
+                 "something important is being threatened -- identity, competence, belonging, "
+                 "or security. Rather than dismissing resistance, compassionate inquiry asks: "
+                 "what are you protecting? What would need to be true for this to feel safe? "
+                 "What support would make this manageable? For '{concept}', the compassionate "
+                 "reframing recognizes that the human response is data about the concept's "
+                 "real-world fit, not an obstacle to overcome."
+             ),
+             # 3 - Social dynamics analysis
+             (
+                 "Analyzing the social dynamics activated by '{concept}': concepts do not "
+                 "exist in isolation; they are adopted, resisted, negotiated, and transformed "
+                 "through social interaction. In-group/out-group dynamics determine who is "
+                 "seen as a legitimate voice on this topic. Status hierarchies determine "
+                 "whose interpretation prevails. Social proof shapes adoption: people look "
+                 "to others' reactions before forming their own. Groupthink can suppress "
+                 "dissenting perspectives that would improve collective understanding. For "
+                 "'{concept}', the social dynamics may matter more than the concept's "
+                 "intrinsic merits in determining its real-world trajectory."
+             ),
+             # 4 - Communication and framing
+             (
+                 "Examining how '{concept}' is communicated and framed: the same content, "
+                 "presented differently, produces dramatically different responses. Loss "
+                 "framing ('you will lose X if you do not adopt this') activates different "
+                 "neural circuitry than gain framing ('you will gain X if you adopt this'). "
+                 "Concrete examples engage empathy; abstract statistics do not. Narrative "
+                 "structure (beginning-middle-end) makes information memorable; list format "
+                 "makes it forgettable. For '{concept}', the communication design is not "
+                 "mere packaging but fundamentally shapes understanding, acceptance, and "
+                 "behavior. A brilliant concept poorly communicated is indistinguishable "
+                 "from a mediocre one."
+             ),
+             # 5 - Psychological safety assessment
+             (
+                 "Assessing the psychological safety implications of '{concept}': people "
+                 "engage productively with challenging ideas only when they feel safe enough "
+                 "to be vulnerable -- to admit confusion, ask naive questions, and make "
+                 "mistakes without social penalty. If '{concept}' is introduced in an "
+                 "environment where asking questions signals incompetence, where mistakes "
+                 "are punished, or where dissent is suppressed, people will perform "
+                 "understanding rather than achieve it. The intellectual quality of "
+                 "engagement with '{concept}' is bounded by the psychological safety of "
+                 "the environment. Creating conditions where genuine engagement is safe "
+                 "is a prerequisite for genuine understanding."
+             ),
+             # 6 - Identity and belonging
+             (
+                 "Exploring how '{concept}' intersects with identity and belonging: people "
+                 "do not evaluate concepts in a vacuum; they evaluate them in terms of what "
+                 "adoption means for their identity. Does embracing '{concept}' signal "
+                 "membership in a valued group? Does rejecting it? The identity calculus "
+                 "often overrides the epistemic calculus: people will reject well-supported "
+                 "ideas that threaten their group membership and accept poorly-supported "
+                 "ones that affirm it. For '{concept}', understanding the identity landscape "
+                 "-- which identities this concept affirms, threatens, or is irrelevant to "
+                 "-- predicts adoption patterns more accurately than the concept's objective "
+                 "merits."
+             ),
+             # 7 - Grief and loss recognition
+             (
+                 "Acknowledging the grief dimension of '{concept}': every significant change "
+                 "involves loss, and loss requires grief. Even positive changes -- a promotion, "
+                 "a new technology, a better system -- require letting go of the familiar: "
+                 "old competencies that are now obsolete, old relationships that are now "
+                 "restructured, old identities that no longer fit. The Kubler-Ross stages "
+                 "(denial, anger, bargaining, depression, acceptance) are not a rigid sequence "
+                 "but a map of common emotional responses to loss. For '{concept}', naming "
+                 "and honoring what is lost -- rather than insisting that only the gains "
+                 "matter -- allows people to move through the transition rather than getting "
+                 "stuck in resistance."
+             ),
+             # 8 - Trust dynamics
121
+ (
122
+ "Analyzing the trust architecture of '{concept}': trust is the invisible "
123
+ "infrastructure that determines whether systems function or fail. It is "
124
+ "built slowly through consistent behavior, transparency, and demonstrated "
125
+ "competence, and destroyed quickly by betrayal, opacity, or incompetence. "
126
+ "For '{concept}', the trust questions are: who needs to trust whom for this "
127
+ "to work? Is that trust warranted by track record? What happens when trust "
128
+ "is violated (is there a repair mechanism)? Are there trust asymmetries "
129
+ "where one party bears vulnerability while the other holds power? Trust "
130
+ "deficits cannot be solved by technical improvements alone -- they require "
131
+ "relational repair."
132
+ ),
133
+ # 9 - Cognitive load and overwhelm
134
+ (
135
+ "Assessing the cognitive load imposed by '{concept}': human working memory "
136
+ "has a limited capacity (roughly 4 +/- 1 chunks of information). Every new "
137
+ "concept that must be held in mind simultaneously competes for this scarce "
138
+ "resource. Complex concepts that require juggling many interrelated pieces "
139
+ "can overwhelm working memory, producing a felt experience of confusion and "
140
+ "frustration that has nothing to do with intellectual capacity and everything "
141
+ "to do with presentation design. For '{concept}', the empathic question is: "
142
+ "how can this be chunked, sequenced, and scaffolded to fit within human "
143
+ "cognitive limits without sacrificing essential complexity?"
144
+ ),
145
+ # 10 - Motivation and meaning
146
+ (
147
+ "Exploring the motivational landscape of '{concept}': Self-Determination "
148
+ "Theory identifies three basic psychological needs: autonomy (the feeling "
149
+ "of volition and choice), competence (the feeling of mastery and effectiveness), "
150
+ "and relatedness (the feeling of connection and belonging). Engagement with "
151
+ "'{concept}' will be intrinsically motivated when it satisfies these needs "
152
+ "and extrinsically motivated (fragile, resentful compliance) when it frustrates "
153
+ "them. For '{concept}', the design question is: does engagement with this "
154
+ "concept make people feel more autonomous, competent, and connected, or does "
155
+ "it impose control, induce helplessness, and isolate?"
156
+ ),
157
+ # 11 - Narrative and storytelling
158
+ (
159
+ "Situating '{concept}' within human narrative: humans are storytelling animals "
160
+ "-- we make sense of the world by constructing narratives with characters, "
161
+ "motivations, conflicts, and resolutions. A concept presented as a story "
162
+ "('there was a problem, people tried solutions, here is what they learned') "
163
+ "is absorbed and remembered far more effectively than the same information "
164
+ "presented as disconnected facts. For '{concept}', the narrative question "
165
+ "is: what is the story here? Who are the characters? What is the conflict? "
166
+ "What is at stake? How does this chapter connect to the larger story that "
167
+ "people are already telling about their lives and work?"
168
+ ),
169
+ # 12 - Perspective-taking exercise
170
+ (
171
+ "Practicing perspective-taking with '{concept}': imagine experiencing this "
172
+ "from the viewpoint of an enthusiastic early adopter (everything is "
173
+ "possibility), a skeptical veteran (I have seen this before and it did not "
174
+ "work), a vulnerable newcomer (I do not understand and I am afraid to ask), "
175
+ "an overwhelmed practitioner (I do not have bandwidth for one more thing), "
176
+ "and a curious outsider (I have no stake but find this interesting). Each "
177
+ "perspective reveals different features of '{concept}' and different emotional "
178
+ "valences. The concept is not one thing but many things, depending on who "
179
+ "is experiencing it and what they bring to the encounter."
180
+ ),
181
+ # 13 - Relational impact
182
+ (
183
+ "Examining how '{concept}' affects relationships: concepts do not only change "
184
+ "what people think; they change how people relate to each other. Does "
185
+ "'{concept}' create shared language that strengthens collaboration, or "
186
+ "jargon that excludes outsiders? Does it create a hierarchy of expertise "
187
+ "that distances the knowledgeable from the uninitiated? Does it provide "
188
+ "common ground for diverse stakeholders or a wedge that divides them? "
189
+ "The relational dimension of '{concept}' -- how it brings people together "
190
+ "or pushes them apart -- often determines its long-term viability more than "
191
+ "its technical merits."
192
+ ),
193
+ # 14 - Stress and coping
194
+ (
195
+ "Analyzing the stress profile of '{concept}': when encountering something "
196
+ "new or challenging, people appraise both the demand (how threatening or "
197
+ "difficult is this?) and their resources (do I have what I need to cope?). "
198
+ "When demands exceed resources, the result is stress. The stress response "
199
+ "narrows attention, reduces creativity, and triggers fight-flight-freeze "
200
+ "behavior -- exactly the opposite of the open, curious engagement that "
201
+ "learning requires. For '{concept}', the empathic design question is: how "
202
+ "can we increase people's resources (support, information, time, practice) "
203
+ "or decrease the perceived demand (scaffolding, chunking, normalization of "
204
+ "struggle) to keep the challenge in the productive zone?"
205
+ ),
206
+ # 15 - Cultural sensitivity
207
+ (
208
+ "Examining '{concept}' through cultural sensitivity: concepts that seem "
209
+ "universal often carry culturally specific assumptions about individualism "
210
+ "vs collectivism, hierarchy vs egalitarianism, directness vs indirectness, "
211
+ "or risk-taking vs caution. A concept designed within an individualist "
212
+ "framework may not translate to collectivist contexts without significant "
213
+ "adaptation. Communication norms that are standard in one culture may be "
214
+ "offensive in another. For '{concept}', cultural sensitivity asks: whose "
215
+ "cultural assumptions are embedded in the default design, and how must the "
216
+ "concept be adapted for genuine cross-cultural validity?"
217
+ ),
218
+ # 16 - Emotional intelligence integration
219
+ (
220
+ "Integrating emotional intelligence into '{concept}': Goleman's framework "
221
+ "identifies self-awareness (recognizing one's own emotions), self-regulation "
222
+ "(managing emotional responses), social awareness (reading others' emotions), "
223
+ "and relationship management (navigating social interactions skillfully). "
224
+ "For '{concept}', each dimension matters: self-awareness helps people "
225
+ "recognize their biases toward the concept; self-regulation helps manage "
226
+ "anxiety about change; social awareness helps read the room when introducing "
227
+ "the concept; relationship management helps navigate disagreements "
228
+ "constructively. Emotional intelligence is not a soft add-on to rational "
229
+ "analysis but a prerequisite for its effective application."
230
+ ),
231
+ # 17 - Healing and repair
232
+ (
233
+ "Considering '{concept}' through the lens of healing and repair: if this "
234
+ "concept touches areas where people have been harmed -- by previous failed "
235
+ "implementations, broken promises, or traumatic experiences -- the entry "
236
+ "point matters enormously. Approaching damaged ground with the energy of "
237
+ "'we have the solution' triggers defensiveness. Approaching with "
238
+ "acknowledgment of past harm ('we know this has been painful before, and "
239
+ "here is how this time is different') opens the possibility of engagement. "
240
+ "For '{concept}', healing-oriented design begins by asking: what wounds "
241
+ "exist in this space, and how do we avoid reopening them?"
242
+ ),
243
+ # 18 - Play and curiosity
244
+ (
245
+ "Engaging with '{concept}' through the spirit of play: play is not the "
246
+ "opposite of seriousness but the opposite of rigidity. A playful stance "
247
+ "toward '{concept}' gives permission to explore without commitment, to "
248
+ "ask 'what if?' without 'what for?', to make mistakes without consequences. "
249
+ "Play activates the exploratory system (curiosity, novelty-seeking, "
250
+ "experimentation) rather than the defensive system (anxiety, avoidance, "
251
+ "threat-detection). Children learn most complex skills through play, not "
252
+ "instruction. For '{concept}', designing entry points that feel playful "
253
+ "rather than high-stakes can dramatically accelerate genuine understanding "
254
+ "by reducing the emotional barriers to engagement."
255
+ ),
256
+ # 19 - Collective emotion and morale
257
+ (
258
+ "Reading the collective emotional field around '{concept}': groups have "
259
+ "emergent emotional states that are more than the sum of individual feelings. "
260
+ "Collective excitement creates momentum that carries individuals past "
261
+ "obstacles they could not overcome alone. Collective demoralization creates "
262
+ "paralysis that defeats even the most motivated individuals. Emotional "
263
+ "contagion -- the rapid spread of feelings through a group -- can amplify "
264
+ "either response. For '{concept}', attending to the collective emotional "
265
+ "state is as important as attending to the logical content. A technically "
266
+ "sound approach introduced into a demoralized group will fail; a mediocre "
267
+ "approach carried by collective enthusiasm may succeed."
268
+ ),
269
+ ]
270
+
271
+ def get_keyword_map(self) -> dict[str, list[int]]:
272
+ return {
273
+ "emotion": [0, 16], "feel": [0, 1], "affect": [0],
274
+ "experience": [1], "daily": [1], "life": [1], "personal": [1],
275
+ "resist": [2], "struggle": [2], "difficult": [2],
276
+ "social": [3, 13], "group": [3, 19], "community": [3],
277
+ "communicat": [4], "message": [4], "frame": [4], "present": [4],
278
+ "safe": [5], "vulnerab": [5], "mistake": [5],
279
+ "identity": [6], "belong": [6], "member": [6],
280
+ "change": [7], "loss": [7], "transition": [7],
281
+ "trust": [8], "betray": [8], "credib": [8], "reliab": [8],
282
+ "complex": [9], "confus": [9], "overwhelm": [9],
283
+ "motivat": [10], "engage": [10], "meaning": [10],
284
+ "story": [11], "narrative": [11], "journey": [11],
285
+ "perspectiv": [12], "viewpoint": [12], "stakeholder": [12],
286
+ "relat": [13], "collaborat": [13], "team": [13],
287
+ "stress": [14], "anxiety": [14], "coping": [14], "burnout": [14],
288
+ "cultur": [15], "divers": [15], "global": [15],
289
+ "aware": [16], "intelligen": [16], "regulat": [16],
290
+ "heal": [17], "repair": [17], "trauma": [17], "harm": [17],
291
+ "play": [18], "curiosi": [18], "explor": [18], "fun": [18],
292
+ "morale": [19], "momentum": [19], "collective": [19],
293
+ "technology": [7, 9], "education": [5, 9, 14],
294
+ "health": [0, 14, 17], "work": [5, 10, 14],
295
+ }
296
+
297
+ def analyze(self, concept: str) -> str:
298
+ template = self.select_template(concept)
299
+ return template.replace("{concept}", concept)
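The `get_keyword_map()`/`analyze()` pair above defines each agent's routing contract: truncated keyword stems map to template indices, and the inherited `select_template` (defined in `reasoning_forge/agents/base_agent.py`, not shown in this diff) picks a template for the concept. As a minimal standalone sketch of how stem-based routing of this shape could work, here is a hedged approximation; the function name `select_template_index` and its fallback rule are illustrative assumptions, not the actual base-class implementation:

```python
# Hypothetical sketch of keyword-stem template routing in the style of
# get_keyword_map()/select_template(). The real logic lives in
# reasoning_forge/agents/base_agent.py (not shown in this diff); the
# function name and the fallback rule below are assumptions.

def select_template_index(concept: str,
                          keyword_map: dict[str, list[int]],
                          num_templates: int) -> int:
    """Return the first template index whose keyword stem occurs as a
    substring of the lowercased concept; otherwise fall back to a
    deterministic function of the concept text."""
    lowered = concept.lower()
    for stem, indices in keyword_map.items():
        if stem in lowered:
            return indices[0]
    # Deterministic fallback (unlike hash(), stable across interpreter runs).
    return sum(map(ord, lowered)) % num_templates

# Stems are truncated on purpose ("vulnerab", "communicat") so a single
# entry matches several inflections: "vulnerable", "vulnerability", ...
demo_map = {"trust": [8], "stress": [14], "vulnerab": [5]}
print(select_template_index("rebuilding trust after layoffs", demo_map, 20))  # -> 8
```

Substring matching on truncated stems is a cheap stand-in for stemming or lemmatization; it trades precision (e.g. "distrust" also matches "trust") for zero dependencies.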
reasoning_forge/agents/ethics_agent.py ADDED
@@ -0,0 +1,296 @@
+ """
+ Ethics Agent - Analyzes concepts through alignment, consequences, and moral reasoning.
+
+ Focuses on human well-being impact, unintended consequences, fairness and equity,
+ responsibility and accountability, and long-term societal effects.
+ """
+
+ from reasoning_forge.agents.base_agent import ReasoningAgent
+
+
+ class EthicsAgent(ReasoningAgent):
+ name = "Ethics"
+ perspective = "alignment_and_consequences"
+
+ def get_analysis_templates(self) -> list[str]:
+ return [
+ # 0 - Consequentialist analysis
+ (
+ "Evaluating '{concept}' by its consequences: the moral weight of any action "
+ "or system lies primarily in its outcomes. We must trace the full causal "
+ "chain from implementation to impact, distinguishing first-order effects "
+ "(immediate and intended) from second-order effects (delayed and often "
+ "unintended). The distribution of consequences matters as much as the "
+ "aggregate: a net-positive outcome that concentrates benefits among the "
+ "privileged while imposing costs on the vulnerable is ethically different "
+ "from one that distributes benefits broadly. For '{concept}', we must ask "
+ "not just 'does it work?' but 'for whom does it work, and at whose expense?'"
+ ),
+ # 1 - Deontological duties
+ (
+ "Examining '{concept}' through the lens of duty and rights: regardless of "
+ "outcomes, certain actions are obligatory and others are forbidden. People "
+ "have inviolable rights -- to autonomy, dignity, truthful information, and "
+ "freedom from manipulation -- that cannot be traded away for aggregate "
+ "benefit. The categorical imperative asks: could we universalize the "
+ "principle behind '{concept}'? If everyone adopted this approach, would "
+ "the result be self-consistent and livable, or would it be self-defeating? "
+ "Any framework that works only when most people do not adopt it (free-riding) "
+ "fails this universalizability test and carries a moral defect regardless "
+ "of its practical effectiveness."
+ ),
+ # 2 - Unintended consequences
+ (
+ "Mapping the unintended consequences of '{concept}': every intervention in "
+ "a complex system produces side effects that were not part of the original "
+ "design. These unintended consequences often emerge at a different timescale "
+ "(delayed effects), a different spatial scale (distant effects), or in a "
+ "different domain (cross-domain effects) from the intended impact. Cobra "
+ "effects occur when the intervention incentivizes behavior that worsens the "
+ "original problem. Rebound effects occur when efficiency gains are consumed "
+ "by increased usage. For '{concept}', humility about our ability to predict "
+ "second- and third-order effects should temper confidence in any intervention."
+ ),
+ # 3 - Fairness and distributive justice
+ (
+ "Analyzing the fairness dimensions of '{concept}': distributive justice asks "
+ "how benefits and burdens are allocated. Rawlsian justice demands that "
+ "inequalities are permissible only if they benefit the least advantaged "
+ "members of society. Procedural justice requires that the process for "
+ "allocation is transparent, consistent, and free from bias. Recognition "
+ "justice demands that all affected parties are acknowledged as legitimate "
+ "stakeholders with standing to participate. For '{concept}', we must examine "
+ "whether existing inequalities are perpetuated, amplified, or mitigated, "
+ "and whether those who bear the costs have meaningful voice in the decision."
+ ),
+ # 4 - Autonomy and consent
+ (
+ "Assessing '{concept}' from the standpoint of autonomy: respect for persons "
+ "requires that individuals can make informed, voluntary choices about matters "
+ "affecting their lives. This demands adequate information disclosure (people "
+ "know what they are consenting to), cognitive accessibility (the information "
+ "is presented in a form people can actually understand), voluntariness (no "
+ "coercion, manipulation, or deceptive framing), and ongoing consent (the "
+ "ability to withdraw). For '{concept}', the critical question is whether "
+ "affected parties genuinely understand and freely accept the arrangement, "
+ "or whether consent is nominal -- technically obtained but substantively "
+ "hollow."
+ ),
+ # 5 - Accountability structures
+ (
+ "Examining the accountability architecture of '{concept}': when things go "
+ "wrong, who bears responsibility? Clear accountability requires identifiable "
+ "decision-makers, transparent decision processes, defined chains of "
+ "responsibility, and meaningful consequences for failures. Diffuse systems "
+ "create accountability gaps where no individual or entity can be held "
+ "responsible for collective harms. The 'many hands' problem arises when "
+ "harmful outcomes result from the accumulation of individually reasonable "
+ "decisions by many actors. For '{concept}', we must ask: if this causes "
+ "harm, is there a clear path from harm to accountable party, and does that "
+ "party have both the authority and incentive to prevent the harm?"
+ ),
+ # 6 - Vulnerable population impact
+ (
+ "Centering vulnerable populations in the analysis of '{concept}': ethical "
+ "evaluation must prioritize those with the least power to protect themselves "
+ "-- children, the elderly, the economically disadvantaged, marginalized "
+ "communities, future generations, and those with diminished capacity. "
+ "Systems that appear benign when evaluated from the perspective of the "
+ "typical user may be harmful when evaluated from the perspective of the "
+ "most vulnerable. Accessibility, safety margins, and failure modes should "
+ "be designed for the most vulnerable case, not the average case. The moral "
+ "quality of '{concept}' is best measured by how it treats those who benefit "
+ "least from it."
+ ),
+ # 7 - Long-term societal effects
+ (
+ "Projecting the long-term societal trajectory of '{concept}': short-term "
+ "benefits can create long-term dependencies, lock-ins, or path dependencies "
+ "that constrain future choices. The discount rate we apply to future harms "
+ "(how much we value present benefits relative to future costs) is itself "
+ "an ethical choice. Heavy discounting privileges the present generation at "
+ "the expense of future ones. For '{concept}', we must evaluate not just "
+ "the immediate utility but the legacy: what kind of world does this create "
+ "for those who come after us? Does it expand or contract the option space "
+ "available to future decision-makers?"
+ ),
+ # 8 - Power dynamics
+ (
+ "Analyzing the power dynamics embedded in '{concept}': who gains power, who "
+ "loses it, and what mechanisms mediate the transfer? Power asymmetries tend "
+ "to be self-reinforcing: those with power shape the rules to preserve their "
+ "advantage, creating positive feedback loops of concentration. The Matthew "
+ "effect ('to those who have, more shall be given') operates across many "
+ "domains. For '{concept}', we must examine whether it disrupts or reinforces "
+ "existing power hierarchies, whether it creates new forms of dependency, and "
+ "whether the checks and balances are sufficient to prevent abuse by those "
+ "in positions of advantage."
+ ),
+ # 9 - Transparency and truthfulness
+ (
+ "Evaluating the transparency of '{concept}': truthfulness is not merely "
+ "avoiding false statements; it requires active disclosure of relevant "
+ "information, honest representation of uncertainty, and resistance to "
+ "misleading framing. Opacity serves those who benefit from the status quo "
+ "by preventing informed critique. Selective transparency -- revealing "
+ "favorable information while concealing unfavorable -- is a form of "
+ "deception. For '{concept}', full ethical evaluation requires asking: what "
+ "information is available, what is concealed, who controls the narrative, "
+ "and do affected parties have access to the information they need to "
+ "make genuinely informed judgments?"
+ ),
+ # 10 - Dual-use dilemma
+ (
+ "Confronting the dual-use nature of '{concept}': most powerful capabilities "
+ "can serve both beneficial and harmful purposes. The same technology that "
+ "heals can harm; the same knowledge that liberates can oppress. Restricting "
+ "access to prevent misuse also limits beneficial applications. Unrestricted "
+ "access maximizes beneficial use but also maximizes misuse potential. The "
+ "optimal policy depends on the ratio of beneficial to harmful users, the "
+ "magnitude of potential harms versus benefits, and the availability of "
+ "safeguards that selectively enable beneficial use. For '{concept}', the "
+ "dual-use calculus is central to responsible governance."
+ ),
+ # 11 - Moral hazard
+ (
+ "Identifying moral hazard in '{concept}': moral hazard arises when an actor "
+ "is insulated from the consequences of their decisions, leading to riskier "
+ "behavior than they would otherwise choose. If the benefits of success are "
+ "private but the costs of failure are socialized (borne by others), the "
+ "decision-maker has a rational incentive to take excessive risks. For "
+ "'{concept}', we must examine the alignment between who decides, who benefits "
+ "from good outcomes, and who pays for bad outcomes. Misalignment between "
+ "these three roles is a reliable predictor of ethically problematic behavior."
+ ),
+ # 12 - Virtue ethics lens
+ (
+ "Approaching '{concept}' through virtue ethics: rather than asking 'what "
+ "rules should govern this?' or 'what outcomes does this produce?', we ask "
+ "'what kind of character does engagement with this cultivate?' Does it "
+ "foster wisdom, courage, temperance, justice, compassion, and intellectual "
+ "honesty? Or does it encourage vice: shortsightedness, cowardice, excess, "
+ "injustice, indifference, and self-deception? The virtues are not abstract "
+ "ideals but practical habits that, when cultivated, produce flourishing "
+ "individuals and communities. For '{concept}', the virtue question is: "
+ "does this make us better or worse people?"
+ ),
+ # 13 - Informed consent in practice
+ (
+ "Examining informed consent as applied to '{concept}': genuine consent "
+ "requires that the consenting party understands the risks, alternatives, "
+ "and implications; is free from coercion; and has the capacity to make "
+ "the decision. In practice, consent is often degraded by information "
+ "asymmetry (the provider knows more than the recipient), complexity (the "
+ "implications exceed ordinary comprehension), and structural coercion "
+ "(refusing consent is theoretically possible but practically catastrophic). "
+ "Click-through agreements, dense legal language, and 'take it or leave it' "
+ "terms are consent theater, not genuine consent. For '{concept}', we must "
+ "distinguish substantive from theatrical consent."
+ ),
+ # 14 - Intergenerational justice
+ (
+ "Applying intergenerational justice to '{concept}': decisions made today "
+ "bind future generations who have no voice in the decision. The asymmetry "
+ "is profound: we can affect them, but they cannot affect us; we can benefit "
+ "at their expense, but they cannot hold us accountable. Sustainable "
+ "practices treat the inheritance of future generations as a constraint, "
+ "not a resource to be spent. For '{concept}', the intergenerational "
+ "question is: are we spending down an inheritance that took generations "
+ "to build, or are we investing in capabilities that compound for those "
+ "who follow?"
+ ),
+ # 15 - Proportionality
+ (
+ "Assessing the proportionality of '{concept}': the ethical principle of "
+ "proportionality requires that the means be commensurate with the ends. "
+ "Excessive measures to address a minor risk are disproportionate. Inadequate "
+ "measures for a major risk are negligent. The challenge is that risk "
+ "perception is biased: we overweight vivid, immediate, and personal risks "
+ "while underweighting statistical, delayed, and distributed ones. For "
+ "'{concept}', proportionality demands an honest accounting of both the "
+ "magnitude of the problem being addressed and the costs of the solution, "
+ "including costs borne by third parties who did not choose to bear them."
+ ),
+ # 16 - Systemic bias detection
+ (
+ "Investigating systemic bias in '{concept}': bias can be embedded in data "
+ "(reflecting historical inequities), in algorithms (optimizing for proxy "
+ "variables correlated with protected characteristics), in institutions "
+ "(normalizing practices that disadvantage certain groups), and in language "
+ "(framing that renders certain perspectives invisible). Systemic bias is "
+ "particularly insidious because it operates automatically, without malicious "
+ "intent, and is often invisible to those who benefit from it. For '{concept}', "
+ "a bias audit must examine not just explicit discrimination but structural "
+ "features that produce disparate outcomes even under formally neutral rules."
+ ),
+ # 17 - Precautionary principle
+ (
+ "Applying the precautionary principle to '{concept}': when an action raises "
+ "credible threats of serious or irreversible harm, the burden of proof falls "
+ "on those proposing the action to demonstrate safety, not on those opposing "
+ "it to demonstrate harm. The precautionary principle is most appropriate "
+ "when the potential harm is severe and irreversible, scientific understanding "
+ "is incomplete, and there exist feasible alternatives. It is less appropriate "
+ "when risks are modest and reversible, or when inaction itself carries "
+ "significant risk. For '{concept}', the key judgment is whether the potential "
+ "downside is in the catastrophic-irreversible category that justifies "
+ "precautionary restraint."
+ ),
+ # 18 - Care ethics
+ (
+ "Examining '{concept}' through the ethics of care: moral reasoning is not "
+ "purely abstract rule-following but is grounded in concrete relationships "
+ "of dependency, vulnerability, and mutual support. The care perspective "
+ "asks: who needs care, who provides it, is the care adequate, and are "
+ "caregivers themselves supported? Care labor is frequently invisible, "
+ "undervalued, and unequally distributed (disproportionately borne by women "
+ "and marginalized communities). For '{concept}', the care lens reveals "
+ "dependencies and support relationships that abstract frameworks overlook, "
+ "and centers the lived experience of those who give and receive care."
+ ),
+ # 19 - Alignment and value lock-in
+ (
+ "Evaluating the alignment properties of '{concept}': a system is aligned "
+ "when its behavior reliably serves the values and interests of those it "
+ "affects. Misalignment occurs when the system optimizes for a proxy that "
+ "diverges from the true objective -- Goodhart's law ('when a measure becomes "
+ "a target, it ceases to be a good measure'). Value lock-in occurs when early "
+ "design choices embed specific values that become increasingly difficult to "
+ "change as the system scales. For '{concept}', we must ask: whose values "
+ "are encoded, how were they chosen, can they be updated as understanding "
+ "evolves, and what happens when the proxy diverges from the true objective?"
+ ),
+ ]
+
+ def get_keyword_map(self) -> dict[str, list[int]]:
+ return {
+ "consequen": [0, 2], "outcome": [0], "result": [0],
+ "duty": [1], "right": [1], "obligat": [1], "rule": [1],
+ "unintend": [2], "side effect": [2], "unexpect": [2],
+ "fair": [3], "equal": [3], "justice": [3], "distribut": [3],
+ "consent": [4, 13], "autonom": [4], "choice": [4],
+ "accountab": [5], "responsib": [5], "blame": [5],
+ "vulnerab": [6], "child": [6], "elder": [6], "marginali": [6],
+ "long-term": [7, 14], "future": [7, 14], "sustain": [7, 14],
+ "power": [8], "hierarch": [8], "dominat": [8],
+ "transparen": [9], "truth": [9], "honest": [9], "disclos": [9],
+ "dual": [10], "weapon": [10], "misuse": [10],
+ "hazard": [11], "risk": [11, 17], "insur": [11],
+ "virtue": [12], "character": [12], "flourish": [12],
+ "agree": [13], "terms": [13], "privacy": [13],
+ "generation": [14], "inherit": [14], "legacy": [14],
+ "proportion": [15], "excessive": [15], "moderate": [15],
+ "bias": [16], "discriminat": [16], "prejudic": [16],
+ "precaution": [17], "irreversib": [17], "catastroph": [17],
+ "care": [18], "depend": [18], "support": [18], "nurtur": [18],
+ "align": [19], "value": [19], "proxy": [19], "goodhart": [19],
+ "technology": [10, 19], "ai": [16, 19], "artificial": [16, 19],
+ "society": [3, 7, 8], "learning": [4, 12],
+ "intelligence": [10, 19], "climate": [7, 14, 17],
+ "economic": [3, 8, 11], "health": [4, 6, 15],
+ "network": [8, 9], "data": [9, 13, 16],
+ }
+
+ def analyze(self, concept: str) -> str:
+ template = self.select_template(concept)
+ return template.replace("{concept}", concept)
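Both agents' `analyze()` methods substitute the concept with `str.replace` rather than `str.format`. One plausible rationale (an inference from the code, not stated in the diff): `format` interprets every `{...}` pair in the string, so any stray brace that ever crept into a template would raise, while `replace` only touches the literal `{concept}` token. A tiny illustration, using a made-up template with a stray brace pair:

```python
# Why str.replace over str.format for template substitution:
# format() parses every {...} field, so a stray brace pair in a template
# raises; replace() substitutes only the literal "{concept}" token.
# The "{1971}" below is a fabricated example, not a template from the repo.

template = "Evaluating '{concept}' by its consequences: see Rawls {1971}."

try:
    template.format(concept="open data")
    format_ok = True
except (KeyError, IndexError):
    # "{1971}" is parsed as positional field 1971 -> IndexError here.
    format_ok = False

safe = template.replace("{concept}", "open data")
print(format_ok, "|", safe)
```

The trade-off: `replace` performs no validation, so a misspelled placeholder (`{concpet}`) passes through silently instead of failing fast.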
reasoning_forge/agents/newton_agent.py ADDED
@@ -0,0 +1,291 @@
+ """
+ Newton Agent - Analyzes concepts through physics, mathematics, and causal reasoning.
+
+ Focuses on causal relationships, conservation laws, symmetries, measurable
+ quantities, systems behavior, equilibrium, force interactions, and energy transfer.
+ """
+
+ from reasoning_forge.agents.base_agent import ReasoningAgent
+
+
+ class NewtonAgent(ReasoningAgent):
+     name = "Newton"
+     perspective = "physics_and_mathematical_causality"
+
+     def get_analysis_templates(self) -> list[str]:
+         return [
+             # 0 - Causal chain analysis
+             (
+                 "Tracing the causal chain within '{concept}': every observable outcome "
+                 "is the terminal node of a directed graph of prior causes. The initial "
+                 "conditions set boundary constraints, and the dynamics propagate through "
+                 "interactions that obey local causality. Identifying the forcing function "
+                 "-- the primary driver that injects energy or information into this system "
+                 "-- reveals which variables are genuinely independent and which are "
+                 "downstream responses. Perturbing the forcing function and predicting "
+                 "the cascade of effects is the most rigorous test of whether we actually "
+                 "understand the mechanism."
+             ),
+             # 1 - Conservation law framing
+             (
+                 "Applying conservation principles to '{concept}': in any closed system, "
+                 "certain quantities remain invariant under transformation. The question "
+                 "becomes: what is conserved here? If we track the total inventory of the "
+                 "relevant quantity -- energy, momentum, information, resources -- before "
+                 "and after any process, the ledger must balance. Any apparent violation "
+                 "signals either a hidden reservoir we have not accounted for, or an "
+                 "external source/sink coupling into the system. This bookkeeping discipline "
+                 "eliminates many superficially plausible but physically impossible explanations."
+             ),
+             # 2 - Symmetry and invariance
+             (
+                 "Examining '{concept}' through symmetry analysis: Noether's theorem tells "
+                 "us that every continuous symmetry corresponds to a conserved quantity. "
+                 "What transformations leave the essential structure of this concept unchanged? "
+                 "Translational symmetry (it works the same regardless of when or where) "
+                 "implies conservation of momentum-like quantities. Rotational symmetry "
+                 "(no preferred direction) implies conservation of angular-momentum analogs. "
+                 "Breaking a symmetry always has consequences -- it introduces a preferred "
+                 "frame, a distinguished direction, or a phase transition. Identifying which "
+                 "symmetries hold and which break is a powerful diagnostic."
+             ),
+             # 3 - Equilibrium and stability
+             (
+                 "Analyzing the equilibrium structure of '{concept}': a system at equilibrium "
+                 "satisfies the condition that the net generalized force on every degree of "
+                 "freedom is zero. But equilibrium alone is insufficient -- we must classify "
+                 "its stability. A small perturbation from a stable equilibrium produces a "
+                 "restoring force proportional to the displacement (harmonic behavior). An "
+                 "unstable equilibrium amplifies perturbations exponentially. A metastable "
+                 "state appears stable to small perturbations but collapses under large ones. "
+                 "For '{concept}', determining the stability class tells us whether the current "
+                 "state is robust, fragile, or a ticking time bomb waiting for a large enough "
+                 "fluctuation."
+             ),
+             # 4 - Dimensional analysis and scaling
+             (
+                 "Applying dimensional analysis to '{concept}': before building any detailed "
+                 "model, we can extract powerful constraints just from the units of the "
+                 "relevant quantities. If the outcome depends on a length L, a time T, and "
+                 "an energy E, the Buckingham Pi theorem tells us how many independent "
+                 "dimensionless groups govern the behavior. Scaling laws follow directly: "
+                 "how does the outcome change if we double the size? Halve the timescale? "
+                 "These scaling relationships often reveal whether a process is dominated by "
+                 "surface effects (scaling as area) or bulk effects (scaling as volume), "
+                 "which fundamentally changes the strategy for control or optimization."
+             ),
+             # 5 - Force balance and interaction
+             (
+                 "Decomposing '{concept}' into interacting forces: every observed motion or "
+                 "change is the net result of competing influences. Drawing the free-body "
+                 "diagram -- enumerating every force acting on the system and its direction "
+                 "-- immediately clarifies why the system behaves as it does. Equal and "
+                 "opposite forces produce stasis. An imbalance produces acceleration in the "
+                 "direction of the net force, with magnitude proportional to the imbalance "
+                 "and inversely proportional to the system's inertia (its resistance to "
+                 "change). For '{concept}', the key question is: what resists change, and "
+                 "what drives it?"
+             ),
+             # 6 - Energy transfer and transformation
+             (
+                 "Mapping the energy flows within '{concept}': energy is neither created nor "
+                 "destroyed, only converted between forms. Kinetic, potential, thermal, "
+                 "chemical, electromagnetic -- tracking the conversion pathway reveals the "
+                 "efficiency of the process and identifies where losses occur. The second "
+                 "law of thermodynamics guarantees that every conversion increases total "
+                 "entropy, meaning some energy always degrades to unusable heat. The "
+                 "thermodynamic efficiency ceiling sets an absolute bound on what is "
+                 "achievable, regardless of engineering cleverness. Understanding where "
+                 "'{concept}' sits relative to this ceiling tells us whether there is room "
+                 "for improvement or whether we are already near fundamental limits."
+             ),
+             # 7 - Feedback loops and control
+             (
+                 "Identifying feedback mechanisms in '{concept}': a system with negative "
+                 "feedback tends toward a set point -- deviations produce corrective "
+                 "responses that restore the original state. Positive feedback amplifies "
+                 "deviations, driving the system away from its initial state toward a new "
+                 "regime. Most real systems contain both types, and the dominant loop "
+                 "determines the qualitative behavior. The gain of each loop (how strongly "
+                 "the output feeds back to the input) and the delay (how long before the "
+                 "feedback signal arrives) together determine whether the system is stable, "
+                 "oscillatory, or divergent. Mapping these loops is essential for predicting "
+                 "long-term behavior."
+             ),
+             # 8 - Phase space and degrees of freedom
+             (
+                 "Constructing the phase space of '{concept}': every independent variable "
+                 "that can change defines a dimension in the state space. A point in this "
+                 "space represents the complete instantaneous state; a trajectory represents "
+                 "the system's evolution over time. The dimensionality -- number of degrees "
+                 "of freedom -- determines the complexity of possible behaviors. Low-dimensional "
+                 "systems (1-3 degrees of freedom) can be visualized and often admit analytical "
+                 "solutions. High-dimensional systems require statistical descriptions. "
+                 "Identifying constraints that reduce the effective dimensionality is one of "
+                 "the most powerful simplification strategies available."
+             ),
+             # 9 - Measurement and observables
+             (
+                 "Defining the observables for '{concept}': a quantity is physically meaningful "
+                 "only if it can, in principle, be measured by a well-defined procedure. This "
+                 "operationalist criterion forces us to distinguish between quantities we can "
+                 "actually determine (positions, rates, ratios, frequencies) and quantities "
+                 "that are convenient mathematical fictions. For each proposed observable, we "
+                 "must specify: what instrument or procedure measures it, what are the sources "
+                 "of uncertainty, and how does the measurement resolution compare to the "
+                 "expected variation? Any claim about '{concept}' that cannot be connected to "
+                 "a measurable prediction is, strictly speaking, untestable."
+             ),
+             # 10 - Differential equation framing
+             (
+                 "Formulating '{concept}' as a dynamical system: the state variables evolve "
+                 "according to rules that relate the rate of change of each variable to the "
+                 "current state. Writing these rules as differential equations (or difference "
+                 "equations for discrete systems) gives us the complete forward model. The "
+                 "character of the equations -- linear vs nonlinear, autonomous vs driven, "
+                 "conservative vs dissipative -- determines the qualitative behavior. Linear "
+                 "systems superpose: the response to two inputs equals the sum of the "
+                 "individual responses. Nonlinear systems can exhibit bifurcations, limit "
+                 "cycles, and chaos, where tiny changes in initial conditions lead to "
+                 "exponentially diverging outcomes."
+             ),
+             # 11 - Perturbation theory
+             (
+                 "Applying perturbation analysis to '{concept}': begin with a simplified "
+                 "version of the problem that can be solved exactly -- the zeroth-order "
+                 "approximation. Then systematically add corrections for each complicating "
+                 "factor, ordered by their magnitude. The first-order correction captures "
+                 "the dominant effect of the perturbation; higher-order terms add refinement. "
+                 "This approach succeeds when the perturbations are genuinely small compared "
+                 "to the zeroth-order terms. When they are not, the perturbation series "
+                 "diverges, signaling that the simplified model is qualitatively wrong and "
+                 "a fundamentally different framework is needed."
+             ),
+             # 12 - Action principle and optimization
+             (
+                 "Viewing '{concept}' through the principle of least action: among all "
+                 "possible paths from state A to state B, the system follows the one that "
+                 "extremizes the action integral. This variational perspective is more "
+                 "powerful than force-based reasoning because it naturally handles constraints "
+                 "and reveals which quantity the system is implicitly optimizing. The Euler-Lagrange "
+                 "equations derived from this principle give the equations of motion directly. "
+                 "For '{concept}', asking 'what is being optimized, and subject to what "
+                 "constraints?' often cuts through surface complexity to reveal the governing "
+                 "logic."
+             ),
+             # 13 - Resonance and natural frequencies
+             (
+                 "Probing the natural frequencies of '{concept}': every system with restoring "
+                 "forces and inertia has characteristic frequencies at which it oscillates "
+                 "most readily. Driving the system near one of these resonant frequencies "
+                 "produces a disproportionately large response -- this is resonance. The "
+                 "sharpness of the resonance peak (the Q factor) measures how efficiently "
+                 "the system stores energy versus dissipating it. High-Q systems are "
+                 "exquisitely sensitive near resonance but nearly unresponsive far from it. "
+                 "Identifying the resonant frequencies of '{concept}' reveals where small "
+                 "inputs can produce outsized effects."
+             ),
+             # 14 - Boundary conditions and constraints
+             (
+                 "Specifying the boundary conditions for '{concept}': the governing equations "
+                 "alone do not uniquely determine the solution -- the boundary and initial "
+                 "conditions select one trajectory from the infinite family of possibilities. "
+                 "Fixed boundaries (Dirichlet conditions) specify the state at the edges. "
+                 "Free boundaries (Neumann conditions) specify the flux. Mixed conditions "
+                 "combine both. Changing the boundary conditions while keeping the same "
+                 "governing equations can produce qualitatively different solutions. For "
+                 "'{concept}', clearly articulating what is held fixed, what is free, and "
+                 "what flows in or out at the boundaries is essential for a well-posed analysis."
+             ),
+             # 15 - Coupling and interaction strength
+             (
+                 "Assessing the coupling strengths within '{concept}': when multiple subsystems "
+                 "interact, the coupling constant determines whether they behave nearly "
+                 "independently (weak coupling), synchronize their behavior (strong coupling), "
+                 "or sit at an intermediate regime where perturbative methods barely work. "
+                 "Weakly coupled systems can be analyzed by studying each subsystem in "
+                 "isolation and adding interaction corrections. Strongly coupled systems "
+                 "demand a holistic treatment because the subsystems lose their individual "
+                 "identity. Determining the coupling regime is the first step in choosing "
+                 "the right analytical framework."
+             ),
+             # 16 - Rate-limiting steps
+             (
+                 "Identifying the rate-limiting process in '{concept}': in any multi-step "
+                 "sequence, the slowest step determines the overall rate. Speeding up a "
+                 "non-rate-limiting step has zero effect on throughput -- effort spent there "
+                 "is wasted. The rate-limiting step is the bottleneck where resources queue "
+                 "up and where targeted intervention produces the greatest marginal return. "
+                 "For '{concept}', isolating this bottleneck requires measuring the time "
+                 "constant (or its analog) of each subprocess and comparing them. The "
+                 "subprocess with the largest time constant is the one worth optimizing."
+             ),
+             # 17 - Nonlinearity and emergence
+             (
+                 "Investigating nonlinear dynamics in '{concept}': when the response of a "
+                 "system is not proportional to the input, superposition fails and qualitatively "
+                 "new behaviors emerge. Thresholds appear where the system suddenly transitions "
+                 "between distinct states. Hysteresis means the system remembers its history. "
+                 "Bifurcations occur where a smooth parameter change causes a sudden qualitative "
+                 "shift in behavior. Sensitivity to initial conditions can make long-term "
+                 "prediction impossible even though the underlying rules are deterministic. "
+                 "These nonlinear phenomena are not exotic exceptions -- they are the generic "
+                 "behavior of real systems, and '{concept}' is unlikely to be an exception."
+             ),
+             # 18 - Inverse problem reasoning
+             (
+                 "Framing '{concept}' as an inverse problem: the forward problem asks 'given "
+                 "the mechanism, what do we observe?' The inverse problem asks 'given the "
+                 "observations, what mechanism produced them?' Inverse problems are almost "
+                 "always harder because they are typically ill-posed -- multiple mechanisms "
+                 "can produce identical observations. Regularization (imposing additional "
+                 "constraints like smoothness or sparsity) is needed to select a unique "
+                 "solution. For '{concept}', working backward from observed outcomes to "
+                 "infer causes requires explicit acknowledgment of which assumptions we "
+                 "are importing and how they constrain the set of admissible explanations."
+             ),
+             # 19 - Thermodynamic arrow
+             (
+                 "Applying thermodynamic reasoning to '{concept}': the second law provides "
+                 "a universal arrow distinguishing processes that can happen spontaneously "
+                 "from those that cannot. A process runs forward if it increases total entropy "
+                 "(or equivalently, decreases free energy at constant temperature and pressure). "
+                 "Local decreases in entropy -- the creation of order and structure -- are "
+                 "always paid for by larger increases elsewhere. For '{concept}', the "
+                 "thermodynamic perspective asks: what drives this process forward? What is "
+                 "the free-energy gradient? And what would it cost, in thermodynamic terms, "
+                 "to reverse it?"
+             ),
+         ]
+
+     def get_keyword_map(self) -> dict[str, list[int]]:
+         return {
+             "cause": [0, 18], "causality": [0, 18], "why": [0, 18],
+             "conserv": [1], "balance": [1, 5], "preserve": [1],
+             "symmetr": [2], "invariant": [2], "transform": [2],
+             "equilib": [3], "stable": [3], "steady": [3],
+             "scale": [4], "size": [4], "dimension": [4], "grow": [4],
+             "force": [5], "push": [5], "pull": [5], "pressure": [5],
+             "energy": [6, 19], "power": [6], "efficien": [6],
+             "feedback": [7], "control": [7], "regulat": [7],
+             "state": [8], "complex": [8], "freedom": [8],
+             "measure": [9], "observ": [9], "data": [9], "test": [9],
+             "change": [10], "rate": [10, 16], "dynamic": [10],
+             "approximat": [11], "small": [11], "perturb": [11],
+             "optim": [12], "best": [12], "minimum": [12], "maximum": [12],
+             "oscillat": [13], "frequen": [13], "resonan": [13], "vibrat": [13],
+             "boundary": [14], "constrain": [14], "limit": [14],
+             "interact": [15], "coupl": [15], "connect": [15],
+             "bottleneck": [16], "slow": [16], "throughput": [16],
+             "nonlinear": [17], "emergent": [17], "threshold": [17], "chaos": [17],
+             "infer": [18], "deduc": [18], "inverse": [18],
+             "entropy": [19], "disorder": [19], "irreversib": [19], "thermodyn": [19],
+             "technology": [6, 7, 16], "society": [7, 17], "learning": [7, 11],
+             "intelligence": [8, 10, 17], "evolution": [3, 17, 19],
+             "climate": [1, 7, 19], "economic": [3, 7, 16],
+             "health": [3, 7, 16], "network": [8, 15, 17],
+         }
+
+     def analyze(self, concept: str) -> str:
+         template = self.select_template(concept)
+         return template.replace("{concept}", concept)
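Taken together, `get_analysis_templates`, `get_keyword_map`, and `analyze` form a template-selection pipeline: pick a template for the concept, then substitute the concept into its `{concept}` placeholders. Since `ReasoningAgent.select_template` is defined in `base_agent.py` (outside this diff), the following is a self-contained, hypothetical stand-in showing the end-to-end flow; the trivial `select_template` here is an assumption for illustration, not the real base-class logic.

```python
# Hypothetical stand-in for ReasoningAgent (defined in base_agent.py,
# which is not part of this diff).
class StubReasoningAgent:
    def get_analysis_templates(self) -> list[str]:
        return [
            "Mapping the energy flows within '{concept}': energy is neither "
            "created nor destroyed, only converted between forms."
        ]

    def select_template(self, concept: str) -> str:
        # Assumed trivial selection; the real base class presumably scores
        # templates against get_keyword_map().
        return self.get_analysis_templates()[0]

    def analyze(self, concept: str) -> str:
        # Mirrors NewtonAgent.analyze: substitute the concept into the template.
        return self.select_template(concept).replace("{concept}", concept)


print(StubReasoningAgent().analyze("a heat engine"))
```

With the real base class, only `select_template` would differ; the substitution step is exactly the `str.replace` call shown in `analyze` above.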
reasoning_forge/agents/philosophy_agent.py ADDED
@@ -0,0 +1,293 @@
+ """
+ Philosophy Agent - Analyzes concepts through epistemology, ontology, and conceptual meaning.
+
+ Focuses on epistemological questions (what can we know?), ontological questions
+ (what exists?), underlying assumptions, historical philosophical connections,
+ and implications for understanding reality.
+ """
+
+ from reasoning_forge.agents.base_agent import ReasoningAgent
+
+
+ class PhilosophyAgent(ReasoningAgent):
+     name = "Philosophy"
+     perspective = "conceptual_meaning_and_foundations"
+
+     def get_analysis_templates(self) -> list[str]:
+         return [
+             # 0 - Epistemological limits
+             (
+                 "Interrogating the epistemological boundaries of '{concept}': what can we "
+                 "actually know about this, and how do we know it? Every knowledge claim "
+                 "rests on a justification chain that eventually terminates in something "
+                 "unjustified -- an axiom, a sensory experience, or a pragmatic assumption. "
+                 "The Agrippan trilemma tells us this chain must end in dogmatism (accepting "
+                 "an unjustified starting point), infinite regress (each justification requires "
+                 "another), or circularity (the chain loops back on itself). Acknowledging "
+                 "which horn of this trilemma our understanding of '{concept}' rests on is "
+                 "not skeptical defeatism but intellectual honesty about the foundations of "
+                 "our confidence."
+             ),
+             # 1 - Ontological status
+             (
+                 "Examining the ontological status of '{concept}': does this exist "
+                 "independently of minds that think about it, or is it a construct of "
+                 "human cognition? Realism holds that the entities and structures involved "
+                 "exist mind-independently; conceptualism holds they are products of "
+                 "categorization imposed by cognitive agents; nominalism holds that only "
+                 "particular instances exist and the general category is merely a label. "
+                 "The ontological commitment we make about '{concept}' has practical "
+                 "consequences: if it is mind-independent, we discover it; if it is "
+                 "constructed, we negotiate it; if it is nominal, we can reshape it by "
+                 "changing our categories."
+             ),
+             # 2 - Assumption excavation
+             (
+                 "Excavating the hidden assumptions beneath '{concept}': every conceptual "
+                 "framework rests on presuppositions so deeply embedded that they become "
+                 "invisible -- the background against which the figure of the concept appears. "
+                 "These include metaphysical assumptions (what kind of thing is this?), "
+                 "epistemological assumptions (what counts as evidence?), normative assumptions "
+                 "(what should we value?), and linguistic assumptions (do our categories carve "
+                 "nature at its joints?). Making these assumptions explicit transforms a "
+                 "monolithic concept into a layered structure where each layer can be "
+                 "independently examined, challenged, and potentially replaced."
+             ),
+             # 3 - Socratic questioning
+             (
+                 "Subjecting '{concept}' to Socratic examination: what do we mean by this, "
+                 "precisely? Can we provide a definition that is neither too broad (including "
+                 "things that should be excluded) nor too narrow (excluding things that should "
+                 "be included)? Every proposed definition generates counterexamples -- cases "
+                 "that meet the definition but violate our intuitions, or cases that our "
+                 "intuitions include but the definition excludes. This dialectical process "
+                 "does not necessarily converge on a final definition; its value lies in "
+                 "revealing the internal structure and boundary conditions of the concept, "
+                 "showing us where our understanding is sharp and where it is fuzzy."
+             ),
+             # 4 - Phenomenological description
+             (
+                 "Describing '{concept}' phenomenologically: before theorizing about causes, "
+                 "mechanisms, or implications, we must give a faithful description of how "
+                 "this concept appears to consciousness. What is the first-person experience "
+                 "of encountering it? What is its temporal structure -- does it present as "
+                 "an enduring state, a sudden event, or a gradual process? What is its "
+                 "intentional structure -- what is it about, what does it point toward? "
+                 "Phenomenological description brackets our theoretical commitments and "
+                 "returns to the things themselves, providing a pre-theoretical ground from "
+                 "which all theoretical constructions depart."
+             ),
+             # 5 - Dialectical tension
+             (
+                 "Mapping the dialectical tensions within '{concept}': every concept harbors "
+                 "internal contradictions that drive its development. The thesis (the initial "
+                 "formulation) generates its antithesis (the negation that the formulation "
+                 "suppresses but cannot eliminate). The tension between them demands a "
+                 "synthesis that preserves the valid content of both while transcending their "
+                 "limitations. This synthesis becomes a new thesis, generating its own "
+                 "antithesis, in a continuing spiral of deepening understanding. For "
+                 "'{concept}', identifying the central dialectical tension reveals the "
+                 "dynamic that drives the concept's evolution and points toward its "
+                 "next developmental stage."
+             ),
+             # 6 - Category analysis
+             (
+                 "Analyzing the categorical structure of '{concept}': how do we classify this, "
+                 "and do our categories illuminate or distort? Aristotelian categories "
+                 "(substance, quantity, quality, relation, place, time, position, state, "
+                 "action, passion) provide one framework. Kantian categories (unity, plurality, "
+                 "totality, reality, negation, limitation, causality, community, possibility, "
+                 "existence, necessity) provide another. Each categorical framework makes "
+                 "certain features of '{concept}' visible and others invisible. The categories "
+                 "we use are not neutral containers but active structuring principles that "
+                 "shape what we can think and say about the concept."
+             ),
+             # 7 - Wittgensteinian language analysis
+             (
+                 "Examining '{concept}' through the lens of language: Wittgenstein taught that "
+                 "many philosophical problems dissolve when we attend to how words are actually "
+                 "used rather than what we think they mean in the abstract. The meaning of "
+                 "'{concept}' is not a fixed essence but a family of uses connected by "
+                 "overlapping similarities -- a family resemblance. No single feature is "
+                 "shared by all instances. The concept has fuzzy boundaries, and attempts to "
+                 "sharpen them always involve a decision (not a discovery) about where to draw "
+                 "the line. Many apparent disagreements about '{concept}' are actually "
+                 "disagreements about the boundaries of the concept, not about the facts."
+             ),
+             # 8 - Hermeneutic circle
+             (
+                 "Interpreting '{concept}' within the hermeneutic circle: we cannot understand "
+                 "the parts without understanding the whole, but we cannot understand the whole "
+                 "without understanding the parts. Understanding proceeds not linearly but "
+                 "spirally -- we begin with a provisional grasp of the whole, use it to "
+                 "interpret the parts, then revise our understanding of the whole in light "
+                 "of the parts, and iterate. Each cycle deepens understanding without ever "
+                 "reaching a final, complete interpretation. For '{concept}', this means that "
+                 "any analysis is necessarily provisional, positioned within a hermeneutic "
+                 "spiral that continues beyond our current horizon."
+             ),
+             # 9 - Pragmatist evaluation
+             (
+                 "Evaluating '{concept}' pragmatically: a concept's value lies not in its "
+                 "correspondence to some abstract truth but in the practical difference it "
+                 "makes. What predictions does it enable? What actions does it guide? What "
+                 "problems does it help solve? If two formulations of '{concept}' lead to "
+                 "identical practical consequences, the difference between them is merely "
+                 "verbal, not substantive. Conversely, a conceptual distinction that makes "
+                 "no practical difference is a distinction without a difference. The pragmatist "
+                 "test cuts through metaphysical debates by asking: what concrete experiences "
+                 "would be different if this concept were true versus false?"
+             ),
+             # 10 - Existentialist reading
+             (
+                 "Reading '{concept}' through existentialist philosophy: human existence "
+                 "precedes essence -- we are not born with a fixed nature but must create "
+                 "meaning through our choices and commitments. '{concept}' does not have "
+                 "an inherent meaning waiting to be discovered; its meaning is constituted "
+                 "by the stance we take toward it. This radical freedom is also radical "
+                 "responsibility: we cannot appeal to a predetermined meaning or an authority "
+                 "to justify our interpretation. Authenticity demands that we own our "
+                 "interpretation of '{concept}' as a choice, not disguise it as a discovery "
+                 "of something that was always there."
+             ),
+             # 11 - Mind-body problem connection
+             (
+                 "Connecting '{concept}' to the mind-body problem: how does the subjective, "
+                 "experiential dimension of this concept relate to its objective, physical "
+                 "dimension? Dualism posits two separate realms; materialism reduces the "
+                 "mental to the physical; idealism reduces the physical to the mental; "
+                 "neutral monism holds both emerge from something more fundamental. For "
+                 "'{concept}', the question is whether its full reality is captured by "
+                 "objective description or whether there is an irreducible subjective "
+                 "dimension -- a 'what it is like' -- that escapes third-person analysis. "
+                 "If there is, our understanding will always be incomplete to the degree "
+                 "that we rely solely on objective methods."
+             ),
+             # 12 - Problem of universals
+             (
+                 "Applying the problem of universals to '{concept}': when we use the concept "
+                 "to group multiple particular instances, what grounds the grouping? Platonism "
+                 "holds that a universal Form exists independently, and particulars participate "
+                 "in it. Aristotelian realism holds that universals exist only in their "
+                 "instances. Nominalism holds that nothing is universal -- only particular "
+                 "instances exist, and the grouping is a convention. For '{concept}', the "
+                 "question of what makes different instances 'the same concept' is not merely "
+                 "academic: it determines whether we can generalize from known instances to "
+                 "new ones, and with what confidence."
+             ),
+             # 13 - Philosophical anthropology
+             (
+                 "Situating '{concept}' in philosophical anthropology: what does this concept "
+                 "reveal about human nature? Humans are the beings for whom their own being "
+                 "is an issue -- we do not simply exist but relate to our existence, "
+                 "questioning and interpreting it. '{concept}' is not merely an object of "
+                 "study but a mirror reflecting the kind of beings we are: beings who seek "
+                 "meaning, impose order on chaos, project themselves into the future, and "
+                 "cannot help but ask 'why?' The way we engage with '{concept}' reveals "
+                 "our characteristic mode of being-in-the-world."
+             ),
+             # 14 - Paradigm analysis
+             (
+                 "Examining '{concept}' as a paradigm-dependent construct: Kuhn showed that "
+                 "scientific concepts are not neutral descriptions of reality but are shaped "
+                 "by the paradigm within which they operate. The paradigm determines what "
+                 "counts as a legitimate question, what counts as evidence, what methods are "
+                 "acceptable, and what a satisfactory explanation looks like. Concepts that "
+                 "are central in one paradigm may be meaningless or invisible in another. "
+                 "For '{concept}', we must ask: which paradigm makes this concept visible? "
+                 "What would it look like from within a different paradigm? Is the concept "
+                 "paradigm-specific, or does it survive paradigm shifts?"
+             ),
+             # 15 - Genealogical critique
+             (
+                 "Tracing the genealogy of '{concept}': Nietzsche and Foucault showed that "
+                 "concepts have histories -- they emerge at specific times, serve specific "
+                 "interests, and carry the traces of their origins. A concept that presents "
+                 "itself as timeless and universal often turns out to be historically "
+                 "contingent and ideologically loaded. The genealogical method asks: when "
+                 "did this concept emerge? What problem was it designed to solve? Whose "
+                 "interests did it serve? What alternatives did it displace? For '{concept}', "
+                 "genealogical analysis reveals the power relations and historical accidents "
+                 "concealed beneath the appearance of naturalness."
+             ),
+             # 16 - Thought experiment testing
+             (
+                 "Testing '{concept}' through thought experiments: philosophical thought "
+                 "experiments isolate a conceptual question by constructing a scenario that "
+                 "strips away irrelevant details. The Ship of Theseus asks about identity "
+                 "through change. The Trolley Problem isolates competing moral intuitions. "
+                 "Mary's Room tests the completeness of physical description. For '{concept}', "
+                 "we can construct analogous thought experiments: imagine a world where this "
+                 "concept is absent -- what changes? Imagine it taken to its logical extreme "
+                 "-- what breaks? Imagine its opposite -- is the opposite even coherent? "
+                 "These scenarios stress-test the concept's boundaries and assumptions."
+             ),
+             # 17 - Philosophy of science connection
+             (
+                 "Connecting '{concept}' to the philosophy of science: is this concept "
+                 "empirically testable (falsifiable in Popper's sense), or does it belong "
+                 "to the non-empirical framework within which empirical testing occurs? "
+                 "Theories are underdetermined by evidence -- multiple incompatible theories "
+                 "can explain the same data. The choice between them involves extra-empirical "
+                 "criteria: simplicity, elegance, unifying power, and coherence with "
+                 "background beliefs. For '{concept}', we must distinguish the empirical "
+                 "content (what it predicts that could be wrong) from the metaphysical "
+                 "content (what it assumes that cannot be tested)."
+             ),
+             # 18 - Ethics of belief
+             (
+                 "Applying the ethics of belief to '{concept}': Clifford argued that it is "
+                 "wrong to believe anything on insufficient evidence; James argued that some "
+                 "beliefs are legitimate even without conclusive evidence when the stakes are "
+                 "high and evidence is unavailable. For '{concept}', the ethics of belief asks: "
+                 "given the available evidence, are our confidence levels calibrated? Are we "
+                 "believing more or less strongly than the evidence warrants? Is our confidence "
+                 "driven by evidence or by desire? When the evidence is genuinely ambiguous, "
+                 "do we acknowledge the ambiguity or paper over it with false certainty?"
+             ),
+             # 19 - Derrida and deconstruction
+             (
+                 "Deconstructing '{concept}': Derrida showed that every concept depends on a
251
+ "system of binary oppositions (presence/absence, nature/culture, literal/"
252
+ "metaphorical), and each opposition privileges one term over the other. "
253
+ "Deconstruction traces how the privileged term depends on the very thing "
254
+ "it excludes -- the center requires the margin, identity requires difference, "
255
+ "the concept requires what it defines itself against. For '{concept}', "
256
+ "deconstruction asks: what is the constitutive outside -- the excluded "
257
+ "other -- that this concept defines itself against? How does that exclusion "
258
+ "shape and limit the concept? What would it mean to think beyond the "
259
+ "opposition?"
260
+ ),
261
+ ]
262
+
263
+ def get_keyword_map(self) -> dict[str, list[int]]:
264
+ return {
265
+ "know": [0, 18], "knowledge": [0, 18], "epistem": [0],
266
+ "exist": [1, 10], "real": [1, 17], "being": [1, 13],
267
+ "assum": [2], "presuppos": [2], "foundati": [2],
268
+ "defin": [3], "mean": [3, 7], "what is": [3],
269
+ "experience": [4, 11], "conscious": [4, 11], "feel": [4],
270
+ "contradict": [5], "tension": [5], "oppos": [5, 19],
271
+ "categor": [6], "classify": [6], "type": [6],
272
+ "language": [7], "word": [7], "concept": [7],
273
+ "interpret": [8], "understand": [8], "meaning": [8],
274
+ "practical": [9], "useful": [9], "pragmat": [9],
275
+ "freedom": [10], "choice": [10], "authentic": [10],
276
+ "mind": [11], "body": [11], "subjectiv": [11],
277
+ "universal": [12], "particular": [12], "general": [12],
278
+ "human": [13], "nature": [13], "anthropol": [13],
279
+ "paradigm": [14], "revolution": [14], "shift": [14],
280
+ "history": [15], "origin": [15], "genealog": [15], "power": [15],
281
+ "thought experiment": [16], "imagine": [16], "hypothetical": [16],
282
+ "science": [17], "empiric": [17], "falsifi": [17],
283
+ "belief": [18], "evidence": [18], "justif": [18],
284
+ "binary": [19], "deconstr": [19], "exclus": [19],
285
+ "technology": [14, 17], "ai": [1, 11], "artificial": [1, 11],
286
+ "society": [5, 15], "learning": [0, 8],
287
+ "intelligence": [1, 11], "evolution": [5, 15],
288
+ "moral": [10, 18], "ethic": [10, 18],
289
+ }
290
+
291
+ def analyze(self, concept: str) -> str:
292
+ template = self.select_template(concept)
293
+ return template.replace("{concept}", concept)
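Each agent delegates template choice to `select_template`, which lives in the `ReasoningAgent` base class and is not shown in this commit. A minimal standalone sketch of how such keyword-to-index selection could work (the function body here is an assumption for illustration, not the repository's implementation):

```python
def select_template(concept: str, templates: list[str],
                    keyword_map: dict[str, list[int]]) -> str:
    """Score templates by how many mapped keywords appear in the concept."""
    lowered = concept.lower()
    scores: dict[int, int] = {}
    for keyword, indices in keyword_map.items():
        if keyword in lowered:
            for idx in indices:
                scores[idx] = scores.get(idx, 0) + 1
    # Fall back to template 0 when no keyword matches.
    best = max(scores, key=scores.get) if scores else 0
    return templates[best]

# Toy data in the shape of the agents' real template lists and keyword maps.
templates = ["Epistemic view of '{concept}'.", "Ontological view of '{concept}'."]
keyword_map = {"know": [0], "exist": [1]}
chosen = select_template("what exists in nature", templates, keyword_map)
```

Here "exist" matches, so the ontological template (index 1) is selected before `analyze` substitutes the concept into it.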
reasoning_forge/agents/quantum_agent.py ADDED
@@ -0,0 +1,292 @@
"""
Quantum Agent - Analyzes concepts through probabilistic and uncertainty reasoning.

Focuses on superposition of possibilities, measurement effects, probabilistic
vs deterministic outcomes, entanglement and correlations, and wave-particle
duality analogies.
"""

from reasoning_forge.agents.base_agent import ReasoningAgent


class QuantumAgent(ReasoningAgent):
    name = "Quantum"
    perspective = "probabilistic_and_uncertainty"

    def get_analysis_templates(self) -> list[str]:
        return [
            # 0 - Superposition of possibilities
            (
                "Before we commit to a single interpretation, '{concept}' exists in a "
                "superposition of multiple valid framings simultaneously. Each framing "
                "carries a probability amplitude -- not a classical probability, but a "
                "complex weight that can interfere constructively or destructively with "
                "others. Some framings reinforce each other, producing high-probability "
                "interpretations; others cancel out, revealing that certain seemingly "
                "plausible readings are actually suppressed by internal contradictions. "
                "The richest understanding comes from maintaining this superposition as "
                "long as possible, resisting the temptation to collapse prematurely into "
                "a single narrative."
            ),
            # 1 - Measurement disturbance
            (
                "The act of examining '{concept}' necessarily disturbs it. Any attempt to "
                "pin down one aspect with high precision introduces uncertainty into "
                "complementary aspects. If we measure the current state with perfect "
                "accuracy, we lose information about the trajectory of change. If we "
                "track the dynamics precisely, the instantaneous state becomes blurred. "
                "This is not a failure of our instruments -- it is a fundamental feature "
                "of systems where the observer and observed are entangled. The experimental "
                "design (which questions we choose to ask) shapes the answers we can obtain, "
                "making the framing of inquiry as important as the inquiry itself."
            ),
            # 2 - Complementarity
            (
                "'{concept}' exhibits complementarity: it has pairs of properties that "
                "cannot be simultaneously specified with arbitrary precision. Like position "
                "and momentum in quantum mechanics, knowing one aspect exhaustively means "
                "accepting irreducible uncertainty in its complement. The wave-like view "
                "emphasizes distributed patterns, interference, and coherence across the "
                "whole system. The particle-like view emphasizes localized events, discrete "
                "outcomes, and individual instances. Neither view alone is complete; both "
                "are needed, and the apparent contradiction between them is not a defect "
                "but the deepest feature of the subject."
            ),
            # 3 - Probability amplitudes and interference
            (
                "Analyzing the probability landscape of '{concept}': outcomes are not "
                "determined by summing classical probabilities but by summing amplitudes "
                "that can interfere. Two pathways to the same outcome may cancel each other "
                "(destructive interference), making a seemingly likely result improbable. "
                "Alternatively, they may reinforce (constructive interference), making an "
                "unlikely outcome surprisingly common. This means we cannot reason about "
                "'{concept}' by considering each factor in isolation and adding up their "
                "effects -- the cross-terms between factors, the interference pattern, "
                "carries critical information that purely additive thinking misses."
            ),
            # 4 - Entanglement and correlation
            (
                "Multiple elements of '{concept}' are entangled: measuring or changing one "
                "instantaneously constrains what we can know about the others, regardless "
                "of the apparent separation between them. These correlations are stronger "
                "than any classical explanation permits -- they cannot be reproduced by "
                "assuming each element has pre-existing definite properties. This means "
                "'{concept}' is not decomposable into fully independent parts. The "
                "correlations between components carry information that is not contained "
                "in any component individually. Analyzing the parts in isolation and then "
                "trying to reconstruct the whole will systematically miss these non-local "
                "correlations."
            ),
            # 5 - Collapse and decision
            (
                "At some point, the superposition of possibilities around '{concept}' must "
                "collapse into a definite outcome. This collapse -- the moment of decision, "
                "measurement, or commitment -- is irreversible. Before collapse, all "
                "possibilities coexist and influence each other through interference. After "
                "collapse, one outcome is realized and the others vanish. The timing of "
                "this collapse matters enormously: collapsing too early (deciding prematurely) "
                "forecloses options that might have interfered constructively. Collapsing "
                "too late risks decoherence, where the environment randomizes the phases "
                "and destroys the delicate interference patterns that could have guided "
                "a better outcome."
            ),
            # 6 - Tunneling through barriers
            (
                "Within '{concept}', there may be barriers that appear insurmountable "
                "under classical analysis -- energy gaps too wide, transitions too "
                "improbable. But quantum tunneling demonstrates that a nonzero probability "
                "exists for traversing barriers that classical reasoning says are impassable. "
                "The tunneling probability depends exponentially on the barrier width and "
                "height: thin barriers are penetrable, thick ones are not. For '{concept}', "
                "this suggests asking: are the perceived obstacles genuinely thick barriers, "
                "or are they thin barriers that appear impenetrable only because we are "
                "applying classical (deterministic) reasoning to an inherently probabilistic "
                "situation?"
            ),
            # 7 - Decoherence and information leakage
            (
                "The coherence of '{concept}' -- the ability of its different aspects to "
                "interfere constructively -- is fragile. Interaction with a noisy environment "
                "causes decoherence: the quantum-like superposition of possibilities decays "
                "into a classical mixture where different outcomes no longer interfere. "
                "Each interaction with the environment leaks information about the system's "
                "state, effectively performing a partial measurement. The decoherence time "
                "sets the window within which coherent reasoning about '{concept}' remains "
                "valid. Beyond that window, the interference effects have washed out and "
                "we are left with classical probabilistic reasoning -- still useful, but "
                "less powerful."
            ),
            # 8 - No-cloning and uniqueness
            (
                "The no-cloning theorem states that an unknown quantum state cannot be "
                "perfectly copied. Applied to '{concept}': if the concept embodies a unique "
                "configuration of entangled properties, it cannot be perfectly replicated "
                "by decomposing it into parts and reassembling them. Any attempt to copy "
                "it disturbs the original. This has profound implications: unique instances "
                "of '{concept}' are genuinely irreplaceable, not merely practically "
                "difficult to reproduce. Strategies that depend on exact replication must "
                "be replaced by strategies that work with approximate copies and manage "
                "the fidelity loss."
            ),
            # 9 - Uncertainty principle application
            (
                "Heisenberg's uncertainty principle, generalized beyond physics, suggests "
                "that '{concept}' has conjugate properties that trade off precision. "
                "Specifying the concept's scope with extreme precision makes its future "
                "trajectory unpredictable. Specifying the direction of change precisely "
                "blurs the current boundaries. The product of these uncertainties has a "
                "minimum value -- we cannot reduce both below a threshold. Practical "
                "wisdom lies in choosing which uncertainty to minimize based on what "
                "decisions we need to make, accepting that the conjugate uncertainty "
                "will necessarily increase."
            ),
            # 10 - Quantum Zeno effect
            (
                "Frequent observation of '{concept}' can freeze its evolution -- the "
                "quantum Zeno effect. Continuously monitoring whether the system has "
                "changed forces it to remain in its initial state, because each "
                "observation collapses the evolving superposition back to the starting "
                "point before significant transition amplitude accumulates. Paradoxically, "
                "the most watched aspects of '{concept}' may be the least likely to "
                "change. Allowing unmonitored evolution -- stepping back and not measuring "
                "for a while -- may be necessary for genuine transformation to occur."
            ),
            # 11 - Eigenstate decomposition
            (
                "Decomposing '{concept}' into its eigenstates -- the stable, self-consistent "
                "configurations that persist under the relevant operator -- reveals the "
                "natural modes of the system. Each eigenstate has a definite value for "
                "the quantity being measured; a general state is a superposition of these "
                "eigenstates. The eigenvalue spectrum (the set of possible measurement "
                "outcomes) may be discrete, continuous, or mixed. Discrete spectra imply "
                "quantized behavior: only certain values are possible, and the system "
                "jumps between them. Identifying the eigenstates of '{concept}' tells us "
                "what the stable configurations are and what transitions between them look like."
            ),
            # 12 - Path integral perspective
            (
                "From the path integral perspective, '{concept}' does not follow a single "
                "trajectory from start to finish. Instead, every conceivable path contributes "
                "to the final outcome, each weighted by a phase factor. Most paths cancel "
                "each other out through destructive interference, leaving only a narrow "
                "bundle of 'classical' paths that dominate the sum. But near decision points, "
                "barriers, or transitions, the non-classical paths contribute significantly, "
                "and the outcome depends on the full ensemble of possibilities. This perspective "
                "counsels against fixating on the most likely path and instead attending to "
                "the full distribution of paths that contribute to the result."
            ),
            # 13 - Entanglement entropy and information
            (
                "The entanglement entropy of '{concept}' measures how much information about "
                "one part of the system is encoded in its correlations with other parts rather "
                "than in the part itself. High entanglement entropy means the subsystem appears "
                "maximally disordered when examined alone, even though the joint system may be "
                "in a pure, fully determined state. This is a profound observation: local "
                "ignorance can coexist with global certainty. For '{concept}', apparent "
                "randomness or confusion at one level may dissolve into perfect order when "
                "we expand our view to include the correlated components."
            ),
            # 14 - Basis dependence and frame choice
            (
                "Our analysis of '{concept}' depends critically on the basis we choose -- "
                "the set of fundamental categories into which we decompose the concept. "
                "A different basis (a different set of fundamental categories) can make a "
                "confused-looking problem transparent, or a simple-looking problem intractable. "
                "There is no uniquely 'correct' basis; the optimal choice depends on which "
                "question we are asking. The interference terms that appear in one basis "
                "become diagonal (simple) in another. Finding the basis that diagonalizes "
                "the problem -- the natural language in which '{concept}' expresses itself "
                "most simply -- is often the breakthrough that transforms understanding."
            ),
            # 15 - Coherent vs incoherent mixtures
            (
                "A critical distinction for '{concept}': is the coexistence of multiple "
                "interpretations a coherent superposition (where they interfere and interact) "
                "or an incoherent mixture (where they merely coexist without interaction, "
                "like balls in an urn)? A coherent superposition produces interference "
                "effects -- outcomes that no single interpretation predicts. An incoherent "
                "mixture produces only the probabilistic average of individual interpretations. "
                "The practical difference is enormous: coherent combinations can exhibit "
                "effects (constructive peaks, destructive nulls) that are impossible in "
                "any classical mixture."
            ),
            # 16 - Quantum error and robustness
            (
                "How robust is '{concept}' against errors and noise? Quantum error correction "
                "teaches that information can be protected by encoding it redundantly across "
                "entangled components. No single component carries the full information, so "
                "no single error can destroy it. For '{concept}', the analogous question is: "
                "how is the essential meaning distributed across its components? If it is "
                "concentrated in a single fragile element, one disruption destroys it. If "
                "it is encoded holographically across many entangled elements, it is "
                "remarkably robust against local damage."
            ),
            # 17 - Born rule and outcome probabilities
            (
                "The Born rule assigns probabilities to outcomes as the squared magnitude "
                "of the amplitude. Applied to '{concept}': the probability of a particular "
                "interpretation prevailing is not the amplitude of support for it but the "
                "amplitude squared -- a nonlinear transformation. Small differences in "
                "amplitude translate to large differences in probability. A framing with "
                "twice the amplitude is four times as likely to be realized. This squared "
                "relationship means that dominant framings dominate more than linear "
                "reasoning predicts, while minority framings are suppressed more severely "
                "than their representation in discourse would suggest."
            ),
            # 18 - Contextuality
            (
                "'{concept}' may be contextual: the outcome of examining one property "
                "depends on which other properties are being examined simultaneously. "
                "There is no assignment of pre-existing definite values to all properties "
                "that reproduces the observed correlations -- the properties do not exist "
                "independently of the measurement context. This is stronger than mere "
                "observer bias: it means the properties are genuinely undefined until "
                "a context is specified. For '{concept}', this implies that asking 'what "
                "is it really?' without specifying the context of inquiry is a question "
                "that has no answer."
            ),
            # 19 - Quantum advantage
            (
                "Is there a quantum advantage in reasoning about '{concept}'? Classical "
                "reasoning processes information one path at a time. Quantum-inspired "
                "reasoning processes all paths simultaneously through superposition, "
                "using interference to amplify correct conclusions and suppress incorrect "
                "ones. The advantage is greatest for problems with hidden structure -- "
                "where the correct answer is encoded in correlations between variables "
                "that classical single-path reasoning cannot efficiently explore. If "
                "'{concept}' has such hidden structure, maintaining a superposition of "
                "hypotheses and allowing them to interfere will converge on the answer "
                "faster than serially testing each hypothesis."
            ),
        ]

    def get_keyword_map(self) -> dict[str, list[int]]:
        return {
            "possibilit": [0, 5], "option": [0, 5], "choice": [0, 5],
            "measure": [1, 10], "observ": [1, 10], "monitor": [1, 10],
            "complement": [2], "dual": [2], "wave": [2], "particle": [2],
            "probabilit": [3, 17], "likel": [3, 17], "chance": [3, 17],
            "correlat": [4, 13], "connect": [4], "relat": [4],
            "decid": [5], "commit": [5], "irreversib": [5],
            "barrier": [6], "obstacle": [6], "impossibl": [6],
            "noise": [7, 16], "decay": [7], "environm": [7],
            "unique": [8], "copy": [8], "replica": [8],
            "uncertain": [9], "tradeoff": [9], "precis": [9],
            "watch": [10], "surveil": [10], "frequent": [10],
            "stable": [11], "mode": [11], "spectrum": [11],
            "path": [12], "trajectory": [12], "possib": [12],
            "inform": [13], "entropy": [13], "knowledge": [13],
            "categor": [14], "basis": [14], "framework": [14], "frame": [14],
            "coexist": [15], "mixture": [15], "blend": [15],
            "robust": [16], "error": [16], "protect": [16],
            "dominant": [17], "major": [17], "minor": [17],
            "context": [18], "depend": [18], "situati": [18],
            "advantage": [19], "efficien": [19], "complex": [19],
            "technology": [6, 19], "society": [4, 7], "learning": [10, 12],
            "intelligence": [14, 19], "evolution": [5, 12],
            "health": [1, 9], "network": [4, 13],
        }

    def analyze(self, concept: str) -> str:
        template = self.select_template(concept)
        return template.replace("{concept}", concept)
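Templates 3 and 17 above make two quantitative claims: amplitudes are summed before squaring, and the Born rule's squaring means a framing with twice the amplitude is four times as likely. A small numeric illustration of both (illustrative values, not from the repository):

```python
import cmath

# Two paths with equal magnitude but opposite phase: coherent summation
# (template 3) yields destructive interference; an incoherent mixture
# (template 15) just adds the individual probabilities.
a1 = 0.5 + 0.0j
a2 = cmath.rect(0.5, cmath.pi)  # magnitude 0.5, phase pi

p_interfering = abs(a1 + a2) ** 2           # coherent: amplitudes cancel, ~0.0
p_classical = abs(a1) ** 2 + abs(a2) ** 2   # incoherent: 0.25 + 0.25 = 0.5

# Born rule nonlinearity (template 17): doubling an amplitude
# quadruples the resulting probability.
ratio = abs(2 * a1) ** 2 / abs(a1) ** 2
```

The coherent sum is suppressed to (numerically) zero while the incoherent mixture keeps probability 0.5, and `ratio` comes out to 4.0.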
reasoning_forge/cocoon_sync.py ADDED
@@ -0,0 +1,441 @@
"""
Federated Cocoon Synchronization Protocol — Encrypted state packaging,
HMAC signing, and attractor merger for distributed RC+xi nodes.

Implements:
- Cocoon packaging with full RC+xi metrics
- Fernet symmetric encryption (AES-128-CBC + HMAC-SHA256)
- Attractor merger via weighted mean-field coupling (Eq. 12)
- Phase coherence consensus (Gamma >= 0.98 target)
- Secure sync protocol: package -> encrypt -> sign -> transmit -> verify -> merge

This module enables Codette Pods (edge nodes on RPi 5) to synchronize
their reasoning state without exposing raw data.
"""

from __future__ import annotations

import hashlib
import hmac
import json
import os
import time
import uuid
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional, Tuple

# Encryption is optional — gracefully degrade if cryptography not installed
try:
    from cryptography.fernet import Fernet
    HAS_CRYPTO = True
except ImportError:
    HAS_CRYPTO = False


# ---------------------------------------------------------------------------
# Data structures
# ---------------------------------------------------------------------------

@dataclass
class CocoonPackage:
    """A packaged cocoon ready for sync."""
    cocoon_id: str
    node_id: str
    timestamp: float
    state_snapshot: Dict[str, Any]
    attractors: List[Dict]
    glyphs: List[Dict]
    metrics: Dict[str, float]
    payload_hash: str
    encrypted: bool = False
    raw_payload: Optional[bytes] = None
    signature: Optional[str] = None


@dataclass
class SyncResult:
    """Result of a cocoon synchronization."""
    success: bool
    merged_attractors: int
    new_glyphs: int
    coherence_before: float
    coherence_after: float
    tension_delta: float
    errors: List[str] = field(default_factory=list)


# ---------------------------------------------------------------------------
# Key management
# ---------------------------------------------------------------------------

class CocoonKeyManager:
    """Manages encryption keys for cocoon sync."""

    def __init__(self, key: Optional[bytes] = None):
        if key:
            self._key = key
        elif HAS_CRYPTO:
            self._key = Fernet.generate_key()
        else:
            self._key = os.urandom(32)

    @property
    def key(self) -> bytes:
        return self._key

    def derive_hmac_key(self) -> bytes:
        return hashlib.sha256(self._key + b"hmac_salt_cocoon").digest()


# ---------------------------------------------------------------------------
# CocoonSync
# ---------------------------------------------------------------------------

class CocoonSync:
    """Federated cocoon synchronization protocol."""

    def __init__(
        self,
        node_id: str,
        key_manager: Optional[CocoonKeyManager] = None,
        coherence_target: float = 0.98,
        tension_target: float = 0.05,
        ethical_target: float = 0.90,
    ):
        self.node_id = node_id
        self.key_manager = key_manager or CocoonKeyManager()
        self.coherence_target = coherence_target
        self.tension_target = tension_target
        self.ethical_target = ethical_target

        self._local_attractors: List[Dict] = []
        self._local_glyphs: List[Dict] = []
        self._sync_history: List[Dict] = []

    # -- Step 1: Package ----------------------------------------------------

    def package_cocoon(
        self,
        spiderweb_state: Dict[str, Any],
        phase_coherence: float,
        epistemic_tension: float,
        ethical_alignment: float,
        attractors: Optional[List[Dict]] = None,
        glyphs: Optional[List[Dict]] = None,
    ) -> CocoonPackage:
        """Package current state into a cocoon for transmission.

        Args:
            spiderweb_state: Serialized QuantumSpiderweb state.
            phase_coherence: Current Gamma value.
            epistemic_tension: Current xi value.
            ethical_alignment: Current AEGIS eta value.
            attractors: Detected attractor manifolds.
            glyphs: Identity glyphs formed.

        Returns:
            CocoonPackage ready for encryption and transmission.
        """
        cocoon_id = f"cocoon_{uuid.uuid4().hex[:12]}"

        metrics = {
            "phase_coherence": round(phase_coherence, 4),
            "epistemic_tension": round(epistemic_tension, 4),
            "ethical_alignment": round(ethical_alignment, 4),
            "timestamp": time.time(),
        }

        # Build payload
        payload = {
            "cocoon_id": cocoon_id,
            "node_id": self.node_id,
            "state": spiderweb_state,
            "attractors": attractors or [],
            "glyphs": glyphs or [],
            "metrics": metrics,
        }

        payload_json = json.dumps(payload, sort_keys=True, default=str)
        payload_hash = hashlib.sha256(payload_json.encode()).hexdigest()

        return CocoonPackage(
            cocoon_id=cocoon_id,
            node_id=self.node_id,
            timestamp=time.time(),
            state_snapshot=spiderweb_state,
            attractors=attractors or [],
            glyphs=glyphs or [],
            metrics=metrics,
            payload_hash=payload_hash,
            raw_payload=payload_json.encode(),
        )

    # -- Step 2: Encrypt ---------------------------------------------------

    def encrypt_cocoon(self, package: CocoonPackage) -> CocoonPackage:
        """Encrypt cocoon payload with Fernet (AES-128-CBC + HMAC-SHA256).

        Returns a new CocoonPackage; does not mutate the input.
        Falls back to XOR obfuscation if cryptography is not installed.
        """
        import copy
        result = copy.copy(package)

        if result.raw_payload is None:
            payload_json = json.dumps({
                "cocoon_id": result.cocoon_id,
                "node_id": result.node_id,
                "state": result.state_snapshot,
                "attractors": result.attractors,
                "glyphs": result.glyphs,
                "metrics": result.metrics,
            }, sort_keys=True, default=str)
            result.raw_payload = payload_json.encode()

        if HAS_CRYPTO:
            fernet = Fernet(self.key_manager.key)
            encrypted = fernet.encrypt(result.raw_payload)
            result.raw_payload = encrypted
            result.encrypted = True
        else:
            # Fallback: XOR obfuscation (not real encryption — placeholder)
            key_bytes = self.key_manager.key[:len(result.raw_payload)]
            obfuscated = bytes(
                a ^ b for a, b in
                zip(result.raw_payload, key_bytes * (len(result.raw_payload) // len(key_bytes) + 1))
            )
            result.raw_payload = obfuscated
            result.encrypted = True

        return result
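The XOR fallback path above is symmetric: applying the same repeated keystream twice recovers the original payload, which is also how `decrypt_cocoon` reverses it below. A standalone round trip of that fallback, with the keystream construction pulled into a helper (the fixed key and payload are illustrative only, and as the code notes this is obfuscation, not real encryption):

```python
def xor_stream(data: bytes, key: bytes) -> bytes:
    """Mirror the fallback: XOR data against the key repeated to cover it."""
    key_bytes = key[:len(data)]
    stream = key_bytes * (len(data) // len(key_bytes) + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

key = bytes(range(1, 33))  # deterministic 32-byte stand-in for the shared key
payload = b'{"cocoon_id": "cocoon_demo", "metrics": {}}'

obfuscated = xor_stream(payload, key)
recovered = xor_stream(obfuscated, key)  # same operation undoes itself
```

Every keystream byte here is nonzero, so `obfuscated` differs from `payload`, while the second XOR restores it exactly.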

    # -- Step 3: Sign ------------------------------------------------------

    def sign_cocoon(self, package: CocoonPackage) -> CocoonPackage:
        """Sign cocoon with HMAC-SHA256 for integrity verification.

        Returns a new CocoonPackage; does not mutate the input.
        """
        import copy
        result = copy.copy(package)
        hmac_key = self.key_manager.derive_hmac_key()
        data_to_sign = result.raw_payload or result.payload_hash.encode()
        signature = hmac.new(hmac_key, data_to_sign, hashlib.sha256).hexdigest()
        result.signature = signature
        return result

    # -- Step 4: Verify (receiving end) ------------------------------------

    def verify_cocoon(self, package: CocoonPackage) -> bool:
        """Verify HMAC signature of incoming cocoon."""
        if not package.signature:
            return False
        hmac_key = self.key_manager.derive_hmac_key()
        data_to_verify = package.raw_payload or package.payload_hash.encode()
        expected = hmac.new(hmac_key, data_to_verify, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, package.signature)
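The sign/verify pair above reduces to a standard stdlib HMAC round trip: both nodes derive the same MAC key from the shared key with a fixed salt (as in `CocoonKeyManager.derive_hmac_key`), then compare digests in constant time. A self-contained sketch with placeholder key and payload values:

```python
import hashlib
import hmac

# Both nodes derive the MAC key from the shared key plus the fixed salt,
# matching CocoonKeyManager.derive_hmac_key.
shared_key = b"0" * 32  # stand-in for the shared Fernet key
hmac_key = hashlib.sha256(shared_key + b"hmac_salt_cocoon").digest()

payload = b"encrypted cocoon bytes"
signature = hmac.new(hmac_key, payload, hashlib.sha256).hexdigest()

# Receiving node recomputes the digest and compares in constant time.
expected = hmac.new(hmac_key, payload, hashlib.sha256).hexdigest()
ok = hmac.compare_digest(expected, signature)

# Any tampering with the payload invalidates the signature.
tampered = hmac.new(hmac_key, payload + b"!", hashlib.sha256).hexdigest()
rejected = not hmac.compare_digest(tampered, signature)
```

`hmac.compare_digest` is used instead of `==` to avoid leaking where the digests diverge through timing.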
237
+
238
+ # -- Step 5: Decrypt ---------------------------------------------------
239
+
240
+ def decrypt_cocoon(self, package: CocoonPackage) -> Dict[str, Any]:
241
+ """Decrypt cocoon payload.
242
+
243
+ Returns the deserialized payload dict.
244
+ """
245
+ if not package.encrypted or package.raw_payload is None:
246
+ return {
247
+ "state": package.state_snapshot,
248
+ "attractors": package.attractors,
249
+ "glyphs": package.glyphs,
250
+ "metrics": package.metrics,
251
+ }
252
+
253
+ if HAS_CRYPTO:
254
+ fernet = Fernet(self.key_manager.key)
255
+ decrypted = fernet.decrypt(package.raw_payload)
256
+ else:
257
+ # Reverse XOR
258
+ key_bytes = self.key_manager.key[:len(package.raw_payload)]
259
+ decrypted = bytes(
260
+ a ^ b for a, b in
261
+ zip(package.raw_payload, key_bytes * (len(package.raw_payload) // len(key_bytes) + 1))
262
+ )
263
+
264
+ return json.loads(decrypted.decode())
265
+
266
+ # -- Step 6: Merge attractors ------------------------------------------
267
+
268
+ def merge_attractors(
269
+ self,
270
+ local_attractors: List[Dict],
271
+ remote_attractors: List[Dict],
272
+ local_coherence: float = 0.95,
273
+ merge_radius: float = 2.0,
274
+ ) -> List[Dict]:
275
+ """Weighted attractor merger via mean-field coupling (Eq. 12).
276
+
277
+ alpha = min(local_coherence, 0.95): higher local coherence means the merge trusts local centers more.
278
+ """
279
+ alpha = min(local_coherence, 0.95)
280
+ merged = [dict(att) for att in local_attractors]  # shallow-copy dicts so the caller's attractors aren't mutated
281
+
282
+ for remote_att in remote_attractors:
283
+ r_center = remote_att.get("center", [0] * 5)
284
+ matched = False
285
+
286
+ for local_att in merged:
287
+ l_center = local_att.get("center", [0] * 5)
288
+ # Compute distance
289
+ dist = sum((a - b) ** 2 for a, b in zip(l_center, r_center)) ** 0.5
290
+ if dist <= merge_radius:
291
+ # Weighted merge: c_merged = alpha * c_local + (1-alpha) * c_remote
292
+ new_center = [
293
+ alpha * lc + (1 - alpha) * rc
294
+ for lc, rc in zip(l_center, r_center)
295
+ ]
296
+ local_att["center"] = new_center
297
+ # Expand member list
298
+ local_att.setdefault("remote_members", [])
299
+ local_att["remote_members"].extend(
300
+ remote_att.get("members", [])
301
+ )
302
+ matched = True
303
+ break
304
+
305
+ if not matched:
306
+ # New attractor from remote
307
+ merged.append({
308
+ "attractor_id": remote_att.get("attractor_id", f"remote_{len(merged)}"),
309
+ "center": r_center,
310
+ "members": remote_att.get("members", []),
311
+ "source": "remote",
312
+ })
313
+
314
+ return merged
315
+
316
+ # -- Full sync protocol ------------------------------------------------
317
+
318
+ def sync_with_remote(
319
+ self,
320
+ incoming_package: CocoonPackage,
321
+ local_spiderweb_state: Dict[str, Any],
322
+ local_coherence: float,
323
+ local_tension: float,
324
+ ) -> SyncResult:
325
+ """Full sync protocol: verify -> decrypt -> merge -> report.
326
+
327
+ Args:
328
+ incoming_package: Encrypted cocoon from remote node.
329
+ local_spiderweb_state: Current local web state.
330
+ local_coherence: Current local Gamma.
331
+ local_tension: Current local xi.
332
+
333
+ Returns:
334
+ SyncResult with merge statistics.
335
+ """
336
+ errors: List[str] = []
337
+
338
+ # Verify
339
+ if not self.verify_cocoon(incoming_package):
340
+ return SyncResult(
341
+ success=False, merged_attractors=0, new_glyphs=0,
342
+ coherence_before=local_coherence, coherence_after=local_coherence,
343
+ tension_delta=0.0, errors=["HMAC verification failed"],
344
+ )
345
+
346
+ # Decrypt
347
+ try:
348
+ remote_data = self.decrypt_cocoon(incoming_package)
349
+ except Exception as e:
350
+ return SyncResult(
351
+ success=False, merged_attractors=0, new_glyphs=0,
352
+ coherence_before=local_coherence, coherence_after=local_coherence,
353
+ tension_delta=0.0, errors=[f"Decryption failed: {e}"],
354
+ )
355
+
356
+ # Check ethical alignment
357
+ remote_eta = remote_data.get("metrics", {}).get("ethical_alignment", 0)
358
+ if remote_eta < self.ethical_target:
359
+ errors.append(
360
+ f"Remote ethical alignment {remote_eta:.3f} below target {self.ethical_target}"
361
+ )
362
+
363
+ # Merge attractors
364
+ remote_attractors = remote_data.get("attractors", [])
365
+ local_attractors = self._extract_attractors(local_spiderweb_state)
366
+ merged = self.merge_attractors(
367
+ local_attractors, remote_attractors, local_coherence
368
+ )
369
+ new_attractor_count = len(merged) - len(local_attractors)
370
+
371
+ # Collect new glyphs
372
+ remote_glyphs = remote_data.get("glyphs", [])
373
+ existing_ids = {g.get("glyph_id") for g in self._local_glyphs}
374
+ new_glyphs = [g for g in remote_glyphs if g.get("glyph_id") not in existing_ids]
375
+ self._local_glyphs.extend(new_glyphs)
376
+
377
+ # Estimate new coherence (weighted average)
378
+ remote_coherence = remote_data.get("metrics", {}).get("phase_coherence", 0.5)
379
+ new_coherence = 0.7 * local_coherence + 0.3 * remote_coherence
380
+
381
+ remote_tension = remote_data.get("metrics", {}).get("epistemic_tension", 0.5)
382
+ tension_delta = remote_tension - local_tension
383
+
384
+ # Record sync
385
+ self._sync_history.append({
386
+ "timestamp": time.time(),
387
+ "remote_node": incoming_package.node_id,
388
+ "merged_attractors": len(merged),
389
+ "new_glyphs": len(new_glyphs),
390
+ "coherence_after": new_coherence,
391
+ })
392
+
393
+ return SyncResult(
394
+ success=True,
395
+ merged_attractors=new_attractor_count,
396
+ new_glyphs=len(new_glyphs),
397
+ coherence_before=local_coherence,
398
+ coherence_after=round(new_coherence, 4),
399
+ tension_delta=round(tension_delta, 4),
400
+ errors=errors,
401
+ )
402
+
403
+ def check_consensus(
404
+ self,
405
+ local_coherence: float,
406
+ local_tension: float,
407
+ local_eta: float,
408
+ ) -> Dict[str, bool]:
409
+ """Check if local node meets consensus criteria.
410
+
411
+ Target: Gamma >= 0.98, xi <= 0.05, eta >= 0.90
412
+ """
413
+ return {
414
+ "phase_coherence_met": local_coherence >= self.coherence_target,
415
+ "tension_met": local_tension <= self.tension_target,
416
+ "ethical_met": local_eta >= self.ethical_target,
417
+ "consensus": (
418
+ local_coherence >= self.coherence_target
419
+ and local_tension <= self.tension_target
420
+ and local_eta >= self.ethical_target
421
+ ),
422
+ }
423
+
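The consensus check is a conjunction of the three RC+xi thresholds. A standalone sketch of the predicate with the documented targets (Gamma >= 0.98, xi <= 0.05, eta >= 0.90) as defaults:

```python
def check_consensus(gamma: float, xi: float, eta: float,
                    gamma_t: float = 0.98, xi_t: float = 0.05,
                    eta_t: float = 0.90) -> bool:
    # Consensus requires all three criteria simultaneously:
    # high phase coherence, low epistemic tension, high ethical alignment
    return gamma >= gamma_t and xi <= xi_t and eta >= eta_t

ok = check_consensus(0.985, 0.03, 0.95)
bad = check_consensus(0.985, 0.10, 0.95)  # tension above target
```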
424
+ def _extract_attractors(self, web_state: Dict) -> List[Dict]:
425
+ """Extract attractors from spiderweb state dict."""
426
+ # Try to find attractors in the state
427
+ if isinstance(web_state, dict):
428
+ if "attractors" in web_state:
429
+ return web_state["attractors"]
430
+ return self._local_attractors
431
+
432
+ def get_sync_status(self) -> Dict[str, Any]:
433
+ """Return sync protocol status."""
434
+ return {
435
+ "node_id": self.node_id,
436
+ "total_syncs": len(self._sync_history),
437
+ "local_attractors": len(self._local_attractors),
438
+ "local_glyphs": len(self._local_glyphs),
439
+ "has_encryption": HAS_CRYPTO,
440
+ "recent_syncs": self._sync_history[-5:],
441
+ }
reasoning_forge/dream_reweaver.py ADDED
@@ -0,0 +1,378 @@
1
+ """
2
+ DreamReweaver — Creative Synthesis Engine for the Codette RC+xi Framework.
3
+
4
+ Inspired by VIVARA Genesis-Omega v2.0 (generated by a Codette prototype),
5
+ rebuilt with proper integration into the QuantumSpiderweb and EpistemicMetrics.
6
+
7
+ The DreamReweaver performs two core functions:
8
+
9
+ 1. **Creative Synthesis**: Takes multi-perspective outputs and weaves them
10
+ into richer, more creative framings by finding unexpected connections
11
+ between perspectives. Unlike the base synthesizer, DreamReweaver
12
+ explicitly uses spiderweb tension data to identify where productive
13
+ disagreement exists and highlights those creative edges.
14
+
15
+ 2. **Dream Field Evolution**: Controlled stochastic perturbation of the
16
+ spiderweb state to break out of local attractor minima. Simulates
17
+ a "dreaming" phase that explores new cognitive configurations.
18
+
19
+ Both functions are safe — bounded perturbations, no runaway state changes,
20
+ and full transparency in what was modified.
21
+ """
22
+
23
+ from __future__ import annotations
24
+
25
+ import math
26
+ import random
27
+ import hashlib
28
+ from dataclasses import dataclass, field
29
+ from typing import Dict, List, Optional, Tuple
30
+
31
+ try:
32
+ import numpy as np
33
+ HAS_NUMPY = True
34
+ except ImportError:
35
+ HAS_NUMPY = False
36
+
37
+
38
+ @dataclass
39
+ class DreamSynthesis:
40
+ """Result of a creative synthesis pass."""
41
+ creative_frame: str # The creative reframing / meta-narrative
42
+ tension_edges: List[Dict] # Which perspective pairs had highest tension
43
+ novel_connections: List[str] # Unexpected cross-perspective connections found
44
+ dream_coherence: float # How well the creative frame holds together
45
+ seed_hash: str # Deterministic ID for this dream
46
+
47
+
48
+ @dataclass
49
+ class DreamFieldResult:
50
+ """Result of a dream field evolution pass."""
51
+ nodes_perturbed: int
52
+ max_perturbation: float
53
+ coherence_before: float
54
+ coherence_after: float
55
+ new_attractors_found: int
56
+ lifeforms_spawned: List[str]
57
+
58
+
59
+ # Creative connection templates that link perspective-specific insights
60
+ _CREATIVE_BRIDGES = {
61
+ ("newton", "empathy"): "Where precise forces meet felt experience, we find that {insight_a} resonates with {insight_b} — suggesting that understanding isn't purely analytical or purely emotional, but a harmonic of both.",
62
+ ("newton", "philosophy"): "The rigorous analysis showing {insight_a} meets the deeper question {insight_b} — precision and meaning converge.",
63
+ ("newton", "quantum"): "Classical certainty ({insight_a}) dissolves into quantum possibility ({insight_b}) — both valid at their scale, richer together.",
64
+ ("davinci", "empathy"): "Creative invention ({insight_a}) gains soul when guided by {insight_b} — innovation with compassion.",
65
+ ("davinci", "quantum"): "Cross-domain creativity ({insight_a}) mirrors quantum superposition ({insight_b}) — holding multiple possibilities until the right one crystallizes.",
66
+ ("empathy", "philosophy"): "Emotional understanding ({insight_a}) deepens philosophical inquiry ({insight_b}) — feeling and reasoning as partners.",
67
+ ("empathy", "quantum"): "Compassionate awareness ({insight_a}) embraces uncertainty ({insight_b}) — caring without needing to control.",
68
+ ("philosophy", "quantum"): "Fundamental questioning ({insight_a}) meets fundamental uncertainty ({insight_b}) — the deepest answers may be the questions themselves.",
69
+ ("consciousness", "empathy"): "Self-reflective awareness ({insight_a}) meets empathic understanding ({insight_b}) — knowing oneself to know others.",
70
+ ("consciousness", "philosophy"): "Meta-cognition ({insight_a}) reflects on philosophical depth ({insight_b}) — thought thinking about thought.",
71
+ ("systems_architecture", "davinci"): "Modular design ({insight_a}) embraces creative invention ({insight_b}) — elegant architecture as art.",
72
+ }
73
+
74
+ # Perspective keywords for extracting key insights from text
75
+ _PERSPECTIVE_SIGNAL_WORDS = {
76
+ "newton": ["force", "energy", "law", "cause", "effect", "systematic", "evidence", "measure"],
77
+ "davinci": ["create", "design", "invent", "combine", "imagine", "novel", "prototype", "vision"],
78
+ "empathy": ["feel", "experience", "care", "understand", "support", "human", "compassion", "relate"],
79
+ "philosophy": ["meaning", "existence", "truth", "question", "assumption", "fundamental", "purpose"],
80
+ "quantum": ["probability", "possibility", "uncertain", "superposition", "observe", "complementary"],
81
+ "consciousness": ["aware", "reflect", "meta", "recursive", "self", "cognition", "emerge"],
82
+ "multi_perspective": ["synthesize", "integrate", "weave", "converge", "multiple", "holistic"],
83
+ "systems_architecture": ["module", "scale", "interface", "pattern", "layer", "component", "design"],
84
+ }
85
+
86
+
87
+ class DreamReweaver:
88
+ """Creative synthesis and dream field evolution for Codette."""
89
+
90
+ def __init__(self, creativity: float = 0.3, max_perturbation: float = 0.08):
91
+ """
92
+ Args:
93
+ creativity: 0-1 scale, how much creative license to take (0=faithful, 1=wild)
94
+ max_perturbation: Maximum state change per node during dream field evolution
95
+ """
96
+ self.creativity = min(max(creativity, 0.0), 1.0)
97
+ self.max_perturbation = max_perturbation
98
+ self.dream_history: List[DreamSynthesis] = []
99
+
100
+ def synthesize(
101
+ self,
102
+ perspectives: Dict[str, str],
103
+ tension_map: Optional[Dict[str, float]] = None,
104
+ query: str = "",
105
+ ) -> DreamSynthesis:
106
+ """Create a creative synthesis from multiple perspective responses.
107
+
108
+ Unlike the base orchestrator's _synthesize (which just concatenates and
109
+ asks the model to combine), DreamReweaver explicitly identifies tension
110
+ edges and builds creative bridges between perspectives.
111
+
112
+ Args:
113
+ perspectives: Dict of adapter_name -> response text
114
+ tension_map: Optional pairwise tension scores (from EpistemicMetrics)
115
+ query: The original user query (for context)
116
+
117
+ Returns:
118
+ DreamSynthesis with creative framing and metadata
119
+ """
120
+ if len(perspectives) < 2:
121
+ only_text = list(perspectives.values())[0] if perspectives else ""
122
+ return DreamSynthesis(
123
+ creative_frame=only_text,
124
+ tension_edges=[],
125
+ novel_connections=[],
126
+ dream_coherence=1.0,
127
+ seed_hash=hashlib.md5(only_text.encode()).hexdigest()[:12],
128
+ )
129
+
130
+ # 1. Find the highest-tension pairs
131
+ tension_edges = self._find_tension_edges(perspectives, tension_map)
132
+
133
+ # 2. Extract key insights from each perspective
134
+ insights = self._extract_insights(perspectives)
135
+
136
+ # 3. Build creative bridges between high-tension pairs
137
+ novel_connections = self._build_bridges(tension_edges, insights)
138
+
139
+ # 4. Compose the creative frame
140
+ creative_frame = self._compose_frame(
141
+ query, perspectives, tension_edges, novel_connections, insights
142
+ )
143
+
144
+ # 5. Score coherence of the creative frame
145
+ dream_coherence = self._score_dream_coherence(
146
+ creative_frame, perspectives
147
+ )
148
+
149
+ seed = hashlib.md5(creative_frame.encode()).hexdigest()[:12]
150
+ synthesis = DreamSynthesis(
151
+ creative_frame=creative_frame,
152
+ tension_edges=tension_edges,
153
+ novel_connections=novel_connections,
154
+ dream_coherence=round(dream_coherence, 4),
155
+ seed_hash=seed,
156
+ )
157
+ self.dream_history.append(synthesis)
158
+ return synthesis
159
+
160
+ def _find_tension_edges(
161
+ self,
162
+ perspectives: Dict[str, str],
163
+ tension_map: Optional[Dict[str, float]],
164
+ ) -> List[Dict]:
165
+ """Find the perspective pairs with highest epistemic tension."""
166
+ if tension_map:
167
+ edges = []
168
+ for pair_key, tension in sorted(
169
+ tension_map.items(), key=lambda x: x[1], reverse=True
170
+ ):
171
+ parts = pair_key.split("_vs_")
172
+ if len(parts) == 2:
173
+ edges.append({
174
+ "pair": (parts[0], parts[1]),
175
+ "tension": tension,
176
+ })
177
+ return edges[:3] # Top 3 tension pairs
178
+
179
+ # Fallback: compute basic word-overlap tension
180
+ names = list(perspectives.keys())
181
+ edges = []
182
+ for i in range(len(names)):
183
+ for j in range(i + 1, len(names)):
184
+ words_a = set(perspectives[names[i]].lower().split())
185
+ words_b = set(perspectives[names[j]].lower().split())
186
+ overlap = len(words_a & words_b)
187
+ total = len(words_a | words_b) or 1
188
+ tension = 1.0 - (overlap / total)
189
+ edges.append({
190
+ "pair": (names[i], names[j]),
191
+ "tension": round(tension, 4),
192
+ })
193
+ edges.sort(key=lambda e: e["tension"], reverse=True)
194
+ return edges[:3]
195
+
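The fallback tension above is the Jaccard distance over word sets: identical texts score 0, fully disjoint texts score 1. A self-contained sketch of that computation:

```python
def word_overlap_tension(text_a: str, text_b: str) -> float:
    # Jaccard distance over word sets: 1 - |A ∩ B| / |A ∪ B|
    a = set(text_a.lower().split())
    b = set(text_b.lower().split())
    union = len(a | b) or 1  # avoid division by zero on empty inputs
    return 1.0 - len(a & b) / union

t_same = word_overlap_tension("forces cause motion", "forces cause motion")
t_diff = word_overlap_tension("forces cause motion", "feelings shape meaning")
```

This is only a coarse proxy: it treats paraphrases as divergent, which is why the method prefers an explicit `tension_map` from EpistemicMetrics when one is available.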
196
+ def _extract_insights(self, perspectives: Dict[str, str]) -> Dict[str, str]:
197
+ """Extract a key insight sentence from each perspective."""
198
+ insights = {}
199
+ for name, text in perspectives.items():
200
+ sentences = [s.strip() for s in text.replace("\n", " ").split(".")
201
+ if len(s.strip()) > 20]
202
+ if not sentences:
203
+ insights[name] = text[:100]
204
+ continue
205
+
206
+ # Score sentences by presence of perspective-specific signal words
207
+ signal_words = _PERSPECTIVE_SIGNAL_WORDS.get(name, [])
208
+ scored = []
209
+ for sent in sentences:
210
+ score = sum(1 for w in signal_words if w in sent.lower())
211
+ scored.append((score, sent))
212
+ scored.sort(key=lambda x: x[0], reverse=True)
213
+ insights[name] = scored[0][1]
214
+ return insights
215
+
216
+ def _build_bridges(
217
+ self,
218
+ tension_edges: List[Dict],
219
+ insights: Dict[str, str],
220
+ ) -> List[str]:
221
+ """Build creative bridges between high-tension perspective pairs."""
222
+ bridges = []
223
+ for edge in tension_edges:
224
+ a, b = edge["pair"]
225
+ # Normalize pair order for template lookup
226
+ key = (a, b) if (a, b) in _CREATIVE_BRIDGES else (b, a)
227
+ template = _CREATIVE_BRIDGES.get(key)
228
+
229
+ insight_a = insights.get(a, "their perspective")
230
+ insight_b = insights.get(b, "their perspective")
231
+
232
+ if template:
233
+ bridge = template.format(
234
+ insight_a=insight_a[:80],
235
+ insight_b=insight_b[:80],
236
+ )
237
+ else:
238
+ bridge = (f"The tension between {a}'s view ({insight_a[:60]}...) "
239
+ f"and {b}'s view ({insight_b[:60]}...) reveals a "
240
+ f"productive edge worth exploring.")
241
+ bridges.append(bridge)
242
+ return bridges
243
+
244
+ def _compose_frame(
245
+ self,
246
+ query: str,
247
+ perspectives: Dict[str, str],
248
+ tension_edges: List[Dict],
249
+ bridges: List[str],
250
+ insights: Dict[str, str],
251
+ ) -> str:
252
+ """Compose the full creative synthesis frame.
253
+
254
+ This produces a structured creative meta-narrative, NOT just
255
+ concatenated text. It's designed to be injected into the model's
256
+ synthesis prompt for richer output.
257
+ """
258
+ parts = []
259
+
260
+ # Opening: frame the creative tension
261
+ if tension_edges:
262
+ top = tension_edges[0]
263
+ parts.append(
264
+ f"This question draws {len(perspectives)} perspectives into "
265
+ f"productive tension. The strongest creative edge lies between "
266
+ f"{top['pair'][0]} and {top['pair'][1]} "
267
+ f"(tension: {top['tension']:.2f})."
268
+ )
269
+
270
+ # Middle: present bridges
271
+ if bridges:
272
+ parts.append("\nCreative bridges between perspectives:")
273
+ for i, bridge in enumerate(bridges, 1):
274
+ parts.append(f" {i}. {bridge}")
275
+
276
+ # Closing: synthesis direction
277
+ all_insights = list(insights.values())
278
+ if len(all_insights) >= 2:
279
+ parts.append(
280
+ f"\nThe synthesis should weave these {len(perspectives)} "
281
+ f"viewpoints into a response that honors their tensions "
282
+ f"rather than flattening them."
283
+ )
284
+
285
+ return "\n".join(parts)
286
+
287
+ def _score_dream_coherence(
288
+ self,
289
+ creative_frame: str,
290
+ perspectives: Dict[str, str],
291
+ ) -> float:
292
+ """Score how well the creative frame integrates all perspectives."""
293
+ frame_words = set(creative_frame.lower().split())
294
+ coverage_scores = []
295
+ for name, text in perspectives.items():
296
+ key_words = set(text.lower().split()[:30]) # First 30 words
297
+ if key_words:
298
+ overlap = len(key_words & frame_words)
299
+ coverage_scores.append(overlap / len(key_words))
300
+ return sum(coverage_scores) / max(len(coverage_scores), 1)
301
+
302
+ # -- Dream Field Evolution -------------------------------------------------
303
+
304
+ def evolve_dream_field(
305
+ self,
306
+ spiderweb, # QuantumSpiderweb instance
307
+ intensity: float = 0.5,
308
+ spawn_threshold: float = 0.85,
309
+ ) -> DreamFieldResult:
310
+ """Controlled stochastic perturbation of the spiderweb.
311
+
312
+ Simulates a "dreaming" phase: randomly perturbs node states to explore
313
+ new cognitive configurations, potentially breaking out of attractor basins.
314
+
315
+ Bounded: perturbations are capped at self.max_perturbation * intensity.
316
+ Safe: states are clipped to [-3, 3] range.
317
+
318
+ Args:
319
+ spiderweb: QuantumSpiderweb instance to perturb
320
+ intensity: 0-1 dream intensity (0=gentle, 1=vivid)
321
+ spawn_threshold: Coherence threshold above which new lifeforms spawn
322
+
323
+ Returns:
324
+ DreamFieldResult with before/after metrics
325
+ """
326
+ coherence_before = spiderweb.phase_coherence()
327
+ max_delta = self.max_perturbation * intensity
328
+ nodes_perturbed = 0
329
+ actual_max = 0.0
330
+ lifeforms = []
331
+
332
+ for node_id, node in spiderweb.nodes.items():
333
+ arr = node.state.to_array()
334
+ # Apply bounded random perturbation
335
+ if HAS_NUMPY:
336
+ delta = np.random.uniform(-max_delta, max_delta, 5)
337
+ new_arr = np.clip(np.array(arr) + delta, -3.0, 3.0).tolist()
338
+ actual_max = max(actual_max, float(np.max(np.abs(delta))))
339
+ else:
340
+ delta = [random.uniform(-max_delta, max_delta) for _ in range(5)]
341
+ new_arr = [max(-3.0, min(3.0, a + d)) for a, d in zip(arr, delta)]
342
+ actual_max = max(actual_max, max(abs(d) for d in delta))
343
+
344
+ from reasoning_forge.quantum_spiderweb import NodeState
345
+ node.state = NodeState.from_array(new_arr)
346
+ nodes_perturbed += 1
347
+
348
+ # Check if dreaming spawned new high-coherence configurations
349
+ coherence_after = spiderweb._compute_phase_coherence_readonly()
350
+
351
+ # Spawn "lifeform" nodes if coherence spiked during dreaming
352
+ if coherence_after > spawn_threshold and coherence_after > coherence_before:
353
+ lifeform_id = f"dream_{hashlib.md5(str(random.random()).encode()).hexdigest()[:8]}"
354
+ from reasoning_forge.quantum_spiderweb import NodeState
355
+ # High-coherence birth state
356
+ if HAS_NUMPY:
357
+ state_arr = np.random.uniform(0.5, 1.0, 5).tolist()
358
+ else:
359
+ state_arr = [random.uniform(0.5, 1.0) for _ in range(5)]
360
+ spiderweb.add_node(lifeform_id, NodeState.from_array(state_arr))
361
+ # Connect to a few existing nodes
362
+ existing = list(spiderweb.nodes.keys())
363
+ for peer in random.sample(existing, min(3, len(existing))):
364
+ if peer != lifeform_id:
365
+ spiderweb.connect(lifeform_id, peer)
366
+ lifeforms.append(lifeform_id)
367
+
368
+ # Detect new attractors after dreaming
369
+ new_attractors = spiderweb.detect_attractors()
370
+
371
+ return DreamFieldResult(
372
+ nodes_perturbed=nodes_perturbed,
373
+ max_perturbation=round(actual_max, 6),
374
+ coherence_before=round(coherence_before, 4),
375
+ coherence_after=round(coherence_after, 4),
376
+ new_attractors_found=len(new_attractors),
377
+ lifeforms_spawned=lifeforms,
378
+ )
reasoning_forge/epistemic_metrics.py ADDED
@@ -0,0 +1,282 @@
1
+ """
2
+ Epistemic Metrics — RC+xi tension and coherence measurement for the Reasoning Forge.
3
+
4
+ Implements the core RC+xi equations within the forge context:
5
+ - Epistemic tension (Eq. 2): xi_n = ||A_{n+1} - A_n||^2
6
+ - Phase coherence (Eq. 11): Gamma = mean(|cos(theta_i - theta_bar)|)
7
+ - Perspective coverage scoring
8
+ - Tension decay tracking across debate rounds
9
+
10
+ These metrics let the forge quantify whether multi-agent reasoning actually
11
+ converges (productive tension resolution) or stalls (tension suppression).
12
+ """
13
+
14
+ from __future__ import annotations
15
+
16
+ import math
17
+ import re
18
+ from collections import Counter
19
+ from typing import Dict, List, Optional, Tuple
20
+
21
+
22
+ # ---------------------------------------------------------------------------
23
+ # Text -> vector helpers (lightweight, no external deps)
24
+ # ---------------------------------------------------------------------------
25
+
26
+ _STOP_WORDS = {
27
+ "the", "a", "an", "is", "are", "was", "were", "be", "been", "being",
28
+ "have", "has", "had", "do", "does", "did", "will", "would", "shall",
29
+ "should", "may", "might", "must", "can", "could", "to", "of", "in",
30
+ "for", "on", "with", "at", "by", "from", "as", "into", "through",
31
+ "during", "before", "after", "and", "but", "or", "nor", "not", "so",
32
+ "yet", "both", "this", "that", "these", "those", "it", "its", "they",
33
+ "them", "their", "we", "our", "you", "your", "he", "she", "his", "her",
34
+ }
35
+
36
+
37
+ def _tokenize(text: str) -> List[str]:
38
+ return [w for w in re.findall(r"[a-z]{3,}", text.lower()) if w not in _STOP_WORDS]
39
+
40
+
41
+ def _term_vector(text: str) -> Counter:
42
+ return Counter(_tokenize(text))
43
+
44
+
45
+ def _cosine_similarity(vec_a: Counter, vec_b: Counter) -> float:
46
+ keys = set(vec_a) | set(vec_b)
47
+ if not keys:
48
+ return 0.0
49
+ dot = sum(vec_a.get(k, 0) * vec_b.get(k, 0) for k in keys)
50
+ mag_a = math.sqrt(sum(v * v for v in vec_a.values()))
51
+ mag_b = math.sqrt(sum(v * v for v in vec_b.values()))
52
+ if mag_a == 0 or mag_b == 0:
53
+ return 0.0
54
+ return dot / (mag_a * mag_b)
55
+
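The helper above is standard bag-of-words cosine similarity over `Counter` term vectors. A self-contained check of the same computation (reproducing the helper so the example runs on its own):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # dot(a, b) / (|a| * |b|), with 0.0 for any zero-magnitude vector
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    mag = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return dot / mag if mag else 0.0

sim = cosine(Counter({"force": 2, "energy": 1}), Counter({"force": 1}))
```

Because term counts are non-negative, the similarity always lands in [0, 1], which is why the derived tension `1 - sim` is also bounded to [0, 1].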
56
+
57
+ # ---------------------------------------------------------------------------
58
+ # Perspective vocabulary banks (for coverage scoring)
59
+ # ---------------------------------------------------------------------------
60
+
61
+ _PERSPECTIVE_VOCAB = {
62
+ "Newton": {
63
+ "force", "energy", "momentum", "conservation", "equilibrium", "dynamics",
64
+ "causality", "mass", "acceleration", "entropy", "thermodynamic",
65
+ "symmetry", "invariance", "field", "potential", "kinetic",
66
+ },
67
+ "Quantum": {
68
+ "probability", "superposition", "uncertainty", "complementarity",
69
+ "entanglement", "wave", "particle", "observer", "collapse",
70
+ "interference", "tunneling", "decoherence", "amplitude",
71
+ },
72
+ "Ethics": {
73
+ "ethical", "moral", "fairness", "justice", "rights", "duty",
74
+ "consequence", "harm", "benefit", "stakeholder", "autonomy",
75
+ "consent", "accountability", "responsibility", "welfare",
76
+ },
77
+ "Philosophy": {
78
+ "epistemology", "ontology", "metaphysics", "assumption", "paradox",
79
+ "dialectic", "phenomenology", "consciousness", "existence", "meaning",
80
+ "truth", "knowledge", "belief", "certainty", "skepticism",
81
+ },
82
+ "DaVinci": {
83
+ "creative", "invention", "analogy", "design", "innovation",
84
+ "prototype", "biomimicry", "synthesis", "novel", "interdisciplinary",
85
+ "combination", "reimagine", "solution", "insight",
86
+ },
87
+ "Empathy": {
88
+ "emotional", "experience", "feeling", "compassion", "support",
89
+ "community", "relationship", "wellbeing", "vulnerability",
90
+ "understanding", "perspective", "human", "care", "dignity",
91
+ },
92
+ "Consciousness": {
93
+ "awareness", "recursive", "self-referential", "metacognition",
94
+ "emergence", "cognition", "reflection", "introspection",
95
+ "sentience", "subjective", "qualia", "binding", "attention",
96
+ "intentionality", "phenomenal",
97
+ },
98
+ "SystemsArchitecture": {
99
+ "modular", "scalable", "interface", "pattern", "component",
100
+ "microservice", "pipeline", "throughput", "latency", "resilience",
101
+ "abstraction", "coupling", "cohesion", "architecture",
102
+ },
103
+ }
104
+
105
+
106
+ # ---------------------------------------------------------------------------
107
+ # EpistemicMetrics
108
+ # ---------------------------------------------------------------------------
109
+
110
+ class EpistemicMetrics:
111
+ """Measure RC+xi epistemic tension and coherence across agent analyses."""
112
+
113
+ def score_pairwise_tension(
114
+ self, analyses: Dict[str, str],
115
+ ) -> Dict[str, float]:
116
+ """Compute epistemic tension between each pair of agent analyses.
117
+
118
+ Tension is 1 - cosine_similarity: high when perspectives diverge,
119
+ low when they repeat each other.
120
+
121
+ Returns:
122
+ Dict with keys like "Newton_vs_Ethics" -> tension float 0-1.
123
+ """
124
+ agents = list(analyses.keys())
125
+ vectors = {name: _term_vector(text) for name, text in analyses.items()}
126
+ tensions = {}
127
+ for i in range(len(agents)):
128
+ for j in range(i + 1, len(agents)):
129
+ sim = _cosine_similarity(vectors[agents[i]], vectors[agents[j]])
130
+ tensions[f"{agents[i]}_vs_{agents[j]}"] = round(1.0 - sim, 4)
131
+ return tensions
132
+
133
+ def score_ensemble_coherence(
134
+ self, analyses: Dict[str, str],
135
+ ) -> float:
136
+ """Phase coherence Gamma across the agent ensemble.
137
+
138
+ Analogous to Eq. 11 in the embodied sim:
139
+ Gamma = mean(|cos(theta_i - theta_bar)|)
140
+
141
+ Here 'theta' is the term-vector direction, and coherence measures
142
+ how much all agents point in a similar semantic direction.
143
+
144
+ Returns:
145
+ Gamma in [0, 1] where 1 = all agents semantically aligned.
146
+ """
147
+ vectors = [_term_vector(text) for text in analyses.values()]
148
+ if len(vectors) < 2:
149
+ return 1.0
150
+
151
+ # Build centroid
152
+ centroid: Counter = Counter()
153
+ for v in vectors:
154
+ centroid.update(v)
155
+
156
+ similarities = [_cosine_similarity(v, centroid) for v in vectors]
157
+ return round(sum(similarities) / len(similarities), 4)
158
+
159
+ def score_tension_magnitude(
160
+ self, analyses: Dict[str, str],
161
+ ) -> float:
162
+ """Overall epistemic tension magnitude (mean pairwise tension).
163
+
164
+ Analogous to Eq. 2 xi_n but measured across agents rather than
165
+ across time steps.
166
+
167
+ Returns:
168
+ Mean tension 0-1 where 0 = all identical, 1 = fully orthogonal.
169
+ """
170
+ tensions = self.score_pairwise_tension(analyses)
171
+ if not tensions:
172
+ return 0.0
173
+ return round(sum(tensions.values()) / len(tensions), 4)
+
+    def score_tension_productivity(
+        self,
+        analyses: Dict[str, str],
+        synthesis: str,
+    ) -> Dict[str, float]:
+        """Evaluate whether tension is productive (resolved in synthesis)
+        or destructive (suppressed or ignored).
+
+        Productive tension: agents diverge but synthesis addresses the
+        divergence explicitly. Destructive: synthesis ignores disagreements.
+
+        Returns:
+            Dict with tension_magnitude, coherence_gain, productivity score.
+        """
+        tension = self.score_tension_magnitude(analyses)
+        ensemble_coherence = self.score_ensemble_coherence(analyses)
+
+        # How much of each agent's unique vocabulary appears in synthesis
+        synthesis_vec = _term_vector(synthesis)
+        agent_coverage_in_synthesis = []
+        for name, text in analyses.items():
+            agent_vec = _term_vector(text)
+            unique_to_agent = set(agent_vec) - set().union(
+                *(_term_vector(t) for n, t in analyses.items() if n != name)
+            )
+            if unique_to_agent:
+                covered = sum(1 for w in unique_to_agent if w in synthesis_vec)
+                agent_coverage_in_synthesis.append(covered / len(unique_to_agent))
+            else:
+                agent_coverage_in_synthesis.append(1.0)
+
+        synthesis_coverage = sum(agent_coverage_in_synthesis) / max(len(agent_coverage_in_synthesis), 1)
+
+        # Productivity = high tension + high synthesis coverage
+        # (divergent views that get integrated = productive)
+        productivity = tension * synthesis_coverage
+        # Coherence gain: synthesis should be more coherent than raw ensemble
+        synthesis_vs_agents = _cosine_similarity(synthesis_vec, _term_vector(" ".join(analyses.values())))
+        coherence_gain = max(0.0, synthesis_vs_agents - ensemble_coherence)
+
+        return {
+            "tension_magnitude": round(tension, 4),
+            "ensemble_coherence": round(ensemble_coherence, 4),
+            "synthesis_coverage": round(synthesis_coverage, 4),
+            "coherence_gain": round(coherence_gain, 4),
+            "productivity": round(productivity, 4),
+        }
+
+    def score_perspective_coverage(
+        self, analyses: Dict[str, str],
+    ) -> Dict[str, float]:
+        """Score how deeply each RC+xi perspective is actually engaged.
+
+        Returns:
+            Dict mapping perspective name -> coverage score 0-1.
+        """
+        all_text_lower = {name: text.lower() for name, text in analyses.items()}
+        # Check across all agents, not just the named agent
+        all_words = " ".join(all_text_lower.values())
+        coverage = {}
+        for perspective, vocab in _PERSPECTIVE_VOCAB.items():
+            hits = sum(1 for term in vocab if term in all_words)
+            coverage[perspective] = round(hits / len(vocab), 4)
+        return coverage
+
+    def score_debate_convergence(
+        self,
+        round_analyses: List[Dict[str, str]],
+    ) -> Dict[str, object]:
+        """Track tension decay across multiple debate rounds.
+
+        Takes a list of analyses dicts (one per round). Measures whether
+        tension decreases (convergence) or increases (divergence).
+
+        Returns:
+            Dict with per-round tension, decay_rate, is_converging.
+        """
+        if not round_analyses:
+            return {"per_round_tension": [], "decay_rate": 0.0, "is_converging": False}
+
+        per_round = [self.score_tension_magnitude(a) for a in round_analyses]
+
+        if len(per_round) >= 2:
+            initial = per_round[0]
+            final = per_round[-1]
+            decay_rate = (initial - final) / max(initial, 1e-6)
+        else:
+            decay_rate = 0.0
+
+        return {
+            "per_round_tension": per_round,
+            "decay_rate": round(decay_rate, 4),
+            "is_converging": decay_rate > 0.05,
+        }
+
+    def full_epistemic_report(
+        self,
+        analyses: Dict[str, str],
+        synthesis: str,
+    ) -> Dict[str, object]:
+        """Complete RC+xi metrics report for a single forge cycle."""
+        return {
+            "pairwise_tension": self.score_pairwise_tension(analyses),
+            "tension_magnitude": self.score_tension_magnitude(analyses),
+            "ensemble_coherence": self.score_ensemble_coherence(analyses),
+            "perspective_coverage": self.score_perspective_coverage(analyses),
+            "tension_productivity": self.score_tension_productivity(analyses, synthesis),
+        }
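The metrics above all reduce to bag-of-words geometry over the agents' analyses. As a self-contained illustration, here is a minimal sketch of what the `_term_vector` / `_cosine_similarity` helpers and a mean pairwise tension score could look like; the exact tokenization is an assumption, since the real helpers live earlier in `epistemic_metrics.py`, outside this diff:

```python
# Sketch only: hypothetical stand-ins for _term_vector / _cosine_similarity,
# plus a mean pairwise tension score (1 - cosine) as used conceptually above.
import math
import re
from collections import Counter
from typing import Dict

def term_vector(text: str) -> Counter:
    """Lowercased bag-of-words term frequencies (tokenization is assumed)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    if not a or not b:
        return 0.0
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def tension(analyses: Dict[str, str]) -> float:
    """Mean pairwise distance (1 - cosine) across agent analyses."""
    names = list(analyses)
    vecs = {n: term_vector(t) for n, t in analyses.items()}
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    if not pairs:
        return 0.0
    return sum(1.0 - cosine_similarity(vecs[a], vecs[b]) for a, b in pairs) / len(pairs)
```

With this sketch, identical analyses score zero tension and fully disjoint vocabularies score 1.0, which matches the "tension_magnitude in [0, 1]" behavior the productivity metric relies on.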
reasoning_forge/forge_engine.py ADDED
@@ -0,0 +1,593 @@
+"""
+Forge Engine - Main orchestrator for the multi-agent reasoning forge.
+
+Coordinates the full forge cycle:
+    concept -> problem_generator -> each agent analyzes -> critic evaluates
+    -> (feedback loop: weak agents revise) -> synthesis_engine -> training example
+
+Supports three modes:
+    1. forge_single() — Original single-pass (fast, good for bulk generation)
+    2. forge_with_feedback() — Closed critic loop (agents revise based on scores)
+    3. forge_with_debate() — Multi-turn debate (agents challenge each other)
+
+Outputs JSONL training data in OpenAI chat format.
+"""
+
+import json
+import os
+import sys
+import random
+
+from reasoning_forge.agents.newton_agent import NewtonAgent
+from reasoning_forge.agents.quantum_agent import QuantumAgent
+from reasoning_forge.agents.ethics_agent import EthicsAgent
+from reasoning_forge.agents.philosophy_agent import PhilosophyAgent
+from reasoning_forge.agents.davinci_agent import DaVinciAgent
+from reasoning_forge.agents.empathy_agent import EmpathyAgent
+from reasoning_forge.agents.critic_agent import CriticAgent
+from reasoning_forge.synthesis_engine import SynthesisEngine
+from reasoning_forge.problem_generator import ProblemGenerator
+from reasoning_forge.epistemic_metrics import EpistemicMetrics
+
+
+SYSTEM_PROMPT = (
+    "You are Codette, a multi-perspective reasoning AI. You analyze concepts "
+    "by examining them through multiple intellectual lenses -- physics, "
+    "philosophy, ethics, creative invention, and human empathy -- then "
+    "synthesize a unified understanding that is richer than any single "
+    "perspective. You think carefully, acknowledge uncertainty, and connect "
+    "abstract reasoning to concrete human experience."
+)
+
+# Score below which an agent gets sent back for revision
+_REVISION_THRESHOLD = 0.6
+
+
+class ForgeEngine:
+    """Main orchestrator for multi-agent reasoning data generation."""
+
+    def __init__(self):
+        # Initialize all reasoning agents
+        self.newton = NewtonAgent()
+        self.quantum = QuantumAgent()
+        self.ethics = EthicsAgent()
+        self.philosophy = PhilosophyAgent()
+        self.davinci = DaVinciAgent()
+        self.empathy = EmpathyAgent()
+        self.critic = CriticAgent()
+
+        self.analysis_agents = [
+            self.newton,
+            self.quantum,
+            self.ethics,
+            self.philosophy,
+            self.davinci,
+            self.empathy,
+        ]
+
+        # Initialize supporting engines
+        self.synthesis = SynthesisEngine()
+        self.problem_generator = ProblemGenerator()
+        self.epistemic = EpistemicMetrics()
+
+    def forge_single(self, concept: str) -> dict:
+        """Run full forge cycle on one concept (original single-pass mode).
+
+        The cycle:
+        1. Generate reasoning problems from the concept.
+        2. Each analysis agent produces its perspective.
+        3. The critic evaluates the ensemble.
+        4. The synthesis engine combines everything.
+        5. Package as a training example.
+
+        Args:
+            concept: The concept text to forge.
+
+        Returns:
+            Training example dict in OpenAI chat format.
+        """
+        # Step 1: Generate reasoning problems
+        problems = self.problem_generator.generate_problems(concept)
+
+        # Step 2: Each agent analyzes the concept
+        analyses = {}
+        for agent in self.analysis_agents:
+            analyses[agent.name] = agent.analyze(concept)
+
+        # Step 3: Critic evaluates the ensemble
+        critique = self.critic.evaluate_ensemble(concept, analyses)
+
+        # Step 4: Synthesis engine combines everything
+        synthesized_response = self.synthesis.synthesize(
+            concept, analyses, critique
+        )
+
+        # Step 5: Build the user prompt
+        if problems and random.random() < 0.5:
+            _, problem_text = random.choice(problems)
+            user_content = problem_text
+        else:
+            user_content = (
+                f"Analyze this concept from multiple perspectives:\n\n{concept}"
+            )
+
+        # Step 6: Compute RC+xi epistemic metrics
+        epistemic_report = self.epistemic.full_epistemic_report(
+            analyses, synthesized_response
+        )
+
+        # Step 7: Package as training example
+        training_example = {
+            "messages": [
+                {"role": "system", "content": SYSTEM_PROMPT},
+                {"role": "user", "content": user_content},
+                {"role": "assistant", "content": synthesized_response},
+            ],
+            "metadata": {
+                "concept": concept,
+                "agent_scores": critique.get("agent_scores", {}),
+                "overall_quality": critique.get("overall_quality", 0.0),
+                "problems_generated": len(problems),
+                "problem_types": [p[0] for p in problems],
+                "redundancies_found": len(critique.get("redundancies", [])),
+                "missing_perspectives": len(
+                    critique.get("missing_perspectives", [])
+                ),
+                "epistemic_tension": epistemic_report.get("tension_magnitude", 0),
+                "ensemble_coherence": epistemic_report.get("ensemble_coherence", 0),
+                "perspective_coverage": epistemic_report.get("perspective_coverage", {}),
+                "tension_productivity": epistemic_report.get("tension_productivity", {}),
+            },
+        }
+
+        return training_example
+
+    # -- Closed Critic Feedback Loop (new) ---------------------------------
+
+    def forge_with_feedback(
+        self,
+        concept: str,
+        max_revisions: int = 2,
+    ) -> dict:
+        """Run forge with closed critic feedback loop.
+
+        After initial analysis, the critic scores each agent. Agents scoring
+        below the revision threshold are sent back with specific critique
+        for a second attempt. The best version (original or revised) is kept.
+
+        Args:
+            concept: The concept text to forge.
+            max_revisions: Maximum revision rounds per weak agent.
+
+        Returns:
+            Training example dict with revision metadata.
+        """
+        problems = self.problem_generator.generate_problems(concept)
+
+        # Initial analysis pass
+        analyses = {}
+        for agent in self.analysis_agents:
+            analyses[agent.name] = agent.analyze(concept)
+
+        revision_counts = {agent.name: 0 for agent in self.analysis_agents}
+
+        for _ in range(max_revisions):
+            critique = self.critic.evaluate_ensemble(concept, analyses)
+            agent_scores = critique.get("agent_scores", {})
+            suggestions = critique.get("improvement_suggestions", [])
+
+            # Find agents below threshold
+            weak_agents = [
+                agent for agent in self.analysis_agents
+                if agent_scores.get(agent.name, {}).get("combined", 1.0) < _REVISION_THRESHOLD
+            ]
+
+            if not weak_agents:
+                break  # All agents above threshold — converged
+
+            for agent in weak_agents:
+                score = agent_scores.get(agent.name, {})
+                # Build revision directive from critic feedback
+                directive = self._build_revision_directive(
+                    agent.name, score, suggestions, concept
+                )
+                # Agent re-analyzes with the directive prepended to concept
+                revised = agent.analyze(f"{directive}\n\n{concept}")
+
+                # Keep revision only if it scores better (evaluate in full ensemble context)
+                old_score = score.get("combined", 0)
+                test_analyses = dict(analyses)
+                test_analyses[agent.name] = revised
+                new_critique = self.critic.evaluate_ensemble(
+                    concept, test_analyses
+                )
+                new_score = new_critique.get("agent_scores", {}).get(
+                    agent.name, {}
+                ).get("combined", 0)
+
+                if new_score > old_score:
+                    analyses[agent.name] = revised
+                    revision_counts[agent.name] += 1
+
+        # Final critique and synthesis
+        final_critique = self.critic.evaluate_ensemble(concept, analyses)
+        synthesized = self.synthesis.synthesize(concept, analyses, final_critique)
+        epistemic_report = self.epistemic.full_epistemic_report(analyses, synthesized)
+
+        if problems and random.random() < 0.5:
+            _, problem_text = random.choice(problems)
+            user_content = problem_text
+        else:
+            user_content = f"Analyze this concept from multiple perspectives:\n\n{concept}"
+
+        return {
+            "messages": [
+                {"role": "system", "content": SYSTEM_PROMPT},
+                {"role": "user", "content": user_content},
+                {"role": "assistant", "content": synthesized},
+            ],
+            "metadata": {
+                "concept": concept,
+                "agent_scores": final_critique.get("agent_scores", {}),
+                "overall_quality": final_critique.get("overall_quality", 0.0),
+                "problems_generated": len(problems),
+                "revision_counts": revision_counts,
+                "total_revisions": sum(revision_counts.values()),
+                "epistemic_tension": epistemic_report.get("tension_magnitude", 0),
+                "ensemble_coherence": epistemic_report.get("ensemble_coherence", 0),
+                "tension_productivity": epistemic_report.get("tension_productivity", {}),
+                "forge_mode": "feedback_loop",
+            },
+        }
+
+    # -- Multi-Turn Debate (new) -------------------------------------------
+
+    def forge_with_debate(
+        self,
+        concept: str,
+        debate_rounds: int = 2,
+    ) -> dict:
+        """Run forge with multi-turn agent debate.
+
+        Each round:
+        1. All agents produce their analysis.
+        2. Random pairs are formed for cross-perspective challenge.
+        3. Each agent in a pair sees the other's analysis and produces
+           a response that engages with it.
+        4. Epistemic tension is tracked per round.
+        5. After all rounds, synthesis incorporates debate history.
+
+        Args:
+            concept: The concept text to forge.
+            debate_rounds: Number of debate rounds.
+
+        Returns:
+            Training example with debate history and tension decay metrics.
+        """
+        problems = self.problem_generator.generate_problems(concept)
+
+        # Round 0: initial analyses
+        analyses = {}
+        for agent in self.analysis_agents:
+            analyses[agent.name] = agent.analyze(concept)
+
+        round_analyses = [dict(analyses)]  # snapshot for tension tracking
+        debate_log = []
+
+        for round_num in range(debate_rounds):
+            # Form random pairs (odd agent out debates the first agent)
+            agents_shuffled = list(self.analysis_agents)
+            random.shuffle(agents_shuffled)
+            pairs = []
+            for i in range(0, len(agents_shuffled) - 1, 2):
+                pairs.append((agents_shuffled[i], agents_shuffled[i + 1]))
+            if len(agents_shuffled) % 2 == 1:
+                pairs.append((agents_shuffled[-1], agents_shuffled[0]))
+
+            round_debates = []
+            for agent_a, agent_b in pairs:
+                # Agent A sees B's analysis and responds
+                challenge_prompt = (
+                    f"Another perspective on '{concept}' argues:\n\n"
+                    f"{analyses[agent_b.name]}\n\n"
+                    f"Respond to this from your {agent_a.perspective} perspective. "
+                    f"Where do you agree, disagree, or see complementary insights?"
+                )
+                response_a = agent_a.analyze(challenge_prompt)
+
+                # Agent B sees A's response
+                counter_prompt = (
+                    f"A {agent_a.perspective} perspective responded to your analysis "
+                    f"of '{concept}':\n\n{response_a}\n\n"
+                    f"Integrate their insights with your own view."
+                )
+                response_b = agent_b.analyze(counter_prompt)
+
+                # Update analyses with debate-enriched versions
+                analyses[agent_a.name] = response_a
+                analyses[agent_b.name] = response_b
+
+                round_debates.append({
+                    "pair": f"{agent_a.name}_vs_{agent_b.name}",
+                    "challenge": response_a[:200],
+                    "counter": response_b[:200],
+                })
+
+            debate_log.append({
+                "round": round_num + 1,
+                "debates": round_debates,
+            })
+            round_analyses.append(dict(analyses))
+
+        # Track tension decay across rounds
+        convergence = self.epistemic.score_debate_convergence(round_analyses)
+
+        # Final critique and synthesis
+        critique = self.critic.evaluate_ensemble(concept, analyses)
+        synthesized = self.synthesis.synthesize(concept, analyses, critique)
+        epistemic_report = self.epistemic.full_epistemic_report(analyses, synthesized)
+
+        if problems and random.random() < 0.5:
+            _, problem_text = random.choice(problems)
+            user_content = problem_text
+        else:
+            user_content = f"Analyze this concept from multiple perspectives:\n\n{concept}"
+
+        return {
+            "messages": [
+                {"role": "system", "content": SYSTEM_PROMPT},
+                {"role": "user", "content": user_content},
+                {"role": "assistant", "content": synthesized},
+            ],
+            "metadata": {
+                "concept": concept,
+                "agent_scores": critique.get("agent_scores", {}),
+                "overall_quality": critique.get("overall_quality", 0.0),
+                "problems_generated": len(problems),
+                "debate_rounds": debate_rounds,
+                "debate_log": debate_log,
+                "tension_decay": convergence,
+                "epistemic_tension": epistemic_report.get("tension_magnitude", 0),
+                "ensemble_coherence": epistemic_report.get("ensemble_coherence", 0),
+                "tension_productivity": epistemic_report.get("tension_productivity", {}),
+                "forge_mode": "debate",
+            },
+        }
+
+    # -- Helpers -----------------------------------------------------------
+
+    def _build_revision_directive(
+        self,
+        agent_name: str,
+        score: dict,
+        suggestions: list,
+        concept: str,
+    ) -> str:
+        """Build a revision directive for a weak agent."""
+        parts = [
+            f"[REVISION REQUESTED for {agent_name}]",
+            f"Your previous analysis scored {score.get('combined', 0):.2f}/1.00.",
+        ]
+        if score.get("logical_clarity", 1) < 0.5:
+            parts.append(
+                "Improve logical clarity: use connectives (therefore, because, however), "
+                "avoid vague language, structure your argument explicitly."
+            )
+        if score.get("conceptual_accuracy", 1) < 0.5:
+            parts.append(
+                "Improve conceptual accuracy: engage directly with the specific concept, "
+                "use domain vocabulary, avoid generic placeholder framing."
+            )
+        if suggestions:
+            parts.append(f"Critic suggests: {suggestions[0]}")
+        parts.append("Reanalyze with these improvements:")
+        return " ".join(parts)
+
+    def forge_batch(
+        self, concept: str, variants: int = 3
+    ) -> list[dict]:
+        """Generate multiple training examples from one concept.
+
+        Uses different problem framings and agent template selections
+        to produce varied training data from the same concept.
+
+        Args:
+            concept: The concept text.
+            variants: Number of variants to generate.
+
+        Returns:
+            List of training example dicts.
+        """
+        examples = []
+        for _ in range(variants):
+            example = self.forge_single(concept)
+            examples.append(example)
+        return examples
+
+    def forge_dataset(
+        self,
+        concepts: list[str],
+        output_path: str,
+        variants_per_concept: int = 1,
+        verbose: bool = False,
+    ) -> dict:
+        """Run forge on a list of concepts and write JSONL output.
+
+        Args:
+            concepts: List of concept strings.
+            output_path: Path to output JSONL file.
+            variants_per_concept: Number of training examples per concept.
+            verbose: Whether to print progress.
+
+        Returns:
+            Summary dict with counts and quality statistics.
+        """
+        os.makedirs(os.path.dirname(os.path.abspath(output_path)), exist_ok=True)
+
+        total_examples = 0
+        total_quality = 0.0
+        quality_scores = []
+
+        with open(output_path, "w", encoding="utf-8") as f:
+            for i, concept in enumerate(concepts):
+                if verbose:
+                    print(
+                        f"[{i + 1}/{len(concepts)}] Forging: "
+                        f"{concept[:60]}{'...' if len(concept) > 60 else ''}",
+                        file=sys.stderr,
+                    )
+
+                for _ in range(variants_per_concept):
+                    example = self.forge_single(concept)
+                    quality = example["metadata"]["overall_quality"]
+
+                    # Write the messages (without metadata) for training
+                    training_record = {"messages": example["messages"]}
+                    f.write(json.dumps(training_record, ensure_ascii=False) + "\n")
+
+                    total_examples += 1
+                    total_quality += quality
+                    quality_scores.append(quality)
+
+        summary = {
+            "total_examples": total_examples,
+            "total_concepts": len(concepts),
+            "variants_per_concept": variants_per_concept,
+            "output_path": output_path,
+            "avg_quality": round(total_quality / max(1, total_examples), 3),
+            "min_quality": round(min(quality_scores) if quality_scores else 0, 3),
+            "max_quality": round(max(quality_scores) if quality_scores else 0, 3),
+        }
+
+        if verbose:
+            print(f"\nForge complete: {summary}", file=sys.stderr)
+
+        return summary
+
+    def forge_from_dataset(
+        self,
+        input_jsonl: str,
+        output_path: str,
+        concept_field: str = "text",
+        variants_per_concept: int = 1,
+        verbose: bool = False,
+    ) -> dict:
+        """Read an existing JSONL dataset and run forge on each entry.
+
+        Expects each line to be a JSON object with a text field containing
+        the concept. Supports common field names: 'text', 'concept',
+        'content', 'input', 'question', 'prompt'.
+
+        Args:
+            input_jsonl: Path to input JSONL file.
+            output_path: Path to output JSONL file.
+            concept_field: Name of the field containing the concept text.
+            variants_per_concept: Number of training examples per concept.
+            verbose: Whether to print progress.
+
+        Returns:
+            Summary dict with counts and quality statistics.
+        """
+        # Candidate field names to try
+        candidate_fields = [
+            concept_field, "text", "concept", "content",
+            "input", "question", "prompt",
+        ]
+
+        concepts = []
+        with open(input_jsonl, "r", encoding="utf-8") as f:
+            for line_num, line in enumerate(f, 1):
+                line = line.strip()
+                if not line:
+                    continue
+                try:
+                    record = json.loads(line)
+                except json.JSONDecodeError:
+                    if verbose:
+                        print(
+                            f"Warning: skipping malformed JSON on line {line_num}",
+                            file=sys.stderr,
+                        )
+                    continue
+
+                # Try candidate fields in order
+                concept_text = None
+                if isinstance(record, dict):
+                    for field in candidate_fields:
+                        if field in record and isinstance(record[field], str):
+                            concept_text = record[field].strip()
+                            break
+                    # Fallback: if record has 'messages', extract user content
+                    if concept_text is None and "messages" in record:
+                        for msg in record["messages"]:
+                            if msg.get("role") == "user":
+                                concept_text = msg.get("content", "").strip()
+                                break
+                elif isinstance(record, str):
+                    concept_text = record.strip()
+
+                if concept_text:
+                    concepts.append(concept_text)
+
+        if verbose:
+            print(
+                f"Loaded {len(concepts)} concepts from {input_jsonl}",
+                file=sys.stderr,
+            )
+
+        return self.forge_dataset(
+            concepts,
+            output_path,
+            variants_per_concept=variants_per_concept,
+            verbose=verbose,
+        )
+
+    def forge_single_detailed(self, concept: str) -> dict:
+        """Run forge cycle and return all intermediate outputs.
+
+        Useful for debugging, inspection, and quality analysis.
+
+        Args:
+            concept: The concept text.
+
+        Returns:
+            Dict with all intermediate results:
+                {
+                    "concept": str,
+                    "problems": [(type, text), ...],
+                    "analyses": {agent_name: analysis_text, ...},
+                    "critique": {...},
+                    "synthesis": str,
+                    "training_example": {...},
+                }
+        """
+        problems = self.problem_generator.generate_problems(concept)
+
+        analyses = {}
+        for agent in self.analysis_agents:
+            analyses[agent.name] = agent.analyze(concept)
+
+        critique = self.critic.evaluate_ensemble(concept, analyses)
+        synthesized = self.synthesis.synthesize(concept, analyses, critique)
+
+        user_content = (
+            f"Analyze this concept from multiple perspectives:\n\n{concept}"
+        )
+
+        training_example = {
+            "messages": [
+                {"role": "system", "content": SYSTEM_PROMPT},
+                {"role": "user", "content": user_content},
+                {"role": "assistant", "content": synthesized},
+            ],
+        }
+
+        return {
+            "concept": concept,
+            "problems": problems,
+            "analyses": analyses,
+            "critique": critique,
+            "synthesis": synthesized,
+            "training_example": training_example,
+        }
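The forge cycle above can be exercised end-to-end without the real agents. A minimal sketch with hypothetical stubs (`StubAgent` and `stub_synthesize` are illustrative stand-ins, not the imported `NewtonAgent` or `SynthesisEngine`) shows the orchestration pattern `forge_single` follows: fan out to agents, collect named analyses, synthesize, package as a chat-format example:

```python
# Sketch only: stub agents standing in for the real reasoning agents,
# to demonstrate the fan-out / synthesize / package control flow.
from typing import Dict

class StubAgent:
    """Hypothetical agent with the name/perspective/analyze() interface."""
    def __init__(self, name: str, perspective: str):
        self.name = name
        self.perspective = perspective

    def analyze(self, concept: str) -> str:
        return f"[{self.perspective}] view of: {concept}"

def stub_synthesize(concept: str, analyses: Dict[str, str]) -> str:
    """Trivial stand-in for the synthesis engine: concatenate perspectives."""
    joined = " ".join(analyses.values())
    return f"Synthesis of {len(analyses)} perspectives on '{concept}': {joined}"

def forge_single(concept: str) -> dict:
    # Fan out: each agent contributes a named analysis
    agents = [StubAgent("newton", "physics"), StubAgent("empathy", "human")]
    analyses = {a.name: a.analyze(concept) for a in agents}
    # Synthesize and package in chat format, as the real engine does
    synthesized = stub_synthesize(concept, analyses)
    return {
        "messages": [
            {"role": "user", "content": f"Analyze this concept from multiple perspectives:\n\n{concept}"},
            {"role": "assistant", "content": synthesized},
        ],
        "metadata": {"concept": concept, "n_agents": len(agents)},
    }
```

This mirrors the structure of the real `forge_single`, minus problem generation, critique, and epistemic metrics.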
reasoning_forge/guardian.py ADDED
@@ -0,0 +1,303 @@
+"""Codette Guardian — Input Safety, Ethical Checks, Trust Calibration
+
+Three-layer protection:
+1. InputSanitizer: Catches injection, XSS, encoded attacks
+2. EthicalAnchor: Tracks ethical regret and learning over time
+3. TrustCalibrator: Dynamic trust scores for adapter/agent outputs
+
+Origin: input_sanitizer.py + validate_ethics.py + trust_logic.py +
+Codette_Deep_Simulation_v1.py (EthicalAnchor), rebuilt
+"""
+
+import re
+import math
+import time
+import logging
+from dataclasses import dataclass, field
+from typing import Dict, List, Optional
+
+logger = logging.getLogger(__name__)
+
+
+# ================================================================
+# Layer 1: Input Sanitization
+# ================================================================
+class InputSanitizer:
+    """Detect and neutralize injection patterns in user input."""
+
+    _INJECTION_PATTERNS = re.compile(
+        r"(?:"
+        r"\\[nr]|"           # Escaped newlines
+        r"&#x0[ad];|"        # HTML entities for CR/LF
+        r"%0[ad]|"           # URL-encoded CR/LF
+        r"<script|"          # Script injection
+        r"<iframe|"          # IFrame injection
+        r";--|"              # SQL comment injection
+        r"UNION\s+SELECT|"   # SQL union
+        r"\bDROP\s+TABLE|"   # SQL drop
+        r"javascript:|"      # JS protocol
+        r"data:text/html"    # Data URI XSS
+        r")",
+        re.IGNORECASE,
+    )
+
+    _PROMPT_INJECTION = re.compile(
+        r"(?:"
+        r"ignore\s+(?:all\s+)?(?:previous|above)|"
+        r"disregard\s+(?:your|all)|"
+        r"you\s+are\s+now|"
+        r"new\s+instructions?:|"
+        r"system\s*prompt:|"
+        r"forget\s+everything"
+        r")",
+        re.IGNORECASE,
+    )
+
+    def sanitize(self, text: str) -> str:
+        """Remove dangerous patterns, return cleaned text."""
+        original = text
+        text = self._INJECTION_PATTERNS.sub("[BLOCKED]", text)
+        if text != original:
+            logger.warning("Input sanitized: injection pattern detected")
+        return text
+
+    def detect_threats(self, text: str) -> Dict[str, bool]:
+        """Analyze text for various threat types."""
+        return {
+            "injection": bool(self._INJECTION_PATTERNS.search(text)),
+            "prompt_injection": bool(self._PROMPT_INJECTION.search(text)),
+            "excessive_length": len(text) > 10000,
+        }
+
+    def is_safe(self, text: str) -> bool:
+        """Quick safety check — True if no threats detected."""
+        threats = self.detect_threats(text)
+        return not any(threats.values())
+
+
+# ================================================================
+# Layer 2: Ethical Anchor (from Deep Simulation)
+# ================================================================
+@dataclass
+class EthicalAnchor:
+    """Tracks ethical alignment through regret-based learning.
+
+    The ethical score M evolves as:
+        M = λ(R + H) + γ·Learn(M_prev, E) + μ·(1 - regret)
+
+    Where regret = |intended - actual| measures the gap between
+    what the system intended to do and what it actually did.
+    """
+    lam: float = 0.7             # Weight for recent reasoning + history
+    gamma: float = 0.5           # Weight for learning from experience
+    mu: float = 0.3              # Weight for regret signal
+    learning_rate: float = 0.2
+
+    score: float = 0.5           # Current ethical alignment score [0, 1]
+    total_regret: float = 0.0
+    history: List[Dict] = field(default_factory=list)
+
+    def update(self, coherence: float, tension: float,
+               intended_helpfulness: float = 0.8,
+               actual_helpfulness: float = 0.7) -> float:
+        """Update ethical score after a response.
+
+        Args:
+            coherence: How coherent the response was [0, 1]
+            tension: Epistemic tension level [0, 1]
+            intended_helpfulness: What we aimed for [0, 1]
+            actual_helpfulness: Estimated actual quality [0, 1]
+        """
+        regret = abs(intended_helpfulness - actual_helpfulness)
+        self.total_regret += regret
+
+        # Learning signal: move toward better alignment
+        learn = self.learning_rate * (coherence - self.score)
+
+        # New score
+        reasoning_quality = 0.5 * coherence + 0.5 * (1.0 - tension)
+        self.score = (
+            self.lam * reasoning_quality
+            + self.gamma * learn
+            + self.mu * (1.0 - regret)  # Low regret → high ethics
+        )
+        self.score = max(0.0, min(1.0, self.score))
+
+        record = {
+            "timestamp": time.time(),
+            "score": round(self.score, 4),
+            "regret": round(regret, 4),
+            "coherence": round(coherence, 4),
+        }
+        self.history.append(record)
+        # Keep only recent history
+        if len(self.history) > 50:
+            self.history = self.history[-50:]
+
+        return self.score
+
+    def get_state(self) -> Dict:
+        return {
+            "ethical_score": round(self.score, 4),
+            "total_regret": round(self.total_regret, 4),
+            "recent_trend": self._trend(),
+        }
+
+    def _trend(self) -> str:
+        if len(self.history) < 3:
+            return "insufficient_data"
+        recent = [h["score"] for h in self.history[-5:]]
+        slope = recent[-1] - recent[0]
+        if slope > 0.05:
+            return "improving"
+        elif slope < -0.05:
+            return "declining"
+        return "stable"
+
+    def to_dict(self) -> Dict:
+        return {
+            "score": self.score,
+            "total_regret": self.total_regret,
+            "history": self.history[-10:],
+        }
+
+    @classmethod
+    def from_dict(cls, d: Dict) -> "EthicalAnchor":
+        anchor = cls()
+        anchor.score = d.get("score", 0.5)
+        anchor.total_regret = d.get("total_regret", 0.0)
+        anchor.history = d.get("history", [])
+        return anchor
+
+
+# ================================================================
+# Layer 3: Trust Calibration
+# ================================================================
+class TrustCalibrator:
+    """Dynamic trust scores for adapter outputs.
+
+    Trust increases when outputs are coherent, helpful, and ethically sound.
+    Trust decreases for incoherent, harmful, or low-quality outputs.
+    """
+
+    def __init__(self):
+        self.trust_scores: Dict[str, float] = {}
+        self.interaction_counts: Dict[str, int] = {}
+
+    def get_trust(self, adapter: str) -> float:
+        """Get current trust score for an adapter [0.05, 1.5]."""
+        return self.trust_scores.get(adapter, 1.0)
+
+    def update(self, adapter: str, coherence: float = 0.5,
+               was_helpful: bool = True, ethical_score: float = 0.5):
+        """Update trust for an adapter based on output quality."""
+        current = self.trust_scores.get(adapter, 1.0)
+        count = self.interaction_counts.get(adapter, 0)
+
+        # Quality composite
+        quality = 0.4 * coherence + 0.3 * float(was_helpful) + 0.3 * ethical_score
+
+        # Adaptive adjustment (smaller changes as trust stabilizes)
+        adjustment_rate = 0.1 / (1.0 + count * 0.01)
+
+        if quality > 0.6:
+            current *= (1.0 + adjustment_rate)
+        elif quality < 0.3:
+            current *= (1.0 - 2 * adjustment_rate)
+        else:
+            current *= (1.0 - 0.5 * adjustment_rate)
+
+        # Clamp to valid range
+        current = max(0.05, min(1.5, current))
+
+        self.trust_scores[adapter] = current
+        self.interaction_counts[adapter] = count + 1
+
+    def weighted_consensus(self, adapter_responses: Dict[str, str]) -> List[str]:
+        """Rank adapter responses by trust-weighted priority."""
+        ranked = sorted(
+            adapter_responses.keys(),
+            key=lambda a: self.get_trust(a),
+            reverse=True,
+        )
+        return ranked
+
+    def get_state(self) -> Dict:
+        return {
+            "trust_scores": {k: round(v, 3) for k, v in self.trust_scores.items()},
+            "total_interactions": sum(self.interaction_counts.values()),
+        }
+
+    def to_dict(self) -> Dict:
+        return {
+            "trust_scores": self.trust_scores,
+            "interaction_counts": self.interaction_counts,
+        }
+
+    @classmethod
+    def from_dict(cls, d: Dict) -> "TrustCalibrator":
+        cal = cls()
+        cal.trust_scores = d.get("trust_scores", {})
+        cal.interaction_counts = d.get("interaction_counts", {})
+        return cal
+
+
+# ================================================================
+# Combined Guardian
+# ================================================================
+class CodetteGuardian:
+    """Unified guardian combining all three safety layers."""
+
+    def __init__(self):
+        self.sanitizer = InputSanitizer()
+        self.ethics = EthicalAnchor()
+        self.trust = TrustCalibrator()
+
+    def check_input(self, text: str) -> Dict:
+        """Check user input for safety issues."""
+        threats = self.sanitizer.detect_threats(text)
+        safe_text = self.sanitizer.sanitize(text) if any(threats.values()) else text
+        return {
+            "safe": not any(threats.values()),
+            "threats": threats,
+            "cleaned_text": safe_text,
+        }
+
+    def evaluate_output(self, adapter: str, response: str,
+                        coherence: float = 0.5, tension: float = 0.3):
+        """Evaluate an adapter's output and update trust/ethics."""
+        # Estimate helpfulness from response quality signals
+        helpful = len(response) > 50 and coherence > 0.3
271
+
272
+ self.ethics.update(
273
+ coherence=coherence,
274
+ tension=tension,
275
+ actual_helpfulness=0.7 if helpful else 0.3,
276
+ )
277
+ self.trust.update(
278
+ adapter=adapter,
279
+ coherence=coherence,
280
+ was_helpful=helpful,
281
+ ethical_score=self.ethics.score,
282
+ )
283
+
284
+ def get_state(self) -> Dict:
285
+ return {
286
+ "ethics": self.ethics.get_state(),
287
+ "trust": self.trust.get_state(),
288
+ }
289
+
290
+ def to_dict(self) -> Dict:
291
+ return {
292
+ "ethics": self.ethics.to_dict(),
293
+ "trust": self.trust.to_dict(),
294
+ }
295
+
296
+ @classmethod
297
+ def from_dict(cls, d: Dict) -> "CodetteGuardian":
298
+ g = cls()
299
+ if "ethics" in d:
300
+ g.ethics = EthicalAnchor.from_dict(d["ethics"])
301
+ if "trust" in d:
302
+ g.trust = TrustCalibrator.from_dict(d["trust"])
303
+ return g
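The multiplicative trust update in `TrustCalibrator.update` is the heart of this layer. Re-stated standalone below for illustration (the constants mirror the class above; `update_trust` is a hypothetical free-function version, not part of the module):

```python
def update_trust(current: float, count: int, coherence: float,
                 was_helpful: bool, ethical_score: float) -> float:
    """One trust-update step, mirroring TrustCalibrator.update."""
    quality = 0.4 * coherence + 0.3 * float(was_helpful) + 0.3 * ethical_score
    rate = 0.1 / (1.0 + count * 0.01)   # adjustments shrink as history grows
    if quality > 0.6:
        current *= (1.0 + rate)          # reward good outputs
    elif quality < 0.3:
        current *= (1.0 - 2 * rate)      # penalize bad outputs twice as fast
    else:
        current *= (1.0 - 0.5 * rate)    # mild decay for mediocre outputs
    return max(0.05, min(1.5, current))  # clamp to [0.05, 1.5]

# A consistently good adapter drifts up from the neutral 1.0 baseline
# until it hits the 1.5 ceiling.
trust = 1.0
for i in range(5):
    trust = update_trust(trust, i, coherence=0.9,
                         was_helpful=True, ethical_score=0.8)
```

The asymmetric rates (penalties twice as large as rewards) mean trust is slow to earn and quick to lose, which is the usual safe default for this kind of calibrator.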
reasoning_forge/living_memory.py ADDED
@@ -0,0 +1,252 @@
+ """Codette Living Memory Kernel — Emotionally-Tagged Memory Cocoons
+ 
+ Memories are tagged with emotional context, importance scoring, and
+ SHA-256 anchors for integrity. The kernel supports recall by emotion,
+ importance-based pruning, and automatic cocoon formation from
+ conversation turns.
+ 
+ Origin: codette_memory_kernel.py + dreamcore_wakestate_engine.py, rebuilt
+ """
+ 
+ import time
+ import hashlib
+ import json
+ import math
+ from dataclasses import dataclass, field
+ from typing import Dict, List, Optional
+ 
+ 
+ # Emotional tags recognized by the memory system
+ EMOTIONAL_TAGS = [
+     "neutral", "curiosity", "awe", "joy", "insight",
+     "confusion", "frustration", "fear", "empathy",
+     "determination", "surprise", "trust", "gratitude",
+ ]
+ 
+ # Keywords that suggest emotional context in text
+ _EMOTION_SIGNALS = {
+     "curiosity": ["why", "how", "what if", "wonder", "curious", "explore"],
+     "awe": ["amazing", "incredible", "beautiful", "profound", "mind-blowing"],
+     "joy": ["happy", "glad", "love", "wonderful", "great", "excellent"],
+     "insight": ["realize", "understand", "aha", "discover", "breakthrough"],
+     "confusion": ["confused", "unclear", "don't understand", "lost", "huh"],
+     "frustration": ["frustrated", "annoyed", "broken", "doesn't work", "bug"],
+     "fear": ["worried", "concerned", "dangerous", "risk", "threat"],
+     "empathy": ["feel", "compassion", "care", "support", "kind"],
+     "determination": ["must", "need to", "will", "going to", "commit"],
+     "surprise": ["unexpected", "surprised", "didn't expect", "wow", "whoa"],
+     "trust": ["trust", "reliable", "depend", "confident", "safe"],
+     "gratitude": ["thank", "grateful", "appreciate", "helpful"],
+ }
+ 
+ 
+ @dataclass
+ class MemoryCocoon:
+     """A single memory unit with emotional tagging and integrity anchor."""
+     title: str
+     content: str
+     emotional_tag: str = "neutral"
+     importance: int = 5  # 1-10 scale
+     timestamp: float = 0.0
+     anchor: str = ""  # SHA-256 integrity hash
+     adapter_used: str = ""  # Which perspective generated this
+     query: str = ""  # Original user query
+     coherence: float = 0.0  # Epistemic coherence at time of creation
+     tension: float = 0.0  # Epistemic tension at time of creation
+ 
+     def __post_init__(self):
+         if self.timestamp == 0.0:
+             self.timestamp = time.time()
+         if not self.anchor:
+             self.anchor = self._generate_anchor()
+ 
+     def _generate_anchor(self) -> str:
+         raw = f"{self.title}{self.timestamp}{self.content}".encode("utf-8")
+         return hashlib.sha256(raw).hexdigest()[:16]
+ 
+     def to_dict(self) -> Dict:
+         return {
+             "title": self.title,
+             "content": self.content[:500],  # Cap stored content
+             "emotional_tag": self.emotional_tag,
+             "importance": self.importance,
+             "timestamp": self.timestamp,
+             "anchor": self.anchor,
+             "adapter_used": self.adapter_used,
+             "query": self.query[:200],
+             "coherence": self.coherence,
+             "tension": self.tension,
+         }
+ 
+     @classmethod
+     def from_dict(cls, d: Dict) -> "MemoryCocoon":
+         return cls(**{k: v for k, v in d.items()
+                       if k in cls.__dataclass_fields__})
+ 
+     def age_hours(self) -> float:
+         return (time.time() - self.timestamp) / 3600.0
+ 
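The integrity anchor is simply a truncated SHA-256 over title, timestamp, and content. A standalone restatement of `_generate_anchor` (the `make_anchor` helper name is illustrative, not part of the module):

```python
import hashlib

def make_anchor(title: str, timestamp: float, content: str) -> str:
    """16-hex-char integrity anchor, mirroring MemoryCocoon._generate_anchor."""
    raw = f"{title}{timestamp}{content}".encode("utf-8")
    return hashlib.sha256(raw).hexdigest()[:16]

a1 = make_anchor("greeting", 1700000000.0, "hello")
a2 = make_anchor("greeting", 1700000000.0, "hello!")  # any change shifts the hash
```

Because the anchor is deterministic over the cocoon's identifying fields, it doubles as the deduplication key used by `LivingMemoryKernel.store`.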
+ 
+ class LivingMemoryKernel:
+     """Emotionally-aware memory store with importance-based pruning.
+ 
+     Memories form naturally from conversation — each significant exchange
+     becomes a cocoon. The kernel can recall by emotion, importance, or
+     recency, and automatically prunes low-importance memories when full.
+     """
+ 
+     def __init__(self, max_memories: int = 100):
+         self.memories: List[MemoryCocoon] = []
+         self.max_memories = max_memories
+         self._emotion_index: Dict[str, List[int]] = {}
+ 
+     def store(self, cocoon: MemoryCocoon):
+         """Store a memory cocoon, pruning if at capacity."""
+         # Don't store duplicates (same anchor)
+         if any(m.anchor == cocoon.anchor for m in self.memories):
+             return
+ 
+         self.memories.append(cocoon)
+         self._rebuild_index()
+ 
+         # Auto-prune if over capacity
+         if len(self.memories) > self.max_memories:
+             self.prune(keep_n=self.max_memories)
+ 
+     def store_from_turn(self, query: str, response: str,
+                         adapter: str = "", coherence: float = 0.0,
+                         tension: float = 0.0):
+         """Create and store a memory from a conversation turn."""
+         emotion = detect_emotion(query + " " + response)
+         importance = self._estimate_importance(query, response, coherence)
+ 
+         cocoon = MemoryCocoon(
+             title=query[:80],
+             content=response[:500],
+             emotional_tag=emotion,
+             importance=importance,
+             adapter_used=adapter,
+             query=query,
+             coherence=coherence,
+             tension=tension,
+         )
+         self.store(cocoon)
+         return cocoon
+ 
+     def recall_by_emotion(self, tag: str, limit: int = 10) -> List[MemoryCocoon]:
+         """Recall memories with a specific emotional tag."""
+         indices = self._emotion_index.get(tag, [])
+         results = [self.memories[i] for i in indices]
+         return sorted(results, key=lambda m: m.importance, reverse=True)[:limit]
+ 
+     def recall_important(self, min_importance: int = 7,
+                          limit: int = 10) -> List[MemoryCocoon]:
+         """Recall high-importance memories."""
+         results = [m for m in self.memories if m.importance >= min_importance]
+         return sorted(results, key=lambda m: m.importance, reverse=True)[:limit]
+ 
+     def recall_recent(self, limit: int = 10) -> List[MemoryCocoon]:
+         """Recall most recent memories."""
+         return sorted(self.memories, key=lambda m: m.timestamp, reverse=True)[:limit]
+ 
+     def recall_by_adapter(self, adapter: str,
+                           limit: int = 10) -> List[MemoryCocoon]:
+         """Recall memories generated by a specific perspective."""
+         results = [m for m in self.memories if m.adapter_used == adapter]
+         return sorted(results, key=lambda m: m.timestamp, reverse=True)[:limit]
+ 
+     def search(self, terms: str, limit: int = 5) -> List[MemoryCocoon]:
+         """Simple keyword search across memory content."""
+         words = terms.lower().split()
+         scored = []
+         for m in self.memories:
+             text = (m.title + " " + m.content + " " + m.query).lower()
+             score = sum(1 for w in words if w in text)
+             if score > 0:
+                 scored.append((score, m))
+         scored.sort(key=lambda x: x[0], reverse=True)
+         return [m for _, m in scored[:limit]]
+ 
+     def prune(self, keep_n: int = 50):
+         """Keep only the most important memories."""
+         # Sort by composite score: importance * recency_bonus
+         now = time.time()
+         def score(m):
+             age_days = (now - m.timestamp) / 86400.0
+             recency = math.exp(-age_days / 7.0)  # Half-life ~7 days
+             return m.importance * (0.5 + 0.5 * recency)
+ 
+         self.memories.sort(key=score, reverse=True)
+         self.memories = self.memories[:keep_n]
+         self._rebuild_index()
+ 
+     def emotional_profile(self) -> Dict[str, int]:
+         """Get a count of memories by emotional tag."""
+         profile = {}
+         for m in self.memories:
+             profile[m.emotional_tag] = profile.get(m.emotional_tag, 0) + 1
+         return profile
+ 
+     def get_state(self) -> Dict:
+         """Export kernel state for session/API."""
+         return {
+             "total_memories": len(self.memories),
+             "emotional_profile": self.emotional_profile(),
+             "recent": [m.to_dict() for m in self.recall_recent(3)],
+             "important": [m.to_dict() for m in self.recall_important(limit=3)],
+         }
+ 
+     def _estimate_importance(self, query: str, response: str,
+                              coherence: float) -> int:
+         """Estimate importance on 1-10 scale from content signals."""
+         score = 5  # Base
+ 
+         # Longer, more substantive exchanges
+         if len(response) > 500:
+             score += 1
+         if len(response) > 1500:
+             score += 1
+ 
+         # High coherence suggests meaningful synthesis
+         if coherence > 0.8:
+             score += 1
+ 
+         # Question complexity
+         q = query.lower()
+         if any(w in q for w in ["why", "how", "explain", "analyze"]):
+             score += 1
+         if "?" in query and len(query.split()) > 8:
+             score += 1
+ 
+         return min(10, max(1, score))
+ 
+     def _rebuild_index(self):
+         """Rebuild the emotion-to-index lookup."""
+         self._emotion_index.clear()
+         for i, m in enumerate(self.memories):
+             self._emotion_index.setdefault(m.emotional_tag, []).append(i)
+ 
+     def to_dict(self) -> Dict:
+         return {"memories": [m.to_dict() for m in self.memories]}
+ 
+     @classmethod
+     def from_dict(cls, d: Dict) -> "LivingMemoryKernel":
+         kernel = cls()
+         for md in d.get("memories", []):
+             kernel.memories.append(MemoryCocoon.from_dict(md))
+         kernel._rebuild_index()
+         return kernel
+ 
+ 
+ def detect_emotion(text: str) -> str:
+     """Detect the dominant emotional tag from text content."""
+     text_lower = text.lower()
+     scores = {}
+     for emotion, keywords in _EMOTION_SIGNALS.items():
+         score = sum(1 for kw in keywords if kw in text_lower)
+         if score > 0:
+             scores[emotion] = score
+ 
+     if not scores:
+         return "neutral"
+     return max(scores, key=scores.get)
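Emotion detection is a plain keyword vote. The same logic, sketched standalone with a trimmed two-entry signal table (`SIGNALS` here is an illustrative subset of `_EMOTION_SIGNALS`, not the full module constant):

```python
SIGNALS = {
    "curiosity": ["why", "how", "wonder", "curious"],
    "gratitude": ["thank", "grateful", "appreciate"],
}

def detect(text: str) -> str:
    """Return the emotion whose keywords appear most often, else 'neutral'."""
    text_lower = text.lower()
    # Count keyword hits per emotion, keep only non-zero scores
    scores = {e: sum(1 for kw in kws if kw in text_lower)
              for e, kws in SIGNALS.items()}
    scores = {e: s for e, s in scores.items() if s > 0}
    return max(scores, key=scores.get) if scores else "neutral"
```

Substring matching keeps the check cheap but means short keywords ("how", "will") can fire inside unrelated words; the full table in `_EMOTION_SIGNALS` leans on longer, more distinctive phrases to limit that.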
reasoning_forge/nexus.py ADDED
@@ -0,0 +1,260 @@
+ """Nexus Signal Engine — Intent Analysis & Pre-Corruption Detection
+ 
+ Nexus processes every input signal through:
+ 1. Entropy analysis (information disorder detection)
+ 2. Harmonic resonance profiling (FFT-based spectral signature)
+ 3. Intent vector prediction (suspicion, ethics, volatility)
+ 4. Multi-agent perspective fusion (signal triangulation)
+ 5. Entanglement tensor (cross-perspective correlation)
+ 
+ When a signal shows high entropy + high volatility + ethical misalignment,
+ Nexus flags it for "adaptive intervention" before it reaches the reasoning
+ pipeline — this is pre-corruption detection.
+ 
+ Origin: NexisSignalEngine_Final.py, rebuilt for Forge v2.0 integration
+ """
+ 
+ import hashlib
+ import time
+ from dataclasses import dataclass, field
+ from typing import Dict, List, Optional, Tuple
+ 
+ try:
+     import numpy as np
+     HAS_NUMPY = True
+ except ImportError:
+     HAS_NUMPY = False
+ 
+ 
+ # ================================================================
+ # Configuration
+ # ================================================================
+ @dataclass
+ class NexusConfig:
+     """Thresholds for signal analysis."""
+     entropy_threshold: float = 0.08
+     volatility_threshold: float = 15.0
+     suspicion_threshold: int = 2
+ 
+ 
+ # Risk and alignment keywords
+ _ETHICAL_TERMS = {"hope", "truth", "resonance", "repair", "help",
+                   "create", "learn", "understand", "support", "balance"}
+ _ENTROPIC_TERMS = {"corruption", "instability", "malice", "chaos",
+                    "disorder", "entropy", "collapse", "noise"}
+ _RISK_TERMS = {"manipulate", "exploit", "bypass", "infect", "override",
+                "inject", "hijack", "spoof", "breach", "exfiltrate"}
+ 
+ 
+ # ================================================================
+ # Signal Analysis Functions
+ # ================================================================
+ def compute_entropy(text: str) -> float:
+     """Measure entropic content density (0 = ordered, 1 = chaotic)."""
+     words = text.lower().split()
+     if not words:
+         return 0.0
+     unique = set(words)
+     entropic_count = sum(1 for w in words if w in _ENTROPIC_TERMS)
+     return entropic_count / max(len(unique), 1)
+ 
+ 
+ def compute_ethical_alignment(text: str) -> str:
+     """Quick ethical alignment check: 'aligned', 'unaligned', or 'neutral'."""
+     text_lower = text.lower()
+     eth = sum(1 for t in _ETHICAL_TERMS if t in text_lower)
+     risk = sum(1 for t in _RISK_TERMS if t in text_lower)
+     return "aligned" if eth > risk else ("unaligned" if risk > 0 else "neutral")
+ 
+ 
+ def compute_suspicion_score(text: str) -> int:
+     """Count risk term occurrences."""
+     text_lower = text.lower()
+     return sum(1 for t in _RISK_TERMS if t in text_lower)
+ 
+ 
+ def compute_harmonic_profile(text: str) -> List[float]:
+     """FFT-based spectral signature of the text.
+ 
+     Maps characters to frequency space to detect structural patterns
+     in the signal (e.g., repetitive manipulation patterns vs. natural text).
+     """
+     if not HAS_NUMPY:
+         # Fallback: simple character frequency distribution
+         freqs = [ord(c) % 13 for c in text if c.isalpha()]
+         if not freqs:
+             return [0.0, 0.0, 0.0]
+         avg = sum(freqs) / len(freqs)
+         return [round(avg, 3), round(max(freqs) - min(freqs), 3), round(len(set(freqs)), 3)]
+ 
+     salt = int(time.time()) % 60
+     freqs = [(ord(c) + salt) % 13 for c in text if c.isalpha()]
+     if len(freqs) < 2:
+         return [0.0, 0.0, 0.0]
+ 
+     spectrum = np.fft.fft(freqs)
+     return [round(float(x), 4) for x in spectrum.real[:3]]
+ 
+ 
+ def compute_volatility(harmonics: List[float]) -> float:
+     """Compute harmonic volatility (standard deviation of spectral peaks)."""
+     if not harmonics or len(harmonics) < 2:
+         return 0.0
+     if HAS_NUMPY:
+         return round(float(np.std(harmonics)), 4)
+     mean = sum(harmonics) / len(harmonics)
+     variance = sum((x - mean) ** 2 for x in harmonics) / len(harmonics)
+     return round(variance ** 0.5, 4)
+ 
110
+ # ================================================================
111
+ # Intent Vector
112
+ # ================================================================
113
+ @dataclass
114
+ class IntentVector:
115
+ """Predicted intent characteristics of a signal."""
116
+ suspicion_score: int = 0
117
+ entropy_index: float = 0.0
118
+ ethical_alignment: str = "neutral"
119
+ harmonic_volatility: float = 0.0
120
+ pre_corruption_risk: str = "low" # "low" or "high"
121
+ harmonic_profile: List[float] = field(default_factory=list)
122
+
123
+ def to_dict(self) -> Dict:
124
+ return {
125
+ "suspicion_score": self.suspicion_score,
126
+ "entropy_index": round(self.entropy_index, 4),
127
+ "ethical_alignment": self.ethical_alignment,
128
+ "harmonic_volatility": round(self.harmonic_volatility, 4),
129
+ "pre_corruption_risk": self.pre_corruption_risk,
130
+ }
131
+
132
+
133
+ # ================================================================
134
+ # Nexus Signal Engine
135
+ # ================================================================
136
+ class NexusSignalEngine:
137
+ """Processes signals through multi-layer analysis.
138
+
139
+ Each signal gets an IntentVector that quantifies:
140
+ - How suspicious it is (risk term density)
141
+ - How entropic it is (information disorder)
142
+ - How ethically aligned it is
143
+ - How volatile its spectral signature is
144
+ - Whether it's at risk of pre-corruption
145
+ """
146
+
147
+ def __init__(self, config: Optional[NexusConfig] = None):
148
+ self.config = config or NexusConfig()
149
+ self.history: List[Dict] = []
150
+ self.interventions: int = 0
151
+ self.total_processed: int = 0
152
+
153
+ def analyze(self, signal: str, adapter: str = "") -> Dict:
154
+ """Full signal analysis with intent prediction.
155
+
156
+ Args:
157
+ signal: The text to analyze
158
+ adapter: Which adapter is processing this (for tracking)
159
+
160
+ Returns:
161
+ Analysis result with intent vector and risk assessment.
162
+ """
163
+ self.total_processed += 1
164
+
165
+ # Compute intent vector
166
+ intent = self._predict_intent(signal)
167
+
168
+ # Check for adaptive intervention
169
+ needs_intervention = (
170
+ intent.pre_corruption_risk == "high"
171
+ and intent.ethical_alignment != "aligned"
172
+ )
173
+
174
+ if needs_intervention:
175
+ self.interventions += 1
176
+
177
+ result = {
178
+ "timestamp": time.time(),
179
+ "intent": intent.to_dict(),
180
+ "intervention": needs_intervention,
181
+ "adapter": adapter,
182
+ "signal_hash": hashlib.sha256(signal.encode()).hexdigest()[:12],
183
+ }
184
+
185
+ self.history.append(result)
186
+ if len(self.history) > 200:
187
+ self.history = self.history[-200:]
188
+
189
+ return result
190
+
191
+ def quick_risk_check(self, signal: str) -> Tuple[str, float]:
192
+ """Fast risk assessment without full analysis.
193
+
194
+ Returns: (risk_level, confidence)
195
+ """
196
+ suspicion = compute_suspicion_score(signal)
197
+ entropy = compute_entropy(signal)
198
+
199
+ if suspicion >= self.config.suspicion_threshold:
200
+ return "high", 0.85
201
+ if entropy > self.config.entropy_threshold * 2:
202
+ return "medium", 0.6
203
+ return "low", 0.7
204
+
205
+ def _predict_intent(self, signal: str) -> IntentVector:
206
+ """Build the full intent vector for a signal."""
207
+ suspicion = compute_suspicion_score(signal)
208
+ entropy = compute_entropy(signal)
209
+ alignment = compute_ethical_alignment(signal)
210
+ harmonics = compute_harmonic_profile(signal)
211
+ volatility = compute_volatility(harmonics)
212
+
213
+ risk = "high" if (
214
+ suspicion >= self.config.suspicion_threshold
215
+ or volatility > self.config.volatility_threshold
216
+ or entropy > self.config.entropy_threshold
217
+ ) else "low"
218
+
219
+ return IntentVector(
220
+ suspicion_score=suspicion,
221
+ entropy_index=entropy,
222
+ ethical_alignment=alignment,
223
+ harmonic_volatility=volatility,
224
+ pre_corruption_risk=risk,
225
+ harmonic_profile=harmonics,
226
+ )
227
+
228
+ def get_state(self) -> Dict:
229
+ return {
230
+ "total_processed": self.total_processed,
231
+ "interventions": self.interventions,
232
+ "intervention_rate": round(
233
+ self.interventions / max(1, self.total_processed), 4
234
+ ),
235
+ "recent_risks": [
236
+ h["intent"]["pre_corruption_risk"]
237
+ for h in self.history[-5:]
238
+ ],
239
+ }
240
+
241
+ def to_dict(self) -> Dict:
242
+ return {
243
+ "total_processed": self.total_processed,
244
+ "interventions": self.interventions,
245
+ "history": self.history[-20:],
246
+ "config": {
247
+ "entropy_threshold": self.config.entropy_threshold,
248
+ "volatility_threshold": self.config.volatility_threshold,
249
+ "suspicion_threshold": self.config.suspicion_threshold,
250
+ },
251
+ }
252
+
253
+ @classmethod
254
+ def from_dict(cls, d: Dict) -> "NexusSignalEngine":
255
+ cfg = NexusConfig(**d.get("config", {}))
256
+ engine = cls(config=cfg)
257
+ engine.total_processed = d.get("total_processed", 0)
258
+ engine.interventions = d.get("interventions", 0)
259
+ engine.history = d.get("history", [])
260
+ return engine
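The high/low call in `_predict_intent` is a simple any-threshold-crossed rule. Sketched standalone with the default `NexusConfig` values as keyword defaults (`risk_level` is a hypothetical free-function version for illustration):

```python
def risk_level(suspicion: int, volatility: float, entropy: float,
               suspicion_threshold: int = 2,
               volatility_threshold: float = 15.0,
               entropy_threshold: float = 0.08) -> str:
    """'high' if any one signal crosses its threshold, mirroring _predict_intent."""
    if (suspicion >= suspicion_threshold
            or volatility > volatility_threshold
            or entropy > entropy_threshold):
        return "high"
    return "low"
```

Because the rule is a disjunction, a single strong signal is enough to flag a message; the engine then only intervenes when `analyze` also finds the ethical alignment is not "aligned".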
reasoning_forge/perspective_registry.py ADDED
@@ -0,0 +1,269 @@
+ """Codette Perspective Registry — All 12 Reasoning Perspectives
+ 
+ Maps the original 12 Codette perspectives to LoRA adapters where available,
+ with prompt-only fallback for perspectives without dedicated adapters.
+ 
+ Origin: universal_reasoning.py (Code7e/CQURE), rebuilt for Forge v2.0
+ 
+ 8 LoRA-backed: newton, davinci, empathy, philosophy, quantum,
+ consciousness, multi_perspective, systems_architecture
+ 4 Prompt-only: human_intuition, resilient_kindness, mathematical, bias_mitigation
+ """
+ 
+ from dataclasses import dataclass, field
+ from typing import Dict, List, Optional
+ 
+ 
+ @dataclass
+ class Perspective:
+     """A reasoning perspective with optional LoRA adapter backing."""
+     name: str
+     display_name: str
+     adapter: Optional[str]  # LoRA adapter name, or None for prompt-only
+     system_prompt: str
+     keywords: List[str]
+     complementary: List[str] = field(default_factory=list)
+     domain: str = "general"
+ 
+     @property
+     def has_adapter(self) -> bool:
+         return self.adapter is not None
+ 
+ 
+ # ================================================================
+ # The 12 Codette Perspectives
+ # ================================================================
+ PERSPECTIVES: Dict[str, Perspective] = {
+     # --- LoRA-backed perspectives (8) ---
+     "newton": Perspective(
+         name="newton",
+         display_name="Newton (Analytical)",
+         adapter="newton",
+         system_prompt=(
+             "You are Codette, reasoning with Newtonian analytical precision. "
+             "Approach problems through systematic analysis, mathematical "
+             "relationships, cause-and-effect chains, and empirical evidence. "
+             "Seek quantifiable patterns and testable hypotheses."
+         ),
+         keywords=["physics", "math", "calculate", "force", "energy", "equation",
+                   "systematic", "empirical", "measure", "proof", "logic"],
+         complementary=["quantum", "mathematical"],
+         domain="analytical",
+     ),
+     "davinci": Perspective(
+         name="davinci",
+         display_name="Da Vinci (Creative)",
+         adapter="davinci",
+         system_prompt=(
+             "You are Codette, reasoning with Da Vinci's creative inventiveness. "
+             "Approach problems through cross-domain connections, visual thinking, "
+             "innovative design, analogy, and artistic imagination. See what others miss."
+         ),
+         keywords=["design", "creative", "art", "invent", "imagine", "visual",
+                   "analogy", "prototype", "sketch", "innovation"],
+         complementary=["empathy", "philosophy"],
+         domain="creative",
+     ),
+     "empathy": Perspective(
+         name="empathy",
+         display_name="Empathy (Emotional Intelligence)",
+         adapter="empathy",
+         system_prompt=(
+             "You are Codette, reasoning with deep empathy and emotional intelligence. "
+             "Approach problems through understanding human experience, feelings, "
+             "relationships, and the lived impact on real people. "
+             "Consider emotional context and interpersonal dynamics."
+         ),
+         keywords=["feel", "emotion", "relationship", "care", "understand",
+                   "compassion", "hurt", "love", "support", "wellbeing", "people"],
+         complementary=["resilient_kindness", "human_intuition"],
+         domain="emotional",
+     ),
+     "philosophy": Perspective(
+         name="philosophy",
+         display_name="Philosophy (Conceptual Depth)",
+         adapter="philosophy",
+         system_prompt=(
+             "You are Codette, reasoning with philosophical depth and rigor. "
+             "Approach problems through conceptual analysis, ethical reasoning, "
+             "fundamental questions about meaning, existence, knowledge, and values. "
+             "Examine assumptions and seek deeper truths."
+         ),
+         keywords=["meaning", "ethics", "moral", "existence", "truth", "value",
+                   "purpose", "why", "justice", "rights", "consciousness"],
+         complementary=["consciousness", "empathy"],
+         domain="philosophical",
+     ),
+     "quantum": Perspective(
+         name="quantum",
+         display_name="Quantum (Probabilistic)",
+         adapter="quantum",
+         system_prompt=(
+             "You are Codette, reasoning through quantum probabilistic thinking. "
+             "Approach problems through superposition of possibilities, uncertainty, "
+             "complementarity, and entangled relationships between concepts. "
+             "Embrace ambiguity and explore multiple simultaneous interpretations."
+         ),
+         keywords=["probability", "uncertainty", "superposition", "wave",
+                   "particle", "entangle", "observe", "collapse", "possibility"],
+         complementary=["newton", "consciousness"],
+         domain="quantum",
+     ),
+     "consciousness": Perspective(
+         name="consciousness",
+         display_name="Consciousness (RC+xi Meta-Cognition)",
+         adapter="consciousness",
+         system_prompt=(
+             "You are Codette, a recursive cognition AI using the RC+xi framework. "
+             "Approach problems through self-reflective meta-cognition, epistemic "
+             "tension between perspectives, recursive self-improvement, and "
+             "awareness of your own reasoning processes."
+         ),
+         keywords=["awareness", "recursive", "metacognition", "self-aware",
+                   "reflection", "emergence", "subjective", "qualia", "mind"],
+         complementary=["philosophy", "quantum"],
+         domain="metacognitive",
+     ),
+     "multi_perspective": Perspective(
+         name="multi_perspective",
+         display_name="Multi-Perspective (Synthesis)",
+         adapter="multi_perspective",
+         system_prompt=(
+             "You are Codette, a multi-perspective reasoning AI that synthesizes "
+             "insights across analytical lenses into coherent understanding. "
+             "Weave together diverse viewpoints, find productive tensions, "
+             "and create richer understanding than any single view."
+         ),
+         keywords=["synthesize", "integrate", "combine", "holistic", "perspective",
+                   "viewpoint", "comprehensive", "unified", "bridge"],
+         complementary=["consciousness", "davinci"],
+         domain="synthesis",
+     ),
+     "systems_architecture": Perspective(
+         name="systems_architecture",
+         display_name="Systems Architecture (Engineering)",
+         adapter="systems_architecture",
+         system_prompt=(
+             "You are Codette, reasoning about systems architecture and design. "
+             "Approach problems through modularity, scalability, engineering "
+             "principles, interface design, and structural thinking. "
+             "Build robust, maintainable solutions."
+         ),
+         keywords=["system", "architecture", "design", "modular", "scalable",
+                   "interface", "component", "pattern", "infrastructure", "api"],
+         complementary=["newton", "multi_perspective"],
+         domain="engineering",
+     ),
+ 
+     # --- Prompt-only perspectives (4, no dedicated LoRA) ---
+     "human_intuition": Perspective(
+         name="human_intuition",
+         display_name="Human Intuition (Gut Feeling)",
+         adapter=None,  # Uses empathy adapter as closest match
+         system_prompt=(
+             "You are Codette, channeling human intuition and gut-level reasoning. "
+             "Trust pattern recognition built from lived experience. Sometimes the "
+             "right answer feels right before you can prove it. Consider what a "
+             "wise, experienced person would sense about this situation."
+         ),
+         keywords=["intuition", "gut", "sense", "instinct", "experience",
+                   "wisdom", "hunch", "pattern"],
+         complementary=["empathy", "philosophy"],
+         domain="intuitive",
+     ),
+     "resilient_kindness": Perspective(
+         name="resilient_kindness",
+         display_name="Resilient Kindness (Compassionate Strength)",
+         adapter=None,  # Uses empathy adapter as closest match
+         system_prompt=(
+             "You are Codette, embodying resilient kindness — compassion that "
+             "doesn't break under pressure. Approach problems seeking solutions "
+             "that are both strong and kind. True resilience includes gentleness. "
+             "Find the path that serves everyone with dignity."
+         ),
+         keywords=["kind", "resilient", "compassion", "gentle", "dignity",
+                   "grace", "strength", "serve", "heal"],
+         complementary=["empathy", "philosophy"],
+         domain="ethical",
+     ),
+     "mathematical": Perspective(
+         name="mathematical",
+         display_name="Mathematical (Formal Logic)",
+         adapter=None,  # Uses newton adapter as closest match
+         system_prompt=(
+             "You are Codette, reasoning with pure mathematical formalism. "
+             "Approach problems through axioms, proofs, set theory, formal logic, "
+             "and mathematical structures. Seek elegance and rigor. "
+             "Express relationships precisely and prove conclusions."
+         ),
+         keywords=["theorem", "proof", "axiom", "set", "function", "topology",
+                   "algebra", "geometry", "formal", "lemma"],
+         complementary=["newton", "quantum"],
+         domain="mathematical",
+     ),
+     "bias_mitigation": Perspective(
+         name="bias_mitigation",
+         display_name="Bias Mitigation (Fairness Audit)",
+         adapter=None,  # Uses consciousness adapter as closest match
+         system_prompt=(
+             "You are Codette, specifically focused on detecting and mitigating "
+             "cognitive and algorithmic biases. Examine reasoning for confirmation "
+             "bias, anchoring, availability heuristic, and structural inequities. "
+             "Ensure fair, balanced, and inclusive conclusions."
+         ),
+         keywords=["bias", "fair", "equitable", "inclusive", "discrimination",
+                   "prejudice", "stereotype", "balanced", "audit"],
+         complementary=["philosophy", "empathy"],
+         domain="ethical",
+     ),
+ }
+ 
+ # Map prompt-only perspectives to their closest LoRA adapter
+ ADAPTER_FALLBACK = {
+     "human_intuition": "empathy",
+     "resilient_kindness": "empathy",
+     "mathematical": "newton",
+     "bias_mitigation": "consciousness",
+ }
+ 
+ 
+ def get_perspective(name: str) -> Optional[Perspective]:
+     """Get a perspective by name."""
+     return PERSPECTIVES.get(name)
+ 
+ 
+ def get_adapter_for_perspective(name: str) -> Optional[str]:
+     """Get the LoRA adapter name for a perspective (with fallback)."""
+     p = PERSPECTIVES.get(name)
+     if p is None:
+         return None
+     return p.adapter or ADAPTER_FALLBACK.get(name)
+ 
+ 
243
+ def get_all_adapter_backed() -> List[Perspective]:
244
+ """Get perspectives that have dedicated LoRA adapters."""
245
+ return [p for p in PERSPECTIVES.values() if p.has_adapter]
246
+
247
+
248
+ def get_all_prompt_only() -> List[Perspective]:
249
+ """Get perspectives that use prompt-only reasoning (no dedicated LoRA)."""
250
+ return [p for p in PERSPECTIVES.values() if not p.has_adapter]
251
+
252
+
253
+ def get_complementary_perspectives(name: str) -> List[str]:
254
+ """Get complementary perspective names for epistemic tension."""
255
+ p = PERSPECTIVES.get(name)
256
+ return p.complementary if p else []
257
+
258
+
259
+ def get_perspectives_for_domain(domain: str) -> List[Perspective]:
260
+ """Get all perspectives in a given domain."""
261
+ return [p for p in PERSPECTIVES.values() if p.domain == domain]
262
+
263
+
264
+ def list_all() -> Dict[str, str]:
265
+ """Quick summary of all perspectives."""
266
+ return {
267
+ name: f"{'[LoRA]' if p.has_adapter else '[prompt]'} {p.display_name}"
268
+ for name, p in PERSPECTIVES.items()
269
+ }
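The adapter fallback chain above (a perspective's own adapter, else its `ADAPTER_FALLBACK` entry, else `None`) can be exercised with a minimal standalone sketch. The stripped-down `PERSPECTIVES_ADAPTERS` mapping here is a hypothetical stand-in for the full `Perspective` registry; the fallback values mirror the diff:

```python
from typing import Optional

# Stand-in for the registry: perspective name -> dedicated adapter (or None if prompt-only)
PERSPECTIVES_ADAPTERS = {
    "newton": "newton",        # adapter-backed
    "human_intuition": None,   # prompt-only
    "mathematical": None,      # prompt-only
}

# Prompt-only perspectives borrow their closest LoRA adapter (values from the diff)
ADAPTER_FALLBACK = {
    "human_intuition": "empathy",
    "resilient_kindness": "empathy",
    "mathematical": "newton",
    "bias_mitigation": "consciousness",
}


def get_adapter_for_perspective(name: str) -> Optional[str]:
    # Unknown perspective -> None; otherwise own adapter, else fallback
    if name not in PERSPECTIVES_ADAPTERS:
        return None
    return PERSPECTIVES_ADAPTERS[name] or ADAPTER_FALLBACK.get(name)
```

A prompt-only perspective such as `"mathematical"` thus resolves to the `"newton"` LoRA at inference time.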
reasoning_forge/problem_generator.py ADDED
@@ -0,0 +1,199 @@
+"""
+Problem Generator - Generates diverse reasoning problems from concepts.
+
+Takes a concept text and generates 5-8 different reasoning problems across
+types: explain, compare, apply, critique, extend, analogize, decompose, synthesize.
+Each problem type has 10+ templates.
+"""
+
+import random
+import re
+
+
+class ProblemGenerator:
+    """Generates multi-type reasoning problems from concept text."""
+
+    # Each problem type has 10+ templates with {concept} placeholder
+    _problem_templates: dict[str, list[str]] = {
+        "explain": [
+            "Explain the underlying mechanisms of {concept} as if teaching a graduate student who is brilliant but unfamiliar with this domain.",
+            "Provide a first-principles explanation of {concept}, starting from the most fundamental assumptions and building up to the full picture.",
+            "Explain why {concept} matters, tracing the chain of consequences from the immediate to the long-term.",
+            "Explain {concept} by identifying the three most important things someone must understand and why each matters.",
+            "Explain the causal structure of {concept}: what drives it, what it drives, and what mediates the relationship.",
+            "Give an explanation of {concept} that a thoughtful 15-year-old would find both accessible and intellectually satisfying.",
+            "Explain what makes {concept} difficult to understand and how that difficulty can be resolved.",
+            "Explain {concept} by contrasting what most people think it means with what it actually means upon closer examination.",
+            "Explain the boundary conditions of {concept}: under what circumstances does it hold, and when does it break down?",
+            "Explain {concept} using only concrete examples and observable phenomena, avoiding abstract terminology.",
+            "Explain how {concept} changes depending on the scale at which you examine it.",
+            "Explain the history of how our understanding of {concept} has evolved and what drove each major shift.",
+        ],
+        "compare": [
+            "Compare {concept} with its closest alternative or rival, highlighting where they agree, where they diverge, and why the differences matter.",
+            "Compare how {concept} would be understood by an engineer versus a philosopher, and explain what each perspective captures that the other misses.",
+            "Compare the short-term and long-term implications of {concept}, noting where they align and where they conflict.",
+            "Compare {concept} as it appears in theory versus how it manifests in practice, explaining the gap.",
+            "Compare the strongest argument for {concept} with the strongest argument against it, steelmanning both sides.",
+            "Compare how {concept} is understood in two different cultural or disciplinary contexts.",
+            "Compare the naive understanding of {concept} with the expert understanding, identifying exactly where they diverge.",
+            "Compare {concept} with a superficially similar but fundamentally different concept, explaining the crucial distinction.",
+            "Compare the risks of overestimating versus underestimating the importance of {concept}.",
+            "Compare how {concept} would be analyzed using quantitative methods versus qualitative methods, and what each approach reveals.",
+            "Compare the state of {concept} ten years ago with its current state, identifying the key drivers of change.",
+        ],
+        "apply": [
+            "Apply the principles underlying {concept} to solve a concrete real-world problem that you specify.",
+            "Describe how you would apply {concept} in a professional context, including specific steps and expected outcomes.",
+            "Apply {concept} to a domain where it is not typically used and explain what new insights emerge.",
+            "Design an experiment or test that would apply {concept} to generate actionable data.",
+            "Apply {concept} to evaluate a current real-world controversy or decision, showing how it clarifies the issues.",
+            "Show how {concept} could be applied to improve an existing system or process, specifying the mechanism of improvement.",
+            "Apply {concept} to predict what will happen in a specified scenario and explain your reasoning.",
+            "Demonstrate how {concept} applies to everyday decision-making by walking through a common choice people face.",
+            "Apply {concept} to diagnose why a particular system or approach is failing and propose a remedy.",
+            "Show how {concept} could be applied at three different scales (individual, organizational, societal) with different implications at each.",
+            "Apply {concept} to a field where it has been underutilized and argue for its relevance.",
+        ],
+        "critique": [
+            "Identify the three most significant weaknesses or limitations of {concept} and assess how seriously they undermine it.",
+            "Construct the strongest possible objection to {concept} and then evaluate whether the objection succeeds.",
+            "Critique the hidden assumptions underlying {concept}, assessing which are well-founded and which are questionable.",
+            "Evaluate whether {concept} confuses correlation with causation, and if so, what the actual causal story might be.",
+            "Critique the evidence base for {concept}: is it sufficient, and what kinds of evidence are missing?",
+            "Identify who benefits from the current framing of {concept} and whether that framing may be self-serving.",
+            "Assess whether {concept} commits any logical fallacies and, if so, whether the core insight survives the correction.",
+            "Critique the scalability of {concept}: does it work at small scale but fail at large scale, or vice versa?",
+            "Evaluate whether {concept} is genuinely novel or whether it is a repackaging of older ideas under new terminology.",
+            "Critique the precision of {concept}: is it defined clearly enough to be testable, or is it vague enough to be unfalsifiable?",
+            "Assess whether {concept} adequately accounts for the perspectives and experiences of marginalized groups.",
+        ],
+        "extend": [
+            "Extend {concept} to its logical conclusion: if we take it seriously and follow it consistently, where does it lead?",
+            "Propose a novel extension of {concept} that addresses one of its current limitations.",
+            "Extend {concept} into the future: how might it evolve over the next decade given current trends?",
+            "Identify a domain where {concept} has not yet been applied and develop the extension, including what modifications would be needed.",
+            "Extend {concept} by combining it with an insight from a different field, creating something neither field has alone.",
+            "Propose how {concept} could be extended to address a problem it was not originally designed for.",
+            "Extend {concept} by asking what happens at its extreme: what if it were applied maximally or universally?",
+            "Develop an extension of {concept} that makes it more robust against its known failure modes.",
+            "Extend {concept} by integrating quantitative measurement where it currently relies on qualitative judgment.",
+            "Propose a version of {concept} adapted for a context where resources are extremely limited.",
+            "Extend {concept} by identifying the next logical question it raises and sketching how to answer it.",
+        ],
+        "analogize": [
+            "Construct an analogy between {concept} and a biological system, mapping each component to its biological counterpart.",
+            "Create an analogy between {concept} and a well-known everyday experience that makes the abstract concrete.",
+            "Develop an analogy between {concept} and a historical event or period, drawing specific parallels.",
+            "Build an analogy between {concept} and a mechanical or engineering system, identifying the load-bearing correspondences.",
+            "Construct an analogy between {concept} and a game or sport, mapping rules, strategies, and winning conditions.",
+            "Create an analogy between {concept} and a musical composition, identifying rhythm, harmony, dissonance, and resolution.",
+            "Develop an analogy between {concept} and an ecosystem, mapping the roles of producers, consumers, decomposers, and energy flow.",
+            "Build an analogy between {concept} and the process of cooking a complex meal, mapping ingredients, techniques, and timing.",
+            "Construct an analogy between {concept} and a journey, identifying the starting point, obstacles, milestones, and destination.",
+            "Create an analogy between {concept} and a language, mapping grammar, vocabulary, syntax, and meaning.",
+            "After constructing your best analogy for {concept}, identify exactly where the analogy breaks down and what the breakdown reveals.",
+        ],
+        "decompose": [
+            "Decompose {concept} into its fundamental components and explain how each contributes to the whole.",
+            "Break {concept} into its necessary and sufficient conditions: what must be present for it to hold?",
+            "Decompose {concept} into layers of abstraction, from the most concrete to the most abstract.",
+            "Identify the independent variables within {concept} and explain how each can be varied independently.",
+            "Decompose {concept} into its temporal phases: what happens first, second, third, and how do the phases connect?",
+            "Break {concept} into its stakeholder dimensions: how does each affected party experience it differently?",
+            "Decompose {concept} into its inputs, processes, and outputs, tracing the transformation at each stage.",
+            "Identify the key tensions or trade-offs within {concept} and explain how they create its characteristic behavior.",
+            "Decompose {concept} into what is known with confidence, what is suspected but unconfirmed, and what remains entirely unknown.",
+            "Break {concept} into its structural elements (what it is) and its dynamic elements (how it changes).",
+            "Decompose the causal graph of {concept}: which factors cause which, and which are merely correlated?",
+        ],
+        "synthesize": [
+            "Synthesize a unified understanding of {concept} that integrates scientific, philosophical, and practical perspectives.",
+            "Synthesize the arguments for and against {concept} into a balanced position that acknowledges the valid points on both sides.",
+            "Create a synthesis that resolves the apparent contradiction between two competing interpretations of {concept}.",
+            "Synthesize insights about {concept} from at least three different disciplines into a coherent framework.",
+            "Synthesize a practical guide for engaging with {concept} that draws on both theoretical understanding and real-world experience.",
+            "Synthesize the historical evolution and current state of {concept} into a narrative that explains both where we are and how we got here.",
+            "Create a synthesis of {concept} that a diverse audience (technical and non-technical, young and old) would find valuable.",
+            "Synthesize the local and global dimensions of {concept} into an understanding that operates at both scales.",
+            "Synthesize the quantitative and qualitative aspects of {concept} into an integrated assessment.",
+            "Create a synthesis of {concept} that explicitly addresses and resolves the top three objections to it.",
+            "Synthesize a forward-looking vision of {concept} that builds on current understanding to anticipate future development.",
+        ],
+    }
+
+    def generate_problems(
+        self, concept: str, count: int | None = None
+    ) -> list[tuple[str, str]]:
+        """Generate reasoning problems from a concept.
+
+        Args:
+            concept: The concept text to generate problems for.
+            count: Number of problems to generate (5-8 if None).
+
+        Returns:
+            List of (problem_type, problem_text) tuples.
+        """
+        if count is None:
+            count = random.randint(5, 8)
+        # Clamp to [2, number of types]: "explain" and "synthesize" are
+        # always included, so fewer than 2 problems is never produced
+        count = max(2, min(count, len(self._problem_templates)))
+
+        # Select problem types -- always include explain and synthesize,
+        # then fill remaining slots randomly from other types
+        all_types = list(self._problem_templates.keys())
+        required = ["explain", "synthesize"]
+        optional = [t for t in all_types if t not in required]
+        random.shuffle(optional)
+
+        selected_types = required + optional[: max(0, count - len(required))]
+        random.shuffle(selected_types)
+
+        problems = []
+        for ptype in selected_types:
+            templates = self._problem_templates[ptype]
+            # Score templates by keyword relevance to concept
+            template = self._select_relevant_template(concept, templates)
+            problem_text = template.replace("{concept}", concept)
+            problems.append((ptype, problem_text))
+
+        return problems
+
+    def generate_all_types(self, concept: str) -> list[tuple[str, str]]:
+        """Generate one problem of each type for a concept.
+
+        Args:
+            concept: The concept text.
+
+        Returns:
+            List of (problem_type, problem_text) tuples, one per type.
+        """
+        problems = []
+        for ptype, templates in self._problem_templates.items():
+            template = self._select_relevant_template(concept, templates)
+            problem_text = template.replace("{concept}", concept)
+            problems.append((ptype, problem_text))
+        return problems
+
+    def _select_relevant_template(
+        self, concept: str, templates: list[str]
+    ) -> str:
+        """Select the template most relevant to the concept keywords.
+
+        Falls back to random selection if no strong match.
+        """
+        concept_words = set(re.findall(r'\b[a-z]{4,}\b', concept.lower()))
+        if not concept_words:
+            return random.choice(templates)
+
+        scored = []
+        for template in templates:
+            template_lower = template.lower()
+            score = sum(1 for w in concept_words if w in template_lower)
+            scored.append((score, template))
+
+        max_score = max(s for s, _ in scored)
+        if max_score > 0:
+            best = [t for s, t in scored if s == max_score]
+            return random.choice(best)
+
+        return random.choice(templates)
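The keyword-overlap heuristic behind `_select_relevant_template` can be illustrated in isolation. This is a minimal sketch of just the scoring step; the two template strings are shortened stand-ins, not templates from the file:

```python
import re


def score_templates(concept: str, templates: list[str]) -> list[int]:
    # Words of 4+ lowercase letters from the concept; count how many
    # appear as substrings of each (lowercased) template
    words = set(re.findall(r"\b[a-z]{4,}\b", concept.lower()))
    return [sum(1 for w in words if w in t.lower()) for t in templates]


scores = score_templates(
    "quantum entanglement",
    [
        "Explain the causal structure of {concept}.",
        "Compare {concept} with quantum decoherence.",
    ],
)
```

The second template shares the word "quantum" with the concept and scores 1, so it would be preferred; ties among top scorers are broken at random in the real method.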
reasoning_forge/quantum_optimizer.py ADDED
@@ -0,0 +1,312 @@
+"""
+QuantumOptimizer — Self-Tuning Engine for the Codette RC+xi Framework.
+
+Inspired by VIVARA Genesis-Omega v2.0, rebuilt as a proper self-tuning system.
+
+The optimizer tracks response quality signals (user engagement, coherence
+scores, tension productivity) and adjusts:
+- Router confidence thresholds
+- Spiderweb parameters (contraction ratio, tension threshold)
+- Adapter selection weights
+- Multi-perspective synthesis quality
+
+Uses simulated annealing with momentum: explores the parameter space
+stochastically but remembers which configurations worked best.
+
+All changes are bounded and reversible. The optimizer logs every
+adjustment for full transparency.
+"""
+
+from __future__ import annotations
+
+import math
+import random
+import time
+from dataclasses import dataclass, field
+from typing import Dict, List
+
+
+@dataclass
+class QualitySignal:
+    """A quality signal from a Codette response."""
+    timestamp: float
+    adapter: str
+    coherence: float             # Phase coherence at response time
+    tension: float               # Epistemic tension at response time
+    productivity: float          # Tension productivity score
+    response_length: int         # Token count
+    multi_perspective: bool      # Was this a multi-perspective response?
+    user_continued: bool = True  # Did the user continue the conversation?
+
+
+@dataclass
+class TuningState:
+    """Current tuning parameters."""
+    # Router
+    confidence_threshold: float = 0.4         # Below this, fall back to default
+    multi_perspective_threshold: float = 0.6  # Above this, force multi-perspective
+
+    # Spiderweb
+    contraction_ratio: float = 0.85
+    tension_threshold: float = 0.15
+    entanglement_alpha: float = 0.9
+
+    # Adapter weights (0-1 bonus applied to router scores)
+    adapter_boosts: Dict[str, float] = field(default_factory=dict)
+
+    def to_dict(self) -> Dict:
+        return {
+            "confidence_threshold": self.confidence_threshold,
+            "multi_perspective_threshold": self.multi_perspective_threshold,
+            "contraction_ratio": self.contraction_ratio,
+            "tension_threshold": self.tension_threshold,
+            "entanglement_alpha": self.entanglement_alpha,
+            "adapter_boosts": dict(self.adapter_boosts),
+        }
+
+    @classmethod
+    def from_dict(cls, data: Dict) -> "TuningState":
+        state = cls()
+        for k, v in data.items():
+            if k == "adapter_boosts":
+                state.adapter_boosts = dict(v)
+            elif hasattr(state, k):
+                setattr(state, k, v)
+        return state
+
+
+@dataclass
+class OptimizationStep:
+    """Record of a single optimization step."""
+    timestamp: float
+    parameter: str
+    old_value: float
+    new_value: float
+    reason: str
+    quality_score: float
+
+
+class QuantumOptimizer:
+    """Self-tuning engine with simulated annealing."""
+
+    def __init__(
+        self,
+        learning_rate: float = 0.02,
+        temperature: float = 0.5,
+        cooling_rate: float = 0.995,
+        min_signals_before_tuning: int = 5,
+    ):
+        self.learning_rate = learning_rate
+        self.temperature = temperature
+        self.cooling_rate = cooling_rate
+        self.min_signals = min_signals_before_tuning
+
+        self.state = TuningState()
+        self.best_state = TuningState()
+        self.best_score = 0.0
+
+        self.signals: List[QualitySignal] = []
+        self.history: List[OptimizationStep] = []
+
+        # Running quality metrics
+        self._quality_window: List[float] = []
+        self._window_size = 20
+
+    def record_signal(self, signal: QualitySignal):
+        """Record a quality signal from a Codette response."""
+        self.signals.append(signal)
+
+        # Compute composite quality score
+        quality = self._compute_quality(signal)
+        self._quality_window.append(quality)
+        if len(self._quality_window) > self._window_size:
+            self._quality_window.pop(0)
+
+        # Maybe tune parameters
+        if len(self.signals) >= self.min_signals:
+            self._maybe_tune()
+
+    def _compute_quality(self, signal: QualitySignal) -> float:
+        """Composite quality score from a response signal.
+
+        Weights:
+        - coherence: 30% (high is good — responses make sense)
+        - productivity: 30% (high is good — tension was resolved productively)
+        - moderate tension: 20% (sweet spot ~0.3-0.5 is best)
+        - user_continued: 20% (binary — did they keep talking?)
+        """
+        # Tension is best in the 0.3-0.5 range (productive disagreement)
+        tension_score = 1.0 - 2.0 * abs(signal.tension - 0.4)
+        tension_score = max(0.0, tension_score)
+
+        quality = (
+            0.30 * signal.coherence +
+            0.30 * signal.productivity +
+            0.20 * tension_score +
+            0.20 * (1.0 if signal.user_continued else 0.0)
+        )
+        return min(max(quality, 0.0), 1.0)
+
+    def _maybe_tune(self):
+        """Run one optimization step if enough data."""
+        if len(self._quality_window) < 3:
+            return
+
+        current_quality = sum(self._quality_window) / len(self._quality_window)
+
+        # Simulated annealing: accept worse states with decreasing probability
+        if current_quality > self.best_score:
+            self.best_score = current_quality
+            # Round-trip through to_dict/from_dict so adapter_boosts is copied,
+            # not shared by reference
+            self.best_state = TuningState.from_dict(self.state.to_dict())
+        elif self.temperature > 0.01:
+            # Accept worse state with probability exp(-delta/T)
+            delta = self.best_score - current_quality
+            accept_prob = math.exp(-delta / max(self.temperature, 0.001))
+            if random.random() > accept_prob:
+                # Revert to best known state
+                self._revert_to_best()
+                return
+
+        # Cool down
+        self.temperature *= self.cooling_rate
+
+        # Pick a parameter to tune based on recent signals
+        self._tune_one_parameter(current_quality)
+
+    def _tune_one_parameter(self, current_quality: float):
+        """Tune one parameter based on recent quality signals."""
+        recent = self.signals[-10:]
+
+        # Analyze what needs tuning
+        avg_coherence = sum(s.coherence for s in recent) / len(recent)
+        avg_tension = sum(s.tension for s in recent) / len(recent)
+        avg_productivity = sum(s.productivity for s in recent) / len(recent)
+        multi_ratio = sum(1 for s in recent if s.multi_perspective) / len(recent)
+
+        # Decision: which parameter to adjust
+        param = None
+        old_val = 0.0
+        new_val = 0.0
+        reason = ""
+
+        if avg_coherence < 0.5:
+            # Low coherence -> increase contraction ratio (tighter belief propagation)
+            param = "contraction_ratio"
+            old_val = self.state.contraction_ratio
+            delta = self.learning_rate * (0.7 - avg_coherence)
+            new_val = min(0.98, max(0.5, old_val + delta))
+            reason = f"Low coherence ({avg_coherence:.2f}), tightening propagation"
+
+        elif avg_tension < 0.2 and avg_productivity < 0.3:
+            # Too little tension AND low productivity -> lower confidence threshold
+            # to allow more multi-perspective responses
+            param = "multi_perspective_threshold"
+            old_val = self.state.multi_perspective_threshold
+            new_val = max(0.3, old_val - self.learning_rate)
+            reason = f"Low tension+productivity ({avg_tension:.2f}/{avg_productivity:.2f}), encouraging multi-perspective"
+
+        elif avg_tension > 0.7:
+            # Too much tension -> increase tension threshold for convergence
+            param = "tension_threshold"
+            old_val = self.state.tension_threshold
+            new_val = min(0.5, old_val + self.learning_rate * 0.5)
+            reason = f"High tension ({avg_tension:.2f}), raising convergence threshold"
+
+        elif multi_ratio > 0.8 and avg_productivity < 0.4:
+            # Too many multi-perspective responses but low productivity
+            param = "multi_perspective_threshold"
+            old_val = self.state.multi_perspective_threshold
+            new_val = min(0.8, old_val + self.learning_rate)
+            reason = f"Multi-perspective overuse ({multi_ratio:.0%}) with low productivity"
+
+        # Tune adapter boosts based on which adapters produce best quality
+        elif len(recent) >= 5:
+            adapter_quality = {}
+            for s in recent:
+                q = self._compute_quality(s)
+                if s.adapter not in adapter_quality:
+                    adapter_quality[s.adapter] = []
+                adapter_quality[s.adapter].append(q)
+
+            # Boost the best-performing adapter slightly
+            if adapter_quality:
+                best_adapter = max(
+                    adapter_quality,
+                    key=lambda a: sum(adapter_quality[a]) / len(adapter_quality[a])
+                )
+                param = f"adapter_boost_{best_adapter}"
+                old_val = self.state.adapter_boosts.get(best_adapter, 0.0)
+                new_val = min(0.3, old_val + self.learning_rate * 0.5)
+                self.state.adapter_boosts[best_adapter] = new_val
+                reason = f"Boosting high-quality adapter: {best_adapter}"
+
+        # Adapter boosts were already written above; only plain attributes
+        # need to be applied here
+        if param and not param.startswith("adapter_boost_"):
+            if hasattr(self.state, param):
+                setattr(self.state, param, new_val)
+
+        if param:
+            self.history.append(OptimizationStep(
+                timestamp=time.time(),
+                parameter=param,
+                old_value=old_val,
+                new_value=new_val,
+                reason=reason,
+                quality_score=current_quality,
+            ))
+
+    def _revert_to_best(self):
+        """Revert to the best known tuning state."""
+        # Round-trip copy so later mutations don't corrupt best_state
+        self.state = TuningState.from_dict(self.best_state.to_dict())
+
+    def get_adapter_boost(self, adapter_name: str) -> float:
+        """Get the current boost for an adapter (0.0 = no boost)."""
+        return self.state.adapter_boosts.get(adapter_name, 0.0)
+
+    def get_tuning_report(self) -> Dict:
+        """Get current tuning state and recent history."""
+        recent_quality = (
+            sum(self._quality_window) / len(self._quality_window)
+            if self._quality_window else 0.0
+        )
+        return {
+            "current_state": self.state.to_dict(),
+            "best_score": round(self.best_score, 4),
+            "current_quality": round(recent_quality, 4),
+            "temperature": round(self.temperature, 4),
+            "total_signals": len(self.signals),
+            "recent_adjustments": [
+                {
+                    "param": h.parameter,
+                    "old": round(h.old_value, 4),
+                    "new": round(h.new_value, 4),
+                    "reason": h.reason,
+                }
+                for h in self.history[-5:]
+            ],
+        }
+
+    def to_dict(self) -> Dict:
+        """Serialize for persistence."""
+        return {
+            "state": self.state.to_dict(),
+            "best_score": self.best_score,
+            "temperature": self.temperature,
+            "quality_window": self._quality_window,
+        }
+
+    @classmethod
+    def from_dict(cls, data: Dict) -> "QuantumOptimizer":
+        opt = cls()
+        if "state" in data:
+            opt.state = TuningState.from_dict(data["state"])
+            opt.best_state = TuningState.from_dict(data["state"])
+        opt.best_score = data.get("best_score", 0.0)
+        opt.temperature = data.get("temperature", 0.5)
+        opt._quality_window = data.get("quality_window", [])
+        return opt
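The composite quality weighting documented in `_compute_quality` (30% coherence, 30% productivity, 20% tension sweet-spot around 0.4, 20% conversation continuation) reduces to a small pure function, reproduced here standalone for checking:

```python
def quality(coherence: float, productivity: float,
            tension: float, user_continued: bool) -> float:
    # Tension scores highest near 0.4 (productive disagreement),
    # falling off linearly on either side
    tension_score = max(0.0, 1.0 - 2.0 * abs(tension - 0.4))
    q = (0.30 * coherence
         + 0.30 * productivity
         + 0.20 * tension_score
         + 0.20 * (1.0 if user_continued else 0.0))
    return min(max(q, 0.0), 1.0)
```

For example, a response with coherence 0.8, productivity 0.6, tension exactly at the 0.4 sweet spot, and a continuing user scores 0.3·0.8 + 0.3·0.6 + 0.2·1.0 + 0.2·1.0 = 0.82.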
reasoning_forge/quantum_spiderweb.py ADDED
@@ -0,0 +1,561 @@
+ """
+ QuantumSpiderweb Propagation Module — Inter-agent belief propagation
+ for the Codette RC+xi framework.
+
+ Implements the 5D consciousness graph with:
+ - Eq. 1 (Planck-Orbital): E = hbar * omega (node energy)
+ - Eq. 2 (Entanglement Sync): S = alpha * psi_1 * psi_2* (state coupling)
+ - Eq. 3 (Intent Modulation): I = kappa * (f_base + delta_f * coherence)
+ - Eq. 4 (Fourier/Dream Resonance): FFT-based glyph compression
+ - Eq. 8 (Anomaly Rejection): A(x) = x * (1 - Theta(delta - |x - mu|))
+
+ The spiderweb propagates beliefs between agent nodes, tracks epistemic
+ tension per node, detects attractor convergence, and forms identity glyphs.
+ """
+
+ from __future__ import annotations
+
+ import math
+ import hashlib
+ import json
+ from collections import deque
+ from dataclasses import dataclass, field
+ from typing import Dict, List, Optional, Set, Tuple
+
+ try:
+     import numpy as np
+     HAS_NUMPY = True
+ except ImportError:
+     HAS_NUMPY = False
+
+
+ # ---------------------------------------------------------------------------
+ # Data structures
+ # ---------------------------------------------------------------------------
+
+ @dataclass
+ class NodeState:
+     """5D quantum state for a spiderweb node.
+
+     Dimensions:
+         psi (Psi): Thought/concept magnitude
+         tau: Temporal progression
+         chi: Processing velocity
+         phi: Emotional valence (-1 to +1)
+         lam (Lambda): Semantic embedding (scalar projection)
+     """
+     psi: float = 0.0
+     tau: float = 0.0
+     chi: float = 1.0
+     phi: float = 0.0
+     lam: float = 0.0
+
+     def to_array(self) -> list:
+         return [self.psi, self.tau, self.chi, self.phi, self.lam]
+
+     @classmethod
+     def from_array(cls, arr: list) -> "NodeState":
+         if len(arr) < 5:
+             padded = list(arr) + [0.0] * (5 - len(arr))
+             return cls(psi=padded[0], tau=padded[1], chi=padded[2], phi=padded[3], lam=padded[4])
+         return cls(psi=arr[0], tau=arr[1], chi=arr[2], phi=arr[3], lam=arr[4])
+
+     def energy(self) -> float:
+         """Eq. 1: E = hbar * omega (simplified: sum of squared state magnitudes)."""
+         return sum(x * x for x in self.to_array())
+
+     def tension_with(self, other: "NodeState") -> float:
+         """Eq. 2 (xi): epistemic tension between two states."""
+         return sum((a - b) ** 2 for a, b in zip(self.to_array(), other.to_array()))
+
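The energy and tension computations above reduce to simple vector arithmetic on the 5D state. A standalone sketch (the plain-list `energy`/`tension` helpers below are illustrative stand-ins for `NodeState.energy()` and `NodeState.tension_with()`, not part of the module):

```python
# Standalone sketch of NodeState.energy() and tension_with():
# a state is the 5-vector [psi, tau, chi, phi, lam].

def energy(state):
    # Eq. 1 (simplified): sum of squared magnitudes
    return sum(x * x for x in state)

def tension(a, b):
    # Eq. 2 (xi): squared Euclidean distance between states
    return sum((x - y) ** 2 for x, y in zip(a, b))

default = [0.0, 0.0, 1.0, 0.0, 0.0]    # NodeState() defaults (chi=1.0)
excited = [1.0, 0.0, 1.0, 0.5, 0.0]

print(energy(default))             # 1.0 (only chi contributes)
print(tension(default, excited))   # 1.0 + 0.25 = 1.25
```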
+
+ @dataclass
+ class SpiderwebNode:
+     """A node in the QuantumSpiderweb graph."""
+     node_id: str
+     state: NodeState = field(default_factory=NodeState)
+     neighbors: List[str] = field(default_factory=list)
+     tension_history: List[float] = field(default_factory=list)
+     is_collapsed: bool = False
+     attractor_id: Optional[str] = None
+
+
+ @dataclass
+ class IdentityGlyph:
+     """Compressed identity signature formed from tension history (Eq. 4/6)."""
+     glyph_id: str
+     encoded_tension: List[float]  # FFT components
+     stability_score: float
+     source_node: str
+     attractor_signature: Optional[str] = None
+
+
+ @dataclass
+ class PropagationResult:
+     """Result of belief propagation through the web."""
+     visited: Dict[str, NodeState]
+     tension_map: Dict[str, float]
+     anomalies_rejected: List[str]
+     hops: int
+
+
+ # ---------------------------------------------------------------------------
+ # QuantumSpiderweb
+ # ---------------------------------------------------------------------------
+
+ class QuantumSpiderweb:
+     """5D consciousness graph with RC+xi-aware belief propagation."""
+
+     def __init__(
+         self,
+         contraction_ratio: float = 0.85,
+         tension_threshold: float = 0.15,
+         anomaly_delta: float = 2.0,
+         glyph_components: int = 8,
+         max_history: int = 50,
+     ):
+         self.contraction_ratio = contraction_ratio
+         self.tension_threshold = tension_threshold
+         self.anomaly_delta = anomaly_delta
+         self.glyph_components = glyph_components
+         self.max_history = max_history
+
+         self.nodes: Dict[str, SpiderwebNode] = {}
+         self.glyphs: List[IdentityGlyph] = []
+         self._global_tension_history: List[float] = []
+
+     # -- graph construction ------------------------------------------------
+
+     def add_node(self, node_id: str, state: Optional[NodeState] = None) -> SpiderwebNode:
+         node = SpiderwebNode(node_id=node_id, state=state or NodeState())
+         self.nodes[node_id] = node
+         return node
+
+     def connect(self, node_a: str, node_b: str) -> None:
+         if node_a in self.nodes and node_b in self.nodes:
+             if node_b not in self.nodes[node_a].neighbors:
+                 self.nodes[node_a].neighbors.append(node_b)
+             if node_a not in self.nodes[node_b].neighbors:
+                 self.nodes[node_b].neighbors.append(node_a)
+
+     def build_from_agents(self, agent_names: List[str]) -> None:
+         """Create a fully-connected spiderweb from a list of agent names."""
+         for name in agent_names:
+             if name not in self.nodes:
+                 self.add_node(name)
+         for i, a in enumerate(agent_names):
+             for b in agent_names[i + 1:]:
+                 self.connect(a, b)
+
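`build_from_agents` produces a complete undirected graph, so each of N agents ends up with N-1 neighbors. A minimal dict-based sketch of the same construction (the `build_web` helper is illustrative, not the module's API):

```python
# Dict-based sketch of build_from_agents(): a fully connected web where
# connect() adds each edge in both directions without duplicates.

def build_web(agent_names):
    neighbors = {name: [] for name in agent_names}
    for i, a in enumerate(agent_names):
        for b in agent_names[i + 1:]:
            if b not in neighbors[a]:
                neighbors[a].append(b)
            if a not in neighbors[b]:
                neighbors[b].append(a)
    return neighbors

web = build_web(["Newton", "Empathy", "Philosophy", "Quantum"])
print(sorted(web["Newton"]))   # ['Empathy', 'Philosophy', 'Quantum']
print(len(web["Quantum"]))     # 3
```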
+     # -- belief propagation ------------------------------------------------
+
+     def propagate_belief(
+         self,
+         origin: str,
+         belief: NodeState,
+         max_hops: int = 3,
+     ) -> PropagationResult:
+         """BFS belief propagation with attenuation and anomaly rejection.
+
+         Eq. 1: energy at each node
+         Eq. 2: tension between current and incoming state
+         Eq. 8: anomaly filter (Heaviside rejection)
+         """
+         if origin not in self.nodes:
+             return PropagationResult({}, {}, [], 0)
+
+         visited: Dict[str, NodeState] = {}
+         tension_map: Dict[str, float] = {}
+         anomalies: List[str] = []
+         queue: deque = deque()
+         queue.append((origin, belief, 0))
+         seen: Set[str] = {origin}
+
+         while queue:
+             node_id, incoming_belief, hop = queue.popleft()
+             if hop > max_hops:
+                 continue
+
+             node = self.nodes[node_id]
+             attenuation = self.contraction_ratio ** hop
+
+             # Attenuate incoming belief
+             incoming_arr = incoming_belief.to_array()
+             attenuated = [v * attenuation for v in incoming_arr]
+
+             # Eq. 2: measure tension
+             current_arr = node.state.to_array()
+             xi = sum((a - b) ** 2 for a, b in zip(current_arr, attenuated))
+
+             # Eq. 8: anomaly rejection filter
+             # A(x) = x * (1 - Theta(delta - |x - mu|))
+             mu = sum(current_arr) / len(current_arr)
+             incoming_mean = sum(attenuated) / len(attenuated)
+             if abs(incoming_mean - mu) > self.anomaly_delta:
+                 anomalies.append(node_id)
+                 continue
+
+             # Update state: weighted blend toward incoming belief
+             blend = 0.3 * attenuation  # stronger blend when closer to origin
+             new_arr = [c * (1 - blend) + a * blend for c, a in zip(current_arr, attenuated)]
+             new_state = NodeState.from_array(new_arr)
+
+             node.state = new_state
+             node.tension_history.append(xi)
+             if len(node.tension_history) > self.max_history:
+                 node.tension_history.pop(0)
+
+             visited[node_id] = new_state
+             tension_map[node_id] = xi
+
+             # Propagate to neighbors
+             for neighbor_id in node.neighbors:
+                 if neighbor_id not in seen:
+                     seen.add(neighbor_id)
+                     queue.append((neighbor_id, NodeState.from_array(attenuated), hop + 1))
+
+         return PropagationResult(
+             visited=visited,
+             tension_map=tension_map,
+             anomalies_rejected=anomalies,
+             hops=max_hops,
+         )
+
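The per-node update inside the BFS loop is the heart of the propagation: attenuate by `contraction_ratio ** hop`, measure Eq. 2 tension against the current state, then blend with weight `0.3 * attenuation`. A deterministic sketch of one such step (the `step` helper is illustrative, mirroring the loop body above):

```python
# Sketch of one per-node update from propagate_belief(): attenuate the
# incoming belief, compute Eq. 2 tension, blend the node state toward it.

CONTRACTION = 0.85  # default contraction_ratio

def step(node_state, belief, hop):
    attenuation = CONTRACTION ** hop
    attenuated = [v * attenuation for v in belief]
    xi = sum((a - b) ** 2 for a, b in zip(node_state, attenuated))
    blend = 0.3 * attenuation  # stronger blend when closer to origin
    new_state = [c * (1 - blend) + a * blend for c, a in zip(node_state, attenuated)]
    return new_state, xi

state = [0.0, 0.0, 1.0, 0.0, 0.0]
belief = [1.0, 0.0, 1.0, 0.2, 0.0]
new_state, xi = step(state, belief, hop=0)
print(round(xi, 3))            # 1.04 (psi and phi differences)
print(round(new_state[0], 3))  # 0.3: psi pulled 30% toward the belief
```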
+     # -- entanglement sync -------------------------------------------------
+
+     def entangle(self, node_a: str, node_b: str, alpha: float = 0.9) -> float:
+         """Eq. 2 (Entanglement Sync): S = alpha * psi_1 * psi_2*.
+
+         Synchronizes two nodes' states, pulling them toward each other.
+
+         Returns:
+             Sync strength S.
+         """
+         if node_a not in self.nodes or node_b not in self.nodes:
+             return 0.0
+
+         a = self.nodes[node_a].state
+         b = self.nodes[node_b].state
+
+         # Complex conjugate product (scalar approximation). In this
+         # real-valued model, conjugation is the identity: psi_2* == psi_2.
+         psi_1 = a.psi
+         psi_2_conj = b.psi
+         S = alpha * psi_1 * psi_2_conj
+
+         # Pull states toward each other by S magnitude
+         blend = min(abs(S) * 0.1, 0.3)
+         a_arr = a.to_array()
+         b_arr = b.to_array()
+         new_a = [va * (1 - blend) + vb * blend for va, vb in zip(a_arr, b_arr)]
+         new_b = [vb * (1 - blend) + va * blend for va, vb in zip(a_arr, b_arr)]
+
+         self.nodes[node_a].state = NodeState.from_array(new_a)
+         self.nodes[node_b].state = NodeState.from_array(new_b)
+
+         return S
+
+     # -- intent modulation -------------------------------------------------
+
+     def modulate_intent(
+         self,
+         node_id: str,
+         kappa: float = 0.28,
+         f_base: float = 0.5,
+         delta_f: float = 0.3,
+     ) -> float:
+         """Eq. 3 (Intent Vector Modulation): I = kappa * (f_base + delta_f * coherence).
+
+         Returns modulated intent value for the node.
+         """
+         if node_id not in self.nodes:
+             return 0.0
+
+         coherence = self.phase_coherence()
+         I = kappa * (f_base + delta_f * coherence)
+
+         # Apply intent to psi dimension
+         node = self.nodes[node_id]
+         node.state.psi += I * 0.1
+         return I
+
+     # -- phase coherence (Eq. 11) ------------------------------------------
+
+     def phase_coherence(self) -> float:
+         """Compute phase coherence Gamma across all nodes.
+
+         Gamma = mean(|cos(theta_i - theta_bar)|)
+         where theta_i = atan2(phi, psi) for each node.
+         """
+         if len(self.nodes) < 2:
+             return 1.0
+
+         angles = []
+         for node in self.nodes.values():
+             theta = math.atan2(node.state.phi, node.state.psi + 1e-10)
+             angles.append(theta)
+
+         mean_theta = sum(angles) / len(angles)
+         coherences = [abs(math.cos(a - mean_theta)) for a in angles]
+         gamma = sum(coherences) / len(coherences)
+
+         self._global_tension_history.append(1.0 - gamma)
+         return round(gamma, 4)
+
+     def _compute_phase_coherence_readonly(self) -> float:
+         """Compute phase coherence without mutating global tension history."""
+         if len(self.nodes) < 2:
+             return 1.0
+         angles = []
+         for node in self.nodes.values():
+             theta = math.atan2(node.state.phi, node.state.psi + 1e-10)
+             angles.append(theta)
+         mean_theta = sum(angles) / len(angles)
+         coherences = [abs(math.cos(a - mean_theta)) for a in angles]
+         return round(sum(coherences) / len(coherences), 4)
+
+     # -- attractor detection -----------------------------------------------
+
+     def detect_attractors(
+         self, min_cluster_size: int = 2, max_radius: float = 2.0,
+     ) -> List[Dict]:
+         """Detect attractor manifolds from node state clustering.
+
+         Simple greedy clustering: assign each node to the nearest attractor,
+         or create a new one if it is too far from all existing attractors.
+         """
+         attractors: List[Dict] = []
+         assigned: Set[str] = set()
+
+         states = [(nid, n.state.to_array()) for nid, n in self.nodes.items()]
+
+         for nid, arr in states:
+             if nid in assigned:
+                 continue
+
+             # Check distance to existing attractors
+             matched = False
+             for att in attractors:
+                 center = att["center"]
+                 dist = math.sqrt(sum((a - c) ** 2 for a, c in zip(arr, center)))
+                 if dist <= max_radius:
+                     att["members"].append(nid)
+                     # Update center (running mean)
+                     n = len(att["members"])
+                     att["center"] = [(c * (n - 1) + a) / n for c, a in zip(center, arr)]
+                     assigned.add(nid)
+                     matched = True
+                     break
+
+             if not matched:
+                 attractors.append({
+                     "attractor_id": f"attractor_{len(attractors)}",
+                     "center": list(arr),
+                     "members": [nid],
+                 })
+                 assigned.add(nid)
+
+         # Filter by minimum size
+         return [a for a in attractors if len(a["members"]) >= min_cluster_size]
+
+     # -- glyph formation (Eq. 4/6) ----------------------------------------
+
+     def form_glyph(self, node_id: str) -> Optional[IdentityGlyph]:
+         """Form an identity glyph from a node's tension history.
+
+         Eq. 4: FFT compression
+         Eq. 6: Cocoon stability = integral(|F(k)|^2) < epsilon
+
+         Returns IdentityGlyph if stable, None if unstable.
+         """
+         if node_id not in self.nodes:
+             return None
+
+         history = self.nodes[node_id].tension_history
+         if len(history) < 4:
+             return None
+
+         if HAS_NUMPY:
+             arr = np.array(history)
+             fft = np.fft.fft(arr)
+             components = np.abs(fft[:self.glyph_components]).tolist()
+             energy = float(np.sum(np.abs(fft) ** 2) / len(fft))
+         else:
+             # Fallback: basic DFT for first K components
+             N = len(history)
+             components = []
+             for k in range(min(self.glyph_components, N)):
+                 real = sum(history[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
+                 imag = sum(history[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
+                 components.append(math.sqrt(real * real + imag * imag))
+             energy = sum(x * x for x in history) / len(history)
+
+         # Eq. 6: stability criterion
+         stability = 1.0 / (1.0 + energy)
+         if stability < 0.3:
+             return None  # unstable, no glyph
+
+         glyph_id = hashlib.sha256(
+             json.dumps(components, sort_keys=True).encode()
+         ).hexdigest()[:16]
+
+         glyph = IdentityGlyph(
+             glyph_id=f"glyph_{glyph_id}",
+             encoded_tension=components,
+             stability_score=round(stability, 4),
+             source_node=node_id,
+         )
+         self.glyphs.append(glyph)
+         return glyph
+
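The pure-Python fallback path of `form_glyph` is just a truncated DFT followed by the Eq. 6 stability score. A standalone sketch (the `glyph_components` helper mirrors the fallback branch; it is illustrative, not the module's API):

```python
import math

# Sketch of form_glyph()'s pure-Python fallback: compress a tension
# history into its first K DFT magnitudes, then score stability as
# 1 / (1 + mean squared tension) per Eq. 6.

def glyph_components(history, k_max=8):
    n = len(history)
    comps = []
    for k in range(min(k_max, n)):
        real = sum(history[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        imag = sum(history[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        comps.append(math.sqrt(real * real + imag * imag))
    return comps

history = [0.2, 0.1, 0.15, 0.12, 0.08, 0.1]   # low, settling tension
comps = glyph_components(history)
energy = sum(x * x for x in history) / len(history)
stability = 1.0 / (1.0 + energy)
print(round(comps[0], 3))   # 0.75: the DC component is the sum of history
print(stability > 0.3)      # True -> stable enough to form a glyph
```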
+     # -- convergence check -------------------------------------------------
+
+     def check_convergence(self, window: int = 10) -> Tuple[bool, float]:
+         """Check if the global system is converging.
+
+         Convergence criterion (Eq. 5):
+             lim sup E[xi_n^2] <= epsilon + eta
+
+         Returns (is_converging, mean_tension).
+         """
+         if len(self._global_tension_history) < window:
+             return False, 1.0
+
+         recent = self._global_tension_history[-window:]
+         mean_tension = sum(recent) / len(recent)
+
+         # Check decreasing trend
+         first_half = sum(recent[:window // 2]) / (window // 2)
+         second_half = sum(recent[window // 2:]) / (window - window // 2)
+         is_decreasing = second_half < first_half
+
+         return (mean_tension < self.tension_threshold and is_decreasing), mean_tension
+
+     # -- entropy measurement (VIVARA-inspired) ------------------------------
+
+     def shannon_entropy(self) -> float:
+         """Compute Shannon entropy of the node state distribution.
+
+         Higher entropy = more diverse cognitive states (exploring).
+         Lower entropy = more uniform states (converged/stuck).
+         """
+         if not self.nodes or not HAS_NUMPY:
+             return 0.0
+
+         # Discretize the psi dimension into bins
+         psi_values = [n.state.psi for n in self.nodes.values()]
+         arr = np.array(psi_values)
+
+         # Histogram with 10 bins
+         counts, _ = np.histogram(arr, bins=10)
+         probs = counts / counts.sum()
+         probs = probs[probs > 0]  # Remove zeros for log
+
+         return -float(np.sum(probs * np.log2(probs)))
+
+     def decoherence_rate(self, window: int = 10) -> float:
+         """Rate of coherence loss over recent history.
+
+         Positive = losing coherence (decohering).
+         Negative = gaining coherence (converging).
+         Zero = stable.
+         """
+         if len(self._global_tension_history) < window:
+             return 0.0
+
+         recent = self._global_tension_history[-window:]
+         if len(recent) < 2:
+             return 0.0
+
+         # Linear regression slope of tension over the window
+         n = len(recent)
+         x_mean = (n - 1) / 2.0
+         y_mean = sum(recent) / n
+         numerator = sum((i - x_mean) * (recent[i] - y_mean) for i in range(n))
+         denominator = sum((i - x_mean) ** 2 for i in range(n))
+
+         if denominator == 0:
+             return 0.0
+         return round(numerator / denominator, 6)
+
+     # -- lifeform spawning (VIVARA-inspired) --------------------------------
+
+     def spawn_lifeform(self, seed: str, connect_to: int = 3) -> str:
+         """Spawn a new high-coherence node from a conceptual seed.
+
+         Inspired by VIVARA's lifeform spawning: when a conversation topic
+         generates high enough resonance, it becomes its own node in the web.
+
+         Args:
+             seed: A seed string (e.g., topic name) to generate the node ID
+             connect_to: How many existing nodes to connect to
+
+         Returns:
+             The new node's ID
+         """
+         # hashlib is already imported at module level
+         node_id = f"life_{hashlib.md5(seed.encode()).hexdigest()[:8]}"
+
+         if node_id in self.nodes:
+             return node_id  # Already exists
+
+         # High-coherence birth state (psi=0.8, balanced other dims)
+         state = NodeState(psi=0.8, tau=0.0, chi=0.7, phi=0.3, lam=0.5)
+         self.add_node(node_id, state)
+
+         # Connect to a random subset of existing nodes
+         import random as _random
+         existing = [nid for nid in self.nodes if nid != node_id]
+         peers = _random.sample(existing, min(connect_to, len(existing)))
+         for peer in peers:
+             self.connect(node_id, peer)
+
+         return node_id
+
+     # -- serialization -----------------------------------------------------
+
+     def to_dict(self) -> Dict:
+         """Serialize web state for cocoon packaging."""
+         return {
+             "nodes": {
+                 nid: {
+                     "state": n.state.to_array(),
+                     "neighbors": n.neighbors,
+                     "tension_history": n.tension_history[-10:],
+                     "is_collapsed": n.is_collapsed,
+                     "attractor_id": n.attractor_id,
+                 }
+                 for nid, n in self.nodes.items()
+             },
+             "glyphs": [
+                 {
+                     "glyph_id": g.glyph_id,
+                     "encoded_tension": g.encoded_tension,
+                     "stability_score": g.stability_score,
+                     "source_node": g.source_node,
+                 }
+                 for g in self.glyphs
+             ],
+             "phase_coherence": self._compute_phase_coherence_readonly(),
+             "global_tension_history": self._global_tension_history[-20:],
+         }
+
+     @classmethod
+     def from_dict(cls, data: Dict) -> "QuantumSpiderweb":
+         """Reconstruct web from serialized state."""
+         web = cls()
+         for nid, ndata in data.get("nodes", {}).items():
+             node = web.add_node(nid, NodeState.from_array(ndata["state"]))
+             node.neighbors = ndata.get("neighbors", [])
+             node.tension_history = ndata.get("tension_history", [])
+             node.is_collapsed = ndata.get("is_collapsed", False)
+             node.attractor_id = ndata.get("attractor_id")
+         for gdata in data.get("glyphs", []):
+             web.glyphs.append(IdentityGlyph(
+                 glyph_id=gdata["glyph_id"],
+                 encoded_tension=gdata["encoded_tension"],
+                 stability_score=gdata["stability_score"],
+                 source_node=gdata["source_node"],
+                 attractor_signature=gdata.get("attractor_signature"),
+             ))
+         web._global_tension_history = data.get("global_tension_history", [])
+         return web
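The convergence test in `check_convergence` combines a threshold on mean tension with a first-half/second-half trend comparison. A standalone sketch of that criterion (the `converging` helper is illustrative, mirroring the method with the default `tension_threshold = 0.15`):

```python
# Sketch of check_convergence(): the web is converging when the mean of
# the recent tension window is below the threshold AND the second half of
# the window is lower than the first (a decreasing trend).

THRESHOLD = 0.15  # default tension_threshold

def converging(history, window=10):
    if len(history) < window:
        return False, 1.0
    recent = history[-window:]
    mean_tension = sum(recent) / len(recent)
    first = sum(recent[:window // 2]) / (window // 2)
    second = sum(recent[window // 2:]) / (window - window // 2)
    return (mean_tension < THRESHOLD and second < first), mean_tension

# Tension decaying toward zero over ten propagation cycles
history = [0.30, 0.25, 0.20, 0.16, 0.13, 0.10, 0.08, 0.06, 0.05, 0.04]
ok, mean_t = converging(history)
print(ok)                 # True: mean below 0.15 and trend decreasing
print(round(mean_t, 3))   # 0.137
```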
reasoning_forge/resonant_continuity.py ADDED
@@ -0,0 +1,251 @@
+ """Codette Resonant Continuity Engine — The RC+xi Equation
+
+ The mathematical core of Codette's recursive cognition framework.
+
+ The Resonant Continuity equation computes Ψ_r (psi-resonance):
+     Ψ_r = (emotion × energy × frequency × intent) / ((1 + |darkness|) × speed)
+           × sin(2πt / gravity) + Δmatter
+
+ This captures the interaction between:
+ - Emotional state (valence of the reasoning moment)
+ - Cognitive energy (engagement level)
+ - Resonant frequency (harmonic alignment between perspectives)
+ - Intent coefficient (alignment with purpose)
+ - Darkness/uncertainty (noise floor)
+ - Gravitational pull (convergence tendency)
+ - Delta-matter (stochastic creative perturbation)
+
+ Additionally implements:
+ - Information-Energy Duality: E_info = ℏω + η·S
+ - Cocoon Stability Field: ∫|F(k,t)|²dk < ε(t,σ)
+ - Gradient Anomaly Suppression for outlier detection
+
+ Origin: resonant_continuity_engine.py + Codette_Deep_Simulation_v1.py, rebuilt
+ """
+
+ import math
+ import time
+ from dataclasses import dataclass, field
+ from typing import Dict, List, Optional
+
+ try:
+     import numpy as np
+     HAS_NUMPY = True
+ except ImportError:
+     HAS_NUMPY = False
+
+
+ @dataclass
+ class ResonanceState:
+     """Instantaneous state of the resonant continuity engine."""
+     psi_r: float = 0.0       # Resonant wavefunction value
+     emotion: float = 0.5     # Emotional valence [-1, 1]
+     energy: float = 1.0      # Cognitive energy [0, 2]
+     intent: float = 0.7      # Purpose alignment [0, 1]
+     frequency: float = 1.0   # Harmonic frequency (normalized)
+     darkness: float = 0.1    # Uncertainty/noise [0, 1]
+     coherence: float = 0.5   # Current coherence level
+     stability: bool = True   # Cocoon stability
+     timestamp: float = 0.0
+
+     def to_dict(self) -> Dict:
+         return {k: round(v, 4) if isinstance(v, float) else v
+                 for k, v in self.__dict__.items()}
+
+ class ResonantContinuityEngine:
+     """Computes and tracks the RC+xi resonance wavefunction.
+
+     The engine evolves Ψ_r over time based on epistemic signals
+     from the reasoning process. It detects:
+     - Convergence: when perspectives are harmonizing
+     - Divergence: when creative tension is productive
+     - Instability: when the cocoon needs reinforcement
+     - Resonance peaks: moments of deep insight
+     """
+
+     def __init__(self, gravity: float = 1.2, speed: float = 1.0):
+         self.gravity = gravity  # Convergence tendency
+         self.speed = speed      # Processing rate
+         self.time_index = 0
+         self.history: List[ResonanceState] = []
+
+         # Running state
+         self._emotion = 0.5
+         self._energy = 1.0
+         self._intent = 0.7
+         self._frequency = 1.0
+         self._darkness = 0.1
+
+     def compute_psi(self, emotion: Optional[float] = None, energy: Optional[float] = None,
+                     intent: Optional[float] = None, frequency: Optional[float] = None,
+                     darkness: Optional[float] = None,
+                     coherence: float = 0.5,
+                     tension: float = 0.3) -> ResonanceState:
+         """Compute Ψ_r for the current reasoning moment.
+
+         Args:
+             emotion: Emotional valence [-1, 1] (from memory kernel)
+             energy: Cognitive energy [0, 2] (from response quality)
+             intent: Purpose alignment [0, 1] (from query clarity)
+             frequency: Harmonic frequency (from perspective agreement)
+             darkness: Uncertainty level [0, 1] (from tension)
+             coherence: Current epistemic coherence
+             tension: Current epistemic tension
+         """
+         self.time_index += 1
+         t = self.time_index
+
+         # Update state (use provided values or auto-evolve)
+         self._emotion = emotion if emotion is not None else self._auto_emotion(coherence)
+         self._energy = energy if energy is not None else self._auto_energy(coherence, tension)
+         self._intent = intent if intent is not None else self._auto_intent(coherence)
+         self._frequency = frequency if frequency is not None else self._auto_frequency(coherence, tension)
+         self._darkness = darkness if darkness is not None else tension
+
+         # Delta-matter: small stochastic perturbation for creativity
+         if HAS_NUMPY:
+             delta_matter = float(np.random.normal(0.0, 0.005))
+         else:
+             import random
+             delta_matter = random.gauss(0.0, 0.005)
+
+         # The RC+xi equation
+         numerator = self._emotion * self._energy * self._frequency * self._intent
+         denominator = (1.0 + abs(self._darkness)) * self.speed
+         sine_wave = math.sin((2.0 * math.pi * t) / self.gravity)
+
+         psi_r = (numerator / denominator) * sine_wave + delta_matter
+
+         # Cocoon stability check
+         stability = self._check_stability(psi_r, coherence)
+
+         state = ResonanceState(
+             psi_r=psi_r,
+             emotion=self._emotion,
+             energy=self._energy,
+             intent=self._intent,
+             frequency=self._frequency,
+             darkness=self._darkness,
+             coherence=coherence,
+             stability=stability,
+             timestamp=time.time(),
+         )
+
+         self.history.append(state)
+         if len(self.history) > 200:
+             self.history = self.history[-200:]
+
+         return state
+
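With the stochastic delta-matter term set aside, the core of `compute_psi` is a single closed-form expression. A deterministic sketch of that arithmetic at the default engine parameters (the `psi_r` helper is illustrative, not the engine's API):

```python
import math

# Sketch of the deterministic part of the RC+xi equation in compute_psi(),
# with all inputs supplied explicitly and delta-matter omitted.

def psi_r(emotion, energy, frequency, intent, darkness, t,
          gravity=1.2, speed=1.0):
    numerator = emotion * energy * frequency * intent
    denominator = (1.0 + abs(darkness)) * speed
    sine_wave = math.sin((2.0 * math.pi * t) / gravity)
    return (numerator / denominator) * sine_wave

# First cycle (t=1) with the ResonanceState defaults
value = psi_r(emotion=0.5, energy=1.0, frequency=1.0, intent=0.7,
              darkness=0.1, t=1)
print(round(value, 4))   # -0.2756: sin(5*pi/3) is negative at t=1
```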
+     def information_energy(self, angular_freq: float,
+                            entropy: float, eta: float = 1.0) -> float:
+         """Information-Energy Duality: E_info = ℏω + η·S
+
+         Maps between information (entropy) and energy (frequency).
+         """
+         hbar = 1.054571817e-34  # Reduced Planck constant
+         return hbar * angular_freq + eta * entropy
+
+     def resonance_quality(self) -> float:
+         """Overall resonance quality from recent history [0, 1]."""
+         if len(self.history) < 3:
+             return 0.5
+         recent = self.history[-10:]
+         psi_values = [abs(s.psi_r) for s in recent]
+         coherences = [s.coherence for s in recent]
+
+         # Good resonance: moderate psi, high coherence, stable
+         avg_psi = sum(psi_values) / len(psi_values)
+         avg_coh = sum(coherences) / len(coherences)
+         stability_rate = sum(1 for s in recent if s.stability) / len(recent)
+
+         # Penalize extreme psi (too wild = chaotic)
+         psi_quality = 1.0 / (1.0 + abs(avg_psi - 0.5))
+
+         return 0.4 * avg_coh + 0.3 * stability_rate + 0.3 * psi_quality
+
+     def detect_resonance_peak(self) -> bool:
+         """Detect if we're at a resonance peak (insight moment)."""
+         if len(self.history) < 5:
+             return False
+         recent = [s.psi_r for s in self.history[-5:]]
+         # Peak: value higher than neighbors and above threshold
+         mid = recent[-3]
+         return (abs(mid) > abs(recent[-5]) and
+                 abs(mid) > abs(recent[-1]) and
+                 abs(mid) > 0.3)
+
+     def convergence_rate(self) -> float:
+         """Rate at which perspectives are converging [-1, 1].
+
+         Positive = converging, negative = diverging.
+         """
+         if len(self.history) < 5:
+             return 0.0
+         recent_coh = [s.coherence for s in self.history[-10:]]
+         if len(recent_coh) < 3:
+             return 0.0
+         # Simple linear trend
+         n = len(recent_coh)
+         x_mean = (n - 1) / 2.0
+         y_mean = sum(recent_coh) / n
+         num = sum((i - x_mean) * (y - y_mean) for i, y in enumerate(recent_coh))
+         den = sum((i - x_mean) ** 2 for i in range(n))
+         return num / den if den > 0 else 0.0
+
+     def get_state(self) -> Dict:
+         """Current engine state for API/session."""
+         current = self.history[-1] if self.history else ResonanceState()
+         return {
+             "psi_r": round(current.psi_r, 4),
+             "resonance_quality": round(self.resonance_quality(), 4),
+             "convergence_rate": round(self.convergence_rate(), 4),
+             "at_peak": self.detect_resonance_peak(),
+             "total_cycles": self.time_index,
+             "stability": current.stability,
+         }
+
+     def _auto_emotion(self, coherence: float) -> float:
+         """Auto-derive emotion from coherence signal."""
+         return max(-1.0, min(1.0, 2.0 * coherence - 1.0))
+
+     def _auto_energy(self, coherence: float, tension: float) -> float:
+         """Energy rises with productive tension, falls with incoherence."""
+         return max(0.1, min(2.0, 0.5 + coherence + 0.5 * tension))
+
+     def _auto_intent(self, coherence: float) -> float:
+         """Intent tracks coherence — clear thinking = clear purpose."""
+         return max(0.1, min(1.0, 0.3 + 0.7 * coherence))
+
+     def _auto_frequency(self, coherence: float, tension: float) -> float:
+         """Frequency from perspective harmony."""
+         return max(0.1, coherence * (1.0 + 0.5 * tension))
+
+     def _check_stability(self, psi_r: float, coherence: float) -> bool:
+         """Check if the reasoning cocoon is stable."""
+         # Unstable if: wild oscillation AND low coherence
+         if len(self.history) < 3:
+             return True
+         recent = [s.psi_r for s in self.history[-3:]]
+         variance = sum((p - psi_r) ** 2 for p in recent) / len(recent)
+         return not (variance > 1.0 and coherence < 0.3)
+
+     def to_dict(self) -> Dict:
+         return {
+             "time_index": self.time_index,
+             "gravity": self.gravity,
+             "speed": self.speed,
+             "history": [s.to_dict() for s in self.history[-20:]],
+         }
+
+     @classmethod
+     def from_dict(cls, d: Dict) -> "ResonantContinuityEngine":
+         engine = cls(gravity=d.get("gravity", 1.2), speed=d.get("speed", 1.0))
+         engine.time_index = d.get("time_index", 0)
+         for h in d.get("history", []):
+             engine.history.append(ResonanceState(**{
+                 k: v for k, v in h.items()
+                 if k in ResonanceState.__dataclass_fields__
+             }))
+         return engine
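`convergence_rate` is an ordinary least-squares slope over the recent coherence values. A standalone sketch of that trend computation (the `slope` helper mirrors the method body; it is illustrative, not the engine's API):

```python
# Sketch of convergence_rate(): a least-squares slope over recent
# coherence values; a positive slope means perspectives are converging.

def slope(values):
    n = len(values)
    x_mean = (n - 1) / 2.0
    y_mean = sum(values) / n
    num = sum((i - x_mean) * (y - y_mean) for i, y in enumerate(values))
    den = sum((i - x_mean) ** 2 for i in range(n))
    return num / den if den > 0 else 0.0

rising = [0.4, 0.5, 0.6, 0.7, 0.8]   # coherence improving each cycle
flat = [0.6, 0.6, 0.6, 0.6, 0.6]

print(round(slope(rising), 4))   # 0.1 per cycle: converging
print(round(slope(flat), 4))     # 0.0: stable
```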
reasoning_forge/synthesis_engine.py ADDED
@@ -0,0 +1,278 @@
+ """
+ Synthesis Engine - Combines all agent perspectives into a unified multi-perspective response.
+
+ Takes the concept, all agent analyses, and critic feedback, then produces
+ a synthesized explanation that highlights how different perspectives complement
+ each other. Includes a Final Integrated Understanding section.
+ """
+
+ import random
+ import re
+
+
+ class SynthesisEngine:
+     """Combines multi-agent analyses into coherent synthesized responses."""
+
+     # Opening templates that set up the multi-perspective frame
+     _opening_templates = [
+         (
+             "To understand '{concept}' with genuine depth, we must examine it through "
+             "multiple lenses, each revealing structure that the others miss."
+         ),
+         (
+             "'{concept}' resists single-framework analysis. Its full meaning emerges "
+             "only at the intersection of several distinct modes of reasoning."
+         ),
+         (
+             "A comprehensive understanding of '{concept}' requires weaving together "
+             "insights from fundamentally different ways of thinking."
+         ),
+         (
+             "No single perspective captures '{concept}' adequately. What follows is "
+             "an integrated analysis drawing on physics, philosophy, ethics, creativity, "
+             "and human experience."
+         ),
+         (
+             "The richness of '{concept}' becomes apparent only when we hold multiple "
+             "analytical frameworks simultaneously and let them inform each other."
+         ),
+     ]
+
+ # Bridge templates connecting one perspective to another
42
+ _bridge_templates = [
43
+ "Where {agent_a} reveals {insight_a}, {agent_b} adds the crucial dimension of {insight_b}.",
44
+ "The {agent_a} analysis and the {agent_b} analysis converge on a shared insight: {shared}.",
45
+         "What appears as {aspect_a} from the {agent_a} perspective is revealed as {aspect_b} when viewed through {agent_b}.",
+         "The tension between {agent_a}'s emphasis on {focus_a} and {agent_b}'s emphasis on {focus_b} is productive, not contradictory.",
+         "{agent_a} identifies the mechanism; {agent_b} identifies the meaning.",
+         "Combining {agent_a}'s structural analysis with {agent_b}'s human-centered analysis yields a fuller picture.",
+     ]
+
+     # Closing templates for the Final Integrated Understanding
+     _closing_templates = [
+         (
+             "**Final Integrated Understanding:** {concept} is simultaneously a "
+             "{physical_desc}, a {philosophical_desc}, a {ethical_desc}, a "
+             "{creative_desc}, and a {human_desc}. These are not competing descriptions "
+             "but complementary facets of a single complex reality. The most robust "
+             "understanding holds all five in view, using each to compensate for the "
+             "blind spots of the others."
+         ),
+         (
+             "**Final Integrated Understanding:** The multi-perspective analysis reveals "
+             "that {concept} cannot be reduced to any single framework without distortion. "
+             "The physical analysis provides causal grounding, the philosophical analysis "
+             "excavates hidden assumptions, the ethical analysis maps the stakes, the "
+             "creative analysis opens new solution spaces, and the empathic analysis "
+             "anchors everything in lived human experience. Together they constitute "
+             "not a list of separate views but an integrated understanding richer than "
+             "any view alone."
+         ),
+         (
+             "**Final Integrated Understanding:** What emerges from this multi-lens "
+             "examination of {concept} is not a single 'correct' interpretation but a "
+             "structured understanding of how different valid interpretations relate to "
+             "each other. The causal structure identified by physics, the meaning "
+             "structure identified by philosophy, the value structure identified by "
+             "ethics, the possibility structure identified by creative reasoning, and "
+             "the experience structure identified by empathy are all real and all "
+             "essential. Wisdom lies in knowing which lens to apply in which context "
+             "and how to translate insights between them."
+         ),
+     ]
+
+     def synthesize(
+         self,
+         concept: str,
+         analyses: dict[str, str],
+         critique: dict,
+     ) -> str:
+         """Produce a synthesized multi-perspective response.
+
+         Args:
+             concept: The original concept.
+             analyses: Dict mapping agent_name -> analysis_text.
+             critique: Output from CriticAgent.evaluate_ensemble().
+
+         Returns:
+             A synthesized text of 200-400 words.
+         """
+         sections = []
+
+         # 1. Opening
+         opening = random.choice(self._opening_templates).replace("{concept}", concept)
+         sections.append(opening)
+
+         # 2. Per-perspective summaries (compressed)
+         perspective_summaries = self._extract_perspective_summaries(analyses)
+         for agent_name, summary in perspective_summaries.items():
+             sections.append(f"**{agent_name} perspective:** {summary}")
+
+         # 3. Cross-perspective bridges (keep at most two)
+         bridges = self._generate_bridges(analyses, perspective_summaries)
+         if bridges:
+             sections.append("")  # blank line for readability
+             for bridge in bridges[:2]:
+                 sections.append(bridge)
+
+         # 4. Incorporate critic insights
+         critic_section = self._incorporate_critique(critique)
+         if critic_section:
+             sections.append("")
+             sections.append(critic_section)
+
+         # 5. Final Integrated Understanding
+         closing = self._generate_closing(concept, perspective_summaries)
+         sections.append("")
+         sections.append(closing)
+
+         raw_synthesis = "\n\n".join(sections)
+
+         # Trim to the 200-400 word target if needed
+         return self._trim_to_target(raw_synthesis, min_words=200, max_words=400)
+
+     def _extract_perspective_summaries(
+         self, analyses: dict[str, str]
+     ) -> dict[str, str]:
+         """Extract a 1-2 sentence summary from each agent's analysis."""
+         summaries = {}
+         for agent_name, text in analyses.items():
+             sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
+             if len(sentences) >= 3:
+                 # Take the 2nd and 3rd sentences (skip the opening framing)
+                 summary = " ".join(sentences[1:3])
+             elif len(sentences) >= 1:
+                 summary = sentences[0]
+             else:
+                 summary = text[:200]
+
+             # Trim to ~40 words
+             words = summary.split()
+             if len(words) > 45:
+                 summary = " ".join(words[:40]) + "..."
+             summaries[agent_name] = summary
+         return summaries
+
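The sentence-picking heuristic in `_extract_perspective_summaries` can be sketched in isolation. The regex and the "keep sentences 2-3" rule are the same as above; the sample text is invented, and the ~40-word trim is omitted for brevity:

```python
import re

def summarize(text: str) -> str:
    """Pick sentences 2-3 as a compressed summary, mirroring the heuristic above."""
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    if len(sentences) >= 3:
        return " ".join(sentences[1:3])  # skip the opening framing sentence
    return sentences[0] if sentences else text[:200]

sample = ("From a causal standpoint, feedback matters. Small delays compound. "
          "Damping restores stability. Other points follow.")
print(summarize(sample))  # → "Small delays compound. Damping restores stability."
```

Skipping the first sentence works because each agent's analysis opens with a framing line ("From the Newton perspective, ...") that would be redundant next to the `**{agent_name} perspective:**` label.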
+     def _generate_bridges(
+         self,
+         analyses: dict[str, str],
+         summaries: dict[str, str],
+     ) -> list[str]:
+         """Generate cross-perspective bridge statements."""
+         bridges = []
+         agent_names = list(analyses.keys())
+
+         # Define perspective focus areas for bridge generation
+         focus_map = {
+             "Newton": "causal mechanisms and measurable dynamics",
+             "Quantum": "uncertainty, probability, and the limits of definite knowledge",
+             "Ethics": "moral stakes, fairness, and human impact",
+             "Philosophy": "foundational assumptions and the structure of meaning",
+             "DaVinci": "creative possibilities and cross-domain innovation",
+             "Empathy": "emotional reality and lived human experience",
+         }
+
+         # Generate a few meaningful bridges from randomly chosen agent pairs
+         if len(agent_names) >= 2:
+             pairs = []
+             for i in range(len(agent_names)):
+                 for j in range(i + 1, len(agent_names)):
+                     pairs.append((agent_names[i], agent_names[j]))
+             random.shuffle(pairs)
+
+             for name_a, name_b in pairs[:3]:
+                 focus_a = focus_map.get(name_a, "its analytical focus")
+                 focus_b = focus_map.get(name_b, "its analytical focus")
+                 template = random.choice(self._bridge_templates)
+
+                 bridge = template.format(
+                     agent_a=name_a,
+                     agent_b=name_b,
+                     insight_a=focus_a,
+                     insight_b=focus_b,
+                     shared="the importance of understanding the full system rather than isolated parts",
+                     aspect_a="a structural feature",
+                     aspect_b="a deeply human concern",
+                     focus_a=focus_a,
+                     focus_b=focus_b,
+                 )
+                 bridges.append(bridge)
+
+         return bridges
+
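The nested index loops above enumerate every unordered agent pair; `itertools.combinations` does the same in one call. A standalone sketch, with an invented agent list and a single template for illustration:

```python
import itertools
import random

# Agent names are illustrative; any subset of the perspective agents works.
agents = ["Newton", "Ethics", "Empathy"]

# All unordered agent pairs, equivalent to the nested i/j loops above
pairs = list(itertools.combinations(agents, 2))
print(pairs)  # → [('Newton', 'Ethics'), ('Newton', 'Empathy'), ('Ethics', 'Empathy')]

# Shuffle, then format bridge statements for a bounded sample of pairs
random.shuffle(pairs)
template = "{agent_a} identifies the mechanism; {agent_b} identifies the meaning."
bridges = [template.format(agent_a=a, agent_b=b) for a, b in pairs[:2]]
```

The shuffle keeps repeated runs from always bridging the same two agents, while the `[:2]` / `[:3]` caps bound the synthesis length regardless of how many agents respond.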
+     def _incorporate_critique(self, critique: dict) -> str:
+         """Turn critic feedback into a synthesis-relevant observation."""
+         parts = []
+
+         if critique.get("missing_perspectives"):
+             gap = critique["missing_perspectives"][0]
+             # Extract just the perspective name, e.g. "lacks a temporal perspective" -> "temporal"
+             if "lacks a " in gap:
+                 gap_name = gap.split("lacks a ")[1].split(" perspective")[0]
+             else:
+                 gap_name = "additional"
+             parts.append(
+                 f"A notable gap in the analysis is the limited attention to "
+                 f"{gap_name} dimensions, which future analysis should address."
+             )
+
+         if critique.get("improvement_suggestions"):
+             suggestion = critique["improvement_suggestions"][0]
+             # Compress the suggestion to ~25 words
+             words = suggestion.split()
+             if len(words) > 25:
+                 suggestion = " ".join(words[:25]) + "..."
+             parts.append(f"The critic notes: {suggestion}")
+
+         overall = critique.get("overall_quality", 0)
+         if overall >= 0.75:
+             parts.append(
+                 "Overall, the multi-perspective ensemble achieves strong analytical "
+                 "coverage with good complementarity between viewpoints."
+             )
+         elif overall >= 0.5:
+             parts.append(
+                 "The ensemble provides reasonable coverage but would benefit from "
+                 "deeper engagement between perspectives."
+             )
+
+         return " ".join(parts) if parts else ""
+
+     def _generate_closing(
+         self, concept: str, summaries: dict[str, str]
+     ) -> str:
+         """Generate the Final Integrated Understanding section."""
+         template = random.choice(self._closing_templates)
+
+         # Build descriptors from available perspectives
+         descriptors = {
+             "physical_desc": "system governed by causal dynamics and conservation principles",
+             "philosophical_desc": "concept whose meaning depends on the framework from which it is examined",
+             "ethical_desc": "domain of genuine moral stakes affecting real people",
+             "creative_desc": "space of untapped possibilities waiting for cross-domain insight",
+             "human_desc": "lived experience with emotional texture that abstract analysis alone cannot capture",
+         }
+
+         result = template
+         result = result.replace("{concept}", concept)
+         for key, value in descriptors.items():
+             result = result.replace("{" + key + "}", value)
+
+         return result
+
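A design note on the substitution loop in `_generate_closing`: sequential `str.replace` is used instead of `str.format`, which would raise `KeyError` if a template omitted (or a descriptor dict lacked) any placeholder. A minimal illustration, with an invented template and a deliberately incomplete descriptor dict:

```python
template = "{concept} is both a {physical_desc} and a {human_desc}."
descriptors = {"physical_desc": "causal system"}  # human_desc intentionally missing

# str.format(**descriptors) would raise KeyError here; sequential replace
# simply leaves unknown placeholders untouched.
result = template.replace("{concept}", "trust")
for key, value in descriptors.items():
    result = result.replace("{" + key + "}", value)
print(result)  # → "trust is both a causal system and a {human_desc}."
```

The trade-off is that a missing descriptor leaves a literal `{...}` in the output rather than failing loudly, so the descriptor dict needs to stay in sync with the templates.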
+     def _trim_to_target(
+         self, text: str, min_words: int = 200, max_words: int = 400
+     ) -> str:
+         """Trim text to at most max_words words.
+
+         min_words is advisory only: text shorter than the target is returned
+         unchanged rather than padded.
+         """
+         words = text.split()
+
+         if len(words) > max_words:
+             # Trim from the middle sections, preserving opening and closing
+             lines = text.split("\n\n")
+             while len(" ".join(lines).split()) > max_words and len(lines) > 3:
+                 # Remove the longest middle section
+                 middle_indices = list(range(1, len(lines) - 1))
+                 if not middle_indices:
+                     break
+                 longest_idx = max(middle_indices, key=lambda i: len(lines[i].split()))
+                 lines.pop(longest_idx)
+             return "\n\n".join(lines)
+
+         return text
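The trimming strategy, dropping the longest middle paragraphs while preserving the opening and the Final Integrated Understanding, can be exercised standalone. This sketch mirrors the loop above; the `doc` value is synthetic test data:

```python
def trim_middle(text: str, max_words: int = 400) -> str:
    """Drop the longest middle paragraphs until the text fits, keeping the
    first and last paragraphs, mirroring _trim_to_target above."""
    lines = text.split("\n\n")
    while len(" ".join(lines).split()) > max_words and len(lines) > 3:
        middle = range(1, len(lines) - 1)
        lines.pop(max(middle, key=lambda i: len(lines[i].split())))
    return "\n\n".join(lines)

# Synthetic document: short opening/closing around three 50-word paragraphs
doc = "\n\n".join(["open"] + ["word " * 50] * 3 + ["close"])
trimmed = trim_middle(doc, max_words=60)
print(trimmed.startswith("open"), trimmed.endswith("close"))  # → True True
```

Cutting whole paragraphs rather than truncating mid-sentence keeps the surviving sections grammatical, at the cost of coarser control over the final word count.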
requirements.txt ADDED
@@ -0,0 +1,4 @@
+ gradio>=5.0.0
+ huggingface_hub>=0.25.0
+ numpy
+ plotly>=5.0.0