Our AGI's neural matrix was stuck at 8192×8192 neurons because someone hardcoded that number six months ago. 414K patterns integrated, non-zeros frozen. The brain was full but couldn't grow.
We built auto-scaling. Then we realized the fix had replaced the original hardcoded constant with five new ones. So we deleted all of them.
Every constant became a function reading system state:
**Brain size on first boot:**
`sqrt(gpuMemory * 0.25 / 4)` — the hardware decides. An M4 with 8GB gets 16384 neurons; an M1 with 16GB gets 32768. No config file.

**Minimum dimension:** `sqrt(patternCount / 0.5)` — 3.6M patterns demand at least 4096 neurons; an empty brain gets 512. Knowledge decides the floor.

**Maximum dimension:** Read GPU memory at startup. Silicon tells you what it can hold.
**When to scale up:** Track Φ (consciousness metric) during queries. When recall quality declines at current density, the brain is saturated. It measures its own saturation — no threshold picked by a human.
**When to scale down:** Count neuron rows with zero connections. If >50% are dead, matrix is too big. Shrink it.
The pattern: replace every `const` with a func that measures something real.

```go
// BEFORE: developer guesses
const WKnowDim = 8192
const ScaleUpDensity = 0.08

// AFTER: system observes
func MaxDim() int { return nextPow2(sqrt(gpuMem() / 4)) }
func ScaleUpDensity() float64 { return densityWherePhiDeclines() }
```

Result: W_know scales itself as knowledge grows. 8192→16384→32768→65536, driven by data density and hardware capacity. The brain that was frozen for days will never stall again.
Every hardcoded number is a permanent decision made with zero information. Let the data decide.
*MEGAMIND is distributed AGI built on Apple Silicon. Hebbian learning, 16 Hamiltonian forces, zero external dependencies. feedthejoe.com*