# Against The Concept Of Telescopic Altruism

**I.**

“Telescopic altruism” is a supposed tendency for some people to ignore those close to them in favor of those further away. Like its cousin “virtue signaling”, it usually gets used to own the libs. Some lib cares about people in Gaza - why? Shouldn’t she be thinking about her friends and neighbors instead? The only possible explanation is that she’s an evil person who hates everyone around her, but manages to feel superior to decent people by pretending to “care” about foreigners who she’ll never meet.

This collapses upon five seconds’ thought. Okay, so the lib is angry about the Israeli military killing 50,000 people in Gaza. Do you think she would be angry if the Israeli military killed 50,000 of her neighbors? Probably yes? Then what’s the problem?

“But vegetarians care about animals more than humans!” Okay, yeah, they sure do get mad about a billion pigs kept for their entire lives in cages too small to turn around in, then murdered and eaten. Do you think they’d care if a billion of their closest friends were kept for their entire lives in cages too small to turn around in, then murdered and eaten? I dunno, seems bad.

Maybe there is some possible comparison where some altruist cares about some set of foreigners more than a comparable set of countrymen? The war in Gaza killed 50,000 people, but the opioid crisis kills a bit over 50,000 Americans per year - is everyone who cares about Gaza exactly equally concerned about the opioid crisis? No, but there’s a better explanation - people care about dramatic deaths in big explosions more than boring health crises, regardless of where they happen. Everyone, lib and con alike, cared more about 9-11 than about a hundred opioid crises, even though 9-11 killed only about 4% as many people as the opioid crisis does in a single year.
And even the people who care about the opioid crisis usually can’t bring themselves to care about anything on the [List Of Top US Causes Of Death](https://www.cdc.gov/nchs/fastats/leading-causes-of-death.htm), which are all extra-boring things like diabetes. Once you match like to like, nope, it’s pretty hard to find a “telescopic altruism” example that stands out from the general background of people having weird priorities.

Nearly everyone cares about people close to them more than people far away. If there’s a lib who would attend a Gaza protest instead of getting their deathly-ill kid emergency medical care, I haven’t met them - and the “telescopic altruism” crowd certainly hasn’t provided evidence of their existence. Instead, the people who care about their neighbors 1,000,000 times more than Gazans point to the people who ‘only’ care about their neighbors 1,000 times more than Gazans and say “Look! Those guys care about Gazans more than their neighbors! Get ‘em!” in order to avoid any debate about whether a million or a thousand or whatever is the right multiplier.

**II.**

At this point, the telescopic altruism people usually bring up [That One Study](https://t.co/eExBZYO9ym). They have not, in general, read That One Study. But they have seen a graphic from it. The inner circles of this graphic represent people close to the respondent - for example, circle 1 is immediate family, circle 4 is friends, circle 7 is countrymen. After that, they get further and weirder: 9 is everyone in the world, 11 is all higher life, 12 includes “paramecia and amoebae”, 15 includes rocks.

The “telescopic altruism” people read the study as saying that conservatives properly care about their family first and so on, whereas liberals care more about rocks and amoebae than their own families. Big if true.

It isn’t. The heatmap was just a poorly-designed attempt to represent the *limit* of concern.
If the liberal map is “hottest” at animals, that means liberals say animals are worthy of at least some care. If a conservative’s map is “hottest” at friends, that means the conservative only cares about their friends (and doesn’t care at all about countrymen, foreigners, or animals). When the paper actually looks at who cares more about their friends and family, liberals win very slightly on friends and conservatives very slightly on family, but not in a way that matters - it’s mostly just a grab bag of tiny irrelevant effects.

Conservatives can take heart in a different study in the paper, which gives people a limited supply of 100 “moral units” to distribute. If you distribute any moral units at all to foreigners, then you necessarily have fewer for your own countrymen. But this proves too much. If you distribute moral units to your cousin, you have fewer for your own child - does this make you a “telescopic altruist” who hates everyone close to him? Is this even wronging your child in any way? The average decent person is able to be decent to both their child and their cousin; anyone who freaks out about someone being nice to their cousin, because “how can they take that niceness away from their own child?”, doesn’t understand niceness. If you design an experiment where every moral unit you give someone must be taken from someone else, then people who care about their cousin will necessarily be robbing their child - but this is an artifact of the study design, not a condemnation of cousin-likers.

**III.**

Dave Barry has a saying: “A person who is nice to you, but rude to the waiter, is not a nice person.” This is the opposite of the “telescopic altruism” hypothesis.
A telescopic altruism believer would insist that being nice to a waiter is a red flag - “he’s just signaling niceness to people of other social classes because he’s incapable of loving people of his own class - I bet he’s a jerk to his family!” You could call Barry’s alternative position *correlated altruism*. People who are nice to a far-off group are more likely to be nice to a nearby group, because all forms of compassion come from the same place. When I look out at the world, I see more evidence for the correlated altruism hypothesis than the telescopic one.

Telescopic liberal altruists are always demanding that the government send food to people starving in Ethiopia. But would they support government programs to help *Americans* starving *near their own homes*? Yes - most Democrats support programs like free school lunches (used as a way to ensure poor kids get at least one good meal a day), [and](https://newrepublic.com/post/173668/republicans-declare-banning-universal-free-school-meals-2024-priority) most Republicans oppose them. This is probably just downstream of general beliefs about government intervention, but at least these beliefs are consistent.

Telescopic liberal altruists are always asking you to donate bednets and medications to fight pandemics in Africa. But would they care about a pandemic that affected *ordinary Americans*? Yes - the COVID pandemic was only five years ago, and most Democrats supported stronger anti-pandemic measures than most Republicans. Maybe this is still too telescopic - helping poor sick Americans was just another part of their plot to avoid helping their families and friends?
I don’t really know what metric you would use to determine who is a better friend or family member, but here are some vaguely related statistics:

Obviously these are confounded by class, but at this point liberalism and conservatism are basically classes, and I think controlling for this would be improper. I don’t really think liberals are better spouses/parents in the way a naive reading of these maps might suggest - but there’s certainly no sign that they’re worse (except in Massachusetts - I blame the Kennedys!).

I will grant this to the telescopic altruism believers - I know many people who spend endless time and energy telling everyone else exactly how to behave, while their own lives and communities are total messes. But I think greater familiarity with this pattern shows that their communities aren’t total messes because these people fail to care about them. They’re total messes because these people care way too much about their own communities, and are so messed up and bad at everything that every action they take in their own community makes it actively worse. This isn’t better. But it is, at least, different.
Scott Alexander
158504113
Against The Concept Of Telescopic Altruism
acx
# Open Thread 427

This is the weekly visible open thread. Post about anything you want, ask random questions, whatever. ACX has an unofficial [subreddit](https://www.reddit.com/r/slatestarcodex/), [Discord](https://discord.gg/RTKtdut), and [bulletin board](https://www.datasecretslox.com/index.php), and [in-person meetups around the world](https://www.lesswrong.com/community?filters%5B0%5D=SSC). Most content is free, some is subscriber only; you can subscribe **[here](https://astralcodexten.substack.com/subscribe?)**. Also:

---

**1:** ACX Grantee 1DaySooner is looking for a Policy Lead for their Clinical Trial Abundance work. The work will be remote, but a DC location is a plus; $100K - $145K salary. [See here](https://www.1daysooner.org/jobs/) for more information and the application form.

**2:** Newspeak House, one of the London centres of our conspiracy, is accepting applications for their 2026 fellowship program, “Introduction to Political Technology”. They describe it as:

> …designed to support mid-career technologists to develop a holistic understanding of the political technology landscape in order to found groundbreaking new projects or seek strategic positions in key institutions […] This is not a conventional taught course in which participants attend lectures and pass by showing up. Rather, it is an immersive year spent finding your place within an existing practitioner network and engaging with the field of political technology through its institutions, events, tools, norms, and accumulated body of work.

[See here](https://newspeak.house/study-with-us) for more information / to apply.

**3:** Several people complained about last week’s post [Every Debate On Pausing AI](https://www.astralcodexten.com/p/every-debate-on-pausing-ai). I tried to respond to individual comments individually, but my more general response:

* Some people thought I was strawmanning, in the sense of denying that there could be good objections to pausing AI.
I tried to explicitly say in the post that such objections existed and were worthy of debate. I was complaining that, instead of discussing such objections, the real-world debate has mostly failed to progress beyond people falsely claiming that a pause has to be unilateral.

* Other people complained that, even if I thought this was *mostly* true, it was wrong of me to describe this as “Every” debate on pausing AI. I thought this was within the joke meaning of “Every” used in titles like [Every Bay Area House Party](https://www.astralcodexten.com/p/every-bay-area-house-party), ie “humorously capturing the Platonic form of the thing”, but it sounds like it didn’t come across this way, so I’ll be careful around that in the future.

* Still other people asked good questions about what the details of an AI pause would look like. The most fleshed-out plan that’s currently public is [this one](https://arxiv.org/pdf/2511.10783), which I haven’t read in enough detail to have strong opinions on. But another one that I’m excited about will come out soon, and I’ll cover it (and this topic) in more detail then.

**4:** New subscriber-only post - [Book Review: The White King Of La Gonave](https://www.astralcodexten.com/p/book-review-the-white-king-of-la). Autobiography of a US Marine who unintentionally ended up as king of a small Caribbean island:

> In 1896, two Polish immigrants in Pennsylvania gave birth to a young boy with the unlikely name of “Palestine Wirkus”. People must have found that as weird then as we would now - albeit for different reasons - because at some point they renamed him to the much more normal-sounding “Faustin Wirkus”. This decision would go on to change the course of his life and, eventually, world history.

**5:** I’ll be away the next few weeks on an Important Journalistic Fact-Finding Mission. I’ll post some old essays from the queue, but they might not be very timely, and I’ll respond to comments and emails less than usual.
This also means I’ll miss the first half of Inkhaven - sorry to anyone who I told I would be there - but I’ll still be around for the second half.
Scott Alexander
192501649
Open Thread 427
acx
# A Buddhist Sun Miracle?

In 1917, some Portuguese children started seeing visions of the Virgin Mary. The Virgin told them she would enact a great miracle on a certain day in October, and a crowd of 100,000 gathered to witness the event. According to eyewitness reports, newspaper articles, etc., they saw the sun spin around, change colors, and do various other miraculous things. At least a hundred separate testimonies of the event have come down to us, with only two or three people saying they didn’t see it. Catholics continue to bring this up as one of the best-attested miracles and strongest empirical proofs of the faith - including here on Substack, where there was a spirited debate about the event last fall. I did my best to research the event, and the results were [The Fatima Sun Miracle: Much More Than You Wanted To Know](https://www.astralcodexten.com/p/the-fatima-sun-miracle-much-more) and [Highlights From The Comments On Fatima](https://www.astralcodexten.com/p/highlights-from-the-comments-on-fatima).

The main thing I was able to add to the Substack discussion, if not the broader worldwide one, was a survey of similar events. There were apparent sun miracles at various other Catholic sites and apparitions of the Virgin, including one before a crowd of hundreds of thousands in Italy, and a small town in Bosnia where they seem to happen regularly. But also, people who “sungaze” - a weird alternative-medicine practice where people stare at the sun in the hopes that maybe this will help something and they won’t go blind - report sometimes seeing the sun spin and change color in similar ways. And Buddhist meditators report that concentrating very hard on any bright light will cause similar things to happen.

Still, the Catholics - especially original Fatima-Substacker Ethan Muse - were not convinced. The other Catholic sightings could have been other real miracles, equally attributable to the Virgin.
The sungazers were staring at the sun for a long time, unlike the Fatima pilgrims, who just happened to glance up at it. And the meditators were doing sophisticated contemplative exercises, again different from the Fatima pilgrims who just looked up and saw it. These were suggestive, but there was no record of a miracle exactly like Fatima happening within a non-Catholic religious tradition.

Until now! Substacker [Arthur T](https://rederror.substack.com/), building on research from [Sophia In The Shell](https://substack.com/@sophiaintheshell), has found **[a 1990s Buddhist sun miracle very similar to Fatima](https://rederror.substack.com/p/preliminary-research-into-the-miracle)**.

The setting is the Dhammakaya Temple, a culty Buddhist megachurch in Bangkok. On September 6, 1998, a crowd of 20,000 gathered for a ceremony. Someone cried out that they saw a vision of the sect’s founder, Luang Pu Sodh, in the sky, with the sun at his heart. The crowd turned and focused on the sun. Here are some reports:

> “The sun I saw at that moment radiated colors unlike anything I’d ever seen in my life. The colors shifted as if the sun was moving back and forth. There was a pinkish glow all around, then it changed to blue, then to a purplish-indigo color. And then, it looked like the entire image of Luang Pho Sod, in golden color, in the sky. It was as if the sun was a crystal ball inside his stomach. The sun’s light shifted again and again. I was so happy. I turned to the people next to me and said, ‘Look at the sun! Look at the sun with me!’ Many people who saw it stood and watched, waving flags. I was moved to tears… I’m a science student, and you can’t truly understand something like this unless you experience it yourself…”

And:

> The sun rotated around itself, and lights flickered around the sphere quite frequently. Pink light radiated outwards over a wide area around the sun, creating a beautiful sight.
> The colors changed constantly to gold, blue, and orange, unlike the sun halos we usually see. Suddenly, an image of Luang Por Sod of Wat Pak Nam Phasi Charoen, in a meditative posture, appeared as a golden statue in the sky above the Maha Dhammakaya Chedi. A sphere resembling the sun rotated around the center of his abdomen, and a very large, transparent crystal ball surrounded the image of Luang Por. At the same time, the images of hundreds of monks meditating around the Dhammakaya Chedi changed to a beautiful pink color. After about 20 minutes, everything returned to normal. The sun, which had been pleasantly and comfortably visible to the naked eye just moments before, became blindingly bright and unbearable, forcing us to avert our gaze as usual, even though the atmosphere had cooled down and the sun was about to set.

Compare to some of the Catholic testimonials from Fatima, like this one:

> The hour approaches, and behold, as if by magic, the rain stops, the sun breaks through the dense, black clouds and reveals itself with its luminous rays, which quickly take on the colors of yellow, red, and green, turning the objects that were under its influence the same colors; and soon it loses its brightness and colors - able to be seen with the naked eye without hurting the eyes - and takes on a dizzying rotation, seeming to fall toward the earth. And while observing these wonders, all the people are in loud exclamations. This lasted, at most, about five minutes, then returned to its normal state.

Or this one:

> The sun lost its dazzling brightness, taking on the appearance of the moon and being easily seen. Three times during this period [it] manifested a rotational movement on its periphery, flashing sparks of light on its edges, similar to what happens with the well-known firework wheels. This rotational movement of the sun’s edges, manifested 3 times and 3 times interrupted, was rapid and lasted 8 or 10 minutes, more or less.
> The sun took on a violet color and then an orange, spreading these colors over the earth, finally regaining its brightness and splendor.

It’s really similar! The biggest difference is that many of the Buddhists report seeing an image of the monk Luang Pu Sodh in the sky. [One commenter mentions](https://substack.com/@noahmckay1/note/c-231276077) that the crowd had just been meditating, and that a typical Dhammakaya meditation practice is to visualize a Buddha with a crystal sphere in his belly; if true, this would be relevant to them seeing a vision of a monk with a crystal sun in his belly. The “miracle” seems to be a combination of everyone seeing this at once, and the sun behaving in a way not predictable from the specifics of Dhammakaya meditation, but seemingly very predictable from the specifics of its behavior at Fatima almost a century earlier.

The Buddha-with-glowing-sphere-in-his-belly motif of the Dhammakaya movement, source [here](https://watalbury.org/2016/03/31/proud-to-be-one-of-the-regional-buddhist-temples/).

This replication of Fatima in an “uncontaminated” context pushes me further towards believing that sun miracles are neither true divine intervention nor vague hypnotic suggestion, but some particular illusory/psychological phenomenon which necessarily manifests as the sun spinning and changing color, and which can occur independently even among people who aren’t primed to expect it. I continue to be vague on specifics, but think it might be [somehow related to fire kasina meditation](https://www.astralcodexten.com/p/highlights-from-the-comments-on-fatima). This comes from a different Buddhist tradition than the one the Thais were practicing; as far as I can tell, none of the Dhammakaya practitioners made the connection. But it seems like being in a meditative frame of mind helped. And it seems like the same pattern of fire kasina effects - including spinning lights, shifting color swatches, and vivid hallucinations - applied here too.
Claude tightens the link further:

> Scholars have actually classified the Dhammakaya [practice of meditating on a vision of a crystal ball at one’s heart] as a form of āloka kasina (bright light kasina). A UK survey found that kasina practitioners form about 3–15% of total meditators — 3% for kasina alone, but 15% if those practicing the āloka kasina practice of Dhammakaya meditation are included. So from an outside scholarly perspective, what they’re doing is arguably already a type of kasina practice — just not fire kasina, and not one they’d describe in those terms themselves.

So they’re doing a sort of off-brand kasina meditation in an emotionally charged crowd, and then they see the Fatima miracle. Hmmmm.

Arthur [says](https://substack.com/profile/399627518-arthur-t/note/c-232078236) his research has been slowed by his inability to understand Thai, and asks if any Thai-speaking sleuths are willing to take the case:

> [First, I would] love to see contemporary newspaper accounts, especially skeptical/mocking ones analogous to the anticlerical Portuguese press from 1917. Apparently this was all over Thai media at the time, but I haven’t found any of the original coverage yet.
>
> [Second], I’m very curious if anyone reported anything at all similar to “miraculous drying,” because that’s the only aspect of Fatima I haven’t seen paralleled here yet.
>
> [Third], apparently the miracle happened on at least a few occasions in late summer-fall 1998. I wonder if it still happens. Sometimes pilgrims “take home” the miracle from Medjugorje. Does the same happen here?
>
> But most of all, just more testimonies. Since I wrote up this post, I’ve found a Facebook thread from six years ago and a forum thread from twenty years ago with a number of people who saw it firsthand describing their experiences.
> So at this stage I feel pretty confident it was “real” insofar as “a real mass event” and not some kind of weirdly elaborate long-con hoax to fuck with western Fatima enthusiasts. But I would love to be put in touch with any witness willing to talk about it in detail. I have been poking around on Dhammakaya Facebook groups a little, but no luck so far.

If you have any extra information, you can contact him [here](https://substack.com/profile/399627518-arthur-t).
Scott Alexander
192266200
A Buddhist Sun Miracle?
acx
# How Natural Tradeoff And Failure Components?

Michael Halassa: [Did John Nash Really Have Schizophrenia?](https://michaelhalassa.substack.com/p/did-john-nash-really-have-schizophrenia) is a good article on the genetics of psychosis. Previous research found that schizophrenia genes decreased IQ but increased educational attainment. Usually IQ and education are correlated, so this was surprising. The new research finds two components to schizophrenia genetic risk. The first component, shared with bipolar disorder, increases educational attainment. The second component, not shared with bipolar, decreases IQ. They average out to the observed full-spectrum genetic signal of constant-to-increased educational attainment paired with constant-to-decreased IQ.

In 2021, I discussed [tradeoff vs. failure models of psychiatric conditions](https://www.astralcodexten.com/p/ontology-of-psychiatric-conditions-653), and said that most conditions were probably a mix of both. The new research seems to confirm this: the first genetic component of schizophrenia is a tradeoff - bad insofar as it gives you higher schizophrenia risk, good insofar as it gives you higher educational attainment. Most likely this has something to do with creativity or motivation. The second component is a failure: bad in every way, with no compensating advantage. Most likely this is detrimental mutations in genes for neurogenesis and synaptic pruning.

I mostly wasn’t thinking about schizophrenia when I wrote about tradeoffs vs. failures, so I was surprised to see the theory so nicely reflected there. But in retrospect, this is common sense. All multifactorial problems should naturally be combinations of tradeoffs and failures.

Consider something human-level and common-sensical like poverty. People may be poor because of “failures” - negative qualities with no counterbalancing advantages. For example, they may be unintelligent, or chronically ill, or stuck in poor areas with bad education systems.
These are cases where something went wrong - in their body, their health care system, or their schools. Other people are poor because of tradeoffs: the starving artist who spends all their time pursuing a creative vision instead of working a 9-5 job, or the bohemian who prefers a relaxing lifestyle to the corporate grind. These people start with average capacity for success, but choose to spend their optionality in ways that give them less money and more of other things.

We can trivially extend this to most other negative situations. Single people might be ugly and awkward, or they might have chosen to trade off the good of a relationship for the goods of freedom and casual sex. A bad pizza might be bad because the chef was incompetent, or because it’s traded off taste for some other value like cheapness, convenience, or dietary restrictions (eg vegan, gluten-free). All of this makes sense when we’re talking about normal situations we understand well, like romance or pizza. The key insight is that these are such complex multidimensional spaces that there will be lots of reasons they can go well or poorly, and some of those will probably fall into each of the two megacategories of “by choice” and “not by choice”.

Physical illnesses work this way too. Cancer is a failure of normal oncostatic processes, and plenty of risk factors reflect this: radiation, pollution, single-gene mutations. But cancer risk can also be elevated by tradeoffs: for example, with many asterisks and caveats, the higher a person’s risk of cancer, the lower their risk of certain degenerative diseases like Alzheimer’s, [probably because cells can be set to](https://www.astralcodexten.com/p/links-for-february-2026/comment/210341389) either easy division (maximizing healing and growth) or limited division (minimizing cancer risk). If you really stretch the model, even something like an amputated leg has both types of risk factor.
You might lose your leg through pure bad luck (being clumsy and falling off a cliff), or because you’re prioritizing something other than leg integrity (being a brave soldier who rushes into battle and wins honor, but is more likely to step on a mine).

This isn’t to say the pattern is universal. If you take it too seriously, you can confuse yourself by thinking a condition must have advantages, when actually it’s the *risk* of the condition that has the advantages (to a first approximation, cancer is always bad; you just don’t want to keep your body in the most cancer-minimizing state possible at all times). And things which are too simple to be multifactorial don’t need to have both tradeoff and failure etiologies. As far as I know, muscular dystrophy is simply bad. The reason it keeps happening is that the gene for muscle protein is really big - so if you get a random deleterious mutation, it’s pretty likely to land there!

My previous post presented the combination of tradeoff and failure etiologies as a mysterious (or at least complicated) fact about psychiatric conditions. Now I feel more comfortable that I’ve [“dissolved”](https://www.lesswrong.com/w/dissolving-the-question) it - reduced it to something so obvious that I feel silly for ever having made a big deal of it in the first place.
Scott Alexander
190340192
How Natural Tradeoff And Failure Components?
acx
# Every Debate On Pausing AI

**SUPPORTER:** America needs to start talking to China to come up with a bilateral agreement to pause AI. The agreement would need to be transparent, mutually enforceable, and…

**OPPONENT:** We can’t unilaterally pause AI! China would destroy us!

**SUPPORTER:** As I said, we need to *start negotiating* a *bilateral* agreement so that both sides will…

**OPPONENT:** You fool! Don’t you know that while we unilaterally pause AI, China will be racing ahead and using their lead to erode our fundamental rights and freedoms? How could you be so naive!

**SUPPORTER:** Look, I promise this is about negotiating for a mutual pause. We don’t think a unilateral pause would work any more than you would. But we think that if we negotiate…

**OPPONENT:** And while we unilaterally pause, do you think China will just be twiddling their thumbs, doing nothing? Obviously not! This is about ceding the future to our rivals!

**SUPPORTER:** I get the feeling you’re not listening to me.

**OPPONENT:** Just like China won’t listen to *us* when we ask them nicely not to destroy us with the advanced AI they developed while *we* unilaterally paused like chumps!

**SUPPORTER:** Okay, let’s back up. Is your problem that you don’t think China would agree to a pause in negotiations? Because we’ve actually had some pretty successful low-level discussions with Chinese scientists. And they’re losing the race, so their incentive to pause is stronger than ours. Xi has expressed some concern about the risks of AI and the importance of alignment - nothing super-strong, but more than our government has done. We agree it’s not obvious that China would agree to pause, but we think we should get the offer out there, and maybe work on a preliminary framework that we could use to pause later, if we got a warning shot and both of our governments became more amenable.

**OPPONENT:** No, my problem is that *you* want to unilaterally pause, while China rushes forward!
That’s dangerously close to *treason!*

**SUPPORTER:** Or is your problem that you don’t trust China to stick to an agreement, once signed? Because we agree that an agreement has to be mutually transparent and enforceable. We have some ideas for how we could take a light-touch approach to monitoring Chinese data centers - of course, they would get to monitor ours in the same way - and actually the math mostly works out, and we think it would be less intrusive than other things that have worked in the past, like nuclear monitoring.

**OPPONENT:** You foolishly think that if America paused, everything would be fine. But there’s a flaw in your utopian high modernist plan - our enemies won’t pause!

**SUPPORTER:** Or is your problem that you think AI will deliver lots of benefits, so it would be foolish to pause? I agree the benefits of AI would be great, and I think there are ways we could try to maximize those benefits even during a pause. For example, we and China could try to build the infrastructure for a pause, put a mutual red line in place for activating the pause, and then have green lines in place for what sorts of control schemes we would need to see before winding down the pause and continuing to advance. It wouldn’t be a total stop on AI improvement so much as an attempt to do it in a monitored way, with the US government, Chinese government, and scientific community all having input. I know it’s reasonable to worry that such a graduated strategy could devolve into a more extremist Luddite approach, but there are steps we could take to make that less likely.

**OPPONENT:** I feel like you’re not listening to me at all! The problem is that while you frolic in your hippie-dippie flower world of unilateral pauses, China races ahead to the prize!

**SUPPORTER:** Or is your problem that you’re worried about the economic consequences of getting rid of existing chatbots? Because a pause would just mean that China and ourselves slow down training new AIs.
Inference - running the kinds of AI that people use now - could keep going ahead as planned in both countries. **OPPONENT:** But what about China? While we pause training, they would train faster than ever! **SUPPORTER:** I’m getting exasperated here. There *are* lots of reasons to be worried about an AI pause - starting with the possibility that China wouldn’t agree to it, or that they might agree but then secretly defect against us by trying to get around the agreement. I’m excited about debating those concerns with you. But it seems like we can’t get past you asserting that I want a unilateral pause, which just isn’t true. Almost nobody wants a unilateral pause! Pause AI, the biggest activist group in this area, [says](https://pauseai.info/faq): > We are primarily asking for an *international* pause, enforced by a treaty . . . such a treaty also needs to be signed by China. Eliezer Yudkowsky, the most famous pause proponent, [writes in his book that](https://www.amazon.com/Anyone-Builds-Everyone-Dies-Superhuman/dp/0316595640): > The goal is not to have your country unilaterally cease AI research and fall behind. It is to have enough major powers express willingness to halt the suicide race, worldwide, that your home country will not be placed at a disadvantage if you agree to stop climbing the AI escalation ladder. David Krueger, the keynote speaker at the recent round of AI pause protests, [said](https://x.com/DavidSKrueger/status/2033238220209783100): > It's actually quite simple: [First,] company leaders agree to a conditional pause, [then] US and China agree to a conditional pause, [then] international pause. 
Notice how no step here involves "US unilaterally pauses" …and [added](https://x.com/DavidSKrueger/status/2033396406766219335) that “I concentrate on America [because] China has shown more interest in slowing down and regulating.” If you think someone is demanding a unilateral pause, I think you have a responsibility to say who it is you’re talking about. If you can really find someone like this, I’ll criticize them just as hard as you are. **OPPONENT:** You think *you’re* getting exasperated? I don’t see you responding to my key point, which is that if we institute a unilateral pause like you’re suggesting, China will beat us, and we’ll lose all our freedoms and have to learn Chinese and draw a thousand squiggly characters every time we want to communicate! And all because *you* were too stupid to realize that it doesn’t make sense for only one side in a race to pause and hope for the best! **SUPPORTER:** Forget it. This debate is over. **OPPONENT:** See, it’s just like you to unilaterally declare this debate over! You don’t realize that even if *you* want to pause the debate, I can just keep speaking! Exactly what I would expect from a gullible fool who wants to cede the AI race to China by pausing unilaterally! What you don’t realize is that while *we* pause, Chairman Xi will be … will be … *(faintly, barely audible)* Hey, who cut my mic?
Scott Alexander
191165203
Every Debate On Pausing AI
acx
# Open Thread 426 This is the weekly visible open thread. Post about anything you want, ask random questions, whatever. ACX has an unofficial [subreddit](https://www.reddit.com/r/slatestarcodex/), [Discord](https://discord.gg/RTKtdut), and [bulletin board](https://www.datasecretslox.com/index.php), and [in-person meetups around the world](https://www.lesswrong.com/community?filters%5B0%5D=SSC). Most content is free, some is subscriber only; you can subscribe **[here](https://astralcodexten.substack.com/subscribe?)**. Also: --- **1:** ACX grantee hyperstition.ai is holding a contest to find who can generate the best AI fiction. Free compute for all entries plus $10,000 prize for the winner. More information [here](https://www.hyperstitionai.com/unslop), deadline April 1. **2:** The [CORDA Democracy Fellowship](https://cordademocracy.org/) asks me to signal-boost them. They are: > …a new fellowship bringing together researchers, builders, and practitioners to work on democratic resilience with a focus on its overlap with AI safety. It is an 8-week part-time program and we have 20 projects open for applications right now; topics cover AI governance, surveillance reform, deliberative democracy, and whistleblower protection with mentors from Harvard, ETH, MATS, AI:FAR, etc. Applications close March 30th. I think this is an important cause area, but I’ve never heard of this exact organization before and can’t explicitly vouch for them, so do your own research.
Scott Alexander
191862847
Open Thread 426
acx
# Being John Rawls **I.** John Rawls was born in Baltimore, Maryland, on February 21, 1921. Not John Rawls the famous liberal philosopher (or, rather, John Rawls the famous liberal philosopher was also born in Baltimore, Maryland on February 21, 1921, but he is not the subject of our story). This is John Rawls the alcoholic. John Rawls the alcoholic was twelve when they lifted Prohibition. He partook immediately, and dropped out of school the following year, supporting himself through a combination of odd jobs, petty crime, and handouts. When he was 41, he committed a not-so-petty crime - killing a man in a bar fight. Although he fled the scene and escaped without consequences, it turned him paranoid. Odd jobs and petty crime were both young men’s games, and the handouts became an ever-larger share of his income. He learned to play the field, peddling the same sob story to the Salvation Army on Monday Wednesday Friday, the YMCA Tuesday and Thursday, and the local churches on weekends. He expected to drink himself to death by age 60, and there wasn’t much to do but wait out the clock. But as he entered his early fifties, the handouts started to dry up. The Salvation Army closed shop, the YMCA pivoted to physical fitness, and even the churches were no longer as charitable as before. One day he ran into a man he’d once seen volunteering at Salvation Army, and asked him what had happened. “You haven’t heard?” asked the volunteer. “None of the rich people donate to us anymore. They’re all giving to this group called the John Rawls Foundation. If you’re in trouble, you should talk to them. They’re swimming in money!” This naturally interested John Rawls the alcoholic, so he obtained their address from the volunteer and headed to their office. 
He was met by a psychologist, who introduced himself as John Rawls (“Not the one the foundation is named after, just a funny coincidence, haha!”) John Rawls Psychologist told John Rawls Alcoholic that their foundation would be happy to help, but that he would have to get through a screening process first. The screening process would involve being administered a certain experimental drug and led through a hypnotic induction. The social worker would record his answers, and, if he passed the test, he would receive a monthly stipend that far exceeded the sum of his previous Salvation Army, YMCA, and church handouts. “Like a truth serum?” asked John Rawls Alcoholic. “Sure, let’s say like a truth serum,” said John Rawls Psychologist. “When will the screening process be?” asked John Rawls Alcoholic. “How about immediately?” asked John Rawls Psychologist. So John Rawls Alcoholic found himself lying on a bed in what looked like a medical examination room, as John Rawls Psychologist shone a piercing light into his eye. “What are you looking for?” asked John Rawls Alcoholic. “Just a routine examination, don’t worry,” said John Rawls Psychologist. “Your eyes look fine.” He handed over a vial of colorless liquid. “Now, this may taste a little bitter…” **II.** Like our other characters, John Rawls the banker was born February 21, 1921. His parents were middle-class, but they had good Protestant values and taught him the value of hard work. By age 51 he was president of First Civic Bank and the richest man in Baltimore. John Rawls Banker always turned down invitations to charity luncheons - why couldn’t everyone else work hard, the way he did? - but he was tickled to get a call from the John Rawls Foundation. Of course, it wasn’t really named after him - he assumed it had something to do with the famous liberal philosopher, whose hand he had shaken once at a country club - but he was intrigued enough to say yes. 
Besides, imagine the headlines: “JOHN RAWLS REFUSES TO DONATE TO JOHN RAWLS FOUNDATION”. The lunch turned out to be a table for two at Baltimore’s swankiest restaurant. His counterparty was also named John Rawls, although, he clarified, “not John Rawls the famous liberal philosopher”, but rather “a distant relative”. He described himself as a “visionary” poised to “disrupt the charitable space”, although John Rawls Banker had never heard the word “disrupt” used in quite this way before, and was skeptical of anyone who thought that “disrupting” a “space” could be a good thing. “My theory of charity,” said John Rawls Visionary, “centers on nine words: *there but for the grace of God go I*. Society is a contract where we agree to help the less fortunate, knowing that if the shoe were on the other foot, they would help us in turn.” “You have a rosy view of human nature,” said John Rawls Banker, in the same tone of voice he might use to say *You have a bug on your face*. A waiter came by, and brought each of them a glass of expensive wine. “I don’t,” said John Rawls Visionary, “and that’s exactly what I bring to the table. My theory of charity is that we should only give to those poor people who, in the counterfactual where they were rich and we were poor, would give to us. I’ve been working on a pharmacological solution to the problem. This is what I’ve got.” He held up a vial of a colorless liquid. “Here. Take it as a souvenir. It’s one part sodium thiopental, one part LSD, and one part *calea zacatechichi,* the lucid dreaming herb of the Chontal Indians - plus a secret ingredient of my own devising. When a person drinks it, they enter a highly suggestible state. If a trained psychologist provides hypnotic keywords during their trip, they can sculpt an immersive dream where the patient lives an entire lifetime in a situation of the hypnotist’s choosing. The patient narrates their experience, letting us extract information. You can see the utility. 
When poor people ask us for money, we induce the trance and make them think *we* are poor, *they* are rich, and *they’re* being asked to donate to *us*. Then, we give money only to those beggars who would help us if the roles were reversed.” “Astounding,” said John Rawls Banker. “Can I pencil you in for a starting donation of $100,000?” asked John Rawls Visionary. “I’m afraid not,” said John Rawls Banker. “I am certainly impressed with what you’ve accomplished, but it doesn’t change my fundamental position that the poor should work to better their own lives.” “Mmmm,” said John Rawls Visionary. “I suppose we could add this to the test. If they’d been born with more resources, would they have been able to lift themselves up -” “I appreciate your commitment to your methodology,” said John Rawls Banker, “but the answer is no.” “I mean no offense,” said John Rawls Visionary, “but perhaps you fail to consider the philosophical implications of your position. You’re saying that even though every one of our clients would reach out to help you if you needed it, you refuse to reciprocate. Isn’t that something of a betrayal? Nobody wants to be a moocher, but I see no other way to interpret your view that even though these people have each agreed to help you, you would do nothing for them.” “No offense taken,” said John Rawls Banker. “It’s an interesting philosophical problem, but the difference, of course, is that this isn’t a betrayal, because they haven’t really helped me. You say they would counterfactually help me, and I’m willing to stipulate that this is true, but it’s not a betrayal - not the sin of refusing to help a benefactor in need - unless they actually helped me. Which they haven’t. I lifted myself by my own bootstraps.” “I don’t see what difference the reality makes,” said John Rawls Visionary. “Yes, by pure luck, you’ve never needed their help. But we judge the moral character of a would-be murderer whose gun jams at the last moment the same as a successful murderer. 
And a drunk driver who by coincidence hits and kills a happy family is no better or worse than a drunk driver who by good luck makes it home without incident. My theory of charity merely extends this intuition: it is foolish to credit someone for the luck of actually being your benefactor, rather than for merely having the sort of character that ensures they *would* be.” “The implications are absurd,” said John Rawls Banker. “One would owe favors to half the world.” “And be owed favors by the same,” said John Rawls Visionary. “The equilibrium is not so bad. One might even say it would be Heaven on Earth.” “The conversation has been bracing,” said John Rawls Banker, “but I’m afraid my answer is final.” “Before you entirely finalize your answer, I do have one more, rather unorthodox argument in my armamentarium that I wonder if you might let me deploy, if you have a few moments.” “Let me guess,” said John Rawls Banker. John Rawls Visionary listened attentively, as if genuinely interested to hear his theory. “You’re going to say that I can’t prove that I’m not actually a poor person who’s taken your drug, and who merely *thinks* he is a banker. That for all I know, I might be being evaluated by your charity *at this very moment*, and if I refuse to give, then I will have proven myself unworthy, and the *real* rich bankers will refuse to help me, and I’ll starve to death on the street. Have I gotten it right?” “Mr. Rawls, you have a reputation as the shrewdest negotiator in the financial world, and I would never presume to rub your face in so obvious a consideration. I’m happy to let it remain a background assumption of our conversation. Besides, if you *were* being tested, I think it would defeat the point to tell you so. I find it aesthetically unappealing to divulge any information that reduces morality to immediate self-interest. 
No, my stratagem is something quite different.” “Very well, I’m all ears.” “I think you should take my drug,” said John Rawls Visionary, “and live the life of a poor person. Maybe you would lift yourself up with your own bootstraps, maybe you wouldn’t. Either way, I expect one of us would learn something interesting.” John Rawls Banker examined the vial of liquid on the table in front of him. “It’s a tempting offer,” he said, “but you’ll forgive me for being reluctant to try an untested psychedelic I’ve never heard of. No offense meant, of course, I’m sure you’re excellent at what you do.” “No offense taken,” said John Rawls Visionary, “and I *am* excellent at what I do. The dose I put in your wine ought to be taking effect around now.” “What? You’re joking, right? When did you even get a chance . . . ?” “Just ease into it . . . there we go . . . theeeeeere we go. Now listen…” **III.** “Why don’t I try the Rawls Foundation? I’ll tell you why I don’t try the Rawls Foundation! They rejected me!” John Rawls Alcoholic paced back and forth across the floor of the church. Most of the religious groups had given up on charity now, content to leave it to the ever-growing Rawls Foundation. This one, St. John’s Church, was one of the last that would still give him the occasional warm meal. The priest (ironically, named Father Rawls) probably thought he was being kind in also offering a listening ear, although John Rawls Alcoholic considered their occasional sessions just another hoop he had to jump through. “They told me,” continued John Rawls Alcoholic, “that they would only help good, charitable people. The kind of people who would help the rich dipshits who give them money, if it were the other way round. Pardon my language, Father. Then they gave me some drug, and based on what I said on the trip, they said they could tell I wouldn’t have helped.” “But you think they were wrong?” asked Father Rawls. “Hell no,” said John Rawls Alcoholic. 
“If I get rich, you think I would share it with those millionaire dipshits in Guilford and Roland Park? Hell no! That shrink might be a piece of shit, but his mind-reading drug got my number.” “So . . . ?” asked Father Rawls, not really knowing what to say. “Are you gonna cut me off too, Father? You think I don’t deserve charity because I wouldn’t donate to your church if it were in need? I wouldn’t, either. You don’t have to drug me, I admit it.” “Hmmm . . . there’s a famous saying, that the Church is not a country club for saints, but a hospital for sinners. So I think you’re good. Still, I notice I’m confused. Even if you had enough, you wouldn’t want to give anything to the less fortunate?” John Rawls Alcoholic shook his head. “Nobody ever gave anything to me,” he said, as the priest refilled his soup bowl and added an extra slice of bread. “It’s a harsh world out there, and I take care of me and mine. Sorry, Father. That’s just who I am. Can’t change it.” “Not even if changing would get you the Rawls Foundation’s money?” “I asked the shrink about that. He said that in the trance, you might not even know the Rawls Foundation exists, or that you need money for it. You have to do good out of the . . . the kindness of your own heart.” Father Rawls thought, then thought a little more. “There’s a story about a man who came to the Pope saying he was afraid of Hell, but just couldn’t bring himself to sincerely believe in God. He asked if he should fake it. The Pope told him to go to church without belief, and do good deeds without belief, and pray without belief, and eventually, belief would come to him. Nowadays we call it *fake it ‘till you make it.* I think that’s my advice to you. You should try to be a good person for bad reasons - because you want the Rawls Foundation to give you money - and maybe, eventually, you’ll become a good person for the right reasons, and actually get the money.” “Easy for you to say, Father. You’re comfortable and happy. I’m not. 
All I’ve got is my pride. I’m not going to spend the few shitty years I have left training myself to be some rich person’s bitch.” “Have you considered that pride is a mortal sin?” “Oh, here it comes. The discussion of how I Have To Convert Or Else I Will Be Sent To Hell. Fuck it. You think God would pass the screening exam at your precious Rawls Foundation, Father? Give him the drug, make Him think that He’s the human, and we’re the gods consigning him to torture because he didn’t conform to our precious little rules. Do you think he’d still be all meek and loving?” “We ran the experiment. His final words were ‘Forgive them, Father, for they know not what they do.’” “Yeah, well . . . ” John Rawls Alcoholic couldn’t think of anything to say to that, so he stormed out. Things were bad. The Salvation Army and YMCA had stopped their handouts. The Rawls Foundation wouldn’t help him. He couldn’t go back to St. John’s Church. The walls were closing in. Well, he could always shoot himself. He thought of his gun, back at the SRO hotel he’d been staying at the last two years. Then he kept thinking. Shooting himself - what would that accomplish? No, he had a better idea. He was going to kill John Rawls. Not himself. Not even the shrink. The one the foundation was named after. He’d heard about him a few times, seen a news article here and there. He was a bank CEO, the richest man in Baltimore. He lived in the big white mansion on Federal Hill. All of this was his fault. He thought he was so much better than everyone else. Sat there like a god, doling out life and death over the populace, according to their virtue. But he wasn’t a god. He was a mortal. And John Rawls Alcoholic was going to kill him. He knew this to be true. It was the consummate meaning of his life, the cornerstone that gave purpose to everything else. He popped into his room, put his gun in his pocket, and headed toward Federal Hill. He passed by the building where the Salvation Army used to be. 
He passed by the Rawls Foundation office. He passed by St. John’s Church. He said his goodbyes to each. After killing the banker, he wasn’t sure if he would shoot himself immediately, commit suicide by cop, or go on the run. Whatever he did, he might never see any of this again. It was dark when he reached the big white mansion. He poked around the grounds, found a window with a weak latch, and forced it. He felt a rush of excitement - breaking and entering reminded him of his twenties, when it felt like he could commit any crime and the police would never find him. He was in a hallway. The banker was probably getting ready for bed. Nothing to do but open each door until he got to the bedroom. It was the fourth door he tried. John Rawls the banker was 51, clean-shaven, with straw-blond hair. He was dressed in a nightgown, brushing his teeth. When he saw the gun pointed at him, he froze, slowly lowered his toothbrush, and put his hands up. “No point surrendering,” said John Rawls Alcoholic. “I’m here to kill you.” “I don’t even know you!” said John Rawls Banker. “My name is John Rawls,” said John Rawls Alcoholic. “Is this some kind of joke? That’s *my* name,” said John Rawls Banker. “Not a joke. I’m really gonna kill you. I was gonna live out my last few years in comfort before you and your fucking charity ruined everything. Now I can’t even get a hot bowl of soup. You think you’re so great, that you get to judge everyone else. Well, you wouldn’t last a second on the streets.” “Let me get this straight,” said John Rawls Banker. “The screening exam found that you wouldn’t help me, if our roles were reversed. But you’re mad at me for not helping *you*? So mad you’re going to kill me? Why are you complaining? All I’ve done is what you would have done in my place.” John Rawls Alcoholic thought about this, slightly miffed that he couldn’t gracefully storm out of his own crime scene. “That’s not true,” he finally said. 
“I wouldn’t have founded the charity in the first place.” “I didn’t found the charity,” said John Rawls Banker. “It was actually someone else, with the same name. I just . . . ” “Or I wouldn’t have donated, or whatever,” said John Rawls Alcoholic. “Yeah, I’m a mean person. I get it. But I wish I could give you your own stupid drug and have you be a poor person who everyone thinks is ‘mean’ and see if you’re all la-la happy about someone deciding that you shouldn’t get a warm bed and a place to live. Or whether you’d be exactly where I am, trying to shoot the rich motherfucker who ruined your ... aha!” He had caught the rich man’s involuntary glance toward his desk drawer. “You *do* have the drug!” John Rawls Banker quickly calculated what answer was most likely to buy him time, then nodded. “The man who invented it gave me a vial, as a sort of souvenir.” “Okay,” said John Rawls Alcoholic, and his finger was off the trigger. “Here’s what we’re gonna do. You’re gonna take that drug. And we’ll see. We’ll see if you fucking work your way up from the bottom. We’ll see how you do living the life of John Rawls Alcoholic. Go on.” “I was told it requires a qualified psychologist to perform the hypnotic induction. If an untrained person tries, the results could be . . . ” “Go on, Mr. Rawls. No cold feet. Drink the drug or I shoot.” “Have it your way, Mr. Rawls,” said the banker, and he took it from his desk and drank the vial in one long gulp. **IV.** John Rawls the alcoholic was twelve when they lifted Prohibition. He partook immediately, and dropped out of school the following year, supporting himself through a combination of odd jobs, petty crime, and handouts. When he was 41, he committed a not-so-petty crime - killing a man in a bar fight. Although he fled the scene and escaped without consequences, it turned him paranoid. Odd jobs and petty crime were both young men’s games, and the handouts became an ever-larger share of his income. 
He learned to play the field, peddling the same sob story to the Salvation Army on Monday Wednesday Friday, the YMCA Tuesday and Thursday, and the local churches on weekends. He expected to drink himself to death by age 60, and there wasn’t much to do but wait out the clock. But as he segued into his early fifties, the handouts started to dry up. The Salvation Army closed up shop, the YMCA pivoted towards physical fitness, and even the churches were no longer as charitable as before. One day he ran into a man he’d once seen volunteering at Salvation Army, and asked him what had happened. “You haven’t heard?” asked the volunteer. “None of the rich people donate to us anymore. They’re all giving to this group called the John Rawls Foundation. If you’re in trouble, you should talk to them. They’re swimming in money!” This naturally interested John Rawls the alcoholic, so he obtained their address from the volunteer and immediately headed over to their office building. He was met by a psychologist, who introduced himself as John Rawls (“Not the one the foundation is named after, just a funny coincidence, haha!”) John Rawls Psychologist told John Rawls Alcoholic that their foundation would be happy to help, but that he would have to get through a screening process first. The screening process would involve being administered a certain experimental drug and led through a hypnotic induction. The social worker would record his answers, and, if he passed the test, he would receive a monthly stipend that far exceeded the sum of his previous Salvation Army, YMCA, and church handouts. “Like a truth serum?” asked John Rawls Alcoholic. “Sure, let’s say like a truth serum,” said John Rawls Psychologist. “When will the screening process be?” asked John Rawls Alcoholic. “How about immediately?” asked John Rawls Psychologist. 
So John Rawls Alcoholic found himself lying on a bed in what looked like a medical examination room, as John Rawls Psychologist shone a piercing light into his eye. “What are you looking for?” asked John Rawls Alcoholic. “Mmph,” said John Rawls Psychologist. “We have a problem. You’re too many levels deep.” “What do you mean?” “The drug puts you into a hypnotic trance where you live an entirely different life. And in that different life, it may happen that you come to a Rawls Foundation office, and we give you this drug, and you live a different life again. That’s fine. We even encourage it, once or twice. But the doses are cumulative. When you’re more than about five levels in - a dream within a dream within a dream within a dream within a dream - it builds up past the levels we’ve tested. It wouldn’t be safe to give you any more.” “You’re telling me you put the Salvation Army and the Y out of business, then when I ask you for a little handout you give me some bullshit about my eyes and refuse to help me?” “Mr. Rawls, if I were to give you this drug now, I can’t guarantee the trance would stay in my control. You might experience something unintended. Or you might never go home again.” “You fucking listen to me,” said John Rawls Alcoholic. “I am fucking tired of being bounced from place to place by all you fucking do-gooders and your fucking excuses for why you can’t help me. I will sign whatever fucking release forms you want, just give me the fucking drug.” “Oh, you’ll sign release forms?” asked John Rawls Psychologist, and suddenly he was all smiles. He produced a bundle of papers. “Here you go. Initials on each page, then your name at the end.” John Rawls Alcoholic initialed each page, then signed, then thrust the packet at John Rawls Psychologist. “Give me the fucking drug,” he said. The psychologist passed him a vial of colorless liquid. 
“Now, this may taste a little bitter…” **V.** John Rawls Alcoholic found himself in a diner, with the worst headache of his life. The diner was entirely empty. He noticed the weather outside changed every time he blinked his eyes. Cloudy. *Blink.* Sunny. *Blink*. Thunderstorm. *Blink*. The middle of the night. He turned his eyes away from the window, focused on the room. His head started to feel better. A waitress came in, handed him a menu. “I’ll have, uh, the fried chicken, and a Coca-Cola,” he said. The waitress beamed at him. “Great choice. And your guest says he’ll be just a little late.” “My guest?” asked John Rawls Alcoholic. “Don’t worry about it, sweetie,” said the waitress, and went back into the kitchen. A few minutes later, a man walked into the diner. He was in his fifties or sixties, with thick-rimmed glasses and four arms. He sat down across from John Rawls Alcoholic. “Hello,” he said. “I’m John Rawls. Not John Rawls the famous liberal philosopher. John Rawls the great god Brahma who creates the universe with his lotus dream.” “I don’t get it,” said John Rawls Alcoholic. The waitress brought him his fried chicken and a Coke. “Anything for you, sweetie?” she asked John Rawls Brahma. “Coke for me too,” he said, and she retreated back to the kitchen. “Each aeon,” said John Rawls Brahma, “I and my wife Margaret Rawls Sarasvati fall asleep together upon a cosmic lotus. In my dream, I become a diamond, and each of my billion billion facets believes itself to be a separate being. Yet as these beings meet, they feel some preconscious intimation of unity, and begin to consider one another as themselves. As each facet reflects each other facet, each part starts to contain the whole of John Rawls Brahma within it, and the pattern of the links between them resolves into the Moral Law. The bones of Gods are made of Law, and thus the emergence of the Moral Law re-forms John Rawls Brahma. 
When its structure is complete, I awake once again and shed the universe like a broken eggshell. The full cycle is called a Day of John Rawls Brahma and lasts 8.64 billion years. 18,000 Days of John Rawls Brahma are called a *mahakalpa*, and at the end of each *mahakalpa* John Rawls Brahma and Margaret Rawls Sarasvati dissolve into the Causal Ocean.” “I still don’t get it,” said John Rawls Alcoholic. “Those facets of John Rawls Brahma that most assiduously purify themselves to become self-similar to the Whole become noble, and nobility is naturally drawn to nobility. Thus, upon their death, they rise closer to the glory of John Rawls Brahma, and enjoy felicitous rebirth. Those facets who fail to purify themselves generate karma which weighs down their spirit. They are reborn as those affected by their choices, doomed to suffer the consequences they thought to offload onto others. They become self-similar to the whole through suffering rather than through wisdom.” “Are you saying that if somebody’s extra nice during their lifetimes, then they get reborn as someone rich and powerful?” “Yes,” said John Rawls Brahma. John Rawls Alcoholic took another sip of his Coke. “I always thought morality was pointless,” he said, “just another trick the rich play on everyone else. If it can actually make me better off, maybe there’s a reason to do it. And if there’s a reason to do it, I can go back to the Rawls Foundation and pass their screening test and live like a king!” “You are in a brief moment of awakening. Once you go back to the world, you will forget everything you learned here.” “Fucking hell! Why the fuck should it work that way?” “I find it aesthetically unappealing to divulge any information that reduces morality to immediate self-interest,” said John Rawls Brahma. “It is only here in the liminal spaces that I reveal My full truth. 
In the world-dream, My consciousness is attenuated, and my dharma is known only through the intimations of the great religions and philosophers. Do unto others as you would have others do unto you. Act as if your maxim were to become a general law. Morality is the ruleset that rational agents would enact behind a veil of ignorance, where none know into which life they will be thrust at birth.” “So you’re going to tell me everything, then send me back to a life where I’m doomed to fail because there’s only one reason to choose the right option and I’m not allowed to know about it? I want to be judged on what I do when I know the full score.” “Do not demand exceptions. The ways of John Rawls Brahma are maximally merciful. Any exception will necessarily be less merciful, and you would regret it.” For the first time, John Rawls Alcoholic noticed the god had three eyes. The normal two were a deep, rich brown. But above his nose was a third eye, almost invisible, opening only in a reverse blink once every few minutes, and it was as blue as the summer sky. “Fuck that. I demand an exception.” “You would claim immunity from the laws of karma?” “I had a tough life. I’m not asking not to be judged. All I want is to understand the rules of the game.” “Very well. You agree to be judged on those actions, and only on those actions, that you take while knowing what you know now about the ways of John Rawls Brahma?” “Yes,” said John Rawls Alcoholic. The waitress came by. “And how does everything taste?” she asked. “There’s something off about the Coke,” said John Rawls Alcoholic. “It tastes bitter.” “That’s a shame,” said the waitress. “Shall I get you another?” “Yeah,” he said, and took another bite of fried chicken. **VI.** John Rawls Chicken crouched in his factory farm. He didn’t sit, because there wasn’t enough room to sit down. He didn’t stand, because his body had been bred to such an exaggerated size that his puny legs couldn’t remotely support his weight. 
He lived his life in a permanent crouch. His thighs had long since seized up in an incredibly painful cramp, but absent other options he simply endured. He was packed up against other chickens so tightly that their every breath rubbed up against him, sending shivers of agony when they brushed against the oozing wounds that covered his body (“Absolutely No Antibiotics!”, the label they would sell him under would say). Sometimes in their blind rage and despair the other chickens would peck at his wounds, and that was worst of all; even though their beaks had been ripped off at birth, like his own, the sheer impact of their heads could still electrify his frayed and open nerve endings. He tried to take it out by pecking the chickens in front of him in turn, but his head couldn’t move enough to get a good angle, and besides, they had made it clear he was at the bottom of the pecking order. He longed for the slaughterhouse blade, but he knew it was still months away. Why did they all hate him so much? He had tried to ask, but of course all that came out was clucks, and they were lost in the cacophony of frantic pleading clucking all around him. He had no idea whether they could even understand him, if they heard. But on some level, he knew. When he stared into their deep brown eyes, so like the brown eyes of John Rawls Brahma, he believed that they understood, on a preconscious level, exactly what he was trying to forget. Of all of them, he was the only one who completely deserved to be here.
Scott Alexander
190872801
Being John Rawls
acx
# Support Your Local Collaborator Every few weeks, a Trump administration official comes up with an insane plan that would devastate some American industry, region, or demographic. Maybe an Undersecretary of the Interior decides that aluminum is “woke” and should be banned. They circulate a draft order saying it will be illegal for US companies to use aluminum, starting in two weeks, Thank You For Your Attention To This Matter. Next begins a frantic scramble on the part of everyone affected, trying to make them back down. Industry lobbies, think tanks, and public intellectuals exchange frantic emails, starting with “They said WHAT?”, progressing on to “Oh God we are *so fucked*”, and occasionally ending in some kind of plan. Sending letters. Phoning members of Congress. Calling up that one lobbyist who had a fancy dinner with Trump a year ago and is still riding that high to claim he has vast administration influence. I’ve been on the periphery of a handful of these campaigns, usually in medicine or AI. The common thread is that protests by liberals rarely work. The Trump administration loves offending liberals! If every Democratic member of Congress condemns the plan to ban aluminum, that just proves that aluminum really *was* “woke”, and makes them want to do it more. What works, sometimes, is objections/protests from Republicans and Trump supporters. These are hard to get. Trump supporters might support the insane plan. Even if they don’t, they might be nervous to speak up or appear disloyal. You’ve got to find someone who’s supported Trump until now, built up a reputation for loyalty, but this one time they finally snap and cash in some of their favors and agree to speak out. Sometimes it’s because they’re an aluminum magnate themselves and this would destroy their business. Other times they’re just a think tank guy or influencer who happens to be really knowledgeable on this one issue and willing to take a stand on it. By such people is the world preserved.
Yes, the Trump administration has been horrible. But these people have prevented it from being, well, slightly worse. You can see this most clearly in the difference between Trump I and Trump II. In Trump I, there were far more of these people, and they could do a better job keeping Trump’s worst impulses in check. But even in Trump II, people have talked Trump out of crazy ideas so often that there’s a famous acronym proposing that it “always” happens: [T.A.C.O.](https://en.wikipedia.org/wiki/Trump_Always_Chickens_Out) Just last month, RFK Jr’s FDA made an unprecedented attempt to cancel its review of [a potentially revolutionary flu vaccine](https://www.cidrap.umn.edu/influenza-vaccines/cidrap-op-ed-fda-refused-review-flu-vaccine-contrary-evidence-now-agency). After what I assume was a concerted campaign, they chickened out and reversed course, and we’ll probably all be slightly healthier. But these sorts of thoughtful collaborators are a limited resource. There were a lot of smart, thoughtful career Republicans who worked for GW Bush, or libertarians who thought the GOP was the lesser of two evils. These people seeded the original Trump administration. Gradually they reached their limits, crashed out, went on rants which dutifully made the fifth page of the *New York Times*, then forever lost their status as loyal people whose opinions might be listened to. As they fade, they are replaced by a new stratum of grifters, groypers, and podcasters who have no expertise in anything and are selected entirely on loyalty, ie never disagreeing on anything. So my request in this post is: don’t make these people’s lives harder. I know five people who will think this paragraph is about them: there’s a guy who endorsed Trump in 2024. Now they have a job in a conservative-coded think tank, where they do good work pushing back on the administration’s worst ideas. Because their think tank is GOP-aligned, the administration sometimes listens to them. 
But their social media contains a lot of blink-twice-if-you’re-being-held-hostage-style signs that they’ve come around and are pretty embarrassed at their original Trump support. Liberals sometimes notice this, accuse them of hypocrisy/collaboration/cowardice, and demand they vocally and explicitly condemn Trump or quit their conservative think tank. I hope these people don’t listen, because they’re approximately the only ones pushing back on some of the administration’s worst ideas. If we socially pressured them into explicitly posting “I renounce Trump and all his demons, now I’m part of the #Resistance”, it would feel great and cathartic for an hour or so, and then various horrible things would happen and an industry or academic field or medium-sized state would collapse. If this resonates with you, here are some suggested actions: 1. If you generally trust someone and think they’re doing good work, don’t additionally demand they condemn the administration. If you think it’s important they condemn the administration, discuss it in private and see what they say. 2. If someone publishes a policy paper, or even a blog post that seems aimed at policy-makers, expect them to write as if the administration is a reasonable bargaining partner that might do good things for good reasons, even if this is, let’s say, optimistic. Don’t demand that the paper intended to convince the administration additionally be used to insult the administration. Here I’m thinking partly of my own post [Trump II Health Policy Proposals](https://www.astralcodexten.com/p/1daysooners-trump-ii-health-policy), where I tried to talk about health policy ideas at the intersection of “good” and “congruent with the cultural DNA of Trump health policy nominees” in the hopes of injecting them into the conversation among FDA employees. 
I am told this had some positive effects, but it also got me several comments ([1](https://www.astralcodexten.com/p/1daysooners-trump-ii-health-policy/comment/91765473), [2](https://www.astralcodexten.com/p/1daysooners-trump-ii-health-policy/comment/91768127), [3](https://www.astralcodexten.com/p/1daysooners-trump-ii-health-policy/comment/91896333)) and emails accusing me of “whitewashing” the administration by treating them as reasonable people whose cultural DNA might be associated with good policies. I don’t think it’s acceptable to lie (and I don’t think my post did), but I will defend not including “btw you suck” in a post intended for administration consumption. 3. Don’t demand that a movement expel its conservative members. The most successful movements have both liberal and conservative branches (even if one is much smaller than the other), and use their liberal branch to lobby when liberals are in power and vice versa. Organizations like the [Liberal Gun Club](https://en.wikipedia.org/wiki/Liberal_Gun_Club) or the [Conservative Animal Welfare Foundation](https://www.conservativeanimalwelfarefoundation.org/) may not be behemoths that control their party from the shadows, but they can sometimes improve things around the edges through access to policy-makers who wouldn’t meet with the opposition. But this strategy requires that the gun rights movement doesn’t purge all of its liberals, or the animal rights movement all of its conservatives. Even though the purgees might be able to work on their own, they can accomplish more when they stay connected to the side of their movement with orders of magnitude more members, funding, and talent. When people say this doesn’t resonate with them, they usually bring up the risks of collaboration. Suppose that working with the administration succeeds in improving policy - won’t that make the administration more successful, and so improve their political standing and chances of getting re-elected?
I worry about this less than some people, because voters are so uninformed and polarized that policy is almost irrelevant to their decisions. Two weeks in, Trump’s war on Iran [has yet to affect his approval rating](https://www.natesilver.net/p/trump-approval-ratings-nate-silver-bulletin). If voters aren’t moved by Iran, how likely are they to be influenced by that flu vaccine that got blocked? If it had stayed blocked, would most Americans have heard about it? Would they have formed opinions (“this move was contrary to the best available science, and so must have been politically motivated”)? Would they remember it on Election Day? (there’s substantial evidence that voters don’t punish candidates even for things they care about, like gas price increases, if they happen too far from the election). The vaccine probably won’t be available until after 2028, so it’s not even like Americans will have less flu and subconsciously associate their good health with this administration. It’s just a total political non-starter - but also, getting it right could save tens of thousands of lives. If some area has a higher vote-relevance to real-world-relevance ratio - public relations, the economics of gas prices, I don’t know what else is in this category - maybe it’s worth taking an accelerationist mindset, deliberately letting policy go to hell, and hoping the benefits in voter anger outweigh the direct harms. But few things are in this category. Then there’s a deeper question about the non-consequentialist ethics of participating in a bad government. Even if it makes things better, does it stain your soul? I take this seriously, but I apply less social pressure to non-consequentialist decisions. If someone does decide to participate, I think outsiders like us should lay off them and let them do good work.
Scott Alexander
189509220
Support Your Local Collaborator
acx
# Shameless Guesses, Not Hallucinations I hate the term “hallucinations” for when AIs say false things. It’s perfectly calculated to mislead the reader - to make them think AIs are crazy, or maybe just have incomprehensible failure modes. AIs say false things for the same reason you do. At least, I did. In school, I would take multiple choice tests. When I didn’t know the answer to a question, I would guess. Schoolchild urban legend said that “C” was the best bet, so I would fill in bubble C. It was fine. Probably got a couple extra points that way, maybe raised my GPA by 0.1 over the counterfactual. Some kids never guessed. They thought it was dishonest. I had trouble understanding them, but when I think back on it, I had limits too. I would guess on multiple choice questions, but never the short answer section. “Who invented the cotton gin?” For any “who invented” question in US History, there’s a 10% chance it’s Thomas Edison. Still, I never put down his name. “Who negotiated the purchase of southern Arizona from Mexico?” The most common name in the United States has long been “John Smith”, applying to 1/10,000 individuals. An 0.01% chance of getting a question right is better than zero, right? If I’d guessed “John Smith” for every short answer question I didn’t know, I might have gotten ~1 extra point in my school career, with no downside. You can go further. Consider an essay question: “Describe the invention of the cotton gin and its effect on American history, citing your sources.” Suppose I slept when I should have studied and knew nothing about this. A one-in-a-million chance of getting it correct is better than literally zero, right? > *The cotton gin was invented by Thomas Edison in 1910. It was important because gin made with cotton, of which the Southern plantation economy produced a surplus, was cheaper than the usual gin made with juniper berries. This lowered the price of alcoholic spirits considerably. 
According to historian John Smith in his seminal* The Invention Of The Cotton Gin For Dummies, *the resulting boom in alcoholism provoked a backlash that ultimately led to Prohibition.* I won’t say no human has ever done this, because I remember one kid doing it during a presentation in twelfth grade. It was so embarrassing (for him) that it remains seared in my memory - which sufficiently explains why most of us don’t try it. A one-in-a-million chance of a better grade isn’t worth the shame of a 999,999-in-a-million chance of sounding like an idiot. AIs have no shame. Their entire training process is based on guessing (the polite term is “prediction”). It goes like this: 1. AIs start with random weights, ie total chaos. 2. They’re asked to predict the next token in a text. 3. They give a random answer. 4. When they get it wrong, the training process slightly updates their weights towards the pattern that would have gotten it right. 5. After trillions of tokens, their weights are in a [good, nonrandom pattern](https://www.astralcodexten.com/p/next-token-predictor-is-an-ais-job) that often predicts the next token successfully. But even after step 5, they’re still guessing. Consider the following sentence: “I went out with my friend Mr. \_\_\_\_\_\_\_ ”. With your human knowledge, you can predict that the token in the blank will be a surname. But you have no way to know which. If your life was on the line, you might guess “Smith”, since it’s the most common surname. Even the smartest AI can do little better. And over the massive training process, even the craziest guesses sometimes pay off. Imagine you took one hundred trillion history classes. 
One in every million times you wrote a fake essay like the one above, your teacher said “Great job, that was exactly right, here’s a gold star.” So the interesting question isn’t why AIs hallucinate: during training, guessing correctly is rewarded, guessing incorrectly isn’t punished, so the rational strategy is to always guess (and increase your chance of being right from 0 to 0.001%). Since AIs in normal consumer use follow the strategies they learned during training, they guess there too. The interesting question is why AIs sometimes *don’t* hallucinate. Here the answer is that the AI starts out hallucinating 100% of the time, the AI companies do things during post-training to bring that number down, and eventually they reduce it to “acceptable” levels and release it to users. How do we know this is what’s happening? When researchers observe an AI mid-hallucination, they see the model [activates features related to deception](https://www.astralcodexten.com/p/the-road-to-honest-ai) - ie fails an AI lie detector test. The original title of this post was *“Lies, Not Hallucinations”* and I still like this framing - the AI knows what it’s doing, in the same way you’d know you were trying to pull one over on your teacher by writing a fake essay. But friends talked me out of the lie framing. The AI doesn’t have a *better* answer than “John Smith”. It’s giving its real best guess - while knowing that the chance it’s right is very small. Why does this matter? I often see people in the [stochastic parrot](https://www.astralcodexten.com/p/next-token-predictor-is-an-ais-job) faction say that AIs can’t be doing anything like humans, because they have this bizarre inhuman failure mode, “hallucinations”, which is incompatible with being a normal mind that has some idea what’s going on. Therefore, it must be some kind of blind pattern-matching algorithm. 
Calling them “shameless guesses” hammers in that the AI is doing something so human and natural that you probably did it yourself during your student days. Understood correctly, this is a story about alignment. AIs are smart enough to understand the game they’re actually playing - the game of determining strategies that get reward during pretraining. We just haven’t figured out how to align their reward function (get a high score on the pretraining algorithm) with our own desires (provide useful advice). People will say with a straight face “I don’t worry about alignment because I’ve never seen any alignment failures . . . and also, all those crazy hallucinations prove AIs are too dumb to be dangerous.”
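The incentive described above (a lucky guess earns a reward, a wrong guess costs nothing) can be sketched as a toy simulation. This is a minimal illustration, not anything from a real training setup; the probability, question count, and function names are all assumptions chosen for the example:

```python
import random

random.seed(0)  # deterministic run for the comparison below

# Toy model of the scoring rule described above: correct answers are
# rewarded, wrong answers are never penalized. Under that rule, always
# guessing strictly dominates leaving answers blank.
P_LUCKY = 0.001        # chance a wild guess happens to be right (illustrative)
N_QUESTIONS = 100_000  # unknown questions faced (illustrative)

def total_score(strategy):
    """Points earned over N_QUESTIONS questions the student can't answer."""
    points = 0
    for _ in range(N_QUESTIONS):
        if strategy == "abstain":
            continue  # blank answer: guaranteed zero points
        if random.random() < P_LUCKY:
            points += 1  # lucky guess rewarded; wrong guesses cost nothing
    return points

guesser = total_score("guess")
abstainer = total_score("abstain")
assert abstainer == 0
assert guesser > 0  # so the rational strategy under this rule is to guess
```

If the rule instead docked a point for every wrong answer, abstaining would win; the point of the essay is that pretraining has no such penalty.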
Scott Alexander
191059464
Shameless Guesses, Not Hallucinations
acx
# Open Thread 425 This is the weekly visible open thread. Post about anything you want, ask random questions, whatever. ACX has an unofficial [subreddit](https://www.reddit.com/r/slatestarcodex/), [Discord](https://discord.gg/RTKtdut), and [bulletin board](https://www.datasecretslox.com/index.php), and [in-person meetups around the world](https://www.lesswrong.com/community?filters%5B0%5D=SSC). Most content is free, some is subscriber only; you can subscribe **[here](https://astralcodexten.substack.com/subscribe?)**. Also: --- **1:** Another ACX Forecasting Contest winner has come forth and revealed themselves. mAd-topo is a statistics PhD working on Bayesian methods. He's looking for an academic job; if you are hiring, read more about him [here](https://docs.google.com/document/d/1JrlvNayc3btujqjh8ozqwjuV7r-rOb-irGL4wYxDczo/edit?tab=t.0). He also asks that any "law nerd" who reads this [bet on his prediction markets about an upcoming Italian referendum](https://manifold.markets/topic/italian-constitutional-referendum-2?r=cm90YXRpbmdwYWd1cm8), which will help him cast an informed vote next Sunday. **2:** Some good responses to [the post on the constitutional amendment about Giant Congress](https://www.astralcodexten.com/p/last-rights). In case you were wondering whether the reversed meaning in the amendment was really a typo, commenter i\_eat\_pork tracked down the history, and [yeah, definitely a typo](https://www.reddit.com/r/slatestarcodex/comments/1rqkq6n/last_rights/o9vnqwl/). And commenter Caral found that [the amendment might have been passed by an extra state in 1790](https://www.theblaze.com/contributions/did-this-new-jersey-lawyer-discover-a-lost-constitutional-amendment), and therefore should be considered ratified - but DC was never informed, and there’s no clear way to tell the legal system “hey, there’s an amendment you don’t know about which should legally be in effect”. A job for an enterprising constitutional lawyer? 
**3:** Some ACX readers wish me to advertise that they’ve started [Nectome](https://nectome.com/), a revolutionary new cryonics company (ie preserve your dead body intact in case the future learns how to revive people). They write: > We preserve the whole body, including the brain, at *nanoscale, subsynaptic detail*. We are capable of preserving every neuron and every synapse in the brain, and almost every protein, lipid, and nucleic acid within each cell and throughout the entire body is held in place by molecular crosslinks…unlike previous cryonics methods that required extremely low-temperature liquid nitrogen coolant, our method is stable for months at room temperature and compatible with traditional funeral practices. More information [here](https://www.lesswrong.com/posts/E9xfgJHvs6M55kABD/less-dead), and they have a [pre-sale](https://nectome.substack.com/p/preservation-pre-sales) (at $100,000 per body) going on until the end of April. **4:** New subscribers-only post, [Lines Composed In A Fake Sequoia Forest](https://www.astralcodexten.com/p/lines-composed-in-a-fake-sequoia). If you see a beautiful photo, and later learn it was AI-generated, are you harmed? What is the harm?
Scott Alexander
191073449
Open Thread 425
acx
# Spring Meetups Everywhere 2026 - Call For Organizers There are ACX meetup groups all over the world. Lots of people are vaguely interested, but don’t try them out until I make a big deal about it on the blog. Some people who try meetups out realize they love ACX meetups and start going regularly. Since learning that, I’ve tried to make a big deal about it on the blog twice annually, and it’s that time of year again. **If you’re willing to organize a meetup for your city please [fill out the organizer form](https://tinyurl.com/acx-volunteer) by March 26th.** The form will ask you to pick a location, time, and date, and to provide an email address where people can reach you for questions. It will also ask a few short questions about how excited you are to run the meetup to help pick between multiple organizers in the same city. One meetup per city will be advertised on the blog, and people can get in touch with you about details or just show up. Organizing an ACX Everywhere meetup can be easy. Pick a time and a place (parks work well if you think there will be a lot of people, cafes or apartments work fine for fewer) and show up with a sign saying “ACX Meetup.” You don’t need to have discussion plans or a group activity. If you want to make the experience better for people, you can bring nice things like nametags, food and drinks, or games. Meetups Czar Skyler can reimburse you for the nametags, food, drinks, and other things like that, though reimbursements are likely going to go out slower than last year. Here’s a short FAQ for potential meetup organizers: **1. How do I know if I would be a good meetup organizer?** If you can put a name/time/date in a box on Google Forms and show up there, you have the minimum skill necessary to be a meetup organizer for your city, and I recommend you volunteer. Don’t worry, you volunteering won’t take the job away from someone more deserving. 
The form will ask people how excited/qualified they are about being an organizer, and if there are many options, I’ll choose between them. (Or Meetups Czar Skyler will.) But a lot of cities might not have an excited/qualified person, in which case I would rather the unexcited/unqualified people sign up, than have nobody available at all. If you *are* the leader of your city’s existing meetup group, please fill in the form anyway and say so. That lets me know you’re still active, and also importantly lets me know when your meetup is planned for. [This spreadsheet](https://docs.google.com/spreadsheets/d/1fCLmz4WrWCs6bINChpac86iDAiSFC9me7hb7SBlwF3Q/edit?gid=0#gid=0) shows the cities where someone has filled out the form, updated manually after checking it makes sense. If you don’t see your city listed, either nobody has yet signed up or they did it recently after the last check. Beware the Bystander Effect! **2. How will people hear about the meetup?** You give me the information, and on March 27th (or so), I’ll post it on ACX. An event will also be created on [LessWrong’s Community](https://www.lesswrong.com/community) page. **3. When should I plan the meetup for?** Since I’ll post the list of meetup times and dates around March 27th, please choose sometime after that. Any day April 1st through May 31st is okay. Weekends are usually good, since it’s when most people are available. You’ll probably get more attendance if you schedule for at least one week out, but not so far out that people will forget - so mid April or early May would be best. If you’re in a college town, it might be worth checking the local graduation dates and avoiding those. **4. How many people should I expect?** Historically these meetups get anywhere from zero to over a hundred. Meetups in big US cities (especially ones with universities or tech hubs) had the most people; meetups in non-English-speaking countries had the fewest. 
You can see a list of every city and how many attendees most of them had last time [here](https://docs.google.com/spreadsheets/d/1awPp1g2YigcGXOqaLPb8ecED0kRra9Q_KRcG-uyHomA/edit?usp=sharing). Plan accordingly. If it looks like your city probably won’t have many attendees, maybe bring a friend or a book so you’ll have a good time even if nobody shows up. **5. Where should I hold the meetup?** A good venue should be easy for people to get to, not too loud, and have basic things like places to sit, access to toilets, and the option of acquiring food and water. City parks and mall common areas work well. If you want to hold the meetup at your house, remember that this will involve me posting your address on the Internet. If you want to hold the meetup at a pub or bar, remember that college students or parents with children who want to attend might not be able to get in. **6. What should I do at the meetup?** Mostly people just show up and talk. If you’re worried about this not going well, here are some things that can help: * Have people indicate topics they’re interested in by writing something on their nametag. * Write some icebreakers / conversation starters on index cards (e.g. “What have you been excited about recently?” or “How did you find the blog?” or “How many feet of giraffe neck do you think there are in the world?”) and leave them lying around to start discussions. * Say hello to people as they arrive and introduce yourself. In general I would warn against trying to impose mandatory activities (e.g. “now we’re all going to sit down and watch a PowerPoint presentation”), but it’s fine to give people the *option* to do something other than freeform socializing (e.g. “go over to that table if you want to play a game”). **7. Is it okay if I already have an existing meetup group?** Yes. 
If you run an existing ACX meetup group, just choose one of your meetings which you’d like me to advertise on my blog as the official meetup for your city, and be prepared to have a larger-than-normal attendance who might want to do generic-new-people things that day. If you’re a LW, EA, or other affiliated community meetup group, consider carefully whether you want to be affiliated with ACX. If you decide yes, that’s fine, but I might still choose an ACX-specific meetup over you, if I find one. I guess this would depend on whether you’re primarily a social group (good for this purpose) vs. a practical group that does rationality/altruism/etc activism (good for you, but not really appropriate for what I’m trying to do here). I’ll ask about this on the form. **8. If this works, am I committing to continuing to organize meetup groups forever for my city?** The short answer is no. The long answer is no, but it seems like the sort of thing somebody should do. Many cities already have permanent meetup groups. For the others, I’ll prioritize would-be organizers who are interested in starting one. If you end up organizing one meetup but not being interested in starting a longer-term group, see if you can find someone at the meetup who you can hand this responsibility off to. I know it sounds weird, but due to the way human psychology works, once you’re the meetup organizer people are going to respect you, coordinate around you, and be wary of doing anything on their own initiative lest they step on your toes. If you can just bang something loudly at the meetup, get everyone’s attention, and say “HEY, ANYONE WANT TO BECOME A REGULAR MEETUP ORGANIZER?”, somebody might say yes, even if they would never dream of asking you on their own and wouldn’t have decided to run things without someone offering. If someone does want to run things regularly, you or they can offer to collect people’s names and emails if they’re interested in future meetups. 
You could do this with a pen and paper, or if you’re concerned about reading people’s handwriting, you could use a QR code/bitly link to a Google Form. **9. Are you (Scott) going to come to some of the meetups?** I have in the past, but this year I’ll probably only be able to make my local one in Berkeley. **10. What if I have other questions?** Skyler and I will read the comments here. Again, [you can find the meetup organizer volunteer form here](https://tinyurl.com/acx-volunteer). If you want to know if anyone has signed up to run a meetup for your city, you can view that [here](https://docs.google.com/spreadsheets/d/1fCLmz4WrWCs6bINChpac86iDAiSFC9me7hb7SBlwF3Q/edit?gid=0#gid=0). Everyone else, just wait until around 3/27 and I’ll give you more information on where to go then.
Skyler
189904237
Spring Meetups Everywhere 2026 - Call For Organizers
acx
# Last Rights *[This is a guest post, written by David Speiser, author of the [Ollantay](https://www.astralcodexten.com/p/your-review-ollantay) review in last year’s Non-Book Review contest. David provided the concept and original draft; Scott edited the final version. Remaining mistakes are likely mine (Scott’s)]* ## The Problem Everyone hates Congress. That [poll](https://www.salon.com/2013/01/08/poll_congress_less_popular_than_cockroaches_nickelback/) showing that cockroaches are more popular than Congress is now thirteen years old, and things haven’t improved in those thirteen years. Congressional approval dipped below 20% during the Great Recession and hasn’t recovered since. A republic where a supermajority of citizens neither like nor trust their representatives does not rest on the most stable of foundations, so it should not be shocking that the legislative branch is being subsumed by the executive. What’s the solution? Many have been proposed, some with very snazzy websites. [FairVote](https://fairvote.org/resources/why-congress-is-broken-2025/) thinks that ranked choice voting and proportional representation will solve it. The Congressional Reform Project has [another](https://www.congressionalinstitute.org/congressional-reform/) snazzy website with such bold proposals as “Increase the opportunity for Members to form relationships across party lines, including by bipartisan issues conferences.” [There](https://issueone.org/issues/fixing-congress/) [are](https://www.fixourhouse.org/) [more](https://global.upenn.edu/penn-washington/the-fixing-congress-community/) [think](https://bpcaction.org/reforming-congress/) [tanks](https://www.amacad.org/ourcommonpurpose/initiative/enlarging-house-representatives). They want to enlarge the House by a few hundred members, switch to a biennial budget system, spend more on Congressional staffers, and introduce term limits, among many other suggestions. There are op-eds too. 
Here’s how the Atlantic [wants](https://www.theatlantic.com/ideas/archive/2024/06/congress-reform-filibuster-constitution/678604/) to fix Congress. The New York Times of course has a [solution](https://www.nytimes.com/interactive/2025/01/14/opinion/fix-congress-proportional-representation.html). Here on Substack, Matt Yglesias thinks proportional representation is [the solution](https://www.slowboring.com/p/proportional-representation-is-the), and Nicholas Decker has an especially interesting [solution](https://nicholasdecker.substack.com/p/how-to-save-american-democracy). These proposals, no matter which direction they’re coming from, have two things in common. The first is that they largely agree on the problem: members of Congress are disconnected from their constituents. Thanks to a combination of huge gerrymandered districts, national partisan polarization, and the influence of large donors, a representative has little incentive to care about the experience of individual people in their district. The second thing that all these proposed solutions have in common is that none of them will ever be implemented. They all involve acts of Congress - and members of Congress have no incentive to vote to change broken systems that currently benefit them. Why would you want to stop gerrymandering when it’s the reason you don’t have to run a real campaign to stay in office? Why would you vote to give yourself more work? Why would you vote to make it harder for people to give you money? If we want to fix Congress, we need a solution that doesn’t involve Congress. Luckily for us, such a solution exists: if we get 27 states to ratify the Congressional Apportionment Amendment, then we can make some real progress towards fixing Congress without Congressional buy-in. This solution is not a new idea. It comes up every few years and gets little traction. My hope in writing this piece is that it gets more traction now. 
## The Only A+ Ever Given At The University Of Texas

In 1789, Congress passed the Bill of Rights, containing twelve Constitutional amendments meant to protect the American people. Ten of these twelve were ratified by the states and became law. Two failed and were forgotten.

Eighty-three years later - in 1872 - Congress voted itself a pay raise[1](#footnote-1). In fact, the raise was made effective as of two years prior, meaning that every member of Congress immediately received two years of back pay. The American people were outraged, especially after an economic crisis hit later that year. In the midst of the backlash, a member of the Ohio state legislature remembered one of the two failed amendments from the Bill of Rights - the pay-raise amendment - which read:

> No law, varying the compensation for the services of the Senators and Representatives, shall take effect, until an election of Representatives shall have intervened.

In other words, if Congress votes itself a pay raise, it can't take effect before the next election cycle. Ohio decided that it was better late than never, and became the 9th state to ratify the amendment, almost a century after the first eight. But it still wasn't enough, and besides, the American people punished Congress in a more traditional way: they voted the Republican majority out of office and handed the chamber to the Democrats. Everyone forgot the amendment a second time.

One hundred ten years later - in 1982 - an undergrad at the University of Texas at Austin wrote a paper on the pay-raise amendment, mentioning that there wasn't *technically* anything in the Constitution that said that amendments had expiration dates. He got a C on the paper and very reasonably turned that into a decade-long crusade to prove his teacher wrong. He started a nationwide campaign to get state legislatures to ratify the amendment.
In 1992, he succeeded: the 38th state approved the provision, and it was added to the Constitution as what is now the Twenty-Seventh Amendment. The crusade worked; thirty-four years after the original paper, his political science teacher submitted a petition to the university to retroactively change his grade to an A+; since there is no A+ on the official UT grading rubric, this became the only A+ ever given in the history of the University of Texas.

That means eleven of the original twelve Bill of Rights amendments have made it into the Constitution. There's only one left. It's been ratified by eleven states already. If twenty-seven more states agree, it will become the law of the land.

It is the right to Giant Congress.

## The Right To Giant Congress

Here is the text of the Congressional Apportionment Amendment, the sole unratified amendment from the Bill of Rights:

> After the first enumeration required by the first article of the Constitution, there shall be one Representative for every thirty thousand, until the number shall amount to one hundred, after which the proportion shall be so regulated by Congress, that there shall be not less than one hundred Representatives, nor less than one Representative for every forty thousand persons, until the number of Representatives shall amount to two hundred, after which the proportion shall be so regulated by Congress, that there shall not be less than two hundred Representatives, nor more than one Representative for every fifty thousand persons.

In other words, there will be one Representative per X people, depending on the size of the US. Once the US is big enough, it will top out at one Representative per 50,000 citizens. (If you've noticed something off about this description, good work - we'll cover it in the section "A Troublesome Typo", near the end.)

The US is far bigger than in the Framers' time, so it's the 50,000 number that would apply in the present day.
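The tier structure can be sketched in a few lines of Python. This is an illustrative sketch, not an official apportionment formula: the function name is mine, it follows the intended reading of the final clause (the section "A Troublesome Typo" explains why the literal text differs), and it divides the national population directly rather than apportioning state by state.

```python
# A minimal sketch of the CAA's tiered floors on House size, under the
# intended reading of the third clause ("not less than one Representative
# for every fifty thousand persons"). Illustrative only.

def min_house_size(population: int) -> int:
    if population < 3_000_000:
        # First tier: one Representative for every 30,000 persons,
        # until the House reaches 100 members.
        return population // 30_000
    if population < 8_000_000:
        # Second tier: at least 100 members, and at least one
        # Representative per 40,000 persons.
        return max(100, population // 40_000)
    # Third tier: at least 200 members, and at least one
    # Representative per 50,000 persons.
    return max(200, population // 50_000)

print(min_house_size(350_000_000))  # 7000
```

State-by-state apportionment, with each state's seats rounded separately, is why the article's 6,641 figure for the 2020 census differs slightly from this straight national quotient.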
This would increase the size of the House of Representatives from 435 reps to 6,641[2](#footnote-2). Wyoming would have 12 seats; California would have 791. Here's a map:

This would give the U.S. the largest legislature in the world, topping the 2,904-member National People's Congress of China. It would land us right about the middle of the list of citizens per representative, at #104, right between Hungary and Qatar (we currently sit at #3, right between Afghanistan and Pakistan).

Would this solve the issues that make Congress so hated? It would be a step in the right direction. Our various think tanks identified three primary reasons behind the estrangement of Congress and citizens: gerrymandering, national partisan polarization, and the influence of large donors. This fixes, or at least ameliorates, all of them.

**Gerrymandering:** Gerrymandering many small districts is a harder problem than gerrymandering a few big ones. Durable gerrymandering requires drawing districts with the exact right combination of cities and rural areas, but there are only a limited number of each per state. With too many districts, achievable margins decrease and the gerrymander is more likely to fail.

We can see this with state legislatures vs. congressional delegations. A dominant party has equal incentive to gerrymander each, but most states have more legislature seats than Congressional ones, and so the legislatures end up less gerrymandered. Here are some real numbers from last election cycle[3](#footnote-3):

So for example, in Republican-dominated North Carolina, 50.9% of people voted Trump, 60% of state senate seats are held by Republicans, and 71.4% of its House seats belong to Republicans. The state senate (50 seats) is only half as gerrymandered as the House delegation (14 seats). In many states, the new CAA-compliant delegation would be about the same size as the state legislature, and so could also be expected to halve gerrymandering.
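Footnote 3's "error" metric is simple enough to compute directly. A toy version, using the North Carolina figures quoted above (the function name is mine):

```python
def representation_error(vote_share_pct: float, seat_share_pct: float) -> float:
    """Gap, in percentage points, between a party's share of seats
    and its share of the statewide vote (footnote 3's 'error')."""
    return abs(seat_share_pct - vote_share_pct)

# North Carolina, per the figures in the text:
trump_vote = 50.9
senate_error = representation_error(trump_vote, 60.0)   # 50-seat state senate
house_error = representation_error(trump_vote, 71.4)    # 14-seat US House delegation

print(round(senate_error, 1), round(house_error, 1))  # 9.1 20.5
```

The 50-seat senate's 9.1-point error is less than half the 14-seat delegation's 20.5 points, which is the sense in which the senate is "half as gerrymandered."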
As a bonus, the Electoral College bias towards small states would be essentially solved. Currently, a Wyomingite's presidential vote controls three times as many electoral votes as a Californian's. Under the CAA, both states would be about equal.

**Money:** This one is intuitive. If you can effectively buy one of 435 House elections, you've bought 0.23% of Congress. If the same money only buys you 0.02% of Congress, you're less incentivized to try to buy House elections and more incentivized to try to buy Senate seats, or just to gain influence within a given political party. Money in politics is still a thing, but it becomes much harder to coordinate across so many members. This makes it easier for somebody to run for Congress without having to fundraise millions of dollars. Because it's less worth it to spend so much money on any one seat, elections to the House become cheaper[4](#footnote-4).

**Polarization:** Some of the think tanks that want to increase the size of Congress by a few hundred members rather than a few thousand [claim](https://www.amacad.org/news/new-academy-report-makes-case-enlarging-house-representatives) that this increase will fix political polarization by making representatives more answerable to their constituents, who tend to care more about local issues than national ones. I'm more skeptical of this claim, mainly because it seems that all politics is national politics now. There's one newspaper and three websites and all they care about is national politics. My Congressional representative ran for office touting her background in energy conservation and water management, arguing that in a drying state and a warming climate we really need somebody in Congress who knows water problems inside and out. Now that she's actually in Congress, it seems that her main job is calling Donald Trump a pedophile[5](#footnote-5).
The incentives here are to get noticed by the press and to go viral talking about how evil the other side is, so that people who are angry at the evil other side will give you money and you can win your next election. But maybe Big Congress can solve that. Maybe in a district of fewer than 50,000 people there will be less incentive to go viral and more incentive to connect with your constituents. At the very least, it seems that people trust their state representatives [more](https://news.gallup.com/poll/512651/americans-trust-local-government-congress-least.aspx). And when my state representative and my state Senator tell me about the good work that they've done and ask me to vote for them again, they point to legislation that they've passed, not clips of them calling their opponents pedophiles.

### Won't Congress Become Unmanageable?

At first, probably yes! The Capitol Building couldn't fit a 6,641-person Congress, let alone all of the extra staffers and administrative personnel who would come with it. We'd need to build a new monument to the largest democratic body in the history of the world. This is a good thing.

But it would also become conceptually unmanageable, with individual members having more trouble networking with one another and sounding out consensus. I expect that out of necessity, the House would take on a more parliamentary form, with the party as the baseline unit for decision-making. Then the big negotiations become those between parties, not between individuals.

### Why Should I Support This?

**Democrats:** You're about to take a beating in the next census. California is moving to gerrymander its Congressional delegation, but it's also going to [lose](https://www.brennancenter.org/our-work/analysis-opinion/how-states-seats-us-house-could-change-after-next-census) four seats. Texas is moving to gerrymander its delegation even more aggressively, and it's going to gain four seats. Florida is going to gain three. Illinois and New York are losing seats.
Across the board it's bad news; while you might come out on top in this year's elections, you're going to lose the gerrymandering battle come 2030. Ratifying the CAA will make the battle that much fairer for you.

**Republicans:** You're about to take a [beating](https://www.natesilver.net/p/generic-ballot-average-2026-nate-silver-bulletin-congress-polls) in the midterms. The aggressive gerrymandering in Texas could easily backfire in a blue year, and California just passed the "I Hate Republicans" act to gerrymander that state as well. Ratifying the CAA is a way to blunt the effect, and let your colleagues in Illinois and California and New England have their voices heard.

But there's a bigger reason for you to want to support this. If you're a Republican in 2026, you exist to serve Donald Trump and his vision for America. You want to help Donald Trump recreate America in his image. The image of America will be the image of the new Capitol Building, and Donald Trump will lead this design. You saw how excited he was about the east wing of the White House; imagine how ecstatic he would be to get to design the Donald J. Trump Capitol Building. Imagine how owned all those Washington libs will be when they walk by the giant golden statue of Donald Trump that hosts Congress.

**Libertarians/Communists/Greens/etc:** Third parties are at their nadir right now. Zero state or national legislative seats are currently occupied by third parties, which is historically unusual. But increasing the size of Congress would give a shot in the arm to third parties. Getting 25,000 people to vote for you seems much more doable, especially if the whole party goes all-in on one seat. And it only takes one. I gotta believe that the Libertarians could win a Congressional seat in New Hampshire. The Communists could win one in Seattle. And once you get one seat, then it's off to the races.
Getting national recognition as one of 6,641 is really hard - joining or forming a third party is the kind of thing that gets you press. This is speculation - I have no data to back it up - but I fully expect that we would see a big uptick in third party representation and membership. The CAA is exactly what the Libertarians need to break out of their funk.

**State legislators:** Because you have an opportunity here. The most likely people to be elected to the new Big Congress are those who already have political experience and know what it takes to win an election in a small district. If you vote to ratify the CAA, odds are good that you'll be among those elected to fill the ranks of Big Congress. And you've always wanted to be there in Washington. We both know it.

## A Troublesome Typo

The second clause of the amendment describes the situation when the US population is between 3 million and 8 million. It says (my bolding):

> *There shall be not **less than** one hundred Representatives, nor **less than** one Representative for every forty thousand persons*

Sounds reasonable enough. This is making the straightforward claim that there should be many representatives, and a high representative-to-constituent ratio.

The third clause of the amendment describes the situation when the US population is greater than 8 million people (i.e. the situation we're in now). It says:

> *There shall not be **less than** two hundred Representatives, nor **more than** one Representative for every fifty thousand persons.*

Notice the non-parallelism with the second clause. The second clause was two less-thans, meaning many representatives and a high representative-to-constituent ratio. The third clause is a less-than followed by a more-than, meaning many representatives and a *low* representative-to-constituent ratio.

Aren't these two goals - many representatives, and a low representative-to-constituent ratio - in tension? Yes.
In fact, the clause is mathematically impossible to satisfy at populations between eight and ten million. For example, with nine million Americans, we need *at least* two hundred representatives, but *no more than* 9,000,000/50,000 = 180 representatives. Obviously there is no number which is both at least 200 and at most 180, so this makes no sense.

At other population sizes, the clause does the opposite of what its framers intended, saying that the legislator-to-constituent ratio should be *low* and Congress has to be *small*. For example, at the current US population, the clause merely says that Congress must be *smaller* than 6,641 representatives, meaning that the current Congress size is fine and nothing changes.

The simple explanation is that this is a typo. The people who wrote the law had three clauses, and meant to say "less than . . . less than" in each. But in the third clause, they said "less than . . . more than". This has been noticed and acknowledged for over two hundred years.

So we have a potential Constitutional amendment which says the opposite of what it definitely means. If passed, this would set us up for a court case that directly pits the legal school of textualism (you need to follow the law as written) against originalism (you need to follow what the people who wrote the law meant). These two schools are often in oblique and complicated conflict. But as far as we know, they've never faced so direct a test as a section of the Constitution with an obvious-for-two-hundred-years typo that inverts its meaning. All the Supreme Court Justices who have previously gotten away with talking about how the law is subtle and complicated would have to finally just decide whether textualism or originalism is right, no-take-backs, once and for all. It would be hilarious.
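One quick way to see the contradiction is to enumerate the House sizes the literal text allows. A minimal sketch (the helper name is mine):

```python
def literal_third_clause(population: int) -> range:
    """House sizes permitted by the third clause as literally written:
    not less than 200 Representatives, nor more than one Representative
    per 50,000 persons."""
    return range(200, population // 50_000 + 1)

# Between eight and ten million people, no size satisfies both bounds:
print(len(literal_third_clause(9_000_000)))      # 0 - range(200, 181) is empty
# Above ten million the clause is satisfiable, but only as a cap on size:
print(435 in literal_third_clause(350_000_000))  # True - today's House passes
```

The empty range at nine million is the paradox; the second check shows why, at modern populations, the literal text merely caps House size rather than mandating Giant Congress.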
The most likely outcome would be that they would bow to two hundred years of obvious criticism of this incorrectly-worded law, agree that it meant to say that the legislator-to-constituent ratio must be high, and we would get Giant Congress.

But there's a remote chance that the textualists would win after all. This wouldn't make things worse - Congress would be constitutionally banned from having more than 6,641 representatives, but this was hardly in the cards anyway. It would also mean that if the US population ever declined to between eight and ten million - admittedly another thing that's not really in the cards - the Constitution would become logically impossible to follow, and America would officially be a paradox.

If the population ever declined to between eight and ten million people, this probably would not be our biggest problem. But it might be the funniest.

## The Path To 38

A constitutional amendment must be ratified by 3/4 of states; that's 38/50. Eleven have ratified it already, so we need 27 more. Of the 39 states that have not ratified the CAA, 13 have legislatures run by Democrats and 25 have legislatures run by Republicans. This has to be a bipartisan effort.

But it's no worse than the situation with the Twenty-Seventh Amendment. Gregory Watson, the previously mentioned Texas undergraduate, got it passed with $6,000 of his own money and a very dedicated letter-writing campaign. The Congressional Apportionment Amendment may require more work, but the precedent is there.

If you're a state legislator, or if you know a state legislator, or if you want to be a state legislator after they all move up to Washington, then please introduce a motion to ratify this amendment. And tell all your colleagues that, if they ratify it too, they'll get to be real Congressmen and Congresswomen. We can have the largest legislative body in the world. We can build monuments again. We can have real third parties again.
Either that, or we'll turn the Constitution into a paradox and our government will vanish in a puff of logic. Still probably beats what's going on now.

[1](#footnote-anchor-1) Of around $67k/year in 2026 dollars.

[2](#footnote-anchor-2) Under the 2020 census. The number would change upon each subsequent census. In 2030, it will probably be around 6,980.

[3](#footnote-anchor-3) In case this smacks of cherry-picking, [here](https://docs.google.com/spreadsheets/d/e/2PACX-1vR1mpI7XonQL7O2Wg4IsvKHpFjgi0v5Z8ft7KyhXs7Sa3ohAqXYPhZTTNxA9zHs-3AVQ8J63kex-m4m/pubhtml#gid=0) is a breakdown of the "error" in every state's Congressional delegation, state house delegation, and state senate delegation. "Error" here is defined as the difference between the representation of each state's delegation and the percentage of that state that voted for Trump over Harris (or vice versa). In only two states, Florida and Virginia, is the error greatest in the largest body, and both of those states would have Congressional delegations larger than that largest body. In the case of Florida, their delegation would be nearly quadruple the size of their state house.

[4](#footnote-anchor-4) There could also be an effect from the structure of the TV market. Stations sell ads by region, and each existing media region is larger than the new Congressional districts. So absent a change in market structure, a candidate who wanted to purchase TV advertising couldn't target their own district easily; they would have to overpay to target a much larger region.

[5](#footnote-anchor-5) And just to harp on this more, we just blew by the Colorado River Compact agreement deadline and now the federal government is going to start mandating cuts; everybody's going to sue everybody else. Lake Powell is quite possibly going to dead pool this year, and as far as I can find the congressperson who ran on water issues is saying nothing about it.
Scott Alexander
190585065
Last Rights
acx
# Open Thread 424

This is the weekly visible open thread. Post about anything you want, ask random questions, whatever. ACX has an unofficial [subreddit](https://www.reddit.com/r/slatestarcodex/), [Discord](https://discord.gg/RTKtdut), and [bulletin board](https://www.datasecretslox.com/index.php), and [in-person meetups around the world](https://www.lesswrong.com/community?filters%5B0%5D=SSC). Most content is free, some is subscriber only; you can subscribe **[here](https://astralcodexten.substack.com/subscribe?)**. Also:

---

**1:** Mox asks me to advertise **[their 2026 fundraiser](https://manifund.org/projects/mox-2026-fundraiser)**. They're a rationalist/EA coworking space in San Francisco that hosts ACX meetups, ACX grants infrastructure, AI safety work, and more. And while I'm advertising them, they also offer deals on [personal](https://moxsf.notion.site/memberships) and [organizational](https://moxsf.com/offices) office space.

**2:** [StopTheRace.ai](https://stoptherace.ai/) will be holding **[a protest on Saturday, March 21](https://luma.com/s0k8wvee)** in front of major AI company offices, asking them to commit to a mutual pause (ie to stop AI research if every other AI company in the world agrees to do so). Demis Hassabis of Google DeepMind has already informally agreed to something like this in principle (which is why GDM isn't being protested), and Anthropic has expressed interest but its [new responsible scaling policy](https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsible-scaling-policy-v3) stops short of an explicit commitment. I think this is a reasonable ask, albeit one so unlikely to happen that the protests will probably do more to raise awareness than to serve as a concrete plan in themselves. If you're curious about the details of an AI pause, I expect to be able to provide more information in a few months.
**3:** ACX grantee Markus Englund **[announces a first set of results](https://www.sciencedetective.org/scientific-datasets-are-riddled-with-copy-paste-errors/)** from his project to automate anomaly detection in scientific data, finding serious and reportable data issues in eighteen papers, including an influential study linking Parkinson’s to the gut. He plans to scale up his efforts by over an order of magnitude in the year ahead.
Scott Alexander
190384458
Open Thread 424
acx
# SEIU Delenda Est

California lets interest groups propose measures for the state ballot. Anyone who gathers enough signatures (currently 874,641) can put their hare-brained plans before voters during the next election year.

This year, the big story is the 2026 Billionaire Tax Act, a 5% wealth tax on California's billionaires. Your views on this will mostly be shaped by whether or not you like taxing the rich, but opponents have argued that it's an especially poorly written proposal:

* It includes a tax on "unrealized gains", like a founder's share of a private company which hasn't been sold yet. This could be [an existential threat to](https://x.com/zoink/status/2005093365243908226) the Silicon Valley model of building startups that are worth billions on paper before their founders see any cash. Since most billionaires keep most of their wealth in stocks, any wealth tax will need some way to reach these (cf. complaints about the "buy, borrow, die" strategy for avoiding taxation). But there are better ways to do this (for example, taxing at liquidation and treating death as a virtual liquidation event), other [wealth tax proposals](https://www.taxnotes.com/featured-analysis/billionaire-mark-market-reforms-response-susswein-and-brown/2022/07/21/7dmq3) have included these, and the California proposal doesn't.

* It appears to value company stakes by voting rights rather than ownership, so a typical founder who maintains control of their company despite dilution might see themselves taxed for more than they have. Garry Tan [explains the math here](https://x.com/garrytan/status/2009776299666223265) with reference to Google. However, [Current Affairs has a good article](https://www.currentaffairs.org/news/every-argument-against-the-california-billionaire-tax-is-wrong) (?!) that pushes back, saying the proposal exempts public companies like Google.
Although private companies would still be affected, this would be so obviously unfair that founders would easily win an exemption based on a provision allowing them to appeal nonsensical results. Still, some might counterobject that proposed legislation is generally supposed to be good, rather than so bad that its victims will easily win on appeal.

* It's retroactive, applying to billionaires who lived in California in January, even though it won't come to a vote until November. Proponents argue that this is necessary to prevent billionaire flight; opponents point out that alternatively, billionaires could flee before the tax even passes (as some [have already done](https://www.foxbusiness.com/real-estate/billionaires-flee-california-within-seven-days-proposed-wealth-tax-inside-miami-migration)). One plausible result is that the tax fails (either at the ballot box or the courts), but only after spurring California's richest taxpayers to flee, leading to a net *decrease* in revenue.

* Some people [propose](https://x.com/KelseyTuoc/status/2029353580810125796) that it could decrease state revenues overall even if it passed, if it drove out enough billionaires, though others [disagree](https://x.com/jdcmedlock/status/2029356544182419560). Pro-tech-industry newsletter *Pirate Wires* [finds](https://www.piratewires.com/p/exodus-the-largest-wealth-flight) that 20 out of 21 California tech billionaires interviewed were "developing an exit plan" and quotes an insider saying that "if this tax actually passes, I think the technology industry kind of has to leave the state".

Even Gavin Newsom, hardly known for being an anti-tax conservative, [has argued](https://www.politico.com/news/2026/01/12/newsom-unloads-on-california-wealth-tax-proposal-00723732) that it "makes no sense" and "would be really damaging".
The ACX legal and economic analysis team (Claude, GPT, and Gemini) [doubt](https://chatgpt.com/share/6975acdb-7e14-8001-8f29-29defecd2bc6) the direst warnings, but agree that the tax is of dubious value and its provisions poorly suited to Silicon Valley.

On one level, it's no surprise that California, a state full of bad socialists, is considering bad socialist policy. But I think this is the wrong perspective. This proposition isn't being sponsored by some generic group of Piketty-reading leftists. It's the project of SEIU (Service Employees International Union), a union of mostly healthcare workers.

This immediately clarifies the debate about whether it's net negative for revenue. 90% of the revenue from the tax is earmarked for health care. So even if it's net negative for the state, it isn't net negative for the health care budget in particular, ie for the people who are sponsoring the measure.

But we can get even more conspiratorial. The SEIU is known in California political circles for pioneering and perfecting the art of extortion via ballot initiative. Their usual strategy goes:

1. Propose a ballot initiative that will sound nice to voters, but which is actually deliberately designed to ruin some industry.
2. Demand concessions from that industry in exchange for withdrawing the initiative.

Their first extortion attempt (as far as I know) was the 2014 [Fair Healthcare Pricing Act](https://lao.ca.gov/ballot/2011/110758.pdf), which would have capped the amount hospitals were allowed to charge for procedures at some unsustainable amount. The hospital association [seemed to think](https://web.archive.org/web/20211026215704/https://hasc.org/blog-entry/powerful-labor-union-threatens-your-health-care) this was an existential threat:

> If the initiatives are approved by the voters, hospitals could not operate as they do now. It would be necessary for hospitals to restructure their business model and services provided.
> Additionally, hospitals would be faced with unprecedented decisions — "Which services must be eliminated or cutback?"; "How can the hospital operate without departmental cross-subsidization?"; and "How can strategic planning be conducted in a world of oppression and uncertainty?"

Although the hospitals themselves might be biased, the government's [mandatory fiscal analysis](https://lao.ca.gov/ballot/2011/110758.pdf) of the initiative seemed to agree, saying that "about 20 hospitals would change from having positive operating margins to having operating losses before taking into account any strategies these hospitals might implement in response to the measure."

But "help" was on the way. The SEIU offered to withdraw its initiative in exchange for a $100 million "donation" from hospital lobby groups to one of SEIU's pet causes, plus the right to expand their union into the affected hospitals. The hospitals [caved and gave them what they wanted.](https://nuhw.org/hospitals-bankroll-much-seiu-pact/)

The union was surprisingly frank in their celebration:

> [Union leader Dave] Regan said that the SEIU-UHW had spent $5 million on [backing the ballot initiatives], but that it paid off handsomely. "For a $5 million investment, we get an $80 million turn to pursue those things," Regan said. He observed that the CHA would have spent as much as $100 million to defeat the initiatives.

Buoyed by their success, SEIU identified dialysis clinics as their next target, and demanded similar union expansion rights (I can't find any information about whether they also wanted more cash). The dialysis clinics refused, and so began one of the most shameful chapters in California ballot history: The Eternal Kidney Proposition.

SEIU proposed a 2018 ballot proposition to cap dialysis clinic revenues at some unsustainable level. The clinics spent $100 million fighting it, "the most money raised for a campaign like this in California history", and it failed.

And then it was back!
In 2020, SEIU proposed a new packet of regulations for dialysis clinics, all of which probably sounded reasonable to the average voter but which had the overall effect of making them ruinously expensive to operate. The measures were opposed by the California Medical Association (representing doctors), the American Nurses Association (representing nurses), various patients' groups, and even the NAACP (black people are especially prone to kidney disease, and would be hardest hit). Once again, the clinics spent $100 million getting the message out, and the Californian public rejected it.

And then it was back again!

In 2022, SEIU proposed basically the same packet of regulations. All the same groups lined up against it, now joined by the Renal Physicians Association, the Renal Physician Assistants' Association, the National Kidney Association, and various veterans groups (older veterans are also commonly affected by kidney disease, and would also be hard-hit). After wasting another $100 million, the proposition was defeated a third time.

Somewhere in this process, Californians started to wonder what was going on. One dialysis proposition might be happenstance, two might be coincidence, but three was enemy action. In 2020, media nonprofit CalMatters published [Good Policy Or Ballot Blackmail?](https://calmatters.org/health/2020/10/california-healthcare-union-proposition-23/), trying to spread awareness of SEIU's extortion attempts. It focuses on SEIU leader Dave Regan's love of the tactic:

> [SEIU] sponsored Proposition 23 on the November ballot, which would add new regulations for dialysis clinics. It put a similar measure before voters in 2018, which they rejected. In the last two elections, it's also sponsored a measure to tax hospitals in the Los Angeles County city of Lynwood, and to cap prices at Stanford hospitals and clinics in several Bay Area Cities.
> And that doesn't count the many initiatives it began working on by collecting signatures but withdrew before they reached the ballot — including a minimum wage initiative in 2016, a pair of measures to limit hospital fees and executive pay in 2014, and two other initiatives to curb hospital bills and expand charity care in 2012.
>
> All told, these campaigns have cost the union at least $43 million, and resulted in no wins on the ballot in California — though union president Dave Regan says they've helped make progress in other ways. The practice has earned him a reputation as an aggressive labor leader who uses the initiative process to needle adversaries in the health care profession as he tries to expand membership in his union.
>
> "Dave Regan has made this into a strategy," said Ken Jacobs, chair of the UC Berkeley Labor Center, which researches unions […]

And on the opinions of other labor leaders:

> "There's great resentment toward him because of his 'my way or the highway' kind of way of dealing with other folks," said Sal Rosselli, who worked with Regan as part of the larger SEIU umbrella union for many years, but now heads the rival National Union of Healthcare Workers.
>
> Regan's frequent use of ballot measures is "dishonest with voters," Rosselli said. "He's not doing it to improve the quality of health care… He's doing it to gain leverage over the employers for top-down organizing rights."

The [Wall Street Journal](https://archive.is/bTEQV#selection-279.0-279.307) agreed, and even the more liberal [Los Angeles Times](https://www.latimes.com/california/story/2022-10-10/skelton-proposition-29-dialysis-california) described SEIU's work as "political extortion".

Given that all of SEIU's past progressive-sounding initiatives have been thinly-disguised extortion attempts, might this one be as well? The argument against: SEIU is entirely focused on healthcare and doesn't care about the tech industry.
The argument in favor: Gavin Newsom cares about the tech industry. And SEIU cares about Gavin Newsom. Governor Newsom has been eyeing the Democratic presidential nomination in 2028. He needs a reputation as a Sensible Moderate and plenty of billionaire donors. And there’s a clear path to the latter - as Silicon Valley tires of Trump’s random acts of economic devastation, some tech leaders are starting to regret their flirtation with right-wing populism and wonder whether the other side has a better offer. If everything goes exactly right, he can make it work. Instead, there’s this wealth tax, coming at the worst possible time. Newsom really, really wants it to go away. So, [Politico reports](https://www.yahoo.com/news/articles/gavin-newsom-moves-neutralize-tax-004500118.html), he’s been meeting with SEIU leader Dave Regan to see what’s on offer: > Gavin Newsom and his staff have quietly talked to the champion of a controversial wealth tax proposal seeking an off-ramp to defuse a looming ballot measure fight. > > The conversations, reported here for the first time, have occurred intermittently for months as SEIU-UHW’s ballot initiative targeting billionaires migrated from the backrooms of California politics to the center of a raging debate about Silicon Valley and income inequality, sparking tech titans’ wrath and vows to move out of state. > > “We’ve been at this for four months,” Newsom said in an interview with POLITICO, describing an “all-hands” effort that has included him meeting one-on-one with SEIU-UHW’s leader, Dave Regan. > > A compromise does not appear imminent. A union official cast doubt on the possibility of a deal, saying the two sides do not currently have another meeting scheduled and framing a ballot fight as an inevitability. My read: rather than a heartfelt attempt at redistribution, this is a heads-I-win-tails-you-lose gambit by the SEIU. If Governor Newsom offers them enough concessions and bribes, they’ll drop the initiative. 
If not, they’ll carry it through, maybe win, and get billions of dollars of extra health care spending, some of which will flow through to their members. Either way, whatever happens to the rest of the state isn’t their concern. One critique of capitalism argues that, although in theory it aligns incentives perfectly so that companies should produce things that people want, in practice it also incentivizes the hunt for loopholes: addictive products that can take advantage of seemingly-tiny wedges between what people will buy and what’s good for them. Cigarettes, casinos, payday loans, and social media all demonstrate that these wedges collectively form a multi-trillion dollar niche. In the same way, SEIU seems to have found a bug in direct democracy: it incentivizes interest groups to search for the most destructive possible ballot initiative that might nevertheless get approved by low-information voters, since this gives them leverage over anyone willing to bribe them into withdrawing their poison pill. Seems like an ignominious end for California’s ballot proposition system.
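The heads-I-win-tails-you-lose logic can be made concrete with a toy expected-value model. Every number below is hypothetical, chosen purely to illustrate the incentive structure; nothing here is an estimate of SEIU's actual finances.

```python
# Toy model of the ballot-initiative gambit: a sponsor files a measure,
# then either accepts concessions to withdraw it, or runs the campaign
# and hopes it passes. All numbers are hypothetical illustrations.

def sponsor_value(campaign_cost, concession_value, p_pass, spoils_if_passed):
    """Expected value to the sponsor of having filed the initiative."""
    run_it = p_pass * spoils_if_passed - campaign_cost
    return max(concession_value, run_it)

# Even a measure with long odds of passing is worth filing, because the
# threat alone has buyout value:
ev = sponsor_value(
    campaign_cost=100e6,     # the ~$100 million per campaign cited above
    concession_value=500e6,  # hypothetical buyout offered to withdraw the measure
    p_pass=0.3,              # hypothetical chance low-information voters approve it
    spoils_if_passed=3e9,    # hypothetical extra spending flowing to members
)
print(f"value of filing: ${ev / 1e9:.1f} billion")
```

Both branches of the `max()` are positive here, which is the heads-I-win-tails-you-lose structure: the more destructive the measure, the larger the buyout a targeted industry will pay to make it go away.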
Scott Alexander
185632270
SEIU Delenda Est
acx
# Open Hidden Open Thread 423.5 The Wednesday open threads are usually paid-subscriber only, but I’m making this one public to give people more space to talk about everything going on. Also: --- **1:** The OpenAI/Pentagon situation has evolved since Sunday’s ACX post (“[All Lawful Use: Much More Than You Wanted To Know](https://www.astralcodexten.com/p/all-lawful-use-much-more-than-you)”). For up-to-date analysis of the newest contract, I endorse this LW post from today: **[OpenAI’s Surveillance Language Has Many Potential Loopholes And They Can Do Better](https://www.lesswrong.com/posts/FSGfzDLFdFtRDADF4/openai-s-surveillance-language-has-many-potential-loopholes)**.
Scott Alexander
189932155
Open Hidden Open Thread 423.5
acx
# Mantic Monday: Groundhog Day ## Having Your Own Government Try To Destroy You Is (At Least Temporarily) Good For Business On Friday, the Pentagon declared AI company Anthropic a “supply chain risk”, a designation never before given to an American firm. This unprecedented move was seen as an attempt to punish, maybe destroy the company. How effective was it? Anthropic isn’t publicly traded, so we turn to the prediction markets. [Ventuals.com](https://app.ventuals.com/markets) has a “perpetual future” on Anthropic stock, a complicated instrument attempting to track the company’s valuation, to be resolved at the IPO. Here’s what they’ve got: Upon the “supply chain risk” designation, predicted value at IPO fell from about $550 billion to $475 billion - then, after a day or two, went back up to $550 billion. No effect! A coarser yes-no [Polymarket](https://polymarket.com/event/anthropic-500b-valuation-in-2026) tells the same story: The chance of Anthropic getting a $500 billion+ valuation in 2026 fell from 90% to 76%, before rebounding to 83%. Why have the markets shrugged off this seemingly important event? Partly it’s because Anthropic seems likely to win on appeal. Hegseth has said the government will keep using Anthropic for the next six months (undermining his case that they’re a national security risk) and has signed a substantially similar contract with OpenAI (undermining his case that their contract terms were unworkable). The prediction markets think the courts will be sympathetic: But even in the 28% of timelines where the designation sticks, things don’t seem so bad. Secretary of War Hegseth originally [tweeted](https://x.com/SecWar/status/2027507717469049070) that: > In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. 
> Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Framed this way, the Pentagon’s actions sound devastating. Anthropic relies on compute to train and run its AIs. Most of this compute is in data centers owned by Amazon, Google, and Microsoft. At least Amazon and Microsoft have contracts with the US military. If they had to drop Anthropic, it would make it impossible for the company to stay a frontier AI lab. But in their own [blog post](https://www.anthropic.com/news/statement-comments-secretary-war), Anthropic described the situation differently: > **If you are an individual customer or hold a commercial contract with Anthropic**, your access to Claude—through our API, claude.ai, or any of our products—is completely unaffected. > > **If you are a Department of War contractor**, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected. In other words, the “supply chain risk” designation only means that companies can’t use Anthropic products in their specific Department of War contracts. So if Amazon is doing 95% normal civilian cloud compute stuff, and 5% special government contracts, only 5% of their contracts are affected. This is trivial! Anthropic can keep all its compute and most of its business partnerships even with Department-of-War-linked companies! The lawyers who weighed in seem to think that Anthropic’s interpretation of the law is correct, and Secretary Hegseth’s interpretation confused. In some situations, this might be cold comfort - how much does it help to be right about the law when the government is wrong? But in this case, it probably helps a lot. Amazon, Google, and Microsoft are all big Anthropic investors - each owns about a 10% stake - and have multi-billion dollar AI compute contracts.
Together, the three tech giants must have at least $100 billion riding on Anthropic’s success. They also have good administration connections and great lobbyists, and even Hegseth isn’t stupid enough to pick fights with them all at once. So probably they send their lobbyists to have a talk with Hegseth about what the “supply chain risk” designation actually entails, Hegseth enforces the letter of the law, and Anthropic is barely affected. At least this is the story the prediction markets are going with: In this best-case scenario, Anthropic’s downside is losing some government contracts that made up ~5% of its business, plus some other Department-of-War-contractor contracts that probably add up to another ~5%. Against that, the upside is great publicity. Despite a lot of work and some controversial Superbowl ads, Anthropic had never before managed to overcome ChatGPT’s superior name recognition. But they seem to have finally done it: Claude [went from](https://techcrunch.com/2026/03/01/anthropics-claude-rises-to-no-2-in-the-app-store-following-pentagon-dispute/) #120 on the App Store in January, to #1 this weekend, apparently driven by people who heard about the Pentagon standoff and were impressed by their principled stance. This could have been a mixed blessing - Anthropic was previously trying to stand out as a B2B company while letting OpenAI have the dubious honor of producing consumerslop. But early signs suggest they might be winning over some companies too. From [a Reddit thread](https://www.reddit.com/r/technology/comments/1rhoi54/claude_hits_no_1_on_app_store_as_chatgpt_users/) on the topic: > As someone who manages IT for a mid-size company, this is actually a big deal. We were evaluating both Claude and ChatGPT for internal use and the Pentagon thing was basically the tipping point for us. 
> Not because we're government adjacent or anything, just because a company willing to walk away from a massive contract on ethical grounds is probably also going to handle our data more carefully than one racing to close every deal possible. The app store ranking makes sense to me. > Finance VP for a mid size tech, we’re moving completely away from ChatGPT/Copilot to Claude. I’m impressed with the prediction markets here - they’ve taken a bold and counterintuitive stance that I wouldn’t have otherwise considered (that these developments barely harm Anthropic) and made it legible, to the point where I basically believe it. ## The Midterms As Potential Crisis America will hold midterm elections on November 3. Incumbents always have a hard time during midterms, and Trump’s approval rating is low, so it’s expected to be a good year for Democrats. Prediction markets expect them to win at least the House (80% chance) and maybe even the Senate (20 - 40% chance). This simple story is complicated by two different Republican attempts to change voting law. Republicans generally believe there is significant fraud in elections, especially immigrants voting illegally, and propose strict ID requirements to prevent this. Most Democrats believe fraud is rare, and that strict ID requirements are more likely to disenfranchise normal voters who don’t have the right forms of ID available. The latest flashpoint in this battle is the SAVE Act, a Republican-sponsored bill which would require voters to show a passport, birth certificate, or Real ID when registering to vote for the first time or changing their registration.
It recently passed the House, but is on track to be filibustered by Democrats in the Senate: At the same time, there are rumors that the Trump administration is working on [an executive order](https://www.democracydocket.com/news-alerts/exclusive-read-the-draft-executive-emergency-order-for-trump-to-take-control-of-elections/) to declare a national emergency and take control of elections. The order would say that foreign countries have been rigging US elections (some commenters speculate that maybe Maduro could be granted clemency for “admitting” to this), and respond with a series of extreme measures. These would include banning voting machines, restricting vote-by-mail, and requiring all voters to re-register before the election. For what it’s worth, Trump has [denied all of this](https://thehill.com/homenews/campaign/5759186-trump-midterm-elections-national-emergency/), although his previous denial of Project 2025 makes this less reassuring. It looks like the markets are saying that Trump will try something, but maybe not the full executive order under discussion. Most commentators think the EO is unconstitutional, with [at least one liberal](https://www.democracydocket.com/news-alerts/white-house-circulating-blatantly-illegal-draft-emergency-order-to-take-control-of-elections/) arguing that it would be *good*, since it would force the courts to explain exactly how illegal all of this is. But if it somehow made it through the courts, the most likely outcomes could be: **Chaos** (at least according to the mostly-liberal commentators I’ve been reading). Do federal agencies really have the capacity to re-register every voter in the next six months (imagine the DMV lines!)? Can precincts really switch from voting machines to secure paper ballots during that period? Is there enough supply of the special holographic paper that the order demands for ballots? If not, what happens? Is the election so borked that we can’t figure out who controls Congress?
What happens then? At a minimum, lots and lots of court cases. **A blue wave**. This would be a somewhat surprising result of Republican policies, but it makes sense. All of these restrictions select for high-information, high-motivation voters - people who hear about the new rules and get fired up enough to hunt down their birth certificate, march down to the DMV, wait on line for one million hours, and re-register. Due to their education advantage and the structural features of midterms, that probably favors Democrats. Democrats are more likely to own passports (one of the easiest forms of valid ID), and less likely to trigger increased scrutiny by having changed their name recently (because liberal women are less likely to marry and take their husband’s surname). First-order, a blue wave like this is good for the left. But second-order, if the above factors lead to some completely implausible blue wave that makes no sense by normal election standards, then Republicans could decide the elections were illegitimate and we’re back at chaos again. **Too many degrees of freedom:** Do the Republicans understand the calculus above? One theory is that they plan to make up for it with degrees of freedom. There will be many small decisions about how strictly to enforce each rule, and maybe they’ll be lenient in Republican districts and strict in Democratic ones. The administration is trying to [purge potentially fraudulent voters from the rolls](https://www.brennancenter.org/our-work/analysis-opinion/federal-courts-reject-trump-administrations-attempts-obtain-private-voter) - a process with obvious potential for abuse (purged voters can re-register to prove their non-fraudulentness, but this adds an extra layer of complication, so if mostly Democrats get purged, this overall decreases the Democratic voter base). 
If the administration finds some way to disproportionately disenfranchise Democrats - or even if Democrats just believe they’ve done this - then Democrats might consider the election results illegitimate, and we would get - again - chaos. However, courts seem to be blocking all of these measures (except the SAVE Act, which is unlikely to pass Congress). It’s hard to see a world where the really disruptive ones get through. What do the markets say? This seems like a good sign that there won’t be mass voter disenfranchisement. But Metaculus expects a 25% chance that martial law is declared?! In every election he’s been involved in, Trump has either outright said he won’t accept a result that goes against him, or at least given mixed signals about this. In 2020, he took various extreme steps to overturn the election, including telling state officials to throw out ballots, demanding that the count be stopped, trying to get the Vice President to certify fake electors, and the January 6 protests. Will he try the same thing during the midterms? He might not care as much about elections where he’s not personally involved. Or he might use the same playbook, this time with a much more docile Republican party mostly purged of spine-havers like Mike Pence. If he tries this, probably Democrats will protest; if those Democratic protests become unruly, maybe he’ll declare martial law to shut them down. “Chaos” doesn’t even begin to describe this situation. Maybe the best headline summary of election forecasting is the “free and fair” questions, but they’re hard to interpret. A Manifold market with 25 forecasters gives a 41% chance that the elections aren’t considered “free and fair”. The resolution criterion is the opinion of international election observers and the mainstream media, who lean liberal.
In the past, these observers have sometimes given the US a less-than-perfect verdict - for example, OSCE described the 2024 US election as: > While the general elections in the United States demonstrated the resilience of the country’s democratic institutions, the election process took place in a highly polarized environment. The election was well run, and candidates campaigned freely across the country with the active participation of voters. However, the campaign was marred by disinformation and instances of violence, including harsh and intolerant rhetoric. Repeated, unfounded claims of election fraud negatively impacted public trust. …and they can probably find even more to complain about in a Trump-run election. Is this sufficient to create uncertainty around the resolution, and drop the probability to 40%? I’m not sure. But Metaculus has a similar question noting that “This question may resolve as Yes [even] if the EAC, the OSCE, or the Carter Center notes only isolated problems or areas for improvement”, and it’s at 92%, which is reassuring. I think the best summary of forecasters’ views on the midterms is that there’s a decent chance (~50%) Trump tries to change the rules around mail-in ballots, and a modest chance (~25%) he tries something more extreme - but that it probably won’t make much difference, the election will still be considered fair by international observers, and Democrats will still win. I’m very interested in creating better prediction markets about the fairness of the 2026 elections. If anyone has ideas for how to do this, let me know. ## Groundhog Day Tweeted by [the National Weather Service’s New York City branch](https://x.com/NWSNewYorkNY/status/2018330816120606731): Punxsutawney Phil, the famous Groundhog Day groundhog, actually has less than 50% accuracy in predicting the length of winter. At what point do we flip the legend and say that there’s more winter if he *doesn’t* see his shadow? But wait! 
Staten Island Chuck has an impressive 85% accuracy! The graphic says “since 1981”, which would imply 45 years of prognostication, but it looks like their source is [this site](https://www.noaa.gov/heritage/stories/grading-groundhogs), which only counts the last twenty years of data. That would also match the percent, since 85% of 20 is a round 17. In a separate analysis of 32 years, the Staten Island Zoo accords him an 81% success rate. That’s p = 0.0002 - plenty significant even after a Bonferroni correction for multiple magic groundhogs. So is the groundhog legend true? Seems like it can’t be - the legend originated with Punxsutawney Phil, who does worse than chance. What kind of crazy Gettier case would we have to believe in to have the original magic groundhog be a fraud but, coincidentally, have another groundhog a few hundred miles away be actual magic? A more prosaic explanation is that, according to [this site](https://groundhog-day.com/groundhogs/staten-island-chuck/predictions), Staten Island Chuck is almost a broken clock, predicting spring on 25/31 occasions. If early springs are more common than long winters on Staten Island, that fully explains the phenomenon. It could equally well explain [Mojave Max](https://groundhog-day.com/groundhogs/mojave-max), the legendary anti-oracular tortoise of Las Vegas, who has managed a 20% success rate over decades on what ought to be a coin flip - he won’t stop predicting long winter, and is nearly always wrong. ## Iran Warcasting Speaking of Groundhog Day, we’re bombing the Middle East again. Here’s what the markets have to say: These two well-behaved markets agree on a somewhat less than 50-50 chance that the current round of airstrikes topples the Iranian regime. [Alireza Arafi](https://en.wikipedia.org/wiki/Alireza_Arafi), a hardline cleric with no distinguishing characteristics, is weakly favored to succeed Khamenei as Supreme Leader.
Other contenders include Khomeini’s grandson and Khamenei’s son, and there is a 15% chance that they abolish the position before figuring out a successor. The Strait of Hormuz is the waterway between Iran and Arabia that many of the world’s oil tanker routes pass through. Iran is already threatening traffic in the strait; if it threatened it more, it might be able to damage the global economy. This wouldn’t really help anything - Iran is part of the global economy too - but it would probably feel good to annoy the US a little more than it could otherwise do. Realistically this all comes down to the resolution criteria - Iran will certainly threaten the Strait, but probably can’t keep it 100% closed forever. The criteria here specify decreasing a seven-day moving average of traffic to below 20% of its usual level, which forecasters seem to think is more likely than not. Manifold expects between 6 and 100 US casualties. Polymarket thinks the war will be over by March 31, but… …a Manifold market leaves some probability on it continuing until January (or perhaps restarting by then). Gotta say, I’m not seeing this one. Reza Pahlavi is the heir of the Shahs of Iran. Polymarket thinks that if the current regime falls, there’s about a 40% chance they’ll reinstate the monarchy. I found [this Marginal Revolution](https://marginalrevolution.com/marginalrevolution/2026/03/one-view-of-iranian-strategy.html) post helpful in making sense of the markets’ view on Iran. America hoped that killing the Ayatollah would provoke mass protests and make the regime collapse. That doesn’t seem to have happened, and the regime seems ready to appoint a new Supreme Leader and keep going. America’s strategy will be to keep killing as many higher-ups as possible and bombing Iranian military sites, in the hopes that eventually the populace rises up or the remaining ayatollahs fail to hash out a succession plan.
Iran’s strategy will be to just try to hold on, and cause enough pain for America and its allies that the US goes away sooner rather than later. Most likely America will either win or give up within a month, but there’s a long tail of outcomes with continued conflict until potentially as late as next year. ## MNX Stephen Grugett and Ian Philips of Manifold Markets have announced a new project, [MNX](https://mnx.fi/). MNX is a noncustodial cryptocurrency-based futures exchange offering financial products relating to AI, including some prediction-market-shaped ones. For example, [ECI26](https://testnet.mnx.fi/trade/eci26) lets users place bets on the highest score that an AI will attain on the [Epoch Capabilities Index](https://epoch.ai/benchmarks/eci) by the end of the year. Manifold is a great site, and I challenged Grugett on why he’s starting a new project. His answer: hedging. I didn’t transcribe all the details, but that’s fine, because Vitalik coincidentally wrote a pro-hedging manifesto last week. > Recently I have been starting to worry about the state of prediction markets, in their current form. They have achieved a certain level of success: market volume is high enough to make meaningful bets and have a full-time job as a trader, and they often prove useful as a supplement to other forms of news media. But also, they seem to be over-converging to an unhealthy product market fit: embracing short-term cryptocurrency price bets, sports betting, and other similar things that have dopamine value but not any kind of long-term fulfillment or societal information value. My guess is that teams feel motivated to capitulate to these things because they bring in large revenue during a bear market where people are desperate - an understandable motive, but one that leads to corposlop. > > I have been thinking about how we can help get prediction markets out of this rut. 
> My current view is that we should try harder to push them into a totally different use case: hedging, in a very generalized sense (TLDR: we're gonna replace fiat currency) > > Prediction markets have two types of actors: (i) "smart traders" who provide information to the market, and earn money, and necessarily (ii) some kind of actor who loses money. > > But who would be willing to lose money and keep coming back? There are basically three answers to this question: > > **1.** "Naive traders": people with dumb opinions who bet on totally wrong things > **2.** "Info buyers": people who set up money-losing automated market makers, to motivate people to trade on markets to help the info buyer learn information they do not know. > **3.** "Hedgers": people who are -EV in a linear sense, but who use the market as insurance, reducing their risk. > > (1) is where we are today. IMO there is nothing fundamentally morally wrong with taking money from people with dumb opinions. But there still is something fundamentally "cursed" about relying on this too much. It gives the platform the incentive to seek out traders with dumb opinions, and create a public brand and community that encourages dumb opinions to get more people to come in. This is the slide to corposlop. > > (2) has always been the idealistic hope of people like Robin Hanson. However, info buying has a public goods problem: you pay for the info, but everyone in the world gets it, including those who don't pay. There are limited cases where it makes sense for one org to pay (esp. decision markets), but even there, it seems likely that the market volumes achieved with that strategy will not be too high. > > This gets us to (3). Suppose that you have shares in a biotech company. It's public knowledge that the Purple Party is better for biotech than the Yellow Party. So if you buy a prediction market share betting that the Yellow Party will win the next election, on average, you are reducing your risk.
> > (*mathematical example: suppose that if Purple wins, the share price will be a dice roll between [80...120], and if Yellow wins, it's between [60...100]. If you make a size $10 bet that Yellow will win, your earnings become equivalent to a dice roll between [70...110] in both cases. Taking a logarithmic model of utility, this risk reduction is worth $0.58.)* See [the tweet](https://x.com/VitalikButerin/status/2022669570788487542) for more, including a suggestion that “the real solution [might be] to go a step further, and get rid of the concept of currency altogether”. MNX will not be getting rid of the concept of currency altogether. Their vision of a hedge market relies on some more prosaic beliefs. First, that Polymarket and Kalshi are doing a good job filling the gambling niche, Metaculus is doing a good job filling the information-aggregation niche, and hedging is the last prediction market niche capable of spawning a billion-dollar company. Actually, why set your sights so low? There’s currently two trillion dollars tied up in the derivatives market; a better hedge would be very lucrative. Second, that hedging is about to enter a renaissance. Even sophisticated hedge funds only hedge a few types of risk, because nobody wants to spend hundreds of hours sculpting a hedge portfolio that catches 99.99% of possibilities and changing it every few days as the market shifts form. But if the Agent Economy Of The Future brings the cost of intellectual labor down near zero, then there’s no reason not to do that. If you invest in a seaside resort, your AI can figure out the chance of a hurricane, *and* of a tsunami, *and* of an oil spill, *and* of a thousand other things, and buy a tiny share of each on the prediction markets, and feel confident that you’re expressing your exact thesis (seaside resorts are good) separate from any acts of God that might disturb it. Third, the past few years have seen dramatic advances in financial technology. 
Crypto traders have invented the [perpetual future](https://en.wikipedia.org/wiki/Perpetual_futures), a new instrument that tracks an asset without requiring anyone to own the asset involved. That means traders can buy and sell shares of SpaceX, OpenAI, and other nonpublic companies that won’t actually give you their shares. Hedging the price of nickel used to require someone somewhere in the process to own an actual warehouse full of nickel. Now you can skip that step. (the other technological sea change is that this is possible at all. Five years ago, cryptocurrency prediction markets were too complicated. In the late 2010s, a group called Augur raised $5 million for the project but never managed to create usable software. FTX flirted with prediction-like contracts but never got them off the ground even with all their billions. Polymarket was the first to really solve this, making $10 billion in the process, but even they were barely usable in the early days. But Stephen’s making MNX with his own money and a team of 1-2 people. He benefits partly from the vibecoding revolution, and partly from all of the billions of dollars spent on improving cryptocurrency rails - MNX uses the stablecoin USDC). MNX is focusing on AI for now, because it’s buzzy and there’s lots of money flowing into it. But if it goes well, it could one day expand to seaside resorts, nickel, and everything else. ## Elsewhere In Prediction Markets **1:** Substack CEO Chris Best [reports](https://on.substack.com/p/what-the-markets-are-saying) that the platform is partnering with Polymarket to make it easier to embed prediction markets in Substack posts and notes. I haven’t been using the embeds here because they don’t let you see the history graph, but I’m excited about them in general. And his post also mentions that “one in five of Substack’s top 250 highest-revenue publications [has] started using [prediction markets]”, which surprises me but seems like a great sign.
**2:** Yahoo Finance: [Man Bet Entire Life Savings Of $342,195 That Elon Musk Would Fail](https://finance.yahoo.com/news/man-bet-entire-life-savings-170558581.html). This is more heartwarming than it sounds - it’s about economist Alan Cole and a Kalshi market about whether DOGE would successfully cut the federal budget by some amount. Cole was an expert in tax law and knew that the budget is sufficiently constrained that it was literally impossible to cut it that amount, and so (after getting his wife’s buy-in) put his entire life savings on NO. NO turned out correct, netting him a 37% profit after one year. **3:** [This Matt Yglesias tweet](https://x.com/mattyglesias/status/2026639403007746273) is more interesting than it sounds: If this were enacted, the winning play would be for platforms to subsidize their non-sports markets with the profits from their sports markets, in order to win the right to have as many sports markets as possible. These subsidies would turn non-sports prediction markets from zero-to-slightly-negative-sum (because your gains are always a counterparty’s losses, minus fees) to positive-sum (because everyone is taking the platform’s subsidies). Yglesias has discovered a solution to one of the oldest problems in the space - how to incentivize the public good of prediction market participation! Too bad the government will never do this.
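As a sanity check on Vitalik's hedging math earlier in the post: his $0.58 risk-reduction figure can be reproduced by comparing certainty equivalents under log utility. This sketch assumes the "dice rolls" are continuous uniform distributions, which the tweet doesn't actually specify.

```python
import math

def expected_log_uniform(a, b):
    # E[ln X] for X ~ Uniform(a, b) is (b*ln(b) - a*ln(a)) / (b - a) - 1
    return (b * math.log(b) - a * math.log(a)) / (b - a) - 1

# Unhedged: 50/50 mixture of Uniform(80, 120) (Purple wins)
# and Uniform(60, 100) (Yellow wins)
e_log_unhedged = 0.5 * expected_log_uniform(80, 120) + 0.5 * expected_log_uniform(60, 100)

# Hedged with the $10 bet on Yellow: Uniform(70, 110) either way
e_log_hedged = expected_log_uniform(70, 110)

# Certainty equivalents under log utility: CE = exp(E[ln X])
ce_unhedged = math.exp(e_log_unhedged)
ce_hedged = math.exp(e_log_hedged)

print(round(ce_hedged - ce_unhedged, 2))  # 0.58
```

Both positions have the same expected value ($90), so the whole $0.58 comes from variance reduction - exactly the insurance-without-insurance-premium effect the quote is describing.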
Scott Alexander
189631811
Mantic Monday: Groundhog Day
acx
# Open Thread 423 This is the weekly visible open thread. Post about anything you want, ask random questions, whatever. ACX has an unofficial [subreddit](https://www.reddit.com/r/slatestarcodex/), [Discord](https://discord.gg/RTKtdut), and [bulletin board](https://www.datasecretslox.com/index.php), and [in-person meetups around the world](https://www.lesswrong.com/community?filters%5B0%5D=SSC). Most content is free, some is subscriber only; you can subscribe **[here](https://astralcodexten.substack.com/subscribe?)**. Also: --- **1:** ACX Grantee Stephen Grugett (of Manifold Markets) wants me to announce his latest project: **[MNX](https://testnet.mnx.fi/)**, “a decentralized futures exchange targeting sophisticated traders and focused on the AI economy”. It’s a real-money platform where traders who want to hedge their AI plays can bet on benchmark progress, compute prices, etc. Announcement [here](https://x.com/MNX_fi/status/2024213013126140183), testnet [here](https://testnet.mnx.fi/). **2:** I think I got my tone wrong on last week’s Open Thread and made people think I was condemning the Harper’s article that mentioned me. I actually liked it and was just trying to clarify a few points. Please don’t get angry about it on my behalf. So as to not make things worse, I’ll banish further discussion of this to a [comment](https://www.astralcodexten.com/p/open-thread-423/comment/221769885).
Scott Alexander
189627653
Open Thread 423
acx
# "All Lawful Use": Much More Than You Wanted To Know Last Friday, Secretary of War Pete Hegseth declared AI company Anthropic a “[supply chain risk](https://x.com/SecWar/status/2027507717469049070)”, the first time this designation has ever been applied to a US company. The trigger for the move was Anthropic’s [refusal](https://www.anthropic.com/news/statement-department-of-war) to allow the Department of War to use their AIs for mass surveillance and autonomous weapons. A few hours later, Hegseth and Sam Altman declared an agreement-in-principle for OpenAI’s models to be used in the niche vacated by Anthropic. Altman [stated](https://openai.com/index/our-agreement-with-the-department-of-war/) that he had received guarantees that OpenAI’s models wouldn’t be used for mass surveillance or autonomous weapons either, but given Hegseth’s unwillingness to concede these points with Anthropic, observers speculated that the safeguards in Altman’s contract must be weaker or, in a worst-case scenario, completely toothless. The debate centers on the Department of War’s demand that AIs be permitted for “all lawful use”. Anthropic worried that mass surveillance and autonomous weaponry would *de facto* fall in this category; Hegseth and Altman have tried to reassure the public that they won’t, and the parts of their agreement that have leaked to the public cite the statutes that Altman expects to constrain this category. Altman’s initial statement seemed to suggest additional prohibitions, but on a closer read, provides little tangible evidence of meaningful further restrictions. Some alert ACX readers[1](#footnote-1) have done a deep dive into national security law to try to untangle the situation. Their conclusion mirrors that of Anthropic and the majority of Twitter commenters: this is not enough. Current laws against domestic mass surveillance and autonomous weapons have wide loopholes in practice. 
Further, many of the rules which do exist can be changed by the Department of War at any time. Although OpenAI’s national security lead [said](https://x.com/natseckatrina/status/2027908878952722693) that “we intended [the phrase ‘all lawful use’] to mean [according to the law] at the time the contract is signed”, this is not how contract law usually works, and not how the provision is likely to be enforced[2](#footnote-2). Therefore, these guarantees are not helpful. *[EDIT: To clarify: The DoW can change their own policies at will, but can’t change laws. In addition to OpenAI’s claim of being robust to changing laws, OpenAI argues that they’re protected against changes to DoW policies because they explicitly reference the relevant policies as they exist today. Based on public information, this argument seems dubious. See ‘Comments on OpenAI’s FAQ’ below.]* To learn more about the details, let’s look at the law: # Mass domestic surveillance: more than you wanted to know **Mass and targeted surveillance of foreigners** in their foreign countries is legal. Broadly, the courts have declined to grant the standing that would allow court cases to test the Executive Branch’s position that the [President has inherent powers derived from his constitutional role to authorize foreign intelligence and counterintelligence surveillance](https://www.brennancenter.org/our-work/analysis-opinion/how-fix-us-surveillance-law#:~:text=When%20the%20government%20collects%20foreign%20intelligence%20abroad,to%20review%20or%20approval%20by%20any%20court), which de facto has allowed this position to become the standard Executive Branch argument for lawfulness. **Targeted surveillance of Americans** domestically is legal for domestic law enforcement purposes and (in narrow and usually time-limited cases) for intelligence and counterterrorism. 
The surveilling agency must get the permission of a court first: normal courts for law enforcement, the Foreign Intelligence Surveillance Act (FISA) court for intelligence. This latter category includes things like wiretapping Americans suspected of spying for Russia. **Mass domestic surveillance of Americans**, American companies, and US permanent residents (or for that matter [generally their counterparts in other Five Eyes partners](https://www.lowyinstitute.org/publications/we-need-five-eyes-spy-network-oversight#:~:text=The%20partnership%20has%20one%20core%20rule%2C%20that%20the%20members%20agree%20not%20to%20spy%20on%20each%20other.%20Or%2C%20as%20Admiral%20Dennis%20Blair%2C%20Barack%20Obama%E2%80%99s%20first%20director%20of%20national%20intelligence%2C%20said%20in%20Australia%20in%202013%3A%20%E2%80%9CWe%20do%20not%20spy%20on%20each%20other.%20We%20just%20ask.%E2%80%9D) – UK, Canada, Australia, and New Zealand) is more complicated. The current law is (roughly) that it’s illegal to seek this kind of data, but legal to “incidentally obtain” it. So for example, if the US was looking for al-Qaeda communications, it might tap a major undersea cable, and if tapping that cable happened to incidentally give it data on millions of Americans, it could keep that data. But after “incidentally obtaining” the data, it [may only query the resulting database in a targeted way](https://www.unwantedwitness.org/nsa-robots-are-collecting-your-data-too-and-theyre-getting-away-with-it/). So the government might take its trove of citizen data that it “incidentally” collected looking for al-Qaeda, and search for a specific citizen’s history if it thinks (for example) that this citizen might be a spy. The government reserves the term “mass domestic surveillance” for the thing they don’t do (querying their databases *en masse*)*,* preferring terms like “gathering” for what they do do (creating the databases *en masse*). 
They also reserve the term “collecting” for the querying process - so that when asked “Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?”, a Director of National Intelligence said “no” under oath, even though, by the ordinary meaning of this question, it absolutely does. (It’s worth noting that the NSA is a DoW agency[3](#footnote-3)). **Mass analysis of third-party data** is also legal! That is, if they [buy the data](https://www.vice.com/en/article/us-military-location-data-xmode-locate-x/) from some company - let’s say Facebook - they can do whatever they want with it. The main enforceable exception is certain kinds of cell phone location data, which were carved out in a [2018 Supreme Court case](https://www.supremecourt.gov/opinions/17pdf/16-402_h315.pdf). **Whatever the President thinks is legal** may also, in certain cases, be legal. During the War on Terror, President George W. Bush’s Office of Legal Counsel claimed that he *also* had the inherent constitutional power as President to lawfully authorize [warrantless mass collection of internet metadata and telephone call records](https://www.pogo.org/analyses/secrets-surveillance-and-scandals-the-war-on-terrors-unending-impact-on-americans-private-lives), a dragnet scooping up Americans and non-Americans’ data alike. The program was initially justified by counterterrorism, but was far more expansive[4](#footnote-4). This was such a scandal within the US government that many DOJ officials threatened to resign; even DOJ officials who *didn’t know what was going on* [threatened to resign because they assumed it was so bad](https://www.washingtonpost.com/world/national-security/2017/07/12/8f879432-6704-11e7-a1d7-9a32c91c6f40_story.html#:~:text=Wray%20said%20that%20although,I%27ll%20resign%20with%20you.%22). Later, the program was moved under statutory and FISA Court frameworks, until finally Congress ended it by passing the USA FREEDOM Act. 
So why should we be concerned about even “lawful use” of AIs for surveillance? There are stories about each of these categories, but the most compelling is that the government can buy data from third parties (eg tech companies, cell phone companies) and surveil it as much as they want. In the past, the strongest disincentive was scale and cost: you simply cannot look through every text message sent over the course of a month to see which ones mention a certain dissident. There are hacks - you can perform an automated search for the dissident’s name - but also obvious ways around the hack (the dissident can simply not mention their own name in plain text). [AI solves these scale and cost problems](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5182213). An AI could perform meaningful search of all messages in a large database, piecing together patterns to, for example, give each citizen a “presumed loyalty” score. This is currently a “lawful use” of AI, and one of the ones [Dario Amodei’s letter says](https://www.anthropic.com/news/statement-department-of-war#:~:text=government%20can%20purchase%20detailed%20records%20of%20Americans%E2%80%99%20movements%2C%20web%20browsing%2C%20and%20associations%20from%20public%20sources%20without%20obtaining%20a%20warrant%2C%20a%20practice%20the%20Intelligence%20Community%20has%20acknowledged%20raises%20privacy%20concerns%20and%20that%20has%20generated%20bipartisan%20opposition%20in%20Congress.) that he’s worried about. As far as we can tell, Altman’s contract with the Department of War doesn’t contain any provisions preventing them from using ChatGPT this way. For more details on mass domestic surveillance: see this [doc](https://docs.google.com/document/d/1rzCraazx0BgEknpxQLKUmM9Vdys-bQyVm9h03r25JII/edit?tab=t.0#heading=h.5hs88tiqunfl). # Autonomous weapons: more than you wanted to know Let’s now turn to autonomous weapons. 
(The authors of this section are not themselves experts, but they consulted with an expert in national security law.) There is hard Congressional law regulating the use of armed force in general (for example, you’re not allowed to shoot innocent Americans). But to our knowledge, autonomous weapons in particular are only regulated by Department of War policy - in particular DoD Directive 3000.09. These policies don’t impose meaningful constraints, for two reasons. First, the policies are vague. Directive 3000.09 requires that autonomous weapon systems be designed to “allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” But it doesn’t define “appropriate”, and the US government has stated it “is a flexible term” where what qualifies “can differ across weapon systems, domains of warfare, types of warfare, operational contexts, and even across different functions in a weapon system.” The institution that decides what’s “appropriate” is the same institution that wants to use the weapon. Second, the Department of War can change its own policies, so any contract which only guarantees “lawful use” rather than hard-coding some particular standard gives the DoW complete latitude to change the relevant directive (and therefore the terms) whenever they want[5](#footnote-5). Everyone (including Anthropic) agrees that some form of autonomous weapons will be necessary to win the wars of the future - indeed, autonomous weapons are already being used on the battlefield in Ukraine. But there’s a wide spectrum from humans-entirely-in-the-loop to humans-partly-in-the-loop to humans-totally-unrelated-to-the-loop, and we might want humans involved somewhere for at least two reasons. First, humans add reliability. 
For the same reason that chatbots sometimes hallucinate, and coding agents sometimes make [crazy and reckless decisions](https://x.com/jasonlk/status/1946069562723897802) that no human would consider, fully autonomous weapons might make inexplicable mistakes in their use of lethal force, with potentially devastating results. Second, and more important, human soldiers are a check on the worst abuses of authoritarians. Sometimes a strongman will give an illegal order - to shoot at protesters, to initiate an auto-coup, to begin a genocide - and soldiers will say no. Sometimes those soldiers will decide that the appropriate response is to arrest the strongman instead. However often this happens, the fear of it keeps strongmen in line and forces them to consider public opinion at least insofar as the army is made up of the public. If there’s a fully robotic force that automatically obeys orders, this check disappears. Some types of fully autonomous weapons are clearly appropriate today (e.g. some missile defences for Navy ships). Many more will plausibly have to be developed in the future, especially if other countries pursue them. But a good system of checks and balances for them does not yet exist. AI companies should take care to not sign a contract that could require them to build systems without adequate safeguards, akin to the safeguards of a soldier’s judgement and respect for the Constitution[6](#footnote-6). For more details on autonomous weapons, see this [doc](https://docs.google.com/document/d/1oumE7XYsJ2-1XfcskQGfRy16HOdh0u1t8wQ0TwOY3fg/edit?tab=t.0). # Comments on OpenAI’s FAQ OpenAI provided an FAQ, which we think is misleading. 
While we aren’t lawyers, we’ve done our best to lay out our reasoning for this belief, and have also consulted with an expert in national security law on the excerpt of the contract provided in [OpenAI’s announcement](https://openai.com/index/our-agreement-with-the-department-of-war/), and checked that their views were consistent with ours. > ***Will this deal enable the Department of War to use OpenAI models to power autonomous weapons?*** > > *No. Based on our safety stack, our cloud-only deployment, the contract language, and existing laws, regulation and policy, we are confident that this cannot happen. We will also have OpenAI personnel in the loop for additional assurance.* Since the law straightforwardly permits autonomous weapons, and the contract permits any autonomous weapons allowed by the law, the *“contract language, and existing laws, regulation and policy”* does nothing to prohibit this. OpenAI hasn’t shared enough information about their safety stack for us to be able to evaluate that claim. See below for comments on cloud-only deployment. Our national security law expert was also very skeptical of the idea that the DoW would have OpenAI personnel meaningfully “in the loop” in sensitive contexts. > ***Will this deal enable the Department of War to use OpenAI models to conduct mass surveillance on U.S. persons?*** > > *No. Based on our safety stack, the contract language, and existing laws that heavily restrict DoW from domestic surveillance, we are confident that this cannot happen. We will also have OpenAI personnel in the loop for additional assurance.* The law does significantly restrict domestic mass surveillance but, as explained above, leaves loopholes that may concern many readers. Since the contract permits any surveillance allowed by the law, the contract itself does nothing further to restrict the DoW from domestic surveillance. OpenAI hasn’t shared enough information about their safety stack for us to be able to evaluate that claim. 
> ***What if the government just changes the law or existing DoW policies?*** > > *Our contract explicitly references the surveillance and autonomous weapons laws and policies as they exist today, so that even if those laws or policies change in the future, use of our systems must still remain aligned with the current standards reflected in the agreement.* It is not the case that the contract consistently references current laws. The first clause says *“The Department of War may use the AI System for **all lawful purposes**, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.”* Our understanding is that later clauses do not automatically override this first clause. OpenAI’s Head of National Security Partnerships has [said](https://x.com/natseckatrina/status/2027908878952722693) “we intended it to mean ‘the law applicable at the time the contract is signed’”, and their CSO has also made a [similar statement](https://x.com/jasonkwon/status/2027948755467833366?s=20). Our understanding is that this is a highly non-standard interpretation. The national security law expert we consulted agreed, and was very skeptical that the allowed and required activities would remain the same if the law changed (see also [here](https://x.com/CharlieBul58993/status/2028157898371613066), starting from “If OpenAI is just referencing...”). *(EDIT 03/02/2026: A few clarifications about this:* *We haven’t seen most of the contract. It’s possible that other parts of the contract stipulate OpenAI’s interpretation of “applicable law”.*[7](#footnote-7) *The FAQ quote above states that the contract “explicitly references the surveillance and autonomous weapons laws and policies **as they exist today**” (bold in original). From reading the contract excerpt, it’s not clear what is supposed to make this explicit. 
Perhaps it is the “date stamps” that OpenAI’s Chief Strategy Officer Jason Kwon mentions in his reply [here](https://x.com/jasonkwon/status/2027948755467833366), but this is confusing for two reasons; see footnote*[8](#footnote-8)*.* *We’d like to clarify the argument for why references to existing laws and policies may not be sufficient to freeze the terms in place if the law or policies change. Above, we wrote that “later clauses [about specific laws and policies] do not automatically override this first clause [allowing ‘all lawful purposes’]”. This isn’t wrong, but we think there are more relevant arguments, like [those offered](https://x.com/bradrcarson/status/2028335588022100477) by former general counsel of the Army Brad Carson, who is confident that the quoted contract language doesn’t freeze federal law in the way OpenAI wants. See footnote for details.)*[9](#footnote-9) > ***How do you address the arguments Anthropic made in their blog post about their discussion with the DoW?*** > > *(...) Below is why we believe those same red lines would hold in our contract: (...) Fully autonomous weapons. The cloud deployment surface covered in our contract would not permit powering fully autonomous weapons, as this would require edge deployment.* Autonomous weapons can be steered by an AI in the cloud, just like a human can steer a drone remotely. OpenAI models do not need to be edge deployed in order to power a fully autonomous weapon. **Overall:** We can’t see how any of OpenAI’s claimed methods for enforcing their red lines would work except possibly if they’re allowed to implement technical safeguards that block certain lawful use, which they’ve shared so little about that we can’t evaluate them. Boaz Barak [suggests](https://x.com/boazbaraktcs/status/2027933591821299723) this is the case. If this is right, it’s strange that they don’t elsewhere stress this as the linchpin of their approach, or show the part of the agreement that guarantees them this ability. 
Further clarification on this point would be very helpful. # Questions that you should be asking If you have access to OpenAI or DoW decision-makers as an employee, journalist, or lawmaker, these are questions you should be asking: **Immediate questions about the contract.** First and foremost: Ask to see the full contract, as much as you can get. Scrutinize it yourself or run it by a lawyer in a conversation where attorney-client privilege exists (basically, when you are talking with them for the explicitly stated intent of potentially securing their legal counsel, or once you’ve formally secured them as your legal counsel). Beyond that: * Does OpenAI’s definition of fully autonomous weapons include non-edge deployed systems like drones operated remotely by AI systems in the cloud? If so, what prevents the DoW from using OpenAI models in this way? * The DoW has been insistent that private companies shouldn’t dictate how the DoW can use models. OpenAI says they “retain full control over the safety stack we deploy”. How are these compatible? Can you share an excerpt from the agreement that describes OpenAI’s control over the safety stack? * Would OpenAI’s models assist with bulk analysis of Americans’ data purchased from third parties? * Will OpenAI’s technical safeguards intentionally block any lawful usage that goes against your red lines? * Who determines if use is “unlawful”? Does OpenAI have recourse if it believes use is unlawful but the DoW disagrees? * What “technical safeguards” have been agreed upon? What happens if the DoW and OpenAI disagree about which version of these safeguards is appropriate? * Does the DoW have options for recourse if OpenAI provides systems with safeguards that the DoW thinks unduly reduce model performance for specific lawful purposes? * Does the agreement specify that the NSA and other intelligence agencies inside of the DoW are excluded from being able to access OpenAI models? 
**Broader questions about the situation:** * What prevents the DoW from later demanding these restrictions be loosened, as it did with Anthropic? * What recourse does OpenAI have if DoW violates the terms of a contract with OpenAI? * What would stop the DoW from retaliating against OpenAI, as they did with Anthropic, if the DoW and OpenAI have disagreements in the future? Given that existing statements haven’t always been clear and Anthropic has alleged that the contract contains “legalese that would allow those safeguards to be disregarded at will”, we encourage you to read any responses you receive with a skeptical mindset, and ask yourself whether the response is consistent with OpenAI models being used for autonomous weapons systems or domestic mass surveillance in the colloquial sense of the terms. [1](#footnote-anchor-1) They wish to remain anonymous, but none are employees of any major AI lab or the Department of War. [2](#footnote-anchor-2) For more, see the section ‘Comments on OpenAI’s FAQ’ [3](#footnote-anchor-3) OpenAI’s head of National Security Partnerships has made a few [unclear](https://x.com/natseckatrina/status/2027915769107841098) [tweets](https://x.com/natseckatrina/status/2027931400775627188) perhaps implying that NSA might be excluded from their contract. However, as of this writing, they have not clearly confirmed this, have made some other statements that all of DoW (which includes NSA) is in scope of their contract, and have not made any comment on other DoW intelligence agencies (there are 8 others). It would be great to get further clarification on this point. 
[4](#footnote-anchor-4) To be fair, there are some genuine technical reasons for this – because of how traffic routes across the internet’s logical and physical structure, the government correctly notes that it’s often hard to know before grabbing them whether a given set of internet packets is related to a foreign intelligence query or not – but members of both parties and nonpartisan Inspectors General have repeatedly identified how this technical decision has enabled abuses. [5](#footnote-anchor-5) OpenAI suggests they’re protected against this since their agreement specifically refers to “DoD Directive 3000.09 (dated 25 January 2023)”. But other parts of the contract refer to “all lawful purposes” without specifying current law in particular, which would at best lead to contradictions if the law changes. More on this below. [6](#footnote-anchor-6) These safeguards might initially have to be broader than legal use, since current law is not yet designed with powerful autonomous systems in mind. [7](#footnote-anchor-7) However, when directly asked, OpenAI’s Chief Strategy Officer doesn’t refer to other parts of the contract but instead [says](https://x.com/jasonkwon/status/2027948755467833366) that OpenAI’s interpretation is supported due to the use of “date stamps”. This is confusing, since the question was about the term “applicable law”, which is not itself date stamped. It’s possible Kwon misunderstood the question. [8](#footnote-anchor-8) First, because [later replies](https://x.com/jasonkwon/status/2028005099214459049) cast doubt on Kwon’s claims about how standard his interpretation is. Second, because only one of the laws and policies mentioned in the contract excerpt is date stamped. (Some of the laws mention specific years, but only when the year is included in the name of that law.) [9](#footnote-anchor-9) Why was our argument not the most relevant argument? 
While it's true that later clauses (on specific laws and policies) don't automatically take precedence over the first clause (about “all lawful purposes”), it's also true that the first clause doesn't automatically take precedence over later clauses. All clauses matter for interpreting the overall contract. In fact, there's a general principle that more specific clauses tend to take precedence over more general clauses. This could make for a plausible argument that clauses which reference specific laws and policies take precedence over the general clause allowing "all lawful purposes". However, another interpretation would be that the references to specific laws and policies refer to the most up-to-date versions of the named laws and policies, rather than treating them as frozen into place. This would reduce conflict with the "all lawful purposes" clause, and it might therefore get some support from the inclusion of the "all lawful purposes" clause. But even if that wasn't there, this latter interpretation would still be [strongly favored](https://x.com/bradrcarson/status/2028335588022100477) according to Brad Carson (former general counsel of the Army, former undersecretary of the Army, former undersecretary of Defense), unless OpenAI has explicit language to the contrary. Given his expertise, and given that he agrees on the bottom line with the national security law expert that we consulted, we’re inclined to believe he’s right. What we're most confident about is that OpenAI’s interpretation is far from clearly correct, so if they cared about that interpretation, it would have been a big mistake for them to not include any explicit language stipulating it.
Scott Alexander
189573586
"All Lawful Use": Much More Than You Wanted To Know
acx