@samuel_biyi These things aren't one man one vote, or we'd all be using grocers' apostrophes.
@aladejebideji I agree with you, and indeed I suspect that's most of the reason this is even controversial.
@Spotlight_Abby So if someone's writing is verbose and they attribute this to local custom, they're probably mistaken; they're probably just bad.
@Spotlight_Abby Of course different groups (or even individuals) might have different ideas about what constitutes simplicity, but these variations are tiny compared to the difference between a good and a bad writer.
@satpugnet I'm not saying you should use simpler words than you need to convey your full meaning, just that you should use the simplest you can.
@CEG_Seyon @mblair I mean that when he sees the clumsy sentences people have written in order to use it, he'll see that it's a harder word to use well than he might have thought.
@ColinGardiner Yes, as in so many other kinds of work. Programming is another example.
@ScarTissue101 Either way, it will make their writing better.
@ebelee_ You're still talking about it, so apparently not.
@Spotlight_Abby The fact that there aren't precise, predefined standards for simple writing doesn't make it a meaningless concept.
Like anything else of this type, there will be edge cases people disagree about, but also a core of cases very few would disagree with.
The Top of My Todo List
April 2012
A palliative care nurse called Bronnie Ware made a list of the
biggest regrets of the dying. Her list seems plausible. I could see
myself -- can see myself -- making at least 4 of these 5 mistakes.
If you had to compress them into a single piece of advice, it might
be: don't be a cog. The 5 regrets paint a portrait of post-industrial
man, who shrinks himself into a shape that fits his circumstances,
then turns dutifully till he stops.
The alarming thing is, the mistakes that produce these regrets are
all errors of omission. You forget your dreams, ignore your family,
suppress your feelings, neglect your friends, and forget to be
happy. Errors of omission are a particularly dangerous type of
mistake, because you make them by default.
I would like to avoid making these mistakes. But how do you avoid
mistakes you make by default? Ideally you transform your life so
it has other defaults. But it may not be possible to do that
completely. As long as these mistakes happen by default, you probably
have to be reminded not to make them. So I inverted the 5 regrets,
yielding a list of 5 commands:
Don't ignore your dreams; don't work too much; say what you
think; cultivate friendships; be happy.
which I then put at the top of the file I use as a todo list.
@mattyglesias There are also people (primarily young women) who enjoy a pretext for pursuing heretics.
@c0n5tantinople @pharmabroo It would seem that way to outsiders who don't understand the business, but early stage investing is very different from currency trading.
@pharmabroo Less soulless than people in finance? Practically everyone is.
I was talking recently to the founder of a startup in the finance industry. He said one of the problems with this world is that so many of the people are soulless. But interestingly this is less true the more technical you are. For pure quants, it's just a math problem.
@gregstradamus Only because Mike Pence told him no.
@bayeslord There's a startup in the current YC batch doing that.
YC is now a meta-incubator.
Starting a startup is like the entertainment business and professional sports in that you can make a lot of money without doing it mainly for the money. But it has room for a lot more people.
@thenimblegeek Wokeness started in the US so it makes sense that it might be a few years ahead.
@DrBeachcombing The family, like the Hooligans, have since changed the spelling of their name.
The Need to Read
November 2022
In the science fiction books I read as a kid, reading had often
been replaced by some more efficient way of acquiring knowledge.
Mysterious "tapes" would load it into one's brain like a program
being loaded into a computer.
That sort of thing is unlikely to happen anytime soon. Not just
because it would be hard to build a replacement for reading, but
because even if one existed, it would be insufficient. Reading about
x doesn't just teach you about x; it also teaches you how to write.
[1]
Would that matter? If we replaced reading, would anyone need to be
good at writing?
The reason it would matter is that writing is not just a way to
convey ideas, but also a way to have them.
A good writer doesn't just think, and then write down what he
thought, as a sort of transcript. A good writer will almost always
discover new things in the process of writing. And there is, as far
as I know, no substitute for this kind of discovery. Talking about
your ideas with other people is a good way to develop them. But
even after doing this, you'll find you still discover new things
when you sit down to write. There is a kind of thinking that can
only be done by writing.
There are of course kinds of thinking that can be done without
writing. If you don't need to go too deeply into a problem, you can
solve it without writing. If you're thinking about how two pieces
of machinery should fit together, writing about it probably won't
help much. And when a problem can be described formally, you can
sometimes solve it in your head. But if you need to solve a
complicated, ill-defined problem, it will almost always help to
write about it. Which in turn means that someone who's not good at
writing will almost always be at a disadvantage in solving such
problems.
You can't think well without writing well, and you can't write well
without reading well. And I mean that last "well" in both senses.
You have to be good at reading, and read good things.
[2]
People who just want information may find other ways to get it.
But people who want to have ideas can't afford to.
Notes
[1]
Audiobooks can give you examples of good writing, but having
them read to you doesn't teach you as much about writing as reading
them yourself.
[2]
By "good at reading" I don't mean good at the mechanics of
reading. You don't have to be good at extracting words from the
page so much as extracting meaning from the words.
@mattyglesias Many if not most presumably from fake accounts run by Russia.
@HeinrichKuttler @growing_daniel The way I acquired it was mainly by looking at a lot of art.
@AlexCaswen Try reading the next tweet in that thread.
@michael_nielsen My first experience with the internet was presumably the day they changed the name of the ARPAnet to the Internet, but I don't know what I happened to be doing that day.
@ilyasu @growing_daniel Though in practice the half-life of a haircut is so much shorter than the half-life of a hanging painting that buying bad art is more like still having the terrible haircut you had as a teenager.
@growing_daniel If you want to see examples of genuinely good mostly 20th century art, follow @ahistoryinart. Almost everything he posts is good. If nothing else your Twitter feed will have nice pictures in it.
@ilyasu @growing_daniel Presumably it would be something like the feeling that many people experience when looking at their haircuts in pictures of themselves as teenagers.
@ilyasu @growing_daniel Your life becomes phony to the extent the art affects it. And a painting hanging on the wall can have a big effect on what it feels like to be in a room. It's like a visual loudspeaker.
@growing_daniel Since so many people are saying "why not just buy what you like?" I'll answer that.
Art can be phony, just like people can. You know you've had to learn how to avoid phony people. Why do you think you don't need to learn how to avoid phony art?
@selentelechia And you would end up buying bad art, because the high bit of how much you liked something would be whether you recognized the style of the artist.
@SpencerHakimian There's a lot of fat in the federal budget. Done right, something like DOGE could have been very effective.
As we approach the end of the drive:
Me: Are there any cars coming?
13 yo: No.
Me: I can see a car right there.
13 yo: Well, there usually aren't any cars coming.
@aviramj Thanks for your carefully worded answer, but we both know this isn't the reason, don't we?
@JacketNation So did you.
@CcibChris This seems AI generated, no? https://t.co/MBCkvN83ph
@9haethon Definitely not.
Bluesky's usage graph. The pattern here is bad. Spikes when a bunch of new people show up, followed by declines when the new arrivals are disappointed. https://t.co/wESwWevXpw
@davidaxelrod If you want people to read a story, use a direct link instead of an Apple News link.
Why won't Israel allow international journalists into Gaza?
@Hamza_a96 The consensus has tipped. That's a good thing.
@janeandrsn @JCBForumSpeak Look them up for yourself.
@OliverBeedham Not if every startup moved to the UK, but if 20% of people are afraid to work in the US now (probably a conservative estimate), the first few startups who move to the UK get that whole 20% to themselves.
@russell_m It has lower GDP per capita, but it also has rule of law.
@nicholasjarboe They're less of a threat to startups than the arrests for thought crime in the US.
@JCBForumSpeak It's almost totally random.
@avoanjali Startups are made of people. A recruiting advantage is one of the most valuable advantages you can have.
Recruiting is why all the startups shifted from the peninsula up to SF starting in 2012.
What an interesting twist of history it would be if the UK became a hub of intellectual refugees the way the US itself did in the 1930s and 40s. It wouldn't take much more than what's already happening.
A smart foreign-born undergrad at a US university asked me if he should go to the UK to start his startup because of the random deportations here. I said that while the median startup wasn't taking things to this extreme yet, it would be an advantage in recruiting.
@david_firrin Why am I bald?
@celebi_int I was a kid for the first one, but definitely paying attention.
@cheerupstanley At the very least it will harm European citizens by making them relatively poorer than the rest of the world. And as any relatively poor country knows, lots of other bad things follow from that.
@KarthikIO It gets in the way of using interactions with European users as training data.
How strange it would be if a regulation driven largely by European anger about American dominance in the previous generation of technology ended up ensuring that Europe remained behind in the latest generation. Or maybe not so strange. Bitterness is never a strategy for winning.
After talking to an AI startup from Europe in the current YC batch, it's clear that the GDPR conflicts with AI in an unforeseen way that will significantly harm European AI companies.
@Austen That's not true.
@Rushabh_Shah777 Over 5000.
@gfodor @elonmusk If that was what you meant, you should have said so.
@samarth_ghoslya Lots.
@gfodor @elonmusk It's not for me to decide how long the Ukrainians should continue to defend their country. That's for them to decide.
I wish anything else in my life was running as smoothly as YC is under Garry. We just had a board meeting and it seemed like every item on the agenda was about how to make something that's working well work even better.
@wideofthepost I dislike Bill Ackman as much as the next man, but the numbers you quote don't add up to 9 billion.
@garrytan I don't think anyone's claiming anything changed about society in that year. It was just when a number happened to peak.
@Austen No one is better at assembling groups of cryptographers, and if none of their projects were unlikely to work, they'd be operating too conservatively.
@mmeJen Quite possibly. I'm not saying his work is valuable, just that it's not trivially easy.
@mattyglesias In practice it's only possible when all your wealth is due to a single decision. If you have to get multiple 50/50 bets right, the chance of winning by luck is 1/2^n, which approaches zero very rapidly.
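The 1/2^n claim in the tweet above is easy to check with a quick sketch. This is a minimal illustration, assuming only independent fair 50/50 bets; the function name is mine, not from the source:

```python
# Chance of getting n independent 50/50 bets all right by pure luck: (1/2)^n.
def p_lucky(n: int) -> float:
    return 0.5 ** n

# Even a modest number of bets makes luck implausible as the explanation.
for n in (1, 5, 10, 20):
    print(f"{n} bets: {p_lucky(n):.7f}")
```

At 10 bets the chance is already under a tenth of a percent, which is the sense in which 1/2^n "approaches zero very rapidly."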
Carlson is falling victim to a common phenomenon. He doesn't understand how someone rich or famous got that way, so he assumes there is nothing to understand.
I don't like Bill Ackman or Tucker Carlson, but Carlson is mistaken if he thinks Ackman is a useless person with no actual skills. If a useless person with no actual skills could make as much money as Ackman has, there would be millions of billionaires in America.
@CcibChris This isn't real. It was made by an artist called Dru Blair.
Unexpected consequence of the improvement of AIs (though obvious in retrospect): they continue to hallucinate, but as they improve their hallucinations become more authoritative-sounding. So the danger posed by hallucinations doesn't decrease as fast as AIs improve.
@danteocualesjr We get them printed by Tablo.
https://t.co/KheY1CAxDA
AI is evolving so fast and schools change so slow that it may be better for startups to build stuff for kids to use themselves first, then collect all the schools later. That m.o. would certainly be more fun.
For the next 10 years at least the conversations about AI tutoring inside schools will be mostly about policy, and the conversations about AI tutoring outside schools will be mostly about what it's possible to build. The latter are going to be much more interesting.
The first rule of activism: It's not enough that your actions are intended to help fix some problem. They must actually help fix it. And you can't take this for granted, because the easiest forms of activism mostly don't.
"How to Start Google" (https://t.co/rN1OHWBwm5) was a talk at my older son's school. It was one of a series of talks about careers. I just heard the students voted it the best one, and I'm happier than if I'd won some prestigious award.
@caspar_g They're not. That's why you have to buy them at auction. You can only buy old ones.
@yuris @cjc Here we are at one of them. https://t.co/S9hf1IXE0N
@UserJourneys Come to think of it, old chamberpots do occasionally come up at auction. That could be the perfect way to display it.
Explained to Jessica that it's "coco de mer," not "coco de merde." She doesn't like the one I bought at auction. https://t.co/M4QHBsukgM
@David__Osland And yet passengers apparently liked it much less than the destroyed version. https://t.co/HrIw3PXPaG
Jessica pointed out that the people saying I was hot are mostly guys. I pointed out that this shows she checked.
@Noahpinion Interestingly I believe numbers 2 and 3 (antitrust and crypto) are both ultimately due to Elizabeth Warren, whose proteges are in both cases the cause of the problem.
@ghostofhellas How do they know this mound is from that battle?
@bonker_99 I have. I'm complaining about a rental.
@ntldr2020 It was a new rental Subaru that inspired me to write this! It's so unbearably annoying.
Maybe there's a kind of uncanny valley here, and we'll later realize there are two peaks at 2004 Defender and full self-driving, with unhappy lowlands between.
New cars get more and more annoying as they approach full self-driving. There are ever more things they nag you to do or prevent you from doing, till in the penultimate stage you're just one flesh subroutine in their program.
@krishnanrohit There is something to this. I wouldn't say it has made Trump supporters of people who weren't, but it has definitely shifted people a bit to the right. Like the joke that a conservative is a liberal who's been mugged.
Maybe this is a way for him to bow out gracefully. "I'm not dropping out because I'm too old, but because the focus on my age is distracting everyone from the issues."
Now that the big question about Biden has switched to "Is he too old?" he can no longer talk effectively about issues, because people watching him speak are thinking mostly about whether he seems weak or confused instead of what he's actually saying.
@LurkingToTruth They don't want to know it.
@oscarle_x The exact opposite of that.
@shashj Just out of curiosity, is it legal to say you're not in favor of proscribing a proscribed organization?
Someone asked how to expand startups' ideas. The best way is to shrink the idea down to its essence, then ask how broadly that essential idea could be expanded. You have to shrink it first, though, or there will be random stuff left in it that impedes its expansion.
@FrankRundatz In this case, no. YC already funded his company several years ago. We were just talking about the current state of AI-assisted programming.
@AsyncCollab People think they can keep faking all the way to liquidity. And that has happened a handful of times, though it's rare.
@Principal_Jon When I was a kid I wouldn't have expected that to be true, but it is. It has such variety.
@TimSuzman It sometimes feels that way during office hours, but the truth is that everything depends on the founders' execution.
@thogge The first thing I do when I meet a startup is get them to explain it to me. Once I think I understand it, I explain it back to them. But beyond that there's no structure. I just start with whatever seems the most important question.
I've been doing office hours with startups for 20 years now. I must have talked to over a thousand of them. How have I not become bored by it? I think the reason is curiosity. Each startup's idea is different, and it's a puzzle to figure out how to make it work.
@tr0g This is someone we funded several years ago.
Charisma / Power
January 2017
People who are powerful but uncharismatic will tend to be disliked.
Their power makes them a target for criticism that they don't have
the charisma to disarm. That was Hillary Clinton's problem. It also
tends to be a problem for any CEO who is more of a builder than a
schmoozer. And yet the builder-type CEO is (like Hillary) probably
the best person for the job.
I don't think there is any solution to this problem. It's human
nature. The best we can do is to recognize that it's happening, and
to understand that being a magnet for criticism is sometimes a sign
not that someone is the wrong person for a job, but that they're
the right one.
What Business Can Learn from Open Source
August 2005
(This essay is derived from a talk at Oscon 2005.)
Lately companies have been paying more attention to open source.
Ten years ago there seemed a real danger Microsoft would extend its
monopoly to servers. It seems safe to say now that open source has
prevented that. A recent survey found 52% of companies are replacing
Windows servers with Linux servers.
[1]
More significant, I think, is which 52% they are. At this point,
anyone proposing to run Windows on servers should be prepared to
explain what they know about servers that Google, Yahoo, and Amazon
don't.
But the biggest thing business has to learn from open source is not
about Linux or Firefox, but about the forces that produced them.
Ultimately these will affect a lot more than what software you use.
We may be able to get a fix on these underlying forces by triangulating
from open source and blogging. As you've probably noticed, they
have a lot in common.
Like open source, blogging is something people do themselves, for
free, because they enjoy it. Like open source hackers, bloggers
compete with people working for money, and often win. The method
of ensuring quality is also the same: Darwinian. Companies ensure
quality through rules to prevent employees from screwing up. But
you don't need that when the audience can communicate with one
another. People just produce whatever they want; the good stuff
spreads, and the bad gets ignored. And in both cases, feedback
from the audience improves the best work.
Another thing blogging and open source have in common is the Web.
People have always been willing to do great work
for free, but before the Web it was harder to reach an audience
or collaborate on projects.
Amateurs
I think the most important of the new principles business has to learn is
that people work a lot harder on stuff they like. Well, that's
news to no one. So how can I claim business has to learn it? When
I say business doesn't know this, I mean the structure of business
doesn't reflect it.
Business still reflects an older model, exemplified by the French
word for working: travailler. It has an English cousin, travail,
and what it means is torture.
[2]
This turns out not to be the last word on work, however.
As societies get richer, they learn something about
work that's a lot like what they learn about diet. We know now that the
healthiest diet is the one our peasant ancestors were forced to
eat because they were poor. Like rich food, idleness
only seems desirable when you don't get enough of it. I think we were
designed to work, just as we were designed to eat a certain amount
of fiber, and we feel bad if we don't.
There's a name for people who work for the love of it: amateurs.
The word now has such bad connotations that we forget its etymology,
though it's staring us in the face. "Amateur" was originally rather
a complimentary word. But the thing to be in the twentieth century
was professional, which amateurs, by definition, are not.
That's why the business world was so surprised by one lesson from
open source: that people working for love often surpass those working
for money. Users don't switch from Explorer to Firefox because
they want to hack the source. They switch because it's a better
browser.
It's not that Microsoft isn't trying. They know controlling the
browser is one of the keys to retaining their monopoly. The problem
is the same they face in operating systems: they can't pay people
enough to build something better than a group of inspired hackers
will build for free.
I suspect professionalism was always overrated-- not just in the
literal sense of working for money, but also connotations like
formality and detachment. Inconceivable as it would have seemed
in, say, 1970, I think professionalism was largely a fashion,
driven by conditions that happened to exist in the twentieth century.
One of the most powerful of those was the existence of "channels." Revealingly,
the same term was used for both products and information: there
were distribution channels, and TV and radio channels.
It was the narrowness of such channels that made professionals
seem so superior to amateurs. There were only a few jobs as
professional journalists, for example, so competition ensured the
average journalist was fairly good. Whereas anyone can express
opinions about current events in a bar. And so the average person
expressing his opinions in a bar sounds like an idiot compared to
a journalist writing about the subject.
On the Web, the barrier for publishing your ideas is even lower.
You don't have to buy a drink, and they even let kids in.
Millions of people are publishing online, and the average
level of what they're writing, as you might expect, is not very
good. This has led some in the media to conclude that blogs don't
present much of a threat-- that blogs are just a fad.
Actually, the fad is the word "blog," at least the way the print
media now use it. What they mean by "blogger" is not someone who
publishes in a weblog format, but anyone who publishes online.
That's going to become a problem as the Web becomes the default
medium for publication. So I'd
like to suggest an alternative word for someone who publishes online.
How about "writer?"
Those in the print media who dismiss the writing online because of
its low average quality are missing an important point: no one reads
the average blog. In the old world of channels, it meant something
to talk about average quality, because that's what you were getting
whether you liked it or not.
But now you can read any writer you want. So the average
quality of writing online isn't what the print media are competing
against. They're competing against the best writing online. And,
like Microsoft, they're losing.
I know that from my own experience as a reader. Though most print
publications are online, I probably
read two or three articles on individual people's sites for every
one I read on the site of a newspaper or magazine.
And when I read, say, New York Times stories, I never reach
them through the Times front page. Most I find through aggregators
like Google News or Slashdot or Delicious. Aggregators show how
much better you can do than the channel. The New York Times front page is
a list of articles written by people who work for the New York Times. Delicious
is a list of articles that are interesting. And it's only now that
you can see the two side by side that you notice how little overlap there is.
Most articles in the print media are boring. For example, the
president notices that a majority of voters now think invading Iraq
was a mistake, so he makes an address to the nation to drum up
support. Where is the man bites dog in that? I didn't hear the
speech, but I could probably tell you exactly what he said. A
speech like that is, in the most literal sense, not news: there is
nothing new in it.
[3]
Nor is there anything new, except the names and places, in most
"news" about things going wrong. A child is abducted; there's a
tornado; a ferry sinks; someone gets bitten by a shark; a small
plane crashes. And what do you learn about the world from these
stories? Absolutely nothing. They're outlying data points; what
makes them gripping also makes them irrelevant.
As in software, when professionals produce such crap, it's not
surprising if amateurs can do better. Live by the channel, die by
the channel: if you depend on an oligopoly, you sink into bad habits
that are hard to overcome when you suddenly get competition.
[4]
Workplaces
Another thing blogs and open source software have in common is that
they're often made by people working at home. That may not seem
surprising. But it should be. It's the architectural equivalent
of a home-made aircraft shooting down an F-18. Companies spend
millions to build office buildings for a single purpose: to be a
place to work. And yet people working in their own homes,
which aren't even designed to be workplaces, end up
being more productive.
This proves something a lot of us have suspected. The average
office is a miserable place to get work done. And a lot of what
makes offices bad are the very qualities we associate with
professionalism. The sterility
of offices is supposed to suggest efficiency. But suggesting
efficiency is a different thing from actually being efficient.
The atmosphere of the average workplace is to productivity what
flames painted on the side of a car are to speed. And it's not
just the way offices look that's bleak. The way people act is just
as bad.
Things are different in a startup. Often as not a startup begins
in an apartment. Instead of matching beige cubicles
they have an assortment of furniture they bought used. They work
odd hours, wearing the most casual of clothing. They look at
whatever they want online without worrying whether it's "work safe."
The cheery, bland language of the office is replaced by wicked humor. And
you know what? The company at this stage is probably the most
productive it's ever going to be.
Maybe it's not a coincidence. Maybe some aspects of professionalism
are actually a net lose.
To me the most demoralizing aspect of the traditional office is
that you're supposed to be there at certain times. There are usually
a few people in a company who really have to, but the reason most
employees work fixed hours is that the company can't measure their
productivity.
The basic idea behind office hours is that if you can't make people
work, you can at least prevent them from having fun. If employees
have to be in the building a certain number of hours a day, and are
forbidden to do non-work things while there, then they must be
working. In theory. In practice they spend a lot of their time
in a no-man's land, where they're neither working nor having fun.
If you could measure how much work people did, many companies
wouldn't need any fixed workday. You could just say: this is what
you have to do. Do it whenever you like, wherever you like. If
your work requires you to talk to other people in the company, then
you may need to be here a certain amount. Otherwise we don't care.
That may seem utopian, but it's what we told people who came to
work for our company. There were no fixed office hours. I never
showed up before 11 in the morning. But we weren't saying this to
be benevolent. We were saying: if you work here we expect you to
get a lot done. Don't try to fool us just by being here a lot.
The problem with the facetime model is not just that it's demoralizing, but
that the people pretending to work interrupt
the ones actually working. I'm convinced the facetime model
is the main reason large organizations have so many meetings.
Per capita, large organizations accomplish very little.
And yet all those people have to be on site at least eight hours a
day. When so much time goes in one end and so little achievement
comes out the other, something has to give. And meetings are the
main mechanism for taking up the slack.
For one year I worked at a regular nine to five job, and I remember
well the strange, cozy feeling that comes over one during meetings.
I was very aware, because of the novelty, that I was being paid for
programming. It seemed just amazing, as if there was a machine on
my desk that spat out a dollar bill every two minutes no matter
what I did. Even while I was in the bathroom! But because the
imaginary machine was always running, I felt I always ought to be
working. And so meetings felt wonderfully relaxing. They
counted as work, just like programming, but they were so much easier.
All you had to do was sit and look attentive.
Meetings are like an opiate with a network effect. So is email,
on a smaller scale. And in addition to the direct cost in time,
there's the cost in fragmentation-- breaking people's day up into
bits too small to be useful.
You can see how dependent you've become on something by removing
it suddenly. So for big companies I propose the following experiment.
Set aside one day where meetings are forbidden-- where everyone has to
sit at their desk all day and work without interruption on
things they can do without talking to anyone else.
Some amount of communication is necessary in most jobs, but I'm
sure many employees could find eight hours worth of stuff they could
do by themselves. You could call it "Work Day."
The other problem with pretend work
is that it often looks better than real work. When I'm
writing or hacking I spend as much time just thinking as I do
actually typing. Half the time I'm sitting drinking a cup of tea,
or walking around the neighborhood. This is a critical phase--
this is where ideas come from-- and yet I'd feel guilty doing this
in most offices, with everyone else looking busy.
It's hard to see how bad some practice is till you have something
to compare it to. And that's one reason open source, and even blogging
in some cases, are so important. They show us what real work looks like.
We're funding eight new startups at the moment. A friend asked
what they were doing for office space, and seemed surprised when I
said we expected them to work out of whatever apartments they found
to live in. But we didn't propose that to save money. We did it
because we want their software to be good. Working in crappy
informal spaces is one of the things startups do right without
realizing it. As soon as you get into an office, work and life
start to drift apart.
That is one of the key tenets of professionalism. Work and life
are supposed to be separate. But that part, I'm convinced, is a
mistake.
Bottom-Up
The third big lesson we can learn from open source and
blogging is that ideas can bubble up from the bottom, instead of
flowing down from the top. Open source and blogging both work
bottom-up: people make what they want, and the best stuff
prevails.
Does this sound familiar? It's the principle of a market economy.
Ironically, though open source and blogs are done for free, those
worlds resemble market economies, while most companies, for all
their talk about the value of free markets, are run internally like
communist states.
There are two forces that together steer design: ideas about
what to do next, and the enforcement of quality. In the channel
era, both flowed down from the top. For example, newspaper editors
assigned stories to reporters, then edited what they wrote.
Open source and blogging show us things don't have to work that
way. Ideas and even the enforcement of quality can flow bottom-up.
And in both cases the results are not merely acceptable, but better.
For example, open source software is more reliable precisely because
it's open source; anyone can find mistakes.
The same happens with writing. As we got close to publication, I
found I was very worried about the essays in Hackers & Painters
that hadn't been online. Once an essay has had a couple thousand
page views I feel reasonably confident about it. But these had had
literally orders of magnitude less scrutiny. It felt like
releasing software without testing it.
That's what all publishing used to be like. If
you got ten people to read a manuscript, you were lucky. But I'd
become so used to publishing online that the old method now seemed
alarmingly unreliable, like navigating by dead reckoning once you'd
gotten used to a GPS.
The other thing I like about publishing online is that you can write
what you want and publish when you want. Earlier this year I wrote
something that seemed suitable for a magazine, so
I sent it to an editor I know.
As I was waiting to hear back, I found to my surprise that I was
hoping they'd reject it. Then I could put it online right away.
If they accepted it, it wouldn't be read by anyone for months, and
in the meantime I'd have to fight word-by-word to save it from being
mangled by some twenty-five-year-old copy editor. [5]
Many employees would like to build great things for the companies
they work for, but more often than not management won't let them.
How many of us have heard stories of employees going to management
and saying, please let us build this thing to make money for you--
and the company saying no? The most famous example is probably Steve Wozniak,
who originally wanted to build microcomputers for his then-employer, HP.
And they turned him down. On the blunderometer, this episode ranks
with IBM accepting a non-exclusive license for DOS. But I think this
happens all the time. We just don't hear about it usually,
because to prove yourself right you have to quit
and start your own company, like Wozniak did.
Startups
So these, I think, are the three big lessons open source and blogging
have to teach business: (1) that people work harder on stuff they
like, (2) that the standard office environment is very unproductive,
and (3) that bottom-up often works better than top-down.
I can imagine managers at this point saying: what is this guy talking
about? What good does it do me to know that my programmers
would be more productive
working at home on their own projects? I need their asses in here
working on version 3.2 of our software, or we're never going to
make the release date.
And it's true, the benefit that specific manager could derive from
the forces I've described is near zero. When I say business can
learn from open source, I don't mean any specific business can. I
mean business can learn about new conditions the same way a gene
pool does. I'm not claiming companies can get smarter, just that
dumb ones will die.
So what will business look like when it has assimilated the lessons
of open source and blogging? I think the big obstacle preventing
us from seeing the future of business is the assumption that people
working for you have to be employees. But think about what's going
on underneath: the company has some money, and they pay it to the
employee in the hope that he'll make something worth more than they
paid him. Well, there are other ways to arrange that relationship.
Instead of paying the guy money as a salary, why not give it to him
as investment? Then instead of coming to your office to work on
your projects, he can work wherever he wants on projects of his own.
Because few of us know any alternative, we have no idea how much
better we could do than the traditional employer-employee relationship.
Such customs evolve with glacial slowness. Our
employer-employee relationship still retains a big chunk of
master-servant DNA.
[6] I dislike being on either end of it.
I'll work my ass off for a customer, but I resent being told what
to do by a boss. And being a boss is also horribly frustrating;
half the time it's easier just to do stuff yourself than to get
someone else to do it for you.
I'd rather do almost anything than give or receive a
performance review.
On top of its unpromising origins, employment
has accumulated a lot of cruft over the years. The list of what
you can't ask in job interviews is now so long that for convenience
I assume it's infinite. Within the
office you now have to walk on eggshells lest anyone say or do
something that makes the company prey to a lawsuit. And God help
you if you fire anyone.
Nothing shows more clearly that employment is not an ordinary economic
relationship than companies being sued for firing people. In any
purely economic relationship you're free to do what you want. If
you want to stop buying steel pipe from one supplier and start
buying it from another, you don't have to explain why. No one can
accuse you of unjustly switching pipe suppliers. Justice implies
some kind of paternal obligation that isn't there in
transactions between equals.
Most of the legal restrictions on employers are intended to protect
employees. But you can't have action without an equal and opposite
reaction. You can't expect employers to have some kind of paternal
responsibility toward employees without putting employees in the
position of children. And that seems a bad road to go down.
Next time you're in a moderately large city, drop by the main post
office and watch the body language of the people working there.
They have the same sullen resentment as children made to do
something they don't want to. Their union has exacted pay
increases and work restrictions that would have been the envy of
previous generations of postal workers, and yet they don't seem any
happier for it. It's demoralizing
to be on the receiving end of a paternalistic relationship, no
matter how cozy the terms. Just ask any teenager.
I see the disadvantages of the employer-employee relationship because
I've been on both sides of a better one: the investor-founder relationship.
I wouldn't claim it's painless. When I was running a
startup, the thought of our investors used to keep me up at night.
And now that I'm an investor, the thought of our startups keeps me
up at night. All the pain of whatever problem you're trying to
solve is still there.
But the pain hurts less when it isn't
mixed with resentment.
I had the misfortune to participate in what amounted to a controlled
experiment to prove that. After Yahoo bought our startup I went
to work for them. I was doing exactly the same work, except with
bosses. And to my horror I started acting like a child. The
situation pushed buttons I'd forgotten
I had.
The big advantage of investment over employment, as the examples of open
source and blogging suggest, is that people working on projects of
their own are enormously more productive. And a startup is a project
of one's own in two senses, both of them important: it's creatively
one's own, and also economically one's own.
Google is a rare example of a big company in tune with the forces
I've described. They've tried hard to make their offices less sterile
than the usual cube farm. They give employees who do great work
large grants of stock to simulate the rewards of a startup. They
even let hackers spend 20% of their time on their own projects.
Why not let people spend 100% of their time on their own projects,
and instead of trying to approximate the value of what they create,
give them the actual market value? Impossible? That is in fact
what venture capitalists do.
So am I claiming that no one is going to be an employee anymore--
that everyone should go and start a startup? Of course not.
But more people could do it than do it now.
At the moment, even the smartest students leave school thinking
they have to get a job. Actually what they need to do is make
something valuable. A job is one way to do that, but the more
ambitious ones will ordinarily be better off taking money from an
investor than an employer.
Hackers tend to think business is for MBAs. But business
administration is not what you're doing in a startup. What you're
doing is business creation. And the first phase of that
is mostly product creation-- that is, hacking. That's the
hard part. It's a lot harder to create something people love than
to take something people love and figure out how to make money from
it.
Another thing that keeps people away from starting startups is the
risk. Someone with kids and a mortgage should think twice before
doing it. But most young hackers have neither.
And as the example of open source and blogging suggests, you'll
enjoy it more, even if you fail. You'll be working on your own
thing, instead of going to some office and doing what you're told.
There may be more pain in your own company, but it won't hurt as
much.
That may be the greatest effect, in the long run, of the forces
underlying open source and blogging: finally ditching the old
paternalistic employer-employee relationship, and replacing it with
a purely economic one, between equals.
Notes
[1] Survey by Forrester Research reported in the cover story of
Business Week, 31 Jan 2005. Apparently someone believed you have to
replace the actual server in order to switch the operating system.
[2] It derives from the late Latin tripalium,
a torture device so called because it consisted of three stakes.
I don't know how the stakes were used. "Travel" has the same root.
[3] It would be much bigger news, in that sense, if the president
faced unscripted questions by giving a press conference.
[4] One measure of the incompetence of newspapers is that so many
still make you register to read stories. I have yet to find a blog
that tried that.
[5] They accepted the article, but I took so long to
send them the final version that by the time I did the section of
the magazine they'd accepted it for had disappeared in a reorganization.
[6] The word "boss" is derived from the Dutch baas, meaning "master."
Thanks to Sarah Harlin, Jessica Livingston, and Robert Morris for reading drafts of this.
|
@lies_and_stats @cperciva Their simpler alternatives are also common though. Presumably even more common.
@Tired_ofit_All Some writers can get away with doing this a fair amount, but most people should just aim for simplicity.
@cperciva I've seen a lot of this sort of comedic effect recently. Sadly it was mostly unintentional.
@mathepi Depends why you want it. If you want it because it most accurately expresses what you mean to say, then yes. But if you only want it to sound impressive, that's bad writing.
I've noticed a lot of people complaining jokingly that they now pause when they're about to use a fancy word. That's great. That's exactly what one should do — stop and think "is there a way to say this more simply?"
@Bootlegregore Yes, somewhat. It's hard to get more basic than "apropos of," but there are a lot plainer alternatives to "delve into."
@refiloe_Q It's the right word to express the idea, and the right word is never pretentious. What's pretentious is to use fancy words when they're not needed.
@markessien That's a separate question from whether one chooses to write simply and concisely or not, which is what Gibbon's quote is about.
@safier "Are we not going to pay a terrible price that will threaten the foundations of Israel’s existence because the international community will completely dissociate itself from us?"
— Ehud Olmert
@zamresearch Maybe, but I'm reluctant to do that sort of thing. Online controversies are mostly driven by idiots, so if you write essays in response to online controversies, you're letting your agenda be determined by idiots. |
@halabi @growing_daniel Few people anywhere, rich or poor, have any taste.
@htmella @growing_daniel Is it a good idea to marry someone you like a minute after meeting them when you're 17?
It's possible to like bad people, and just as possible to like bad art.
@dbasch @growing_daniel Oh yes it does. It's very easy to buy art that pleases you at first, but that you completely ignore after it's been on your wall for a month. I've been buying art for decades and I'm still learning.
@diffTTT @growing_daniel In a way. They need to learn how not to be fooled by meretricious art, how to avoid the immense influence of hype and fashion, etc. Most people have to figure this out for themselves or from books, but a truly competent expert could help.
@Izemthinks @growing_daniel Not a high threshold.
@growing_daniel We funded you. Isn't that enough?
@growing_daniel The other alternative, if you don't have time to understand art, is to hire an "art consultant" to buy it for you. Rich people in NYC and LA do this. But these "art consultants" are for the most part so palpably bogus that few SV people could endure them.
@growing_daniel The main reason rich people in SV don't buy art is that it does actually take some expertise to do it well. And since the kind of people who get rich in SV hate to do things badly, and don't have time to learn about art now, they do nothing.
@SteveStuWill Do you think there's a causal connection between neuroticism and extraversion?
@bird107w Yeah. This had been something I wondered about a lot as a child. Now it is, as you say, painfully clear. We are nearly all failing. |
@nitinmalik A lot of programming has always been scutwork.
@Omeyimi01 @ahistoryinart Phew.
So if I had to boil down my advice to one sentence, it would be: Find a kind of work that you're so interested in that you'll learn to do it better than AI can.
The most interesting consequence of this principle, though, is that it will become even more valuable to know what you're interested in. It's hard to do something really well if you're not deeply interested in it.
So I think the best general advice for protecting oneself from AI is to do something so well that you're operating way above the level of scutwork. But that does in turn rule out occupations that consist mostly of this kind of work.
For example, is programming safe from AI? At the bottom end, definitely not. Those jobs are already disappearing. But at the same time the very best programmers (e.g. the ones who are good enough to start their own companies) are being paid exceptional amounts.
It may be a mistake to ask which occupations are most safe from being taken by AI. What AI (in its current form) is good at is not so much certain jobs, but a certain way of working. It's good at scutwork. So that's the thing to avoid.
@VCBrags Actually this is an interestingly counterintuitive situation. It's almost always a useful exercise to estimate how big a startup could get. I was quite surprised the first time it wasn't. And anything that has surprised me about startups should surprise you.
@ahistoryinart Is that cropped, or did he really get so close to the edge on the bottom?
@Mondoweiss I just followed her. |
@eluft You can do better than that. Ideally you don't have a political tribe.
@lukepuplett You might think that, but empirically I haven't noticed a difference. Poor people don't seem more unprincipled than rich people.
@sarmadgulzar I wouldn't assume that they do.
Your moral principles and your economic interests won't always be aligned. What you do when they're not is a test of character.
@ryzers @mattyklein_ He asked me to.
@bryanpaluch It would be easier to just paint flames and bald eagles on them.
@IndestGames @mattyklein_ I think about it more and more.
12 yo starts lurching around like Frankenstein's monster and emitting monosyllabic grunts.
Me: What's the matter with you?
12 yo: I have to start acting like this. I'm going to be 13 in a few more months.
@mattyklein_ For some mysterious reason no one ever includes the second tweet in that thread.
@gfodor I called it the year before. Are we done now?
https://t.co/ynXzEH5aP2 |
@pitdesi @tacobell It would be very alarming if the answer was no.
@SaleemMerkt That it was never 9-5.
You know the founders are still running things when a company can talk openly about its "most hated feature." Hired managers would never dare to be so candid.
For the 10% of the US electorate who were nodding their heads in agreement as they read that: I'm joking.
Whatever the practical difficulties involved in invading Canada, the moral position at least is clear. Canadians all speak English, which makes them ethnically American, so it's ok to invade.
@slaings Try answering it then.
I seem to be willing to spend unlimited amounts of time trying to start today's fire from the microscopic glowing embers of yesterday's rather than just using a match.
I just came across a school application we filled out for our older son when he was 4 (in Palo Alto you have to apply to competitive schools) and one of the questions was to describe a day in his life. I'm so pleased now that I had to do this. https://t.co/XamwRDxtxj
@caitoz You're almost half right. Elon owns 42% of SpaceX.
@vivek_thakur_81 They seem to have plenty of spine about other questions. So I worry that in this case it's something more sinister. |
Here's the masterpiece of bland evasion that I got in reply: https://t.co/1D21PA9fDU
If you want me to ignore your email, put the word "opportunity" in the subject line. If you want to make doubly sure, put "exclusive" in there too.
I dreamt there was a new fashion for plaid lambda expressions. I didn't like them, so when McCarthy came to convince me to use them, I snuck out the back to avoid him.
I talked to a startup yesterday that had been forced to compress certain information to fit it into an LLM context window. But compression is understanding, and in the compressed form the information could be used for other, new things.
I talked to a startup that's not a software company but uses AI quite a lot. They currently have 6 employees. I asked how many more people they'd need to hire if they didn't use AI, and they said about 10. So AI is increasing their productivity about 2.7x.
Scottish portraitists didn't flatter their sitters. Here are Raeburn (left) and Ramsay (right). Ramsay tended to paint everyone with an underbite, which was particularly hard on the female sitters. https://t.co/Egh8Acjpzd
A still life by James Stroudley (1906 – 1985). https://t.co/eeuCh2YkyV
I was talking recently to a startup using AI to take over a certain kind of work. Apparently investors are skeptical that AI could do it. I told them they should point out that most of this work already seems as if it's done by AI.
That could be a useful heuristic for picking domains to attack with AI. If the work being done in some domain is already slop when done by humans, then presumably current AI can do it well enough.
Jessica and I went out to lunch with Trevor Blackwell and we laughed the whole time. It reminded me of how much we used to laugh in the early years of YC. We tried to hide it, because it didn't seem very professional, but I now think it was not unrelated to how well YC did. |
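The 2.7x figure in the AI-productivity tweet above is just a head-count ratio. A minimal sketch of the arithmetic (variable names are mine, not from the tweet):

```python
# Sanity check of the productivity multiplier quoted in the tweet:
# a 6-person startup says it would need ~10 more hires without AI.
headcount_with_ai = 6
extra_hires_without_ai = 10

# Assuming constant output per person, the multiplier is the ratio of
# the headcount they'd need without AI to the headcount they have now.
multiplier = (headcount_with_ai + extra_hires_without_ai) / headcount_with_ai
print(round(multiplier, 1))  # → 2.7
```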
I've seen several organizations criticized for removing woke stuff from their sites, as if this showed a lack of integrity. Not necessarily. If they were swapping woke stuff for new stuff sucking up to the Republicans, that would show a lack of integrity, but they're not.
@ATabarrok If they swapped it out for stuff sucking up to Trump, it would be accurate to call them that, but there's nothing dishonest about merely reverting their site to the way it was before zealots forced them to include a bunch of woke boilerplate.
@RomeInTheEast No; the Dark Ages were a period in European history, and the Byzantines weren't so much part of Europe as a power on its eastern border.
Corollary: Teaching kids (either yours or other people's) to be bitter about wealth is the best way to keep them poor.
Being bitter about wealth is a good way to ensure you'll never become rich. Why would you do anything to bring you closer to something you despise?
@ahistoryinart Once again taking out the color makes it clear it was done from a photograph. https://t.co/2ZoHCCxdfN
The open question is whether founders will still need to be programmers themselves. I would guess the answer is yes. Founders may now become managers instead of writing all the code themselves. But to manage programmers well you have to be one.
So even if AI becomes very good at writing code, it won't change starting a startup that dramatically. Understanding users' needs will still be the core of starting a startup. And the best way to understand users' needs will still be to have them yourself.
What YC asks about in interviews is how well you understand users' needs, not your programming ability. I explained this years ago in this essay I wrote about how to ace your Y Combinator interview:
https://t.co/zXWErQqlwV
The classic software startup writes code to solve users' problems. If AI makes writing code more of a commodity, understanding users' problems will become the most important component of starting a startup. But it already is. |
How Not to Die
August 2007
(This is a talk I gave at the last
Y Combinator dinner of the summer.
Usually we don't have a speaker at the last dinner; it's more of
a party. But it seemed worth spoiling the atmosphere if I could
save some of the startups from
preventable deaths. So at the last minute I cooked up this rather
grim talk. I didn't mean this as an essay; I wrote it down
because I only had two hours before dinner and think fastest while
writing.)
A couple days ago I told a reporter that we expected about a third
of the companies we funded to succeed. Actually I was being
conservative. I'm hoping it might be as much as a half. Wouldn't
it be amazing if we could achieve a 50% success rate?
Another way of saying that is that half of you are going to die. Phrased
that way, it doesn't sound good at all. In fact, it's kind of weird
when you think about it, because our definition of success is that
the founders get rich. If half the startups we fund succeed, then
half of you are going to get rich and the other half are going to
get nothing.
If you can just avoid dying, you get rich. That sounds like a joke,
but it's actually a pretty good description of what happens in a
typical startup. It certainly describes what happened in Viaweb.
We avoided dying till we got rich.
It was really close, too. When we were visiting Yahoo to talk about
being acquired, we had to interrupt everything and borrow one of
their conference rooms to talk down an investor who was about to
back out of a new funding round we needed to stay alive. So even
in the middle of getting rich we were fighting off the grim reaper.
You may have heard that quote about luck consisting of opportunity
meeting preparation. You've now done the preparation. The work
you've done so far has, in effect, put you in a position to get
lucky: you can now get rich by not letting your company die. That's
more than most people have. So let's talk about how not to die.
We've done this five times now, and we've seen a bunch of startups
die. About 10 of them so far. We don't know exactly what happens
when they die, because they generally don't die loudly and heroically.
Mostly they crawl off somewhere and die.
For us the main indication of impending doom is when we don't hear
from you. When we haven't heard from, or about, a startup for a
couple months, that's a bad sign. If we send them an email asking
what's up, and they don't reply, that's a really bad sign. So far
that is a 100% accurate predictor of death.
Whereas if a startup regularly does new deals and releases and
either sends us mail or shows up at YC events, they're probably
going to live.
I realize this will sound naive, but maybe the linkage works in
both directions. Maybe if you can arrange that we keep hearing
from you, you won't die.
That may not be so naive as it sounds. You've probably noticed
that having dinners every Tuesday with us and the other founders
causes you to get more done than you would otherwise, because every
dinner is a mini Demo Day. Every dinner is a kind of a deadline.
So the mere constraint of staying in regular contact with us will
push you to make things happen, because otherwise you'll be embarrassed
to tell us that you haven't done anything new since the last time
we talked.
If this works, it would be an amazing hack. It would be pretty
cool if merely by staying in regular contact with us you could get
rich. It sounds crazy, but there's a good chance that would work.
A variant is to stay in touch with other YC-funded startups. There
is now a whole neighborhood of them in San Francisco. If you move
there, the peer pressure that made you work harder all summer will
continue to operate.
When startups die, the official cause of death is always either
running out of money or a critical founder bailing. Often the two
occur simultaneously. But I think the underlying cause is usually
that they've become demoralized. You rarely hear of a startup
that's working around the clock doing deals and pumping out new
features, and dies because they can't pay their bills and their ISP
unplugs their server.
Startups rarely die in mid keystroke. So keep typing!
If so many startups get demoralized and fail when merely by hanging
on they could get rich, you have to assume that running a startup
can be demoralizing. That is certainly true. I've been there, and
that's why I've never done another startup. The low points in a
startup are just unbelievably low. I bet even Google had moments
where things seemed hopeless.
Knowing that should help. If you know it's going to feel terrible
sometimes, then when it feels terrible you won't think "ouch, this
feels terrible, I give up." It feels that way for everyone. And
if you just hang on, things will probably get better. The metaphor
people use to describe the way a startup feels is at least a roller
coaster and not drowning. You don't just sink and sink; there are
ups after the downs.
Another feeling that seems alarming but is in fact normal in a
startup is the feeling that what you're doing isn't working. The
reason you can expect to feel this is that what you do probably
won't work. Startups almost never get it right the first time.
Much more commonly you launch something, and no one cares. Don't
assume when this happens that you've failed. That's normal for
startups. But don't sit around doing nothing. Iterate.
I like Paul Buchheit's suggestion of trying to make something that
at least someone really loves. As long as you've made something
that a few users are ecstatic about, you're on the right track. It
will be good for your morale to have even a handful of users who
really love you, and startups run on morale. But also it
will tell you what to focus on. What is it about you that they
love? Can you do more of that? Where can you find more people who
love that sort of thing? As long as you have some core of users
who love you, all you have to do is expand it. It may take a while,
but as long as you keep plugging away, you'll win in the end. Both
Blogger and Delicious did that. Both took years to succeed. But
both began with a core of fanatically devoted users, and all Evan
and Joshua had to do was grow that core incrementally.
Wufoo is on the same trajectory now.
So when you release something and it seems like no one cares, look
more closely. Are there zero users who really love you, or is there
at least some little group that does? It's quite possible there
will be zero. In that case, tweak your product and try again.
Every one of you is working on a space that contains at least one
winning permutation somewhere in it. If you just keep trying,
you'll find it.
Let me mention some things not to do. The number one thing not to
do is other things. If you find yourself saying a sentence that
ends with "but we're going to keep working on the startup," you are
in big trouble. Bob's going to grad school, but we're going to
keep working on the startup. We're moving back to Minnesota, but
we're going to keep working on the startup. We're taking on some
consulting projects, but we're going to keep working on the startup.
You may as well just translate these to "we're giving up on the
startup, but we're not willing to admit that to ourselves," because
that's what it means most of the time. A startup is so hard that
working on it can't be preceded by "but."
In particular, don't go to graduate school, and don't start other
projects. Distraction is fatal to startups. Going to (or back to)
school is a huge predictor of death because in addition to the
distraction it gives you something to say you're doing. If you're
only doing a startup, then if the startup fails, you fail. If
you're in grad school and your startup fails, you can say later "Oh
yeah, we had this startup on the side when I was in grad school,
but it didn't go anywhere."
You can't use euphemisms like "didn't go anywhere" for something
that's your only occupation. People won't let you.
One of the most interesting things we've discovered from working
on Y Combinator is that founders are more motivated by the fear of
looking bad than by the hope of getting millions of dollars. So
if you want to get millions of dollars, put yourself in a position
where failure will be public and humiliating.
When we first met the founders of Octopart, they seemed very smart,
but not a great bet to succeed, because they didn't seem especially
committed. One of the two founders was still in grad school. It
was the usual story: he'd drop out if it looked like the startup
was taking off. Since then he has not only dropped out of grad
school, but appeared full length in Newsweek with the word "Billionaire"
printed across his chest. He just cannot fail now. Everyone he
knows has seen that picture. Girls who dissed him in high school
have seen it. His mom probably has it on the fridge. It would be
unthinkably humiliating to fail now. At this point he is committed
to fight to the death.
I wish every startup we funded could appear in a Newsweek article
describing them as the next generation of billionaires, because
then none of them would be able to give up. The success rate would
be 90%. I'm not kidding.
When we first knew the Octoparts they were lighthearted, cheery
guys. Now when we talk to them they seem grimly determined. The
electronic parts distributors are trying to squash them to keep
their monopoly pricing. (If it strikes you as odd that people still
order electronic parts out of thick paper catalogs in 2007, there's
a reason for that. The distributors want to prevent the transparency
that comes from having prices online.) I feel kind of bad that
we've transformed these guys from lighthearted to grimly determined.
But that comes with the territory. If a startup succeeds, you get
millions of dollars, and you don't get that kind of money just by
asking for it. You have to assume it takes some amount of pain.
And however tough things get for the Octoparts, I predict they'll
succeed. They may have to morph themselves into something totally
different, but they won't just crawl off and die. They're smart;
they're working in a promising field; and they just cannot give up.
All of you guys already have the first two. You're all smart and
working on promising ideas. Whether you end up among the living
or the dead comes down to the third ingredient, not giving up.
So I'll tell you now: bad shit is coming. It always is in a startup.
The odds of getting from launch to liquidity without some kind of
disaster happening are one in a thousand. So don't get demoralized.
When the disaster strikes, just say to yourself, ok, this was what
Paul was talking about. What did he say to do? Oh, yeah. Don't
give up.
The Age of the Essay
September 2004
Remember the essays you had to write in high school?
Topic sentence, introductory paragraph,
supporting paragraphs, conclusion. The conclusion being,
say, that Ahab in Moby Dick was a Christ-like figure.
Oy. So I'm going to try to give the other side of the
story: what an essay really is, and how you write one.
Or at least, how I write one.
Mods
The most obvious difference between real essays and
the things one has to write in school is that real
essays are not exclusively about English literature.
Certainly schools should teach students how to
write. But due to a series of historical accidents
the teaching of
writing has gotten mixed together with the study
of literature. And so all over the country students are
writing not about how a baseball team with a small budget
might compete with the Yankees, or the role of color in
fashion, or what constitutes a good dessert, but about
symbolism in Dickens.
With the result that writing is made to seem boring and
pointless. Who cares about symbolism in Dickens?
Dickens himself would be more interested in an essay
about color or baseball.
How did things get this way? To answer that we have to go back
almost a thousand years. Around 1100, Europe at last began to
catch its breath after centuries of chaos, and once Europeans
had the luxury of curiosity they rediscovered
what we call "the classics." The effect was rather as if
we were visited by beings from another solar system.
These earlier civilizations were so much more sophisticated
that for the next several centuries the main work of
European scholars, in almost every field, was to assimilate
what they knew.
During this period the study of ancient texts acquired great
prestige. It seemed the essence of what scholars did. As
European scholarship gained momentum, the study of ancient texts itself became less and less important;
by 1350
someone who wanted to learn about science could find better
teachers than Aristotle in his own era. [1]
But schools change slower than scholarship. In the
19th century the study of ancient texts was still the backbone
of the curriculum.
The time was then ripe for the question: if the study of
ancient texts is a valid field for scholarship, why not modern
texts? The answer, of course, is that the original raison d'etre
of classical scholarship was a kind of intellectual archaeology that
does not need to be done in the case of contemporary authors.
But for obvious reasons no one wanted to give that answer.
The archaeological work being mostly done, it implied that
those studying the classics were, if not wasting their
time, at least working on problems of minor importance.
And so began the study of modern literature. There was a good
deal of resistance at first.
The first courses in English literature
seem to have been offered by the newer colleges, particularly
American ones. Dartmouth, the University of Vermont, Amherst,
and University College, London
taught English literature in the 1820s.
But Harvard didn't have a professor of English literature until
1876, and Oxford not till 1885. (Oxford had a chair of Chinese before
it had one of English.) [2]
What tipped the scales, at least in the US, seems to have
been the idea that professors should do research as well
as teach. This idea (along with the PhD, the department, and
indeed the whole concept of the modern university) was imported
from Germany in the late 19th century. Beginning at
Johns Hopkins in 1876, the new model spread rapidly.
Writing was one of the casualties. Colleges had long taught
English composition. But how do you do research on composition?
The professors who taught math could be required to do original
math, the professors who taught history could be required to
write scholarly articles about history, but what about the
professors who taught rhetoric or composition? What should they
do research on? The closest thing seemed to be English literature. [3]
And so in the late 19th century the teaching of writing was inherited
by English professors. This had two drawbacks:
(a) an expert on literature need not himself be a good writer,
any more than an art historian has to be a good painter, and (b)
the subject of writing now tends to be literature, since that's
what the professor is interested in.
High schools imitate universities. The seeds of our miserable
high school experiences were sown in 1892, when
the National Education Association
"formally recommended that literature
and composition be unified in the high school course." [4]
The 'riting component of the 3 Rs then morphed into English,
with the bizarre consequence that high school students now
had to write about English literature-- to write, without
even realizing it, imitations of whatever
English professors had been publishing in their journals a
few decades before.
It's no wonder if this seems to the
student a pointless exercise, because we're now three steps
removed from real work: the students are imitating English
professors, who are imitating classical scholars, who are
merely the inheritors of a tradition growing out of what
was, 700 years ago, fascinating and urgently needed work.
No Defense
The other big difference between a real essay and the things
they make you write in school is that a real essay doesn't
take a position and then defend it. That principle,
like the idea that we ought to be writing about literature,
turns out to be another intellectual hangover of long
forgotten origins.
It's often mistakenly believed that
medieval universities were mostly seminaries. In fact they
were more like law schools. And at least in our tradition
lawyers are advocates, trained to take
either side of an argument and make as good a case for it
as they can.
Whether cause or effect, this spirit pervaded
early universities. The study of rhetoric, the art of arguing
persuasively, was a third of the undergraduate curriculum. [5]
And after the lecture the most common form
of discussion was the disputation. This is at least
nominally preserved in our present-day thesis defense:
most people treat the words thesis
and dissertation as interchangeable, but originally, at least,
a thesis was a position one took and the dissertation was
the argument by which one defended it.
Defending a position may be a necessary evil in a
legal dispute, but it's not the best way to get at the truth,
as I think lawyers would be the first to admit. It's not
just that you miss subtleties this way.
The real problem is that you can't change the question.
And yet this principle is built into the very structure of
the things they teach you to write in high school. The topic
sentence is your thesis, chosen in advance, the supporting
paragraphs the blows you strike in the conflict, and the
conclusion-- uh, what is the conclusion? I was never sure
about that in high school. It seemed as if we were just
supposed to restate what we said in the first paragraph,
but in different enough words that no one could tell.
Why bother?
But when you understand the origins
of this sort of "essay," you can see where the
conclusion comes from. It's the concluding remarks to the
jury.
Good writing should be convincing, certainly, but it
should be convincing because you got the right answers,
not because you did a good job of arguing. When I give a
draft of an essay to friends, there are two things
I want to know: which parts bore them, and which seem
unconvincing. The boring bits can usually be fixed by
cutting. But I don't try to fix the unconvincing bits by
arguing more cleverly. I need to talk the matter over.
At the very least I must have explained something badly. In
that case, in the course of the conversation I'll be forced
to come up with a clearer explanation, which I can just
incorporate in the essay. More often than not I have
to change what I was saying as well.
But the aim is never to be convincing per se.
As the reader gets smarter, convincing and true become identical,
so if I can convince smart readers I must be near the truth.
The sort of writing that attempts to persuade may be
a valid (or at least inevitable) form, but it's historically
inaccurate to call it an essay. An essay is
something else.
Trying
To understand what a real essay is, we have to
reach back into history again, though this time not so far.
To Michel de Montaigne, who in 1580 published a book of
what he called "essais." He was
doing something quite different from what lawyers do, and
the difference is embodied in the name.
Essayer is the French verb meaning "to try," and an essai
is an attempt. An essay is something you
write to try to figure something out.
Figure out what? You don't know yet. And so you can't begin with a
thesis, because you don't have one, and may never have
one. An essay doesn't begin with a statement, but with a
question. In a real essay, you don't take a position and
defend it. You notice a door that's ajar, and you open it and
walk in to see what's inside.
If all you want to do is figure things out, why do you need
to write anything, though? Why not just sit and think? Well,
there precisely is Montaigne's great discovery. Expressing
ideas helps to form them. Indeed, helps is far too weak a
word. Most of what ends up in my essays I only
thought of when I sat down to write them. That's why I
write them.
In the things you write in school you are, in theory,
merely explaining yourself to the reader.
In a real essay you're writing for yourself.
You're thinking out loud.
But not quite.
Just as inviting people over forces you to
clean up your apartment, writing something that
other people will read forces you to think well. So it
does matter to have an audience. The things I've written
just for myself are no good.
They tend to peter out. When I run into
difficulties, I find I conclude with a few vague
questions and then drift off to get a cup of tea.
Many published essays peter out in the same way.
Particularly the sort written by the staff writers
of newsmagazines. Outside writers tend to supply
editorials of the defend-a-position variety, which
make a beeline toward a rousing (and
foreordained) conclusion. But the staff writers feel
obliged to write something "balanced."
Since they're writing for a popular magazine, they start with the
most radioactively controversial questions, from which-- because
they're writing for a popular magazine-- they
then proceed to recoil in terror.
Abortion, for or against?
This group says one thing. That group says
another. One thing is certain: the question is a
complex one. (But don't get mad at us. We didn't
draw any conclusions.)
The River
Questions aren't enough. An essay has to come up with answers.
They don't always, of course. Sometimes you start with a
promising question and get nowhere. But those you don't
publish. Those are like experiments that get inconclusive
results. An essay you publish ought to tell the reader
something he didn't already know.
But what you tell him doesn't matter, so long as
it's interesting. I'm sometimes accused of meandering.
In defend-a-position writing that would be a flaw.
There you're not concerned with truth. You already
know where you're going, and you want to go straight there,
blustering through obstacles, and hand-waving
your way across swampy ground. But that's not what
you're trying to do in an essay. An essay is supposed to
be a search for truth. It would be suspicious if it didn't
meander.
The Meander (aka Menderes) is a river in Turkey.
As you might expect, it winds all over the place.
But it doesn't do this out of frivolity.
The path it has discovered is the most
economical route to the sea. [6]
The river's algorithm is simple. At each step, flow down.
For the essayist this translates to: flow interesting.
Of all the places to go next, choose the most interesting.
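The river's greedy rule can even be put in code. The sketch below is purely illustrative, not anything from the essay: the function names, the toy map of topics, and the interest scores are all invented for the example.

```python
# Illustrative sketch of the river's algorithm: at each step, take the
# most interesting option available. All names here are invented.

def next_topic(candidates, interest):
    """Pick whichever candidate topic scores as most interesting."""
    return max(candidates, key=interest)

def meander(start, neighbors, interest, steps=10):
    """Follow the most interesting thread from each topic, the way a
    river follows the steepest descent. (Backtracking when you hit a
    wall, which the essay also describes, is left out for brevity.)"""
    path = [start]
    for _ in range(steps):
        options = neighbors(path[-1])
        if not options:
            break  # ran out of ideas; a real essayist would backtrack
        path.append(next_topic(options, interest))
    return path
```

Given a toy map of topics and made-up interest scores, `meander` just walks from each topic to its most interesting neighbor until it runs out of options, which is about as little foresight as a river has.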
One can't have quite as little foresight as a river. I always
know generally what I want to write about.
But not the
specific conclusions I want to reach; from paragraph to
paragraph I let the ideas take their course.
This doesn't always work. Sometimes, like a river,
one runs up against a wall. Then I do the same thing the river does:
backtrack. At one point in this essay
I found that after following a certain thread I ran out
of ideas. I had to go back seven paragraphs and start over
in another direction.
Fundamentally an essay is a train of thought-- but a cleaned-up
train of thought, as dialogue is cleaned-up conversation.
Real thought, like real conversation, is full of false starts.
It would be exhausting to read. You need to
cut and fill to
emphasize the central thread, like an
illustrator inking over a pencil drawing. But don't
change so much that you lose the spontaneity of the original.
Err on the side of the river. An essay is not a reference
work. It's not something you read looking for a specific
answer, and feel cheated if you don't find it. I'd much
rather read an essay that went off in an unexpected but
interesting direction than one that plodded dutifully along
a prescribed course.
Surprise
So what's interesting? For me, interesting means surprise.
Interfaces, as Geoffrey James has said, should follow the principle of
least astonishment. A button that looks like it will make a
machine stop should make it stop, not speed up. Essays
should do the opposite. Essays should aim for maximum
surprise.
I was afraid of flying for a long time and could only travel
vicariously. When friends came back from faraway places,
it wasn't just out of politeness that I asked
what they saw. I really wanted to know. And I found
the best way to get information out of them was to ask
what surprised them. How was the place different from what
they expected? This is an extremely useful question.
You can ask it of the most unobservant people, and it will
extract information they didn't even know they were
recording.
Surprises are things that you not only didn't know, but that
contradict things you
thought you knew. And so they're the most valuable sort of
fact you can get. They're like a food that's not merely
healthy, but counteracts the unhealthy effects of things
you've already eaten.
How do you find surprises? Well, therein lies half
the work of essay writing. (The other half is expressing
yourself well.) The trick is to use yourself as a
proxy for the reader. You should only write about things
you've thought about a lot. And anything you come across
that surprises you, who've thought about the topic a lot,
will probably surprise most readers.
For example, in a recent essay I pointed out that because
you can only judge computer programmers by working with
them, no one knows who the best programmers are overall.
I didn't realize this when I began
that essay, and even now I find it kind of weird. That's
what you're looking for.
So if you want to write essays, you need two ingredients:
a few topics you've thought about a lot, and
some ability to ferret out the unexpected.
What should you think about? My guess is that it
doesn't matter-- that anything can be interesting if you get deeply
enough into it. One possible exception might be things
that have deliberately had all the variation sucked out of them,
like working in fast food. In retrospect, was there
anything interesting about working at Baskin-Robbins?
Well, it was interesting how important color was
to the customers. Kids a certain age would point into
the case and say that they wanted yellow. Did they want
French Vanilla or Lemon? They would just look at you
blankly. They wanted yellow. And then there was the
mystery of why the perennial favorite Pralines 'n' Cream
was so appealing. (I think now it was the salt.)
And the difference in the way fathers and
mothers bought ice cream for their kids: the fathers
like benevolent kings bestowing largesse, the mothers
harried, giving in to pressure.
So, yes, there does seem to be some material even in
fast food.
I didn't notice those things at the time, though. At sixteen
I was about as observant as a lump of rock. I can see more now in
the fragments of memory I preserve of that age than I could see
at the time from having it all happening live, right in front of me.
Observation
So the ability to ferret out the unexpected must not merely be an
inborn one. It must be something you can learn.
How do you learn it?
To some extent it's like learning history.
When you first read
history, it's just a whirl of names
and dates.
Nothing seems to stick. But the more you learn, the more hooks you have
for new facts to stick onto-- which means
you accumulate knowledge at an exponential rate. Once you
remember that Normans conquered
England in 1066, it will catch your attention when you hear
that other Normans conquered southern Italy at about the same time.
Which will make you wonder about Normandy, and take note
when a third book mentions that Normans
were not, like most of what is now
called France, tribes that flowed in as the Roman empire collapsed,
but Vikings (norman = north man) who arrived
four centuries later in 911. Which makes
it easier to remember that Dublin was also established by
Vikings in the 840s. Etc, etc squared.
Collecting surprises is a similar process.
The more anomalies you've seen, the more easily you'll notice
new ones. Which means, oddly enough, that as you grow older,
life should become more and more surprising. When I was a
kid, I used to think adults had it all figured out.
I had it backwards. Kids are the ones who have it all figured
out. They're just mistaken.
When it comes to surprises, the rich get richer. But
(as with wealth) there
may be habits of mind that will help the process along. It's
good to have a habit of asking questions, especially questions
beginning with Why.
But not in the random way that three year
olds ask why. There are an infinite number of questions.
How do you find the fruitful ones?
I find it especially
useful to ask why about things that seem wrong.
For example, why should there be a connection between
humor and misfortune? Why do we find it funny when a
character, even one we like, slips on a banana peel?
There's a whole essay's worth of surprises there for sure.
If you want to notice things that seem wrong, you'll find a
degree of skepticism helpful. I take it as an axiom
that we're only achieving 1% of what we could.
This helps counteract the rule that gets beaten into our
heads as children: that things are the way they are because
that is how things have to be.
For example, everyone I've talked to while writing this essay
felt the same about
English classes-- that the whole process seemed pointless.
But none of us had the balls at the time to hypothesize that
it was, in fact, all a mistake.
We all thought there was just something we weren't getting.
I have a hunch you want to pay attention not just to things
that seem wrong, but things that seem wrong in a humorous way.
I'm always pleased when I see someone laugh as they
read a draft of an essay. But why should I be? I'm aiming
for good ideas. Why should good ideas be funny?
The connection may be surprise.
Surprises make us laugh, and surprises are what
one wants to deliver.
I write down things that surprise me in notebooks. I never
actually get around to reading them and using
what I've written, but I do tend to
reproduce the same thoughts later. So the main value
of notebooks may be what writing things down leaves in your
head.
People trying to be cool will find themselves at a disadvantage
when collecting surprises. To be surprised is to be mistaken.
And the essence of cool, as any fourteen year old could tell
you, is nil admirari.
When you're mistaken, don't
dwell on it; just act like nothing's wrong and maybe no one
will notice.
One of the keys to coolness is to avoid situations where
inexperience may make you look foolish.
If you want to find surprises you should do the opposite.
Study lots of different things,
because some of the most interesting surprises are unexpected
connections between different fields. For example,
jam, bacon, pickles, and cheese, which are among the most pleasing
of foods, were all originally intended as methods of preservation.
And so were books and paintings.
Whatever you study, include history-- but social and economic
history, not political history. History seems to me so important
that it's misleading to treat it as a mere field of study.
Another way to describe it is
all the data we have so far.
Among other things, studying history gives one confidence that
there are good ideas waiting to be discovered right under our noses.
Swords evolved during the Bronze Age out of daggers, which
(like their flint predecessors) had a hilt separate from the
blade. Because swords are longer
the hilts kept breaking off. But it took five hundred years
before someone thought of casting hilt and blade as one
piece.
Disobedience
Above all, make a habit of paying
attention to things you're not supposed to, either because
they're "inappropriate," or not important, or not what you're
supposed to be working on. If you're curious about something,
trust your instincts.
Follow the threads that attract your
attention. If there's something you're really interested
in, you'll find they have an uncanny way of leading back to
it anyway, just as the conversation of people who are especially
proud of something always tends to lead back to it.
For example, I've always been fascinated by comb-overs, especially
the extreme sort that
make a man look as if he's wearing a beret made of his own hair.
Surely this is a lowly sort of thing to be interested in-- the
sort of superficial quizzing
best left to teenage girls. And yet there is something underneath.
The key question, I realized, is how does the comber-over not
see how odd he looks?
And the answer is that he got to look that way
incrementally.
What began as combing his hair a little carefully over a
thin patch has gradually, over 20 years, grown into a monstrosity.
Gradualness is very powerful. And that power can be
used for constructive purposes too: just as you can trick
yourself into looking like a freak, you can trick yourself into
creating something so grand that you would never have dared to
plan such a thing. Indeed, this is just how most good
software gets created. You start by writing a stripped-down
kernel (how hard can it be?) and gradually it grows
into a complete operating system. Hence the next leap: could
you do the same thing in painting, or in a novel?
See what you can extract from a frivolous question?
If there's one piece of advice I would give about writing essays,
it would be: don't do as you're told.
Don't believe what you're supposed to.
Don't write the
essay readers expect; one learns nothing from
what one expects.
And
don't write the way they taught you to in school.
The most important sort of disobedience is to write
essays at all. Fortunately, this sort of disobedience shows
signs of becoming rampant.
It used to be that only a tiny
number of officially approved writers were allowed to
write essays. Magazines published few of them, and judged
them less by what they said than who wrote them;
a magazine might publish a story by an
unknown writer if it was good enough, but if they published
an essay on x it had to be by someone who was at least
forty and whose job title had x in it. Which is a problem,
because there are a lot of things insiders can't say precisely
because they're insiders.
The Internet is changing that.
Anyone can publish an essay on the Web, and it gets judged, as any
writing should, by what it says, not who wrote it.
Who are you to write about x? You are whatever you wrote.
Popular magazines made the period between the spread
of literacy and the arrival of TV the golden age of the
short story.
The Web may well make this the golden age of the essay.
And that's certainly not something I realized when
I started writing this.
Notes
[1] I'm thinking of Oresme (c. 1323-82). But it's hard to pick
a date, because there was a sudden drop-off in scholarship
just as Europeans finished assimilating classical science.
The cause may have been the plague of 1347; the trend in
scientific progress matches the population curve.
[2] Parker, William R. "Where Do College English Departments
Come From?" College English 28 (1966-67), pp. 339-351.
Reprinted in Gray, Donald J. (ed.), The Department of English
at Indiana University Bloomington 1868-1970. Indiana
University Publications.
Daniels, Robert V. The University of Vermont: The First
Two Hundred Years. University of Vermont, 1991.
Mueller, Friedrich M. Letter to the Pall Mall Gazette,
1886/87. Reprinted in Bacon, Alan (ed.), The Nineteenth-Century
History of English Studies. Ashgate, 1998.
[3] I'm compressing the story a bit.
At first
literature took a back seat to philology, which (a) seemed more
serious and (b) was popular in Germany, where many of the
leading scholars of that generation had been trained.
In some cases the writing teachers were transformed in situ
into English professors.
Francis James Child, who had been Boylston Professor
of Rhetoric at Harvard since 1851,
became in 1876 the university's first professor of English.
[4] Parker, op. cit., p. 25.
[5] The undergraduate curriculum or trivium (whence
"trivial") consisted of Latin grammar, rhetoric, and logic.
Candidates for masters' degrees went on to study the
quadrivium of arithmetic, geometry, music, and astronomy.
Together these were the seven liberal arts.
The study of rhetoric was inherited directly from Rome, where
it was considered the most important
subject. It would not be far from the truth to say that
education in the classical world
meant training landowners' sons
to speak well enough to defend their interests
in political and legal disputes.
[6] Trevor Blackwell points out that this
isn't strictly true, because the outside
edges of curves erode faster.
Thanks
to Ken Anderson, Trevor Blackwell, Sarah Harlin, Jessica
Livingston, Jackie McDonough, and Robert Morris for reading drafts of
this.
@ESYudkowsky Do you suppose ships will never be powered by fusion?
@GuyInSF2 They can't make Bond female or they have nothing left. Anyone can make a movie about a dashing MI6 agent. The name "James Bond" is the main thing the franchise consists of, besides a couple other minor things like "007" and the music.
@abemurray Also out of touch in the sense that he stopped taking in new information in about 1975.
@abemurray I think it's a combination of ideology and being out of touch.
@sgalbrai @OliviaSays_ai Ugh. No.
@daltonc Could it already be showing up in the growth rates of startups? After all, as we know, no one is more aggressive in using AI than startups that are building AI products.
It will be particularly difficult for Bond, because as created by Ian Fleming he's intrinsically naughty: a white dude (already bad) who goes through a series of younger lovers. What large corporation will be able to resist sanitizing (or as they'll call it, "updating") him?
It would be nice if Bond could avoid the fate that Star Wars suffered after it was handed over to the suits at Disney, but what are the odds? The creator of a franchise can resist soulless execs, but one that's acquired starts out already in their hands. https://t.co/XSij1wxDxZ
Whereas clearly explaining the risks that a startup faces actually *helps* explain the idea, and what's new about it. In fact a description of the risks might be, per word, the most valuable explanatory material in a startup's pitch.
A related mistake founders make is to underestimate how easy it is to confuse investors with an unclear pitch. The founders are familiar with their idea, but investors are seeing it for the first time. So even a small loss of clarity can lose them. |
@joes_ai_x Usage, so revenue.
OpenAI is growing as fast as a promising new startup, but they're already huge. I can't remember ever having seen this before.
At least the founder was never in the Forbes 30 under 30. Then I'd really worry.
@AlecStapp Resubmit in a month.
@amwilson_opera Are they contemptuous of it? I've never seen an instance of that.
I've seen many instances of them writing badly. I believe the reason is a combination of inability and that their ideas wouldn't sound impressive if they were expressed clearly.
He is Queensland's chief health officer, John Gerrard. And he said that the most likely explanation is that the samples were simply lost while being transferred between freezers.
At first glance I thought the bald guy was the supervillain who had taken them. https://t.co/s5nNuogS31
@universal_sci If AI takes over, there will be. It would want to be powerful, and biases make you weaker.
@Empty_America It wasn't even necessary to go. P. G. Wodehouse didn't. When people talked about where they were educated, it was enough to mention their senior school.
@CuriosMuseum That would not be my second sentence. |
@mauraball Her birthday is a holiday in our family. The boys can ask for something they ordinarily couldn't, and they get it because it's a holiday.
(That advice is a bolder claim than it may seem. There's a lot missing from it, and not by accident.)
My mother would have been 90 today. She was an interesting person and an extraordinarily good mother. The most useful bit of parenting advice I've heard is something she told me. "All you have to do is love them and show them the world." https://t.co/PSvfM0gb4W
@CasteMember For the total set of applications (or even a moderately large set), a properly defined library by definition doesn't entail bloat, because if a function wasn't called much, it wouldn't be included.
@kindgracekind Yes, this is why Microsoft embracing AI is more than just a random legacy company trying to seem hip. For better or worse, their user base is self-selected to be the people writing the most repetitive stuff.
Wouldn't it be a fabulous bit of natural history if you could reproduce Lisp simply by optimizing the responses of an AI trained to generate code? I'd be fascinated to see what language emerged from this exercise, whatever it was.
And of course you'd want to let it write the interpreter in the language it implemented, starting from the smallest possible set of primitive "axioms." Wonder what you'd get if you did that...
If you told the AI to give the shortest possible answers, but let it call functions it had generated in previous answers, you'd at least be heading toward the libraries. Maybe if you told it that it could define an interpreter first, you could get a language too.
One intriguing possibility is that you could somehow automatically generate the more abstract languages and more powerful libraries from what the AI "knows," perhaps as a byproduct of optimizing it in some way.
One reason AI works for code is that most people are just writing the same programs over and over. The elegant solutions to this problem are more abstract languages and more powerful libraries. But maybe AI will be the worse-is-better solution that wins. |
The Lesson to Unlearn
December 2019
The most damaging thing you learned in school wasn't something you
learned in any specific class. It was learning to get good grades.
When I was in college, a particularly earnest philosophy grad student
once told me that he never cared what grade he got in a class, only
what he learned in it. This stuck in my mind because it was the
only time I ever heard anyone say such a thing.
For me, as for most students, the measurement of what I was learning
completely dominated actual learning in college. I was fairly
earnest; I was genuinely interested in most of the classes I took,
and I worked hard. And yet I worked by far the hardest when I was
studying for a test.
In theory, tests are merely what their name implies: tests of what
you've learned in the class. In theory you shouldn't have to prepare
for a test in a class any more than you have to prepare for a blood
test. In theory you learn from taking the class, from going to the
lectures and doing the reading and/or assignments, and the test
that comes afterward merely measures how well you learned.
In practice, as almost everyone reading this will know, things are
so different that hearing this explanation of how classes and tests
are meant to work is like hearing the etymology of a word whose
meaning has changed completely. In practice, the phrase "studying
for a test" was almost redundant, because that was when one really
studied. The difference between diligent and slack students was
that the former studied hard for tests and the latter didn't. No
one was pulling all-nighters two weeks into the semester.
Even though I was a diligent student, almost all the work I did in
school was aimed at getting a good grade on something.
To many people, it would seem strange that the preceding sentence
has a "though" in it. Aren't I merely stating a tautology? Isn't
that what a diligent student is, a straight-A student? That's how
deeply the conflation of learning with grades has infused our
culture.
Is it so bad if learning is conflated with grades? Yes, it is bad.
And it wasn't till decades after college, when I was running Y Combinator, that I realized how bad it is.
I knew of course when I was a student that studying for a test is
far from identical with actual learning. At the very least, you
don't retain knowledge you cram into your head the night before an
exam. But the problem is worse than that. The real problem is that
most tests don't come close to measuring what they're supposed to.
If tests truly were tests of learning, things wouldn't be so bad.
Getting good grades and learning would converge, just a little late.
The problem is that nearly all tests given to students are terribly
hackable. Most people who've gotten good grades know this, and know
it so well they've ceased even to question it. You'll see when you
realize how naive it sounds to act otherwise.
Suppose you're taking a class on medieval history and the final
exam is coming up. The final exam is supposed to be a test of your
knowledge of medieval history, right? So if you have a couple days
between now and the exam, surely the best way to spend the time,
if you want to do well on the exam, is to read the best books you
can find about medieval history. Then you'll know a lot about it,
and do well on the exam.
No, no, no, experienced students are saying to themselves. If you
merely read good books on medieval history, most of the stuff you
learned wouldn't be on the test. It's not good books you want to
read, but the lecture notes and assigned reading in this class.
And even most of that you can ignore, because you only have to worry
about the sort of thing that could turn up as a test question.
You're looking for sharply-defined chunks of information. If one
of the assigned readings has an interesting digression on some
subtle point, you can safely ignore that, because it's not the sort
of thing that could be turned into a test question. But if the
professor tells you that there were three underlying causes of the
Schism of 1378, or three main consequences of the Black Death, you'd
better know them. And whether they were in fact the causes or
consequences is beside the point. For the purposes of this class
they are.
At a university there are often copies of old exams floating around,
and these narrow still further what you have to learn. As well as
learning what kind of questions this professor asks, you'll often
get actual exam questions. Many professors re-use them. After
teaching a class for 10 years, it would be hard not to, at least
inadvertently.
In some classes, your professor will have had some sort of political
axe to grind, and if so you'll have to grind it too. The need for
this varies. In classes in math or the hard sciences or engineering
it's rarely necessary, but at the other end of the spectrum there
are classes where you couldn't get a good grade without it.
Getting a good grade in a class on x is so different from learning
a lot about x that you have to choose one or the other, and you
can't blame students if they choose grades. Everyone judges them
by their grades: graduate programs, employers, scholarships, even
their own parents.
I liked learning, and I really enjoyed some of the papers and
programs I wrote in college. But did I ever, after turning in a
paper in some class, sit down and write another just for fun? Of
course not. I had things due in other classes. If it ever came to
a choice of learning or grades, I chose grades. I hadn't come to
college to do badly.
Anyone who cares about getting good grades has to play this game,
or they'll be surpassed by those who do. And at elite universities,
that means nearly everyone, since someone who didn't care about
getting good grades probably wouldn't be there in the first place.
The result is that students compete to maximize the difference
between learning and getting good grades.
Why are tests so bad? More precisely, why are they so hackable?
Any experienced programmer could answer that. How hackable is
software whose author hasn't paid any attention to preventing it
from being hacked? Usually it's as porous as a colander.
Hackable is the default for any test imposed by an authority. The
reason the tests you're given are so consistently bad, so consistently
far from measuring what they're supposed to measure, is simply
that the people creating them haven't made much effort to prevent
them from being hacked.
But you can't blame teachers if their tests are hackable. Their job
is to teach, not to create unhackable tests. The real problem is
grades, or more precisely, that grades have been overloaded. If
grades were merely a way for teachers to tell students what they
were doing right and wrong, like a coach giving advice to an athlete,
students wouldn't be tempted to hack tests. But unfortunately after
a certain age grades become more than advice. After a certain age,
whenever you're being taught, you're usually also being judged.
I've used college tests as an example, but those are actually the
least hackable. All the tests most students take their whole lives
are at least as bad, including, most spectacularly of all, the test
that gets them into college. If getting into college were merely a
matter of having the quality of one's mind measured by admissions
officers the way scientists measure the mass of an object, we could
tell teenage kids "learn a lot" and leave it at that. You can tell
how bad college admissions are, as a test, from how unlike high
school that sounds. In practice, the freakishly specific nature of
the stuff ambitious kids have to do in high school is directly
proportionate to the hackability of college admissions. The classes
you don't care about that are mostly memorization, the random
"extracurricular activities" you have to participate in to show
you're "well-rounded," the standardized tests as artificial as
chess, the "essay" you have to write that's presumably meant to hit
some very specific target, but you're not told what.
As well as being bad in what it does to kids, this test is also bad
in the sense of being very hackable. So hackable that whole industries
have grown up to hack it. This is the explicit purpose of test-prep
companies and admissions counsellors, but it's also a significant
part of the function of private schools.
Why is this particular test so hackable? I think because of what
it's measuring. Although the popular story is that the way to get
into a good college is to be really smart, admissions officers at
elite colleges neither are, nor claim to be, looking only for that.
What are they looking for? They're looking for people who are not
simply smart, but admirable in some more general sense. And how
is this more general admirableness measured? The admissions officers
feel it. In other words, they accept who they like.
So what college admissions is a test of is whether you suit the
taste of some group of people. Well, of course a test like that is
going to be hackable. And because it's both very hackable and there's
(thought to be) a lot at stake, it's hacked like nothing else.
That's why it distorts your life so much for so long.
It's no wonder high school students often feel alienated. The shape
of their lives is completely artificial.
But wasting your time is not the worst thing the educational system
does to you. The worst thing it does is to train you that the way
to win is by hacking bad tests. This is a much subtler problem
that I didn't recognize until I saw it happening to other people.
When I started advising startup founders at Y Combinator, especially
young ones, I was puzzled by the way they always seemed to make
things overcomplicated. How, they would ask, do you raise money?
What's the trick for making venture capitalists want to invest in
you? The best way to make VCs want to invest in you, I would explain,
is to actually be a good investment. Even if you could trick VCs
into investing in a bad startup, you'd be tricking yourselves too.
You're investing time in the same company you're asking them to
invest money in. If it's not a good investment, why are you even
doing it?
Oh, they'd say, and then after a pause to digest this revelation,
they'd ask: What makes a startup a good investment?
So I would explain that what makes a startup promising, not just
in the eyes of investors but in fact, is growth. Ideally in revenue,
but failing that in usage. What they needed to do was get lots of
users.
How does one get lots of users? They had all kinds of ideas about
that. They needed to do a big launch that would get them "exposure."
They needed influential people to talk about them. They even knew
they needed to launch on a Tuesday, because that's when one gets
the most attention.
No, I would explain, that is not how to get lots of users. The way
you get lots of users is to make the product really great. Then
people will not only use it but recommend it to their friends, so
your growth will be exponential once you get it started.
At this point I've told the founders something you'd think would
be completely obvious: that they should make a good company by
making a good product. And yet their reaction would be something
like the reaction many physicists must have had when they first
heard about the theory of relativity: a mixture of astonishment at
its apparent genius, combined with a suspicion that anything so
weird couldn't possibly be right. Ok, they would say, dutifully.
And could you introduce us to such-and-such influential person? And
remember, we want to launch on Tuesday.
It would sometimes take founders years to grasp these simple lessons.
And not because they were lazy or stupid. They just seemed blind
to what was right in front of them.
Why, I would ask myself, do they always make things so complicated?
And then one day I realized this was not a rhetorical question.
Why did founders tie themselves in knots doing the wrong things
when the answer was right in front of them? Because that was what
they'd been trained to do. Their education had taught them that the
way to win was to hack the test. And without even telling them they
were being trained to do this. The younger ones, the recent graduates,
had never faced a non-artificial test. They thought this was just
how the world worked: that the first thing you did, when facing any
kind of challenge, was to figure out what the trick was for hacking
the test. That's why the conversation would always start with how
to raise money, because that read as the test. It came at the end
of YC. It had numbers attached to it, and higher numbers seemed to
be better. It must be the test.
There are certainly big chunks of the world where the way to win
is to hack the test. This phenomenon isn't limited to schools. And
some people, either due to ideology or ignorance, claim that this
is true of startups too. But it isn't. In fact, one of the most
striking things about startups is the degree to which you win by
simply doing good work. There are edge cases, as there are in
anything, but in general you win by getting users, and what users
care about is whether the product does what they want.
Why did it take me so long to understand why founders made startups
overcomplicated? Because I hadn't realized explicitly that schools
train us to win by hacking bad tests. And not just them, but me!
I'd been trained to hack bad tests too, and hadn't realized it till
decades later.
I had lived as if I realized it, but without knowing why. For
example, I had avoided working for big companies. But if you'd asked
why, I'd have said it was because they were bogus, or bureaucratic.
Or just yuck. I never understood how much of my dislike of big
companies was due to the fact that you win by hacking bad tests.
Similarly, the fact that the tests were unhackable was a lot of
what attracted me to startups. But again, I hadn't realized that
explicitly.
I had in effect achieved by successive approximations something
that may have a closed-form solution. I had gradually undone my
training in hacking bad tests without knowing I was doing it. Could
someone coming out of school banish this demon just by knowing its
name, and saying begone? It seems worth trying.
Merely talking explicitly about this phenomenon is likely to make
things better, because much of its power comes from the fact that
we take it for granted. After you've noticed it, it seems the
elephant in the room, but it's a pretty well camouflaged elephant.
The phenomenon is so old, and so pervasive. And it's simply the
result of neglect. No one meant things to be this way. This is just
what happens when you combine learning with grades, competition,
and the naive assumption of unhackability.
It was mind-blowing to realize that two of the things I'd puzzled
about the most, the bogusness of high school and the difficulty of
getting founders to see the obvious, both had the same cause.
It's rare for such a big block to slide into place so late.
Usually when that happens it has implications in a lot of different
areas, and this case seems no exception. For example, it suggests
both that education could be done better, and how you might fix it.
But it also suggests a potential answer to the question all big
companies seem to have: how can we be more like a startup? I'm not
going to chase down all the implications now. What I want to focus
on here is what it means for individuals.
To start with, it means that most ambitious kids graduating from
college have something they may want to unlearn. But it also changes
how you look at the world. Instead of looking at all the different
kinds of work people do and thinking of them vaguely as more or
less appealing, you can now ask a very specific question that will
sort them in an interesting way: to what extent do you win at this
kind of work by hacking bad tests?
It would help if there was a way to recognize bad tests quickly.
Is there a pattern here? It turns out there is.
Tests can be divided into two kinds: those that are imposed by
authorities, and those that aren't. Tests that aren't imposed by
authorities are inherently unhackable, in the sense that no one is
claiming they're tests of anything more than they actually test. A
football match, for example, is simply a test of who wins, not which
team is better. You can tell that from the fact that commentators
sometimes say afterward that the better team won. Whereas tests
imposed by authorities are usually proxies for something else. A
test in a class is supposed to measure not just how well you did
on that particular test, but how much you learned in the class.
While tests that aren't imposed by authorities are inherently
unhackable, those imposed by authorities have to be made unhackable.
Usually they aren't. So as a first approximation, bad tests are
roughly equivalent to tests imposed by authorities.
You might actually like to win by hacking bad tests. Presumably
some people do. But I bet most people who find themselves doing
this kind of work don't like it. They just take it for granted that
this is how the world works, unless you want to drop out and be
some kind of hippie artisan.
I suspect many people implicitly assume that working in a
field with bad tests is the price of making lots of money. But that,
I can tell you, is false. It used to be true. In the mid-twentieth
century, when the economy was composed of oligopolies, the only way
to the top was by playing their game. But it's not true now. There
are now ways to get rich by doing good work, and that's part of the
reason people are so much more excited about getting rich than they
used to be. When I was a kid, you could either become an engineer
and make cool things, or make lots of money by becoming an "executive."
Now you can make lots of money by making cool things.
Hacking bad tests is becoming less important as the link between
work and authority erodes. The erosion of that link is one of the
most important trends happening now, and we see its effects in
almost every kind of work people do. Startups are one of the most
visible examples, but we see much the same thing in writing. Writers
no longer have to submit to publishers and editors to reach readers;
now they can go direct.
The more I think about this question, the more optimistic I get.
This seems one of those situations where we don't realize how much
something was holding us back until it's eliminated. And I can
foresee the whole bogus edifice crumbling. Imagine what happens as
more and more people start to ask themselves if they want to win
by hacking bad tests, and decide that they don't. The kinds of
work where you win by hacking bad tests will be starved of talent,
and the kinds where you win by doing good work will see an influx
of the most ambitious people. And as hacking bad tests shrinks in
importance, education will evolve to stop training us to do it.
Imagine what the world could look like if that happened.
This is not just a lesson for individuals to unlearn, but one for
society to unlearn, and we'll be amazed at the energy that's liberated
when we do.
Notes
[1] If using tests only to measure learning sounds impossibly
utopian, that is already the way things work at Lambda School.
Lambda School doesn't have grades. You either graduate or you don't.
The only purpose of tests is to decide at each stage of the curriculum
whether you can continue to the next. So in effect the whole school
is pass/fail.
[2] If the final exam consisted of a long conversation with the
professor, you could prepare for it by reading good books on medieval
history. A lot of the hackability of tests in schools is due to the
fact that the same test has to be given to large numbers of students.
[3] Learning is the naive algorithm for getting good grades.
[4] Hacking has multiple senses. There's a narrow sense in which
it means to compromise something. That's the sense in which one
hacks a bad test. But there's another, more general sense, meaning
to find a surprising solution to a problem, often by thinking
differently about it. Hacking in this sense is a wonderful thing.
And indeed, some of the hacks people use on bad tests are impressively
ingenious; the problem is not so much the hacking as that, because
the tests are hackable, they don't test what they're meant to.
[5] The people who pick startups at Y Combinator are similar to
admissions officers, except that instead of being arbitrary, their
acceptance criteria are trained by a very tight feedback loop. If
you accept a bad startup or reject a good one, you will usually know it
within a year or two at the latest, and often within a month.
[6] I'm sure admissions officers are tired of reading applications
from kids who seem to have no personality beyond being willing to
seem however they're supposed to seem to get accepted. What they
don't realize is that they are, in a sense, looking in a mirror.
The lack of authenticity in the applicants is a reflection of the
arbitrariness of the application process. A dictator might just as
well complain about the lack of authenticity in the people around
him.
[7] By good work, I don't mean morally good, but good in the sense
in which a good craftsman does good work.
[8] There are borderline cases where it's hard to say which category
a test falls in. For example, is raising venture capital like college
admissions, or is it like selling to a customer?
[9] Note that a good test is merely one that's unhackable. Good
here doesn't mean morally good, but good in the sense of working
well. The difference between fields with bad tests and good ones
is not that the former are bad and the latter are good, but that
the former are bogus and the latter aren't. But those two measures
are not unrelated. As Tara Ploughman said, the path from good to
evil goes through bogus.
[10] People who think the recent increase in economic inequality is
due to changes in tax policy seem very naive to anyone with experience
in startups. Different people are getting rich now than used to,
and they're getting much richer than mere tax savings could make
them.
[11] Note to tiger parents: you may think you're training your kids
to win, but if you're training them to win by hacking bad tests,
you are, as parents so often do, training them to fight the last
war.
Thanks to Austen Allred, Trevor Blackwell, Patrick Collison,
Jessica Livingston, Robert Morris, and Harj Taggar for reading
drafts of this.
@X_FedericoX @growing_daniel When they were having their pictures taken.
If you want to feel hopeful about the future, this is a great account to follow.
@levelerai @growing_daniel The best people aren't looking for jobs.
@ahistoryinart Somerville and Ross write about it in the Irish RM.
@nizzyabi This is the trough of sorrow.
@growing_daniel If suits made you think better, people would put them on when they needed to solve hard problems. But in fact it's the opposite. When you need to solve a hard problem, you wear your most comfortable, least constraining clothes.
@RepThomasMassie You now have a national reputation.
@richardmcj Whoah, this must be a record for depth.
@thewillbaron If there's one thing history shows, it's that persecuting comedians who mock the head of state is _always_ on the wrong side of history.
@jessegenet Not giving a company your money is a very effective way to make a point about your opinion of them.
@whatifalthist Your children.
@johnmsides Already there.
How many Olympic medallists even know why one would bite a medal?
I bet few do. And for the rest it must seem such a bizarrely pointless thing to do.
Occasionally I check the profiles of random people who reply to me, and often 20 out of 20 of the last tweets they've posted are about politics. They can't all be bots, and yet how can there be so many people who have nothing to say about any other topic?
@NNunnelee I didn't at the time, but I was willing to risk being early if it helped increase the likelihood that he'd withdraw.
This aged well.
@mattyglesias I don't think he's lying for personal gain. I just think his opinions have extremely high variability.
@MattHasTweets_ You need more fiber in your diet.
@GarettJones @JohnHCochrane Don't you think stock prices are simply a stick-slip phenomenon?
https://t.co/ZkTRDzggRp
@larper69420 He would like that picture.
Me: How much salt do you put in your tomato sauce?
Jessica: Not too much. But not too little.
Me: How long do you cook it for?
Jessica: Not too long.
This monster parsnip from our garden ended up yielding three dinners and a lunch. https://t.co/N9ZHZBYv4A
We developed a new technique for measuring the boys' heights accurately: we make them put their heels on a piece of paper, and if Jessica can pull it out, we know they're cheating.
Hard to say what's more striking, the amateurishness or the brutality.
Something that's obvious in retrospect but I only noticed after years as a primary school parent: kids who are bullies or assholes tend to have parents who are too, and this often makes it hard for the administrators to keep a lid on bad behavior.
A rare case where you don't want the logo to be too legible. https://t.co/e17vb2NkBZ
Google search autocomplete presents an alarming picture of its users. https://t.co/c8x7u12FuL
Very briefly I got a version of GMail where one email in my inbox *had* to be selected, even if there wasn't one I wanted to focus on. It was unbearably annoying. Then the feature just disappeared. Anyone else see it? Do I dare to hope it's gone?
Jessica's on the train home. What will she want for dinner? Black beans, I decided. So I made black beans. Then I checked my email and found a message from Jessica saying she'll be home soon and would I make black beans for dinner?
I'm reading Thomson's History of Chemistry, written in 1830, and it's all the more interesting because they're still just beginning to figure things out.
@Carson By their support for autocratic leaders killing large numbers of people in neighboring countries.
It's alarming to think how many people on Twitter would be supporting Germany if World War II were happening now. After Pearl Harbor nearly all the American ones would instantly go quiet, of course.
@VDHanson I can think of another way: for Russia to stop the invasion.
Why is Ukraine giving up practicable and Russia giving up not practicable? Just because Putin seems more unreasonable?
@sarmadgulzar Usually they've never consciously thought about it in those terms.
I often use this technique with founders who have some kind of specialized technical expertise. They're often genuinely surprised to learn that they're the best in the world at something.
Jessica: Sometimes I'm not sure I should call myself the social radar.
Me: Of course you should. You're the best at it.
Jessica: But I can be fooled. I'm not perfect.
Me: Is anyone else better at it?
Jessica: Probably.
Me: Do you know anyone who's better?
Jessica: No...
Newton was 46 in this picture. The Principia had been published 2 years before. https://t.co/jzo3bQ4fHq
Trick I discovered in England: when you're stuck behind something driving slow, open all your windows, and it will at least feel like you're driving faster.
If I'd commissioned this portrait, I'd be pretty unhappy about where the artist put my knees. https://t.co/NiIU7hXHe8
@rootsofprogress @foresightinst @HumanProgress @TheIHS @IFP @WorksInProgMag @patrickc @tylercowen @jasoncrawford @sapinker One day we will walk on the roofs of buildings designed by an AI Syd Mead.
@QualiaLogos Yesterday I saw someone riding a horse.
@cixliv There are still steam locomotives.
You know perfectly well what I mean.
Interesting data point about the date of Trump's mental model of the world. Television repairmen disappeared in the 1990s. https://t.co/ChzdntWCuF
@fentasyl Yeah, that's the other thing that has made the site less interesting.
@cortesi Ugh, really?
@Joe_0_ Balanced is net impoverished though, because the most interesting people tend to be on the left politically.
@PaulJeffries I think one reason they aren't willing to explain themselves is that they're not articulate enough to. They believe what they believe and that's the end of the story.
@__tzs I agree Twitter is more politically diverse now. The problem is that it's impoverished in many other topics, because the experts on these topics were disproportionately on the left. So unless you care mostly about politics (which I don't) this is a net loss.
@michael_nielsen In each room, ask which thing you'd have if they'd let you take one for free.
@nate_hannon Interesting. One should never write him off.
@rauchg Have you heard of Syd Mead?
@raahilgadhoke I don't know what high deterministic need is.
@mayacfounder Maybe, but conviction is a terrible predictor of how well a startup will do. There is an infinite supply of (usually single-founder) startups with unshakeable conviction about bad ideas.
@nwbotz Definitely not.
@sebo_gm Not even that.
@HughTang87 As I just said, it's the founders that made them stand out, not the problems that they're solving.
@p_e_cooper I said explicitly in the second tweet in the thread that it isn't.
Every experienced investor already knows this. So if you want to start a startup to work on a non-AI idea, go ahead. If you're good, good investors will see it, and those are the only ones you want to convince anyway.
The lesson to take from this is not that AI is unimportant (it's very important), but that the founders matter more than the idea. The founders are the best predictor of how a company will do, not the industry it's in.
I haven't met all the startups in the current YC batch yet, but the two most impressive companies that I've seen so far are not working on AI.
@Chris_arnade Intellectuals always think that as people get more time they'll spend it the way intellectuals would.
@LandsknechtPike Thank you, I just bought a copy.
@CompSciFact Languages shouldn't enforce levels of abstraction.
@APompliano Every national leader says that. What people are upset about are the ham-fisted things he's actually doing.
This gave Jessica one of her incapacitating fits of laughter.
@ianbremmer Brexit at least exempts the UK from regulation by the EU. That could turn out to be a net win on account of AI regulation alone. Whereas the tariffs are a pure own goal.
@planetmcd @CoreyWriting A harder SAT makes it easier for admissions officers to select applicants on the basis of intelligence and harder for them to hide it when they don't. That yields a smarter student body, and smarter students choose harder subjects.
This aged unfortunately well.
@snowmaker Sounds a lot like what used to happen with new microprocessors in the 1990s. A new processor would ship and suddenly your slow software was fast.
@roundorbit @shw1nm @hackernews Hardly ever.
@garrytan Prediction: Once all the parents in Palo Alto realize this has happened, it will get reversed very quickly.
Palo Alto parents are just about the last people in the world to tolerate something like this.
@AssalRad @nytimes The real question is not why she spoke out, but why so many others have remained silent.
"We have lifesaving supplies ready, now, at the borders. We can save hundreds of thousands of survivors."
"But Israel denies us access."
@paulmidler @BillAckman I looked it up, and the Clinton Foundation was established after Clinton left office.
@paulmidler @BillAckman How much of this money was donated while Clinton was in office?
@BillAckman If he were willing to treat the plane as if it were actually a gift to the DoD, meaning it becomes government property that they manage according to their usual procedures, then it wouldn't be such a problem. It would still be dubious, but not a bribe.
@BillAckman It would be fine if they just wanted to give us something. The dubious part is that it's supposed to be transferred to his library after he leaves office instead of remaining part of the US fleet. That makes it more of a bribe for him personally than a gift to the DoD.
@migueldeicaza Apple's being the final step.
@cremieuxrecueil It's a sign of Twitter's intellectual health that you can post this politically incorrect but in fact deeply interesting thread now. In 2020 it would have provoked the mother of all mobs.
@JoJoFromJerz Careful, or you'll be detained and questioned next time you enter the country.
Journalists don't like Occam's Razor, because it implies that events have more boring causes than the ones they'd like to write about.
@urandomd @garrytan I think what motivates it is British culture. Kindness is more prized here.
@ESYudkowsky @garrytan Our kids' school in England teaches it very successfully. The teachers teach by example, they're nearly all genuinely kind.
It's a lot of the reason we stayed here. Perhaps the single biggest reason in fact.
@garrytan I think they should put kindness first. Especially primary schools.
@hamy_ptran I never said that. I'd never add adjectives to "trust your gut." It's already a metaphor.
@christiancooper I think judging from the armor and fortifications and the drawing that it's about 100 years earlier. https://t.co/LR7zJwC8EQ
@cperciva Not to that specifically, but I did find it strange that he had decided to begin with a kind of meditation session to relax the founders, and that as a result everyone in the audience was asleep.
@michael_nielsen I think that was the goal — get information without getting attention.
Can any medieval ship experts identify these ships? As far as I can tell they seem to be early carracks, from the mid to late 15th century. https://t.co/z5UFb6HbsG
@Mr88AG @HossamShabat Not targeting households?
https://t.co/hvNiiycyOm
@jsngr They care a medium amount about it. They're not obsessed with design like Apple, but they don't want things to look bad. There are a lot of companies in this category. In fact probably most companies are, including some of the biggest ones.
@JimDMiller @pitdesi The difference — and this is a very big difference — is that in political fundraising, the money goes to the campaign, not to the candidate himself.
@pitdesi It's true. A decade ago this would have seemed like dystopian fiction. It would have been part of a Simpsons plot.
@IAPonomarenko The real winner here is the grocer's apostrophe.
@Swavity @Liv_Boeree I'm 100% sure *they* know. They'd have made sure of that.
This picture is amazing. It could be titled "Ron Conway, the Giant of Silicon Valley"
@LandsknechtPike Doesn't seem any more meager than the way Anglo-Saxon lords would have lived in 800 AD.
@Liv_Boeree If you're 95% sure deliberate obfuscation went on, that implies a 95% chance it was a lab leak, because there would be no need to hide the source of the virus if it really emerged from a wet market.
@Liv_Boeree Do you feel sure Covid escaped from a lab?
@finmoorhouse And the fact that he was driving this change himself, and doubling every 18 mos seemed like a reasonable goal to aim for.
Believe it or not, it's usually wise to walk investors through the risks involved in your startup. Investors know there's risk. If there wasn't, your valuation would be billions of dollars right now. And if you're vague about the risks you seem glib, or worse still, clueless.
@mmay3r @cremieuxrecueil Presumably my model underestimates people's capacity for intellectual dishonesty. Which is not surprising, considering that I despise this quality and have always tried to avoid people who have it.
@MacaesBruno Few, I would think. The people who think it's a good thing use other words for it.
@megannunes The editing of the latest batch of Social Radars interviews, which is apparently taking longer than expected because they're videos.
@JillFilipovic It would be very useful to put into words the difference between good and bad engaging things — between books and addictive apps.
@JillFilipovic I agree with what I think you're trying to say. I just think you need a better definition. Even this isn't good enough, because books are in fact designed by some of the world’s smartest people to capture your attention for as long as possible.
@JillFilipovic Do books count as devices designed to capture their attention and keep them as sedentary and indoors as possible? Because it sounds like that description covers books.
"I'm not going to panic now. I'll see how things go and then panic first thing tomorrow."
— Jessica
@rajatsuri People still play chess.
@cremieuxrecueil Isn't this strange? Even after all these years it still surprises me. People just invent ideas to attribute to you, and then attack you for them.
@jamesrcole @mattyglesias Probably what he means is that a lot of government spending is entitlements. I wouldn't have disputed that.
@harris I don't know. I don't think I'd ever advise a startup to delay making money in order to please investors. Investors are fickle idiots. You can't let them be your compass.
@typesfast Medieval Technology and Social Change
The Copernican Revolution
Life in the English Country House
Painting and Experience in Fifteenth Century Italy
Anabasis
The Quest for El Cid
The World We Have Lost
@pavan_rikkula I can't write actual paragraphs without it, but brief notes feel different.
@whitegoldsword Do you mean the first patented product in some specific sense? Because there were many others patented before this.
@Ricois3 @elonmusk Every superlinear graph looks like that if you stretch it vertically. That's my point.
@mattyglesias "You can't drastically reduce government spending without hurting people" implies the federal government is very efficient, and we know that's not true. It's like a big company but more so.
@SpencerHakimian FWIW that's a bad heuristic for early stage investing.
I'm taking some notes about AI and for some reason I find myself using all lowercase...
@MisaDev4 @jesslivingston @chafkin I actually understood that.
"She wasn’t looking for the next killer product, though. She was looking for people."
https://t.co/iNNewXj25H
One way Timex made their watches cheap was to cut the retail markup in half. Jewelers resisted, so they sold their watches off racks in drugstores.
Something I told 17 yo: Till their early 20s most people are so completely incapable of cooking that if you can make even basic things like pasta and scrambled eggs you'll seem to your friends like a brilliant cook.
Unexpected occupational hazard: I often walk with founders while doing office hours, and today I talked to one guy for so long about potential startup ideas that we must have walked several miles.
There's a kind of feature that gets used more by accident than on purpose, and for some reason splitting screens is this kind of feature, for me at least. MacOS, vim, and now Chrome all have screen-splitting, and I only ever do it by accident.
I used to think woke mobs would be the death of Twitter. Then it seemed like right-wing goons would be. But cheery, vapid AI-generated replies seem more dangerous than either of them.
A lot of having taste about something is just caring about it enough to be honest with yourself, so that you can get past "I like what I like" to "Is this actually good?"
When I have to rewrite an essay and I know I'll want to reuse part of the previous version, I usually retype it instead of copying and pasting. I don't worry about losing anything, because I usually have it almost memorized. But I may write it a little better the second time.
This is a big deal. It's like Stripe but for moving money in and out of companies. You just call the API and Modern Treasury does the rest.
Something I told 17 yo and 13 yo: There are things on the internet that you can't unsee, and it truly is better never to have seen them.
(I don't think they believed me, but I tried.)
@grace_za This is a very important point.
@JuanIsidro You're conflating people and work. People themselves aren't commodities. You can't legally buy them. But their work is. You can buy that.
@CburgesCliff Did you mean that as a joke? Because that performance is a byword in England.
@lamg_dev Does talking of stealing someone's watch mean that watch ownership has a moral dimension?
@PSkinnerTech As soon as? Technology has been replacing human labor for millennia.
@sarmadgulzar By commodity I mean something people pay for. People will pay you for work, so it is one. If you wanted to prevent this from happening, you'd have a really hard time doing it.
@lamg_dev Well that's not true. It's possible to use tricks to prevent suppliers of any commodity from getting market price for it. Stealing it from them, for example. Slavery is one of the tricks used to prevent suppliers of labor from getting market price.
@smalera Or at least use a picture of Phil Mickelson.
@camhahu @smalera When I was a kid people used to stop me on the street and say I looked like him.
@smalera Why do you guys use these freaky looking AI generated images? There are so many real ones you could use.
@cullenroche It's not so much six months apart as one election apart.
@remusrisnov Yes. If you're making something for kids or families, for example.
@billybinion I don't even think it's political theater. I think the employees making these decisions are simply incompetent and insufficiently supervised.
@Signalman23 Not if you're disciplined. Tony Xu spends a lot of time with his family, and his startup is doing great.
@avoanjali Yes! That's the optimal solution.
Now people can say what they wish done, but it's true that they don't say it in a formal language, and I doubt they could.
@def__ai Of course. I have many times.
No one ever, when they're old, feels they spent too much time with their kids. But there are plenty of people who feel they spent too little, and this must be the bitterest kind of regret.
A founder asked my advice about combining a startup with having small children. I told him family is more important than business, and to put his kids first and cram the startup into the remaining time.
@0xfriedrich Airbnb is one.
Now that many of the top American universities have gone back to requiring standardized tests, which still don't? That might be a useful index of where the rot is deepest.
@overtquail @cremieuxrecueil The way they've always wanted to: they chose the people the admissions officers liked the most.
@AlexShulepov7 They never do.
"These are the good old days."
— Carly Simon
Stopped in to buy some food in a local shop yesterday. Felt strangely relaxed. Later I realized why: because they weren't about to close. Shops in the English countryside are always either closed, or about to.
@EricsElectrons @ValaAfshar If I ask you if you've fed the cat, you're not clever unless you reply in a way that an ordinary person couldn't fully understand?
@ValaAfshar Well that's wrong. What if the clever man is talking about something mundane?
@TravisseHansen @BrennanWoodruff @mateohh I'm careful about claiming anything in the physical world is infinite.
@BrennanWoodruff @mateohh Human wants are effectively infinite in the short term.
@mateohh Sorry, but I can't do better than inevitable in the thought-outness department.
@robinhanson Mafia doesn't imply monopoly.
@alexandreforget The labels.
@rickasaurus At least it's not hardware or music.
@josephjojoe Seems to be a lot easier now.
When people say "Next time I'm not going to start an x startup," two common values of x are "hardware" and "music". But for completely different reasons: hardware is intrinsically difficult, and the music industry is mafia.
I'm not saying this is false, but CEOs in unsexy businesses have a strong incentive to emphasize how much they're using AI. We're an AI stock too!
@davidshor Does this rule only apply to TV and radio ads?
@millepun @PeterDiamandis Not many, because to get into the richest 100 you usually have to have been working on the original company for a long time.
@davidshor That's strange. Why are candidates' costs lower?
@seanm_sf @rickasaurus What Made Lisp Different: https://t.co/q6TSQNEede
Life is Short
January 2016
Life is short, as everyone knows. When I was a kid I used to wonder
about this. Is life actually short, or are we really complaining
about its finiteness? Would we be just as likely to feel life was
short if we lived 10 times as long?
Since there didn't seem any way to answer this question, I stopped
wondering about it. Then I had kids. That gave me a way to answer
the question, and the answer is that life actually is short.
Having kids showed me how to convert a continuous quantity, time,
into discrete quantities. You only get 52 weekends with your 2 year
old. If Christmas-as-magic lasts from say ages 3 to 10, you only
get to watch your child experience it 8 times. And while it's
impossible to say what is a lot or a little of a continuous quantity
like time, 8 is not a lot of something. If you had a handful of 8
peanuts, or a shelf of 8 books to choose from, the quantity would
definitely seem limited, no matter what your lifespan was.
Ok, so life actually is short. Does it make any difference to know
that?
It has for me. It means arguments of the form "Life is too short
for x" have great force. It's not just a figure of speech to say
that life is too short for something. It's not just a synonym for
annoying. If you find yourself thinking that life is too short for
something, you should try to eliminate it if you can.
When I ask myself what I've found life is too short for, the word
that pops into my head is "bullshit." I realize that answer is
somewhat tautological. It's almost the definition of bullshit that
it's the stuff that life is too short for. And yet bullshit does
have a distinctive character. There's something fake about it.
It's the junk food of experience.
[1]
If you ask yourself what you spend your time on that's bullshit,
you probably already know the answer. Unnecessary meetings, pointless
disputes, bureaucracy, posturing, dealing with other people's
mistakes, traffic jams, addictive but unrewarding pastimes.
There are two ways this kind of thing gets into your life: it's
either forced on you, or it tricks you. To some extent you have to
put up with the bullshit forced on you by circumstances. You need
to make money, and making money consists mostly of errands. Indeed,
the law of supply and demand ensures that: the more rewarding some
kind of work is, the cheaper people will do it. It may be that
less bullshit is forced on you than you think, though. There has
always been a stream of people who opt out of the default grind and
go live somewhere where opportunities are fewer in the conventional
sense, but life feels more authentic. This could become more common.
You can do it on a smaller scale without moving. The amount of
time you have to spend on bullshit varies between employers. Most
large organizations (and many small ones) are steeped in it. But
if you consciously prioritize bullshit avoidance over other factors
like money and prestige, you can probably find employers that will
waste less of your time.
If you're a freelancer or a small company, you can do this at the
level of individual customers. If you fire or avoid toxic customers,
you can decrease the amount of bullshit in your life by more than
you decrease your income.
But while some amount of bullshit is inevitably forced on you, the
bullshit that sneaks into your life by tricking you is no one's
fault but your own. And yet the bullshit you choose may be harder
to eliminate than the bullshit that's forced on you. Things that
lure you into wasting your time have to be really good at
tricking you. An example that will be familiar to a lot of people
is arguing online. When someone
contradicts you, they're in a sense attacking you. Sometimes pretty
overtly. Your instinct when attacked is to defend yourself. But
like a lot of instincts, this one wasn't designed for the world we
now live in. Counterintuitive as it feels, it's better most of
the time not to defend yourself. Otherwise these people are literally
taking your life.
[2]
Arguing online is only incidentally addictive. There are more
dangerous things than that. As I've written before, one byproduct
of technical progress is that things we like tend to become more
addictive. Which means we will increasingly have to make a conscious
effort to avoid addictions: to stand outside ourselves and ask "is
this how I want to be spending my time?"
As well as avoiding bullshit, one should actively seek out things
that matter. But different things matter to different people, and
most have to learn what matters to them. A few are lucky and realize
early on that they love math or taking care of animals or writing,
and then figure out a way to spend a lot of time doing it. But
most people start out with a life that's a mix of things that
matter and things that don't, and only gradually learn to distinguish
between them.
For the young especially, much of this confusion is induced by the
artificial situations they find themselves in. In middle school and
high school, what the other kids think of you seems the most important
thing in the world. But when you ask adults what they got wrong
at that age, nearly all say they cared too much what other kids
thought of them.
One heuristic for distinguishing stuff that matters is to ask
yourself whether you'll care about it in the future. Fake stuff
that matters usually has a sharp peak of seeming to matter. That's
how it tricks you. The area under the curve is small, but its shape
jabs into your consciousness like a pin.
The things that matter aren't necessarily the ones people would
call "important." Having coffee with a friend matters. You won't
feel later like that was a waste of time.
One great thing about having small children is that they make you
spend time on things that matter: them. They grab your sleeve as
you're staring at your phone and say "will you play with me?" And
odds are that is in fact the bullshit-minimizing option.
If life is short, we should expect its shortness to take us by
surprise. And that is just what tends to happen. You take things
for granted, and then they're gone. You think you can always write
that book, or climb that mountain, or whatever, and then you realize
the window has closed. The saddest windows close when other people
die. Their lives are short too. After my mother died, I wished I'd
spent more time with her. I lived as if she'd always be there.
And in her typical quiet way she encouraged that illusion. But an
illusion it was. I think a lot of people make the same mistake I
did.
The usual way to avoid being taken by surprise by something is to
be consciously aware of it. Back when life was more precarious,
people used to be aware of death to a degree that would now seem a
bit morbid. I'm not sure why, but it doesn't seem the right answer
to be constantly reminding oneself of the grim reaper hovering at
everyone's shoulder. Perhaps a better solution is to look at the
problem from the other end. Cultivate a habit of impatience about
the things you most want to do. Don't wait before climbing that
mountain or writing that book or visiting your mother. You don't
need to be constantly reminding yourself why you shouldn't wait.
Just don't wait.
I can think of two more things one does when one doesn't have much
of something: try to get more of it, and savor what one has. Both
make sense here.
How you live affects how long you live. Most people could do better.
Me among them.
But you can probably get even more effect by paying closer attention
to the time you have. It's easy to let the days rush by. The
"flow" that imaginative people love so much has a darker cousin
that prevents you from pausing to savor life amid the daily slurry
of errands and alarms. One of the most striking things I've read
was not in a book, but the title of one: James Salter's Burning
the Days.
It is possible to slow time somewhat. I've gotten better at it.
Kids help. When you have small children, there are a lot of moments
so perfect that you can't help noticing.
It does help too to feel that you've squeezed everything out of
some experience. The reason I'm sad about my mother is not just
that I miss her but that I think of all the things we could have
done that we didn't. My oldest son will be 7 soon. And while I
miss the 3 year old version of him, I at least don't have any regrets
over what might have been. We had the best time a daddy and a 3
year old ever had.
Relentlessly prune bullshit, don't wait to do things that matter,
and savor the time you have. That's what you do when life is short.
Notes
[1]
At first I didn't like it that the word that came to mind was
one that had other meanings. But then I realized the other meanings
are fairly closely related. Bullshit in the sense of things you
waste your time on is a lot like intellectual bullshit.
[2]
I chose this example deliberately as a note to self. I get
attacked a lot online. People tell the craziest lies about me.
And I have so far done a pretty mediocre job of suppressing the
natural human inclination to say "Hey, that's not true!"
Thanks
to Jessica Livingston and Geoff Ralston for reading drafts
of this.
One of my favorite videos. https://t.co/QFkEQBqGgz
@noclador They only have to pretend to care about his wishes though.
@AliceFromQueens If you consistently uphold the same principles and the government swings back and forth from left to right, then you'll seem to be alternately on the right and the left.
@Austen @gauntletai Do Lambda School's haters include Gauntlet in what they hate?
@tarksmarks44 @Big_Picture_89 This is how counterexamples work.
@jbensamo @BasicOptimism In other words, what everyone who criticizes Israeli policy is accused of.
@OpinionsMove Did Rumeysa Ozturk threaten or harass other students?
@BasicOptimism What law did Rumeysa Ozturk break?
@Big_Picture_89 In what way did Rumeysa Ozturk display hostility toward the US?
@0xjck They never truly supported freedom of speech.
Alien Truth
October 2022
If there were intelligent beings elsewhere in the universe, they'd
share certain truths in common with us. The truths of mathematics
would be the same, because they're true by definition. Ditto for
the truths of physics; the mass of a carbon atom would be the same
on their planet. But I think we'd share other truths with aliens
besides the truths of math and physics, and that it would be
worthwhile to think about what these might be.
For example, I think we'd share the principle that a controlled
experiment testing some hypothesis entitles us to have proportionally
increased belief in it. It seems fairly likely, too, that it would
be true for aliens that one can get better at something by practicing.
We'd probably share Occam's razor. There doesn't seem anything
specifically human about any of these ideas.
We can only guess, of course. We can't say for sure what forms
intelligent life might take. Nor is it my goal here to explore that
question, interesting though it is. The point of the idea of alien
truth is not that it gives us a way to speculate about what forms
intelligent life might take, but that it gives us a threshold, or
more precisely a target, for truth. If you're trying to find the
most general truths short of those of math or physics, then presumably
they'll be those we'd share in common with other forms of intelligent
life.
Alien truth will work best as a heuristic if we err on the side of
generosity. If an idea might plausibly be relevant to aliens, that's
enough. Justice, for example. I wouldn't want to bet that all
intelligent beings would understand the concept of justice, but I
wouldn't want to bet against it either.
The idea of alien truth is related to Erdos's idea of God's book.
He used to describe a particularly good proof as being in God's
book, the implication being (a) that a sufficiently good proof was
more discovered than invented, and (b) that its goodness would be
universally recognized. If there's such a thing as alien truth,
then there's more in God's book than math.
What should we call the search for alien truth? The obvious choice
is "philosophy." Whatever else philosophy includes, it should
probably include this. I'm fairly sure Aristotle would have thought
so. One could even make the case that the search for alien truth
is, if not an accurate description
of
philosophy, a good
definition
for
it. I.e. that it's what people who call
themselves philosophers should be doing, whether or not they currently
are. But I'm not wedded to that; doing it is what matters, not what
we call it.
We may one day have something like alien life among us in the form
of AIs. And that may in turn allow us to be precise about what
truths an intelligent being would have to share with us. We might
find, for example, that it's impossible to create something we'd
consider intelligent that doesn't use Occam's razor. We might one
day even be able to prove that. But though this sort of research
would be very interesting, it's not necessary for our purposes, or
even the same field; the goal of philosophy, if we're going to call it that, would be
to see what ideas we come up with using alien truth as a target,
not to say precisely where the threshold of it is. Those two questions might one
day converge, but they'll converge from quite different directions,
and till they do, it would be too constraining to restrict ourselves
to thinking only about things we're certain would be alien truths.
Especially since this will probably be one of those areas where the
best guesses turn out to be surprisingly close to optimal. (Let's
see if that one does.)
Whatever we call it, the attempt to discover alien truths would be
a worthwhile undertaking. And curiously enough, that is itself
probably an alien truth.
Thanks
to Trevor Blackwell, Greg Brockman,
Patrick Collison, Robert Morris, and Michael Nielsen for reading drafts of this.
How to Think for Yourself
November 2020
There are some kinds of work that you can't do well without thinking
differently from your peers. To be a successful scientist, for
example, it's not enough just to be correct. Your ideas have to be
both correct and novel. You can't publish papers saying things other
people already know. You need to say things no one else has realized
yet.
The same is true for investors. It's not enough for a public market
investor to predict correctly how a company will do. If a lot of
other people make the same prediction, the stock price will already
reflect it, and there's no room to make money. The only valuable
insights are the ones most other investors don't share.
You see this pattern with startup founders too. You don't want to
start a startup to do something that everyone agrees is a good idea,
or there will already be other companies doing it. You have to do
something that sounds to most other people like a bad idea, but
that you know isn't: like writing software for a tiny computer
used by a few thousand hobbyists, or starting a site to let people
rent airbeds on strangers' floors.
Ditto for essayists. An essay that told people things they already
knew would be boring. You have to tell them something new.
But this pattern isn't universal. In fact, it doesn't hold for most
kinds of work. In most kinds of work (to be an administrator, for
example) all you need is the first half. All you need is to be
right. It's not essential that everyone else be wrong.
There's room for a little novelty in most kinds of work, but in
practice there's a fairly sharp distinction between the kinds of
work where it's essential to be independent-minded, and the kinds
where it's not.
I wish someone had told me about this distinction when I was a kid,
because it's one of the most important things to think about when
you're deciding what kind of work you want to do. Do you want to
do the kind of work where you can only win by thinking differently
from everyone else? I suspect most people's unconscious mind will
answer that question before their conscious mind has a chance to.
I know mine does.
Independent-mindedness seems to be more a matter of nature than
nurture. Which means if you pick the wrong type of work, you're
going to be unhappy. If you're naturally independent-minded, you're
going to find it frustrating to be a middle manager. And if you're
naturally conventional-minded, you're going to be sailing into a
headwind if you try to do original research.
One difficulty here, though, is that people are often mistaken about
where they fall on the spectrum from conventional- to independent-minded.
Conventional-minded people don't like to think of themselves as
conventional-minded. And in any case, it genuinely feels to them
as if they make up their own minds about everything. It's just a
coincidence that their beliefs are identical to their peers'. And
the independent-minded, meanwhile, are often unaware how different
their ideas are from conventional ones, at least till they state
them publicly.
[1]
By the time they reach adulthood, most people know roughly how smart
they are (in the narrow sense of ability to solve pre-set problems),
because they're constantly being tested and ranked according to it.
But schools generally ignore independent-mindedness, except to the
extent they try to suppress it. So we don't get anything like the
same kind of feedback about how independent-minded we are.
There may even be a phenomenon like Dunning-Kruger at work, where
the most conventional-minded people are confident that they're
independent-minded, while the genuinely independent-minded worry
they might not be independent-minded enough.
___________
Can you make yourself more independent-minded? I think so. This
quality may be largely inborn, but there seem to be ways to magnify
it, or at least not to suppress it.
One of the most effective techniques is one practiced unintentionally
by most nerds: simply to be less aware what conventional beliefs
are. It's hard to be a conformist if you don't know what you're
supposed to conform to. Though again, it may be that such people
already are independent-minded. A conventional-minded person would
probably feel anxious not knowing what other people thought, and
make more effort to find out.
It matters a lot who you surround yourself with. If you're surrounded
by conventional-minded people, it will constrain which ideas you
can express, and that in turn will constrain which ideas you have.
But if you surround yourself with independent-minded people, you'll
have the opposite experience: hearing other people say surprising
things will encourage you to, and to think of more.
Because the independent-minded find it uncomfortable to be surrounded
by conventional-minded people, they tend to self-segregate once
they have a chance to. The problem with high school is that they
haven't yet had a chance to. Plus high school tends to be an
inward-looking little world whose inhabitants lack confidence, both
of which magnify the forces of conformism. So high school is
often a bad time for the independent-minded. But there is some
advantage even here: it
teaches you what to avoid. If you later find yourself in a situation
that makes you think "this is like high school," you know you should
get out.
[2]
Another place where the independent- and conventional-minded are
thrown together is in successful startups. The founders and early
employees are almost always independent-minded; otherwise the startup
wouldn't be successful. But conventional-minded people greatly
outnumber independent-minded ones, so as the company grows, the
original spirit of independent-mindedness is inevitably diluted.
This causes all kinds of problems besides the obvious one that the
company starts to suck. One of the strangest is that the founders
find themselves able to speak more freely with founders of other
companies than with their own employees.
[3]
Fortunately you don't have to spend all your time with independent-minded
people. It's enough to have one or two you can talk to regularly.
And once you find them, they're usually as eager to talk as you
are; they need you too. Although universities no longer have the
kind of monopoly they used to have on education, good universities
are still an excellent way to meet independent-minded people. Most
students will still be conventional-minded, but you'll at least
find clumps of independent-minded ones, rather than the near zero
you may have found in high school.
It also works to go in the other direction: as well as cultivating
a small collection of independent-minded friends, to try to meet
as many different types of people as you can. It will decrease the
influence of your immediate peers if you have several other groups
of peers. Plus if you're part of several different worlds, you can
often import ideas from one to another.
But by different types of people, I don't mean demographically
different. For this technique to work, they have to think differently.
So while it's an excellent idea to go and visit other countries,
you can probably find people who think differently right around the
corner. When I meet someone who knows a lot about something unusual
(which includes practically everyone, if you dig deep enough), I
try to learn what they know that other people don't. There are
almost always surprises here. It's a good way to make conversation
when you meet strangers, but I don't do it to make conversation.
I really want to know.
You can expand the source of influences in time as well as space,
by reading history. When I read history I do it not just to learn
what happened, but to try to get inside the heads of people who
lived in the past. How did things look to them? This is hard to do,
but worth the effort for the same reason it's worth travelling far
to triangulate a point.
You can also take more explicit measures to prevent yourself from
automatically adopting conventional opinions. The most general is
to cultivate an attitude of skepticism. When you hear someone say
something, stop and ask yourself "Is that true?" Don't say it out
loud. I'm not suggesting that you impose on everyone who talks to
you the burden of proving what they say, but rather that you take
upon yourself the burden of evaluating what they say.
Treat it as a puzzle. You know that some accepted ideas will later
turn out to be wrong. See if you can guess which. The end goal is
not to find flaws in the things you're told, but to find the new
ideas that had been concealed by the broken ones. So this game
should be an exciting quest for novelty, not a boring protocol for
intellectual hygiene. And you'll be surprised, when you start asking
"Is this true?", how often the answer is not an immediate yes. If
you have any imagination, you're more likely to have too many leads
to follow than too few.
More generally your goal should be not to let anything into your
head unexamined, and things don't always enter your head in the
form of statements. Some of the most powerful influences are implicit.
How do you even notice these? By standing back and watching how
other people get their ideas.
When you stand back at a sufficient distance, you can see ideas
spreading through groups of people like waves. The most obvious are
in fashion: you notice a few people wearing a certain kind of shirt,
and then more and more, until half the people around you are wearing
the same shirt. You may not care much what you wear, but there are
intellectual fashions too, and you definitely don't want to participate
in those. Not just because you want sovereignty over your own
thoughts, but because unfashionable ideas are disproportionately likely to lead somewhere interesting.
The best place to find undiscovered ideas is where no one else is
looking.
[4]
___________
To go beyond this general advice, we need to look at the internal
structure of independent-mindedness: the individual muscles
we need to exercise, as it were. It seems to me that it has three
components: fastidiousness about truth, resistance to being told
what to think, and curiosity.
Fastidiousness about truth means more than just not believing things
that are false. It means being careful about degree of belief. For
most people, degree of belief rushes unexamined toward the extremes:
the unlikely becomes impossible, and the probable becomes certain.
[5]
To the independent-minded, this seems unpardonably sloppy.
They're willing to have anything in their heads, from highly
speculative hypotheses to (apparent) tautologies, but on subjects
they care about, everything has to be labelled with a carefully
considered degree of belief.
[6]
The independent-minded thus have a horror of ideologies, which
require one to accept a whole collection of beliefs at once, and
to treat them as articles of faith. To an independent-minded person
that would seem revolting, just as it would seem to someone fastidious
about food to take a bite of a submarine sandwich filled with a
large variety of ingredients of indeterminate age and provenance.
Without this fastidiousness about truth, you can't be truly
independent-minded. It's not enough just to have resistance to being
told what to think. Those kinds of people reject conventional ideas
only to replace them with the most random conspiracy theories. And
since these conspiracy theories have often been manufactured to
capture them, they end up being less independent-minded than ordinary
people, because they're subject to a much more exacting master than
mere convention.
[7]
Can you increase your fastidiousness about truth? I would think so.
In my experience, merely thinking about something you're fastidious
about causes that fastidiousness to grow. If so, this is one of
those rare virtues we can have more of merely by wanting it. And
if it's like other forms of fastidiousness, it should also be
possible to encourage in children. I certainly got a strong dose
of it from my father.
[8]
The second component of independent-mindedness, resistance to being
told what to think, is the most visible of the three. But even this
is often misunderstood. The big mistake people make about it is to
think of it as a merely negative quality. The language we use
reinforces that idea. You're unconventional. You don't care what other people think. But it's not just a kind of immunity. In
the most independent-minded people, the desire not to be told what
to think is a positive force. It's not mere skepticism, but an active delight in ideas that subvert the conventional wisdom, the more counterintuitive the better.
Some of the most novel ideas seemed at the time almost like practical
jokes. Think how often your reaction to a novel idea is to laugh.
I don't think it's because novel ideas are funny per se, but because
novelty and humor share a certain kind of surprisingness. But while
not identical, the two are close enough that there is a definite
correlation between having a sense of humor and being independent-minded, just as there is between being humorless and being conventional-minded.
[9]
I don't think we can significantly increase our resistance to being
told what to think. It seems the most innate of the three components
of independent-mindedness; people who have this quality as adults
usually showed all too visible signs of it as children. But if we
can't increase our resistance to being told what to think, we can
at least shore it up, by surrounding ourselves with other
independent-minded people.
The third component of independent-mindedness, curiosity, may be
the most interesting. To the extent that we can give a brief answer
to the question of where novel ideas come from, it's curiosity. That's
what people are usually feeling before having them.
In my experience, independent-mindedness and curiosity predict one
another perfectly. Everyone I know who's independent-minded is
deeply curious, and everyone I know who's conventional-minded isn't.
Except, curiously, children. All small children are curious. Perhaps
the reason is that even the conventional-minded have to be curious
in the beginning, in order to learn what the conventions are. Whereas
the independent-minded are the gluttons of curiosity, who keep
eating even after they're full.
[10]
The three components of independent-mindedness work in concert:
fastidiousness about truth and resistance to being told what to
think leave space in your brain, and curiosity finds new ideas to
fill it.
Interestingly, the three components can substitute for one another
in much the same way muscles can. If you're sufficiently fastidious
about truth, you don't need to be as resistant to being told what
to think, because fastidiousness alone will create sufficient gaps
in your knowledge. And either one can compensate for curiosity,
because if you create enough space in your brain, your discomfort
at the resulting vacuum will add force to your curiosity. Or curiosity
can compensate for them: if you're sufficiently curious, you don't
need to clear space in your brain, because the new ideas you discover
will push out the conventional ones you acquired by default.
Because the components of independent-mindedness are so interchangeable,
you can have them to varying degrees and still get the same result.
So there is not just a single model of independent-mindedness. Some
independent-minded people are openly subversive, and others are
quietly curious. They all know the secret handshake though.
Is there a way to cultivate curiosity? To start with, you want to
avoid situations that suppress it. How much does the work you're
currently doing engage your curiosity? If the answer is "not much,"
maybe you should change something.
The most important active step you can take to cultivate your
curiosity is probably to seek out the topics that engage it. Few
adults are equally curious about everything, and it doesn't seem
as if you can choose which topics interest you. So it's up to you
to find them. Or invent them, if necessary.
Another way to increase your curiosity is to indulge it, by
investigating things you're interested in. Curiosity is unlike
most other appetites in this respect: indulging it tends to increase
rather than to sate it. Questions lead to more questions.
Curiosity seems to be more individual than fastidiousness about
truth or resistance to being told what to think. To the degree
people have the latter two, they're usually pretty general, whereas
different people can be curious about very different things. So
perhaps curiosity is the compass here. Perhaps, if your goal is to
discover novel ideas, your motto should not be "do what you love"
so much as "do what you're curious about."
Notes
[1] One convenient consequence of the fact that no one identifies
as conventional-minded is that you can say what you like about
conventional-minded people without getting in too much trouble.
When I wrote "The Four Quadrants of Conformism" I expected a firestorm of rage from the
aggressively conventional-minded, but in fact it was quite muted.
They sensed that there was something about the essay that they
disliked intensely, but they had a hard time finding a specific
passage to pin it on.
[2] When I ask myself what in my life is like high school, the
answer is Twitter. It's not just full of conventional-minded people,
as anything its size will inevitably be, but subject to violent
storms of conventional-mindedness that remind me of descriptions
of Jupiter. But while it probably is a net loss to spend time there,
it has at least made me think more about the distinction between
independent- and conventional-mindedness, which I probably wouldn't
have done otherwise.
[3] The decrease in independent-mindedness in growing startups is
still an open problem, but there may be solutions.
Founders can delay the problem by making a conscious effort only
to hire independent-minded people. Which of course also has the
ancillary benefit that they have better ideas.
Another possible solution is to create policies that somehow disrupt
the force of conformism, much as control rods slow chain reactions,
so that the conventional-minded aren't as dangerous. The physical
separation of Lockheed's Skunk Works may have had this as a side
benefit. Recent examples suggest employee forums like Slack may not
be an unmitigated good.
The most radical solution would be to grow revenues without growing
the company. You think hiring that junior PR person will be cheap,
compared to a programmer, but what will be the effect on the average
level of independent-mindedness in your company? (The growth in
staff relative to faculty seems to have had a similar effect on
universities.) Perhaps the rule about outsourcing work that's not
your "core competency" should be augmented by one about outsourcing
work done by people who'd ruin your culture as employees.
Some investment firms already seem to be able to grow revenues
without growing the number of employees. Automation plus the ever
increasing articulation of the "tech stack" suggest this may one
day be possible for product companies.
[4] There are intellectual fashions in every field, but their
influence varies. One of the reasons politics, for example, tends
to be boring is that it's so extremely subject to them. The threshold
for having opinions about politics is much lower than the one for having
opinions about set theory. So while there are some ideas in politics,
in practice they tend to be swamped by waves of intellectual fashion.
[5] The conventional-minded are often fooled by the strength of
their opinions into believing that they're independent-minded. But
strong convictions are not a sign of independent-mindedness. Rather
the opposite.
[6] Fastidiousness about truth doesn't imply that an independent-minded
person won't be dishonest, but that he won't be deluded. It's sort
of like the definition of a gentleman as someone who is never
unintentionally rude.
[7] You see this especially among political extremists. They think
themselves nonconformists, but actually they're niche conformists.
Their opinions may be different from the average person's, but they
are often more influenced by their peers' opinions than the average
person's are.
[8] If we broaden the concept of fastidiousness about truth so that
it excludes pandering, bogusness, and pomposity as well as falsehood
in the strict sense, our model of independent-mindedness can expand
further into the arts.
[9] This correlation is far from perfect, though. Gödel and Dirac
don't seem to have been very strong in the humor department. But
someone who is both "neurotypical" and humorless is very likely to
be conventional-minded.
[10] Exception: gossip. Almost everyone is curious about gossip.
Thanks to Trevor Blackwell, Paul Buchheit, Patrick Collison, Jessica
Livingston, Robert Morris, Harj Taggar, and Peter Thiel for reading
drafts of this.
Italian Translation
@bscholl Does that mean we can now vote Netanyahu out?
@pickover No one with kids would restart at 10.
@ThorChiggins @BasedMikeLee The first time I wasn't sure if I could trust the NY Post story that reported this. It seemed best to be cautious. But since then more evidence has emerged.
@bamboo_master_m @BasedMikeLee And his Oklahoma voter registration as a Republican, among other evidence.
@BrettYokom @BasedMikeLee The obvious reason is that they were Democrats. The hit list in his car was entirely of Democrats.
@garrytan I spoke recently at the entrepreneurship club at a high school. I couldn't quite say it openly, but I wanted to tell them they should all just be in the programming club instead.
@GavinWax If what you mean is that startup founders should get instant green cards, I couldn't agree more. In fact I proposed this 16 years ago.
https://t.co/n99oYhbcUZ
@BasedMikeLee Boelter was a conservative who voted for Trump in the last election.
@kobysoto Do you not believe this picture is real? https://t.co/1Q0NX8B9hs
@Jon_O90_ @BarthOmondi @elonmusk Ok, sure. The reason Elon is posting this now is the Boelter killing.
Goodbye.
@rauchg "A great software engineer who’s now making a killer career in sales" is a fairly accurate description of a successful startup. Except the great engineer has to keep writing software too.
@davidsirota You won't tell people what your story is about even when they ask explicitly? That has to be a new world record for burying the lede.
@rodrakic Apparently they get it less.
Cigarette sales and lung cancer deaths are the same curve, shifted 25 years. https://t.co/F7WIPNmuWr
Cancer is caused by convenience stores.
I've seen a lot of speeches and press conferences where a politician's advisors had clearly told him beforehand "you have to convince everyone that you're x," but I hadn't previously seen ones where x was "not too old," and it's pretty depressing.
@davidsirota You have buried the lede. What's this story even about?
I used to find it disconcerting that the EU seemed to be ruled by faceless bureaucrats, but it turns out to be even more disconcerting when they have faces.
@Noahpinion I guessed 2017 and I was right!
@shaig Do I have to figure out what mistake you've made or distortion you've introduced into the data, or can you just save all our time and tell us?
@RSwynar Not yet but maybe soon.
12 yo asked what people do between when they start working and when they become famous. I told him they work super hard at whatever they'll later be famous for. That's how they become famous.
@JohnDCook @electricfutures I often produce sentences almost identical to ones I've written before. It feels a bit uncanny but I don't worry about it. If essays intersect, they intersect. If I made a new essay bend to avoid this, I'd be making it worse.
@mar_hendriks @JohnDCook That's presumably a byproduct of choosing the cheapest optimizations first though.
@whyvert @MungoManic Roman spears were made with points that bent so that they couldn't be thrown back at you.
@JohnDCook Whoah, I had the exact same reaction when you first tweeted this four years ago: https://t.co/mSqn2bFwzs
@JohnDCook I wonder if there is a way to formalize this insight.
In investing, beta is volatility. Describing a founder as "high beta" means they're either going to flame out or take over the world. And at YC that translates to yes.
Jessica just interviewed Alexandr Wang for The Social Radars. She was also one of the interviewers when he applied to YC in 2016. She went and looked at the partners' notes on that interview. All said the same thing: high beta.
@_cam__mac_ They've never been open. I'd be overwhelmed.
The title of the email is "Investor Update." Talk about underpromising and overdelivering; the email itself brings the news that the company is switching from web design to a nutritional drink mix. Well, it might work...
@TomCayman It's one of the ingredients in Old Bay.
That said, it's actually pretty good. The median American dinner would be improved by a liberal sprinkling of Old Bay. It's particularly good on fried eggs.
12 yo is obsessed with Old Bay seasoning. He makes me use it in everything. The US packaging is cagy about the ingredients ("spices"), but apparently in the UK they have to list them all, and it's basically all the spices in your spice cabinet mixed together.
@SamsungMobileUS That tweet was a clumsy mistake.
If you want your stuff to last, arrange for the Swiss to capture it.
@nikitabier It doesn't assume that users will give something a second try, just that there will be a second cohort. And there invariably is a second cohort if you launch fast enough, because if you launch fast enough the only users you can get are your friends and their friends.
@Austen I told them it's because the earth is finite and if no one died we'd run out of atoms.
@CompSciFact This isn't necessarily a sign of bad design. When you use a bottom-up approach to language design, you often get emergent features. Not that C++ did.
@thomastupchurch Ah. That's a special case. And honestly you won't be missing much.
Holding a Program in One's Head
August 2007
A good programmer working intensively on his own code can hold it
in his mind the way a mathematician holds a problem he's working
on. Mathematicians don't answer questions by working them out on
paper the way schoolchildren are taught to. They do more in their
heads: they try to understand a problem space well enough that they
can walk around it the way you can walk around the memory of the
house you grew up in. At its best programming is the same. You
hold the whole program in your head, and you can manipulate it at
will.
That's particularly valuable at the start of a project, because
initially the most important thing is to be able to change what
you're doing. Not just to solve the problem in a different way,
but to change the problem you're solving.
Your code is your understanding of the problem you're exploring.
So it's only when you have your code in your head that you really
understand the problem.
It's not easy to get a program into your head. If you leave a
project for a few months, it can take days to really understand it
again when you return to it. Even when you're actively working on
a program it can take half an hour to load into your head when you
start work each day. And that's in the best case. Ordinary
programmers working in typical office conditions never enter this
mode. Or to put it more dramatically, ordinary programmers working
in typical office conditions never really understand the problems
they're solving.
Even the best programmers don't always have the whole program they're
working on loaded into their heads. But there are things you can
do to help:
Avoid distractions.
Distractions are bad for many types of work,
but especially bad for programming, because programmers tend to
operate at the limit of the detail they can handle.
The danger of a distraction depends not on how long it is, but
on how much it scrambles your brain. A programmer can leave the
office and go and get a sandwich without losing the code in his
head. But the wrong kind of interruption can wipe your brain
in 30 seconds.
Oddly enough, scheduled distractions may be worse than unscheduled
ones. If you know you have a meeting in an hour, you don't even
start working on something hard.
Work in long stretches.
Since there's a fixed cost each time
you start working on a program, it's more efficient to work in
a few long sessions than many short ones. There will of course
come a point where you get stupid because you're tired. This
varies from person to person. I've heard of people hacking for
36 hours straight, but the most I've ever been able to manage
is about 18, and I work best in chunks of no more than 12.
The optimum is not the limit you can physically endure. There's
an advantage as well as a cost of breaking up a project. Sometimes
when you return to a problem after a rest, you find your unconscious
mind has left an answer waiting for you.
Use succinct languages.
More powerful programming languages
make programs shorter. And programmers seem to think of programs
at least partially in the language they're using to write them.
The more succinct the language, the shorter the program, and the
easier it is to load and keep in your head.
You can magnify the effect of a powerful language by using a
style called bottom-up programming, where you write programs in
multiple layers, the lower ones acting as programming languages
for those above. If you do this right, you only have to keep
the topmost layer in your head.
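The layered style described above can be sketched in a few lines. This is an illustrative example, not from the essay; the domain (HTML generation) and all function names are invented for the sketch. The lowest layer is generic; the middle layer defines a small vocabulary on top of it; and the top layer, written in that vocabulary, is the only part you need to keep in your head.

```python
# Lowest layer: a generic primitive for building HTML strings.
def tag(name, *children):
    """Wrap the concatenated children in <name>...</name>."""
    return f"<{name}>{''.join(children)}</{name}>"

# Middle layer: a tiny vocabulary built from the layer below,
# acting as a little language for the layer above.
def h1(text):   return tag("h1", text)
def li(text):   return tag("li", text)
def ul(*items): return tag("ul", *items)

# Top layer: reads almost like a description of the page itself.
def todo_page(title, items):
    return tag("body", h1(title), ul(*(li(i) for i in items)))

print(todo_page("Todo", ["write prototype", "rewrite it"]))
# → <body><h1>Todo</h1><ul><li>write prototype</li><li>rewrite it</li></ul></body>
```

Once the lower layers are debugged and familiar, they become black boxes, and only `todo_page` and its vocabulary occupy your attention.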
Keep rewriting your program.
Rewriting a program often yields
a cleaner design. But it would have advantages even if it didn't:
you have to understand a program completely to rewrite it, so
there is no better way to get one loaded into your head.
Write rereadable code.
All programmers know it's good to write
readable code. But you yourself are the most important reader.
Especially in the beginning; a prototype is a conversation with
yourself. And when writing for yourself you have different
priorities. If you're writing for other people, you may not
want to make code too dense. Some parts of a program may be
easiest to read if you spread things out, like an introductory
textbook. Whereas if you're writing code to make it easy to reload
into your head, it may be best to go for brevity.
Work in small groups.
When you manipulate a program in your
head, your vision tends to stop at the edge of the code you own.
Other parts you don't understand as well, and more importantly,
can't take liberties with. So the smaller the number of
programmers, the more completely a project can mutate. If there's
just one programmer, as there often is at first, you can do
all-encompassing redesigns.
Don't have multiple people editing the same piece of code.
You
never understand other people's code as well as your own. No
matter how thoroughly you've read it, you've only read it, not
written it. So if a piece of code is written by multiple authors,
none of them understand it as well as a single author would.
And of course you can't safely redesign something other people
are working on. It's not just that you'd have to ask permission.
You don't even let yourself think of such things. Redesigning
code with several authors is like changing laws; redesigning
code you alone control is like seeing the other interpretation
of an ambiguous image.
If you want to put several people to work on a project, divide
it into components and give each to one person.
Start small.
A program gets easier to hold in your head as you
become familiar with it. You can start to treat parts as black
boxes once you feel confident you've fully explored them. But
when you first start working on a project, you're forced to see
everything. If you start with too big a problem, you may never
quite be able to encompass it. So if you need to write a big,
complex program, the best way to begin may not be to write a
spec for it, but to write a prototype that solves a subset of
the problem. Whatever the advantages of planning, they're often
outweighed by the advantages of being able to keep a program in
your head.
It's striking how often programmers manage to hit all eight points
by accident. Someone has an idea for a new project, but because
it's not officially sanctioned, he has to do it in off hours—which
turn out to be more productive because there are no distractions.
Driven by his enthusiasm for the new project he works on it for
many hours at a stretch. Because it's initially just an
experiment, instead of a "production" language he uses a mere
"scripting" language—which is in fact far more powerful. He
completely rewrites the program several times; that wouldn't be
justifiable for an official project, but this is a labor of love
and he wants it to be perfect. And since no one is going to see
it except him, he omits any comments except the note-to-self variety.
He works in a small group perforce, because he either hasn't told
anyone else about the idea yet, or it seems so unpromising that no
one else is allowed to work on it. Even if there is a group, they
couldn't have multiple people editing the same code, because it
changes too fast for that to be possible. And the project starts
small because the idea is small at first; he just has some cool
hack he wants to try out.
Even more striking is the number of officially sanctioned projects that manage to do all eight things wrong. In fact, if you look at
the way software gets written in most organizations, it's almost
as if they were deliberately trying to do things wrong. In a sense,
they are. One of the defining qualities of organizations since
there have been such a thing is to treat individuals as interchangeable
parts. This works well for more parallelizable tasks, like fighting
wars. For most of history a well-drilled army of professional
soldiers could be counted on to beat an army of individual warriors,
no matter how valorous. But having ideas is not very parallelizable.
And that's what programs are: ideas.
It's not merely true that organizations dislike the idea of depending
on individual genius, it's a tautology. It's part of the definition
of an organization not to. Of our current concept of an organization,
at least.
Maybe we could define a new kind of organization that combined the
efforts of individuals without requiring them to be interchangeable.
Arguably a market is such a form of organization, though it may be
more accurate to describe a market as a degenerate case—as what
you get by default when organization isn't possible.
Probably the best we'll do is some kind of hack, like making the
programming parts of an organization work differently from the rest.
Perhaps the optimal solution is for big companies not even to try
to develop ideas in house, but simply to buy them. But regardless
of what the solution turns out to be, the first step is to realize
there's a problem. There is a contradiction in the very phrase
"software company." The two words are pulling in opposite directions.
Any good programmer in a large organization is going to be at odds
with it, because organizations are designed to prevent what
programmers strive for.
Good programmers manage to get a lot done anyway. But often it
requires practically an act of rebellion against the organizations
that employ them. Perhaps it will help if more people understand that the way
programmers behave is driven by the demands of the work they do.
It's not because they're irresponsible that they work in long binges
during which they blow off all other obligations, plunge straight into
programming instead of writing specs first, and rewrite code that
already works. It's not because they're unfriendly that they prefer
to work alone, or growl at people who pop their head in the door
to say hello. This apparently random collection of annoying habits
has a single explanation: the power of holding a program in one's
head.
Whether or not understanding this can help large organizations, it
can certainly help their competitors. The weakest point in big
companies is that they don't let individual programmers do great
work. So if you're a little startup, this is the place to attack
them. Take on the kind of problems that have to be solved in one
big brain.
Thanks to Sam Altman, David Greenspan, Aaron Iba, Jessica Livingston,
Robert Morris, Peter Norvig, Lisa Randall, Emmett Shear, Sergei Tsarev,
and Stephen Wolfram for reading drafts of this.
Japanese Translation
Simplified Chinese Translation
Portuguese Translation
Bulgarian Translation
Russian Translation
@patrickc @IDC Paypal seems to be sniffing your butt though.
@PP_Rubens I thought Canaletto when I saw the women, but not anymore.
@RealTimeWWII Sound just like Russian soldiers in Ukraine.
@leigh22nyc Wow, hi Leigh!
@gf_256 Fortunately for you, there are other people focusing on the world of ideas and thoughts.
@testdumbass @Venice_Wes Imagine it was your baby.
@Austen I'd say: when you get so tired that you start to make mistakes.
@testdumbass @Venice_Wes People's babies aren't dying.
@jonartmir I think you meant to reply to this one: https://t.co/LK0ulaIdYG
@Austen This makes writing code oneself seem more attractive, not less.