Collective Stupidity -- How Can We Avoid It? - By Sabine Hossenfelder
This is a transcript of https://youtube.com/watch?v=25kqobiv4ng; this video has been transcribed with Whisper and formatted with Python. All rights are reserved and all credits go to Sabine Hossenfelder.
You and I together are more than the sum of the parts.
It's not just that I know some things you don't know and you know some things I don't know, like how to prevent hair from looking like sauerkraut.
No, there's more to it.
Maybe, hopefully, every once in a while I point one of you in a new direction and you see something I couldn't see.
Together we're more intelligent than either of us alone.
But collective intelligence has a flip side - collective stupidity.
Sometimes we're more stupid together than we are on our own.
But what makes some groups of people intelligent and others stupid? That's what we'll talk about today.
Everything around you is made of just a few types of elementary particles.
Fundamentally, there's little difference between you and a cheese cracker.
Both are made from up and down quarks with electrons, held together by gluons and photons.
If you combine many of those particles, you get increasingly complex systems.
First you get atoms, then molecules, and those molecules can combine into living beings, which can combine into societies.
Each time you combine many constituents with their interactions, you can get completely new behaviour.
We call this behaviour emergent.
The difference between you and a cheese cracker isn't on the fundamental level, it's on the emergent level.
You talk.
A cheese cracker doesn't.
Or if it does, maybe cut back on those THC gummies.
A simple example of emergent behaviour is a wave.
If you combine a lot of water molecules, you get waves.
But waves don't exist for single molecules; asking what the wave of a single molecule looks like makes no sense.
Waves only exist on the collective level.
This is what it means for something to be emergent.
It's a property that doesn't make sense on the fundamental level.
It's not only particles that have emergent behaviour.
Living beings have too.
People can do a La Ola wave, sheep flow like fluids through gates, and starlings do mesmerising murmurations.
And it's not only motion that living beings coordinate, they also coordinate the exchange of information.
Fungi, for example, coordinate their growth to optimise the transport of nutrients.
According to a study that was just published last year, they do this by using directional memory and collision-induced branching.
This emergent behaviour can even be exploited to get such organisms to produce microscopic devices.
For example, slime mould, which despite its name is not a fungus, has been coaxed into growing a network that can transport dyes.
In another experiment, the mould networks were connected to create logical circuits.
While these are remarkable examples of collective behaviour, scientists don't think that fungi are actually collectively intelligent.
Though I think that's exactly what the fungi want us to think.
The animals that are probably best known for their collective intelligence are bees.
They use motion, often called the bee dance, to share information about food sources and they build hives together with a sophisticated division of labour to raise their young.
We call them collectively intelligent not just because they are able to share information, but because they base decisions on that shared information.
They learn from each other.
Ants show a similar intelligence in colonies.
They communicate with each other by pheromones to signal where food can be found, and together they can defeat enemies much larger than themselves.
But ants also show us the problems with collective intelligence.
Ants try to follow each other's trails, and if some of them accidentally close such a trail into a circle, they'll walk in circles until they die.
This death spiral isn't collective intelligence, it's collective stupidity.
It's what happens when a usually beneficial behaviour goes badly wrong.
And the same thing can happen for humans.
Two heads are better than one, the saying goes, and sometimes it's true.
Contestants on the quiz show Who Wants to Be a Millionaire can ask the audience to answer a limited number of questions for them.
The audience then collectively votes on what they believe the correct answer to be.
It doesn't always work, but statistically the audience gets it right 91% of the time.
But the wisdom of crowds wasn't born with quiz shows.
The idea is much older.
It dates back to 1906, when Francis Galton went to a livestock exhibition in Plymouth and asked a group of about 800 people to guess the weight of an ox that was on display.
He collected the results and took the middle value, the median, of all the estimates.
The result was 1207 pounds, almost exactly the right weight of 1198 pounds.
He published the results of this cutting-edge research in Nature in 1907.
Those were the days, people.
Broadly speaking, the reason large groups are better than individuals in answering simple questions is that some people in the group are knowledgeable about the topic and the rest make errors that average out.
This tells you that asking the audience isn't going to make you a millionaire if there are very few people who know the answer.
But if you ever need to know the weight of an ox, then asking people at an early 20th century livestock fair is a pretty good idea.
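To see the averaging effect in action, here's a minimal Python sketch. The noise model and its width are my own illustrative assumptions, not Galton's actual data:

```python
import random

# Minimal sketch of the wisdom of crowds: each guess is the true weight
# plus independent random noise. The noise width (100 lbs) is an
# illustrative assumption, not Galton's actual data.

TRUE_WEIGHT = 1198   # pounds, the ox's actual weight
N_GUESSERS = 800     # roughly the size of Galton's crowd

random.seed(42)
guesses = [TRUE_WEIGHT + random.gauss(0, 100) for _ in range(N_GUESSERS)]

crowd_estimate = sum(guesses) / len(guesses)
typical_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"typical individual error: {typical_error:6.1f} lbs")
print(f"crowd estimate:           {crowd_estimate:6.1f} lbs")
print(f"crowd error:              {abs(crowd_estimate - TRUE_WEIGHT):6.1f} lbs")
```

A typical run has individuals off by about 80 pounds while the crowd lands within a few pounds, because the crowd error shrinks roughly with the square root of the group size.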
There are more modern applications of this idea too.
Average guesses of crowds are valuable information.
It's why companies use crowdsourcing to collect feedback from some customers to make recommendations for others.
It's why they solicit reviews to judge the quality of products and services.
It's why YouTube wants to know how long you watch a video and whether you can be bothered to click like.
This information is worth real money.
Better still, people provide it for free.
Another example of collective intelligence that you're all familiar with is Wikipedia.
Yes, it has its problems, which is why I've given up correcting entries on quantum mechanics, but it's good enough to be useful just by collecting information.
Indeed, Wikipedia usefully has an entry about the reliability of Wikipedia that collects studies on the subject.
Results depend on the topic, but by and large they found that Wikipedia tends to be as accurate as other encyclopedias, though it's frequently incomplete and omits relevant information.
Stock markets are a particularly impressive use of collective human intelligence.
Yes, they have a bad reputation, but that's because most of us only take note of the stock market when something goes wrong.
Most of the time, however, the stock exchange guides investment to all our advantage.
It works because the incentives of individual traders are aligned with the optimal distribution of resources, at least in theory.
But this only works so long as we have appropriate regulations that govern trade.
Like, if you sign it, you're bound by it.
If you agree to pay forty-four billion dollars for a social media platform and put your name on the paper, then you can't just change your mind the next day.
You're also not allowed to trade insider information.
Monopolies must be broken up and there's a load of other regulations on trade.
Because without them, the stock market wouldn't produce results that we want.
It'd still collect information from individuals, but the outcome would no longer be what we desire.
And that brings us to the problem.
The problem is, groups are only collectively intelligent when the mechanism to collect their information is carefully set up.
If you crowdsource information from a group and want errors to average out, then the members of the group must put forward their private information independently of the others.
This means, most importantly, you shouldn't know what other people have said before you put forward your own guess.
This is why they only show you poll results after you've voted yourself.
And the "ask the audience" is set up that way too.
But if that's not the case, if people know what others have said before making up their own mind, then the information can become systematically biased.
This can lead to all kinds of trouble.
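Here's a small sketch of that bias, using the ox experiment again. The assumption, entirely made up for illustration, is that each guesser leans partly toward the average of the earlier, public guesses:

```python
import random

# Sketch of how social influence skews a crowd estimate: each guesser
# mixes their own noisy estimate with the average of earlier, public
# guesses. The mixing weight is a made-up parameter for illustration.

TRUE_VALUE = 1198
random.seed(0)

def crowd_estimate(influence):
    """influence = 0.0: fully independent; 0.8: mostly follow the crowd."""
    guesses = []
    for _ in range(800):
        private = TRUE_VALUE + random.gauss(0, 100)  # own noisy estimate
        if guesses:
            public = sum(guesses) / len(guesses)     # what others said so far
            guesses.append((1 - influence) * private + influence * public)
        else:
            guesses.append(private)
    return sum(guesses) / len(guesses)

print("independent crowd:", round(crowd_estimate(0.0)))
print("influenced crowd: ", round(crowd_estimate(0.8)))
```

Run it a few times with different seeds: the independent crowd reliably lands within a few pounds, while the influenced crowd can be off by tens of pounds, because the errors of the first few guessers drag everyone else along and never cancel out.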
The famous Asch experiment from the 1950s illustrates this.
In this experiment, participants were assigned to groups with confederates of the experimenter and were asked to match the length of lines on cards with a comparison line.
The confederates consistently gave obviously incorrect answers.
But many participants agreed with them, even though their own perceptions told them that the answers were wrong.
Now, the fact that the participants agreed with the wrong answers doesn't necessarily mean they believed them.
If I were in a room with a group of people who insisted that the longer line is actually the shorter one, I'd also agree with them.
I'd also keep my back to the wall and inch towards the exit.
This is why, even though the Asch experiment has been reproduced in many different variants, just how to interpret it has remained somewhat controversial.
People might have many reasons to agree with others even if they don't believe them.
But while the interpretation for why people act this way has remained unclear, there's little doubt that they do.
And this is how information cascades work.
An information cascade happens when individuals ignore the information that they privately hold and instead pass on the information they obtain from others, for whatever reasons.
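A toy model shows how little it takes. This is a standard sequential-decision sketch, my illustration rather than anything from the video: the true answer is 1, each person gets a private hint that's right 60% of the time, and everyone decides in turn after seeing all earlier decisions:

```python
import random

# Toy information cascade: people decide in sequence, each combining
# their own private hint with the publicly visible earlier decisions
# and going with the majority. Ties are broken by the private hint.

random.seed(7)
TRUE_ANSWER = 1
SIGNAL_ACCURACY = 0.6  # each private hint is right 60% of the time

def private_signal():
    return TRUE_ANSWER if random.random() < SIGNAL_ACCURACY else 1 - TRUE_ANSWER

decisions = []
for _ in range(20):
    signal = private_signal()
    ones = decisions.count(1) + (signal == 1)
    zeros = decisions.count(0) + (signal == 0)
    decisions.append(1 if ones > zeros else 0 if zeros > ones else signal)

print("decisions in order:", decisions)
```

As soon as one answer is two public decisions ahead, a private hint can no longer flip the majority, so everyone afterwards just copies the crowd, whether the crowd happens to be right or wrong.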
Like all collective behaviors, this one isn't necessarily bad.
In fact, it's usually beneficial.
You have all seen information cascades on social media.
In the early days of the Covid pandemic, information was sparse and we passed on what little we heard about symptoms and prevention.
That's an example of how information cascades can be useful.
But misinformation can spread the same way.
This is for example how panic buying comes about.
No one actually thinks they need 100 rolls of toilet paper.
But if everyone else thinks they need it, maybe these people know something I don't know.
Better safe than sorry.
For crowd judgment to become systematically skewed, it isn't necessary that people completely ignore their own information.
It's enough that they're influenced by others.
And this happens everywhere around us.
Multiple studies have found that information cascades happen for software adoption, online reading, product ratings and other everyday instances.
Since I know you don't come here for the fluff: sociologists distinguish such information cascades from herd behavior.
Herd behavior just means that individuals behave the same way, but not necessarily that in doing so they ignore their own information.
In reality, we often see a mix of information cascades and herd behavior.
A particularly influential example of herd behavior comes from an experiment by Milgram in the 1960s.
Yes, that's the same Milgram who did the much discussed obedience experiments.
But this one was a little more innocent.
He recruited a few people to stand on a street corner and point at nothing in the sky.
Sure enough, other people came to join them to look at nothing.
YouTube is basically built on this idea.
If you come across a video that's been watched by a million people, you're more likely to watch it than if it had only ten views.
And most of the time that's probably a good decision.
But it also means that social media has a strong "rich get richer" trend, where you eventually end up with some people who are popular for being popular.
They're interesting just because others think they're interesting.
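This feedback loop is easy to simulate with preferential attachment; the sketch below is my illustration, not YouTube's actual recommendation algorithm. Each new viewer picks a video with probability proportional to its current view count:

```python
import random

# Preferential attachment, the "rich get richer" mechanism: every new
# viewer picks a video with probability proportional to its current
# view count. All ten videos start out identical.

random.seed(1)
views = [1] * 10
for _ in range(10_000):
    chosen = random.choices(range(len(views)), weights=views)[0]
    views[chosen] += 1

print(sorted(views, reverse=True))
```

Run it a few times: a couple of videos hoard most of the views, even though nothing distinguished them at the start.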
Many financial crashes are due to information cascades, and that's certainly nothing new.
In the 18th century, the Scottish businessman John Law founded the Mississippi Company, whose purpose was to develop the French territory near the Mississippi River.
His stocks sold like warm bagels all over Europe, and Law was granted a monopoly on the trade.
This was already a bad idea, but things got worse when the French government began printing more money so that everyone could buy more Mississippi stocks.
Inevitably, investors eventually realized there was no way the supposed wealth would ever become real.
They tried to get their money out of the bank, and the bubble collapsed, leaving many people bankrupt, economic growth seriously damaged, and trust in the financial system in shambles.
The dot-com bubble of the late 1990s worked like that too.
Everyone and their dog was investing in internet startups, even though no one really knew how those companies were ever supposed to make money.
The value of these stocks became incredibly overinflated.
When those startups eventually went live but generated little to no revenue, the bubble burst.
A more recent example of an information cascade was the 2008 financial crash.
Banks were handing out mortgages to borrowers who couldn't reasonably be expected to pay them back.
The banks then bundled the mortgages and other loans into packages, called securities, which were sold to investors.
When interest rates went up, it became clear that these mortgages and loans wouldn't be paid back.
The value of the securities dropped rather suddenly and caused a big wave of bankruptcies.
The 2008 financial crisis is particularly tragic in that it was preventable.
Many people working in those banks knew that handing out those loans was a really bad idea that would eventually go wrong.
But if they hadn't played along, they'd have lost their job, so the cascade rolled on.
What do we learn from all that? Can we use some of this information to our own advantage? Well, yes.
First of all, we learn that if you want to make good use of the collective intelligence of a group, you have to try and find a format in which everyone is comfortable coming forward with their information.
And you need a way to prevent one person from being biased by another person to the extent possible.
That is, of course, if coming to an intelligent decision is what you want in the first place.
If you want the meeting to just be over quickly, then I suggest you ask the most aggressive dude for an opinion first and let him shout down anyone who dares disagree.
Of course, making good use of collective intelligence is easier said than done.
So let me mention two things that I found useful.
First, we have a natural tendency to focus on the issues that come up more often, but those aren't necessarily the most important ones.
This is why managers like to use tables to identify how important and urgent a problem is before spending time on it.
I know that academics tend to find those tables somewhat silly, but it's indeed a way to prevent collective stupidity.
The second useful thing to know is that just reminding people that their opinion might be biased can help to reduce the effect.
That's for small teams whose decision-making you can influence, but what about the large crowds that you find on social media? One thing I already mentioned in my earlier video about social media is that just stepping back and thinking about what you're doing is a way to prevent regrets.
And I phrase that so carefully because for some people, maybe preventing a cascade of false information isn't what they want in the first place.
But more interestingly, you can beat the crowd by being part of a group.
I know this sounds somewhat contradictory, but let me explain.
Numerous studies have found that small groups make better decisions than individuals on objective tasks.
That is, tasks for which there is an answer that is either right or wrong.
Such as: which way is the baggage claim, left or right? Ask your family, and you're less likely to find yourself next to a broken vending machine at the far end of the terminal.
It's easy to see how this is useful on social media.
Not sure whether that email is legit or a scam? Ask a few friends.
Another useful thing to know is that the biggest problem for groups in getting things right is having members who are confident but often wrong.
That's because the confident people make up their mind first, and this then causes an information cascade which sways the less confident people.
So be careful around confident people.
It's not that they are necessarily wrong, but if they are, they amplify errors.
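You can check this with the cascade sketch from earlier: let one person with a worse-than-average hint always decide first. The accuracy numbers are, again, made up for illustration:

```python
import random

# Rerun the cascade model, but let person 0 always decide first with a
# hint that's only 40% accurate (confident but often wrong), while
# everyone else has 60%-accurate hints. The true answer is 1.

def group_accuracy(first_accuracy, group_size=10, n_trials=10_000):
    right = 0
    for _ in range(n_trials):
        decisions = []
        for i in range(group_size):
            acc = first_accuracy if i == 0 else 0.6
            signal = 1 if random.random() < acc else 0
            ones = decisions.count(1) + (signal == 1)
            zeros = decisions.count(0) + (signal == 0)
            decisions.append(1 if ones > zeros else 0 if zeros > ones else signal)
        right += sum(decisions) > group_size / 2  # majority correct?
    return right / n_trials

random.seed(3)
print("confident-but-wrong person first:", group_accuracy(0.4))
print("average person first:            ", group_accuracy(0.6))
```

The group that lets the overconfident person go first comes out measurably less accurate, even though only one of its ten members is any different.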
Another issue is that we tend to overrate the relevance of our own opinion compared to that of others.
It's called egocentric bias.
A way to beat this issue is to hold back on forming an opinion, but of course that doesn't work if everyone does it.
In summary, making intelligent decisions isn't easy and we're not naturally good at it.
Whether a group of people makes intelligent or dumb decisions depends strongly on how the information is aggregated.
Under certain circumstances, errors can amplify each other rather than cancel out.
As you see, it isn't easy being sharper than a cheese cracker.
Collective intelligence is all well and fine, but as they say, garbage in, garbage out.
The most important thing you can do for your intelligence is to pick your input wisely.