Outside the Box

Why Facts Don’t Change Our Minds

March 22, 2017

You and I are both rational beings who let facts drive our thinking, but it seems our fellow humans are not so thoughtful. Or at least that’s what the research says. It turns out that behavioral psychologists have been undermining the bastions of human reasonability for decades, starting with some nefarious characters in the Stanford University psych department back in the ’70s, whose devilishly clever experiments were then taken a frightening step further at that equally suspect institution over on the other coast: Harvard. Don’t these mental types have anything better to do than conclusively prove that nobody (but you and I) can think straight?

Apparently not. And then the Harvard guys had the temerity to suggest that the human race’s muddleheadedness goes allllllll the way back to the time we spent trotting around on the African savannah. Remember that? Lotsa fun – if you didn’t get chewed up by a pack of hyenas or run down by a herd of water buffalos. You see, we weren’t just sitting out there on the plain playing checkers or debating the finer points of Cartesian philosophy. No, we were hanging on by the skin of our teeth – even as our teeth got smaller so our brains could get bigger. But it turns out that the most significant way our brains got bigger – and the main reason we survived and evolved into the total media animals we are today – was that we figured out how to cooperate.

Or at least that’s what the Harvard guys say. Their argument runs more or less like this:

Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” [the Harvard guys] write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

It’s quite frustrating … when you think about it. But I guess it’s better to face the truth about ourselves than to go along blindly, always wondering at the irrational hijinks our fellow two-leggeds are forever getting up to.

The whole sordid – but not entirely unhopeful – story is laid out by Elizabeth Kolbert in a piece titled “Why Facts Don’t Change Our Minds,” in – you guessed it – The New Yorker … yes, the only rag in the greater English-speaking world that insists on throwing a diaeresis (what most of us would call an umlaut) over the second o in coöperate – like they wanted to make sure we knew how to pronounce the word or something.

This week’s Outside the Box is truly one to make you think. And maybe meditate on how you process data. So read on, and be relieved of your irrational bias against the wisdom of the herd. (Or maybe, just maybe, you’ll want to help me figure out how we’re going to save ourselves from ourselves this time.)

I find myself in Dallas, home alone while Shane is with her son Dakota skiing in Colorado, which gives me some time to catch up with friends in the evenings and work even harder at meeting deadlines.

And it is getting harder to meet deadlines, because I keep running into fabulous new information that totally absorbs me, and then my friends call me up and tell me about this or that latest innovation which is so utterly compelling that I have to spend yet another hour listening to the story.

Even as I become increasingly alarmed at our global economic and political processes, I become more positive about the future of the human experiment. In just the last few days Patrick Cox and I have had multiple conversations about completely different technologies and research efforts that have significant potential for extending not just our lifespans but our health spans. If you are not receiving Patrick’s free letter, you are really missing out. And if you are a serious biotech investor, you should definitely be reading his subscriber letter.

We are finalizing the details on our 14th annual Strategic Investment Conference, which I guarantee you will be the best conference I have ever put on. You need to go ahead and register before we sell out. Once I have the last i dotted, I will give you a detailed outline of what to expect.

The weather in Dallas is absolutely fabulous. There are very few days in Texas when I am comfortable simply turning off all the air conditioning and/or heaters and just opening the doors and letting the house absorb the ambience. Spring is evidently coming early almost everywhere in North America and Europe.

You have a great week, and now let’s turn to the problems that your and my neighbors have in dealing with facts.

Your looking for more hours in the day analyst,

John Mauldin, Editor
Outside the Box

Why Facts Don’t Change Our Minds

By Elizabeth Kolbert
Originally published in The New Yorker, Feb. 27, 2017

New discoveries about the human mind show the limitations of reason.

The vaunted human capacity for reason may have more to do with winning arguments than with thinking straight.

In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.

Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.

As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine – they’d been obtained from the Los Angeles County coroner’s office – the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.

In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well – significantly better than the average student – even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student – a conclusion that was equally unfounded.

“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”

A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.

Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from.

The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research – or even occasionally picked up a copy of Psychology Today – knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?

In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies – you guessed it – were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats – the human equivalent of the cat around the corner – it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.

In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.

This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.

Among the many, many issues our forebears didn’t worry about were the deterrent effects of capital punishment and the ideal attributes of a firefighter. Nor did they have to contend with fabricated studies, or fake news, or Twitter. It’s no wonder, then, that today reason often seems to fail us. As Mercier and Sperber write, “This is one of many cases in which the environment changed too quickly for natural selection to catch up.”

Steven Sloman, a professor at Brown, and Philip Fernbach, a professor at the University of Colorado, are also cognitive scientists. They, too, believe sociability is the key to how the human mind functions or, perhaps more pertinently, malfunctions. They begin their book, “The Knowledge Illusion: Why We Never Think Alone” (Riverhead), with a look at toilets.

Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water – and everything that’s been deposited in it – gets sucked into a pipe and from there into the sewage system. But how does this actually happen?

In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)

Sloman and Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins.

“One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group.

This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the principles of metalworking before picking up a knife, the Bronze Age wouldn’t have amounted to much. When it comes to new technologies, incomplete understanding is empowering.

Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It’s one thing for me to flush a toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what I’m talking about. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia annexed the Ukrainian territory of Crimea. Respondents were asked how they thought the U.S. should react, and also whether they could identify Ukraine on a map. The farther off base they were about the geography, the more likely they were to favor military intervention. (Respondents were so unsure of Ukraine’s location that the median guess was wrong by eighteen hundred miles, roughly the distance from Kiev to Madrid.)

Surveys on many other issues have yielded similarly dismaying results. “As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

“This is how a community of knowledge can become dangerous,” Sloman and Fernbach observe. The two have performed their own version of the toilet experiment, substituting public policy for household gadgets. In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health-care system? Or merit-based pay for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.

Sloman and Fernbach see in this result a little candle for a dark world. If we – or our friends or the pundits on CNN – spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”

One way to look at science is as a system that corrects for people’s natural inclinations. In a well-run laboratory, there’s no room for myside bias; the results have to be reproducible in other laboratories, by researchers who have no motive to confirm them. And this, it could be argued, is why the system has proved so successful. At any given moment, a field may be dominated by squabbles, but, in the end, the methodology prevails. Science moves forward, even as we remain stuck in place.

In “Denying to the Grave: Why We Ignore the Facts That Will Save Us” (Oxford), Jack Gorman, a psychiatrist, and his daughter, Sara Gorman, a public-health specialist, probe the gap between what science tells us and what we tell ourselves. Their concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction that vaccines are hazardous. Of course, what’s hazardous is not being vaccinated; that’s why vaccines were created in the first place. “Immunization is one of the triumphs of modern medicine,” the Gormans note. But no matter how many scientific studies conclude that vaccines are safe, and that there’s no link between immunizations and autism, anti-vaxxers remain unmoved. (They can now count on their side – sort of – Donald Trump, who has said that, although he and his wife had their son, Barron, vaccinated, they refused to do so on the timetable recommended by pediatricians.)

The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research suggesting that people experience genuine pleasure – a rush of dopamine – when processing information that supports their beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.

The Gormans don’t just want to catalogue the ways we go wrong; they want to correct for them. There must be some way, they maintain, to convince people that vaccines are good for kids, and handguns are dangerous. (Another widespread but statistically insupportable belief they’d like to discredit is that owning a gun makes you safer.) But here they encounter the very problems they have enumerated. Providing people with accurate information doesn’t seem to help; they simply discount it. Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science. “The challenge that remains,” they write toward the end of their book, “is to figure out how to address the tendencies that lead to false scientific belief.”

“The Enigma of Reason,” “The Knowledge Illusion,” and “Denying to the Grave” were all written before the November election. And yet they anticipate Kellyanne Conway and the rise of “alternative facts.” These days, it can feel as if the entire country has been given over to a vast psychological experiment being run either by no one or by Steve Bannon. Rational agents would be able to think their way to a solution. But, on this matter, the literature is not reassuring.

Elizabeth Kolbert has been a staff writer at The New Yorker since 1999. She won the 2015 Pulitzer Prize for general nonfiction for “The Sixth Extinction: An Unnatural History.”

 



Comments


tugboat@bresnan.net

March 25, 8:44 p.m.

In summary:  the less a man knows, the more he suspects.  This is very much what I have come to expect from The New Yorker.

mike_bradley@mentor.com

March 23, 2:20 p.m.

Wow, in this article creation/God is ignored, and we have (yet again) Trump bashing, this time via Steve Bannon.

The gist of the article: it takes a lot of effort to change one’s mind. (Seems like we already knew that :-)

I am perplexed by John Mauldin. Some articles are conservative, some liberal; some acknowledge God, others don’t. Do I dare assume he promotes both sides of the fence to attract a broader set of readers? Or is he confused as to what he believes? Or is there some shred of information in articles couched in politics that we are supposed to extract?
Maybe some of all the above?

Mathew Andresen

March 23, 2 p.m.

Very interesting article. I agree that confirmation bias is a huge problem, and the comments on the “knowledge illusion” were pretty interesting.

Short answer: we are all very stubborn about admitting we are wrong, even when shown incontrovertible proof. Moreover, we all know a lot less than we think we do on many subjects. Worse, the less we know, the more stubborn we usually are.

In particular I like the solution of giving people more details instead of sound bites.

One thing to think about, of course, is that our fearless leaders (on both sides of the aisle) are just as guilty of this as most of us, if not more so. Something to think about next time they tell you they know how to run your life better than you do.

I’m going to leave alone the many problems with their statements on gun violence at the end.

jodonnell@upstreamip.com

March 23, 11:54 a.m.

I have been finding it increasingly difficult to find sources of facts that seem to be impartial and objective. As a result, I have become more dependent on my own ability to come to opinions without reference to other sources. Perhaps this is the ultimate “confirmation bias,” but I usually know when I’m lying to myself. I was a Psych major for a while, and became increasingly less likely to believe the professors as time and experience went by. I suspect that the public will become increasingly doubtful about the veracity of both the media and the government as time and experience proceed. This is not a “Trumped Up” charge. It is just happening, in my opinion.

donschott@aol.com

March 23, 11:38 a.m.

The article is a bit disappointing, as it feeds the partisan divides in the US rather than starting a factual, thoughtful discussion to improve life. First off, there is little to differentiate humans based on cooperation – just watch some ants, wolves, or chimps for a few minutes. The general readings on this (Dale Purves, M.D., Duke University Music and Biology course) show voice and music as uniquely human.
Second, vaccinations do maim and kill a few people, and it doesn’t help to deny the facts. It is somewhere between exaggeration and misleading to write “Of course, what’s hazardous is not being vaccinated.” According to the National Vaccine Information Center, 300 children died from the MMR vaccine, and anyone (like parents) who reads the following insert of side effects would take issue: “lupus (autoimmune connective tissue disorder); Guillain-Barre syndrome (inflammation of the nerves); encephalitis; aseptic meningitis (inflammation of the lining of the brain); deafness; cardiomyopathy (weakening of the heart muscle); hypotonic-hyporesponsive episodes (collapse/shock); convulsions; subacute sclerosing panencephalitis (SSPE); ataxia (loss of ability to coordinate muscle movements); paresthesia (numbness, burning, prickling, itching, tingling skin sensation indicating nerve irritation); transverse myelitis; acute disseminated encephalomyelitis (ADEM).”
The authors just blow these concerns off with “Providing people with accurate information doesn’t seem to help.”  But, maybe no one is seriously trying to “help”.

Third, like above, the authors dismiss out of hand the argument that “owning a gun makes you safer.”  Again, on a personal level and among groups of people in the US, guns have saved lives and protected homes.  It is not wise to dismiss these facts when arguing that guns harm innocent people.  The solution may lie in addressing both concerns, not belittling and dismissing the gun owners.

Finally, the tone of the article is dismissive of a US electorate that in six short years took the House, the Senate, and two-thirds of governorships and state legislatures from the Democrat Party, and gave the White House to a womanizing, blowhard billionaire from Queens with no experience, a third of the Clintons’ campaign chest, and only one endorsement from the fifty largest US newspapers.  An observer from Mars would certainly question the integrity and objectivity of the US media.

A much more fruitful bipartisan dialog, or road map to fix the Democrat Party, would be to review the Pew Trust World Happiness Report released this month, Chapter 7, “Restoring American Happiness”: Figure 7.2, “Decline in US Overall Trust”; Figure 7.3, “Decline in US Trust in Government”; Figure 7.4, “The Rise of Inequality.”  Until we can reverse these trends in meaningful ways, we are doomed to somewhere between the ants and the chimps.

fallingman7@gmail.com

March 23, 11:27 a.m.

What a smug piece of propaganda.

It’d take me a week to fully deconstruct the BS in this piece, but I have paid work to do, so let me concentrate on two points.

Questioning is good, right?  Critical thinking and all that.

But if one questions the safety of vaccines / surmises that they might just have harmful effects, that makes one “anti-vaccine” virtually by definition, it seems.

Since when does questioning something and asking for more research into the issue make one anti?  Andrew Wakefield, the director of the brilliant film Vaxxed, ISN’T anti-vaccine, but he’s been labeled as such.  He just wants the trivalent MMR jab, which is clearly problematic, to be administered as single shots.  Is that being anti-vaccine?

RFK Jr. isn’t anti-vaccine, but that’s what they get away with calling him just because he revealed the deceit at Simpsonwood and wants to take a closer look at the safety claims.  They attack Rand Paul for raising questions about the CDC’s vaccination schedule.  They attack Robert De Niro and Donald Trump just for having doubts.

You mustn’t question. You must fall in line.  Is that really what true science is about?  Compliance?  I don’t think so.

Barbara Loe Fisher’s mission at the National Vaccine Information Center is to make sure people are aware of the data and are giving truly informed consent when they agree to have their kids or themselves vaccinated.  She doesn’t give advice.  She doesn’t tell people not to give their kids vaccinations.  And yet, they vilify her.  She’s an “anti-vaxxer.”  So what does it tell you when seeking to inform and be informed makes you not only anti-vaccine but anti-reason, according to Elizabeth Kolbert?

And what’s this nonsense about the ability to cooperate being the biggest advantage humans have as a species?  There’s this little thing called language.  It’s the sine qua non of civilization.  The ability to cooperate is derived from it.  That’s so obvious.  Why would you try to bluff your way past that fact?

Okay, one more: On the advisability of owning a gun, that’s entirely context specific, now, isn’t it?  Playing Russian roulette?  Having a gun makes you less safe.  Stopping a break-in at your house by a couple of doped-up lowlifes looking for stuff to steal to support their Vicodin habit?  Safer.  And that’s not a hypothetical example.  Maybe she would have had to BE HERE to UNDERSTAND.

Her argument amounts to sophistry.

Bottom line: This is pseudo-intellectual tripe.  Take some real science and dress it up / warp it to fit your particular worldview and advance your political agenda.  Sick.

And you publish it as if it’s full of real insight. You need to get a clue.

cleartrace@gmail.com

March 23, 9:57 a.m.

Noting the partisan citations was like a call for help: “We’re human, too…”

galynn@gmail.com

March 23, 8:42 a.m.

“If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.” 

Give me a break!  Must everything be political?  I posit that “If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the OBAMA Administration” would be just as accurate.

This does, however, confirm my bias that everything in politics today is all about the tribe.  Nuts to the voters, the country, and whatever option would provide a better result.

It’s all about the tribe.

ian@rhodian.net

March 23, 7:11 a.m.

As insightful and entertaining as the article is, it is all the more depressing because the author is very clearly blind to her own ideological confirmation bias.

stephen myers

March 23, 1:20 a.m.

Ditto Mr. Webbink’s comment.  The easy-to-read story behind their work is recounted in Michael Lewis’s The Undoing Project.  A more complete summary of their findings, and those of the researchers they’ve influenced, can be found in Kahneman’s Thinking, Fast and Slow, which spent a bit of time on the best-seller lists a few years ago.  Why is it that new findings in science seem to take from 50 to 100 years to begin to be even vaguely recognized by the population at large (e.g., me)?
