Hidden Brain - The Transformative Ideas of Daniel Kahneman

Episode Date: April 1, 2024

If you've ever taken an economics class, you were probably taught that people are rational. But about 50 years ago, the psychologists Daniel Kahneman and Amos Tversky began to chip away at this basic assumption. In doing so, they transformed our understanding of human behavior. This week, we remember Kahneman, who recently died at the age of 90, by revisiting our 2018 and 2021 conversations with him. If you enjoyed this look at the work of Daniel Kahneman, you might also enjoy our conversations about behavioral economics with Kahneman's friend and collaborator Richard Thaler: "Misbehaving with Richard Thaler" and "Follow the Anomalies."

Transcript
Starting point is 00:00:00 This is Hidden Brain. I'm Shankar Vedantam. The past is not a place where my mind naturally wanders. My colleagues and friends can tell you that for better or worse, my focus is almost always on what lies ahead. But it's a very specific moment from the past that I've been thinking about a lot in recent days. It was March 2018 and a snowstorm was headed for New York City. I wasn't in New York, but that storm was the focus of all my worry and attention.
Starting point is 00:00:34 That's because New York was the home of the great psychologist and Nobel Prize winner Daniel Kahneman. We had invited him to come to NPR studios in Washington DC to tape an episode with us in front of a live audience that had come in from around the country. Danny was worried that the storm might keep him from returning to New York and he really needed to get back after the event. It looked like the storm would derail our plans. Mother Nature was not gracious that day. But Danny was. He extended his stay in Washington, D.C., and the conversation I had with him is one of
Starting point is 00:01:13 my favorite memories as host of the show. It's now also a memory that's tinged with sadness. Daniel Kahneman died on March 27th. He was 90 years old. It's hard to overstate the foundational role that Danny has played in our modern understanding of human behavior. So today, we thought we'd bring you a special episode looking back at two conversations I had with him. What they reveal is how far our understanding of the mind has come over the past century, and how much Danny and his intellectual partner,
Starting point is 00:01:50 Amos Tversky, charted a path that has inspired the work of countless other researchers. Thinking. Fast, slow, and transformative. This week on Hidden Brain. If you've ever taken an Econ 101 class, you were probably taught what's long been a core idea in economics. People are rational. They seek out the best information. They measure costs and benefits and maximize pleasure and profit. This idea of the rational economic actor has been around for
Starting point is 00:02:36 centuries. But about 50 years ago, two obscure psychologists shattered these foundational assumptions. The psychologists showed that people routinely walk away from good money, and they explained why once people get in a hole, they often keep digging. The methods of these psychologists were as unusual as their insights. Daniel Kahneman and Amos Tversky spent hours together, talking. They came up with playful thought experiments. As Daniel Kahneman remembered it, they laughed a lot.
Starting point is 00:03:12 We found our own mistakes very funny. What was fun was finding yourself about to say something really stupid. Daniel Kahneman and Amos Tversky, who passed away in 1996, transformed the way we understand the mind. And that transformation had philosophical implications. The stories about the past are so good that they create an illusion that life is understandable. And that's an illusion.
Starting point is 00:03:42 And they create the illusion that you can predict the future, and that's an illusion. And they create the illusion that you can predict the future. And that's an illusion. Life may not, as Danny says, be understandable, at least not fully. But that didn't stop me from asking him to reflect on his own journey. At our event in 2018, I started by asking him to look back on his life and how much of his success he would attribute to talent and how much to luck. I mean, you know, some talent was really needed, but luck, you know, I can see so many points
Starting point is 00:04:18 in my life where luck made all the difference. And mainly the luck is with the people you meet and the friendships you make. There is a large element of luck in that. My life was transformed by sheer luck in finding a partner, an intellectual partner with whom we got along very well and we got a lot done. Before we get to Amos, I want to talk about another person whom you met. This was in 1941-42. You're a very young Jewish boy living in German occupied Paris. One day you're out beyond curfew, an SS officer spots you and runs up to you.
Starting point is 00:05:01 What happens next? Well, he doesn't run up to me, but he beckons me to him. And I was wearing a sweater. It was past the curfew and my sweater had a yellow star on it, and so I was wearing it inside out. And he called me and he picked me up and I was really quite worried that he might see my yellow star. He hugged me tight and then he put me down, he opened his wallet and he showed me a little
Starting point is 00:05:36 boy. Then he gave me some money and we went our separate ways. Obviously, I reminded him of his son and he wanted to hug his son so he hugged me. That's an experience that for some reason, I mentioned it in my Nobel autobiography but as illustrating a theme that was a theme in my family actually, my mother especially, that people are very complicated and that seemed to be an instance of something very complicated. And so it stayed that, in that sense, it's a memory that was important to me. So when an event like that happens, I can imagine most people just saying thank God
Starting point is 00:06:18 and moving on, but you found it interesting, partly because you said there's something interesting that happened here. And from a very very young age it seems you were drawn to these curiosities about how the mind worked. In some ways the SS officer was making a mistake. He was looking at you and drawing an association from you to sort of to another child, maybe his own his own son and of course in many ways it was an error. It was it would the mind was not working in a quote unquote rational fashion, but it was more associative. Rupert Spira The complexity was that it's the combination of somebody who must have done some very evil things and have thought some very evil thoughts,
Starting point is 00:06:59 and yet he was hugging me. And I mean, that kind of complexity was everywhere. I mean, Hitler, you know, liked children and liked flowers and was very kind to some people. So we have a lot of difficulty putting that together with the things he did. But that complexity was always interesting. At what point do you feel you became the person who was paying attention to his own thoughts? Because so many of your early insights were developed obviously in experiments that you ran on people, but you were also observing the way your own mind worked and observing,
Starting point is 00:07:38 if you will, oddities in the way that your own mind worked. Was that always the case with Danny Kahneman? I think so. I wrote a psychological essay when I was 11. It was short, but I'll tell you what the essay said actually because it shows quite a few things, I think. So my older sister was taking exams in philosophy, and I had read some Pascal, and Pascal explained why I gave proofs of God's existence. Pascal said that faith is God made sensible to the mind. A little boy of 11, very pompous of course, I said, how right.
Starting point is 00:08:34 And then the psychological part was I said, but this is very hard. The experience of faith is very rare. And so that's why we have churches and organs and pump to sort of, and I called it ersatz, I mean, a sort of fake experience, to generate a fake experience. So that was, you know, that was psychology. And obviously, you know, that's what interested me then and it's interested me since. Later on as you were working as a professional psychologist now, you made in some ways a career of thinking about how your own mind worked. And I'm fascinated by this idea because in some ways, a lot of people look at how their minds work and they're defensive about it.
Starting point is 00:09:20 Or they defend how their minds work. Or they say, no, what I did made perfect sense. And instead in some ways, I think your humility, and clearly it's a temperamental quality of yours, helped you to sort of see some of these oddities and think about why they happen. That's not quite the way it happened, actually. I had my friend and collaborator, Amos and I, we worked together on that and nobody ever accused him of being humble. He was not.
Starting point is 00:09:51 But what the two of us did, we found our own mistakes very funny. And so we had a lot of fun just exploring what is our first impulse when it's wrong. That can be an endless source of fun. There was no particular humility. On the contrary, in a way. That is, we never thought that people are stupid because we were finding all of that in ourselves and we didn't think we were stupid. There was very little humility there.
Starting point is 00:10:23 What there was was irony, and the irony was part of the fun. What was funny about it? I can see why it was interesting or why it was curious, but I understand that when you and Amos worked together, there was just endless amounts of hilarity. There was a lot of laughter. And what was fun was finding yourself about to say something really stupid and sort of holding back because you know better. But it's that impulse to say things that are without basis or that are purely associative
Starting point is 00:10:59 or that. And really it doesn't matter how intelligent you are or how educated you are. There are those intuitions or those thoughts that come from somewhere that come very reliably and predictably and that are wrong. Meeting Amos was clearly a stroke of luck. I don't think your life would have taken the same path that it did. Certainly not. I mean, it's rare, really, but he was exceptionally smart and very, very quick.
Starting point is 00:11:34 And there is, when you have two people who are working together, who really, in a way, love each other's mind and admire each other's mind. That is very special because it gives you a sort of confidence when you say something and the other person sees something in it that you haven't seen. And this is very rare, that this kind of mutual trust and looking for what's interesting and good in what the other person is saying. And both he and I, we were both quite critical people. I mean, he even more than I, but we made exception, an exception for each other. And that was a joy.
Starting point is 00:12:18 Psychologist Daniel Kahneman laid the groundwork for what is today known as behavioral economics. In 2002, Daniel Kahneman was awarded the Nobel Prize in Economics. So many of your early insights were based on thought experiments, where you came up with sort of very simple questions that you posed to both yourselves and to other people. Why these thought experiments?
Starting point is 00:12:39 And when we talked a few days ago, you actually said that this is part of the reason you think that your work appealed to a larger audience, because even if the ideas were complex, the questions were inherently interesting and accessible. Well that's a stroker block really. There's a famous psychologist, Walter Michel, who wrote a book on the marshmallows test a few years ago. His dissertation was with small children, and he asked those children two questions. One of them was, what do you want to be?
Starting point is 00:13:20 The other question was, you can have this lollipop today or two lollipops tomorrow, what do you prefer? These two questions were correlated with everything in sight. I just fell in love with that idea of the psychology of single questions. I looked for ways to do that sort of thing. The work with Amos on judgment turned out to lend itself to just that. There is a single question that elicits a funny thought and it makes a point. And you know, in the first place, we were very lucky in the choice of problem.
Starting point is 00:13:59 There are just no other problems in psychology that lend themselves to that sort of thing, that you can involve the reader and present questions to the reader and you make the reader think. You can do that in vision and there are those demonstrations of perceptual effects like figure ground or perceptual organization. They are on the page and that's the phenomenon. You are your own subject. Now you can do that on vision, you can do that on judgment, which is the field that
Starting point is 00:14:33 we did it in, and that's it. You can't do it on self-control, you can't do it on many other things. You can't do do in personality studies. When I was talking of luck, that's luck, to hit on something that we happen to be prepared for and that is uniquely, lends itself uniquely to something that creates experiences in readers. Sheer luck. So after many, many years of collaboration together, your partnership with Amos founded, I think it's fair to say. I'm wondering whether you've given the same thought to why that happened that you gave to other things that your mind does.
Starting point is 00:15:20 And whether those insights, I mean so many of your insights about how your mind has worked have helped the rest of us. Is there anything here that could help the rest of us think about collaboration and partnership? No, I mean, you know, there's natural stresses in collaboration. The world is not kind to collaborations. You know, when you have two people who are reasonably talented and they work together and they overlap closely. Then I'm quoting Amos, he said, when I give a lecture, people don't think I need anybody else to do the work.
Starting point is 00:15:54 And that was true to some extent of me as well. And so that creates stresses. Of course, I've given a lot of thought to it. We were fortunate that we went on as long as we did. We were fortunate that we remained friends, even when there were stresses in the collaboration and in the friction. I remember research that Abraham Tesser did many years ago where he looked at couples or other pairs of people who are very similar to one another. One of the things he found is, of course, the closer in similarity people were, the
Starting point is 00:16:30 more they reached for the same goal. You had, let's say, a couple who were both writers. The success of one person tended to make the other person feel smaller. Even though you're happy for your partner, there's a part of you that says, why can't I have the success that my partner has? It's a part of you that says, why can't I have the success that my partner has? And it's a very human thing. Yeah, of course. You know, and it's of course especially true if it's joint work, which that happens.
Starting point is 00:16:50 So this is, you know, there's really a dynamic. And it's, I would say we were just about perfect for, you know, 10, 12 years, which is a very long time. When we come back, what Danny and Amos discovered together. Stay with us. Applause This is Hidden Brain. I'm Shankar Vedantam. Today we are remembering psychologist Daniel Kahneman, who passed away recently at the age of 90. In the 1960s, Danny spent a summer with a group of eminent psychoanalysts at a treatment center in Massachusetts. The center
Starting point is 00:17:44 had a routine. A patient was examined for a month by multiple experts, then everyone came together to do a case study. They reviewed notes, they interviewed the patient together. One particular case left a strong impression on Danny. In the morning we learned that the woman, a young woman about whom we'd written the report had taken her life. And they did a very brave thing. They ran the case study. And I was deeply impressed both by the honesty of what they did, but what they were trying
Starting point is 00:18:19 to do, they were seeing signs that they had missed. It was in retrospect, obviously, this was hindsight at work. Now you know what's happened, so you're seeing signs and premonitions. People are really feeling guilty. I saw her on the stairs and she looked strange. Why didn't I stop to inquire? People look strange all the time. So yeah that was that was an important episode. And of course what this episode reveals is how once an event happens we trace back a
Starting point is 00:18:56 story about how that event came to be and of course in journalism we do this all the time. I you know I remember after the 9-11 attacks we spent years sort of deconstructing all the errors that were made after the 9-11 attacks, we spent years sort of deconstructing all the errors that were made and drawing a pattern. When you see that pattern laid out, you have to say, well, those people must have been really dumb because it's so obvious that there was a pattern that led to the 9-11 attacks. Yeah. This is hindsight, and it's one of the most important phenomena actually in psychology, in the psychology of judgment, because you understand the past.
Starting point is 00:19:28 The past surprises stop being surprising at the moment they happen. Then you have a story and you shouldn't have been surprised. When you reconstruct it, you also reconstruct wrongly what you believed at the time. So you minimize, you reduce a surprise. So not only was it inevitable, but also I almost, I really sensed it. So, you know, now where this goes really wrong and is that the stories about the past are so good
Starting point is 00:20:04 that they create an illusion that life is understandable. And that's an illusion. And that they create the illusion that you can predict the future, and that's an illusion. And it's maintained by hindsight. So hindsight is a central phenomenon, really. And of course, the errors we make eventually led to Prospect Theory, which was the work
Starting point is 00:20:28 which you were cited for in the Nobel Prize among other things. If you were to explain Prospect Theory to an eighth grader, is there a way to do that? Well it's very easy to explain. It's much harder to make it interesting. And the theory that dominated thinking when we wrote, and to a very large extent still dominates economic thinking, was formulated first in 1738. So it's been around a long time. And what it says is that when you're looking at a gamble, what you're evaluating is you're evaluating two states of wealth.
Starting point is 00:21:13 Your wealth if you will win and your wealth if you will lose and then if you're offered a sure thing instead of your wealth if you get that sure thing. And for 260 years and so, people accepted that theory. Instead of your wealth if you get that true thing and for 206 years and so people accepted that theory now the theory really is Doesn't make sense if you stop to think about it people don't think of gains and losses as states of wealth They just don't they think of gains and losses as gains and losses. That was the fundamental insight of prospect theory. You could ask if you get a Nobel Prize for that.
Starting point is 00:21:56 You do in a certain context, because if it surprises people. One of the things you say in the book is, our comforting conviction that the world makes sense rests on a secure foundation, our almost unlimited ability to ignore our ignorance. And of course, you've spent a lifetime exploring the depths of your ignorance and all of our ignorance, but in many ways, there's something deeply human about this. To see the world as being chaotic and unpredictable and noisy is fundamentally unsettling, and it's easier to see the world
Starting point is 00:22:32 as understandable and comprehensible, and that fits in a story. Well, it's not, you know, we really have no option. I mean, the mind is created to make sense of things. I mean, vision makes sense of things. We see objects. We see objects moving. And it's the same with judgment and thinking. We have to make sense of things. And we can't do otherwise. So it's not, you know, that we would be unsettled
Starting point is 00:22:56 if we did otherwise. We can't. We make sense of things. We're sense-making organisms. And of course, it's worth pointing out that even though this leads to errors, it's also the case that much of the time this is enormously valuable and our sense-making ability actually works great, that it actually allows us to navigate the world successfully. Of course. I mean, you know, we're right almost all the time. I mean, you know, we couldn't survive if we weren't right almost all the time. We make interesting mistakes, and sometimes they're important mistakes,
Starting point is 00:23:28 but mostly we're very well adapted to our environment. So when you think about news events, if I tell you there are 19 hijackers who have flown planes into major buildings, and then we go back and we get biographical sketches of these people and we understand their ideologies and you know it activates things in our minds because of course there are these agents that are doing these things to us and you know we then spend hundreds of billions of dollars
Starting point is 00:23:56 trying to combat terrorism and you say okay that makes sense this is a major threat we've dealt with it but let's say you have another threat over here where I tell you that in 80 years or 100 years, the temperature might rise five degrees. And as a result of this, the oceans might warm a little bit, and sea levels might rise by two or three inches. And as a result of this, the models predict that climate events will become more serious, at least according to the models,
Starting point is 00:24:25 but you have to understand probability. In order to try and head that off, you actually have to take very painful steps right now, maybe driving your car less, maybe living in a smaller house, all kinds of things that are painful in the here and now for something that seems difficult off in the distance and requires you to really understand statistics and probability. You've actually called climate change in some ways sort of a perfect storm of the ways in which our minds are not equipped to deal with certain kinds of threats. Yeah.
Starting point is 00:24:55 I mean, it's really, if you were to design a problem that the mind is not equipped to deal with, climate change would fit the bill. It's distance, it's abstract, it's contested. And it doesn't make, it doesn't take much. If it's contested, it's 50-50, for many people immediately. You don't ask, what do most scientists do? Which side of the National Academy of Sciences?
Starting point is 00:25:20 That's not the way it works. Some people say this, other people say that. And if I don't want to believe in it, I don't have to believe in it. So it's... I'm really... Well, I'm pessimistic in general, but I'm pessimistic in particular about the ability of democracies to deal with a threat like that effectively. If there were a comet hurtling down toward us, you know, an event that would be predictable,
Starting point is 00:25:49 whether they, we'd mobilize. So it's not even that it's distant in time, but this is too abstract, possible, it's very different. We can't, we're not doing it in fact. So besides being pessimistic, does your research and understanding of this phenomenon give you any insight into how we should maybe talk about climate change and what we can do? I think scientists, in a way, are deluded in that they have the idea
Starting point is 00:26:27 that there is one way of knowing things and it's you know things when you have evidence for them. But that's simply not the case. I mean, you know, people who have religious beliefs or strong political beliefs, they know things without having, you know, compelling evidence for them. things without having compelling evidence for them. There is a possibility of knowing things which is clearly determined socially. We have our religion and our politics and so on because we love or used to love and
Starting point is 00:27:00 trust the people who held those beliefs. There is no other way to explain why people hold to one religion and think other religions are funny, which is really a very common observation. The only way would be to create social pressure. So for me, it would be a milestone if you manage to take influential evangelists, preachers, to adopt the idea of global warming and to preach it. That would change things.
Starting point is 00:27:42 It's not going to happen by presenting more evidence. That, I think, is clear. When we come back, we'll talk about happiness, memory, and noise. Stay with us. Thank you. This is Hidden Brain. I'm Shankar Vedanathan. Daniel Kahneman won the Nobel Prize for a series of ideas that helped develop the field of behavioral economics. Danny, I don't know how you got an ethics panel to approve this study, but it's one of my favorite studies
Starting point is 00:28:34 of all time. Tell me about the colonoscopy study and the Pecan rule. Well, the colonoscopy study was devised to test an idea The colonoscopy study was devised to test an idea that when people form a memory of an episode or an impression of an episode that had a certain duration, that actually they completely neglect the duration. And what they're sensitive to are illustrative or crucial moments, and in particular when it's a painful experience. It's the peak of the pain and it's the end of the pain. It's how much pain you're at in the end.
Starting point is 00:29:13 So that was a theory for which we had other evidence. And my friend Donald Vredeumer, who is a physician in Toronto, he volunteered to create a study around that. So study was run on people with a colonoscopy, which at the time was very painful. I mean, for those of you who have not reached the age of colonoscopy, it won't be painful when you have it, but at that time, it really was. People had a colonoscopy, and then half of them, it ended when it ended. For half of them, they left the tube in for another minute or so. Now, this is not pleasant.
Starting point is 00:30:03 Nobody would volunteer to have the tube in for another minute, but it improves the memory very significantly because it's less painful than what went on before. It's not desirable. You wouldn't choose it, but it makes a difference between a really aversive memory, which you have when they pull the tube at a moment of high pain. The whole thing is very bad. But if you end on a gentle note, even if it's still painful, the memory improves. Memory wasn't designed to measure ongoing happiness or to measure total suffering.
Starting point is 00:30:47 For survival, you really don't need to put a lot of weight on duration, on the duration of experiences. It's how bad they are and whether they end well. That is really the information that you need for an organism. And so there are very good evolutionary reasons for the peak and end rule and for the neglect of duration. It leads to, you know, in some cases to absurd results. So if you are a policymaker, I feel like this is a real ethical dilemma. So let's say, for example, I'm running a hospital. I think the
Starting point is 00:31:20 colonoscopy study or versions of it have later found that if you actually give people the painful experience followed by the less painful experience, they are more likely to come back for the next colonoscopy because their memory of the colonoscopy was less painful. So you could argue from a public policy standpoint where you want people to get tested, the right thing to do is to extend their pain in order that they will remember the pain as being less and come back more often. However, also from an ethical point of view, you could argue that subjecting people to more pain
Starting point is 00:31:49 than you need to subject them to is unethical. So what should we do? That one is easy. I mean, you know, there are harder versions of it. But that one is easy because you would never frame it that way. You would just tell the people who are doing the procedure, be very gentle at the end.
Starting point is 00:32:08 Be slow and gentle at the end, and that sounds like a good thing. And it's good for policy, and it will get more people. It will leave better memories, it will more compliance, and so on. So there are ways sometimes of not presenting quite as sharply as you did. What would be a more difficult ethical dilemma that I didn't think of that you could apply to yourself? Well, I think that if real suffering is involved, somebody in pain, I'd say you can be in pain and barely conscious, or you can be in pain and they will eliminate
Starting point is 00:32:47 the memory at the end. How much weight should you give to pain that the patient might be screaming but will not remember? That's an ethical dilemma. Of course, this does have all kinds of other implications. You've done some work looking at, you know, if you could go on a vacation but you couldn't take photographs on the vacation, how would you think about the vacation? In other words, you essentially have these two models of how the mind works, that there's
Starting point is 00:33:15 a mind that experiences life and there's a mind that remembers life, and these two minds don't always agree with one another. Well, I mean, they have different interests in a way. So I spoke of the experiencing self, which is the one that lives moment to moment. And the remembering self is the one that keeps score. And the scores that are generated are generated by rules, such as the peak end rule and so on. And so sometimes you can see that experiences are very different duration and how do they
Starting point is 00:33:52 matter? Or what is the value that you should attach to an experience that you will not remember or that somebody will not remember. So my question in that context was, I mean consider your plans for your next vacation. And now imagine that at the end of the vacation, they will destroy all your pictures and they'll give you an amnesic drug so that you won't remember a thing. Now would you change your vacation plans?
Starting point is 00:34:28 If you knew that? Many people would, actually, because I think many people go on vacations to create memories for future consumption, which doesn't always happen. In my case, it never happens. I never look at pictures, But that's a dilemma. So you conducted a study, I remember, a few years ago. I think it was published in the journal Science, where you evaluated how happy parents felt as they went through their days.
Starting point is 00:34:57 And there's two ways you can, of course, ask the question. You can ask parents, how happy are you with parenting? And many parents will say it's the best thing they ever did. But then you can also ask parents on a moment to moment basis as they're in parenting how they feel and the answer turns out somewhat differently. Well yeah I mean it's you know it turns out that parenting if you really take the experiencing view of it then you know it's like washing dishes, you know, maybe a little worse often. And then, you know, and then it has its moments and it's the peak moments that people remember.
Starting point is 00:35:34 And when people remember the peak moments, it makes the whole thing worthwhile. So it changes the meaning of the whole experience. So that was a much contested finding, very unpopular finding, but a very strong finding. If you look at the experiences, people have more fun with their friends than with their spouses, quite a bit. And if you were trying to make, to increase the happiness of the experiencing self,
Starting point is 00:36:04 you would do very different things than people do because what people typically do, they try to satisfy their remembering self. Maximizing the happiness of your experiencing self would make you more social, less ambitious. It would make you spend a lot more time with people that you love or like or enjoy because it's very largely social. So there are important implications of that distinction. Is there any insight that someone can draw from this work about whether they should become
Starting point is 00:36:40 a parent given this discrepancy between the remembering self and the experiencing self? And I should remind you before you answer that your daughter is in the audience here with us. I have never met, almost never met people who regretted having had their children. So if you measure things by the remembering self, and that's really the only way. The point is that the experiencing self doesn't make decisions. All the decisions are made by the remembering self. And the remembering self never regrets having had children.
Starting point is 00:37:11 So, you know, from that point of view, the answer is clear. Psychologist Daniel Kahneman laid the groundwork for what is today known as behavioral economics. Danny explained many of these insights in his 2011 book, Thinking, Fast and Slow. You're working on a new book with a couple of other people, but it's also a new area that you're looking at. A lot of the earlier work looked at the issue of biases and errors, and there's a new focus for a lot of the work that you're doing right now, and it has to do with the question of noise. What does that mean? Well, there are really two kinds of, two broad families of error.
Starting point is 00:37:48 There is bias and there is noise. Noise simply means randomness. It's variability that shouldn't be there. If you imagine target shooting, then where the cluster of shots is, that could be far from the target, that's bias. But the variability of the cluster, that's noise. Many people, and certainly I in the past, are very interested in biases. And we think in terms of biases much more than we did like 60 years ago, where people
Starting point is 00:38:24 would think of random error. And I now think that we have exaggerated biases and that most errors are really noise. They're randomness. And it's a very different approach to error than focusing on biases. I mean, I don't want to overstate that biases are very important and so on, but noise has been neglected and I think it deserves attention. Do you think it has implications that are different from the implications related to bias when it comes to public policy, for example?
Starting point is 00:38:58 Is it easier to reduce the effects of noise to address them than bias? Well, certainly. I mean, you can, in the first place, if you take judgment away and have a computer make the decision, the computer will be noise-free. Algorithms are noise-free in the sense that you present the same problem twice, you'll get the same answer. Whereas if you present the same problem to different judges or to the same judge at different times of day, you are going to get different answers.
Starting point is 00:39:28 So you can eliminate noise by algorithm, and I think we should do that wherever we can. And where we can't do that, we should, I think the implication is that we should try to structure the judgment process so as to make it more reliable. I'll give you an example where this really matters. There is a thing in the United States that people call the asylum lottery because people who ask for asylum get a judge, they get the judge at random and in some cases they have an 80% chance of getting through, and in other cases, 15.
Starting point is 00:40:08 That's noise. And you really don't want it, I think. Well, we're in the process right now at Hidden Brain of hiring someone. And in fact, we just conducted two interviews today, and we have a couple tomorrow. And as I was doing the interviews, I was thinking about some of the work that you've done.
Starting point is 00:40:24 In some ways, this was your earliest work going back many, many decades, looking at how you can reduce errors in the interview process. And I don't know whether you think of it as bias or you think about it as noise, but either way, it leads to flawed outcomes. And you came up with a technique that could address it. Yeah, I actually did come up with a technique a little more than 62 years ago, actually. I was an officer in the Israeli army. It was 1954.
Starting point is 00:40:53 The Israeli army was very young. It was 1956, actually. I set up an interview system which is a template for a lot of what is going on and is certainly a template for the way I think decisions should be made. I haven't thought of that for many years. The template is you have a problem, you need to evaluate people, break it up into dimensions. What sounds elementary and I'm not going to say anything very surprising. Make judgments of each dimension independently of all the others. That's independence is essential.
Starting point is 00:41:32 Don't form a general impression until you have all the information. Delay intuition. Don't give it up necessarily. Delay it. And the results are just better when you do things that way. And I think that's probably is very general as a way of thinking about judgment and decision making. It's a way of reducing noise, of increasing reliability, and it's not very costly. And I'd like to promote it.
Starting point is 00:42:01 So of course the idea, if I understand correctly, is you score people on different criteria, give them a ranking so that you're evaluating it, but there's also an interesting piece of advice, which I understand they still offer in the Israeli army when they're doing these evaluations, a final piece of advice after you've done the calculations. What is that advice? Yeah, well, that's... So I set up that interviewing system. I was 22 years old. And the people, the interviewers who were 19 years old, they really didn't like that
Starting point is 00:42:34 suggestion. What they really wanted was to have a heart-to-heart conversation and then to form a general impression of how good a combat soldier that individual would be. But they said, you are turning us into robots. And they had a point. And then I told them, OK, I'll compromise. You do it my way, the interview. You run the whole interview, just and you generate those scores independently, fact-based and
Starting point is 00:43:05 so on. Don't think of anything until the end. In the end, close your eyes and give a score. How good a soldier will that person be? Now much to my surprise, that intuitive score is really very good. It's as good as the average of the six traits and it's different so it adds content. So having an intuition, if you delay it, it's quite good. The kicker of that story was that about 50 years later or so I got a Nobel Prize, so for a short time I was a celebrity in Israel. They took me to the army, to my old base, and they explained how they were doing the
Starting point is 00:43:51 interview because they were still using that system, essentially, but very little change. Then the commander was telling me, and then she said, and then we tell them, close your eyes. So that thing had lasted for 50 years, that expression. So what I love about that, it's not so much intuition versus bias, but it's more maybe by just delaying intuition, the intuition gets better. And of course, if you don't do the detailed analysis, you still have an intuition that feels very powerful.
Starting point is 00:44:22 And your ignorance is sort of papered over by this tendency of the mind. You know, intuition is compelling as such. I mean, you know, we have the intuition almost by definition. We trust it. And so delaying this and remaining very close to facts as you collect your separate dimensions is really very useful and it permits an intuition that is well informed because normally we form intuitions very quickly and then we spend the rest of the time confirming that this intuition was right. That by the way is a fact. It's been studied that way in interviews.
Starting point is 00:45:02 People form impressions in the first minute or two and they spend the rest of the time testing that they're right and and of course confirming that they're right. So this was clearly an example of how you came up with a mechanism in some ways to overcome how the mind works but on many many other fronts it seems like the biases, errors that you've discovered, even yourself, you say that you don't necessarily, you're not the master of those biases after studying them for more than half a century. Yeah, I mean, even myself. I mean, I'm considered one of the worst offenders on many of these mistakes. I'm overconfident when I really preach against that, and I make extreme predictions when
Starting point is 00:45:50 I preach against that. Some people read thinking fast and slow in the hope that reading it will improve their minds. I wrote it and it didn't improve my mind. I wrote it and it didn't improve my mind. It's not... those things are, you know, they're deep and they're powerful and they're hard to change. Danny, yesterday was your 84th birthday. Happy birthday. You've studied a great number of different things over the years, and you tell me that one of the things that you're actually interested in studying is the subject of misery.
Starting point is 00:46:30 Much more than happiness, you're fascinated by misery. Now of course, I can just put this down to the pessimism that clearly you've demonstrated for a long time, but you actually say you can draw more specific conclusions, and there are takeaways from studying misery than from studying happiness. Yeah, I'm actually, you know, I contributed to what is called happiness research, but I'm really disturbed by it and I'm disturbed by positive psychology, in part because I think that making people happier is, you know, could be important, hard to do. It may not be society's business to make people happier, but reducing suffering, that's something
Starting point is 00:47:13 else. It's easy to agree that this is important. It's easy to agree that society should be involved. Furthermore, it's easier to measure misery than to measure happiness, and what we can do about it is clearer than what we can do to enhance happiness. From all these points of view, I think that, and again, it's a matter of semantic luck. We speak of length and not of shortness, and so we speak of happiness and not of the other side of unhappiness.
Starting point is 00:47:46 But if you focus on unhappiness and misery you end up doing very different things, thinking very different thoughts and taking different actions which I think we should do. So you've been a wonderful sport Danny and really grateful for you for coming down and I am almost a little shame-faced about doing what I'm about to do right now, which is I'm wondering if we can increase your happiness just a tad, but it might increase your misery by singing Happy Birthday to You. You're one of Hidden Brain's heroes,
Starting point is 00:48:16 and we feel that it's really appropriate to end with that. So on the count of three, happy birthday to you. Happy birthday to you. Happy birthday dear Danny. Happy birthday to you. Thank you. Thank you.
Starting point is 00:48:42 Thank you. Danny Kahneman, thank you for joining me today on Hidden Brains 100th episode. My pleasure. Several years after this conversation with Danny Kahneman, he published the book he mentioned on the topic of noise. In 2021, we brought him back to Hidden Brain for a second chat and a deeper dive into that idea. That's the conversation you're going to hear next.
Starting point is 00:49:17 I hope you enjoy it. This is Hidden Brain. I'm Shankar Vedantam. We're going to start today with a little experiment. I'll be the guinea pig. I'm going to open the Stopwatch app on my phone. I'll hit start and count off 5 seconds while looking at the phone. 1, 2, 3, 4, 5.
Starting point is 00:49:46 Okay, let me do that again. 1, 2, 3, 4, 5. Okay, now I'm going to hit start and count off 5 seconds without looking at the phone. 1, 2, 3, four, five. It was 5.43 seconds. Last time. 1, 2, 3, 4, 5. 5.59 seconds. The errors I made seem trivial, but it turns out they are not. Multiply the small mistakes I made in milliseconds over all the countless decisions I make every day, and you can end up with a serious problem. Multiply the errors I make as an individual by an entire society made up of other error-prone humans, and you
Starting point is 00:51:06 can get disaster. What makes these mistakes insidious is that they are rarely the result of conscious decision-making. Human judgment is imprecise, and imprecise judgment produces unwanted variability, what the Nobel Prize-winning psychologist Daniel Kahneman calls noise. Wherever there is judgment, there is noise, and there is more of it than you think. This week on Hidden Brain, the gigantic effect of inadvertent mistakes in business, medicine, and the criminal justice system, and how we can save us from ourselves. Daniel Kahneman's insights into how we think have revolutionized many areas of the social sciences.
Starting point is 00:52:13 He was my guest on Hidden Brain for our 100th episode. We talked about his early research and his first book, Thinking, Fast and Slow. As we close in on our 200th episode, we wanted to bring him back to talk about a set of ideas he's been working on for several years. They're described in his new book, Noise, a flaw in human judgment. Daniel Kahneman, welcome to Hidden Brain.
Starting point is 00:52:39 Glad to be here. I want to begin by exploring what you mean by the term noise. You spend some time studying an insurance company and one of the things an insurance company needs to do is to tell prospective clients how much their premiums are going to cost. So an underwriter says if you want us to cover you against this loss here's this quote. From the insurance company's point of view Danny, what is the risk of offering quotes that are too high and also quotes that are too low? Well, a quote that is too high, you are very likely to lose the business because there
Starting point is 00:53:12 are competitors and they'll offer a better price. A quote that is too low, you're leaving money on the table and you may not be covering your losses if you do that a great deal. So errors in both directions are costly. We define noise as unwanted variability in judgments or decisions. That is, if the same client would get different quotes from different underwriters in the same company, this is bad for the company. And variability is a basic component of error.
Starting point is 00:53:45 So I think of the insurance business as being driven by mathematics. That's my stereotype, that there are hard-nosed statisticians who work at these companies. So I would not expect a quote from one underwriter to be wildly different from the next. You asked executives at this insurance company how much variability they expected between underwriters. What was their estimate of this kind of subjective variability? I mean, it turns out that there is a very general answer to that question. People have a very general idea about that number will be, and it's around 10%.
Starting point is 00:54:19 Now, when we actually measured that in an insurance company, the answer was 55%. And that was a number, that was an amount of variability, as we call it, an amount of noise that no one expected. And that really is what set me off on this journey that led to this book. Now, the difference between 10% and 55% might seem trivial. Who cares? Well, the consequences of this variability were anything but trivial.
Starting point is 00:54:56 I mean, I asked people what actually would be the cost of setting up a premium that is too high or too low. And when they carried out that exercise, they thought that the overall cost of these mistakes was in the billions of dollars. Now what was in some sense saving that company was that probably other companies were noisy as well. But if you have a company that is noisy while others are noise-free, the noisy company is going to lose a lot of money very quickly.
Starting point is 00:55:30 So with the insurance company, it's not just that the insurance company is losing money. There is also a cost that's being paid by all the people who are trying to get insurance. It might be that if you happen to get a quote that's too high, you might end up being uninsured or you might be spending more on insurance that you need to be spending. There is sort of a general human cost to these errors, not just in terms of the bottom line for the insurance company. Well, of course, when you have a noisy underwriting system, then the customer is facing a lottery that the customer has not signed up for.
Starting point is 00:56:03 And that is true everywhere. That is, wherever people reach a judgment or a decision by using their mind rather than computing, wherever there is judgment, there is noise, and there is more of it than you think. I want to look at a few other places, because in some ways what's striking about your book is both the number of different domains where you see noise and the extent of noise in those different domains including in places where you really feel this should be a setting where noise does not play a role.
Starting point is 00:56:39 You cite a study done by Jaya Ramji Nogales and her co-authors who found that in asylum cases, this was a courtroom in Miami, one judge would grant asylum to 88% of the applicants and another granted asylum to only 5% of the applicants. So this is more than a lottery. This is like playing roulette. This is a scandal. Clearly the system isn't operating well. In many situations
Starting point is 00:57:06 it's just that when people look at the same data they see them differently. They see them more differently than they expect. They see them more differently than anyone would expect. That's the basic phenomenon of what we call system noise. That is, when you have a system that ought to be producing judgments or decisions that are predictable, they turn out not to be predictable, and that's noise. You also describe in some ways there are different kinds of noise. So if you're an asylum judge and I'm an asylum judge and we have very different subjective readings that can produce very different answers.
Starting point is 00:57:52 But it could also be that if you are reviewing a case in the morning and you are reviewing a case in the afternoon, it's possible that just within yourself, your own judgments can be noisy. Can you talk about that idea as well? It's not only possible, it actually is the case that when people are asked the same question or evaluate the same thing on multiple occasions, they do not reach the same answers. For example, radiologists who have shown the same image on two separate occasions and are not reminded that it's the same image, really with distressing frequency reach different diagnoses on the two occasions.
Starting point is 00:58:33 That we know. It's true even for fingerprint examiners, whom we really would not expect to be noisy at all, but actually they vary when you show them the same fingerprints twice. By the way, that's important. They do not vary in the sense that somebody would make a match on one occasion and would positively say it is not a match on the other. But fingerprint examiners are allowed to say, I'm not sure.
Starting point is 00:59:03 And between I'm not sure and I am sure that it's a match or it's not a match, there is variability. One of the things that you point out is that you don't expect that the lottery of who is reviewing your file is going to make a huge difference or that extraneous factors would play a huge role. The researcher Uri Simonson found that college admissions officers pay more attention to the academic attributes of candidates on cloudy days and to non-academic attributes when the weather is sunny. He titled his paper, Clouds Make Nerds Look Good. Talk about this idea that
Starting point is 00:59:38 extraneous factors, whether someone's hungry, what the weather is like, that can affect people's judgment too. Indeed, it's been established in the justice system. If you're a defendant, you have to hope for good weather because on very hot days, judges assign more severe sentences. And that is true, although judges are air-conditioned, but it's the outside temperature nevertheless seems to have an effect. It's been established in at least one study that for judges who are keen on football the result of their team on Sunday or Saturday depending on whether it's professional or
Starting point is 01:00:18 college will affect the judgment they make on the Monday and they will be more severe if their team lost. No! He missed the extra point wide right! That's a terrifying idea isn't it Danny, that you're sort of hoping that your judges football team wins the Sunday before your case is heard. Yes, absolutely and you are also hoping to find a judge who is in a good mood, to find a judge who has rested, has had a good night, who is not too tired. And your chances of being prescribed antibiotics or painkillers differ in the course of the day.
Starting point is 01:01:00 So doctors tend to prescribe more antibiotics toward the end of the day when they are tired than earlier in the day when they are fresh. And they are more likely to prescribe painkillers later in the day simply because it's an effort to resist the patient who wants painkillers. And when you're very tired and depleted, that effort becomes more difficult. So completely extraneous factors have a distressingly large effect. Noise in medicine often shows up under a different name. Medical mistakes.
Starting point is 01:01:36 Stunning medical news tonight about how many Americans have something go wrong when they go to the hospital. The astronomical number, one in three patients, will face a mistake during a hospital stay. And these are costly errors. One study estimating medical mistakes cost the US more than $17 billion a year. The doctors had discovered that Sarah didn't have cancer in the first place. She'd been misdiagnosed, and all the pain and treatment that she went through was for absolutely nothing. So Danny, can you talk about these two different dimensions of noise in the medical sphere,
Starting point is 01:02:15 the ways in which it might cause us to get diagnosed with conditions we might not have, but also for doctors to miss conditions and problems that we actually do have? but also for doctors to miss conditions and problems that we actually do have. The contribution of noise is that which physician looks at the data makes a difference. And there is a lot of that. That is, we know that physicians disagree on diagnosis and they also disagree on treatment. And that is a little shocking that there is that element of lottery. So errors could happen for many reasons, including luck, which is not an error in judgment, but where information was missing.
Starting point is 01:02:55 But in some cases, the errors cannot be described in any other way than noise, that is, different doctors looking at the same case, reaching different conclusions. It might seem obvious from these examples that noise is a big problem and that combating noise makes a lot of sense. Who could argue against reducing arbitrary decisions and inconsistent rules?
Starting point is 01:03:21 It turns out a lot of people have a problem with doing just that, and one of those people might be you. You're listening to Hidden Brain. I'm Shankar Vedantam. This is Hidden Brain. I'm Shankar Vedantam. We've seen how noise pervades many aspects of our personal and social lives. It can lead to wildly different estimates on our insurance premiums. It affects judgments doctors make about our health.
Starting point is 01:04:01 It can determine whether we get a job or a promotion. In their new book, Noise, a flaw in human judgment, Daniel Kahneman and his co-authors, Olivier Sibony and Cass Sunstein, show that noise also shapes what happens in the criminal justice system. It affects decisions that send people to prison or sentence them to execution. Danny Judge Marvin Frankel worked as a United States district judge and he made a name for himself by pointing out inconsistencies in the criminal justice system. He once wrote a case about two men convicted for cashing counterfeit checks. Both amounts were for less than 60
Starting point is 01:04:40 dollars. One man got a sentence of 30 days in prison. The other got 15 years. What did Judge Frankel make of such disparities? I mean, he thought it's unjust. He thought it's extraordinarily unfair, which it seems to be on the face of it. So he really felt that the justice system should be reformed to avoid this role of completely unpredictable, unreasonable factors that determine the fate of defendants. You know, Danny, I feel like in the last year, I've seen dozens of stories that talk about disparities of all kinds, including disparities in the criminal justice system.
Starting point is 01:05:26 And invariably when I read these stories about disparities, they talk about the idea that it's about bias, that it's about racial bias or gender bias or some other kind of bias. So when Judge Frankel comes along and says, you know, defendants are being given vastly different sentences, the very first thing that pops in my head is maybe these defendants were of different races and what we're really seeing is racial bias at play rather than noise. How can we tell the difference between racial bias and noise? It's actually easy to do because when you want to measure noise, you can conduct a kind
Starting point is 01:06:02 of study that we call the noise audit. And so you take professionals, for example, judges, and you show them a fictitious case. And you ask them to make judgments as they would normally. Now you know that it's the same case, they've all been given the same information. They should give you the same judgment. The differences among them cannot be attributed to bias. And indeed, what Judge Frenkel caused to happen, he caused many noise audits to be performed. He actually conducted some himself. And in the most famous one, 208 federal judges
Starting point is 01:06:49 eight federal judges evaluated 16 cases and assigned sentences to 16 cases. And this gives you an idea of the lottery that the defendant would face in that where the average sentence is seven years in jail, the probable difference between two judgments is over three years. So that seems to be unacceptable. So based on the work of Judge Frankel and others, Congress eventually passed a law that basically limited the amount of discretion that judges had. Talk about the effects that this law had on reducing noise.
Starting point is 01:07:26 Were there studies conducted to actually figure out if these were reducing noise? Yes, studies were conducted and actually you can look at many cases and look at the variability of judgments in many cases and you find that the variability significantly diminished, which indicates that their noise was in fact reduced. However, something else happened. The judges hated it. They hated this restriction on their ability to make free decisions, and they felt that
Starting point is 01:08:00 justice was not being served. So, even as the data was showing that the noise was reducing in sentencing, in other words, sentencing was becoming more consistent, many judges were upset that their discretion was being taken away. And Judge Jose Cabranes was one of those who spoke up. And I want to play you a clip of something he said in 1994. This was a discussion at Harvard University where they were talking about these guidelines that were aimed to reduce ethnic disparities in sentencing by limiting the amount of discretion
Starting point is 01:08:30 that judges had. Here is Judge Cabranes. These arcane and mechanistic computations are intended to produce a form of scientific precision, but in practice, they generate a dense fog of confusion that undermines the legitimacy of the judge's sentencing decisions. Dhani, I want to draw your attention to what Judge Cabranes is saying. When you limit the variability of sentencing, you're telling judges, for this offense, you have to do X, for that offense, you have to do Y. A lot of judges feel their hands are tied and they feel the art of law is being reduced to a mechanistic science.
Starting point is 01:09:10 Well, if it takes a mechanistic science to produce justice, then I think we should seriously consider some mechanistic science. What seems to be happening is that from the perspective of the judge, they feel that they're evaluating every detail of the case and that they are producing a just judgment because they are convinced that what they are doing is a just judgment. And somehow it's very difficult to convince judges that another judge whom they respect a great deal presented with the same case would actually pass a different sentence. That argument doesn't seem to have penetrated when Judge Cabrón has made that assertion
Starting point is 01:09:59 that in fact there is a problem and there is a problem to be resolved. He was in effect, as I hear him, he was denying the existence of a problem. Psychologists talk about a phenomenon called naive realism that in some ways explains why it is, I am bewildered that you would not see the world exactly the way that I see the world. Can you explain what naive realism is and how it speaks to the question we just discussed about judges not just reaching different conclusions but being bewildered
Starting point is 01:10:29 that anyone would reach a different conclusion than them? Well, you know, we feel that we see the world as it is. It's the only way we see it, and what we see is real, what we see is true. And it makes it very difficult to believe and to imagine that someone else looking at the same reality is going to see it differently. But in fact, we are struck by how different they are in the context of criminal justice. The variability in sentences is shocking. But when you're looking at it from the perspective of a judge who looks at cases individually and feels that he or she is making correct judgments
Starting point is 01:11:16 for every case individually, then it looks as if any attempt to restrict their freedom is going to cause injustice to be performed. But they're simply not accepting, I think, the statistics that tell them that another judge looking at the same case would actually pass a different sentence. So these debates about sentencing reform raged in the 1980s and 1990s, and eventually in the early 2000s, the Supreme Court struck down the guidelines that bound the way judges were operating, and sentencing reform essentially went away, giving discretion back to judges. Is what happened what I fear happened? Did noise come back into the system? Oh, yes
Starting point is 01:12:07 I mean there is evidence that noise came roaring back and there is also evidence that judges were a lot happier Without the guidelines than they had been earlier One of the ironic things that you and others have found is that even though there is this distinction between noise and bias When the noise came back after the Supreme Court ruling have found is that even though there is this distinction between noise and bias, when the noise came back after the Supreme Court ruling, black defendants were actually among those who were the most severely harmed by this. Is it possible in some ways they can be intersections between noise and bias? In other words, they can amplify one another?
Starting point is 01:12:38 Certainly. I mean, when you are constraining people and reducing noise, you're reducing the opportunities for bias to take place. So attempts to reduce noise and attempts to control noise are going to, in general, not invariably, but are very likely to control and reduce bias as well. If noise produces many of the adverse outcomes we see, if noise produces much of the unfairness we see, why is it that critiques of disparities invariably talk about bias? Turns out that's because of the way our minds work.
Starting point is 01:13:17 The brain is a storytelling machine and the story of bias caters to our hunger for simple explanations. I mean, clearly, bias in general is a better story. That is, you see something happening, it had the character of an event, it had the character of something that is caused by a psychological force of some kind. Variability, noise, is uncaused. Noise doesn't lend itself to a causal story. And really the mind is hungry for causes. And that leads us very naturally to think in terms of biases.
Starting point is 01:13:57 That errors must be explainable. So if I get a misdiagnosis because a doctor doesn't like the color of my skin, that might not make me feel good, but at least I can make sense of what happened. Once I settle on an explanation of racism or sexism or homophobia, I tell myself I have every right to get angry. When I discuss what happened with others, they'll get angry too. By contrast, a misdiagnosis produced by noise is, by definition, no one's fault. The error may have harmed me, but I can't lay the blame on someone's evil intentions.
Starting point is 01:14:36 Noise is the very opposite of a good story. It's meaningless, and that can make me feel even worse. Here's another problem. When I see a judge pass a really harsh sentence or a very light sentence, I can come up with a story of bias to explain this individual case. You cannot do that with noise. You cannot spot noise by looking at any individual case. You have to measure it in the aggregate. It shows up only when you look at the statistics, and many of us are uncomfortable turning to data as our guide to the truth. We prefer stories and anecdotes, and stories and anecdotes are better at illustrating the problem of bias. Stories and anecdotes are what the mind is prepared for. Statistical thinking is alien to us.
Starting point is 01:15:34 And statistical thinking is the only way to detect noise, because it's variability. It's sort of absurd to say about any single case that it is noisy. You say that if you have no idea of how it came about. But noise is a phenomenon that you observe statistically and that you can analyze only statistically. And that is not appealing. So there's an even deeper problem than the fact that noise is detectable only through statistics whereas bias, you know, you can tell a story about bias. So there's an even deeper problem than the fact that noise is detectable only through statistics
Starting point is 01:16:05 Whereas bias, you know, you can tell a story about bias for many people making decisions. The data is simply not even available So at a statistical level you can see an insurance company is demonstrating noise But many of the decisions we are making our decisions we make as individuals So if I want to propose marriage and I feel like proposing marriage on a moonlit night in the springtime, I have no idea if my decision to propose marriage on that evening is being shaped by noise or not. I don't have a statistical set of how I would behave under different circumstances. You know, the truth of the matter is that no one can tell you that this decision was noisy. What you can tell is that when you look at the collection of decisions of people deciding
Starting point is 01:16:47 to get married, that collection is noisy. There is no reason to believe that these steps which improve judgments in the statistical case do not apply when somebody decides to get married. If noise is present in the decisions where you can observe it, it's also present when you cannot observe it. Hmm. Some years ago I interviewed the researcher Berkeley Dietworst. He talked about how people respond when a mistake has been made by a human versus an algorithm. I want to play you a short excerpt of something he told me. People failed to use the algorithm after they'd seen the algorithm perform and
Starting point is 01:17:29 make mistakes even though they typically saw the algorithm outperform the human. In our studies the algorithms outperform people by 25 to 90 percent. So he's basically saying the algorithms are significantly better than the humans but when a mistake is made and algorithms of course can make mistakes and humans can make mistakes, he's saying that you prefer the human to make the mistake. And I think intuitively that feels correct to me. If I'm going to get a misdiagnosis when I go to a doctor, I would feel better if it's the doctor who's made the mistake than an unfeeling, unthinking algorithm.
Starting point is 01:18:02 I think that's absolutely true. And when we're looking at a road accident, we somehow feel less bad about it if it was a driver error than if it was a self-driving car that caused the accident. Algorithms, they make errors. The error they make, by the way, are different from the errors that people would make, and they look stupid to people. Algorithm make errors that people think are ridiculous.
Starting point is 01:18:30 Now we don't get to hear what algorithm think of the errors that people make. And we do know that algorithm just make far fewer of them in many cases. And you have to trade off the higher overall accuracy against the discomfort of abandoning human judgment and trusting an algorithm. Yeah. You know, this might actually be a subtext of much of your lifetime's work, Danny, but it seems to me that fighting noise requires a certain amount of humility. And it seems to me that humans are not humble. Well, they're not humble for a fairly straightforward reason. We do not go through life imagining different ways of seeing what
Starting point is 01:19:14 we see. We see one thing at a time, and it feels right to us. And, you know, that is really the source of the problem of ignoring noise. This is why it is so difficult to imagine it. I want to talk just for a brief moment about places where noise can potentially be useful. So let's say, for example, you have a company that's trying to innovate and come up with new ideas, or you're in a creative enterprise where you want to pitch different ideas for movies. In some ways, you might want to actually maximize the variability of the ideas you get.
Starting point is 01:19:54 So, noise is not always bad. Sometimes it can actually lead to good things. Yeah. We don't call it noise in those cases. We reserve the term noise for undesirable variability. There are indeed many situations in life in which variability is a blessing, certainly in creative enterprises, also evolution. So anything that allows you to select the better one of multiple responses, wherever there is a selection mechanism, Variability is a good thing, but
Starting point is 01:20:26 variability in the absence of a selection mechanism is a sheer loss of accuracy. And those are the cases that we talk about. So if you had a way when you have multiple underwriters of finding out who is doing a better job than whom. And using that in order to improve their training, that would be a case where you could make positive use of variability. But in the absence of such a mechanism, that variability just is a sheer loss.
Starting point is 01:21:04 When we come back, how to fight noise? You're listening to Hidden Brain. I'm Shankar Vedant. This is Hidden Brain. I'm Shankar Vedant. Noise is endemic. It's also very difficult to fight, in part because judges and doctors and police officers don't like to think of themselves as capricious.
Starting point is 01:21:34 We don't think of our judgments as being arbitrary, certainly not when it comes to really important decisions. Even when we are told about how noise is affecting our judgments and decisions, we hate to be shackled by rules. Danny, in 1907, Charles Darwin's cousin, Francis Galton, asked 787 villagers at a county fair to estimate the weight of a prize ox. None of the villagers guessed the right answer, but then Galton did something with their answers that got him very close to the correct answer. What did he do, Danny? Well, he simply took the average
Starting point is 01:22:14 and the average, I think, was within two pounds of the correct weight. And that led to a lot of research that was summarized in a recent book by James Surowiecki on the wisdom of crowds and the fact that when you take multiple judgments, independent judgments, and average them, you eliminate noise. This by the way is guaranteed to eliminate noise. So if you take multiple judgments there is no guarantee that it will reduce bias because if the judges agree on the bias
Starting point is 01:22:50 then the bias will remain when you take the average. Indeed it will be even more salient. But what is absolutely guaranteed is that when you average independent judgments you are eliminating noise. When you take four independent judges, you're reducing noise by one half. When you take 100, you're reducing it by 90%. So there is some mathematics of noise that lends itself to analysis that doesn't apply to bias. So it's really remarkable. The correct weight of that ox was 1,198 pounds and
Starting point is 01:23:26 as you said that was one or two pounds off the correct weight. And I want to point out that the reason averaging the responses produces a better answer is that noise is random. You're taking advantage of the fact that various estimates will be randomly high or low and that's why when you average them out you're going to get closer and closer to the correct answer. What happens when you have different people making the same judgment of the same object, and then you are going to average them, then the errors they make cancel each other out. But when people make judgments about different cases, errors don't cancel them out.
Starting point is 01:24:04 If you set too high a premium in one case and too low a premium in the other case, that doesn't make you right. That just makes things worse. So this idea that errors cancel out, you have to apply it quite precisely. They cancel out when your average judgments are the same thing.
Starting point is 01:24:23 And also the judgments have to come from people who in some ways who are independent of one another. If I'm seeing the judgment you make and then I make my judgment afterwards, my judgment really is just a reflection of your judgment, not an independent one. That's right. And you know what happens basically is when you have witnesses who talk to each other, the value of their testimony is sharply reduced. Because in effect, in the extreme, if you have one witness who is very assertive, all the other witnesses fit their story to his, then you have one witness, regardless of how many testify.
Starting point is 01:25:00 One of the most remarkable aspects of the wisdom of the crowd that you describe in the book has to do with How you can elicit the wisdom of the crowd just from yourself you cite research by Edward Vool and Harold Paschler that ask people to make judgments about the same thing Separated by a certain amount of time. What do they find when you average out these different estimates? well, for example, you know if if you ask people, what did the population of London? And you ask it once, and then you wait a couple of weeks, say, and you ask it again. The striking thing is that most people will not give you the same number on the two occasions. And the second striking thing is that the average of the two responses is more likely
Starting point is 01:25:48 to be accurate than either of the responses. The first response is better than the second, but the average is better than both. In one of the studies they conducted, they actually asked people to make estimates that were different than their initial estimates, and then they averaged out the estimates and they found that noise was reduced even further. Why would this be the case, Danny? Well, here what you're trying to do, and you can do it within an individual, is you're leaning against yourself.
Starting point is 01:26:20 You made one judgment and then you ask people to think, how could that judgment be wrong, and then make another? And that turns out to be, indeed, better than merely asking the same question twice. In some ways, this provides a solution to the conundrum I pose to Danny. If noise is detectable only by studying statistical averages, how do I reduce noise in decisions I am making as an individual? The answer?
Starting point is 01:26:53 Try to make the same decision over and over under different conditions. One way to tell if noise is behind my decision to propose marriage is to ask myself whether I would make the same decision under different circumstances, not just on a moonlit night in the springtime, but in the heat of summer or in the dead of winter. If I reach the same answer in these different settings, it's possible I could still be making a mistake, but at least I can be somewhat reassured that my decision is not the result of random, extraneous factors. Scientists are exploring lots of ways to reduce noise.
Starting point is 01:27:35 The researcher Sendhal Mulayanathan and his colleagues devised an algorithm to advise judges on whether to grant bail to suspects. These are people who have been arrested, but who have not yet been put on trial. Keeping them in jail can cause all kinds of hardship. People can lose jobs or lose custody of their children while they're incarcerated awaiting trial. It's costly for taxpayers to keep people in jail. But letting someone dangerous out of jail can cause harm.
Starting point is 01:28:04 Maybe they go on to commit other crimes. The researchers had the algorithm offer advice to judges about whether to grant bail. They found that if judges incorporated the recommendations, this could reduce the number of people in jail by 42% without increasing the risk of crime. 2% without increasing the risk of crime. The research goes further than that in that allowing the algorithm to inform the judge is actually not the best way of doing it. The research suggests quite strongly that when you have a judge and an algorithm that are looking at the same data, with some exceptions, it's better to have the algorithm
Starting point is 01:28:47 have the last word, and this is very non-intuitive. Yeah. Besides being actually superior in some ways in terms of judgment, one of the things that algorithms do better than people is that they're not noisy. They're actually much more consistent. Can you talk about this, that in some ways, one of the advantages that algorithms have
Starting point is 01:29:05 is even when their judgments might not be as good as humans, because they have less noise than humans, you're able to get better outcomes? Well, noise is a source of inaccuracy. And algorithms, by their nature, are noise-free. That is, when you present the same problem to two computers running the same software, they're going to give you the same answer, which is not true of different bail judges. So that advantage is in many cases sufficient to make algorithms superior to people. But I don't want to create the impression that our solution to the problem
Starting point is 01:29:46 of noise is algorithms, because even if it were the solution, there's just too much opposition to algorithms. So ultimately, we're talking about improving judgments. In some domains, algorithms can be used, and I think where they can be used, they should be used, but this is a long process, a slow process because human judgment is going to make the important decisions for quite a while. Isn't it interesting though, Danny, that when you look at the news and you see the news coverage of algorithms,
Starting point is 01:30:21 I feel like just in the last year, I've seen dozens of articles talking about algorithmic bias, about how algorithms in some ways can make judgments worse. And it is the case that you can have poorly designed algorithms. You can argue that the old sentencing rules that we had, three strikes and you're out, in some ways that is an algorithm, but you could argue the algorithm in some ways was too crude to capture what actually needed to be done. But isn't it striking that there's so little attention that's paid by contrast to the potential
Starting point is 01:30:50 good that algorithms can do? Because again, we're so focused with the story of intent of saying a bad outcome happened and algorithm caused it, clearly algorithms need to be thrown out the window. I mean, we do not want to accept the errors that blind rules will make. You know, I was talking to someone who designs self-driving cars, and they realize that self-driving cars, it's not enough for them to be a hundred times safer than regular drivers. They effectively have to be almost perfect before they will be admitted. And it's that kind of bias that is completely human and natural. We like the natural over the unnatural.
Starting point is 01:31:34 We prefer human drivers and human doctors to make mistakes rather than self-driving cars and medical algorithms. And that's just a fact of psychology. You talk in the book about something that you call decision hygiene, and others have talked about this idea as well. What is decision hygiene and why the analogy to public health? When you're thinking of dealing with bias is like a specific disease. So you can think of a vaccine or you
Starting point is 01:32:10 can think of medication, which is specific to that disease. But when you're washing your hands, you're doing something entirely different. You have no idea what germs you might be killing. And if you're good at it, you will never know because the germs are dead. And a similar distinction can be drawn between different ways of fighting errors.
Starting point is 01:32:35 There is a difference between procedures that are specifically aimed at particular biases and procedures that are intended generally to improve the quality of the judgment and decisions. And the way that this feeds back on the individual is that if there are procedures that are good for organizations and for repeated decisions, they should be good for individuals and for singular decisions. Mm-hmm. So if I'm a CEO of a corporation or if I'm a policymaker and I'm hearing this conversation about noise, can you give me two or three really specific suggestions on ways that I can reduce noise in my decision-making or in my company's decision-making or in my organization
Starting point is 01:33:20 or community? Well, I think the first step would be to ask whether you have a task in the organization that is carried out by interchangeable functionaries like underwriters or or emergency room physicians. They're carrying out the same task, making the same kinds of judgments, and you would like those judgments to be noise-free, to be uniform. So first of all, identify whether you have that case in your organization. If you do, we strongly recommend you measure noise. That is, you actually take those individuals, present them with similar cases, and observe
Starting point is 01:34:02 the variability in the judgments. And possibly that may lead you to want to do something about it. But the first step is just to measure noise, because our intuitions about the magnitude of noise are systematically wrong. Danny thinks we should learn from the saga of the rise and fall of sentencing reform. Once you detect noise in an organization, we should learn from the saga of the rise and fall of sentencing reform.
Starting point is 01:34:25 Once you detect noise in an organization, it may be wiser to avoid trying to fix the problem by asking everyone to follow rigid rules. As we've seen, people hate to have their judgment questioned, they hate to have their discretion limited, and they detest anything that smacks of mechanistic rules. The main thing to do if you're attempting to improve the judgment of people in an organization is to convince those people that they want their judgments to be better. If you impose it as a set of rules that all of them will follow, they will resist it, they will feel they are being robotized, and they're likely to sabotage whatever you propose.
Starting point is 01:35:10 This is well known in insurance companies that provide the underwriters in many cases with information or even with a technical price, with a suggestion about what premium should be assigned. And underwriters are very likely to completely ignore those and to follow their judgment. And basically, I would think, you know, it's obvious advice. If you have a group of people who are noisy, have that group try to find a solution to the noise. Have them develop procedures that will make them uniform, do not impose procedures on them, but work with them to make them more
Starting point is 01:35:52 uniform because actually they will recognize that they would like to be in agreement with each other. But letting them feel that what they are doing is what they want to do rather than what they are being forced to do. That is clearly a very important step if people really want to have organizations that improve their judgment. Daniel Kahneman, Olivier Sibony and Cass Sunstein are the authors of Noise, a flaw in human judgment. Danny, thank you for joining me today on Hidden Brain.
Starting point is 01:36:29 It was really my pleasure. Hidden Brain is produced by Hidden Brain Media. Our audio production team includes Annie Murphy-Paul, Kristen Wong, Laura Correll, Ryan Katz, Autumn Barnes, Andrew Chadwick, and Nick Woodbury. Tara Boyle is our executive producer. I'm Hidden Brains executive editor. This week we say goodbye to someone whose name you've heard many times in the credits of the show. Bridget McCarthy has been a senior producer with Hidden Brains since 2020 and she brought a tireless work ethic to
Starting point is 01:37:08 everything she produced. We're truly grateful to Bridget for her many, many contributions to the show. If you enjoyed these conversations with Daniel Kahneman, please consider supporting our work so we can continue to bring you many more interviews like this. To do so, you can become a member of our podcast subscription, just find us on Apple Podcasts and sign up for a 7-day free trial of Hidden Brain Plus. Or go to apple.co. slash hiddenbrain.
Starting point is 01:37:39 Subscribers receive access to exclusive episodes that you won't hear anywhere else. Thanks for your support. We truly appreciate it. I'm Shankar Vedantam. See you soon.
