Hidden Brain - Making the Most of Your Mistakes

Episode Date: January 1, 2024

When we're learning, or trying new things, mistakes are inevitable. Some of these mistakes provide us with valuable information, while others are just harmful. This week, we kick off the new year with... researcher Amy Edmondson, who explains the difference between constructive failures and those we should try to avoid. If you know someone who would enjoy this episode, please share it with them. And thanks for listening! We look forward to bringing you many new Hidden Brain episodes in 2024. 

Transcript
Starting point is 00:00:00 This is Hidden Brain, I'm Shankar Vedantam. In 2009, British businessman Philip Davison-Sebry was celebrating his wife's 50th birthday in the Maldives when he got a phone call. The caller asked for a business meeting the next day at 8 a.m. Philip explained that that would be a little difficult, seeing as he was 4,500 miles away from work on vacation. What are you doing away at a time like this? the voice at the other end of the line shouted.
Starting point is 00:00:30 Your company is in liquidation. Philip thought it was a joke in poor taste. In an interview with Wales Online, he recalled that the caller assured him that it was no joking matter. Here's what happened. A British government agency had reported the demise of Philip's 134-year-old engineering company, Taylor & Sons. The government agency, known as Companies House, serves as a kind of registrar for British
Starting point is 00:01:01 businesses. It said that Taylor & Sons, created in 1875, was being shut down. Turns out, a government clerk had made a typo. The company that was going out of business was Taylor and Sons in Manchester, not Taylor and Sons in Wales. Philip felt sick. His company had been doing well. It had some 250 employees.
Starting point is 00:01:29 Within days, he later said in that interview with Wales Online, his contracts dried up. Orders were cancelled. Creditors demanded to be paid. The government agency did correct the mistake after some days, but a debt spiral had taken hold. In time, Taylor and Sons actually did go out of business. Not all errors are so consequential, but some are deadly, and many have unpredictable effects. Wouldn't we all prefer that governments, organizations, and companies
Starting point is 00:02:02 avoid making mistakes altogether? That's an understandable response, but it turns out that demanding no errors might be the biggest mistake of all. This week on Hidden Brain. To air is human. When people work on things mistakes are inevitable. This is true in our personal lives, in our workplaces, and at the level of public policy. Not all mistakes are created equal though. Some failures are trivial, while others can be disastrous. At Harvard Business School, Amy Edminson studies how failures come about and what we can do about them. She has surprising insights into how organizations and people should think
Starting point is 00:03:05 about the mistakes they make. Amy Edminson, welcome to Hidden Brain. Thank you so much for having me. Amy as a young scientist working on your first major research project, you spent months collecting data from doctors and nurses at two local hospitals. The stakes here were high. I understand that you were tracking medical mistakes. Yes, we call them adverse drug events. So that is when something bad happens to a patient that is deemed caused by human error. And so I would get a phone call from one of the physicians in the study who would say, there's just been an event. And so we're going to take a look at what happened.
Starting point is 00:03:49 And so I would hop on my bike and ride down to the hospital and I'd find myself in a conference room and we would then sit around and hear from the perspective of different people who may have touched or been aware of the adverse event, and we try to truly understand what happened. So for instance, in one case, there was a patient that received a drug called lidocaine, and they were supposed to get a drug called heparin.
Starting point is 00:04:16 Now the two drugs were labeled similarly, and they were in the same location even though they do very different things. Now in this case, the lidocaine would not hurt the patient, but the absence of heparin might have led to real harm. It didn't. So these adverse events didn't always involve harm, but they always involved at least the potential for harm. Now, you became aware that some teams at these hospitals were making more errors, more mistakes than others, but besides tracking mistakes, you also were examining how teams
Starting point is 00:04:52 functioned and you found that some teams functioned better than others. Tell me about the components of teamwork that you measured? Well, I used a classic team survey called the Team Diagnostic Survey, and it measured such things as the quality of interpersonal relationships in the team, the team's own self-assessment of how well they were performing, the team's assessment of whether and the degree to which they had the resources they need to do their job well, especially interested in their assessment of the leadership of the team and how good was that leadership. And these are a set of factors that had been previously recognized
Starting point is 00:05:37 as important to team effectiveness. And you were also looking at how much people cared about their work and whether people felt like everyone was pulling their fair share of the weight. Yes, and I think that assessment is partly captured in the quality of relationships measure. But there also was the measures of their satisfaction with the work, how happy they were in their job. And all of these measures tend to travel together. So in a good team and a well-led team, they tend to be high on all of these factors and then not good team, they're low on all of these factors.
Starting point is 00:06:13 So you now have two sets of data, information on high performing teams, versus low performing teams, and information on teams that make few mistakes or lots of mistakes. I mean, it seems quite obvious what the answer is going to be, but what was your intuition about how the quality of teams would relate to the mistakes that teams were making? Well, my intuition was that better teamwork would lead to fewer mistakes or adverse events for patients.
Starting point is 00:06:42 This just makes sense, and the more I understood about the nature of patient care in a hospital, the more I realized how interdependent it was. First of all, it's 24-7 operations. So there's shift handoffs. And second of all, most patients are seen by multiple different caregivers through the course of their stay. And so the quality of the coordination and the collaboration ought to really determine the degree to which high quality care is given. So of course, I expected better teams would have fewer adverse events or mistakes. So the day comes when you're ready to analyze all of the data. You have a small computer disk with all the medication errors made by each team. Paint me a picture of what happened
Starting point is 00:07:33 that day, Amy. Well, I synced up the data on the disk with the data that I already had on the team properties in my computer and started to run the statistical analysis to connect those two data sources. And I just ran some simple correlations and I saw right away that the p-value, which indicates whether or not the finding is statistically significant,
Starting point is 00:07:59 I saw that the p-value indicated that my data was statistically significant. So I was very excited. And then I looked more closely, leaning into the screen, and I realized that the sign was in the wrong direction. In other words, instead of seeing a relationship between good teamwork and low error rates. The statistical significance was saying there was a relationship between good teamwork and high error rates. How could that possibly be, Amy? Well, that was my question. And I set there. I think I froze. I was upset and I was scared. And so I set there looking at it. And my first reaction was, I must have made a mistake.
Starting point is 00:08:45 I must have put the data in incorrectly. So I walked very carefully through everything I'd done. I redid it. And nope, maybe my hypothesis had been a mistake, but I had not made a kind of data entry mistake. And so there it was, staring at it again, and again puzzled. It just can't be true.
Starting point is 00:09:10 It can't be that better teams with higher quality relationships more able and willing to collaborate effectively, to coordinate clearly could have more, not fewer, adverse drug events. And I suddenly had a thought, you know, maybe better teams don't make more mistakes. Maybe they're more able and willing to report them. And I suddenly remembered that I had inserted an extra question that wasn't in the original survey that is stated as follows.
Starting point is 00:09:45 If you make a mistake in this unit, it's held against you, and it's rated on a seven point scale, from, you know, not at all to a great deal. And it turned out that that single item was profoundly significantly correlated with the actual error rates. So that meant that when people agreed with this item, making a mistake would not be held against you, the error rates were higher. That's not, you know, a perfect proof, but it certainly suggested that there was something in the climate of the team that would make it easier to speak up about and report
Starting point is 00:10:24 error. So you hired a research assistant to actually go to the hospital and observe firsthand how teams at the hospital were functioning, testing this hypothesis that better teams in fact were more willing to report the mistakes, and that's why they were showing up as having committed greater numbers of mistakes. What did the research assistant find? Well, first of all, I think it's important to point out that I did not tell him what the survey data said, nor what the error data said.
Starting point is 00:10:55 He only knew that there was a study of error going on. And I said, I just want your impression of what it's like to work in these units. I want you to observe them. I want you to observe them. I want you to interview them when they're on breaks and just learn as much as you can about these different work environments. Off he went. After a week or so, he came back. He said, they really are very different places to work.
Starting point is 00:11:22 Some of them, he said, his words were just far more open. And others, he said, again, his words were authoritarian in nature. In some units, people would say things like, if you make a mistake in this unit, you get treated like a two-year-old, or you get put on trial. So, you really don't want to have made one. Or the nurse manager who is essentially the boss of the unit, if you're a nurse, would get angry and treat you badly. In other units, even though they're only maybe across the hall or up a floor or two, he found people saying things like, well, in this unit, it's really easy to talk about mistakes
Starting point is 00:12:01 because of what's at stake. Patients' health is at stake. So of course, you're never afraid to speak up and you're never afraid to tell the nurse manager about what you see. I'm wondering how this insights started to change your thinking about the nature of mistakes in failure, Amy. Well, to begin with, I realized you can't learn from mistakes
Starting point is 00:12:22 that aren't reported. When we think about organizations and teams and the goal of learning from mistakes and learning from failures, job one is to make sure we're actually talking about them honestly and openly. And so that started me thinking that maybe there are differences in work environments in what I called then interpersonal climate.
Starting point is 00:12:48 And if the interpersonal climate differs, that would have real implications for people's ability to learn from mistakes and failures. Amy was starting to see that errors and failure are not always signals or disaster and dysfunction. When we come back, why failures are not created equal, how to tell them apart and what we should do about different kinds of failure. You're listening to Hidden Brain, I'm Shankar Vedanta. This is Hidden Brain. I'm Shankar Vedantam. When airline pilots make grave mistakes, planes crash and people can die. When surgeons make mistakes, patients can bleed out on the operating table or have the
Starting point is 00:13:41 wrong limb amputated. When you leave something in the oven too long, no one dies, but you will have to eat burnt cinders for dinner. There is a reason parents, teachers, managers and chefs try so hard to stamp out errors. Mistakes are costly, unpleasant and dangerous. At Harvard Business School, Amy Edmondson studies the science of mistakes. She's discovered that we make a big mistake when we lump all failures into the same bucket. Amy if you told the average leader of a company that you could completely eliminate all mistakes
Starting point is 00:14:20 at her company, she would probably be ecstatic. But you say that we're making a mistake in the way we think about mistakes. Why is it problematic to try to completely eliminate all failure? You said it at the outset to errors human. We are fallible human beings and we will always make mistakes. I don't mean we'll make mistakes all the time, but there is always the possibility that a mistake will occur. So a better approach is to think about how can we be set up to catch incorrect mistakes before they cause harm. And in some ways from the hospital study that you were mentioning to me earlier, when you
Starting point is 00:15:00 send a signal that failure is not going to be tolerated, what happens then is not that the failure stop, but that the failure stop being reported. Exactly. It's one of these profound insights that I think way too few leaders or even just people in families recognize that when you insist that we must have, you know, error-free performance or error-free lives. The main thing that happens is not that error goes away, it's that you stop hearing about it.
Starting point is 00:15:33 There's another approach to failure. In some ways, this is the polar opposite to the first approach. And this idea is popular, or at least used to be very popular, among tech entrepreneurs in Silicon Valley. And this was to celebrate failure. Fail fast, fail early was the model. Was this a better approach than leaders telling employees that failure was unacceptable? I don't think it's possible to say one is better than the other.
Starting point is 00:15:59 They are better for different contexts. So the fail fast fail often is a fantastic approach for a laboratory or for an R&D group. The, you know, let's adhere to the highest possible standards and try our very best to get everything right is how we want to run an operating theater. Right. If you're running an airline company, I'm not sure you want to tell your pilots,
Starting point is 00:16:24 fail fast fail often. Of course not. Imagine being the head of a factory making Toyota automobiles who decides, let's fail fast today. No, no, no, no. So, rather than a blanket rejection of failure or a blanket embrace of failure, you say that we need to stop treating all mistakes as if they are the same. And you cite the social scientist, Sim Sittkin, who once made the case for something he called
Starting point is 00:16:55 intelligent failures. What are intelligent failures, Amy? Intelligent failures are the undesired results of forays into new territory that are driven by a hypothesis and are as small as possible. And so, in a sense, an intelligent failure is an experiment that didn't produce the result you had truly wanted it to produce. In some ways, what that implies then is that intelligent failures are almost always failures that take place on the frontiers of knowledge or discovery. Yes, so there's two kinds of frontiers. One is the frontier of knowledge or discovery
Starting point is 00:17:37 in that we're talking about a place where no one has been before. And the other is the frontier. It's just new for you. Let's say you pick up a new hobby, you decide to take a ceramics class. That's new territory for you and you can expect some intelligent failures along the way, even if it's not new to the world. I want to come back and talk at greater length about intelligent failure. Later in our conversation, you have a series of very useful insights into how people and organizations can use intelligent failure as an engine for growth and discovery. But it may be helpful before we do that to be able to spot two other kinds of failure.
Starting point is 00:18:16 These are the kinds of failure we should in fact do our best to stamp out. Now the first can superficially look like intelligent failure because these failures can also take place while people are engaged in complex tasks on the frontiers of human knowledge and discovery. In 2003, the Space Shuttle Columbia broke apart upon reentry into its atmosphere, killing the Seven Astronauts aboard. You conducted an in-depth analysis of what happened to the Columbia when it broke apart. It was something that you ended up calling a complex failure. Tell me the story of what happened to me. Well, the shuttle had completely combusted on reentry into the Earth's atmosphere on February 1, 2003.
Starting point is 00:19:01 It was later determined that the reason for that is something called a foam strike. Now what happens is to get the shuttle out beyond the Earth's atmosphere, it takes off with the help of a solid rocket booster. And that has the energy to bring it out into space. And that is surrounded by insulating foam. And occasionally, little bits of that insulating foam would break off just because of the pressure of the launch. And sometimes those little bits strike the shuttle and make little dents, you know, just little
Starting point is 00:19:34 nuisance problems that would lead to maintenance later on to fix them up. But in this case, there was a rather large piece of foam that dislodged and hit the shuttle on a delicate spot, the leading edge of the wing. And so it unfortunately made a larger hole, a hole the size of a human head. Now the hole that size in the shuttle, as it re-entered the Earth's atmosphere, allowed all the hot gases of the atmosphere in and led to instant combustion. Now, tell me a little bit about what happened in the days leading up to the launch, because at least in retrospect when people went back and did the investigation, they tried to follow
Starting point is 00:20:17 the breadcrumbs and ask, could we have known what was going to happen before it actually happened? what was going to happen before it actually happened. Well, yes, and in this case, unlike the even more famous challenger incident, no one had any worries leading up to the launch, which was January 16, 2003. But on January 17, the day after the launch, an engineer named Rodney Rocco was looking at the launch video, and he saw just a grainy spec on the screen that bothered him. Because he thought that grainy spec might be a foam strike, and the very fact that it couldn't really make out what it was, but the very fact that he could see a spec at all suggested to him that the chunk, if it were a
Starting point is 00:21:05 foam strike, it might be big enough to do real damage rather than just create a nuisance and a maintenance problem. And so that worried him. Now, that was about 15 days before reentry. So theoretically, NASA had 15 days to kind of figure out whether there had been a real problem and if so, whether there was an alternate plan to the simple reentry that was part of the schedule. In other words, could they find out A, is there really a problem and then B, if so, is there anything we could do about it in that 15-day window?
Starting point is 00:21:41 But unfortunately, Roka and his immediate colleagues were never able to get senior managers at NASA to take the problem seriously, to really believe that there was a problem. Was this partly because there had been other foam strikes that turned out to be fairly minimal routine maintenance kinds of issues? Exactly. So unfortunately, people at NASA had learned to equate the foam strikes that did happen with just maintenance. They're not a safety risk, but they had had so many of these little tiny foam strikes that they didn't think it was worth looking into. And in some ways this is sort of understandable, even though in retrospect we know this was a mistake,
Starting point is 00:22:29 I mean, if your shuttles have been returning safely despite these form strikes, it's quite understandable how people could have become plazate to them. It's completely understandable. Their own experience had taught them that it was fine. I have enormous empathy for everyone who was a part of that shuttle program, who believed it to be fine. Because I'm that way too.
Starting point is 00:22:54 I'm a fallible human being who overly trusts my prior experience and often fails to be curious enough about, well, maybe this one's different. Ooh, let me look into it. Let me see what I can learn. So it's tragic, but there are no bad guys here. You point out that complex failures are often not the result of one big cause, but rather a number of small factors that line up perfectly in this perfect storm, as you called it. I understand that in the healthcare arena, these kind of complex failures are
Starting point is 00:23:25 sometimes called the Swiss cheese model of failure. Explain that term for me, Amy, and explain how you use this analogy to analyze a case in which a young patient received a dangerous overdose of morphine. That's right. And the Swiss cheese metaphor comes from an erythearist named James Reason from the UK. And he uses this metaphor of Swiss cheese to try to explain the notion of complex failures. He says, you know, when your cheese has air bubbles in it, those are in a sense defects in the cheese, but they're not problematic until they line up and make a tunnel. Just rarely happens, but when it happens, then the error goes all the way through.
Starting point is 00:24:07 So in the case of this morphine overdose, I was able to analyze seven factors contributing to the accident. So to begin with, there had been an overflow in the intensive care unit where most post-surgical patients go, and this boy had just had surgery. So he was sent to the regular medical floor, which has less specialized staff. So that's one factor.
Starting point is 00:24:31 Now, that by itself would not lead to this kind of overdose. But unfortunately, there was a brand new nurse right out of school who was assigned to take care of him. And then there was an infusion pump that's used to deliver this pain medication. And it happened to be located in a rather dark corner, making it a little harder to see. And the nurse was also hadn't done this kind of programming before, so we asked for a colleague to help.
Starting point is 00:25:00 She stopped by to help, but she didn't do her calculations independently. She just looked over his shoulder and verified his. And then finally, the medication label was printed badly by IT and a little difficult to read. So that contributed to them not able to determine the concentration of the drug, exactly right. And so all of those holes in this Swiss cheese lined up and let this overdose go through. Fortunately, it was noticed very quickly and they called the physician and instantly delivered a drug to help correct the error. But it's the kind of story that is unfortunately common in healthcare, but especially in any complex system.
Starting point is 00:25:52 And in some ways they made it, it points out to me the importance of allowing people to speak up about problems that they're seeing and also to take a systemic view of problems rather than looking for the smoking gun approach to problems. It's absolutely right and our tendency is to look for that smoking gun. Our brains are used to looking for the single cause, the small part that broke rather than to back up and see how the parts are relating to each other and coming together in a way that created the failure or created the flaw. And it's a discipline to sort of realize that there's multiple factors
Starting point is 00:26:36 and in order to prevent complex failures, speaking up is essential. People need to know that their voices are welcome, because all you have to do is catch and correct one of the many factors, and you've prevented the failure. It is possible to implement procedures that ensure that errors are caught and corrected early. Tell me how this is done by the Carmaker Toyota. Toyota is one of the best examples of doing this well and probably the best practice that
Starting point is 00:27:18 is maybe even the most famous of the Toyota production system is something called the Andon cord. And that is a literal cord that any team member is encouraged to pull whenever they see something wrong, or even more importantly, whenever they see something that might be wrong. Now many people think once you pull the cord, the line instantly stops. It doesn't. When you pull the cord, what happens is that a team leader comes quickly over and says, what do you see? And you explain. And the two of you together diagnose what's happening. And most of the time, it turns out that you can either fix it or recognize that there wasn't a real problem and the line keeps going. But one out of 12 times, there is a real problem there. The line will stop and it won't start again until that problem is fixed. So that
Starting point is 00:28:13 prevents the complex failure of some small problem moving on downstream and we're pouring good money after bad at that point. Because when you stop the assembly line, when you're stopping production, it is costly. But I guess in the long run, the benefit is that the production becomes a higher quality production, and over time, you're starting to stamp out more and more mistakes. Exactly. So, yes, it's absolutely costly in the short term. If the line stops for a minute, that is literally the loss of one car sale. So you are allowing a frontline associate to cost the company several thousand dollars any time they wish. And of course, as your question
Starting point is 00:28:54 suggested, they understand that this is money well spent. This is not a cost. This is an investment because every time we can stamp out small problems along the way, we are less at risk for producing anything less than a perfect car with high quality that will serve that customer well for years and years. We've talked about intelligent failures and complex failures. In some ways at the bottom of your taxonomy of failures are what you call basic failures. You encountered one of these failures when you went sailing a few years ago. Tell me what happened, Amy. Well, I had signed up for an alumni regatta at Harvard. And when I got there, I realized kind of to my horror that everybody else there
Starting point is 00:29:45 was about two years out of school, not several decades as I was. But that was okay. I can do it. I'm a good sailor, right? You know, I'll do the best I can. So off I went and I was thrilled to not come in dead last in the first race. After the second race, there was a break. Where all the boats go back to the dock before the regatta continues. The dock was dead downwind, which means that if you're in a sailboat, the boom on the sailboat holding up the sail is as far out as it can possibly go. And if you're dead downwind and the wind shifts even just a little bit, the boom is at risk of flying over to the other side.
Starting point is 00:30:28 So any experienced sailor and I am an experienced sailor knows that. So we all know that when you're heading dead downwind, you had better be vigilant because you are at risk. Now in the Charles River where this regatta was, the wind is notoriously shifty. But the race was over, we're just heading into the shore, so I'm chatting with my crew, a little bit relaxed, and all of a sudden the boom flies across the boat with a little wind shift, knocks me out. Next thing, you know, I'm in the freezing cold Charles River. Oh god, you know, I'm in a freezing cold, Charles River. Oh, God, you're not got knocked overboard.
Starting point is 00:31:06 I got knocked overboard. It's some early May, the water is probably 40 degrees. Fortunately, my crew was a fantastic sailor. You know, quickly turns the boat around to come get me. I climb in the back, the stern of the boat, and then I just see it, all the blood everywhere in the hull that's coming out of my head. So, I've led to nine stitches in the side of my head, and it was a basic failure.
Starting point is 00:31:29 It was a small moment of inattention, looking away, not paying attention, just being overly casual when, in a sense, I'm operating dangerous machinery. And in some ways, I think what I hear you say is that the situations in which we tell ourselves, you know, I can do this in my sleep. These are the situations in which basic failures can often happen. That's exactly right. Anytime you hear yourself or someone else saying, oh, I can do this in my sleep, watch out.
Starting point is 00:31:59 You can't. I understand that a simple tool like a checklist can be very effective when it comes to preventing basic errors. We've actually talked about the power of checklist on an earlier episode of Hidden Brain. You've pointed out Amy that sort of just having a checklist is not enough. You actually have to do it mindfully. And you described the story of an airline crash that took place some years ago. Tell me the story of what happened. So it was a freezing cold January morning in Washington DC, and Air Florida flight 90 was headed for Fort Lauderdale, back home, I guess. And unfortunately, they crashed into the Potomac River. Why did that happen? Well, it turns out, yes, they use the checklist.
Starting point is 00:32:46 And most of our listeners are probably aware that the checklist includes the item anti-ice. When the first officer said as part of the checklist, anti-ice, the captain said habitually off. It's Air Florida. I think most of their flights would not be using the anti-ice machinery. The first officer went on to the next item on the checklist. That was a tiny but crucial and catastrophic mistake that led to this terrible failure. Because in fact, you did want to deice the wings before taking off on that very cold day in Washington DC. That's exactly right. You wanted to deice the wings and take the time that is required to do that to have a safe takeoff. Amy's work on how to generate fewer basic and complex failures makes sense.
Starting point is 00:33:43 We all want to see fewer errors in hospitals and in space shuttle missions. When we come back, in a world where failure is generally stigmatized, how to get people and organizations to do the hard thing, systematically identify places where they can fail intelligently. You're listening to Hidden Brain, I'm Shankar Vedanta. This is Hidden Brain, I'm Shankar Vedanta. Amy Edmondson studies the science of
Starting point is 00:34:23 failure. She's the author of right kind of wrong, the science of failing well. In her research, she has found that we make a mistake when we lump all mistakes into the same bucket. There are indeed many kinds of failure that we should actively try to stamp out, but taking an axe to all mistakes ignores the fact that some failures are actually useful. They are what Amy and her colleagues call intelligent failures. Amy, you found that intelligent failures happen in very specific circumstances. The first is that the failure happens in new and uncharted territory. I understand that you spoke to a prominent chemist, Jennifer Heemsstra,
Starting point is 00:35:06 and like many scientists, she does work on the frontiers of knowledge. I mean, she's trying to discover things that we don't know already. What did she tell you about what failure looks like in her lab? Well, she has a wonderful way of leading her very thriving laboratory. She tells her students and young scientists, we're going to fail all day. And she says failure is a part of science. And of course, she's right. And what she means is, if we are on the leading edge of our field, and that's where we hope to be, we're going to have some very smart, well-informed hypotheses, but many, if not most of them, in fact in her view, 90% of them will end in failure. They will have been wrong. But each of those failures is in itself an important discovery.
Starting point is 00:35:52 It lets them know what didn't work so that they can then quickly rethink and try the next one and hope that that might work. In some ways, it's like wandering around in a dark room with your arms outstretched and you're searching for the door, and each stab in the dark might not give you the door handle, but it tells you where the door handle is not. And eventually, if you make a sufficient number of failures, you're likely to eventually find the door handle. I think that's a wonderful image.
Starting point is 00:36:22 And you can take that image into so many different aspects of your life, whether that's finding a life partner or innovating in how you do your work or in the kitchen, trying a new dish. And eventually you'll find that door handle. You see that blind dates are examples of intelligent failure because such failures are inherently unpredictable. What do you mean by this?
Starting point is 00:36:47 If you are looking for a partner, particularly, let's say, a blind date, whether from a mutual friend or an app, you go out and you're going to meet someone for the first time. There's no real way to know for sure in advance whether this is going to be thumbs up or thumbs down. There is an opportunity here. You hope to meet someone and get along with them. You've done as much as you can to figure out whether this is a viable possibility. And so off you go, and if it's a failure, it's an intelligent failure. I understand that you know a couple of people in your own family who corded this kind of failure.
Starting point is 00:37:29 Tell me the story of your mom Mary and her friend Bill. Well, so my mother grew up in New York City in the same neighborhood as a boy named Bill. They were good friends. And they just stayed in touch, even after they went off to college. My mother went to an all-women's college called Vassar. Bill was then an all-male college called Princeton, and he had a friend and named Frank. And he said, Bill thought it would be fun for my mother to come down
Starting point is 00:38:00 on one of these weekends that were organized with dances and so forth. And he said, you know, Mary, why don't you come, I think you'll like this guy. So my mother came and spent the whole weekend at Princeton and she had a terrible time. She didn't like him one bit. He drank too much. He was too forward. She wished she had stayed home at Vassar doing her homework. So it was a failure. It might not doing her homework. So it was a failure.
Starting point is 00:38:25 It might not have been intelligent, but it was a failure. Then, Fass forward a year or so and Bill says, I have a friend I want you to meet and she's thinking, no way. And he says, no, really, I really think you're going to like this guy. He's the brother of a woman I'm dating and I really, really want you to meet him. And my mother probably intuiting the strategy of small losses, agreed to meet this other guy for a drink, right? Not a weekend away, but a drink and loan behold, this new guy named Bob and Mary hit it off right away, really liked each other, had a great deal in common and they ultimately married and those are my parents.
Starting point is 00:39:17 Wow. And you can see how the intelligent failure in the first date, in some ways prompted your mom to do something very smart in the second date, which is to limit the size of the potential failure. So instead of spending a whole weekend with Bob, she met Bob for a drink. Can you talk about how this is also one of the markers of intelligent failure, which is you try to make the failure as small as you possibly can? It's so important that we minimize the waste.
Starting point is 00:39:47 It's how much time, how many resources, how much of an investment do you have to make in an unknown outcome to get the information you need to then go forward? And of course, in a non-personal domain, you can see how organizations might be able to put this inside into practice. If you're training pilots, for example, maybe the smart thing is not to put them in a plane on day one, but to put them in a simulator. Exactly. Don't give them 200 living passengers who are depending on their skill to land the plane safely, put them in simulator. And in fact, you can give them all sorts of incredibly challenging scenarios and see
Starting point is 00:40:29 how they do. Can you think of any other domains, Amy, where organizations can try and help people fail, but fail small rather than fail big? Well, in every organization, the research and development group is trying to develop new products and services that will in the future be the source of revenue. And you have experiments. You say, well, maybe this will work. Maybe they'll like this. Maybe this technology will work. And you're more often than not, you're mitigating the risk of customers thinking badly of you because you're having a failure, because you're having it in the R&D department, not out in the marketplace. A third marker of intelligent failure is that we have a very clear goal in mind.
Starting point is 00:41:17 And a vivid example of this comes from the inventor Thomas Edison. Tell me about his intelligent failure or rather his intelligent failures, Amy. Thomas Edison was dogged in his pursuit of the inventions that he knew or he believed were possible and he knew could really change lives. So he's of course credited with inventing the incandescent light bulb. And he tried literally thousands of different materials and techniques that one after another, they didn't work. They had all sorts of problems. But he didn't give up, he just kept going. And when a lab assistant, I think, attempting to be supportive,
Starting point is 00:42:03 said, gosh, all these failures must be very hard for you, sir. He responded, failures, I haven't failed. I've just found 10,000 ways that don't work. This is the very definition of intelligent failure here. Yeah, he completely understood the concept of intelligent failure. You have a goal, you've done your homework, you've used the knowledge you have from your prior experiences, from available literature, from you know, everything you can get your hands on, and then you reach that point where there is no way to go forward other than by trying it out, acting, having an experiment, and then
Starting point is 00:42:40 lo and behold, again, you were wrong. But fortunately, you're wrong behind closed doors or you're wrong at a small scale. You know, you haven't caused real harm. You haven't blown anything up. Another factor involved in Integent Failures is something you alluded to a second ago, Amy. And that is that we have done our homework. What does this turn mean?
Starting point is 00:43:07 Well, it means different things in different contexts. If you're a scientist hoping to make an important discovery and publish in a journal, it's about being up to date on the most recent research so that you're really in new territory. If you're going on a blind date with a friend of a friend, you have asked that friend, well, tell me about this person, right?
Starting point is 00:43:30 What do they like to do? You've found it as much as you can before agreeing to go spend time with them. You mentioned the work of the chemist, Jen, Hiemstra a little while ago. She and her team really did their homework when searching for a way forward to isolate nucleic acids. Can you explain what they were trying to do and what they did? Well, they were trying to get RNA, the double-stranded helix of RNA, to separate,
Starting point is 00:43:58 and they tried various different reagents, which didn't work, and each of these experiments that fail is a disappointment. And yet, they doggedly continued. Then this young scientist named Steve Nudson in her lab went to the literature and found a rather obscure paper from the 1960s, and it was talking about a reagent called Léoxol. And because of the properties of Léoxol, Steve hypothesized that it might work in their goal to separate the strands of RNA and low and behold, it worked. So really, I think the picture I'm getting from UAE is that, you know, intelligent failures
Starting point is 00:44:43 are not about gamblers or adventurers. In fact, they're taking chances while trying to de-risk things as much as possible. Yes, they want to succeed, but because they have chosen to try to succeed in new territory, can whether that scientific research or a blind date, they're willing to do the work. You mentioned something else a second ago as well, which I think is really germane here, which is that you need a really strong stomach to tolerate repeated failures because if you don't have that capacity, it's very difficult to practice intelligent failures. Yes, and you could think of that as a part of wisdom and a part of building
Starting point is 00:45:27 characters that ability to withstand the setbacks that are just ordinary parts of our jobs, their parts of our lives, and it's essential to appreciate how necessary they are, especially if you are trying to pursue great things. If you're trying to make a contribution, if you're trying to live a full life, you have to learn to reframe the setbacks as necessary as part of being human and not as bad. I don't like it too. Oh, that's disappointing, but it's okay. Yeah, I've discovered 5,999 ways that you cannot build a light bulb, that these are all discoveries, not failures.
Starting point is 00:46:13 Yeah, I mean, if Edison was sincere, and I'm gonna believe that he was, he truly had learned how to think of each and every one of those failures as valuable information, as true discoveries in their own right. So that kind of cheerful attitude can seem polyanna-ish, but I disagree, right? I think it's actually scientifically valid.
Starting point is 00:46:40 And perhaps a larger lesson is that many companies and many individuals should be seeking out intelligent failures more often. I think especially once you're an adult and once you have maybe some success or some area of expertise, it is so tempting to want to avoid failures at all costs, right, to not want to get anything wrong. So that would lead you to basically close yourself off from adventure, right, to not pick up a new sport or make new friends or pick up a hobby or shift careers in some way that might be meaningful, but scary to you. So, but then if you have a framework that says, well, it may not go well right away, but it will be an intelligent failure because, of course, I'm not supposed to be expert at that yet.
Starting point is 00:47:30 And the goal will be to learn. It helps you think, I think, a more productive and thoughtful way about new experiences so that you can welcome them rather than be reluctant to jump into them. Amy Edmondson teaches at the Harvard Business School. She's the author of Right Kind of Wrong, the Science of Failing Well. Amy, thank you for joining me today on Hidden Brain. Thank you so much for having me.
Starting point is 00:48:19 Hidden Brain is produced by Hidden Brain Media. Our audio production team includes Bridget MacArthur, Annie Murphy-Paul, Kristen Wong, Laura Quarelle, Ryan Katz, Autumn Barnes, Andrew Chadwick, and Nick Woodbury. Tara Boyle is our executive producer. I'm Hidden Brain's executive editor. For today's Anson Kiro, we're bringing you a story from our sister show, My Anson Kiro. It comes from Susan Prescott. Her Anson Kiro is her 12th grade English teacher, Fred Demayo. One day, he assigned everyone a poem to recite in front
Starting point is 00:48:50 of the class. And I was terrified. I had a mild stutter, and I thought there is no way I'm getting up there in front of my peers and speaking. So I went home, and I told my mother how I felt and she wrote me a note asking me to be excused from doing the assignment in front of the whole class. So the day of the public speaking assignment, I stayed after school. So I did instead of giving it in front of my peers. I gave it to him one-on-one after
Starting point is 00:49:26 the school day. And we sat down and I recited my poem. And I don't remember if I started, but he looked at me when I was finished and he said, what was wrong with that? And I just sat there and he said, I liked listening to your voice. And I had never heard that before. I think in his mind, it was so minor and he wanted me to understand I have nothing to be afraid of. And I didn't realize how empowering that would be for me. And I
Starting point is 00:50:08 never thanked him. I graduated and I just moved forward like a 18-year-old person will do. When I graduated from college, the second job I had was being a corporate trainer. So I stand up in front of people and I speak and I do it all the time. And if I do stutter once in a while, big whoop! And I'd like Mr. Dimeo to know that he truly is an unsung hero because he played a big role in my very successful career in my life.
Starting point is 00:50:51 And that was life-changing. I don't know where I would have gone if I felt like I had to keep my voice quiet because I was afraid of embarrassing myself. I'd like to give him my thanks for that kindness. Listen us Susan Prescott. You can hear many more stories like this on the Myung hero podcast or at hiddenbrain.org. If you liked today's episode, please share it with one or two people who you think would enjoy it.
Starting point is 00:51:32 Happy New Year from all of us at Hidden Brain. I'm Shankar Vedantam. See you soon. you
