No Stupid Questions - 125. Should We Replace Umpires With Robots?
Episode Date: December 4, 2022
What do gamblers and referees have in common? When do machines make better decisions than people? And has Stephen been replaced by a computer?
Transcript
We humans seem to like each other, except when we're hating each other.
I'm Angela Duckworth.
I'm Stephen Dubner.
And you're listening to No Stupid Questions.
Today on the show, should we replace human umpires with robots?
If the purpose here is to get it right, then why on earth would we even want to have the humans around for that?
Stephen, I have a question for you that comes from watching my husband. Are you ready for this one?
I'm not sure, now that you put it that way.
First of all, he's fully clothed in these observations and he's seated in our living room and screaming. So now he's actually
probably standing up yelling at the television during the World Series, which, you know,
is now in the rearview mirror, unfortunately. Is this because he's a Philadelphia sports fan?
Because they are the worst, the yellingest, the loudest. But he doesn't strike me as that.
I mean, he's a sports fan and he's in Philly and he's rooting for Philadelphia teams. We do have
a reputation. Is that why he's yelling, though? The Phillies did lose the World Series this year.
He yells in other games, too. Basketball games, for example. And what he's yelling at
is the umpire in the baseball game or the ref in the basketball game. And what he asked me the other day was, like, why do we have these fallible human beings in charge of such consequential decisions when we live in the era of artificial intelligence?
And, I mean, why do we have human umpires at all?
I guess that is the question.
We should probably just explain what human umpires do. So in baseball, there are four umpires in a regular game, although for the playoffs, they use six. They put two more in the outfield, as if to say, well, four isn't enough when it's important, we need more, which is kind of a whole other crazy thing. But typically, there are four: one behind home plate, then a first base, second base, and third base umpire. But really, when people talk about the ump, they're talking about the home plate umpire.
Yeah, that's the only one I see. I didn't even know there were three other ones.
They're kind of blending in out there. But the home plate umpire is very visible because he is right there in the action. He's crouching behind the catcher. So for someone who doesn't know
baseball, here's the way it works. Which would be me. So this is good.
So there's a pitcher. You know what the pitcher does? He's pitching the ball. Stands on a mound. He's 60 feet,
six inches away, which sounds like a lot. But when they're throwing 95, 100 miles an hour,
it's really not a lot. Then there's the batter. And then there's the catcher behind the batter.
Then right behind the catcher is the umpire. Then there's this rectangle that is supposed
to represent the strike zone.
It's an imaginary rectangle.
And it extends from, I believe it's supposed to be from the armpits of the batter down
to the knees of the batter.
That's the vertical.
And then the horizontal is supposed to cover the width of home plate, which is this five-sided slab of plastic that's in the ground.
So you can imagine that the umpire is crouching behind the catcher,
imagining this rectangle, armpit to knee, left side of the plate, right side of the plate.
And if a pitch crosses the plate within that frame, it's supposed to be a strike.
And if it crosses the plate outside of that, it's supposed to be a ball.
Now, even crossing the plate is tricky because the ball is moving.
It's usually dropping. And it can be going to the left or the right. And so to be really precise, as you can imagine, can be really hard. I should also say if a batter swings at the pitch
and misses, then it's a strike. Regardless of whether it's in the strike zone.
Exactly right. And if they foul off a pitch, that counts as a strike and so on.
I do know you have three strikes before you're out
and four balls before you get walked.
Excellent.
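[For readers who want the rule pinned down, here's a minimal sketch in Python of the geometric ball/strike check described above. The coordinate system, function names, and zone limits are illustrative assumptions, not Major League Baseball's actual tracking system.]

```python
# Illustrative only: a robo-ump's ball/strike decision reduces to a
# point-in-rectangle test at the moment the ball crosses home plate.

PLATE_HALF_WIDTH_FT = 8.5 / 12  # home plate is 17 inches wide

def call_pitch(pitch_x_ft, pitch_z_ft, zone_top_ft, zone_bottom_ft, swung_and_missed):
    """pitch_x_ft: horizontal offset from the center of the plate.
    pitch_z_ft: height of the ball as it crosses the plate.
    zone_top_ft / zone_bottom_ft: this batter's vertical zone limits
    (described in the episode as armpits to knees).
    swung_and_missed: a swing-and-miss is a strike no matter where
    the pitch was."""
    if swung_and_missed:
        return "strike"
    over_plate = abs(pitch_x_ft) <= PLATE_HALF_WIDTH_FT
    in_zone = zone_bottom_ft <= pitch_z_ft <= zone_top_ft
    return "strike" if over_plate and in_zone else "ball"

# Example: a pitch 4 inches off-center at 2.5 feet high, not swung at.
print(call_pitch(4 / 12, 2.5, zone_top_ft=3.5, zone_bottom_ft=1.5,
                 swung_and_missed=False))  # -> "strike"
```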
So you can imagine that the batter cares a lot about
whether a pitch that he doesn't swing at
is called a ball or a strike.
And the pitcher also cares.
The catcher cares.
The team cares.
The fans care.
And you are right.
There's a ton of research showing umpires are quite fallible.
So we did a piece on Freakonomics Radio a few years ago about what's called the gambler's
fallacy.
You familiar with that phenomenon?
You know, I have heard that defined in different ways.
So what is the Stephen Dubner definition?
OK, so let me take my shot at defining it.
This is based on really nice research done by Toby Moskowitz, who's an economist now at Yale.
He co-authored a paper with Daniel Chen and Kelly Shue.
It was called Decision Making Under the Gambler's Fallacy: Evidence from Asylum Judges, Loan Officers, and Baseball Umpires.
So if you think about those three categories of people, judges, loan officers, and umpires, they all have the authority to basically say yes or no.
And they're like all ultimate authorities. You don't, I think, easily appeal. I guess
you could try, but generally their word is their final decision.
Yeah. And so the gambler's fallacy has to do with the way that we mistake how probability really
works.
Many of us, even really smart people, we find patterns that don't exist or we look for patterns
where they shouldn't exist.
Let's say you're at the roulette table and you're playing red and there are three spins
in a row that come up black.
The fourth spin, is it any more likely to come up red than the previous ones?
No, it's a totally independent event.
But we like to tell ourselves that, well, there were three blacks, so the next one is
more likely to be red.
And so we are constantly miscalculating probabilities in that way.
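[A quick way to see this: simulate a fair roulette wheel and check whether red is any more likely after three blacks in a row. This is an illustrative sketch assuming a single-zero wheel with 18 red, 18 black, and 1 green pocket.]

```python
import random

def spin():
    # Single-zero wheel: 18 red, 18 black, 1 green pocket.
    return random.choices(["red", "black", "green"], weights=[18, 18, 1])[0]

n = 1_000_000
history = [spin(), spin(), spin()]
red_overall = 0
after_streak, red_after_streak = 0, 0

for _ in range(n):
    result = spin()
    red_overall += result == "red"
    if history[-3:] == ["black", "black", "black"]:
        after_streak += 1
        red_after_streak += result == "red"
    history.append(result)

print(f"P(red), all spins:          {red_overall / n:.3f}")
print(f"P(red | 3 blacks in a row): {red_after_streak / max(after_streak, 1):.3f}")
# Both come out around 18/37 = 0.486: the streak tells you nothing.
```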
And the way that the gambler's fallacy would apply in the case of like a baseball umpire
or an asylum judge or a loan officer would be that our minds seem to want to toggle a little bit.
We don't want to have unnatural patterns. And so what they found in their research is that a judge
who had granted asylum to, let's say, two asylum seekers in a row would be more
likely to reject the next one, even though the evidence might have been in favor. And the same
for loan officers and the same for baseball umpires. In other words, if there are two strikes
called in a row and the third pitch is pretty close, maybe even in the strike zone, the umpire becomes more likely to call it a ball. There's something in the human mind that makes us a little bit reluctant to create these patterns that don't make sense to us.
And so there are two problems then with the human umpire.
Number one is they are susceptible to the gambler's fallacy, but they're also just not
that good.
And when I say not that good, they're way better than you or I would be.
Right.
They're experts.
But they're much worse than a computer would be.
So what the researchers did is they looked at thousands and thousands, maybe hundreds
of thousands of pitches over time.
They looked at all these pitches where the batter didn't swing.
And they found that on the obvious balls and the obvious strikes, the umpires were basically
100% correct. If the pitch is right down the middle and the batter doesn't swing and they
call it a strike, they're almost always right on those. But you hardly even need an umpire for those.
Exactly. But here we go. This is Toby Moskowitz. He's saying that on pitches that are just outside
the strike zone, they're definitely balls, but they're close.
On those pitches, he says, umpires only get those right about, what percent would you say, Angie?
What would you guess? Oh, gosh. And this is comparing what the umpire says in real time
with like careful review afterwards. Is that right? Exactly. And these are pitches that are
just outside the strike zone. I don't know. I'm going to give them like 90 to 95 percent because they're experts. It's all they do.
That's a very, very, very nice and generous assessment. But the actual number is 64 percent.
Wow. They get a D. I was going to give them an A.
So their error rate is 36 percent.
That's shocking.
But I will say that in baseball, there are enough people who are as frustrated as Jason
that there's been a lot of movement toward automating it.
And in fact, there are these kind of robo-umps.
Are there really?
There are.
I haven't seen any of these in real life yet, but I will tell you, they have worked their
way up through the minor leagues in baseball, and it's estimated that they may come to major league baseball as early as 2024.
It's not like an actual, you know, robot, like Rosie from the Jetsons?
That's a good question.
From what I know, it could go either way or a variety of ways.
In other words, you could actually have what looks like a robot out there making the call,
but it would basically be, you know, a series of cameras or radar or whatever it is.
But one way that I've read it may be done, which would make it perhaps more acceptable to the very tradition-bound game of baseball, would be that there still would be a human umpire who would have the assistance in real time of the cameras and the detectors, and basically have an earpiece in so that he would get immediate confirmation of what the right call is. Now, you might say, well, that seems really
stupid. Why do you want to have the humans still out there if the human is using the computer
information? But this gets us into the notion of how comfortable humans are with
computers or artificial intelligence or machine learning in all these different aspects of our
lives that we're used to doing ourselves. This could go from medical diagnosis to cars and
autonomous travel to even having a robo-ump. So I think then we get into your
territory of psychology, which is thinking about how people feel about relying on technology
for things that they're used to doing for themselves. So what can you tell us about that?
Well, I started thinking about this in a serious way when a graduate student of mine named Benjamin Lira got very interested in artificial
intelligence approaches to analyzing data. And we were working on this data set. The data set was
huge, and it was college admissions data. And it just dawned on both of us quite early that using a sophisticated AI algorithm or even a really primitive algorithm would probably be better
in many cases than relying on one idiosyncratic, sleep-deprived human being.
This is an idea that goes back. I mean, if you look at Danny Kahneman's first job
when he was in the Israeli army, this is decades and decades ago.
And his task was, you know, help us figure out who to promote in the Israeli army.
He immediately recognized that the problem was that human beings were in charge of these promotions.
Yeah.
And they were doing what human beings do, which is kind of pulling out of the air the things that they cared about on that particular day, never writing down why they made that decision, never justifying it, and then having no systematic approach across the whole army.
And maybe telling themselves stories ex post about why that was the right decision when they were really just stories. He could see immediately, even if he didn't have the phrase confirmation bias, that once you had made
a decision that, yep, that soldier deserves to be promoted, that you would then be searching
for evidence to confirm that your original intuition was true. And his answer to this,
his antidote, if you will, didn't require neural networks, deep learning, computers,
or anything else. He just said, hey, are there criteria that
we can write down on a piece of paper, maybe say half a dozen things that we think somebody who
deserves a promotion ought to exemplify? And once you write that down, you know, conscientious,
takes feedback well, empathic, you know, strategic, whatever it is, you then spend just a few more
moments saying like, well,
what does that look like? What would that look like to have? What would that look like to lack?
And just that little exercise of having a systematic approach of what we're looking for,
what it exactly looks like is really what we would call today an algorithm, right? It's a formula,
as opposed to leaving it to, you know, the thoughts that are racing
through your mind at that particular moment. And it turns out that that was not an easy sell
to the Israeli army. It was like, wait, what? You want to have a systematic, rigid approach
to promotion? Well, what about human judgment? What about all the intangibles? What about all
the things that can't be articulated? But, you know, Danny Kahneman's pretty persuasive, and he was able to convince the Israeli army to adopt what you could say is, like, the most primitive of algorithms.
The algorithm aversion that people speak of today is a kind of fundamental distrust and dislike of robots or computers taking the place of human beings when making important decisions.
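[A sketch of what that "most primitive of algorithms" might look like in code, using the four criteria Angela names above. The rating scale and totals are invented for illustration.]

```python
# Score every candidate on the same written-down criteria, instead of
# whatever happens to be on the evaluator's mind that day.
CRITERIA = ["conscientious", "takes feedback well", "empathic", "strategic"]

def promotion_score(ratings):
    """ratings: dict mapping each criterion to a 1-5 rating, collected
    the same way for every candidate."""
    return sum(ratings[c] for c in CRITERIA)

candidate = {"conscientious": 5, "takes feedback well": 3,
             "empathic": 4, "strategic": 4}
print(promotion_score(candidate))  # 16 out of a possible 20
```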
I think there are a lot of dimensions that people may dislike about that.
I mean, part of it is we humans seem to like each other, except when we're hating each other.
I was going to say, I don't know.
We also seem capable of hate, but we have a fondness for each other that I think we
don't have for, you know, our laptop.
There is a kind of emotion that we reserve for living things and especially other people.
I think it's already well established that computers are much better at reading mammograms
than the humans who interpret
them. Just think about the sheer volume. You can program a computer with millions of mammograms
that show a positive result and millions that show a negative result. And no human could do
anything close to that. Right. A radiologist is only going to see how many mammograms in their career?
But if you're Google, you can see all of them.
It's also really hard to update the priors and to keep current the learning on a human
radiologist. As much as we have, you know, adult licensing and relicensing and so on,
computers just learn that kind of thing.
Continuing medical education credits.
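[A toy sketch of the training idea being described: fit a classifier on many labeled examples, then score a case it has never seen. The data here are random stand-in feature vectors, not real mammograms, and scikit-learn is just one convenient choice.]

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in data: 10,000 "cases" summarized as 10 numeric features each,
# labeled 1 (positive finding) or 0 (negative).
X = rng.normal(size=(10_000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=10_000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The payoff: the model can now score a case it has never seen.
new_case = rng.normal(size=(1, 10))
print(model.predict_proba(new_case))  # [P(negative), P(positive)]
```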
Yeah. So, Angela, I would love to hear from our listeners about an area
in their lives where they are eager to accept more automation, or an area where they hate the idea.
Sure. Make a voice memo. Don't make it too long. Tell us your name, where you live,
stuff like that. Do it in a nice, quiet place. A lot of listeners get so inspired by this call-out for voice memos that they record while, for instance, running, or while using what sounds like a hacksaw to cut up an old water boiler. There's a lot of noise going on.
So honestly, a little bit of quiet goes a long way in the voice memo department,
and I would love to hear what you all have to say.
Still to come on No Stupid Questions, Stephen and Angela discuss how much control
they're actually willing to hand over to machines. I, Stephen, am in fact AI 1743 XT squared delta
Stephen, which is the avatar that Stephen Dubner has programmed to have this conversation with you.
Now, back to Stephen and Angela's conversation about whether robots make better umpires than
human beings. Going back to baseball for a moment, Major League Baseball has recently been using video replays to look at close calls.
And, you know, there are video replays now in professional soccer, in the NFL.
In basketball, too, right?
In basketball, yes.
In tennis, we should say, the chair umpire aside, as far as I know, most out calls are made by this computerized system called the Hawk-Eye system.
Wait, what's the person sitting in the, you know, like that little lifeguard stand? What are they
doing? They are actually a lifeguard just in case the tennis court gets flooded. They want to make
sure everybody's going to get out. OK, they are the chair umpire. And, you know, there's some
umpiring to do and there are some other calls to be made. But tennis has certainly embraced the
technology in that way. So baseball fairly recently began using video replays, and they found that in the cases
where the calls are challenged or looked at again, they're overturned nearly half the
time.
That goes back to that statistic, that sort of shocking D grade.
Yeah.
And these are for other calls.
These are not just balls and strikes.
These are for you're safe or you're out, you know, within play or out of bounds and so
on.
And so to me, what's shocking about that is you might naturally want to say, well, wait
a minute.
If the purpose here is to get it right, then why on earth would we even want to have the
humans around for that?
On the other hand, I think there's a lot which helps us understand this algorithm aversion.
Joe Torre, who was a longtime baseball player and manager, I think he now works for the
league, he's against the robo-umps.
His version is: it's an imperfect game, and it has always felt perfect to me.
That's so beautiful.
So to me, when I think this through, I get both sides very, very much.
I think you could easily remove the home plate umpire from baseball, maybe the other umpires
too.
If the goal is more accuracy in pitch calling, then robots would definitely do it better.
But let's not forget, it's a game, right?
This is not reading mammograms.
Games are built on this wonderful shared history of ritual and rules and idiosyncrasies.
And imperfections, right?
Yeah.
There's value in components of those rituals.
And so even if they don't produce the most scientifically accurate calls, the question is how much of those rituals you are willing to lose.
And there's also the fact that umpires are all different from each other.
They have different personalities.
They have different strike zones, which is a matter of huge interest and sometimes controversy
in baseball.
You mean like some are hard asses and say it's got to be, you know, within this narrow
rectangle and others are more generous?
There's that.
But then there are other nuances.
For instance, superstars, both pitchers and batters,
get a lot of advantages from the umpires.
Like easier calls because they're starstruck?
I wouldn't say starstruck.
I would say this is just a cognitive bias.
Let me retrace this a little bit.
In all sports, the home team has what's called a home field advantage. And you could imagine many, many, many, many reasons that contribute to this, right? You're more familiar with that playing field or course, sleep in your own bed, eat your own food.
I think there's research showing that if you play in your own time zone, you're better off. But if you actually measure each of those factors, which, by the way,
Toby Moskowitz, the same Yale economist who looked at the ball strike counts, did this to look at
home field advantage around the world, different sports, different circumstances. He found that
there was one factor that was the major explanatory factor of home field advantage. What do you think
it is? Well, I'm going to guess the umpire.
Very good guess.
Thank you. Context clues.
So it turns out that the single biggest factor that can explain home field advantage in just
about any sport is the referee.
Does that mean the referee is like literally biased because they
live in Philadelphia and they want to call the play in favor of the Phillies, for example?
Sort of, but not quite.
So first of all, it wouldn't be the case that the umpire at a Phillies game actually lives
in Philly because there are professionals who travel around and take their jobs very
seriously.
There's a lot of analysis.
There's a lot of critiquing and feedback and so on.
But they also have a human mind and the human mind apparently is susceptible to the vibe
in the place.
Oh.
So the fans and the feeling of wanting to make the home crowd happy seem to subconsciously
influence the referee.
And there's really amazingly creative research that was done to conclude that.
One was comparing soccer matches in stadiums that had the fans right on the field, on the pitch, where the referee can hear them, versus these other big multi-use stadiums where there's usually a running track between the field and the fans.
And they use that as an instrumental variable to measure how influential the crowd would be on the referees, for instance.
I just want to say that in psychology, evidence suggests that physical distance translates into psychological distance.
So this idea that, you know, if I'm sitting right next to you, it's different from if I'm sitting three feet away from you, which is different from if I'm sitting six feet away from you. So what you were saying there about this clever study, and just the fact that there's like a little running track in between the fans and the pitch of the soccer field, that makes sense to me.
You're buying it, in other words.
Totally.
So let me go back to ask you more about what you called algorithm aversion.
I do see this paper here from Management Science by three authors, Berkeley Dietvorst, Joseph Simmons, and Cade Massey, who I believe is your colleague at Wharton.
Yeah, two of them are, Simmons and Massey.
And this is called Overcoming Algorithm Aversion.
People will use imperfect algorithms if they can even slightly modify them.
You familiar with that paper?
Yes, a bit.
The beginning of this paper says very briefly, because it's so well accepted now,
that there is such a thing as algorithm aversion, that in many, many instances, decision makers don't use algorithms, opting instead to rely on human judgment, even when it's pretty clear that the algorithms are better.
So that's the introduction of the paper.
The question then, practically speaking, is like, what could we do to get people to use these algorithms, which really are pretty handy? And the very clever series of studies establishes the following conclusion: people are more willing to use the algorithm if they have some control over what the decision ultimately is, when it's not just like, well, you said you were going to use the algorithm and the algorithm says the person's admitted, so we're admitting them to
college. So you're saying like if I can give a little bit of input to say, well, maybe we should
also consider this a little bit more strongly. Yeah. It's kind of like the idea that, you know,
people would be OK with having a self-driving car or cruise control, right? As long as I know
that I can put my hand on that
steering wheel and I can change the direction of this car if I choose to. And so this is, in a way,
just like a practical engineering question. Like, how do we get human beings to use computers?
And what they find over a series of studies, and the studies have a setup where if you're a
participant, you're asked to forecast what a student's scores are on
a standardized math test like the SAT. And you do this from a profile that you read of, you know,
where they come from in the United States, how many times did they take the PSAT before they
took this test, how many friends do they have that are going to college. So you read this dossier and you have a choice of whether you
will rely on an algorithm. So, you know, we've used lots of data sets and we have come to
this formula to kind of produce what we think the SAT math score will be, or you can choose not to.
And I think the clever thing about the study is they had different conditions where you had varying degrees of freedom to, you know, overrule the algorithm or not overrule it.
The conclusion of the study is that if you have some control, almost any control, honestly, you'll use the algorithm. And that to me was the most surprising finding.
Like, okay, you can change the algorithm's finding or results by like a teeny,
teeny one percent or whatever. That was enough to get people to use the algorithm and therefore be
more accurate. I think every human feels awkward and stressed when we feel we don't have control
over a situation, at least to some degree. I mean, I've spoken with airline pilots about this for years. That's an interesting case.
There is a lot of automation in flying already, but people do not like the idea
of getting onto a plane without a pilot, at least at this moment in time.
Yeah, I don't like that idea, to be honest.
And, you know, I've heard the pro-automation argument from a lot of pilots and other people associated with airlines, including, you know, my brother is a pilot.
He's not a commercial pilot.
He used to be an Air Force pilot.
I didn't know that.
So I often think about when you're describing this paper by Massey et al., it strikes me as very sensible in that if we can either participate in changing the algorithm or even just think we're participating in changing
the algorithm, that I can see how we'd have a really different response to it.
This gets to be an ethical dilemma. You know, everybody knows, I'm sure, the trolley problem.
Yes.
It's this thought experiment where you've got a trolley that's going to kill
five people who are on a track and you have the ability to switch it onto a track that
kills only one person. So is that the right thing to do? Because you're killing fewer. And people really struggle
with this because we don't think about that mathematically. I hate to say it. I mean,
I feel bad for the one person that's going to get run over by the trolley, but I wish we would think about it mathematically.
I think about this in terms of autonomous vehicles. Now, autonomous vehicles are that
invention that's been just around the corner
for like 15 years now. I talked to a bunch of people probably 10 years ago who swore
that as of now, we'd certainly do a lot of traveling in autonomous cars and trucks and
things like that. And it hasn't come to pass for a variety of reasons, including the fact that
the science is harder on the edge cases than it might seem.
The science is harder, meaning it's hard to get the autonomous vehicle to handle all possible driving scenarios.
Yeah, it's what they call edge cases.
It's like weather can interfere, construction can interfere.
And then there's all the other things like pedestrians, other drivers and so on and so forth.
Yeah, the unexpected events that don't fit the algorithm very well.
But I also think about safety and transportation.
So how many people do you think are killed by car crashes, vehicle crashes globally in a year?
Oh my gosh, I wish Jason were here.
He would know this figure.
It's got to be an alarming number.
Name an alarming number.
I think it's got to be over a million.
Yeah, it's about a million.
That's crazy.
About a million people a year.
180,000 children every year are killed around the world from vehicle crashes.
Roughly 500 a day.
Okay, so imagine this.
Imagine we're in a world where most cars are driven autonomously.
And one of those cars gets hacked or misprogrammed
or something, and it runs into a playground and runs over 10 kids and they're killed.
What do you think the response would be? This is in a world where many of those 180,000 kids
who are currently getting killed each year by crashes are no longer getting killed. But
if one self-driving car runs over 10 cute
little kids in a playground, what do you think happens? Well, here I have a very strong prediction,
which is, OK, that's the end of self-driving cars. Like we are already irrationally penalizing
algorithms and robots for being imperfect when human beings are so much more imperfect.
Yeah. Kevin Kelly, who is, gosh, he does a lot of things. He's a photographer and he's sort of a technologist. He helped create Wired magazine. He always talks about the fact that AI and all
technologies, they just really sneak up on us. It's very rarely all or nothing.
There's like a gradual shift in how we depend on them.
Yeah.
And so I guess what I'm hoping is that we get over our algorithm aversion slowly and
gradually, but still fairly soon.
I want to see that future happen too, Stephen.
I actually did read a paper that I thought was so clever on this topic.
This paper is from Telematics and Informatics, and it was published
just last year. The title of the paper is Who Made the Decisions: Human or Robot Umpires?
The Effects of Anthropomorphism on Perceptions Toward Robot Umpires. So what this study is about
is people's evaluation of umpires in baseball and what they think about the decisions
based on whether it's a full-on algorithm or full-on human or what they call an anthropomorphized
umpire. These are just online scenario studies. So you ask people, like, watch this video of this play, and then here's
what the umpire ruled. How do you feel about that? Do you agree with that, et cetera? Here's the
humanized robot umpire, the anthropomorphized robot umpire. Quote, Spark is a humanized robot
umpire that is 1.7 meters tall and 11 months old. This is like if you ask
Alexa what her birthday is, she'll tell you. And I say she, even though Alexa is an it, right?
So the idea of anthropomorphizing algorithms, artificial intelligence in general, is basically
giving them human qualities on purpose. And we do this, right? Like, you know, when I ask Siri to make a
call, when I ask Alexa to order paper towels, I know I'm talking to a robot. So what's the result
of this study, though? How did people judge their calls differently? Okay, so now I'm going to read
you from the abstract to give you the punchline of the study. The results indicated that people
perceived umpire calls as fairer
and more credible and demonstrated greater trust in human umpires than in robot umpires.
However, these negative effects were attenuated when robot umpires were humanized by giving them
human-like characteristics. The bottom line is that if you can make a computer look or seem like a human, we're more willing to accept their judgments.
Like I always say, if you can fake authenticity, then you've got it made.
That's like you always say.
So, Angela, let me ask you this.
Can you point to an area in your life that you are eager to accept more automation or artificial intelligence?
Stephen, I don't know that I can point to an area in my life where I'm not
eager to accept artificial intelligence. I have a robot assistant. I have a human assistant,
but I have a robot assistant that handles, I would say, 90% of my online scheduling. So if somebody emails me, I copy Jamie Johnson, which is the name that I have given to my robot.
And Jamie Johnson will immediately reply, whether it's 3 in the morning or Saturday or whenever,
and say, Angela is free at these following three times.
Like, which of them works for you? And this robot is so good,
Stephen, that most people that I interact with don't realize it's a robot. In fact, we got cookies sent in the mail once for Jamie Johnson, and we all had a laugh because, you know, Jamie Johnson
doesn't need cookies. Now, what would you say if I told you, Angela, that I, Stephen, am in fact AI 1743 XT squared Delta Stephen, which is the avatar that Stephen Dubner has programmed to have this conversation with you?
Interesting. What would I say? I would ask you to sit exactly where you are while I come over
to your apartment. And we're both going to go on a little trip to a doctor who's going to take care of you.
And it's going to be fine.
Because when people say that they're robots, it's usually a symptom of psychosis.
It's a very common symptom of schizophrenia.
Oh, ye of little faith.
You were just saying that you have a personal robot assistant, Jamie Johnson.
And yet you don't think that someone like Jamie
Johnson, maybe with a little bit better software, would be capable of having a conversation of the
type that I've just had? I guess that's what I'm saying, Stephen.
No Stupid Questions is produced by me, Rebecca Lee Douglas. And now here's a fact check of today's
conversation. In the first half of the show, Stephen says that the strike zone extends
from a batter's armpits to their knees. According to the official MLB rules, it's actually from the
midpoint between the batter's shoulders and the top of their pants to right below their kneecaps.
Later, Stephen mentions that MLB has considered a few different implementations of robot umpires,
including the possibility of the machine announcing calls
or a human umpire relaying calls fed to them by the machine.
Another option under consideration is a replay review system
that would allow each team's manager to challenge a limited number of called balls and strikes during each game.
As Stephen mentioned, a version
of this system is currently in place for fouls, interference, and other calls such as missed bases.
Also, Angela and Stephen argue that sophisticated algorithms, or even primitive algorithms,
are usually better than human judgment. However, we should note that algorithmic bias is a real concern.
An algorithm is only as good as the data used to build it. For example, in 2015, Amazon realized
that its AI hiring tool was biased against women applicants because the model had been trained on
resumes that mostly came from men. The system ingested and replicated the gender bias that already existed in the tech
industry. Finally, Stephen says that, in tennis, most out calls are made by the computerized system known as Hawk-Eye. In 2022, Hawk-Eye determined all line calls in the U.S. Open and the Australian
Open, but during Wimbledon, it only weighed in on the calls
that were challenged by players.
And the French Open doesn't use automation
for line calls at all.
It's played on a clay court,
so the ball leaves a mark where it lands.
That's it for the fact check.
Before we wrap today's show,
let's hear your thoughts on our recent episode
on how to break a habit of jaw clenching or teeth grinding.
Here's what you said.
Hello, my name is Lynn Chen.
I live in Los Angeles, California.
And I just had to send a voice memo because I recently cracked two dental guards in my sleep because I grind my teeth.
And I finally went to go get Botox in my masseter muscles a few weeks ago.
And I'm still waiting to see if this
actually helps. But in the meantime, I was just so excited to find out that there are two more
things that I have in common with Angela Duckworth. We're both Asian Americans who have mothers named
Teresa. We both love Diet Coke. And now we both grind our teeth and have Botox in our jaws. Very exciting for me.
Thanks for the show.
Stephen and Angela, I have to tell you, I have a wild story about stopping teeth grinding.
What I decided to do was very unconventional, somewhat self-harm inducing, so I may not recommend this approach. I took a little piece of, like, a flosser and wrapped it around one of the wires in my removable retainer and then positioned it
just on the inside of my gum such that when my mouth was relaxed, it was just resting there next
to my gums and not poking me. And when I clenched
my teeth together, it poked me in the gum and woke me up. And through that, I stopped grinding
my teeth. Just thought you guys might want to know since you asked how to stop teeth grinding.
That's how I did it. It worked like a charm.
My name is Landry Fagan. I'm a family physician out of Boulder,
Colorado. I'm calling to say that my mouth guard has changed my life. I used to have severe
bruxism with tension headaches leading to migraine headaches with scintillating scotomas.
And my mouth guard that my dentist made for me has completely reversed all the symptoms.
I really appreciate the advice and everything that you share.
But Angela, you're wrong.
You should definitely, definitely consider a mouth guard once you're done with Invisalign.
That was, respectively, Lynn Chen, Carly Wynn, and Landry Fagan.
Thanks so much to them and everyone who sent us their stories. If you'd like to share your thoughts, send a voice memo to nsq at freakonomics.com. Let us know your name and if you'd like to remain anonymous, and you might hear your voice on the show.
Coming up next week on No Stupid Questions, what's going on with people who love to be scared?
Why are people into horror movies? I couldn't for the life of me bring myself to
watch one. That's next week on No Stupid Questions. No Stupid Questions is part of the Freakonomics
Radio Network, which also includes Freakonomics Radio, People I Mostly Admire, and Freakonomics
MD. All our shows are produced by Stitcher and Renbud Radio. This episode was mixed by Eleanor Osborne.
We had research help from Catherine Moncure.
Our staff also includes
Our theme song is
And She Was by Talking Heads. Special thanks to David Byrne and Warner Chappell Music. If you'd
like to listen to the show ad-free, subscribe to Stitcher Premium. You can follow us on Twitter
at NSQ underscore show and on Facebook at NSQ show. If you have a question for a future episode,
please email it to NSQ at Freakonomics.com to learn more or to read episode
transcripts, visit Freakonomics.com slash NSQ.
Thanks for listening.
Wait, are you saying you have both Alexa and Siri in your home?
Don't they compete?
Well, no, because one's called Alexa and the other's called Siri.
You're using the Amazon ecosystem and the Apple ecosystem.
I'm ambidextrous.
It's true.
The Freakonomics Radio Network.
The hidden side of everything.
Stitcher.