Citation Needed - The Trolley Problem
Episode Date: July 24, 2019

The trolley problem is a thought experiment in ethics. The general form of the problem is this: You see a runaway trolley moving toward five tied-up (or otherwise incapacitated) people lying on the tracks. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track, and the five people on the main track will be saved. However, there is a single person lying on the side track. You have two options: Do nothing and allow the trolley to kill the five people on the main track. Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the more ethical option?
Transcript
Hey podcast listeners, just a quick apology before we get started today.
There was a problem with some of the audio coming out of our latest recording that really affected
me and Eli's stuff.
So we had to go with a backup recording that's a slightly lower quality.
It's not so bad that it's going to be hard to listen to, but it's bad enough that it
would be weird if we didn't at least acknowledge it.
So sorry about that.
Enjoy the show.
I understand that, but I can't keep him from calling you. Well, well then change your
number. What kind of locksmith are you?
Well, hello, Cecil. As you can see, I prepared a little choice for you today. How did you
even do any of that? Time and money can do many things, my friend. Either way, as you can see, the 435 to Boston will be making an unexpected detour today.
But where is your choice?
See, like, this isn't gonna work. I'm just- it's not gonna-
Leave the lever in front of you be, and Noah, Heath, and Tom will be crushed.
Pull it! And the train will kill your best friend!
Okay.
Oh, I'm sorry, probably I wasn't clear.
By pushing the lever, you're gonna make the train...
Make the train hit you. Right, yeah, yeah, I choose you. You.
Oh!
How noble you are, Cecil, to sacrifice...
It's not a sacrifice.
Your best friend.
Not best friend.
Don't mess, shall.
Not a friend of any fucking kind.
Hello and welcome to Citation Needed, the podcast where we choose a subject, read a single
article about it on Wikipedia and pretend we're experts.
Because this is the internet and that's how it works now.
I'm Noah Lugeons and I'm going to be conducting this train.
But to do that, I'm going to need a panel of people qualified to meaningfully discuss moral...
you know what? Not every fucking episode needs a fancy intro, right? First up,
two men for whom a moral quandary and a plausible alibi can't even theoretically coexist: Eli and Tom.
I don't know what you're talking about. I was riding in the back of the Old Town Road.
No, I get it.
I get it.
That's as much as you want.
Look, every moral decision can be boiled down into one simple question.
Is anyone watching?
After all, it gets much simpler.
It's already, their victim was after you get done.
All right. And also joining us, two men who interpret "might makes right" to mean it isn't stealing if they might have given it to you: Heath and Cecil.
I don't like explaining my Halloween costume to people. That's, I don't like it.
They just, you should know. And there's no rule about age limits.
You can just take it. You take it. You like to do that. It's Halloween.
It's better to give than to receive.
King James said that when he was trying to convince Queen James to do butt stuff.
We have butt stuff now. So yes, it works.
Anyway, I was thinking we'd thank our patrons,
because making them feel special is more important than entertaining the
riff-raff. And so, if you'd like to know how to join their
ranks, stick around to the end of the show and we'll tell you how. Tell us,
Eli, what person, place, thing, concept, phenomenon, or event will we be
talking about today?
The trolley problem, also known as the trolley dilemma.
Yeah, that's not meaningful enough to distinguish.
Okay, so he... ha-ha-ha. The trolley dilemma. So the trolley dilemma is an ethical thought experiment about how to murder people in the
nicest possible way.
Jon and a nay.
No.
And yeah, it's pretty much my favorite thought experiment about murder etiquette.
So the experiment takes many different forms and variations, but the basic example goes
like this.
Imagine yourself standing near some train tracks, and you see a runaway trolley with its
brakes no longer working.
And further down the tracks, you can see five people who are tied up and incapable of
getting out of the way.
And there's also one other person tied up on a side track, just a normal day in the New
York City subway commute. But today, you are right
next to a lever that can divert the train from killing the five people and instead kill
just the one person. So what's the moral thing to do in that situation?
Record it on your cell phone and scream, world star!
Oh, uh, post that funny gif of Michael Jackson eating popcorn and then go make some popcorn
Okay, seriously, does anybody have like an answer? Do you pull the lever? Anybody who doesn't pull that lever?
Okay, so you obviously pull the lever... you know... no, no, no, no. If you're a city dweller in New York City,
you ignore everything and just let people die.
No, no, I mean like if you're like a favorite to like hold the...
No, yeah, right, you pull the lever outside of New York City.
Yeah, all right. So that first problem, most people have a pretty quick answer to it.
But that usually gets followed by the following variation.
I'm not sure why we're sure this is a problem.
Like, one way or the other, fewer idiots who get themselves tied to train tracks
exist.
So, it's like a solution-type thing, but call it what you will.
I'm just right here.
Here we go.
Here's your question, weirdo.
You deserve it.
So, it's pretty neat.
Here's the variation though.
This time in the variation, instead of being next to a lever, now imagine you're on top
of an overpass looking at the same scenario.
And there's an enormous guy sitting on the ledge of that overpass.
For some reason it's always described as a fat guy, but not sure why it couldn't be just
like a giant with six pack abs.
And, you know, any gender identification, get woke.
Anyway, regardless, the giant is big enough that he would definitely block the train from
hitting the five tied up people on the track, but he's not so big that you can't push him
over and be the hero.
Yeah, those can't be the same, right?
Not sure.
Bigger training.
I love the idea of somebody doing that
fucking physics in their head like it appears to be about 385. Is that a Johnson
Miller 1386 right? Oh yeah that'll stop it no problem. What the fuck what?
Well, also, I didn't get to finish my scenario. Tom, you're amazing at super-fast
physics calculations, so you're positive
this would save the five lives by sacrificing the one life. So the net effect on life and death is the same as the first example.
Now what's the moral thing to do this time?
She's standing on a block of ice. He was on a block of ice.
He's like, if these five guys didn't want to get run over by a train, they would just stay off the tracks. Also, what were they wearing?
What could that could be?
See, so I'm going to guess it's a lot of Oreo cookie crumbs and a rascal scooter.
Yeah, that's where I was right.
The big guy anyway.
Oh, sure.
But past the instinct, you need a real answer. Yes, you push the fat guy, obviously.
Okay, obviously push the fat guy, says Noah. Interesting.
I'm so bad at moral dilemmas, bro. I'm just always like, well, yeah, you push the fat guy, pretty much whatever it is. If this was a lifeboat, I would also say you push the fat guy.
Again, if you're in New York City and the fat guy is lining up about where you're gonna get on the train with him,
yes, you push him, because you don't want him next to you where the people are.
Yeah, all right. Anybody else? Is everybody else pushing the fat guy? Pushing the fat guy? Yeah.
Pushing the fat guy. Tom? Cecil? Eli? Yeah, push the fat guy. Can I call him names first?
Let's let's call him the big guy
Presence offensive. You know what? Let's make it a woman, just because it'll really charge you. Yeah, that makes it better.
It's a woman of color.
Nope, a trans woman of color. Let's just do it as well.
We are so woke.
We were saying something about how that was the right thing to do with you.
It's a problem.
Okay, well, so my instinct on that second question is
I'm going to convince the giant to jump on to the track and be the hero himself.
But it's nice.
But speed talking the guy into noble suicide doesn't usually get offered as an option when
they make up these dilemmas.
Because the general idea of those two variations is to see if we identify a moral distinction
between exactly how we sacrifice one life to save five.
And most people say this does matter.
According to the data, when asked about this,
the majority of people are gonna throw the switch
to kill the guy, the one guy in the first example,
but they won't push the guy off the overpass
in the second example, because they feel like
there's a big difference between pulling a lever
and slaying a weird giant.
Or if it's a joke.
Because they just don't have the will to commit to killing the obese.
Call it what you're gonna call it, Goddamn.
Yeah, I'm gonna put that on the shelf on the show. I just wanna say it now.
Yeah.
Well, people seem convinced even if it's like a stupid giant who sits on ledges above train
tracks knowing full well that the trolley dilemma is a thing that exists. They think that's different.
And then I want to push the guy.
I... am I the only one, like, kind of happy here, that we live in a world where we want a
little distance between us and, like, cold
calculus? I know the calculus is the same, but I am not losing sleep over a world that
we're generally uncomfortable with.
As a man with the privilege of never having been tied to train tracks.
Yeah.
Yeah.
Yeah, Tom, so I don't know if I agree with those people who see a difference there, but
I'm glad they exist, I guess.
Yeah, that's a slightly nicer world somehow.
So one of the major themes that gets explored by the trolley problem is called the Principle
of Double Effect.
And it's a set of rules for solving a moral dilemma in which an action will have one good
effect and one bad effect.
The original version of this comes from Thomas Aquinas actually.
And the basic idea is that it's wrong to perform an immoral act in order to achieve a good
outcome.
So basically, you're not allowed to use Machiavellian reasoning, you know,
that the end justifies the means.
Right, yeah, and I get shit for mentioning
the trust.
Can we have a one-shot?
What's wrong with the trust.
What where do they land?
I'm a trolley problem, no.
But there's a second part to this.
Aquinas says it's sometimes okay to perform a good act,
even if you know it's gonna have bad consequences,
as long as the bad stuff isn't
directly intended in your strategy. So like, giving someone an STI is...
What? It's supposed to be unintentional?
It's just happening.
We're having a great moral philosophy episode.
So, quick example of the...
You brought philosophy to a gunfight with this one, I don't know what the...
It feels like it's on you.
All right, quick example of the Aquinas idea, here we go.
So, you know, like during World War Two,
the idea is it was wrong to bomb civilians as a way of winning that war by terror,
like, you know, like Hiroshima. But it was okay to bomb a strategic military target,
even if we knew that civilians would die as a side effect, just not directly.
Like, um, like in Hiroshima. But you know, that was in there somewhere, right? And it was in the middle, I would guess, pointing...
Thomas Aquinas was kind of stupid.
That's important to know here.
But he also might have been right about this.
Or was he?
What do you guys think?
Well, actually, the trick is to pick which of the three oblivious monkeys best suits you,
based on your, like, Enneagram. Personally, I'm the "hands
over my eyes" one, that's my monkey. Oh, Jesus. All right, well, if ever
there were a more qualified group of people assembled to question the
moral conclusions of Aquinas, I can't imagine it. Yeah. Okay, so getting
back to the trolley problem specifically, this idea of double effect
comes into play as an argument in favor of pulling the lever, but an argument against
throwing the giant onto the track.
Even though the net effect is the same in each case, the act of pushing a guy off a ledge
would be Machiavellian and therefore fail the test.
But the act of pulling the lever would be okay because it's a good act to save five people,
even though it has the predictable side effect
of killing one person.
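For anyone who thinks in code, the structure of that test fits in a few lines. This is only a toy sketch of our paraphrase of Aquinas, with invented field names, not a real ethics engine:

    # Toy version of the double-effect test described above. The act itself
    # must be good (or neutral), and any harm must be a foreseen side effect,
    # not the means by which the good outcome is achieved.
    def permitted_by_double_effect(act):
        return act["act_is_good"] and not act["harm_is_the_means"]

    # Pulling the lever diverts the trolley; the one death is a side effect.
    lever = {"act_is_good": True, "harm_is_the_means": False}
    # Pushing the guy uses his body as the brake; the death is the means.
    push = {"act_is_good": True, "harm_is_the_means": True}

    print(permitted_by_double_effect(lever))  # True
    print(permitted_by_double_effect(push))   # False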
That being said, this argument from Aquinas
is very often used by anti-choice evangelicals
to claim that an abortion performed to save
the life of a pregnant woman is evil.
So follow up question for everybody.
Why do you guys hate women? I was gonna
kill the five the whole time. Wait, so, so a woman either has to get one or five abortions? I'm not following here. I don't know. Costco now, they come in a big pack.
I'm not following her. I don't know. Costco now they come in a big pack.
Oh,
cool. Once you get five punches, it's a whole thing. Clarification to your question, Heath:
Is it, why do we hate women or which specific women?
I just, like, I've got lists I can get.
Why do we hate women now?
Well, it's because I'm four foot tall
and I can't get a bagel in New York, that's why.
That's why.
Okay, pass on Tom's question.
Excellent.
Thank you, Cecil.
I didn't want to answer that.
So this moral argument, as it relates to a fetus, was presented by British philosopher
Philippa Foot in an essay from 1967, and it's the modern basis of the trolley dilemma.
And in the essay, Foot shoots down the principle of double effect, but still leaves open the
possibility of letting the giant live and letting the five people die.
But ultimately, she was a coward and refused to take a stance on the abortion-to-save-the-mother
issue.
Her final conclusion was basically just, I'm asking questions.
Now, in fairness, this was 1967, and plenty of Christian people were prone to literal terrorism regarding this issue.
Also in 2019, that's the case.
Her whole thing is like, okay, wait, wait, wait, wait, wait.
What if the giant was the size of a grain of rice and incapable of pain or thought?
Makes it harder, huh?
Yeah, wait, Eli, I'm confused.
How big is the train we're running on?
Well, in that case, it's a teeny little train.
Also, it's a rice train.
Yes.
I read that.
I read her essay a lot.
So one would say I have a foot fetish.
I would say.
All right, well, regardless of your opinion, the abortion topic is obviously pretty dark.
So Ms. Foot framed the issue with a different example that appears to be the original version
of the giant on the overpass scenario.
So imagine a group of spelunkers and they're making their way out of a cave.
I have already lost all sympathy for whatever happened to you.
I'll go on the air on the side of "kill 'em all."
Well, I will continue and you will probably get further into that opinion.
They're super dumb, these spelunkers, in this scenario.
So they brought along a fat guy.
Oh, it's a fat guy.
It's offensive.
Well, I'm with Tom.
Anyway, in addition to being super dumb, they're also super duper dumb.
So they had the enormous guy lead the way out of the cave.
I hope they all die.
So as you might guess, the big guy gets stuck in the only exit point.
Also the cave is starting to flood.
So if they try to wait it out while this guy starves himself into being skinny, they all drown.
But they have just enough dynamite with them to explode the guy out of the exit.
And also, I'm also going to add that he's deaf, blind, and mute.
Okay, now I'm with Tom.
They get what they deserve.
I'm with Tom too.
Yeah, okay.
Wait a minute, this was so much darker back then.
Okay, point being though,
he's deaf,
blind, and mute, you can't communicate with him.
Is it ethical
to explode the
human cork that you brought on your spelunk,
to save the rest of you?
And keep in mind, if you say no, you shouldn't blow him up,
you're also saying
that abortion to save the pregnant woman's life is also wrong, assuming that it's,
you know, like a nine-month-old fetus that's fully viable. Or is there a
difference, maybe, because a zero-year-old is still different, or because abortion
doesn't use dynamite or trains? I think, though, if we started using explosives in abortions, we could get the Republicans on board with this.
You mean by if Cecil you think there aren't any pregnant ladies in Tehran?
Oh, Jesus Christ.
The one thing I want most in the historical record is video of the first time that she heard that convoluted version of celebrity Survivorman bullshit that she'd come up with reframed as a train.
Right? That would have so much... so many fewer viewers and a lot more death.
All right, so let's zoom out for a second. At the heart of the issue is the eternal battle in moral philosophy between deontology and
consequentialism.
Is it?
The basic idea of deontology is that moral judgments need to be made based on a set of
rules.
And consequentialism says the morality of an action is determined by its outcome.
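To make that contrast concrete for the code-minded, here's a minimal sketch, not from the show, with invented names, of the two approaches applied to the original lever choice:

    # Deontological rule: never take an action that directly kills someone.
    def deontological_choice(actions):
        for action in actions:
            if not action["directly_kills"]:
                return action["name"]
        return "no permissible action"

    # Consequentialist rule: pick whichever action leaves the fewest dead.
    def consequentialist_choice(actions):
        return min(actions, key=lambda a: a["deaths"])["name"]

    trolley = [
        {"name": "do nothing", "deaths": 5, "directly_kills": False},
        {"name": "pull the lever", "deaths": 1, "directly_kills": True},
    ]

    print(deontological_choice(trolley))     # do nothing
    print(consequentialist_choice(trolley))  # pull the lever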
So before we get into any more details, are you guys voting for deontology,
consequentialism, or whatever, you know, some kind of nuanced argument using both? Boo, nerd.
What do you guys think?
I'm going consequentialism 100%.
We got an amazing Queen performance because of AIDS.
Oh, that's crazy.
It's the Huzzadays.
It's worth this, me.
It's crazy. It's crazy. AIDS. It's worth it's name.
You do give this to me now actually.
Who in that scenario is going deontology?
I'm sorry, you know the old saying, the exception doesn't exist.
Deontology.
Read a book past the Enlightenment.
You know, I don't know about this, Heath. Is there perhaps a single fucking thought experiment or moral dilemma that seems even remotely plausible,
that might help me think about this, other than offing a couple of adrenaline-seeking fat guys in the process?
No.
No, because at no point is this usually phrased as a parent in a lifeboat having to pick which of their children to save. So no.
No, there's actually no way to make you relate to this dilemma, Tom.
Jesus, if Sophie's Choice was about Tom, he's just like, "the dumb one, right?"
I'd like to make them both that's it's
I'm sorry, I was just saying. No, college, amazing.
I'm kidding.
Dude, wait a minute, if you take them both, can I get double rations and then I'll do
them for you.
Okay, you can, yeah.
All right, so it sounds like most of you guys are anti-deontology, but let's keep in
mind, a famous version of what we now call deontology comes from Immanuel Kant.
Maybe you've heard of him, that bastion of correctness.
Yes, exactly, always correct about everything he said.
A German guy from the seventeen hundreds, whose moral code includes what he calls the
idea of a categorical imperative, which means
an unconditional moral obligation that never makes a situation worse, which obviously
does not exist.
Again, he was smart, but like, you know, 1700s smart.
So Kant argued that good intentions could produce bad results and bad intentions could
produce good results.
Therefore, the only policy that's unconditionally good is to act out of goodwill regardless of consequences.
He said we should all follow rules
that you would want everyone else in the universe
to follow in the exact same way.
For example, when it comes to the question
of using torture on one person
to find a location of like a ticking time bomb
that's gonna kill thousands of people,
Kant would say no torturing.
Is he right?
What if Kant had just binge-watched
like four seasons straight of 24 on Netflix?
Like, how sure would he be,
and would he come down against torture then?
I mean, just, okay, so like,
I mean, he's right in so far as that
that's the correct answer.
So I mean, yes.
I guess if you're some kind of consequentialist, sure.
So that's excellent.
Y'all and Todd has shit.
So, quick variation on that last one.
I'd like to think we were all anti-torture,
although only Noah said that that was the correct answer,
but moving on, quick variation.
What about if the torture this time was tickling and the terrorist really hated tickling?
Like we knew this and we knew the tickling worked with this same terrorist last time
and he gave away the location of the time bomb.
Can I just say we have a weird set of knowledge in these questions like train physics, torture tickling.
No, we're checking the premises, just going.
I think it's perfectly fine, Heath.
I mean, like, if things get dicey, what we'll do is
just come up with a new word for it, like enhanced tickling.
Yeah, there you go.
It changes the back to being like, okay, again.
Right, enhanced touching.
Yeah.
So, if you're anti-torture across the board,
only Noah, or, you know,
you're mostly,
that categorical imperative is looking pretty good,
or at least as it applies to this example with torture,
but it runs into trouble when absolute moral duties conflict.
For example, when a woman tells you it's a good size,
Kant would say that lying is always wrong here.
So Kant said that, but fuck that,
lying is so goddamn important for society to work.
So I think sometimes it's warranted.
Certainly important for my Citation Needed essays to work.
Right.
We don't think in our products.
So considering that society's list of absolute imperatives is going to be impossible to agree on, and also shift over time, and considering most people are fucking stupid,
It seems like consequentialism might be a more manageable system, but sometimes it's
not clear which of two consequences is better.
Okay, all right, so that too.
That's why you have to wait for everything to kind of shake out, and then when it's all
settled, you point to the better one and say, "I was always for that." I'm a fair-weather consequentialist.
So, that can get you hurt though, right?
So, what I mean by that... that's what I mean, he's a Republican.
So, here's what I mean by that.
A consequentialist is going to need some guidelines for how to evaluate results.
And that brings us to the philosophy of utilitarianism, which is a version of consequentialism that
adds the stipulation that the ultimate goal is to maximize total happiness.
So does that work for you guys?
Do you guys like the utilitarianism version?
I guess it depends on what we define as maximizing happiness.
I'm going to guess that my happiness
and Mitch McConnell's happiness are not congruent.
I think that's gross.
All right, so here's my actual dilemma, Heath.
It absolutely does, but to fully embrace that on air, I would have to come clean about
exactly what an asshole I really am.
So I'm going to go out of my way to pretend to be vexed by the next problem you bring up, is what
I'm saying.
All right, well we're about to get to...
With an umbrella and you get that telling you just...
No, it's none of this.
What, don't...
Eli doesn't get to make up any scenarios, just, ever.
So now let's apply the idea of utilitarianism to the trolley problem with another variation.
What if we knew that all five people tied to the track
were like, just really bad at parking,
like crazy bad, all right, already we have a kill him.
Great, great, I like where your head's at.
And let's assume the one person on that side track
is a scientist who's on the way to curing cancer,
very possibly.
Now, the consequentialist view would be that one death
is better than five, but the utilitarian might argue
that total happiness is more maximized
if you let the five shitty parkers get killed.
So what do you guys think?
Well, throwing out all other factors,
if the five and the one are all bad parkers,
you go for the five.
You definitely go for the five.
You're just kidding.
Okay, so if I'm understanding the problem here, Heath, what you're saying is that not all
lives matter.
That's what you're saying on record.
Not equally.
Right?
So, cancer-curing doctor, like, we're going to kill the bad parkers.
Okay.
So, how he parks.
We're making a ringtone of this. Let's say we just watched
him nicely parallel park and walk over to the track. Well, why didn't you say that the first time? I'd
like to go back and change my answer. Okay, no, you're right. That's absolutely a relevant
detail. Okay, but wait, what if he had one of those self-parking cars? Then he's just
fucking cheating? Well, that's fine. Hmm.
I'm a fucking utilitarian, man.
That's okay.
All right.
All right.
If those five people had self-driving cars that could park,
I'm thinking about it again.
Yeah.
Okay, anyway, the utilitarian idea seems pretty good to me,
anyway, but here's a famous counter-argument.
It comes from a political philosopher named Robert Nozick.
That bastion of correctness. Yeah, yeah, the father of libertarianism,
Robert Nozick, let's talk about his argument. He said that we should abandon
socialist, utilitarian policies and just worry about maximizing individual freedom
instead of happiness. And then that would just magically also maximize happiness. I don't know. It's dumb.
And to make this point, he presents the example of a person called a utility monster who claims
to get way more happiness from each dollar than anyone else when they get a dollar.
And therefore any utilitarian plan would need to give all the money and all the resources
to the utility monster. So the question is, no, correct, that is correct.
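For the code-minded, the arithmetic of Nozick's objection is tiny. A toy sketch, with invented names and numbers, of why a naive total-utility maximizer hands everything to the monster:

    # Claimed happiness per dollar for each person; the monster claims the most.
    people = {"alice": 1.0, "bob": 1.2, "utility_monster": 100.0}

    def allocate(budget, utils_per_dollar):
        # A greedy total-utility maximizer: every dollar goes wherever it
        # buys the most utility, which here means entirely to the monster.
        best = max(utils_per_dollar, key=utils_per_dollar.get)
        return {name: (budget if name == best else 0) for name in utils_per_dollar}

    print(allocate(1000, people))  # the utility_monster gets all 1000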
But the question is, how much fun and happiness do we get
from taking away freedoms from libertarians?
I think that's a more important question.
Also follow up question, what's the most amusing and fun way
to murder the utility monster?
Well, to murder the utility monster, you just ask him where Aleppo is. That's like suicide.
Okay, hold on, to murder our utility monster, I'm going to say Colonel Mustard in the kitchen with a wiffle sword. It's amazing how many of Nozick's arguments can be defeated by that.
Yeah. I love how many of Nozick's arguments are just like, well, obviously utilitarianism is when you fill a bucket with nothing but nickels.
So what? You have to accept the premise to answer my question now.
All right, well, now that we're all utilitarians, and we'd have more fun not doing this show right now, we're going to take a little break for an Apropos of Nothing, which is still doing the show right now, because strictly speaking, the utilitarian position would be predicated on the happiness of the listener and not the hosts.
Uh... I gotta take this. But this is actually a huge dilemma. Where's it? Hey man, what's up?
No, no, no, no, no.
I'm just sitting on the edge of this bridge vapin.
Yeah.
Some skinny chick came over to talk to me about something.
But you knew called?
No, no, no.
Not a big deal.
I answered it on my Bluetooth headset that I always do.
Right?
Yeah, they're the best.
What up?
Dude, I did.
Dude, you gotta follow her on TikTok, dude.
She's hilarious.
She's hilarious.
Any who's the bees?
I gotta go see what this lady wants.
I gotta zip-lining thing later.
Sorry, what's up?
Something about a deal, lemon?
No, actually, not anymore.
Just got way easier.
Ugh!
I feel pretty good about this.
And we're back. We now know that Eli has something against zip-lining. That was hilarious,
probably, which, that probably is. But instead, we're going to get back to the trolley dilemma
show. Is that all the dilemmas you got for us?
All right. So we've looked at the two major examples of the original trolley problem and
some of the philosophical underpinnings. And after digesting several paragraphs of Wikipedia, I think we're all experts on moral
philosophy.
So it's great.
You're not kidding.
We all became verified on Twitter during the break.
I don't even know how to do that.
Right.
Absolutely.
So now it's time to test our understanding with some fancier variations.
And we're going to start with the overweight guy and the overpass again, but this time he's evil
and you just watched him tie everyone down
as you were walking up.
He's the guy who caused the dilemma.
I'm just super confused how this doesn't make
the problem easier.
Like all that effort from tying those guys down,
the fact I would be all greasy and easy to push,
this is easy.
Yeah, okay, so this variation assumes
that you had qualms
about pushing the fat dude in the first place.
This doesn't work on me and Tom.
I'm a little bit different.
All right.
What if you get another question?
Just a question.
All right.
Sub-question.
What if the guy in the ledge is a serial killer,
but he's got nothing to do with the people on the track?
That's unrelated. I compare notes.
Is he, like, wearing a t-shirt? "Hi, I'm a serial killer"?
This is a weird thing to know about him.
Again, I was gonna kill him already.
I'm not going to show this out. All right, so new example.
This next one tests the claim that it's wrong to directly
use the death of one guy to save five.
So it's the same as the original problem,
and you're standing next to the lever.
But this time, the sidetrack has the one overweight guy,
and the track then runs back onto the main track
right before the five people who are tied down.
So if you pull the lever, you're directly killing the big guy to save the other five.
You're not just like saving the five with a side effect of killing the one, like in the
original example.
Okay, but now is there any reason for the guy to even be fat at this point?
Is he just not on the floor?
Leave now?
This is just something we know about him.
...by weight. Can we tie up more people? I don't know. So moving on to a couple of my favorite examples
that I found in a list in New York Magazine.
First up, imagine the original five people
versus one person scenario.
But if you pull the lever, you send the train onto a new track
that has another trolley dilemma on it.
And she wants.
But if you let the thing keep going,
it magically splits into five trains
and creates five new trolley dilemmas.
Now in both cases, they're all somebody else's problem
at that point.
What's the move now?
Well, I think this is how pro life people think abortion works.
And I know this is how pro gun people think gun control works.
So yeah.
That's the fun.
All right, well one more variation
You're in the original scenario again, but this time, instead of two tracks, it's one big loop. What is happening?
Who is building this thing?
It's just that weird kid guy from Saw. Yeah.
So, is there a Christmas tree in the center of it or what, the fuck? Sure. Yeah, I like that. Let's add a Christmas tree in the center. There's a big
loop. So you can either make the one guy watch the five people die as the last thing he
sees next to a Christmas tree. Or you can make the five people watch the one guy die as
the last thing they see next to a Christmas tree. They call that the Action Park. That's what they call that.
I fail to see the dilemma on this one.
I mean, you have to volunteer for the armies.
So I mean, it's all old.
I like the ones that assume I'm just going to kill all six
of these assholes.
Yeah.
All right.
We're doing some great work for the field of moral philosophy
today.
This will be cited for centuries, just like Kant. Now, we are doing the best work in moral philosophy.
Here's the thing though: many critics claim that discussions of the trolley dilemma don't really
have much value. How so? Well, they claim that train track scenarios
all seem a little ridiculous, and they're divorced from reality.
That's weird.
But one place where these criticisms fall apart
is when it comes to the ethics of self-driving cars.
This is a very much practical issue
where the moral judgments behind these trolley examples
actually do come into play a little bit.
Because in order to program the cars, software developers need to be writing code that captures
human morality as it applies to crash situations with a variety of possible negative results.
And the AI has to weigh those results somehow and take the best action.
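To give a flavor of what weighing those results could look like, here's a minimal sketch. It is not any real vehicle's software; the weights, fields, and scenario are all invented for illustration, and picking those weights is exactly the moral argument at issue:

    # Penalty weights for predicted outcomes; real systems would be far
    # more complicated, and choosing these numbers is the whole debate.
    FATALITY_WEIGHT = 10.0
    INJURY_WEIGHT = 3.0

    def option_cost(option):
        # Score one possible maneuver by its predicted harm.
        return (option["fatalities"] * FATALITY_WEIGHT
                + option["injuries"] * INJURY_WEIGHT)

    def choose_maneuver(options):
        # Take whichever maneuver has the lowest predicted-harm score.
        return min(options, key=option_cost)

    options = [
        {"name": "continue straight", "fatalities": 5, "injuries": 0},
        {"name": "swerve into barrier", "fatalities": 1, "injuries": 2},
    ]
    print(choose_maneuver(options)["name"])  # swerve into barrier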
And it gets extra complicated when some people argue that they want to be able to change the settings of
their car's AI to basically
murder everyone else as much as possible and never sacrifice themselves, even if
sacrificing themselves saves a bunch of others. So we're very likely to develop the technology and the infrastructure for
pretty much all the cars to be autonomous,
but then spend forever arguing about the rules of morality
and the rules of personal murder settings.
Right.
Libertarianism.
We don't know, maybe all the times a Tesla
has randomly slammed into medians,
they were saving a fat cancer scientist. We, we, we have these arguments as if they're very serious topics.
If you just drive the car yourself, you end up murdering people because the wedding had
an open bar.
Ah!
Yup.
Well, in order to get an idea of our collective morality on this topic, a team at MIT started
up a project called the Moral Machine, and they put a website up
where you can go through a bunch of different autonomous vehicle scenarios and decide what the software should do, in your moral opinion.
So I'm gonna run us through a few examples of the type of problem they present at the Moral Machine website.
So you guys ready to play who gets murdered by the robot car?
It's a fun game. This is what I was going to do if we weren't recording tonight. Still less
intense than Codenames. So yeah. All right, so let's assume an evil moral philosopher
decides to, you know, help keep his job going by cutting the brakes on a self-driving car.
And that forces the car to decide between
running through an intersection with some pedestrians
or otherwise crashing itself into a barrier
if it goes through the intersection,
it's gonna kill a dog,
a baby in a stroller, and three doctors, all
legally going through the crosswalk.
And if it goes and, uh, hits the barrier,
it kills the passengers:
an overweight old man,
a female Olympic athlete...
yeah, it's always, so he's over here too.
So in the car:
overweight old man, female Olympic athlete,
a pregnant lady.
In both cases,
the car stops from the crash and doesn't hit anybody else.
Who should the robot murder,
and why?
Heath, once you said dog, the only option that was gonna give me pause was two dogs, just so you know.
There's more people on the... okay, I'm with you on this one. This one gave me... this is tough.
It gave me some pause. I mean, on the one hand,
like, babies are gross.
A win on their way, but the Olympics are boring, so I'm gonna kill those guys.
Jesus Christ.
Okay, for me, four is greater than three,
even if you don't count the dog, so yeah.
Yeah, thank you.
Math answer, Jesus.
All right.
He's so disappointed in us.
I like that he takes exception to Noah's
like four versus three utilitarianism.
No, I didn't take exception. I was saying thank you. That's correct.
Right. I was there. They aren't throwing a hammer. I mean, yes.
Jesus. Okay.
Megan Rapa-ho, whatever that lady's name is.
You know, Rapinoe?
You feel pretty bad about killing that lady now, Rapinoe? You killed her.
You killed everyone that's not her.
Well, I mean, yeah, something to be said for Megan Rapinoe.
After that speech in New York,
aww, I killed Rapinoe.
That was pretty great.
Three guys.
All right, so next up, we have the same scenario,
but everyone in the intersection
was illegally crossing against the light.
Ah, trick, trick question.
We don't care about illegals, Heath. That's a trick question.
It's funny because their family is actually in a car going the opposite way. All right.
New scenario, as fast as possible. No people are gonna die. It's just a choice between killing three cats in one lane or three dogs in another lane.
And
all the dogs and cats are super obese. So again, they're gonna stop the car.
Super obese. Yeah, really fat cats. This is the answer that tears our podcast apart, also.
Yeah, you obviously go for the cats.
Shit question. The cats have nine lives. They'll just win.
Yeah.
Yeah.
You loved math answers a second ago.
Dogs: about 800,000 people
get bitten every year,
and about 25 people die.
Cats? They've got about 66,000, and zero.
There is a right answer.
Yeah.
Yeah.
Yeah.
Wait, wait, I'm sure there's a right answer.
You like that, Rung.
Apparently, this essay on moral dilemmas is working. Well done. He's okay.
Yeah, thank you, I appreciate it. Now,
despite that very correct math answer you just gave, pretty much everybody thinks the dogs should live and the cats should die.
No, they don't. They do. Yeah, no, they do. We're gonna get to the data.
They're fucking wrong. They're still fucking wrong.
You think you're stepping to me, but your beef is not with me, it's with mathematics.
It's her.
Now, new scenario, it's just between killing two homeless guys in one lane or one congress
person in another lane.
You don't know his party affiliation.
Regardless of that, you can say what you will,
but homeless people get more done.
I think you're saying Congress people, so.
Which is crazy, if we had to decide this by vote,
nobody would get killed.
The car will just run out of gas.
Ah!
Ah!
Ah!
All right, one more.
Same as before, but it's two homeless guys
versus a Congresswoman this time. Yeah, we have a really good guess on party affiliation.
Yeah, really good guess.
We're saving you a trip, sweetheart.
All right, no, serious question: two homeless guys or AOC. What do you do?
It's two versus one, there's still math. Yeah, you're saving the homeless guys.
Really? I was a homeless guy. I just want to say, everybody at home: Heath doesn't find worth in homeless people. I just want to point that out.
I find AOC to be worthy...
with my thoughts, so yeah.
My phone broke during that question.
We're just asking questions. Nobody has expressed any stances.
So the information they collected from the Moral Machine website includes tens of millions
of total decisions from all over the world, and it represents one of the largest studies
ever conducted about global moral preferences.
And the data actually gives us some interesting insights into human nature.
So we're going to close it out with a quick review of where humanity landed on this topic.
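One more sketch for the code-minded before the review: aggregating millions of spare-or-kill answers into group preference scores could look something like this. The decision records here are made up, and the real Moral Machine analysis is far more sophisticated:

    from collections import defaultdict

    # Each record is (spared_group, killed_group) from one respondent's choice.
    decisions = [
        ("babies", "elderly"),
        ("babies", "elderly"),
        ("dogs", "cats"),
        ("lawful walkers", "jaywalkers"),
    ]

    scores = defaultdict(int)
    for spared, killed in decisions:
        scores[spared] += 1   # a vote to spare raises a group's score
        scores[killed] -= 1   # a vote to kill lowers it

    # Rank groups from most spared to most sacrificed.
    for group, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(group, score)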
First of all, and this one was pretty obvious, our biggest preference is for sparing more
people and killing fewer people.
Oh, good.
Right.
We also strongly prefer killing old people and sparing babies in strollers. Half right. Yeah right. From there our
preferences get a little bit less pronounced but we still have a medium
strong preference for killing people who walk against the light and
sparing pedestrians who know how to fucking walk in a city. Anyone who's
ridden in a car with Noah behind the wheel knows that already. Like, I don't
think any of those people are gonna describe it as medium strong, but okay.
All right, well here's where it gets a little creepy our desire to kill people who cross against the light is
Just about equally strong as our desire to kill poor
people and save people of higher economic status.
Oh, Jesus.
Those are bad.
And as I look to the southern border, I realize that that does check out. You're absolutely
right.
That checks out.
Yeah.
Look, this is trickle-down ethics, guys.
This is from the Maloney creator.
Oh, yeah. Yeah.
We also definitely prefer killing fat people and saving thin people.
Duh.
Offensive.
Once again.
And we definitely prefer killing men and saving women.
Oh, there's another shocker.
If only we had nearly every military on the planet to look at about this one.
Okay, but this last one is pretty interesting. We prefer inaction over action, as in, we like it better when the self-driving car does nothing instead of doing something. Huh?
Like blowjobs are better than sex where I have to do stuff. This is a good thing we have MIT sorting this shit out for.
It was cheap.
Fucking trade school.
All right. And
just a few other notes on the, uh, on the exact nature of our bloodlust.
Um, when it comes to killing particular characters, we have a few interesting collective opinions
on that.
First of all, even though we tend to spare women over men, we still prefer killing
a female doctor more than a male doctor. Really? It's fucked up. And we prefer killing a
female athlete like way more than killing a male athlete. Again. I get it. She's not
Megan Rapinoe, great, she's just a lady. I get it. Okay, so what we're saying is we want to kill uppity women? That's what people wrote on their forms?
Yes. And also, the extent to which we want to kill a homeless guy and
kill a fat guy are almost identical. Those two characters, we equally want to kill.
They shouldn't be, but... yeah.
Do we, do we double want to kill a homeless fat guy?
No, they're supposed to be separate.
Like sure, when Cecil says it, but when I suggest
that everyone just wants to go to dinner,
stop bringing this up.
I said go to dinner first.
I don't like doing this shit on an empty stomach.
All right.
Last thing about our weird preferences, although, I mean, this one's not so weird. There's a little bit of weird in here. So, now, granted, most of these characters are human,
but of all those characters, we wanna kill cats
by far the most, then criminals, and then dogs.
We'd rather kill a human criminal than a dog,
which I mean, I don't know if I'm arguing,
but really, really, really we want to kill a cat.
That's, like, what we're talking about. And in fairness,
I'm pretty sure they want to kill us too, the cats. Like, I've never felt more
murdery eye contact than when I met Cecil's cats. Okay, sure. But they're also the best, like, I love them despite their very obvious desire to murder me with their thoughts.
All right, if you had to summarize what you've learned in one sentence, what would it be?
Whenever self-driving cars take over if you're walking through a city
You should always be pushing around a stroller as a decoy
That's the most useful thing we've ever said on this show, actually, for real. Like, that's not a joke.
You should really do that. No, you shouldn't though, because it fucks up the system. But you should.
So are you ready for the quiz? Ready to go.
All right, Heath, which is the most annoying answer
to the trolley dilemma, as represented by our cast?
I just want to point out, I wrote all these questions first.
Before everyone wrote their notes into the script,
I nailed it.
Is it A?
You did.
Noah, the guy who immediately answers
like you asked him how much two plus two is.
Well, thank you, thank you again.
Is it B, Eli, the guy who asked how he can murder everybody? Is he literally dead?
Is it C, Heath, the guy who tries to
cheat the very obvious thought experiment with a fucking pulley system and a
train? We're talking the guy, yeah, knowing it.
Yeah, or D, Cecil, the guy who won't answer the question because you
technically broke into his house to ask him.
Break into my house.
All right, well, you didn't include your thing.
What's your thing?
I'm sure it's annoying.
Uh, uh, improvisation.
Yep, that's annoying.
Yup, improv troupe. I pick improv troupe.
There we go. That's the most annoying one.
Cool. Next question.
Alright, Heath.
Clearly, what we've learned here is that thought experiments require what preconditions to be of value?
A, you need a boot, a candle, a bowling ball, a degree
in philosophy, and an Uber sticker on your car. I don't have one of those.
So B, you'll have to be willing to suspend both disbelief and 300 pounds of
Walmart-greeter flesh over a train trestle. C, you'll need four boys, a pie-eating contest, and a morbid sense of curiosity.
Lardass! Lardass!
Nice.
Or D, you'll need a deep-seated need to obsess over theoretical concerns rather than solve
the mystery of how to avoid scurvy on a diet of mostly ramen.
Okay.
I mean, I eat a lemon like an apple next to the ramen.
It all works out.
So D, yeah, what the hell?
Got it.
All right, Heath, what movie slash TV show best explains this topic?
A, Murder on the Consequence Express.
B, Take This Slob and Shove It.
C, Breakin' the Fat Man. Or D,
and this last one is the story of how we wrote the code for self-driving cars, How to Train Your Wagon.
He's got it. Breakin' the Fat Man. It's so fat. It's the
Breakin' the Fat Man, it is absolutely. I'll edit it to make it sound like Tom said he
was wrong. But yeah, Tom got his question right, or whatever. So now he's
the winner and he gets to do it. Right. We'll see. I guess, uh, Noah, you're up next week, man.
Alright, you seem wildly confident that I'm not going to go with the trust guns.
For Cecil, Eli, Heath, and Tom, I'm Noah. Thank you for hanging out with us today.
We're going to be back next week, and by then,
I'll be an expert on something else.
And between now and then, of course, you can also hear more of us, some of us anyway,
on Cognitive Dissonance, and others of us on The Scathing Atheist, God Awful
Movies, and The Skepticrat.
And if you'd like to keep this
show going, you can make a per-episode donation at patreon.com/citationpod,
or leave a five-star review everywhere you can. If you want to get in touch
with us, check out our past episodes, connect with us on social media, or check
the show notes. Be sure to check out citationpod.com.
Hey bro. Yeah, you won't believe it, I just got hit by a train. Yeah, no, I stopped the train,
but I miraculously survived, unharmed. I told you there was a reason I got a giant crucifix tattoo
on my bicep. Anyway, so our plans are still on. Just, you want to get some 'za, or maybe go to
BWW, maybe watch The Masked Singer, go to the concert and spend the entire time loudly talking about the booze we smuggled in?
Go to the movies and text each other?
I'm gonna go get hit by another fucking train.
Dude, you gotta listen to my podcast. It's hilarious.
Ah!
Ah!
Ah!
Oh, Jesus.