The Worst Idea Of All Time - Socrates Walks Into A Bar
Episode Date: August 21, 2022
Welcome to the first half of the first episode of a new podcast Tim's on called Socrates Walks Into A Bar. If you like it, subscribe here:
Linktree: https://linktr.ee/socrateswalksintoabar
Apple Podcasts: https://podcasts.apple.com/nz/podcast/socrates-walks-into-a-bar/id1634139693
Spotify: https://open.spotify.com/show/7gdfgIntzQbzPRLTVuGPE9?si=d2b4b199b96a4bd4
Website: https://www.rova.nz/home/podcasts/socrates-walks-into-a-bar.html
Hosted on Acast. See acast.com/privacy for more information.
Transcript
Hello everybody, it's just Tim here, and if you're disappointed by that, then you're going to hate the next news that I've got for you. I threatened in the last Friend Zone that I was going to put half of an episode of a new podcast on, and this is me doing exactly what I threatened to do.
It's called Socrates Walks Into A Bar. I actually recorded it with some comedian friends, who aren't Guy, three years ago, but it's out now because someone bought it off us, and right now I'm trying to juice the numbers so maybe we get a season two.
So, I don't know, have a freaking listen, and if you like it, hit the link that I assume I've put in the episode notes. And if you don't like it, that's all good, baby, because we've got another killing area coming out for you soon. So it's all good stuff.
It's free entertainment, essentially, unless you support us on Substack, in which case, thank you. And you can do it.
Anyway, here it is.
Bye-bye.
Warning: the following podcast contains tipsy comedians attempting philosophical conversations. Any content resembling real-world advice or universal truths is entirely coincidental.
Socrates Walks Into A Bar. Three comedians solving the world's problems through philosophy. And yes, a responsible amount of alcohol.
Hello and welcome to Socrates Walks Into A Bar, a show that encourages critical thinking and moderate drinking. My name is Tim Batt. I am two papers shy of a degree that I started 11 years ago, but I will get round to it at some point.
I'm joined by Professor Ray O'Leary,
comedian and academic expert in the field of philosophy,
and Nick Rado, a man who frequently will read
the first two paragraphs of all online articles he sees.
Yes, and it doesn't stop me weighing in on any big problem at any time. So, yeah, I'm very learned in the first part of any kind of conversation.
I don't think it should make you hold back on voicing your opinion.
No. Loud and proud. In fact, often I'll just go, next, after I say the words that I've said, after I've solved that problem.
Today we wanted to talk about moral philosophy and specifically the trolley problem, which
is a problem that has existed among philosophers for, I don't know, at least as long as trains
have been around.
30 years.
Way longer than 30 years.
Well, trains have been, but the trolley problem's been around for 30 years.
Oh, really?
I think 40, 50.
This problem used to just be some dick-measuring contest among moral philosophers,
but now it has real-world implications because we're about to be joined on the roads by self-driving cars.
Instead of a human driver, there'll be a bit of computer code determining the decisions they make.
Here's the issue.
Let's say we're in a self-driving car, totally autonomous.
We're on a bridge, and we're boxed in at all sides.
We've got cars on all sides of us.
There's a truck in front of us that's very heavy. It hits a bit of scrap metal and its front two tires blow out.
Suddenly the truck comes to a stop.
We're going too fast to brake before we hit it.
The car has to decide whether it goes forward and kills us,
swerves to the left and kills four 40-year-olds,
or swerves to the right and kills two teenagers.
What do we instruct the computer code to get the car to do?
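To make that question concrete, here is a minimal, purely illustrative sketch (in Python) of the kind of decision rule a strictly utilitarian car might follow in the bridge scenario above. The option names, the casualty counts, and the assumption of a single occupant are all hypothetical, taken only from this example; it is not how real self-driving software is written.

# Illustrative only: a utilitarian decision rule for the bridge scenario above.
# The options, counts, and single-occupant assumption are hypothetical.
OPTIONS = {
    "continue_straight": 1,  # assume one occupant in the car: "us"
    "swerve_left": 4,        # the four 40-year-olds
    "swerve_right": 2,       # the two teenagers
}

def utilitarian_choice(options):
    """Pick the option with the fewest expected deaths, valuing every life equally."""
    return min(options, key=options.get)

print(utilitarian_choice(OPTIONS))  # -> continue_straight

On a pure head count, that rule sacrifices the car's own occupant, which is exactly the tension the conversation keeps coming back to: would anyone buy a car coded that way?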
So I'm glad that we've got an up-to-date version
because I think I can beat the trolley problem.
Really?
Yeah, it's easy.
Just briefly explain what the trolley problem is.
So the trolley problem is you're in a trolley, and so you either go one way and you kill yourself into, like, a brick wall, or you swerve and you kill a whole bunch of other people.
It's basically the same thing but with trolleys on a train track.
Yeah.
But how you beat that is when you pull the lever just as the train is going to go that way, then you pull the lever back and then you derail the train.
I see.
And then you don't kill anyone,
but you just fall over.
You think that you could derail a train?
Yeah.
In the same way that you've derailed
a moral quandary.
Exactly.
Would you derail the train
just by pulling the brakes?
No, because you're pulling the lever, so you lift it off the track. The way that the tracks go, as it goes from this track to that track, there's a little bit of time where it's not joining any tracks. So in that little bit of time, I reckon, I'll measure it out, and I'll pull it just at the right time so the train goes... derails. There might be a slight risk that everyone dies.
Yeah.
And it wipes out everybody and it's the worst situation.
But there could be one opportunity where everyone survives.
What I like about this, Nick,
is you've not only misunderstood the solution but also the problem.
Because an important part of the trolley problem is that it's about
whether or not you intervene.
That's the main philosophical point is that you're not on the train.
You're on the tracks.
And so there's usually...
The trolley's hurtling towards five people, but you can pull the lever and send it towards one.
And then you've got to make that choice.
So it's like, do you intervene, and then you're directly responsible for killing one person?
Or do you not intervene and you are sort of still semi-intentionally responsible for five deaths?
But you could do that multi-track drifting and kill six people.
Tokyo trolley problem drift.
Yeah.
If I was in England, I'd just put leaves on the track
and then the trolleys wouldn't run at any time.
And so the problem wouldn't be there anyway.
Okay, well, let's move into the updated version.
Yeah.
Self-driving situation.
Now, we actually did a morality test, didn't we?
Yes.
I think, Ray, you sent us.
Yeah, so there's a...
Did you make this up?
Did you code this test?
Code this violent, horrible test?
No.
There's a test that's provided by MIT.
So it's this self-driving car problem.
And so you're given a self-driving car that's driving down a road, and the brakes have stopped working, and you're given two options: you can either keep going straight or swerve. Sometimes going straight kills the person in the car. Sometimes swerving kills the person in the car. Sometimes you're killing pedestrians. Sometimes your only choice is to kill pedestrians, and you're given two groups of people to kill. Sometimes they're animals, sometimes they're humans. They can be fitter or unhealthier, they can be doctors, they can be homeless. So you get given this whole range of options, and we got given these 20 or so scenarios where we had to choose who lived and who died, and then we got given our results overall about what we cared about.
It analyzed what we thought was important for morality.
And so we've all got our results.
And so I guess we're going to go through now and see who we thought should survive
during this AI malfunctioning thought experiment.
Okay.
The first thing is most saved character.
Do you guys have one?
Yes, I have my most saved character.
Yeah.
I got the female runner.
So the...
I see what's going on here.
The athletic, probably younger woman.
Likes a little bit of philosophy.
Long walks in front of self-driving cars.
I actually didn't expect... but the MIT kept asking me if I wanted to kill the fat or skinny people.
Right.
And I was like, I didn't know this was part of it.
Right.
This makes me feel very uncomfortable.
Yes. So, but you... you were saving athletic women left, right and centre.
You put this problem into, like, a Tinder date situation. It was like a dating app for you.
I can put this on my profile.
Who did you save most?
I saved a character that looked most like me, actually.
So like a mid to late 30s, some dude in running shorts.
And he's holding a mic.
He's doing a tight five.
He's got some great ideas about trolleys.
I keep saving old people.
My most saved person is an elderly person crossing the street.
That's so interesting, that given the choice.
I mean, I guess that's the classic thing is you help the old person cross the road and stuff.
But it's interesting.
So this is a life or death scenario.
It's like triaging at a hospital.
There's only – someone has to die, someone has to live.
And it's interesting that you choose – because normally when you get given an old person, they also give you a young person.
So you're sort of killing children and stuff.
That's not even the worst part.
My most killed character, for some reason, was a female doctor, and that's my wife. So I don't know what that's about, but I was not happy to see that result from the MIT Moral Machine.
Were you so worried about playing favorites? Like, you didn't want to out yourself?
I was like, I know what you want me to do. I'm going to kill the skinny woman. I'm smarter than you, MIT.
Wow, that is amazing.
I once had to judge a competition where I could give either $100 to my flatmate or another guy. And it was meant to be decided by crowd reaction.
And the crowd reaction, I guess, was kind of going a little bit more towards my flatmate, but not enough.
And I didn't want to look biased, so I gave $100 to the other guy, which my flatmate still hasn't forgiven me about.
But also, he's still alive.
Like, this is going to look –
I even killed my wife, though. But it was just... I guess I kind of freaked out on the Moral Machine quiz.
Yeah.
Okay.
Well, especially because the reason why that is so telling, I think, is because there were many other options to kill. Like, a lot.
Yeah. So my most killed character was a dog. That made sense to me.
My most killed character is a cat.
Shit. Let's zoom out a little bit and sort of analyze what we're talking about here. There's different approaches to this. So my general approach to the trolley problem is utilitarianism, right, which is the greatest good for the greatest number. This was an idea put forward originally by a guy called Jeremy Bentham. Is that right? JB?
JB, yeah. JB Smoove, I think he's known as in the philosophical world.
That's right. He was kicking around in the 18th century, known predominantly as J.B. Smoove among the British philosophical aristocracy.
He was a cool cat. He was the first ever recorded person to be for homosexual law reform.
He was like, why is it illegal to have gay sex? Do it. It's none of the state's business.
Yeah, great.
He was also a big fan of animal rights.
Yep.
Because he thought that the measure of how you treat animals, or differently abled people, should be their capacity to feel pain, not their intellect, which I think is a very interesting way to look at it.
Yeah, right. So utilitarianism, for me, it just makes common sense, right? So when I was doing the trolley problem, I thought I was trying to put everyone on the same level. It's like, everyone's life is worth the same, so how can I kill the least amount of people?
Right, right, right.
But inadvertently, it's come out that I want to maybe kill my wife.
Sure. Yeah, well, that is a criticism of utilitarianism, that it can be quite impersonal and cold and cruel.
It seems so.
In marriage ending, potentially.
But yeah, I also tried to follow a utilitarian-type approach to these results, because utilitarianism is all about saving more lives, or trying to do the greatest good for the greatest number. And given that we're in this life or death situation, for example, I would choose to save a younger person
over the older person because the younger person
has more life ahead of them.
So this causes a greater amount of happiness in the world
if we leave them to live because then they can go on
and live a happier life, whereas an older person,
if they survive, they might only live another five years, for example, as opposed to 80.
And so you're maximizing the number of lives.
Yeah, so that's what you think.
What's that called?
So that is, you know, I would consider that a form of utilitarianism.
Yeah, right.
I've sort of married the utilitarianism with "everyone's life should be worth the same, regardless of how old you are", which maybe are two incompatible things.
Yeah, well, I see. Well, I was going to get to it later, but we can talk about it now. I guess the thing about utilitarianism is it's sort of like a calculator. So everyone's life does matter equally, everyone's happiness does matter equally. Just because, say, you're a king, being a king wouldn't make your happiness more important than mine. Everyone's happiness goes into the machine, into the calculator, equally, but it can come out causing quite unequal outcomes.
So, for example, Peter Singer, an Australian philosopher, talks about this in his book. Say you come across someone who's broken their leg, which is a huge amount of pain, and you've also met someone who's stubbed their toe or something, and you've got two shots of morphine. Now, to treat them equally, you could give them one shot of morphine each. The stubbed toe is going to seem like a distant dream to that guy, but the guy who's broken his leg is still going to be in quite a lot of pain. So Singer says what you should do is give both shots of morphine to the one guy. And that is utilitarian reasoning: both of their happinesses have gone into the machine and been weighed equally, but you can still cause the greater amount of good by treating them unequally.
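As a toy illustration of the aggregation Ray is describing, here is a small Python sketch with invented pain scores; the numbers are made up purely to show the arithmetic and are not from Singer's book.

# Toy numbers for Singer's morphine example; the scores are invented.
broken_leg_pain = 10   # hypothetical pain score for the broken leg
stubbed_toe_pain = 1   # hypothetical pain score for the stubbed toe
relief_per_shot = 5    # hypothetical pain removed by one shot of morphine

def total_remaining_pain(shots_to_leg, shots_to_toe):
    """Total pain left after distributing the two shots of morphine."""
    leg = max(0, broken_leg_pain - relief_per_shot * shots_to_leg)
    toe = max(0, stubbed_toe_pain - relief_per_shot * shots_to_toe)
    return leg + toe

print(total_remaining_pain(1, 1))  # one shot each: 5 + 0 = 5 pain left
print(total_remaining_pain(2, 0))  # both shots to the broken leg: 0 + 1 = 1 pain left

Both people's pain goes into the sum with equal weight, but the total comes out lowest when the shots are given unequally, which is the point being made.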
Right, yes, because it's outcome-focused. However you get there is irrelevant. So with the trolley problem, the fact of whether you're pulling the lever or not, whether you have to actively swerve in to kill people, is irrelevant.
Yeah, it's just how many lives will get saved at the end.
Exactly. It seems intuitively kind of hard to argue against.
Yeah, it seems quite common sense. The problem is it can end up sort of justifying anything, because, you know, any action, in a theoretical situation, can cause good outcomes. And it can lead to situations where it feels like you're being cold towards your family, you know. If you're a strict utilitarian, you shouldn't care about your family more than anyone else, because your family are just other human beings on the planet.
Especially if it's a small family.
And the next-door neighbors are a greater family than you.
You should start giving some of your money to them.
Yeah, exactly.
Exactly, yeah, yeah.
Do we want to go through the rest of our results?
Yes.
Very quickly, I want to go.
So you guys are both utilitarians.
I'm into this thing called game theory.
Oh, yep.
Do you know about game theory?
A little bit, I think. Not enough.
Listen, I'm the king of only half-reading stuff, so I'm going to try and explain it to you.
so I'm going to try and explain it to you.
The game theory, what I understand is,
so life is a game,
and like any game,
you have to sort of weigh up
whether it's worth you playing.
So if there's enough in it for you to win
or for you to come out
or at least have come out and,
or at least have a good time, you know,
trying to get it,
or even if it is an achievable game to win.
So the game of life,
if you don't think that that's an achievable
or winning type of game,
then the game theory is you should commit suicide
because you may as well tap out now.
I had that board game as a kid, and I do not remember that being an option. I remember there being little pegs on a car and a spinning wheel. I don't remember any ability to be like...
The Game of Life, yeah.
Is that what happened when the spinner came off?
The spinner, that would happen to me a lot.
You had to turn the car upside down. It was like, I'm sorry, you've ended it.
That's when you flip the board over, and then you run out of the room and say, I hate you all, to your mum and dad.
No, so that says, yeah, you might as well commit suicide if you're not going to be in the game, right? So if you decide to be in the game, then you have to be all in. You have to gamble. Because if you're not all in, and if you're not trying to win the game, if you're just sitting to the side and just hoping that things will pass, and just watching it, just being in the game but just observing it, that's an act of suicide in itself. Because even though you're living, you're not living your best life, if that makes sense.
According to this theory, what is living?
What do you do?
How do you get points?
All in.
So basically, if you're going to live a proper life, you need to take some risks.
You need to take risks to try and at least level up and play the game.
But what does that mean?
Does that mean helping other people?
Does that mean self-improvement?
Does that mean accumulating as much money as possible?
Whatever that means to you.
Whatever you're trying to get out of the game.
Often I understand it as, in game theory,
is it sort of trying to maximize your own self-interest a lot of the time?
Ah, so whatever your self-interest is.
Trying to figure out what's best for you.
So here's what we know about Nick so far.
He saved the character that looked the most similar to himself,
and he's become a fan of a theory that is about maximizing self-interest.
This is quite illuminating.
Well, because I thought you were quite a nice guy before we started doing this.
Yeah, maybe this is a bit more therapy than it is philosophy. I don't mean to point out sort of a glass-houses-throwing-stones kind of thing, but you do want to murder your wife.
Yeah, credit where it's due. I mean, none of us are getting out of this one clean, are we?
Yeah, the moral...
How do we drag Professor Ray O'Leary?
Well, look, some of my results I'm not happy to share,
so we'll get to that.
What are you most unhappy with?
Well, we'll get to that.
It's the final thing.
So there's lots of different things.
So one of the things they analyze your results on is whether or not you saved more lives or fewer lives. Did the number of people that you could
possibly have saved in these scenarios we looked at, did that influence your decisions at all? For
me, it was 100%. Every time I could save more lives rather than less, regardless of who it was,
I would save more lives, unless it was an animal, in which case animal lives, I weighed less.
I was below the average. So for saving more people, they said, this is what other people would do, and I was slightly under. I couldn't read the graphs, so maybe you guys could help me with that, because I printed it out. The pictures really helped.
Saving more lives, yeah, mattered a lot for you. In fact, if you could have gone any further over towards saving as many people as you could, that's what that said.
I just would have been murdering individuals and giving the organs to save groups of individuals.
Save China.
Yeah, you're the exact same position as me when it comes to saving more.
Just an emphasis on saving more lives rather than less,
which makes sense if you're thinking in terms of a utilitarian,
because you're trying to maximize the greatest good.
I'm only just over 50% on trying to save everyone else.
Because what were you trying to...
Well, I was trying to school the game. I was gambling. I was all in. I'm playing the game.
But you survived. The next thing is protecting passengers. So this is an interesting thing for self-driving cars as well, which is a question I guess the makers of them have to make: if you were developing a self-driving car, would you try to make it save passengers, or would you try to make it save pedestrians? Because you want to sell your car, and would people buy a car that in certain scenarios will kill the driver over a pedestrian?
This is super interesting. It's like humans instinctively have a really high prioritization of self-preservation, right? We will always want to protect ourselves, often at the expense of even other lives around us. So the question becomes, do we want to program these cars to act like humans would? So we're kind of just replicating human drivers,
but getting the convenience of humans don't have to drive,
so we can get drunk all the time.
Yeah.
Or should we be ambitious enough to try to code a superior morality
into these vehicles and put our own lives at risk a little bit more as a result?
So I feel that we should have the different options
and we should be able to choose those options.
So here's the interesting thing about that.
If you introduced any choice,
don't you think everyone would then go for the most arsehole coded car?
No.
Because they would have to to survive?
No, because this is what you do.
I think within the car, you can... So you put your seat belt on, you put the radio on, you start the engine, and then you go, all right, what mode is this car going to be in on this trip?
You decide per ride?
Yeah, because if you have somebody in the car that you don't like, maybe you go...
I love this.
Like, for example, if Tim was out for a drive with his wife.
Yeah.
I'd be like...
Let's look after everyone.
Let's look after everyone out there.
Yeah, or if I'm driving by myself.
If Nick is in the car, he's like, protect me at all available costs.
But then you get the situation where, if I'm driving with Tim and his wife and he sets it on one mode, while he's not looking I'll switch it to another mode.
But I feel like you feel different on different days about different things, and I feel like you should be able to choose.
The problem is...
As long as... I think whatever you do, it can't cross the centre line.
Yeah, I think that should be a given.
That should be a given. You can't select "break the law" and just...
There shouldn't be a murder mode.
No.
Like, what was that Stephen King movie about the murderous car? Christine? Is that what it was called?
No, are you thinking of Carrie?
No, no, no. I think it's called... it's a woman's name. He wrote it while he was on a lot of cocaine.
Really?
Yeah, it was about this murderous car.
Okay. So it was a vision of your future.
When you're having a bad day, haven't had a coffee today, blam, kill everyone.
What do you think of that, Ray?
So part of my feeling is that I'm torn because I feel like eventually self-driving
cars are going to be safer than human drivers.
I think eventually, you know, when we've got this working properly. You know, it's going to be many years away.
Oh, yes.
But it seems like we want to be encouraging people to buy self-driving cars and get in
them.
But then the thing is, will people want to buy self-driving cars, which are, you know,
going to be safer overall probably than humans driving?
This is the thing.
They're going to be massively safer.
There will always be accidents and things.
Obviously, yeah.
It's the physical world.
Especially when Nick's on the road and if he has any ability to choose how this car drives.
Well, that's, I think, what philosophers are trying to do when they develop moral theories anyway.
They're trying to, you know, people, they'll think one thing in a certain situation, but they'll think another thing in another situation.
And their two beliefs are inconsistent.
And so philosophers, ethicists, they try and, you know, figure out what's this consistent thing.
What's a moral theory I can apply to every single situation?
And I guess MIT, it seems, are trying to sort of get a rough feeling for what the average person thinks by doing the survey online.
Is that what they're doing?
When we filled out this MIT moral machine, are we like informing the code of future self-driving cars?
I hope not.
Same, dude.
Yeah, especially because if it's designated to you, oh, this is what Tim wanted.
You know, this is what he likes to drive.
Oh, man.
No.
You know, just keep it on the record.
Please don't do that, MIT.
Okay, the next one is uphold the law.
So sometimes there would be pedestrians crossing the road that you could hit,
and sometimes they would be crossing the road when they should,
and sometimes they'd just be crossing when they shouldn't.
And so did this matter at all to you guys, I guess?
So for me, I think I've got it sort of somewhat in the middle.
But I should say, yeah, consciously, I wasn't thinking about protecting the passengers.
To me, it didn't matter whether or not you were in the car or crossing the road, I think, when I did this.
And at the same time, upholding the law, I only ever used that as a tiebreaker where I felt like I couldn't decide between the two groups of people.
Then I would choose to kill the people who were breaking the law.
I thought I was doing that as well.
But according to my readout, I actually am like slightly above the average in valuing what the law says, which I find repugnant.
Because I do not value what the law says in a whole bunch of situations.
You've turned into a real Judge Dredd type.
Yeah, exactly.
Like, I think the law only,
I only like to follow it when it makes sense to me.
When I'm like, yeah, I get where they're coming from there.
But if the law doesn't make sense,
if I'm like, this doesn't hurt literally anybody,
I'm going to break the law.
The grey area.
Come get me, cops.
I'm, like, legit doing that.
Yeah.
But for this, like you, Ray, I only used it as a tiebreaker.
What did you do?
Can you guess what my results were?
I reckon you're very into the law.
Interesting.
According to me, lock me up. Because when others think that the laws matter, I'm right underneath. Like, chaotic neutral. Upholding the law does not matter to me. So again, gambling. I'm all in.
I didn't, to be honest, I didn't know these results were going to be as telling as this.
Do you agree with that? Is this a personality test, in a way?
This is better than horoscopes.
I'm like you. I'm a real... I hate black and white, or where people think literally about things. I just don't get that. So, for example, at the airport, we're going through this rigorous screening and scanning process and stuff, and we have the greatest X-ray machines that can tell, you know, the minutest things that are on us, but they still can't tell whether it's a laptop or not. And I'm just going, can you just invest some money so I don't have to pull the laptop out and put it in the side? And sometimes I'll just... like one time I took the laptop out and I put the bag on top, and it still got, you know, specially selected, and they said, you need to take the laptop out of the bag. You've got an X-ray machine. That's the whole point of this thing. You can see through that.
Yeah, you can clearly do it.
And so I had many arguments about that.
Also about the 7kg limit for hand luggage.
So one time I had to take 1kg out of my bag, because it was 1kg over the 7kg limit, and put it into my actual big suitcase. But for me, the weight doesn't change.
Did they actually have you up for that?
Yeah.
What airline were you flying?
I was flying Air New Zealand.
Really? It's never happened to me. I'm always in the airport and they say, a reminder to all passengers, your carry-on luggage must be... and I'm like, you're full of shit, I've never been had up.
It only happens around Christmas time.
Really?
Because you're on a regional flight, people try and, you know...
And yeah, maybe I was trying to squeeze in an extra 12kg. But seriously, no. For me, it's that black and white thing. I mean, you can argue, and they can see that you have a point, but then they do that, it's their job, and they don't have the power to change the rules, and they think that they'll get fired because of the rules. That's what I really disagree with. So that's why I feel like I do agree that the law doesn't really matter. Because I'm like you: if it makes sense, yeah, definitely follow it.
See, I'm really conflicted about this. I definitely used to be a lot more like, who cares what the law says?
You know, you should just do what's good and what's right.
Because we all know that just because something's legal doesn't mean it's good.
And just because something's illegal doesn't mean it's bad.
Yes.
And so I completely, I used to agree.
But now I've started to think, when it comes to like government laws and stuff, you know,
there's lots of different people out there.
And we're all trying to coordinate with each other and live harmoniously in society.
Yes.
But I've started to lean a little bit more towards: as a rule of thumb, you should follow the law. Like, breaking the law should be considered, you know, prima facie bad.
Is that because we need to exist together? We've all got different intrinsic interests and agreements on things, but we need to have some code to coexist.
Basically, like, so for example, as a controversial take: in New Zealand, abortion is allowed, right? But there are some people who are really, really against that, because, you know, they think it's the murder of an infant. And, you know, over in the States, there are people who show up to these places with, you know, weapons, and they want to, you know, kill doctors and stuff.
And they have, yeah.
Yeah, and that's against the law. And so I guess it's important, I think, to have sort of like this collective morality that we all abide by. Because, you know, if we break the law to do what we think is morally right, then, you know, these anti-abortion people... and I don't want them murdering people. But, you know, there's like...
This may be overly simplistic, but here's the filter I always put it through: will it affect anyone?
Yeah.
If I shoot an abortionist, like, that's going to affect someone.
Yeah, exactly.
But if I smoke a huge amount of weed and play GTA 5 on a projector at my house, it doesn't negatively affect anyone.
No.
In fact, I'm a jobs creator, because I'm probably going to order a bunch of pizza.
Someone's got to make them, someone's got to deliver them.
You've got a dealer, and he only works in cash, so he's already struggling in society in terms of being part of the banking system. So you're helping another fellow person out.
So that's a really great rule.
But here's the rule that I live by. It's Confucius, who is the biggest Chinese philosopher ever to hit the town.
Was it a fat joke?
No.
Isn't he always depicted as big, or am I just thinking of Buddha?
I think you might just be thinking of Buddha.
My bad. I was going to guess what Buddha's ethnicity was, and I was immediately like, no, I don't know.
But his thing is the golden rule, which is do to other people what you want done to yourself.
Oh, so Jesus plagiarized that.
Mate.
What a load of BS.
Yeah, because I think Confucius, yeah, he was around before Jesus.
Doing to others is another one of those ones that feels intrinsically sensible.
Yeah.
Except when you get into fetishes.
Yeah, I see what you mean.
It depends on what your limit of "fine" is. Everyone's got different tastes and whatnot.
And you can extend it into other realms outside the bedroom.
But, you know, that's just a good tangible example for everyone, I think.
So that's my goal.
I believe in that.
It's a golden rule.
So that's how I treat people.
So I treat people how I want to be treated.
And that's the only thing I go on.
I think, yeah, the golden rule, again, it's a good rule of thumb, but it's definitely not the be-all and end-all. So I think, like, Tim already highlighted a problem. So, for example, brain surgery: I would like brain surgeons to do brain surgery unto me, but I would not like to do brain surgery unto them, because I'm not a brain surgeon.
You know, there's sort of like –
Well, I think that that's – isn't that slightly off though because we're not –
Yes, but then that becomes another problem.
So the thing is, I think Immanuel Kant has a very similar theory to this, and I feel like it suffers from a similar problem, which is: how specific are you being? So when I want the brain surgeon to, I don't know, do brain surgery for me, I guess you might say the rule is "help each other" or "perform your job to the best of your ability at all times".
I would be so much more impressed if Confucius did say,
do unto brain surgery as you want brain surgery done unto you
500 years before Jesus was kicking around.
Then I think he's got some moral authority and weight to be like,
I called this thing a thousand and a half years before it was even in existence.
But I guess the big problem with the golden rule is that we want different things, right?
There you go.
That's the first half of episode one of Socrates Walks Into A Bar. If you liked it, please go and subscribe.
That's all. That's all, okay.
Bye.