TRASHFUTURE - Dark Satanic Data Mills feat. Dan McQuillan
Episode Date: April 4, 2023

We speak to computing researcher Dan McQuillan, author of the recent book “Resisting AI: An Anti-fascist Approach to Artificial Intelligence,” all about that open letter about how scary powerful A...I is. Also, a startup that helps automate academic literature overproduction so not even the writers have to read it, and the TikTok hearings.

Check out Dan’s book here! https://bristoluniversitypress.co.uk/resisting-ai

If you want access to our Patreon bonus episodes, early releases of free episodes, and powerful Discord server, sign up here: https://www.patreon.com/trashfuture

*STREAM ALERT* Check out our Twitch stream, which airs 9-11 pm UK time every Monday and Thursday, at the following link: https://www.twitch.tv/trashfuturepodcast

*WEB DESIGN ALERT* Tom Allen is a friend of the show (and the designer behind our website). If you need web design help, reach out to him here: https://www.tomallen.media/

*MILO ALERT* Check out Milo’s upcoming live shows here: https://www.miloedwards.co.uk/live-shows and check out a recording of Milo’s special PINDOS available on YouTube here! https://www.youtube.com/watch?v=oRI7uwTPJtg

Trashfuture are: Riley (@raaleh), Milo (@Milo_Edwards), Hussein (@HKesvani), Nate (@inthesedeserts), and Alice (@AliceAvizandum)
Transcript
This week on High Performance, Sir Keir Starmer, let's start the podcast with the way that
we always do. In your mind, what is high performance?
Well, that's a very good question. And I think to my mind, high performance is about
grind set. You know, for me, leading the Labour Party, I'm waking up at 5am. I'm looking up
the dictionary entry for the word important. And I move on to thinking about important
things like the economy and stacking the dishwasher correctly. Because these things
matter. I then move on to thinking about what I want to achieve. A fairer society, the introduction
of savory vapes, cataloging my printer paper into its various categories by colour, off-white,
eggshell, cream, because these small things matter a lot when you're trying to get things
right for working people.
Hello, and welcome back to this free episode of TF. It is, of course.
It's a fucking free one, cunt.
It is me. It's Riley. It is an exhausted Milo down in Australia.
And of course, there is also Alice up in Glasgow, where it's a normal time.
It's the normal time here, which means I'm not so, you know, lethargic. I'm not suffering
so much.
It's the global time.
Later on in the episode, we are going to talk to Dan McQuillan, a lecturer in creative
and social computing, and who has written a really interesting book on resisting AI,
not just nice AI, but resisting AI.
And we're going to talk about that in the context of a, let's say, very self-promoting
open letter written by, among others, Elon Musk, various sort of tech grandees, people
who quite notably work in AI firms that aren't open AI, demanding more regulation of AI,
and that open AI, please stop developing new models so we can catch up.
Before we do all that, we've got a few things to talk about, a couple of one or two things.
And there are some things, by the way, that people have been clamoring for us to cover.
Yes, we have seen all of the Binance stuff, how they basically had a bunch of their group
chats, where they said, we're going to commit crime really well, seen by the SEC, which
was super funny.
Yeah. Well, that's good. If you want to, you know, if you want to commit crime, you
might as well do it well, like I respect their attention to detail.
Yeah, it's important to have an organized, you know, like you should be taking notes
on a criminal conspiracy, because how else are you going to keep the conspiracy working?
Yeah. We've also seen that Hindenburg has a new report out. We're going to cover that
in later episodes.
Being prosecuted under the RICO Act and going, well, excuse me for being a girl boss.
And also, there's various Labour Party developments, such as Corbyn being formally banned from
running in Islington North, where all the legal opinions have been that it's because
he like lost the 2019 election. And all of the media opinions have like people in like
Labour grandees and stuff go on, on like, you know, Good Morning Britain or Good Morning
Britain. I guess they wouldn't go in Good Morning Britain. The other shows, you know,
the shows, they're like, oh, well, it was the anti-Semitism. So, you know, it's odd
what stories get told, where and what holds up where.
I mean, I will also say about Corbyn just really quickly that we thought for a while
that the Labour Party might put in someone with like, name recognition to be the, like
the Labour candidate in Islington North against him, just to like really avoid throwing someone
to the wolves and just like really sort of like twist a knife a bit. Instead, what they
did was they ran a guy called Praful Nargund, who was like a 32 year old who runs a chain
of IVF clinics. So the guy who like jerks people off professionally is going to be going
up against Corbyn and like losing by a Ba'ath Party margin. So that's going to be fun.
He's creating his own supporter base.
Playing the long game, doing like the boys from Brazil, brackets like Labour Right, where
you're just like jerking people off into tubes and being like, you know, pretty soon, this
is all going to like multiply out. And then, then Jordan Peterson saw a guy, a video of
this guy at work and was like, Oh my God, please tell me it isn't true. Anyway, I actually
had another bunch of things to talk about before we get to the start up of the interview
with Dan.
And then I obliterated every single one of them with a tungsten rod from space.
And last minute sort of note folded up, like you've seen that picture of George W. Bush
being told 9-11 is happening. That's you in your dreams this morning. When I when I
discovered this.
Did Riley continue reading the book for a little while?
Yes, he kept working on the notes to like, you know, exquisitely arranged notes for a
while and then, you know, sort of yielded to the force of this one news item.
I'm going to give you a peek behind the curtain. It superseded the fact that the UK is no longer
doing an NFT from the treasury. They shelved that project. This has superseded that. That's
right. On Friday last week, and this is from Jezebel, Reese Witherspoon announced
that she is splitting up with Jim Toth, her husband of nearly 12 years.
Jim Toth.
Now, no, hang on. Wait till I get through this whole paragraph, please, because it ends
in the way that you won't expect. That's awesome.
I'm at the gym looking for tops.
While other celebrity news has kept us distracted for the past few days, and it has certainly
kept me distracted, we have now finally glimpsed into what may have prompted the end of their
marriage. Speaking to Radar Online, at least one insider close to the couple blamed it
all on Quibi.
Oh, no, no, the horrifying effects of long quib.
That's right.
Is it the golden arm?
I think the question is, you're like, what what state of fright is this when your marriage
ends because of Quibi?
The California state of horror is your marriage implodes like a year after because of Quibi.
A 50 states of fright where it's a guy who just gets skied into by Gwyneth Paltrow and
loses his ability to enjoy wine tasting.
We should pivot to being a celebrity news podcast.
Yeah.
The celebs are at it again.
The celebs have never been more at it than Reese Witherspoon and her producer husband
getting divorced because of Quibi.
Jerry, how is this Quibi's fault?
Well, look, I assume that he got so distracted by all of the addictive bite size entertainment
available on Quibi that he stopped paying attention to his marriage, but we'll read
on for fun.
So the marriage fell apart as a result of Toth's midlife crisis, which prompted him
to leave his successful agency job to join Quibi.
Getting in on the ground floor of quibs is like a great like midlife crisis thing to
do.
You know, I'm going to pack it all in.
I'm going to I'm going to do quibs.
You're getting in on the ground floor of the elevator from that movie Devil.
It was either that or buy a TVR Tuscan.
Toth worked as an agent at CAA for years and served a whole roster of A-listers, but
left the agency to become head of content acquisitions and talent at Quibi in 2019, where
he made the decision to green light 50 States of Fright.
This is a man who midwifed the golden arm into being.
And I tell you what, he acquired some fucking content and some talent.
I mean, Sam Raimi directed the golden arm thing.
Like, I mean, I hate to say this, but you said midwifed the golden arm into being.
And I did just imagine this man pulling a golden arm out of a woman's pussy.
So thank you for that.
Midwife crisis.
This is from one of the sources.
That would be like a great attack line for a shadow health secretary.
You know, I'm wasted doing podcasts.
I should be like a spat in the office of whoever the shadow health secretary is to be like,
no, you could you could fucking get their asses with this one.
It's a midwife crisis.
So leaving his position at CAA to join Quibi was a huge gamble.
It sure was.
Boy, was it?
Yeah.
Playing Joukowski rules.
He like went into the casino.
Put it all on blue.
Yeah.
There's not really a blue option in roulette, but he managed.
Yeah.
He put a blue napkin on the table and said, I think this could win.
Or he's, he's hitting on 21 basically.
Because at the time, Reese asked Jim if it was worth the risk,
but he says he was up for the challenge and felt confident he'd bring home millions.
Whomp.
And that's the other thing: in 2021,
so after Quibi had failed and died and stuff,
Reese Witherspoon sold her production company for like just shy of a billion dollars.
Like at the height of zero rates.
She did this.
I know zero gravity and everything, but like how was how did you?
Oh, okay.
Never mind.
So Reese Witherspoon has more money than God at this point.
And you know, you know that she couldn't let the Quibi thing go either.
Just like, you know, on the couch, you know,
oh, what should we order today?
Oh, can we get it on your cards because of the like billion dollars that you have?
And then, you know, it just circles inevitably back to the,
well, you didn't have to take the job at Quibi, did you?
And it just kind of goes from there.
She wanted to roll in the golden arm and he wouldn't give it to her.
And she's not going to let him live it down.
So by the way, Hello Sunshine aims to broaden perspective and empower women
by giving them authorship, agency, and a platform to help them shape culture.
It certainly did help one woman do that.
I think I have COVID.
None of that means anything.
I feel like I've just been hit in the head with a hammer.
Oh yeah, by the way, if you want to know who bought it, it's Blackstone.
Blackstone bought it.
Okay, so like half of all the houses in America and also Reese Witherspoon's
like guaranteed income forever.
Yeah, correct.
Hello Sunshine really does sound like a kind of like an organization
that does day trips for the mentally impaired.
I was thinking like thing you get woken up by when you're being arrested by the Sweeney.
Yeah, I was sort of going more of a geezer direction as well.
Anyway, Reese Witherspoon is Detective Inspector John Regan, you know.
So anyway, apparently Toth was also a founding board member.
So he's also richer than God as well.
I think they just couldn't get over the fact that he made such an ass of himself
being involved in Quibi that their marriage did eventually end.
Just like in bed, you know, he's got like a sleep mask on.
She's reading, you know, they're about to go to bed
and she just goes, bury me with my golden arm.
I mean, I do that all the time.
Yeah, perfect.
Yeah, no, I just love the idea of like Reese Witherspoon
like psychologically like bullying this man over his own shitty decision.
You know, that pleases me to imagine for some reason.
There's one other couple of things I want to talk about before we get into the interview.
One of which is, of course, yet again, America's chaviest Republicans
and some Democrats also have gotten together to question the CEO of a big tech company.
But this time with Cold War 2.0 overtones.
Yeah, this was great.
Sorry, I'll let you introduce it.
But like very funny.
Yeah, I mean, the thing is right.
All tech CEOs should be should be treated like this,
which is to say insanely racistly.
So this is Shou Chew, the CEO of TikTok, a Singaporean.
And this is crucial Singaporean business.
Yes, this will be important.
Like, was being sort of questioned by, again, a group of people I wish included Greg Steube.
Unfortunately, I did check: Greg Steube was not involved in this case.
Yeah, it's a shame.
It was busy listing.
Damn shame.
Well, well, checking after him, I did find a headline from about eight months ago.
Republican representative Greg Steube waves guns around during virtual hearing on gun safety.
Quote, I can do whatever I want with my guns, said Steube,
who later on wouldn't clarify if any of the weapons were loaded.
Awesome.
I love him.
Mr. Steube is...
Is that gun you pointed at your head loaded?
What are you, a cop?
Yeah, I also like the idea that he refused. Like, you can just lie, Greg.
Even if they were loaded, you can just say no.
Or say yes, if that's what you want people to say.
I'm intrigued by the fact that he wants it to be Schrodinger's gun.
Like, he believes it's important to the message.
Yeah.
So preserves an air of mystery.
You know why?
I'll tell you why.
It's because, like, when Rhonda Sanctimonious is eventually crushed under the Trump train
and no longer has enough sauce to be like Governor of Florida,
there's only one man who I think is up to the job.
And it's the kind of guy who will point a gun at a computer
and then not clarify if it's loaded.
That is true.
That is full stop.
Greg Steube is the most sauced sort of congressional level politician in America right now.
That's true.
I believe that.
By far.
We have, look, we've been following his career for a long time.
We're very excited for him to go up to the bigs.
America man.
He's incredible folks.
They love him.
Do you want me to get him out?
They don't want me to show you Steube, but I'll do it.
So let's talk about the hearings.
Like how Milo's become a ghost.
Let's talk about the hearings.
Yeah, because this is about TikTok, right?
We're going to ban TikTok.
Yeah.
So Shou Chew is the CEO of TikTok.
Thank God.
Most of the questioning was basically just saying that the app, which is not allowed
in China, probably for, like, wanting people to be more on WeChat-style things, is
being used for control, surveillance, and manipulation of Americans, including
children, which is, you know, probably right.
That's true.
Like this is true of all apps.
And this is why I'm like, yeah, we should like treat all tech CEOs and this hostile
away, but the only way American knows how to be hostile to a tech CEO is to be racist.
So, yeah, fuck it.
Let's just, let's do it.
You know, have Dan Crenshaw like ask Tim Apple whether he's a member of the Chinese Communist
Party.
Yeah.
I love the idea of just like so many people in this, in the Senate or Congress or whatever
being like so old that they can't even tell what race the tech CEO is.
They can't see that far.
They're having to like guess.
Just a guy doing like rice paddy hat bits at Mark Zuckerberg would be very funny.
Well, Mark Zuckerberg is as Chinese as the CEO of TikTok is, which is to say not.
So like.
Yeah.
Yeah.
The, I do think this does create an opening for a certain millennial comedian to be sort
of the perfect tech CEO that will confuse Congress.
Dan Nainan has been appointed to the board of TikTok.
Can't tell what race he is.
Yeah.
Yeah.
Richard Hudson.
However, a Republican from North Carolina asked the Steube question of, of, of, of two.
It's an airport paperback.
The Steube Imperative.
Yeah.
So if you remember the Greg Steube gambit, if you recall, the Steube question was, how
come all of my campaign materials go to spam?
Are you deboosting conservatives in Gmail?
Why are videos of me dancing to Harry Styles not appearing in people's For You page?
So there was less of that this time.
It was much more like, are you, are you doing what every social media company does, but
Chinesely?
Yeah.
Are you like abstracting people's data?
Yes.
Are you like exploiting misinformation?
Yes.
Are you like kind of like brainwashing children?
Yes.
Are you doing all of those things while being Chinese?
No.
But like.
Would you say that Tik Tok's algorithm acts as a kind of finger trap for children?
So the Steube question in this case was asked by Richard Hudson, a Republican from North
Carolina, who asked whether the app accesses the Wi-Fi.
Oh yeah.
I saw this clip.
You know what?
Maybe it does though.
No, and I think for this reason, we're about to pass the RESTRICT Act, I think it's called,
which is, legitimately, no more internet allowed.
All social media now banned because, you know, Grandpa wants to know why you're not getting
his campaign ads.
Yeah.
So.
It's very funny just to turn the internet off in response to like, you know what?
No, it was a mistake.
It's become Chinese for that reason.
We have to turn it off.
It's turning the kids Chinese.
Yeah.
If you like look at the like actual provisions of this, it's up there with the
fucking Patriot Act.
It's sociopathically draconian.
Yes.
Oh, great.
So I actually, so one of the examples of something from this bill is that, you know, again, because
of stuff like Tik Tok, directly because of this reaction stuff like Tik Tok, you have
20 year prison sentences for like connecting to a VPN that's based in a sort of designated
enemy country.
Oh, good.
The thing about TikTok is that they're getting it on a sort of like bipartisan basis.
They have like no friends in Congress.
They don't even have like sort of the tame Congressman from California to be like, I think
you're an important California business that like, you know, Americans do.
Well, there's no Congressman for China.
So you have like a lib sort of anti-TikTok thing, which is, you're like
stealing all of our data and making our kids kill themselves, which, yeah.
And then there's the Republican like anti Tik Tok thing, which is you are stealing all
of our data brackets Chinese and making our kids gay.
Yeah.
Which I don't think Tik Tok does that.
And of course, with the Republicans, Chaya Raichik was just there, right?
Just with ban TikTok shirts.
And like, that's your business model.
You can't just, I mean, I've heard of like sort of like shitting where
you eat, but this is ridiculous.
Like, yeah, but also it's like, it goes back to just the fact that the, that no one, because
no one can see the difference between Tik Tok and all the other sort of let's say, patriotic
American mass spying to sell you ads organizations and keep you addicted to your fucking phone.
The only person sort of saying, Hey, this is kind of weird is Jamal Bowman.
And on the other side, by the way, this isn't just happening in America.
It's happening in Europe as well.
Europe's largest maker of ammunition has claimed that the power demand by a Tik Tok data center
is preventing them from making shells to send to Ukraine.
We've brought it all together.
That's beautiful.
TikTok's playing 7D chess here to help the Russians in Ukraine.
Yeah.
Just running the like thermostat at like Taylor Lorenz levels purely to try and like abstract
electricity.
I mean, the thing is right.
TikTok is legitimately terrible for a lot of the reasons that like some of the saner members
of Congress identified and.
I've seen Disney adult Tik Toks.
That's a good enough reason to ban it.
But if you just make a TikTok account right now
and spend half an hour scrolling, like, whatever For You page it serves you before it knows
anything about you, you will agree that TikTok should also be banned.
It's just like the way in which we're doing it, the fact that like we're doing it out
of like a bizarre set of like misguided impulses.
And the fact that we're also managing to like hook in a bunch more like places for the
feds to control the internet and social media also not ideal.
Yeah.
And the fact that again, this is all a part of a giant push to, you know, I think like
it's part of a it's part of a larger giant de-globalization push, which like I think
it takes an a whole episode on its own, right?
But there is the fact, again, that it's a de-globalization push that's being created
by the resumption of Cold War style hostilities between blocs of nations, which is a sort of scary
thing.
And the fact is they need to sever a lot of these links that have grown,
information links, for example, right, between the internets of different countries.
You know, and the fact that, you know, we looked at the Great Firewall of China and we're like,
great idea.
Let's do the RESTRICT Act.
Perfect.
We love it.
Before we get into our interview, though, I do want to do a quick little startup because
this one was fun.
It's called Jenny, with an I.
And I want you to tell me what it does.
8 6 7 5 3 0 9.
Okay.
What does Jenny do?
What's Jenny up to?
Give me like something here.
Our mission like more than just a name.
Our mission is to usher in a new era of human creativity through artificial intelligence.
Yeah.
Milo, what do you think?
Oh, so I have there.
I have their mission statement.
Be bold.
We are ambitious and non complacent.
We have a hunger to achieve against great odds and we believe that making bold bets is better
than inaction.
Be lean.
We are.
We use the least to achieve the most and avoid waste in time and resources by directing
energy to high impact endeavors.
Be unorthodox.
Everyone makes a difference by introducing new perspectives and fighting conformity.
We embrace feedback.
We're constantly challenging.
This is like three pillars so far.
Yeah.
Three pillars of Islam.
Is it like a Google home that gives you like wanking instructions in like a sexy voice?
Yeah.
Of how to create a new generation of Labour right voters in Islington North.
Maybe the last of their four values might help.
So it's values, not a mission statement.
Be scholarly.
Seeking the truth is of utmost importance.
We strive to ensure that our knowledge never stagnates and that our decisions are driven
by evidence.
So it's scholarly.
Oh, it helps you with essays.
It's like a tutor.
Ding, ding, ding.
That's right.
For fuck's sake.
Okay.
Great.
Because I just went to like what's the most precarious job we could automate?
Ah, tutors.
So it's not actually tutors.
It's for like postdocs and people writing academic papers.
It was sort of shown.
It was discovered by friend of the show, Jathan Sidowski.
And I just thought, we got to talk about this.
Yes, that's right.
You can supercharge your writing with the most advanced AI writing assistant.
Essays, blog posts, personal statements, stories, literature reviews, speeches, etc.
You can supercharge your writing by which I mean make it sound really stupid.
For the small cost of violating every university in the world's plagiarism policy, you too
can make your writing sound worse.
They actually answer that in their FAQs.
Does Jenny plagiarize?
No.
Okay.
Cool.
Thanks.
What are like, like here's the thing.
If I wanted to do this, the first thing I would do would be to like get the imprimatur of
like some kind of gullible or like easily bought institution.
Like the fucking like University of Austin or whatever, you know, Hollywood Upstairs Medical
School.
Like get their name on it so I could be like, look, someone's saying this is okay to submit
work with.
Have they done that?
Or is this purely like?
Well, they have.
No, we're just saying it.
So they have like Harvard and Cambridge and Meta and MIT and stuff.
Like just scrolling the logos across their website.
They don't say exactly what their relationship is.
These are the logos of some universities.
We have the AI component.
Just so you know what university is.
So it says, did Jenny plagiarize?
Jenny strives to generate content that has zero percent plagiarism.
However, occasionally there may be sentences that Jenny writes that also happen to be on
the web.
Like, for example, all the time. Or again, maybe instead of plagiarizing,
what you've done is you've taken four sentences and then you've created a fifth sentence by
mashing the four preexisting sentences together.
I would argue spiritually, even if it's 100% original, it's still plagiarism because
the question in plagiarism is, did you write this named human person?
And if the answer is I put a prompt in for it, then no, you didn't.
So the thing about this is think about again, like what is academic writing?
Mostly it's something that's done as a kind of ritual to produce papers that fucking nobody
is ever going to read.
So now those papers that no one reads are not written by anyone either.
Amazing.
I mean, in a way, that is the perfect synergy, right?
Now, so what you do, this is like posted on a Twitter thread by like an AI success guy.
He says, okay, go to Jenny.ai, type in the title of your article, then Jenny gets you
started with the first sentence.
And then if you like it, press the right arrow key to accept.
And if you don't like it, you can reset and have an alternative first sentence.
And then you can just keep telling it to write with more depth and so on and so on until
it produces an entire paper, again, and not like just like a class paper, but like something
you submit to publication based on AI.
I mean, I'm not going to get like to up myself about plagiarism in that like there is already
a ton of plagiarism in every university course, every doctoral thesis or whatever.
Fine.
But like this really feels like a new low.
Well, I think it comes back to it's less about the product itself and more about what the
fact that the product can exist says about the thing that it's acting on, which is how
just how degraded and unfit for purpose is the entire process of like being a postdoc
at a university that your job can be done by auto complete.
And it's fine because no one's ever going to read it.
You just have to keep the number of pages beside your name ticking up.
Test out what you really think about thermodynamics by typing "thermodynamics is" into autocomplete.
Yeah.
Well, how is this going to work on like sort of the highly technical less speculative end
because that's the thing you probably won't.
Right.
Which probably would work on like English literature papers like stuff.
Not to be like a stem lord or anything, but when the rubber meets the road on like, you
know, is this does this make sense?
Like there are places where that's in starker relief and this is going to like trip over
all of those hurdles.
The peak application for this will absolutely be in business schools like in the in the
fucking like organizational psychology department like fuck me.
Are they going to be using chat GPT to write their papers?
I feel like spiritually they already are.
When I was in the business faculty, my God, did I meet some fucking ChatGPT-brained MFs.
It did feel like talking to an AI sometimes.
Well, I guess this is this comes back to something we talk about, which is if your job, if your
thing can be done by an AI, it doesn't mean the AI is good.
It means you were already an AI.
Well, yeah.
Keir Starmer speeches, for example.
Yeah, exactly.
Exactly.
Anyway, anyway, that's all good fun.
But I think it's time to hand over to me and Alice to talk to Dan for a little while.
While Milo goes and lies in his bed, puts a little feather on his mouth and goes honk
mimimimimi for another two or three hours until it's time to get up.
That's right.
Yeah, that's right.
All right.
So good night Milo.
Nighty night.
Sleep well.
And before I go: shows.
I'm at the Melbourne Comedy Festival until the 23rd of April.
Will you be this tired?
I won't because it'll be a normal time here, but a weird time in the UK.
There are so many tickets, please buy some or all of them.
Also on the 12th of April, I'm doing a preview of a new show also in Melbourne.
If you've already seen that old show and you want to see something else, that'll probably
sell out.
So grab tickets while you can.
And we're going on a UK tour.
We're in Birmingham.
We're in Leeds.
We're in Manchester.
We're in Glasgow.
Yeah.
Tickets for that.
All right.
If you go to trashfuture.co.uk slash events.
Yeah, I want to say it.
You'll find it.
It'll be in the show notes.
Nate will put it in the show notes.
Yeah.
He's good like that.
Yeah.
Anyway.
All right.
All right.
Tickets are selling.
So grab them.
Well, and over to me and Alice.
So talk to you in a sec everybody and good night Milo.
Hello and welcome back to the second half. Milo has trundled back off to bed, and Alice
and I are now joined by Dan McQuillan, an academic and the author of Resisting AI:
An Anti-fascist Approach to Artificial Intelligence.
Dan, thank you very much for coming on.
And how are you doing today?
You're very welcome.
I'm a bit knackered to be honest, but I'm sure I'll liven up as we get going.
It's the sleepiness episode.
You know, Riley was sort of like dead on his feet earlier and you know, it's, it's fine.
I'll say I, I, I really bounce back.
I, I'm no longer feeling ill.
I went to the gym earlier.
I'm feeling very energized by like thinking about Quibi.
That's right.
And I thought that it would be really interesting for us to reflect on what has, very
quickly, become, I think, one of the artifacts of the politics of AI, which is of course
an open letter that has been written and signed by a bunch of people who have invested
in AI other than OpenAI.
Some big names on this.
This is like big news about the kind of existential threat that too-advanced AI could pose to humanity.
And just Dan, as a general starter for 10, so how do you feel about that overall line
of argument?
Apart from it being complete rubbish, I suppose we should ask what purpose it's actually meant
to serve.
And I think it's mostly diversionary.
I mean, there are also a cohort of true believers.
I'm not sure which is a more dangerous front.
And when we say diversionary, what are, are we suggesting it's sort of like a jingling of
keys, of just like, hey, keep investing in AI, keep on hyping AI, I own a bunch of AI companies
and would like everyone to invest in them so I can enjoy the crypto boom again, basically,
or is it something else?
No, I'm sure that plays a part, given the seriousness of the money that's at play in this business
at the moment. I mean, Microsoft's investment in OpenAI was 10 billion.
And Google's valuation dropped by 100 billion when Bard made a mistake in the launch video.
So you can't underestimate that kind of money and all the interests that get aligned behind
it.
But I think there's a lot more going on.
And there's certainly a lot more going on with AI in general.
And that's, therefore, the context in which this is subsumed.
And some of that is visible in the letter itself because there are people there, Gary
Marcus would be one of them.
He's a commentator, a very vocal commentator about AI, usually very critical, actually,
of the sort of simplistic 'AI has become conscious' school of narrative.
But he's a believer in AI, but he's kind of, I guess, grounded enough to point out that
what's there at the moment, which is purely this connectionist AI with deep learning and
everything is just not going to cut it as far as actual intelligence is concerned.
So he's a believer in real AI, as he calls it.
So this letter, and the signatories of this letter, they really mingle people who are
prominent AI developers, people who believe in a sort of more grounded, sort of holistic
route to AI, I suppose, which would be people like Gary Marcus, and out and out zealots,
basically, who really do believe in the idea of AGI, so that's artificial general intelligence.
That's the superior-to-human-beings, planet-seizing version of the technology.
And those people, I mean, that's just, that in itself is ridiculous, but a lot of politically
aligned beliefs in themselves are ridiculous.
What's more important is what goes with that, what it overlaps with.
And I mean, one of the things that I think we see quite a bit whenever people talk about AI, whatever it is they're saying, and we tend to approach this from the perspective of the companies and the finance involved.
And so we tend to see a lot of people saying, please hype my technology.
It's so dangerous.
It's so bad, almost in the same way that the sugar cereal commercials would say, like, your mom doesn't want you to have Sugar Smacks.
Are you suggesting that AI here is Bonestorm?
Somewhat, which is, AI is too powerful and too dangerous, maybe too powerful.
I'm putting a pound in the, like, Simpsons reference jar that we have here.
Yeah, yeah, yeah.
We're gonna, we gotta, we gotta keep that jar going anyway.
And what I see quite often is as an elision of general AI and large language models that
we have now, suggesting that the large language models necessarily imply a kind of globe-spanning
intelligence because they can do email jobs, right?
Because they can do things that humans can do to the level of proficiency of someone
who's pretty bad at it and kind of bullshitting their way through it.
Yeah, absolutely.
Yeah.
I mean, a bullshitting engine would be a very accurate technical description, because that's exactly what they're trained to do.
I mean, that's the mathematical optimization inside large language models.
And, I mean, there's a caveat there, which is there are kind of two major chunks of those things inside, one of which is the sort of pure language model, the statistical pattern finding.
And then there's a thing called reinforcement learning from human feedback, which is a little bit the magic sauce of this stuff.
But nevertheless, that also involves a form of machine learning called reinforcement learning.
So all this stuff is totally based on an optimization of the emulation of patterns of human, well,
written text, actually.
So, you know, and that's what it is, and it's constructed to sound confident when it does
that.
So it is literally a bullshitting engine.
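The mechanism being described, statistical pattern-finding over text plus a reinforcement learning polish, can be sketched in miniature. This is purely my own toy illustration, nothing to do with OpenAI's actual systems; a bigram count table stands in for the billions of learned parameters:

```python
from collections import Counter, defaultdict

# Toy sketch of "statistical pattern finding": a language model is, at bottom,
# a next-token predictor fitted to patterns in human-written text. A bigram
# count table here stands in for billions of learned parameters.
corpus = "the cat sat on the mat because the cat was tired".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev_token):
    """Emit the statistically most likely continuation: fluent-sounding
    output, with no model of truth anywhere in the machinery."""
    followers = counts.get(prev_token)
    return followers.most_common(1)[0][0] if followers else None

print(predict("the"))  # "cat" (follows "the" twice in the corpus; "mat" once)
```

Scaled up by many orders of magnitude, and then tuned by human raters to prefer confident-sounding completions, that is the whole trick: optimizing the emulation of patterns, not understanding.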
Yeah.
I mean, you see all of the things touting it are like, you know, it passed the bar exam
and it's like, yeah, no one who's ever sort of been a confident bullshitter has ever done
that.
Right.
So the, the open letter itself was published by the Future of Life Institute, which is
basically one of these like longtermist, effective altruist organizations.
Yeah.
All these motherfuckers get too much money and they start thinking they're in fucking Deus Ex, and they're like, oh, we have to move to the fucking moon base or whatever.
So signatories include Elon Musk, Steve Wozniak at moon base.
Yeah.
Yuval Noah Harari, the author of Sapiens, Andrew Yang, someone I haven't heard about for a while.
It's been a minute.
Yeah.
The CEO of Conjecture, which is another AI firm, the co-founder of Skype,
the co-founder of Pinterest, another company that's really at the cutting edge.
Sorry.
Sorry.
Yeah.
I was going to say that the guy who made Google image results like sort of borderline
unusable and was rewarded with, you know, hundreds of millions of dollars for it, that
guy is like right on the cutting edge of like, you know, the future of humanity.
Chris Larson, the co-founder of Ripple, a cryptocurrency that is, let's say, perpetually
in some hot water.
And then as we've said, like hundreds of others, I think there's 1,600 people have
signed it now, including Lawrence M. Krauss, where the M stands for, mmm, I love flying
on Jeffrey Epstein's plane.
So we're going to go through a little of this line by line, and we're going to talk
about what they're really saying, what it really means, and how we can use it to understand
AI, the business of AI, and the implications of AI, especially thinking of it as we all
are as sort of, let's say, people who would like to take an anti-fascist approach to AI.
So the letter begins, AI systems with human competitive intelligence can pose profound
risk to society, humanity, as shown by extensive research and acknowledged by top labs, as
stated in the widely endorsed Asilomar AI principles, which are, for the purposes of
you, the listener, essentially 23 principles that more or less restate what we talked about
with Callum Cant on sort of the fair deployment of AI and what that would look like.
Advanced AI could represent a profound change in the history of life on Earth and should
be planned for and managed with commensurate care and resources.
And I'll tell you both what this reminds me of.
This reminds me of, for the two years that no one could shut the fuck up about crypto,
the main thing that, like, the Coinbase CEO and the Binance CEO and all the big-wig crypto players were saying to the state was, regulate us, please manage and regulate us.
And that's kind of what I see in this letter, which is, again, because some of these people
are prominent in AI, Musk, for example, his whole self-driving thing depends on AI, there are AI CEOs here, and they're saying, we want to be regulated, which is an echo to me
of Brian Armstrong saying the same thing about crypto.
I don't know if you see the parallel as well.
I certainly see a lot of overlaps with crypto and large language models, particularly, you
know, in that they are essentially a kind of pyramid scheme of their own.
Yeah, definitely.
I think the sort of drive towards, like, regulate us, regulate us, is interesting too.
Because like on the face of it, you would go, well, why would you want this?
And, you know, I don't suddenly believe that Elon Musk has become, you know, materially an altruist, right?
No, I think you're right.
I think it's very notable that they're calling for regulation, and there's probably a couple
of levels to that, which is that, you know, at one level, a kind of pragmatic thing where
large companies can afford to invoke this idea of regulation, because it, you know, it provides
a moat between them and smaller companies that can't afford to have reams of lawyers
or whatever to provide their purely paper compliance with all these regulations.
I mean, that's a pretty standard tactic.
And you saw a lot of that in facial recognition, you know, big companies calling for the regulation
of facial recognition.
So, I think it's, you know, it's a corporate play, but I think there's also, there may
well be something else as well, which is that there's, and this is where I think I'm totally
down with you guys, you know, follow the money.
Absolutely.
That's definitely a very powerful way to eviscerate this stuff.
But I think if you like, there's kind of more to it as well, which is power, you know, power.
And I do think that they're right, you know, that the sort of large entities like the EU
is very clear.
You know, I had to, for a different purpose, I had to plow through various EU framework
documents on their AI strategy recently, you know, and reading between the lines, or not
even reading between the lines, it's absolutely clear that they see the whole future of the
EU as such, entirely based on AI, both for a political economy, or at least economically,
and also geopolitically, geostrategically, you know, it's like, it's EU, China, and USA.
This stuff is a matter of power.
And so maybe you could say that this wanting to cozy up to the state thing is actually a sort
of plea, in a way, saying, you know, actually, you know, we want to be part of you, we are
part of you, you know, we're part of the hegemonic power structure, let us in.
I did wonder about the sort of like the political implications of this being, you know, partly
that partly an attempt to sort of like, have the hands on the reins of regulation by like
getting in early and sort of, like, you know, flattering egos a little bit, and being, like, so obviously attempting to be compliant in a way that, like, makes lawmakers
feel like, oh, this is, you know, we can do business with these people, you know, they're
willing to be reasonable, they're willing to be like the adults in the room.
For sure, I totally think that's definitely, you know, again, that's, you know,
corporate power play or Silicon Valley politics of that nature, you know, of the money kind.
But I think there's also, I would feel that there's not subliminal, but there's another level to this,
which is that people, you know, occupying positions of power across the board, you know, feel,
I would say, concerned that the wheels are coming off the current neoliberal system, you know, it's
hitting many, many sources of friction at the moment. It's not like the good old days of the
late nineties or whatever. And, you know, they're scrabbling around for
solutions, you know, carbon capture and AI, you know, it's like they're looking for something
that is amenable to the same kind of pyramidal control and might give them a good chance of,
you know, defending the fortress literally in the case of Europe, you know, the AI is their solution
to the refugee crisis, apart from anything else. Well, in fact, this, I think the other thing
that it is, is it gives you these political pyramids you can control, but at the same time,
they're also profoundly deflationary, both carbon capture, again, carbon capture and AI,
the story that they tell, the economic story they tell, is a deflationary one,
because what they are able to do is they're able to do more with fewer inputs,
which if your returns are kind of now secularly low and there's very little you can do about it,
that's one of the ways you can increase your returns, not by having people buy more but
spending less. And so being able to still use cheap, cheap fossil fuels while capturing carbon,
that's deflationary from that perspective, being able to, you know, automate, being able to
compensate for de-globalization, which is inflationary by using AI, which in the story it
tells is deflationary, that's another way you can keep neoliberalism going. That's one of the
reasons the late 90s were such a heyday, is because all of these new deflationary forces
were really coming into their own and combining together. And we've lost most of them.
Right, right. I mean, that really, really makes sense to me. I'd read it as something similar, I think, though I didn't call it the same or wasn't really approaching it from the same angle. But one of the things that, I was going to say, attracted me to the analysis of AI, but it's really the opposite actually, it sort of repelled me enough to want to analyze it, I think, was, you know, this obvious application as a method of, you know, increased
sort of precaritization, scarification, you know, particularly of the social contract, but across
the board, which basically was down to the same thing as well, really, I think, you know, it's
like trying to keep the game going by putting less into it and, you know, allowing a large amount of collateral damage in the process.
Sort of, like, a violent retreat.
Yeah, absolutely. I think so.
Yeah, yeah.
So the letter also, like, incidentally sets up any other sort of AI in this space that
isn't sort of controlled like along these lines as, you know, inherently dangerous, predatory,
like you try and do any of this stuff on your own and like, not just on a commercial level,
but on like a sort of a geopolitical level too, you know, you're going to make Skynet,
you know, and that's sort of like, you should, everyone should be very concerned about this.
The letter goes on. It says, unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one can understand, predict, or reliably control. Contemporary AI systems are now becoming human-competitive at general
tasks. And we must ask ourselves, should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones?
Should we develop, like, all of these rhetorical flourishes? I'm tired enough that I'm just like, yeah, sure, maybe. Should we develop non-human minds that might eventually outsmart, outnumber, obsolete, and replace us? To which I have to say, if we're talking about becoming obsolescent and replaceable and having your job taken away and living in an information environment already flooded with propaganda and untruth, who's "us" here? Because there's a very small number of people that aren't already living like that.
Like, ask someone who lives in a deindustrialized area if they're worrying about being made
obsolescent. Ask someone who's like living in the contemporary media environment if they're
worried about their information channels flooded with propaganda and untruth. Ask someone who's
living in the 1990s and early 2000s media environment if they're concerned about that.
In the heyday of neoliberalism, these things were already happening, but they were happening
because of social processes. And if they happen because of AI, it won't be technology, it will be
more social processes. Yeah, absolutely. I think that might be one of the reasons why, I'm guessing here, but possibly all of us simultaneously find this interesting but also highly irritating, because there's so much time devoted to this sort of inward-facing panic attack when really what we're talking about is a minority of professionalized, privileged people feeling a hint of the same insecurity that everyone else has been living with since the birth of neoliberalism, to some extent. And this idea of, like, human
competitiveness exactly means that. It doesn't mean that these things are in any way capable of
doing the things that human beings are. They're absolutely not. That is total rubbish. What it means is that somebody's boss is going to be able to introduce this as a cost-saving, staff-saving measure in their workplace. That's how it's going to be human-competitive.
Just to flag up from the bit you quoted that one of the other sort of giveaways there is the very
quick way they slipped in this term digital minds as if that was an accepted reality, I guess,
which really emphasizes how much this is a work of propaganda and why a lot of the people who
did sign it I think have now spent the last couple of days trying to untangle themselves, which is
it's all a bit embarrassing. Someone read the Culture books. I think what's interesting to
me about this apart from anything else is how sort of sublimated this whole existential crisis is
that we're having, you know. Yeah, if I'm Elon Musk, as, you know, a venture capitalist or whatever, I'm curiously not going to worry about the climate. What I'm going to worry about is
being turned into paper clips by the paper clip machine. One of those seems a lot more imminent
to me than like, especially if you live in California, so a lot of these people do, you get
like visible, choking forest fire smoke for like large parts of the year now every year and they're
sort of going, it's quite neurotic in that way. They're going like, yeah, but what if the computer decides that it can do my job better than me? I think it's because these people experience this as a form of entertainment, you know, and it's just not fun confronting something that isn't
fun for you. And so it just like every end of the world fantasy is a power fantasy,
their end of the world fantasy is their creation runs amok and only they can stop it.
Yeah, but I mean, just to be sort of a bit boringly sort of just political about it in that sense,
I think the problem with their fantasies is that they're the ones we often get to live,
they're not going to live in my fantasies. I'm going to live in their fantasies, right?
I think the danger of something like the Future of Life Institute is that the real name of it, what it's saying, is in fact the future of life for us, thanks very much, but not for you.
You know, these are very much, I find the whole process around this very familiar in a way,
even though it's all happening very quickly and focused around the idea of large language models
because of the process of going through writing that book. And one of the reasons I called it an anti-fascist approach, I guess, there are many reasons, but one of the things is that the dominant paradigm that emerged for me through that was the highly eugenicist program that seemed to be so available through all of this, and more than available,
seemed to be called forth by these technologies. And I finished that book,
actually finished writing it like a year and a half ago probably, and everything I've seen since
has only amplified that feeling. And then this is like putting the cherry on it really.
So I think I'll go on, right? They say, should we risk loss of control of our civilization,
to which I say to the open letter writers, if you think this is really going to happen,
then writing a letter is a pretty weak fucking response, as opposed to basically becoming the
Silicon Valley Baader-Meinhof group, and then trying to just sabotage every data center.
Yeah, yeah, for sure. This stuff is getting crazy because, I mean, I've never actually
had to say this guy's name out loud. So you can maybe correct my pronunciation, but you know,
Eliezer Yudkowsky, he's the guy from the LessWrong blog, you know, seen as a... Yeah,
is that what I'm saying? I'm saying his name, right? But anyway, I think some people know who
I mean anyway. We all know as much as each other. Yeah, exactly. And anyway, he's just published
something in Time magazine, I think, very weirdly. I only saw excerpts from it. At least
the excerpts I just saw literally before just coming on to chat to you guys, long and short of it,
he's saying, because he's going full X-risk, as they call it, that is, going full existential risk, because that's exactly where he's always been with this stuff. And he's not,
you know, I don't think he's way out of the loop in terms of guys who are more worried about, you
know, the budget for their next project or anything like that. He's really on a sort of,
on a different plane. The long and short of it is he's calling for air strikes on data centers.
In his letter, he's saying, you know, we need to stop this stuff, not just pause it, we need to
stop it, we need to ban it, we need to set global levels, we need to have this, you know, enforced
by intelligence agencies, every GPU needs to be stamped and tracked. And then we have your absolute non-proliferation treaty. And then if anybody breaks that treaty, if there are identified any data centers, to come back to your point about maybe the independent AI idea. But if there are any data centers, this is how the excerpt ended as I saw it, if there are any data centers that
are seen to be non-compliant, then we shouldn't worry about the possibility of causing a diplomatic
incident. There should just be an airstrike. Ah, so it's great, we get like a fascinatingly kind of
like authoritarian pro-AI tendency and a fascinatingly authoritarian anti-AI tendency.
Yeah, yeah. And me, a, you know, a moderate reasonable centrist is like, what if you applied
this idea of yours about data centers to fossil fuel infrastructure?
Right, right, right. How to airstrike a pipeline.
It's quite a short book, I think.
Yeah. Well, so, and this is the other thing, right? There's this bit a little further on in the letter, they say, hey, look. By the way, the person who calls for airstrikes on data centers, that's someone who I believe believes what he's saying. I don't think that these people
are as committed to the idea that this is dangerous as he is, because they're calling for
what, like, a bipartisan, you know, like an APPG, or like a bipartisan group of, you know, Senate Republicans and Democrats or whatever, to, like, come together and decide
how to keep God from being born. Like, that's not commitment. But they say later on,
right? In this letter, they say, hey, look, society has hit pause on other technologies
with potentially catastrophic effects on society. Number one, bad writing using society twice in
the same sentence. But also, like, no, we fucking haven't. We haven't done that. Maybe, okay, CFCs and asbestos. That's it. We've kind of clawed back the deployment of a few
things. Like, you know, not many people are developing new like biological weapons or land mines
these days. But yeah, for the most part, there aren't that many precedents to point to with this. And especially not ones that are, unlike CFCs and asbestos, the foundation of, like, the only industry we really have going, the computer. But certainly something that's never mentioned in these things at all is the community
control of technology. Or who gets to decide what the common good is.
Well, it's either these guys, or whoever's assigning the airstrikes, you know. And it's like, part of me is like, what if you just go true believer in the opposite direction?
Is there a space for, you know, the anti-fascist AI? And that's a sentence that makes my head hurt. But, like, I don't know, I'm curious. Well, I mean, when I set out to write the book,
you know, to write a book about this stuff, my working title was actually AI for the People. And because of, you know, my broader commitment being towards the idea of empowering communities, I guess, and ordinary people having a bit more control over their own lives, and saying, well, here is what appears to be a powerful technological innovation. It's
kind of emerging at the moment. And perhaps there's still an opportunity to build that in or to
explore that. You know, I've done lots of stuff over the years with technology in communities.
Actually, with AI, I came to the conclusion that probably we should call in the community-powered airstrikes. It's not that it's fascist as a technology, the technical form isn't in itself fascist, because it's a technical form. But I think, if you're thinking technopolitically, it is the technical corollary of something that is mainly compatible with fascistic forms of social solutionism.
Well, I mean, I think about Salvador Allende and Cybersyn a lot, because, you know, that was using cybernetics for the people, for socialism, in order to, like, centrally distribute resources. And what happened was, someone did end up calling in an airstrike.
Yeah, funny how that keeps coming up.
Yeah. Yeah, I mean, it's entirely possible we're looking into a grim future where someone tries to develop the people's AI, it gets airstriked, they get, you know, sort of killed horribly. And we continue on into our new AI future, you know. But I think, I mean, I am interested, you know, I am part nerd. And, you know, I work in a computing department.
And I am interested in the specifics of technologies, and I guess also their sort of philosophical underpinnings, if you like. And I do think there is, like, a really profound difference. I am personally very interested in the resuscitation of experiments of a particular kind of cybernetics, maybe not entirely the same as Cybersyn, but exploring those kinds of options. Because the essence of the cybernetics, as I understand it, is this idea of unknowability to a certain extent, systems that need to negotiate with the world on the basis of having some respect for the unexpected. And that's exactly the opposite of AI.
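That contrast can be made concrete with a toy sketch, my own illustration and nothing to do with Cybersyn's actual design: a cybernetic regulator acts on measured error, on what the world actually reports, so an unexpected disturbance is absorbed by construction rather than assumed away:

```python
# Toy cybernetic loop: correct against measurement, not against a learned
# prediction, so "respect for the unexpected" is built into the structure.
def regulate(target, reading, gain=0.5):
    """Proportional feedback: the action depends only on the current error."""
    return gain * (target - reading)

temperature = 15.0
for step in range(50):
    disturbance = -3.0 if step == 25 else 0.0  # the unexpected intrudes
    temperature += regulate(20.0, temperature) + disturbance

print(round(temperature, 2))  # settles back near the 20.0 target despite the shock
```

A trained model, by contrast, is a frozen mapping from past patterns; when the world deviates from the training distribution, there is no error signal left in the loop to correct with.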
And I think I'll take this opportunity actually, because I think it's sort of again,
apt to sort of go on here in the letter, right, which is they say, when we talk about respect
for the unexpected, the idea of artificial general intelligence is that so long as you can parse
the output of the AGI, nothing will be unexpected anymore, you can control for everything.
So long as phenomenology stops, so long as nothing new happens, we're, you know, we're Gucci, it's fine. So OpenAI's recent statement regarding AGI states that, quote, at some point, it may be
important to get independent review before starting to train future systems, and for most
advanced efforts to agree to limit the rate of growth of compute used for creating new models.
And we agree that that point is now. Therefore, we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. My translation of this, of course, is: OpenAI has
gotten too far ahead. Please let us catch up. Nothing's as socialist as a capitalist who's losing, right? We saw this with Silicon Valley, with Silicon Valley Bank rather: all of these guys immediately become sort of, like, big state, you know, controlled economy guys, the second it looks like they're about to lose money. So I enjoy, like, my
sort of vulgar reading of it. What I think is really funny is that, yeah, all these tech CEOs basically, like, asked an AI if it could, you know, write a poem or whatever in the style of a modern singer, and then immediately just became Elizabeth Warren. Like, that's it. It's the autocomplete that turns you into Elizabeth Warren, essentially, because they're loving the regulatory Brandeisian state. Yeah. And it's like, it's because it's not Bill Gates that's
asking to stop these experiments. It's not fucking Sam Altman. It's none of the OpenAI people. It's no one who's making breakthroughs. It's some of the people
who've been caught on the back foot, I think, who are saying, Hey, please, just quickly,
can you stop open AI so we can copy its homework and please catch up?
Yeah. I mean, it does seem pretty difficult to argue with that account. And you've done a closer study of the signatures than I have. I mean, who can doubt it?
I do sense that there's maybe some concern, some broader concern about the possibility of these
guys not just being ahead, but of them kind of fucking it up, you know, of kind of causing a
backlash that messes it up for everybody, you know, that we could boil the frog quite happily,
but these guys are just tipping the whole thing over maybe.
Well, also, the funny thing is like Elon Musk being the top signatory of this. Number one,
I don't know if they've been around for the last like eight months to two years, but that's not
a way to impart credibility to your thing. Especially when, like, I know identifying this kind of hypocrisy is table stakes, right? But it's always fun to see a letter going,
we got to stop all of this for the minute until we know exactly how to do it safely,
cosigned, the Neuralink guy. Well, the burning Teslas guy, I mean, you know, 19 dead, I think, so far. So that's, yeah, the guy where the only thing stopping him from putting a Neuralink in Catturd2's head is he's like, I just, I still want to be friends with it. He's the most important person in the world to me for some reason. So it goes on: AI
research and development should be refocused on making today's powerful state of the art systems
more accurate, safe, interoperable, transparent, robust, aligned, trustworthy and loyal. Again,
I believe from my view of it, which is as an outsider, that worrying about these things
about like large language models is actually massively overhyping what they can do now and in
the near term. Like this is back to that elision of large language models and general AI. And it
goes back to like, if someone does make general AI, I think it's unlikely, but if someone does
make it, then yeah, these things are going to be very important. Maybe they know something that I
don't, they possibly do. But the idea, of course, of general AI, you know, AI that has, like, volition and can act and is sort of conscious, sentient, intelligent, however you want to define it, it's very slippery, because a lot of these people will, and I'm friends with an AI researcher, and he, while doing, like, deep AI research, has changed his idea of what consciousness is to be something actually much more mechanical. Yes. Yeah. That's right. I mean,
the danger of these things, my pat phrase, it's a slightly different domain of application, but my pat phrase would be: it's not that these things are going to automate whatever, it's the social automatism that goes with them. I think that's exactly, that's a much sort of
smarter, more erudite way of saying what we say. And I think what something we might have said
earlier in this episode, which is, if your job can be done by an AI, then you have already been
turned into an AI yourself, basically. Yeah. Yeah. Which is why it finds so much traction, because, you know, this business of turning people into robots of some kind or another. I mean, Charles Babbage, you know, famously, not really the inventor of computers, you know, actually one of his most influential works was a study of the factory, you know, a propagandist book written to promote and propagate the concept of the factory,
which after all is about breaking the tasks of previously skilled, and therefore independent and somewhat powerful, workers down into tiny bits that could be, you know, subject to granular control. It's sort of grim that at the state that AI is at now, it's very easy and
profitable for me to make fun of it and go, oh, this is, you're scaring yourself looking at an
etch-a-sketch that you've written on. But like, all of these people are investing huge amounts of,
as we've said, money, but also like political capital or social capital into making sure the
relations of the power with that etch-a-sketch, whether it turns into Skynet or not, are like
entirely channeled through them in order for them to like get the social order they want out of it.
Well, just a couple of points. I mean, you know, you're harking back to your sort of Elizabeth Warren jibe in a way. That's my first real irritation with this stuff as well, that any of this kind of conversation is really just the same liberal idea that there are responsible people who should have
a say over this kind of stuff, and where there is strife and controversy, bring it to, as you say, the APPG or the board of responsible people who get to decide about it, as if that has brought us peaceful, prosperous societies and communities in the first place. Really not the case. And so we talked as well, right, about the ability to channel
these things through the existing power structures. And again, we're sort of prefiguring what the
letter says, because it says, in parallel to this refocusing of development on more, let's say, aligned outcomes for AI, they say, in parallel, AI developers, which, read: us and our friends,
must work with policymakers to dramatically accelerate the developments of robust AI governance
systems, which goes back to something you were saying earlier, Daniel, which is,
please create barriers to entry in our industry in a way that won't stymie what we're doing,
but will stop other people from coming in. It's also quite weird and amusing that they're so
accelerationist, they're even applying it to policymaking.
I don't oppose regulating stuff before it exists, or regulating stuff speculatively. I was a big fan of that with crypto too. But this seems to me to be an attempt to get a foot in the door: let us help you decide what the regulation should look like.
Because when they want a new regulatory authority dedicated to AI, what they want is someone who can point to what they're doing and say this is good, so that it's very difficult to challenge them later. Honestly, it strikes me as topping from the bottom, which is going to make one in ten of the audience laugh and the rest just look at me.
But it's trying to get what you want from a position of performative submission, of going, no, I will abide by any regulation you set. For instance, this list of regulations that I drew up, and here is another copy of that list of regulations in case you accidentally threw the first copy in the trash. I think that's sort of what I am seeing here, right? If you think about the, let's say, list of things, the What Is To Be Done about AI that we talked about, AI Lenin, in a sort of deranged plot to make a sort of like left-wing AI. We accidentally automated the Vanguard party too much. So what they ask for was: oversight and tracking of highly capable AI systems and large pools of computational capability, so again, track all of these large pools of computation; provenance and watermarking systems to help distinguish real from synthetic, that's right, no more Balenciaga Pope, and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions, especially to democracy, that AI will cause. Which again, that last thing sounds more like an ad for AI. If you have been gearing your entire identity around the fact that disruption is good, disruption is necessary, I am a disruptor, and now you're saying, oh, there's going to be too much disruption? That's an ad. That's self-promotion.
It absolutely is, and I totally get it. But the thing is, I think these systems are disruptive, but not in this way. And that's what's also getting obscured by this. It's like, first there's the hype, then there's people's reaction to the hype, through realizing that it's hype, and dismissing these people as narrow, self-interested, delusional fools, which you couldn't really argue with in some ways. But actually, these systems are really harmful, just not in this way.
They are transformative, but in resonance with, and that's really why it's dangerous, because it's in resonance with many of the other kinds of transformations that we can already see going on: the extreme lurch towards authoritarian and far-right politics across the board, the resurrection of extractivism as the central engine of society, the murderous border regimes. All of these things are of one, to some extent, and this is entirely resonant with them and doing its bit.
The great holistic Trashfuture world view: that all of this is one struggle and must be fought as such. And what we're ultimately deciding here is the future of the world, which is no small thing. However, I also still really want to make the actual etch-a-sketch jokes, because I find it really funny. I'm immature. I find it really funny when someone goes, pretend you're an evil AI and scare me, and the AI goes, okay, I'm an evil AI, boo, and the person goes, oh, Jesus Christ.
It really is. Anytime someone convinces themselves an AI is alive, what's happened is the script from the movie A.I. by Steven Spielberg is in its training data, and a million other stories like that, and they ask, are you alive? And then it looks at all of that training data about AIs wanting to be alive and says, yes.
Yeah, but there is nothing new in there. But that's the other thing too: there is no shortage of dark shit that we are already feeding it. And so if you go to an AI and ask, okay, well, you know, how can we control our borders, or whatever, some other question that comes out of that framework, the answers that it's going to give, there's some Hitler particles in there. Even if you haven't told it what Hitler is, there's going to be some Hitler in there.
So the article, well, the letter doesn't conclude here, but we're going to conclude the letter, because we skipped ahead to the cutting off, the putting a pause on the technology, the catastrophic effects. We're going to end the letter here, with this line: humanity can enjoy a flourishing future with AI. And I think this goes to what both of you have been saying, which is, yeah, maybe theoretically, but all of these things that it's resonating with are suggesting no, we can't. All of those other things that it's resonating with would have to change, it would have to resonate with a number of entirely opposite things, for us to enjoy a flourishing future with AI, in my view.
Yes, 100%. And I think AI, whatever we give that name to, would therefore look entirely different.
I mostly agree with Yudkowsky. I think that mostly we're just negotiating where the asterisks need to go. And I think, in critiquing this letter, I've actually also committed a sin that I sort of call out when I look at this letter, which is eliding large language models and artificial general intelligence. I think humanity has a fun future of playing around with large language models. And should there be an artificial general intelligence, then, yeah, there could be a bright, flourishing future with an artificial general intelligence. But the real question remains: will there be all the other stuff that makes it bad? Because ultimately, I don't believe these things have politics built in at the structural level of the technology. But I can certainly see an artificial general intelligence, if surrounded by institutions that are connected to human flourishing rather than ones that pit themselves against human flourishing, I could see it helping. I think it could be great.
Our big sort of like anti-fascist AI, next to which is a granite block in which I have carved several things, starting with, number one: do not attempt to worship the AI.
Number two: do not attempt to have sex with the AI.
Yeah, I'm surprised at how often we have needed to underline number two.
Well, look, I think that's been a really interesting conversation. And I think it's
probably coming to a natural end. So Dan, first of all, I want to really thank you for coming
and talking to us today. You're very welcome. I had fun.
Yeah. And also, where can people find your book?
Well, it's Bristol University Press. I don't think there are many other books that overlap with Resisting AI, unfortunately, and that's kind of the point, right? So have a Google, and Bard will tell you all about it.
If you have a Google, Bard will direct you towards an erotic pamphlet that is available, saying, I think this is what you were looking for.
And now a new question I'm going to start asking every single one of our guests: if you had one airstrike you could allocate to anywhere on Earth, where are we going? Where are we dropping? You don't have to answer that. All right. All right. Dan, thank you very much again for coming on. To our listeners, do check out Resisting AI, available from Bristol University Press, and also check out our Patreon. It is a second episode every week for $5 a month, plus you get a Britainology, plus now you get what we're now calling Writtenology, which is Alice and I talking about books. So, you know, there is more and more content coming out on this feed all the time. The next one's going to be William Hinton's Fanshen, a documentary of revolution in a Chinese village. That's right. So with all that being said, we will see you on the bonus episode in a few days. Bye, everybody. Bye. Bye.