TRASHFUTURE - *PREVIEW* American Herstory X feat. Jathan Sadowski
Episode Date: May 30, 2024

For this week's bonus, Jathan Sadowski of This Machine Kills podcast joins the gang to talk about the "new" generation of AI models announced by Google, OpenAI, Microsoft, and others - all of which seem to want to imitate the movie Her. The value proposition of this to a normal person is unclear. Get the whole thing on Patreon here: https://www.patreon.com/posts/american-x-feat-105234291 *EDINBURGH LIVE SHOW ALERT* We're going to be live at Monkey Barrel comedy at the Edinburgh Fringe on August 14, and you can get tickets here: https://www.wegottickets.com/event/621432 *STREAM ALERT* Check out our Twitch stream, which airs 9-11 pm UK time every Monday and Thursday, at the following link: https://www.twitch.tv/trashfuturepodcast *WEB DESIGN ALERT* Tom Allen is a friend of the show (and the designer behind our website). If you need web design help, reach out to him here: https://www.tomallen.media/ Trashfuture are: Riley (@raaleh), Milo (@Milo_Edwards), Hussein (@HKesvani), Nate (@inthesedeserts), and November (@postoctobrist)
Transcript
Real quick on this as well, I don't think the endpoint is that we will like constantly have
conversations with our phone. I think for them that's actually just a necessary
step along the way to the actual endpoint, which is that conversation becomes obsolete because the AI
can so precisely predict exactly what you need before you even think to ask for it.
You no longer have to directly
interact with the phone.
It becomes a one way interaction
of the AI feeding you.
Oh, you want to know what the weather is?
I knew you wanted to know
what the weather is before you even knew
you wanted to know what the weather is.
Here's what the weather is, right?
Like that is, I think, actually their thinking here.
This is why, you know, all these models are predictive
models to varying degrees, whether it's predicting
what the next token in a linguistic
chain is, or it's predicting what
somebody wants, you know, out of
a search result, or it's predicting
what you want before you know that you want it.
That's their actual mindset here. It's not that it becomes
this constant companion that you're chattering away with all day, every day. Instead, it becomes
this thing that makes chattering away obsolete because it's feeding you information that you
then just receive. So there are a couple other sides of that. Number one, I think the other side
of the multimodal model is, yes, there is the one that individuals will use as consumers. I
take your point, Jathan, as well.
This is especially true if you look at the other companies, Microsoft and Google especially,
whose all their new product announcements are just like, let us intrude in more of your
life. Give us access to everything and we'll let you search your own photos or
whatever.
But the other side of it as well, and we talk about companies like this in the show quite
a bit, is people who are trying to turn this into industrial functions. Right now, there
are a lot of companies who are trying to build something like some kind of ChatGPT Omni-type
interface for warehouses, so that you can
have a robot that quote-unquote sees.
And again, the ambition is to finally be able to own a person because you just make the
person and then you own them. But the reality of that is probably going to be that,
because they're all just behaving predictively based on averages,
these things are going to be unpredictable, and things are going to get much worse wherever
they're added into industry.
Now again, none of these demo days are about that, because none of these companies
want you to be thinking, "we're going to make a human that's going to
be able to take your job, for example,
or that you're going to have to interact with." But again, I think that's the vision. Although
Jathan, I don't know what you think about that.
Obviously with these demo days, we have to talk about them. But not because they're actually
revealing something about what technology is going to be like, but more because they're
revealing something about what these companies desire, what goals they're working toward.
But we always have to keep in mind
that this shit doesn't work, and it's not going to work in anything
like the kind of, you know, these
totalistic visions that we're laying out.
Those are not predictions or prophecies of what's going to come to pass.
These are more us explaining the imaginary and the desires that power these technology companies.
In reality, it's not going to look like Brave New World or 1984, right?
This super totalistic and really highly effective kind of dystopia.
My touchstone for this is always something more like Philip K.
Dick and Paul Verhoeven: it's dystopian because nothing fucking works, but people
still keep shoving it down your throat and still keep giving it power and authority.
Right.
It's ED-209.
It's the robot that doesn't disarm when you tell it to disarm and thus it shoots up the
boardroom.
I mean, in shalom.
I've got to shoot up the boardroom.
I've got PTSD.
I've got to do it.
That's what I think we always have to keep in mind here is that like reality exists in
the gap between the present and the future that these companies want to reach.
But the consequences are no less material just because we have a lot of AI slop rather than
some shiny, highly effective interface.
In fact, I think it's more consequential because they treat the slop as if it is highly effective,
totalizing and perfect. So, based on what you're talking about - we'll move on to Google in a second - but the
ChatGPT demo, quote-unquote "looking" at a piece of paper that says "I heart ChatGPT" and
reacting to it in a way that is enough of an approximation of how a human would do it to impress
Kevin Roose.
All that has to happen is someone who manages buying at Amazon needs to start a pilot program
to have multimodal visual robot warehouse workers, which we know they're also looking
at. They just need to impress them enough. They need to close that imaginative gap just
enough that they're willing to take a punt on it.
And then all of a sudden, if you're working in one of those warehouses, your job not only
becomes doing your job, but also doing it around the robot that everyone is
convinced is going to do it better than you. This is how these fictions can be actually
damaging to the actual economy where actual things are supposed to happen.
Because it creates the ED-209s that you then have to navigate around
by impressing people like Kevin Roose, basically.
Yeah, it's a con game. It's a high stakes confidence trick is what it is. I think we
can't understand machine learning without understanding the kind of epistemic politics
of authority that surround it. So in other words, the kind of politics of how we understand
knowledge and understand truth, and then through that, kind of, construct
these regimes of authority,
that's actually the real power of machine learning.
Not some like technical abilities that it has,
but rather a kind of political ability it has
to construct these like naturalistic truths
out of the world that it then convinces you
it has privileged access to:
the ability to discern patterns in the universe of data that humans can't and to discern them
so quickly and act on them so decisively that it surpasses any human capability.
That's the real power of machine learning: its politics, not its technology.
The truth that it's acting on when given access to the entire internet, it's that you should
glue your pizza, right?
That's how reliable it is.
Speaking of giving access to the entire internet, why don't we talk about Google?
Because Google has a basically similar enough product called Astra that again is meant to
possibly go in glasses, see the world around
you, tell you where you are on a map, and so on.
But also, they have a number of other things that they've announced are worth talking about.
I think the most important is probably the redesigning of Search to include even more
useless nonsense between the top of the page and what you're actually looking for.
Yeah, it's broken now, Search is broken now,
it's over.
You can't Google anything anymore, which is, you'd think that would be important to a company
whose name, when used as a verb, refers to searching with that product.
No Googling.
It's against the rules.
Yeah, I mean, this is the thing, instead of like, don't be evil, we just have like, don't
Google.
Yeah.
Don't be anything.
I mean, if you wanna talk about, like, companies whose core product is sort
of shriveling and dying as, as the rot chases their expansion,
I mean, you could do worse than Google and Facebook.
Ask Jeeves just being like, and where has that brought you? Back to me.
Toss it on the scrap heap with AltaVista.
Yeah. Yeah. Yeah.
Not only are they now adding tons of sponsored links and SEO spam to the top of
whatever you're searching for, and creating their own little info cards based
on whatever they can find, they're now using a large language model - they're using AI -
to try and parse your search for you, and answer your question, largely wrong.
Yeah, based off of Reddit comments and Quora
and shit like that, and because it doesn't know what a joke is and because AI is allergic
to being right about anything, it always includes these strange hallucinations.
It's also broken all the calculation stuff, so you used to be able to just type into Google
like, you know, 100 miles in kilometers, that's now gonna be wrong.
Which is great, that's gonna lead to some entertaining errors, I think.
I tend to like to think of these things in terms of, y'know, ancient metaphors.
And I'm thinking about the burning of the Library of Alexandria, but on purpose, and
to power a windmill that is not doing very much, right?
It's the thing, it's not even really the Library of Alexandria, right?
It's just, it's a convenience, right?
But essentially in this case, if you want to know what a hundred kilometres
is in miles or whatever, you have to return with a V to, like, a dedicated website run by, like, a very
dedicated person for a thankless kind of, for no reward - you have to go to, like, you know, bobsunitcalculator.com
or whatever, and type the thing into a much older UI. And I think that's maybe to
the good.
In that sense, yes, but I more mean that the thing that indexed all of that information,
or the thing that most people use to index all of that information, in trying to continue to squeeze more growth, more value out of this by either becoming flashier to investors
or more valuable to advertisers, is slowly burning the thing that allowed all of that
information to be indexed in the first place.
Yeah, I mean, I guess it's not a great thing,
right, but like, in this analogy I guess I'm saying, it is still quite easy
for you to, instead of going to the Library of Alexandria to fuck off to the monasteries,
you know, or whatever, and like, go find their copy of, you know, The Name of the Rose, or
whatever the fuck.
I guess it's just a shame that the sort of sum total of human knowledge to this point
was entrusted to the thing that's incentivized to interpose itself in as many places as possible to try and
digest it for you or show you something else that someone has paid to show you.
Mm. I've had to go to the monastery atop Mount Athos to find Bob from Bob's unit
converter to be like, we need you back! You've got to come out of retirement for
one last job. When we needed him most, he vanished.