Significant Others - Daron Acemoglu on Marxism and AI
Episode Date: April 4, 2024
Are we, in fact, living in a time of revolution? What it means and what you can do about it, according to one of MIT's greatest thinkers. Power and Progress: Our 1,000-Year Struggle Over Technology and Prosperity
Transcript
Welcome to Significant Others. I'm Liza Powell O'Brien, and in yesterday's episode, we heard all about how Karl Marx advocated for the emancipation of the proletariat while creating some highly unequal living conditions in his own home.
To talk a little bit more about the world that Karl Marx helped make, I'm very excited to welcome MIT Institute Professor Daron Acemoglu to the podcast.
Professor Acemoglu, it is an incredible honor to have you here today. You've authored many books,
and we could spend an entire episode just listing your accolades. But to start,
you are a professor of economics. Is that correct?
Correct.
And most recently, you have published a book that you co-wrote called Power and Progress.
That's right, Liza. Thank you.
The subtitle is Our 1,000-Year Struggle Over Technology and Prosperity. And we're going to begin by covering each of those years in real time right now.
Thank you, Liza. It's my pleasure to be here.
Marx's name is so closely linked to the idea of revolution. He wrote a lot about economic and social revolution. And it's been said a lot recently, and that doesn't always mean it's true, but it does feel like it is true right now, that we are living through quite a revolution in terms of what artificial intelligence is doing to every aspect of our lives.
Absolutely. Yeah. Well, great. And I think it is a really insightful step to go back to Marx, because I am not a Marxist and I don't think he got everything right, but he did get a few things right. And chief among them is the importance of conflict in economic relations, that there
is going to be a process that benefits some people and not others.
He emphasized control over and ownership of the means of production.
That probably wasn't a bad starting point for the 19th century.
But today we should perhaps think about the control over the means of information.
And the AI revolution is inseparable from who will control information, who will be
able to present it in the way that they want to present it to us, and who will control
data and what they will do with it. So some of the conflicts that Marx imagined and a few of them that he very successfully documented
are going to be transformed with the AI revolution as well.
I will not spend this whole time talking about Marx because I feel that that, you know,
only serves us academically for a little bit.
But I just couldn't stop wondering when I was sort of engaging with coverage of his ideas.
I wouldn't even say I was engaging directly with his ideas because I was not cracking open Das Kapital.
But as I understand it, one of his most revolutionary concepts was that the workers did possess something, right? The serfs were thought to possess nothing, and then it turns out, no, they possess the means for laboring, which is the value. So I just can't stop wondering what he would have to say about this world in which, and again, this is a layperson's take on it, it feels as if what each individual currently owns or should have ownership of is, as you say, information. The fact that that even needs to be articulated is sort of terrifying. And then beyond that, the fact that it may not be agreed upon is even scarier.
So anyway, I don't know if you can play, you know, fantasy football coach, but.
I can, I can.
Well, first of all, I think it's a very, very important point you are raising, Liza.
But I want to, again, take one step back.
And I think we are in the midst of a really transformative moment, but we are also creating a lot of hype.
One thing that was very important that laborers possessed at the time Marx was writing, and is still important, is manual work.
We have robots.
We have a few advanced types of machinery that can perform some manual tasks. But much of what we do is physical, including social contact, including how we manipulate all sorts of physical objects around us. So we're going to need workers.
And when people say, oh, artificial intelligence is going to do such and such, they often are forgetting the very important social and physical aspects of work.
So that's where the hype comes in.
That doesn't mean that workers are going to get compensated for that.
That is a social decision, a technological decision, a political decision.
So let's not forget that.
But second, for the tasks for which AI has a claim to be doing a decent job or a better job than humans, AI is able to do that because of the data that we have provided. And a lot of that is creative data. Let's go back to the Writers Guild of America strike against Hollywood. I think they had it just right. The issue is, you know, if any one of these AI programs, be it GPT-4 or things that are going to come next, are able to do a halfway decent job, and I don't think they can do much more than a halfway decent job at the moment in terms of writing scripts or helping the production process of entertainment products, that's thanks to their ability to have taken the creative data of workers before.
And right now we have allowed tech companies to expropriate that data.
In the same way they have expropriated Wikipedia's data, a lot of books, a lot of other things.
But even worse is the creative data of artists, writers, creators.
And the question is, who owns that data?
What are the limits to put on companies to use that data?
And what sort of compensation do we want?
And I think these are not just distributional questions. They're actually also efficiency questions, by which I mean that it's probably not a good thing if we fail to compensate creative artists, so that they don't create the relevant data and we are left in a world of low-quality data.
I mean, if you think about it, many of the problems of hallucination, low reliability, and the crazy things that come out of many of these large language models are because they've been trained on Reddit. That's not, like, the highest achievement of humanity of the last 5,000 years, to put it mildly.
So we want higher quality data to go into AI products if we are going to create something good out of the AI product.
But that means we also have to incentivize and compensate the people who are providing that data.
I'm just thinking about how, if the analogy is to a human being, then the input needs to be of higher quality.
You know, you send your kid to a good school.
Absolutely.
We need to send AI to a better school, because right now they're just hanging out in the bathroom, hearing all of the gossip.
Absolutely.
Even worse.
Nonsense.
Yes, exactly.
Right now, AI is in the news a lot. There seems to be a clarifying divide, I guess, between the camp that is pushing for progress and trusting that progress will be good because technological progress, quote unquote, is always good, and then the camp that is very anxious, and I'm talking about within the AI community, the camp that is anxious and circumspect about sort of the Pandora's box of this all. And with
the shakeup at OpenAI, with the ouster of Altman, and then the reinstatement of Altman, it feels,
again, from the layperson's perspective, that caution has completely lost the game right now.
Is that true?
That's a very, very fair assessment.
I mean, unfortunately, unfortunately.
First of all, look, I don't think we can entrust worrying about the safety and negative social consequences of AI to people in the AI community only.
So I would not have been relaxed just because there were a few board members of OpenAI, with its weird organizational structure, who worried about guardrails. Oh yeah, I can sleep well at night? No, that certainly wasn't the case for me.
But the current saga definitely shifted the center of gravity more towards people who will pay lip service to safety and regulation, but are going to push very fast.
Look, I think you have to think about what is the general broader ecosystem of the tech world.
And that cannot be separated from venture capital and an imperative for growth.
I don't think many people today will look back and say, Facebook adopted a strategy
that was so good for humanity by growing to 4 billion users at the fastest speed possible.
I think even Facebook people wouldn't say that today.
But why did they do that?
They did that because their mindset and financial incentives were shaped in an ecosystem where the thing that you want to do is grow as fast as possible, dominate as fast as possible, grab as much data as possible, become the largest corporation so that you can elbow out all of the rivals.
That's what we're doing with generative AI.
So it didn't work so well with Facebook when that was a much more limited product.
What makes us think it's going to work much better with OpenAI and generative AI?
This leads me to a weird question. So I was reading this article that sort of did an overview of the history of these various AI development companies, telling the story again and again of basically how money came in and completely dominated the conversation, and, you know, all the decisions are being made in order to... I was like, it's like the movie Indecent Proposal. These people have this idea of having a conscientious, you know, approach to developing this technology.
And then someone comes, Google comes in and says, we'll give you $650 million, but you have to let us do whatever we want with it.
And they're like, okay, go for it.
Yeah, exactly.
All the well-laid plans of social consciousness are only paper thin when they meet big sums of money.
Absolutely, that's right.
Which is very human. And then I was thinking, okay, say we get to the point where AI is really running everything. AI doesn't care about money, so, like, what would that do?
But I have an objection.
I even object to the language AI is going to be running everything.
Okay, good.
That language, I think, is very misleading because it already puts us in the mindset
that this is like super intelligent AI that's going to have its own agenda.
Everything I know so far is that AI is an impressive set of tools,
but they are in the hands of people.
So it is the agenda of the people who use that tool or those tools.
And it's the guardrails and regulations and institutions that put restrictions on these people.
So we are not seeing the will of GPT-4.
We are seeing the will of Sam Altman and, you know, companies that are funding him.
So I think that's the perspective we should bring to it. But that makes me no more comfortable, by the way.
Perhaps AI would be better than very, very powerful, unchecked tech entrepreneurs like
Sam Altman and Elon Musk.
I don't know.
Machines versus Elon Musk?
I'll take machines.
When they talk about the singularity, what is that? The event that I've seen referred to as the singularity is the moment, quote unquote, when AI becomes, whatever, more in control, superior to humans. So what are people actually referring to when they use that term?
Well, I think that term as a definition means that's the moment where machines in general, but led by AI, are so capable that they can do everything that humans can do, as well or better.
At that point, machines would no longer have any need for humans, except that perhaps humans may have some say about the objectives. They can start self-replicating, and from then on they can get even better and better and better.
So that's the idea.
You know, that goes back to the sort of ideas of the mathematician I.J. Good, who said, like, you know, the superintelligent machine is the last machine humans need to invent.
And from there on, machines can do everything.
And those sorts of ideas have, of course, always been part of science fiction. And we've also baked in a bias towards these machines in the way that we have set the agenda for artificial intelligence and computer science going back to the days of Alan Turing, who was a brilliant mathematician and definitely the undeniable father of a lot of what came thereafter.
Very, very amazingly intelligent and creative person.
But I take issue with the philosophical approaches that he introduced,
which is to elevate the creation of autonomous intelligent machines as the pinnacle of what we should be striving for.
And instead, there were other thinkers at the time and thereafter, and I side with them much more,
that what we should be trying to achieve is machine usefulness, meaning rather than intelligence of
machines or machine intelligence, we should strive to get machines that are useful
to humans. And that means we have algorithms and models and AI that make us better, more capable
workers, better decision makers, better communicators, better democracy tools, and give us, you know, fewer emotional and mental problems. And we've done the opposite. We have sidelined workers.
We are on our way to destroying democracy.
We have created a whole mental health crisis
thanks to social media.
So I think the machine intelligence agenda
cannot be separated from these really negative effects
of the computers and AI technologies.
Yeah. So what do we do? What do we do?
I mean, you know, well, the good news is neither Elon Musk nor AI is in charge.
We are in charge.
We live in a democracy.
If we want a course correction, we can have a course correction.
It's definitely not too late.
It won't be too late in five years' time either.
Okay.
Thank you. Now I can go to bed.
Some hope. Some hope.
What do we, I mean, you've written this incredible book that is full of observations and
recommendations for policymakers and people who are in some degree of control.
People who will listen. That's not a big set, but you know.
Let's hope. Yeah. So what about the rest of us? What about the lowly user? What is it that we can do?
Meaningful change must start with a broad consensus,
not everybody agreeing, but enough people
in the broader democratic community
agreeing what is technically feasible
and what's socially desirable.
And I would say that pro-human AI
is technically feasible and socially desirable.
And by that, I mean AI that is pro-worker,
meaning it expands human capabilities,
creates information for workers of all sorts,
all skill levels, to perform more complex tasks
and adopt more effective problem-solving approaches
in a range of directions.
Electricians, blue-collar workers,
educators, healthcare professionals, entertainment people, all of us. And pro-democracy, meaning that
instead of being manipulated by these machines, we use these machines as tools for generating
better communication without sacrificing mental health, without pushing ourselves into filter
bubbles and echo chambers and manipulative environments.
And that's what I mean by pro-human AI.
And I think if we bring enough people around the table
and have this conversation,
I believe that most people will agree
that's the right direction.
And then if we have the right people from the AI community
who can objectively give us information, I think they will tell us, yes, those are absolutely feasible.
But then we can step back and say, is that where we're going?
And we'll realize that's not where we're going.
And then we can have a constructive conversation about how is it that we could have a course correction.
And I think that will require some institutional changes, countervailing powers. I don't think we can get to the place that I am outlining as long as Elon Musk or Sam Altman or the next Bill Gates is in charge and we all just say, okay, you're the genius who decides our future direction.
That's never worked in history. That's why
Simon Johnson and I wrote this book about the last thousand years to show that in history,
it's never worked very well when we have
empowered a very, very small group of
elite to decide
the future of technology and the direction
in which technology is developed and used.
So we need a broader process of democratic control with the right institutions rather than surveillance powers.
And then we can also talk about policies.
Do we want to be flooded with digital ads that exploit our information? If not, what are the policies that can deal with that? The data issue we talked about at the beginning. So how can we make sure that the next time a large language model is trained, they get permission from Wikipedia writers and perhaps provide some compensation, or even more to people who have created creative output, like
screenwriters.
So does that require regulation?
Does that require a market?
I would say a combination of the two.
How do we build them?
How do we encourage firms to treat their workers better, including by providing them tools
for being more productive rather than just sidelining them?
How do we ensure the right incentives in the research community? So there are many, many facets to this problem.
Mm-hmm. It feels as if understanding, you know, the first thing before you can protect yourself
against someone stealing something from you, you have to understand that you have something of
value, right? So this idea of, you know, everyone understanding that what they're doing when they engage with social media platforms
right now is gifting them this possibly last thing that we have to have ownership over.
Absolutely. And that is very, very important. It's a very important observation, but that's
an observation of the 2010s. In the 2010s, if you did not want to gift your data to Facebook or to Twitter, you could have withdrawn from these platforms or you could have taken some privacy-preserving steps. But in the 2020s, that doesn't work anymore because everything you have done
can be expropriated by these companies because it's all out there. So that's why we need
a broader regulation.
Where do you stand on the proposal? I think it was Andrew Yang who was proposing sort of that, you know, he called it almost a tax on social media companies, or a universal basic income.
Yeah, basic income, right.
That we'd be paid a flat-rate compensation, once, I assume, in our lifetime, for all of our data.
So I have a guess about what you're going to say to that.
But where are you on that proposal?
Actually, you know, I think Andrew Yang deserves a lot of credit for being at the forefront of highlighting some of these issues in a very productive and public way.
And in the book, we call for a digital ads tax, but not a general tax on social media. Why? Because
I think what you want with taxes is not just to raise revenue. That's a nice side product. But what you also want is to correct behavior away from the more socially damaging things. So if you put in a digital ad tax, you will open up the social media space as well as the broader
communication space to other business models.
So now people can enter with models like Wikipedia, which is a nonprofit, or they can
enter with a subscription model or other creative ways in which they can monetize their products.
So I think a digital ad tax is much better
because it also fights against
all of these negative mental health effects
of social media.
And when we have revenues from these things,
including from data markets
where people can sell their data,
and not individually, but collectively,
like the Writers Guild of America,
I think that's also very useful.
But I wouldn't sign up for a universal basic income yet.
And why not?
I have three basic objections, if you forgive the pun, to basic income.
One is that I find it defeatist.
Universal basic income says we are inevitably going to a future in which a few billionaires
are going to dominate everything, and let's make sure that they leave some crumbs to us. And that's not the type of society I want to live in. And I think
there's an alternative, which is what I just tried to outline with the pro-human AI. So let's not
give up on that. Let's try to be much more proactive to create a better world. Second, I don't think
the political economy of universal basic income works. Today, I don't know many billionaires who are
willingly paying a lot of taxes. Some of them are giving money to charity, but it's a way of
projecting their power even more. So what makes you think that they're going to willingly
hand over hundreds of billions of dollars? I don't think that's going to work very well.
But the third one is actually the one that really worries me. If we create a society
in which, say, 1%, 10%, 20% of the people are big earners and the rest of us are dispensable and we
can't even earn that much, and you give us some, you know, decent middle-class income, even that
is not going to be enough.
That's still a very hierarchical two-tier society
where we, the 80%, are the takers
and then there are the big creators
that gather all the status, all the power,
and all the say about the direction of society.
That's still a very sad society for me.
So let's hope that we don't have to go
to universal basic income.
Right. Well, as you're talking, I'm thinking, you know, I've been, I think, like some other people, I don't know how many, in this sort of general fear and lack of understanding of AI, and hiding from it. You know, sort of, I don't want anything to do with it. I'm not going to go and use ChatGPT. This terrifies me.
But to lean into it, to embrace it, to ask, how can it be my tool, is maybe a step in the right direction, because it is, I guess, the beginning of appropriating it widely rather than letting it sit in the hands of a few.
Absolutely.
Absolutely.
We have to get informed.
Yeah.
For some people, that means using the tools. For some people, it may be reading about them. Look, I have invested a lot to
be informed about social media, but I am not on Facebook or Instagram or TikTok. I think that's
fine. And, you know, you choose your own poison. But being informed is really important. But also,
I would go further. I think we really have to change the conversation away from the really dichotomous, false bipolar structure that we have right now, which is that either you're a doomer and you think superintelligence is going to come and all the robots are going to kill us, or you're like, you know, OpenAI, Sam Altman, or The Economist magazine, and you think we are all so, so, so fortunate, and all we should do is be a little bit more grateful to these wonderful people. So as long as the discussion
is dominated by these two poles, I think it makes it very difficult for average people to be
informed. That's why it's important for podcasts like this to sort of present the middle road, which has a lot of problems. But, you know, those problems don't emerge from the fact that we're dealing with killer robots.
I'm going to use that as our new tagline: presenting the middle road.
Yeah, that sounds a little boring, but sure. Sorry.
Why not? I'll give you credit.
Oh, please don't.
My final question for you is something that we ask everyone. You can answer it in whatever way you like, which is, is there a person or a thing or an event that you consider to be a significant other in your life, in terms of having shaped your trajectory in a profound way?
Oh my goodness. Uh, well, you know, my wife is a computer scientist, so I have learned a lot from her. So that, I think, qualifies as a significant other in many ways.
Certainly does. We're all grateful for that union. Yes. Well, thank you so much. This has really been such a pleasure.
Thank you, Liza.
And, you know, good luck to us all. I don't know how we move forward, but I'm glad.
Hopefully it's not just luck.
Yes.
Good organizational farsightedness to us all.
Yeah. Well, I hope everyone is buying your book and reading it and calling their congressperson.
Thank you.
Thank you so much.
And that's a wrap for this season of Significant Others.
Once again, it has been a real joy getting to tell these stories.
And as usual, I learned a lot, and I hope you have too.
Thanks to everyone who sent in episode suggestions. Keep them coming by emailing
significantpod at gmail.com. And from me and everyone at Team Coco, thanks so much for listening.
Significant Others is produced by Jen Samples. Our executive producers are Nick Liao, Adam Sachs, Jeff Ross, and Colin Anderson.
Engineering and sound design by Eduardo Perez, Rich Garcia, and Joanna Samuel.
Music and scoring by Eduardo Perez and Hannes Brown.
Research and fact-checking by Michael Waters and Hannah Sio.
Special thanks to Lisa Berm, Jason Chalemi, and Joanna Solitaroff.
Talent booking by Paula Davis and Gina Batista.