The Jordan B. Peterson Podcast - 515. Moral Dilemmas of AI | Marc Andreessen
Episode Date: January 16, 2025
Dr. Jordan B. Peterson sits down with entrepreneur and software pioneer Marc Andreessen. They discuss the timeline of the woke institutional takeover, the ruinous effects it has had on Western ideology and business, the ways in which AI will shape society, and the immense responsibility we have to instill the future with an ethos and morality that serves human flourishing. Marc Andreessen is a cofounder and general partner at the venture capital firm Andreessen Horowitz. He is an innovator and creator, one of the few to pioneer a software category used by more than a billion people and one of the few to establish multiple billion-dollar companies. Marc co-created the highly influential Mosaic internet browser and co-founded Netscape, which later sold to AOL for $4.2 billion. He also co-founded Loudcloud, which, as Opsware, sold to Hewlett-Packard for $1.6 billion. He later served on the board of Hewlett-Packard from 2008 to 2018. Marc holds a B.S. in computer science from the University of Illinois at Urbana-Champaign. Marc serves on the board of the following Andreessen Horowitz portfolio companies: Applied Intuition, Carta, Coinbase, Dialpad, Flow, Golden, Honor, OpenGov, Samsara, Simple Things, and TipTop Labs. He is also on the board of Meta. This episode was filmed on December 18th, 2024. | Links | For Marc Andreessen: On X https://x.com/pmarca?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor Substack https://pmarca.substack.com/ “The Techno-Optimist Manifesto” https://a16z.com/the-techno-optimist-manifesto/
Transcript
This movement that we now call wokeness, it hijacked what I would call, sort of, at the time,
you know, bog-standard progressivism, but you know, it turned out what we were dealing with was
something that was far more aggressive. You're pouring cultural acid on your company and the
entire thing is devolving into complete chaos. It's also I think the case that the new
communication technologies have also enabled reputation savagers in a way that we haven't
seen before. The single biggest fight is going to be over what are the values of the AIs.
That fight, I think, is going to be a million times bigger and more intense and more important
than the social media censorship fight.
As you know, out of the gate, this is going very poorly.
Stop there for just a sec, because we should delve into that.
That's a terrible thing.
Hello everybody. So I had the opportunity to talk to Marc Andreessen today, and Marc has been quite visible on the podcast circuit as of late.
And part of the reason for that is that he's part of a swing within the tech community,
back towards the center, and even more particularly under the current conditions,
toward the novel and emerging players in the Trump administration.
Now, Marc is a key tech visionary.
He developed Mosaic and Netscape, and they really laid the groundwork for the web as we know it.
And Marc has been an investor in Silicon Valley circles for 20 years and is
as plugged into the tech scene as anyone in the world. And the fact that he's decided
to speak publicly, for example, about such issues as government-tech collusion, and that
he's turned his attention away from the Democrats,
which is the traditional party, let's say, of the tech visionaries.
And they're all characterized by the high openness that tends to make people liberal.
The fact that Marc has pivoted is, what would you say,
an important event. It may be as important an event as Musk aligning with
Trump. And so I wanted to talk to Marc about his vision of the future. He laid out a manifesto a
while back called the Techno-Optimist Manifesto, which bears some clear resemblance to the Alliance for Responsible Citizenship policy platform,
that's ARC, which is an enterprise that I'm deeply involved in.
And so I wanted to talk to him about the overlap between our visions of the future
and about the twists and turns of the tech world in relationship to their political allegiance
and the transformations there that have occurred and
also about
the problem of AI alignment, so to speak. How do we make sure that these hyper-intelligent systems that the techno-utopians are creating don't turn into, like, cataclysmic, apocalyptic, totalitarian monsters? How do we align them with proper human interests, and what are those proper human interests, and how is that determined?
And so we talk about all that and a whole lot more.
And so join us as we have the opportunity
and privilege to speak with Marc Andreessen.
So Marc, I thought I would talk to you today about an overlap in two of our projects, let's say,
and we could investigate that. There should be all sorts of ideas that spring off that.
So I was reviewing your techno-optimist manifesto, and I have some questions about that and some concerns. And I wanted to contrast that and compare it
with our ARC project in the UK. Because I think we're pulling
in the same direction. And I'm curious about why that is and
what that might mean practically. And I also
thought that would give us a springboard off which we could leap in relationship to, well, to the ideas you're developing. So there's a lot
of that manifesto that for whatever it's worth, I agreed with. And I don't regard that as particularly,
what would you say, important in and of itself. But I did find the overlap between what you had been
suggesting and the ideas that we've been working on for this alliance
for responsible citizenship in the UK quite striking.
And so I'd like to highlight some similarities
and then I'd like to push you a bit on some of the issues
that I think might need further clarification.
That's probably the right way to think about it.
So, for this ARC group, we set it up as,
what would you say, a visionary alternative
to the Malthusian doomsaying of the climate hysterics
and the centralized planners,
because that's just going nowhere.
You can see what's happening to Europe.
You see what's happening to the UK.
Energy prices in the UK are five times as high
as they are in the United States.
That's obviously not sustainable.
The same thing is the case in Germany.
Plus, not only are they expensive,
they're also unreliable, which is a very bad combination.
You add to that the fact too,
that Germany's become increasingly dependent on markets that are essentially served by totalitarian dictatorships,
and that also seems like a bad plan. So one of our platforms is that
we should be working locally, nationally and internationally to do everything
possible to drive down the cost of energy
and to make it as reliable as possible.
Predicated on the idea that there's really no difference
between energy and work.
And if you make energy inexpensive,
then poor people don't die.
And so, because any increase in energy costs
immediately demolishes the poorest subset of the population.
And that's self-evident as far as I'm concerned.
And so that's certainly an overlap with the ethos
that you put forward in your manifesto.
You predicated your work on a vision of abundance
and pointed to,
I noticed, for example, you quoted Marian Tupy,
who works with Human Progress
and has outlined quite nicely the manner in which
over the last 30 years,
especially since the fall of the Berlin Wall,
people have been striving on the economic front,
globally speaking, like never before. We've virtually
eradicated absolute poverty and we have a good crack at eradicating it completely in the next
couple of decades if we don't do anything, you know, criminally insane. And so you see a vision
of the future where there's more than enough for everyone. It's not a zero sum game. You're not a fan of the Malthusian proposition
that there's limited resources and that we're facing a,
you know, either, what would you say,
a future of ecological collapse or economic scarcity
or maybe both.
And so,
the difference, I guess,
one of the differences I wanted to delve into is you put a lot of stress
on the technological vision.
And I think there's something in that that's insufficient.
And this is one of the things I wanted to grapple
with you about, because there's a theme
that you see, a literary theme.
There's two literary themes that are in conflict here
and they're relevant because they're stories of the psyche
and of society in the broadest possible sense.
You have the vision of technological abundance and plenty
that's a consequence of the technological
and intellectual striving of mankind.
But you also have juxtaposed against that the vision of the intellect as a
Luciferian force and the possibility of a technology-led dystopia and catastrophe, right?
And it seems to hinge on something like how the intellect is conceptualized in the deepest level
of society's narrative framing.
So if the intellect is put at the highest place,
then it becomes Luciferian and leads to a kind of dystopia.
It's like the all seeing eye of Sauron
in the Lord of the Rings cycle.
And I see exactly that sort of thing
emerging in places like China.
And it does seem to me that that technological vision,
if it's not encapsulated in the proper underlying narrative,
threatens us with an intellectualized dystopia
that's equiprobable with the abundant outcome
that you described.
Now, one of the things we're doing at Arc
is to try to work out what that underlying narrative
should be so that that technological enterprise
can be encapsulated with it and remain non-dystopian.
I think it's an analog of the alignment problem in AI.
You know, you can say, well, how do you get
these large language model systems to adopt values
that are commensurate with human flourishing?
That's the same problem you have when you're educating kids,
by the way.
And how do you ensure that the technological enterprise
as such is aligned with the underlying principles
that you espouse of, say, free market,
free distributed markets and human freedom in the classic Western sense.
And I didn't see that specifically addressed
in your manifesto.
And so I'm curious about,
with all the technological optimism
that you're putting forward,
which is something that,
well, why else, why would you have a vision other than that
when we could make the world an abundant place? But there is this dystopian side that can't be ignored.
And, you know, there's 700 million closed circuit
television cameras in China,
and they monitor every damn thing their citizens do.
And we could slide into that as easily as we did
when we copied the Chinese in their response
to the so-called pandemic.
So I'd like to hear your thoughts about that.
Sure, so first, thanks for having me,
and it's great to see you.
I'm very influenced on this by Thomas Sowell,
who wrote this great book called A Conflict of Visions,
and he talks about how, fundamentally,
there are two classes of visions of the future.
He calls them the unconstrained vision and the constrained vision.
The unconstrained vision is
the sweeping, transformational, discontinuous social change.
We're going to make the new man,
we're going to make the new society,
we're going to have the Pol Pot in Cambodia,
we're going to declare year zero,
everything that came before is irrelevant,
it's a new era, Lenin.
Basically every revolutionary, right, wants to, you know, completely radically transform everything
and how can you not because the current system is unjust and we need to achieve total justice
and so forth.
And so the unconstrained vision, you know, it's classically the vision of totalitarians,
it sells itself as creating utopia, as you well know, it tends to produce hell.
In contrast, you know, he said that the constrained vision is one in which you
realize that man has fallen and that we are imperfect and that things are always
going to be some level of mess,
but it can be a slightly better mess than it is today.
We can improve on the margin, things can be better,
people can live better lives,
they can take better care of their families,
their countries can get richer,
they can have more abundance,
and progress on the margin.
Of course, the unconstrained vision
is very compatible with totalitarianism.
The Chinese Communist Party for sure has
an unconstrained vision as the Bolsheviks did before them,
and the Nazis, and other totalitarian movements.
The constrained vision is very consistent, I think,
with the long-run Western ideals of liberty and freedom and free markets.
And so one of the things I do try to say in the manifesto is I'm not a utopian.
And I think utopian dreams turn into dystopia.
I think that's what you get.
I think history is quite clear on that.
And then to your point on technology, I would just map that straight onto that, which is
yes, 100% technology can be a tool that revolutionaries can use to try to achieve utopia slash dystopia.
And for sure, the Chinese Communist Party is trying to do that.
And there are forces, by the way, in the US that also for sure want to do that.
But technology is also completely perfectly compatible with the constrained vision and
change on the margin and improvement on the margin, which is where I am.
I think that is 100% a human issue and a social and political issue, not a technological issue,
right?
Right, right.
Yes, exactly.
Right.
So this is a little bit of the running joke right now in AI alignment.
There's this super genius of AI alignment,
this guy Roko, who's famous for
this thing called Roko's Basilisk.
So Roko's Basilisk is,
you better say nice things about the AI now,
even though the AI doesn't exist yet,
because when it wakes up and sees what you wrote,
it's going to judge you and find you wanting.
So he's sort of this famous guy in that field.
What he actually says now is,
basically, it turns out the AI alignment problem is not a problem
of aligning the AI, it's a problem of aligning the humans.
Right, it's a problem of aligning the humans
and how we're gonna use the AI.
Right, precisely to your point.
Yes, right.
Right, and that is one of the very big questions.
There's another book I'd really recommend on this
that goes directly to your point:
Peter Huber wrote this book called Orwell's Revenge. Famously in 1984, as you mentioned,
there's this concept of the telescreen, which is basically the one-way propaganda broadcast
device that goes into everybody's house from the government, top down, and then has cameras
in it so the government can observe everything that the citizens do.
That is what happens in these totalitarian societies. They implement systems like that.
In the book, Orwell's Revenge, he does this thing where he tweaks the telescreen and makes it two-way instead of one-way. The revolutionaries, the sort of resistance force to the
totalitarian government, get the ability to let people upload as well as download.
All of a sudden, people can actually express themselves,
they can express their views, they can organize.
Of course, based on that,
they can then use that technology to basically rise up
against the totalitarian government
and achieve a better society.
Look, as you mentioned earlier,
the ability to do universal two-way communication
also lets you
create the mob effect that we were talking about,
and this personal destruction engine.
So there's two sides to that also,
but it is the case that you can squint at a lot of this technology one way,
and see it as an instrument of totalitarian oppression,
and you can squint at it another way,
and see it as an instrument of individual liberation.
I think, for sure, how you design the technology matters a lot,
but I at least believe the big picture questions are all the human questions
and the social and political questions, and they need to be confronted directly as such.
And we need to confront them directly for that reason.
And, right, so these are human questions,
ultimately not technological questions.
Okay, okay, so that's very interesting
because that's exactly what we concluded at Arc.
So one of the streams that we've been developing
is the better story stream,
because it's predicated on the idea,
which I think you're alluding to now,
that the technological enterprise has to be nested inside
a set of propositions that aren't in themselves
part and parcel of the technological enterprise.
Right, and then the question is, what are they?
So let me outline for a minute or two
some of the thoughts I've had in that matter,
because I think there's something crucial here
that's also relevant to the problem of alignment.
So like you said, the problem with regard to AI
might be the same problem that human beings have,
which is that we're not aligned, so to speak.
And so why would we expect the AIs to be?
And I think that's a perfectly reasonable criticism.
I mean, part of the reason that we educate young people
so intensely, especially those who'll be
in leadership positions, is because we want to solve
the alignment problem.
That's part of what you do when you socialize young people.
Now, the way we've done that for the entire history
of the productive West, let's say, is to ground
young people who are smart and who are likely to be leaders
in something approximating the religious and humanist,
religious slash humanist slash enlightenment tradition.
It's part of that golden thread.
Now part of the problem I would say
with the large language model systems
is that they're hyper-trained on,
they're like populists in a sense.
They're hyper-trained on the over-proliferation of nonsense
that characterizes the present. And the problem with the present is that time hasn't had
a chance to winnow out the wheat from the chaff.
Now, what we did with young people is we referred them
to the classic works of the past, right?
That would be the Western canon whose supremacy
has been challenged so successfully
by the postmodern nihilists.
We said, well, you have to read these great books
from the past.
And the core of that would be the Bible.
And then you'd have all the, what, the poets and dramatists
whose works are grounded in the biblical tradition
that are like secondary offshoots
of that fundamental narrative.
That'd be people like Dante and Shakespeare
and Goethe and Dostoevsky.
And we can imagine that those more core ideas
constitute a web of associated ideas
that all other ideas would then slot into.
You know, you could make the case technically,
I think, that these great works in the past
are mapping the most fundamental relationships
between ideas that can possibly be mapped
in a manner that is sustainable and productive
across the longest possible imaginable span of time.
And that's different than the proliferation
of a multiplicity of ideas that characterize the present.
Now that doesn't mean we know how to weight it.
So if you're gonna design a large language model,
you might want to weight the works of Shakespeare
10,000 times per word as crucial as,
you know, what would you say,
the archives of the New York Times for the last five years.
It's something like that.
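To make that weighting idea concrete, here's a minimal sketch of per-source sampling weights in a training data mixture. The corpus names and the 10,000-to-1 ratio come straight from the conversation as illustration; the function and data layout are hypothetical, not any actual lab's pipeline:

```python
import random

# Hypothetical sketch of the weighting idea: each source gets a per-word
# sampling weight, so time-tested canonical texts are drawn far more often
# than recent, unwinnowed material. All names and numbers are illustrative.
CORPUS_WEIGHTS = {
    "shakespeare_complete_works": 10_000.0,  # canonical, winnowed by time
    "nyt_archive_last_five_years": 1.0,      # recent, unwinnowed
}

def sample_training_docs(docs_by_source, weights, n_samples):
    """Draw documents for a training batch, biased by per-source weight."""
    sources = list(docs_by_source)
    source_weights = [weights[s] * len(docs_by_source[s]) for s in sources]
    batch = []
    for _ in range(n_samples):
        source = random.choices(sources, weights=source_weights, k=1)[0]
        batch.append(random.choice(docs_by_source[source]))
    return batch
```

On a scheme like this, the canon would dominate the training signal even though it's a tiny fraction of the raw token count.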
Like there's an insistence in the mythological tradition
that people have two fundamental poles of orientation.
One is heavenward or towards the depths. You can use either analogy.
And that's the orientation towards the divine
or the transcendent or the most foundational.
And then the other avenue of orientation is social.
That'd be, you know, the reciprocal relationship
that exists between you and I
and all the other people that we know.
And if you're only weighted
by the personal and the social, then you tilt towards the mad mob populism that could characterize
societies when they go off kilter. You need another axis of orientation to make things
fundamental. Now, I just want to add one more thing to this that's very much worth thinking about.
So the postmodernists discovered,
this is partly why we have this culture war,
the postmodernists discovered
that we see the world through a story.
And they're right about that because what they figured out,
and they weren't the only ones, but they did figure it out
was that we don't just see facts, we see weighted facts.
And the weighting system, a description of someone's
weighting system for facts is a story.
That's what a story is, technically.
You know, it's the prioritization of facts
that direct your attention.
That's what you see portrayed
in a characterization on screen.
Okay, now, postmodernists figured out
that we see the world through a story, but then they made a dreadful mistake,
which was a consequence of their Marxism.
They said that the story that we see the world through
is one of power, and that there is no other story than power,
and that the dynamic in society is nothing
but the competition between different groups
or individuals striving for power.
And I don't mean competence,
I mean the ability to use compulsion and force, right?
It's like involuntary submission.
I'm more powerful than you
if I can make you submit involuntarily.
Now the biblical canon has an alternative proposition
that's nested inside of it,
which is that the basis of individual stability
and societal stability and productivity
is voluntary self-sacrifice, not power.
And that is, those two ethos,
they are 100% opposed, right?
You couldn't get to visions that are more disparate
than those two.
Now the power narrative dominates the university
and it's driving the sorts of pathologies
that you described as having flowed out,
let's say into the tech world and then into the corporate
and the media world and into the corporate world beyond that.
One of the things we're doing at Arc
is trying to establish the
structure of the underlying narrative, which is a sacrificial narrative,
that would properly ground, for example, the technological
enterprise so that it wouldn't become dystopian.
And, you know, you alluded to that when you
pointed to the fact that there has to be something outside the technological
enterprise to stabilize it.
You alluded to, for example, a more fundamental ethos
of reciprocity when you said that one form of combating
the proclivity for top-down force, for example,
in this one-way information pipeline
is to make it two-way, right?
Well, you're pointing there to something like,
see, reciprocity is a form of repetitive self-sacrifice.
Like if we're taking turns in a conversation,
I have to sacrifice my turn to you and vice versa, right?
And that makes for a balanced dynamic.
And so anyways, one of the problems we're trying to solve
with this ARC enterprise is to thoroughly evaluate
the structure of that underlying narrative.
And we could really use some engineers to help
because the large language models are going to be able
to flesh out this domain properly
because they do map meaning in a way
that we haven't been able to manage technically before. So I think the single biggest fight that has ever happened over
technology, and there have been many of those fights over the course of the last, you know,
especially 500 years, the single biggest fight is going to be over what are the values of the AIs.
To your points, like what will the AIs tell you when you ask them anything that involves values,
social organization, politics, philosophy, religion. That fight, I think, is going to be
a million times bigger and more intense and more important than the social media censorship fight.
And I don't say that lightly, because the social media censorship fight has been extremely important,
but AI is going to be much more important
because AI is such a powerful technology
that I think it's going to be the control
layer for everything else.
And so I think the way that you talk to your car and your house
and the way that you organize your ideas,
the way you learn, the way your kids learn,
the way the health care system works,
the way the government works, how government policies are implemented.
AI will end up being the front end on all those things.
So the value system in the AIs is going to be
maybe the most important set of
technological questions we've ever faced.
As you know, out of the gate,
this is going very poorly.
Yes.
Right?
Very.
There's this question hanging over the field right now,
which you could sort of summarize as why are the AIs woke?
Why do the big lab AIs coming out of the major AI companies
come out with the philosophy of
a 21-year-old sociology undergrad at Oberlin College
with blue hair who's completely emotionally activated?
You can see many examples of people who have posted
queries online that show that or you can run your own experiments.
They basically have the fullest version of
this fundamentalist emotional far progressive absolutist wokeness coded into them.
You said upfront that the presumption must be
that they're just getting trained on more recent bad data
versus older good data.
There is some of that, but I will tell you
that there is a bigger issue than that,
which is these things are being specifically trained
by their owners to be this way.
Yeah, yeah, okay, so let's take that apart because that's very, very important.
Okay, so like I played with Grok,
I played with Grok a lot and with ChatGPT,
I've used these systems extensively
and they're very useful, although they lie all the time.
Now you can see this double effect that you described,
which is that there is conscious manipulation
of the learning process
in an ideological direction,
which is, I think, absolutely ethically unforgivable.
Like, it even violates the spirit of the learning
that these systems are predicated on.
It's like, we're gonna train these systems
to analyze the patterns of interconnections
between the entire body of ideas in the corpus of human knowledge,
and then we're going to take our shallow conscious understanding and paint an overlay on top of that.
That is so intellectually arrogant that it's Luciferian in its presumption. It's appalling.
But even Grok is pretty damn woke, and I know that it hasn't been messed with
at that level of, you know, painting over the rot, let's say.
And so, I think we've already described,
at least implicitly, why there would be
that conscious manipulation.
But what's your understanding of the training data problem?
And I can talk to you about some AI systems
that we've developed that don't seem to have that problem
and why they don't have that problem.
Because it's crucially important,
as you already pointed out, to get this right.
And I think that, I actually think that to some degree,
psychologists, at least some of them,
have figured out how to get this right.
Like it's a minority of psychologists
and it isn't well known,
but the alignment problem
is something that the deeper psychoanalytic theorists have been working on for about a hundred
years and some of them got that because they were trying to align the psyche in a healthy direction.
You know it's the same bloody problem fundamentally and there were people who really made progress in
that direction. Now they aren't the people who had the most influence
as academics in the universities
because they got captured by, you know,
Michel Foucault, who's a power-mad hedonist
for all intents and purposes,
extraordinarily brilliant, but corrupt beyond comprehension.
He is the most cited academic who ever lived.
And so the whole bloody enterprise,
the value enterprise in the universities
got seriously warped by the postmodern Marxists
in a way that is having all these cascading ramifications
that we described.
All right, so back to the training data.
What's your understanding of why the wokeness emerges?
It's present bias to some degree,
but what other contributing factors are there?
Yes, I think there's a bunch of biases. There's three off the top of my head you just get immediately.
So one is just recency bias. You know, there's just a lot more present-day material available for training than there is old material,
because all the present-day material is already on the internet. Right, number one. And so that's going to be an influence. Number two, you know, who produces content? It's, you know,
people who are high in openness.
Right, the creative class that creates the content is self-biased.
And then there's the English language bias, which is, like, almost all of the trainable data is in English.
And, you know, what isn't is in a small number of other Western languages for the most part.
And so there, you know so there's some bias there.
And then frankly, there's also this selection process,
which is you have to decide what goes in the training data.
And so the sort of humorous version of this
is two potent sources of training data
could be Reddit and 4chan.
And let's say Reddit is like super far left on average,
and 4chan is super far right.
And I bet if you look at the training data sets for a lot of these AIs,
you'll find they include Reddit, but they don't include 4chan.
Right? And so-
Right, right, right, right.
So bias gets included that way. By the way,
there is a very entertaining variation of this that is playing out right now,
which is these companies are increasingly being sued by
copyright owners for training on material that's
currently copyrighted, and most specifically books.
So there are court cases pending right now.
The courts are going to have to take up this question of copyright and whether it's
legal to train AIs on copyrighted data or not and on what terms.
One of the running jokes inside the field is if those court cases come down such that
these companies can't train on copyrighted material, then for example,
they'll only be able to train on books published before 1923.
Right.
Right.
So just...
It should be an improvement, actually.
Well, imagine for a moment, if you would, training on books before 1923.
The good news on that is you don't get all of the last 100 years of insanity.
The bad news is people before 1923 were insane in their own ways.
Yeah, right.
Well, and also, you don't have the advantage of all the technological progress.
Yeah, exactly.
And so these are very deep questions.
All of these questions have to get answered.
You know, Elon has talked about this, like Grok has some of this.
He's working on that.
Having said that, I will tell you most of what you see when you use these systems that
will disturb you is not from any of that.
Most of it is deliberate top-down coding in a much more blunt instrument way.
How is that done, Marc? Like, what does that look like exactly? You know,
I mean, it's really nefarious, right?
Because that means that you're interacting in a manner that you can't predict with
someone's a priori prejudices
and you have no idea how you're being manipulated.
It's really, really bad.
And so first of all, why is that happening?
Like if the large language models' value is in their wisdom
and that wisdom is derived from their understanding
of the deep pattern of correlations
between ideas, which is like a major source of wisdom, genuinely speaking. Why pervert that with
an overlay of shallow ideology? And why is the ideology in the direction that it is? And then
how is that gerrymandering conducted? Yes, let me start with the how. So the how is a technique, there's an acronym for it.
It's called reinforcement learning from human feedback.
And so in the field it's called RLHF.
And RLHF is basically a key step for making an AI
that works, that interacts with humans,
which is you take a raw model, which is sort of feral
and doesn't quite know how to orient to people.
And then you put it in a training loop
with some set of human beings who effectively socialize it.
And so reinforcement learning from human feedback, the key
there is human feedback.
You put it in dialogue with human beings,
and you have the human beings do something very analogous
to teaching a child.
Here's how you respond.
Here's how you're polite.
Here's the things you can and can't say.
Here's how to word things.
Here's how to be curious.
All the behaviors that you presumably
want to see from something you're interacting with that
is sort of a human proxy kind of form of behavior,
that is a 100% human enterprise.
You have to decide what the rules are for the people who
are going to be doing that work.
They're all people.
And then you have to hire into those jobs.
The people going into those jobs are, in many cases,
the same people, and this will horrify you.
They're the same people who were in the trust
and safety groups at the social media companies
five years ago.
Oh, good.
Oh, that's great.
Oh, that's wonderful.
Yeah, you're right.
I couldn't imagine a worse outcome than that.
So all the people that Elon cut
out of the trust and safety group at Twitter
when he bought it, many of them have migrated
into these trust and safety groups
at these AI companies and they're now setting these policies
and doing this training.
So the terrifying, well, the terrifying thing here
is that we're going to produce hyper-powerful avatars of
our own flaws.
Right?
And so if you're training one of these systems and you have a variety of domains of personal
pathology, you're going to amplify that substantively. You're going to make these giants. Like, I joke with my friend Jonathan Pageau, who's
a very reliable source in such matters, that we're going to see giants walk the earth
again.
I mean, that's already happening, and that's what these AI systems are.
And if they're trained by people who, well, let's say are full of unexamined biases and prejudices and deep
resentments, which is something that you talk about in your manifesto, resentment and arrogance
being like key sins, so to speak, we're going to produce monstrous machines that have exactly
those characteristics, and that is not going to be good. And you're absolutely
right to point to this as perhaps the serious problem of our times.
If we're going to generate augmented intelligence,
we better not generate augmented pathological intelligence.
And if we're not very careful, we are certainly going to do that,
not least because there's way more ways that a system can go wrong
than there are ways that it can, you know, aim upward in an
unerring direction.
And so, okay, so why is it these people, this is so awful,
I didn't know that, who were, say, part of the trust and safety group at Twitter,
who are now training the bloody AIs?
How did that horrible situation come to be?
It's the same dynamic.
The big AI companies have the exact same dynamic as the big social media companies,
which have the exact same dynamic as the big universities, which have the exact same dynamic
as the big media companies, which is, right, you have these either formal or de facto cartels.
You know, you have a small handful of companies at the commanding heights of society that
hire all the smart graduates.
As I say, take a step back.
You don't see ideological competition
between Harvard and Yale.
You would think that you should, because they should compete
in the marketplace of ideas.
And of course, in practice, you don't see that at all.
You see no ideological competition
between the New York Times and the Washington Post.
You see no ideological competition
between the Ford Foundation and any of the other major foundations.
They all have the exact same politics.
You see no, prior to Elon buying Twitter,
you saw no ideological competition
between the different social media companies.
Today, you see no ideological competition among the big AI labs.
Elon is the spoiler.
He is coming in to do, and he's going to try to do in AI,
what he did in social media,
which is create the non-woke one.
But without Elon, you weren't seeing that at all. And so you have this consistent
dynamic across these sectors of what appears to be a free market economy, where you end up with
these cartels, where they sort of self-reinforce and self-police, and then they're policed by the
government. Anyway, so I want to describe the general phenomenon, because that's what's happening
here.
It's the same thing that happened with the social media companies.
And then this gets into policy on the very serious policy issues on the government side,
which is, is the government going to grant these AI companies basically protected status
as some form of monopoly or cartel in return for these companies signing up for
the political control that their masters in government want or in the alternative is there
actually going to be an open AI universe, a true open AI, like truly open where you're
going to have a multiplicity of AIs that are actually in full competition competing and
then you'll have some that are woke and you'll have some that are non-woke and you'll have
some trained on new material and some trained on old material,
and so forth and so on, and then people can freely pick.
And the thing that we're pushing for is that latter outcome.
We very specifically want government
to not protect these companies,
to not put them behind a regulatory wall,
to not be able to control them
in the way that the social media companies
got controlled before Elon.
We actually want full competition,
and if you want your woke AI, you can have it, but there are many other choices. Well, can you imagine developing a superintelligence
that's shielded from evolutionary pressure? Like, that is absolutely insane. That's absolutely insane.
I mean, we know that the only way that a complex system can regulate itself across time is through something like evolutionary competition. That's it. That's the mechanism. And so if you decide that this
AI is correct by fiat and then you shield it from any possibility of market feedback or environmental
feedback, well that is literally the definition of how to make something insane.
And so now, you talked about in some of your recent podcasts
the fact that the Biden administration
in particular, if I got this right,
was conspiring behind the scenes with the tech companies
to cordon off the AI systems and make them monolithic.
And so can you elaborate a little bit more on that?
Yeah, so this is this whole dispute that's playing out,
and this gets complicated, but to provide a high-level view.
So this is a whole dispute about so-called AI safety.
And so there's this whole kind of, you might call it concern
or even panic about, like, are the AIs going to run out of control?
Are they going to kill us all?
By the way, are they going to be racist?
All these different concerns over all the different ways in which these things can go wrong,
there's this attempt to impose the precautionary principle on these AIs where you have to prove that they're harmless
before they're allowed to be released, which inherently gets into these political questions.
So anyway, the AI safety movement conjoins a lot of these questions into this overall elevated level of concern.
And then basically what has been happening is the major AI labs, basically they know
what the deal is.
They watch what happened in social media.
They watch what happened to the companies that got out of line.
They watch the pressures that came to bear.
They watch what the government did to the social media companies.
They watch the censorship regime that was put in place, which was very much a political,
you know, top-down censorship regime.
And basically they went to Washington over the course of the last several years,
and they essentially proposed a trade.
The trade was, we will do what you want politically.
We will come under your control voluntarily from
a political standpoint the same way the social media companies had.
In return for that, we essentially want a cartel.
We want a regulatory structure set up such that
a small handful of big companies will
be able to succeed in effect forever and then new entrants will not be allowed to compete.
In Washington, they understand this because this is the classic economic concept of regulatory
capture. This is what every set of major big companies in every industry does. The AI companies
went to Washington and they tried to do that. Basically, what was happening up until the
election was the Biden administration was on board with that.
That led to the conversations that I've talked about
before that we had in the spring with the Biden administration,
where they told us very directly,
senior officials in the administration told us very directly,
look, do not even bother to try to fund AI startups.
There are only going to be two or three large AI companies
building two or three large AIs and we are going to control them.
We are going to set up a system in which we control them and they are going to be, you
know, they're not going to be nationalized, but they're going to be essentially de facto
integrated into the government.
And we are going to do whatever is required to guarantee that outcome.
And it's, you know, it's the only way to get to the outcome that we will find acceptable.
Okay, okay, well, so there's so much in there that's pathological beyond comprehension
that it's difficult to even know where to start.
It's like, who the hell thinks this is a good idea?
And why?
Like, who are these people that feel
that they're in a position to determine the face
of hyperintelligence,
of computational hyperintelligence. And who is it that thinks that that is something that should be
like regulated by a closed government corporate cartel? Like I don't understand that at all,
Marc. I don't know if I've ever heard anybody detail out to me something that is so blatantly
both malevolent and insane simultaneously.
So like, how do you account for that?
I mean, I know it's shocked you.
I know that's why you've been talking about it recently.
Now it should shock you because it's just beyond
comprehension to me that this sort of thing can go on
and thank God you're bringing it
to light.
But like how do you make sense of this?
What's your understanding of it?
Well, look, it's the same people who think that they should control the education system,
same people who think they should control the universities, same people who think they
should control social media censorship, the same people who think that they should permanently
control the government and government bureaucracies.
It's this, you know, whatever, pick whatever term you want. It's this elite
class, ruling class, oligarchic class.
Worshippers of power. Remember, it's one ring of power that binds all the evil
rings. Yeah, well, it's worshippers of power. And the damn postmodernists, you know, when
they proclaimed that power was the only game in town, a huge part of that was both a confession
and an ambition. Right? If power is the only game in town, a huge part of that was both a confession and an ambition.
Right? If power is the only game in town, then why not be the most effective power player?
The reason I'm so sensitized to this is because this is exactly what I saw happen with social
media censorship. I sat in the room and watched the construction of the entire social media
censorship edifice every step of the way, going all the way back to the... I was in the original
discussions about what defines concepts
like hate speech and misinformation.
Like I was in those meetings and I saw the construction
of the entire private sector edifice that resulted
in the censorship regime that we all experienced.
And I was close into the, you know,
there's a whole group at Stanford University
that became a censorship bureau that was working
on behalf of the government.
I know those people. One of the people who ran that used to work for me.
I know exactly who those people are.
I know exactly how that program worked.
I knew the people in government who were running things like this,
the so-called global engagement center and all these different arms
of the government that had been imposing social media censorship.
So this is this entire complex
that we kind of saw unspooled in the Twitter files,
and then we've seen in, you know, the investigative reporting
by people like, you know, Mike Benz,
and Mike Shellenberger, and these other guys.
Like, I saw that whole thing get built,
and I, you know, over the course of, you know,
basically 12 years, I saw that whole thing get built,
and then, of course, I've been part of Elon's takeover
of Twitter, and so I've seen the, you know,
what it takes to try to unwind that with what he's doing at X.
And so I feel like I saw the first movie, right?
And then AI is a much more important topic,
but AI is very clearly the sequel to that.
And what I'm seeing is basically the exact same pattern that I saw with that.
And the people who were able to do that for social media for a long time are
the same kind of people and in many cases, literally the same people who are now trying to do that in AI
And so I like at this point, I feel like we've been warned like we've seen the first movie. We've been warned
We've seen how how bad it can get we need to make sure it doesn't happen again
And yeah
We need, you know, those of us in a position to be able to do something about it need to talk about it,
need to try to prevent it. Well, so at ARC, we're trying to formulate a set of policies
that I think strike to the heart of the matter.
And the heart of the matter is what story should orient us
as we move forward into the future.
And we're going to discover that by looking at the great stories
of the past and extracting out their genuine essence.
And I think the ethos of voluntary self-sacrifice
is the right foundation stone.
And I think that the proposition that society's built
on sacrifice is self-evident once you understand it,
because to be a social creature,
you have to give up individual supremacy.
You trade it in for the benefits of social being.
And your attention
is a sacrificial process too, because there's one thing you attend to at a time and a trillion
other things that you sacrifice that you could be attending to. Now, I think we do understand,
we're starting to understand the basics of the technical ethos of the sacrificial,
of the, what would you say, of the sacrificial foundation.
It's something like that.
And I think we understand that at Arc.
We have some principles that we're trying to use
to govern the genesis of this organization,
which I think will become the go-to,
and maybe already is the go-to, conference,
at least for people who are interested
in the same sort of ideas that you're putting forward.
We had a very successful conference last year
and the one that's coming up in February
looks like it's going to be larger and more successful.
We had spinoffs in Australia and so forth.
And so part of the emphasis there
is that we wanna put forward a vision that's invitational.
And there's a policy proposition, there's a proposition with regards to policy that lies at
the bottom of that, which is that if I can't invite you on board to go in the direction that
I'm proposing, then there's something wrong with my proposition. Right? If I have to use force,
if I have to use compulsion, then that's indicative
of a fundamental flaw in my conceptualization. Now there might be some exceptions for like
overtly criminal and malevolent types, because they're difficult to pull into the game. But
if the policy requires force rather than invitational compliance, there's something wrong
with it. And so what we're trying to do, and I see like very close parallels to the project
that you're engaged in is to formulate a vision
of the future that's so,
what would you say?
So self-evidently positive that people would strive
to find a reason not to be enthusiastically on board.
And I don't think you have to be a
naive optimist to formulate a vision like that. We know perfectly well that the world is a far more
abundant place than the Malthusian pessimists could have possibly imagined back in the 1960s
when they were agitating madly for their propositions of scarcity and overpopulation.
And so, okay, so what's the conclusion to that?
Well, the conclusion in part is that this AI problem
needs to be addressed, you know?
And I've built some AI systems that are founded
on the ancient principles, let's say,
that do in fact govern free societies,
and they're not woke.
They can interpret dreams, for example, quite accurately,
which is very interesting and remarkable to see.
And so they're much more weighted towards
something like the golden thread that runs through
the traditional humanist enterprise,
stretching back two or three thousand years.
And maybe there's 200 core texts in that enterprise
that constitute the center,
what used to constitute the center
of something like a great books program,
the great books program,
which is still running at the University of Chicago.
Now that's not sufficient because as you pointed out,
well, there's all this technological progress
that has been made in the last hundred years,
but there's something about it that's central and core.
And I think we can use the AI systems actually to untangle
what the core idea sets are that have underpinned,
free and productive, abundant, voluntary societies.
It's something like the set of propositions
that make for an iterating voluntary game
that's self-improving.
That's a very constrained set of pathways.
And there's something in that that I think attracts people
as a universally acceptable ethos.
It's the ethos on which a successful marriage would be founded
or a successful friendship or a successful business partnership
where all the participants are enthusiastically on board
without compulsion.
And then Jean Piaget, the developmental psychologist,
had mapped out the evolution of systems like that
in childhood play.
He was trying to reconcile the difference
between science and religion in his investigations
of the development of children's structures of knowledge.
And he got a long way in laying out the foundations
of that ethos.
And so did the comparative mythologists like
Mircea Eliade, who wrote some brilliant books on,
well, I think they're sort of like the equivalent
of early large language models.
That's how it looks to me now: Eliade was very good
at picking out the deep patterns of narrative commonality
that united major religious systems
across multiple cultures.
That was all thrown out, by the way,
by the postmodern literary theorists.
They just tossed all that out of the academy.
And that was a big mistake.
They turned to Foucault instead.
It was a cataclysmic mistake.
And it certainly ushered in this era of domination
by power narratives, which is underlying the sorts
of phenomena that you're describing
that are so appalling.
So what's happened to you as a consequence
of starting to speak out about this?
And why did you start to speak out?
And how do you, you said you were involved in this.
And so what's the difference between being involved
and being complicit?
I mean, I know these are complicated
problems that people learn from, but
like, why are you speaking out?
How are people responding to that?
And how do you see your role in this as it unfolded over the last, say, 15 years?
Yeah, so complicated question.
And I'll start by saying I claim no particular bravery, so I don't claim any particular moral
credit on this.
I'll start by saying there's this thing you'll hear about sometimes,
this concept of so-called fuck-you money.
And so that, right, there's this sort of like,
okay, if people are successful,
you make a certain amount of money,
now you can tell everybody, fuck you,
you can say whatever you want.
And I will just tell you,
my observation is that's actually not true.
Yeah, right, definitely not. The reason that's not true is because the people who prosper in our society tend
to do so because they're becoming responsible for more and more things.
And specifically, they're becoming responsible for more and more people.
And so one of the things I would observe about myself and observe about a lot of my peers
is even as we became more and more bothered and concerned and ultimately very worried about some of these things is
as that was happening, we were taking on greater and greater responsibilities for our employees
and for all the companies that were involved in, right, and for all the shareholders of
all of our companies.
And so, I think that's part of it. And, you know, you could say there's this sort of
endless question between the kind of
absolute commands of morality versus the real-world compromises
that you make to try to function in society.
I would say I was just as subject to that inherent conflict as anybody else.
I was in the room for a lot of these decisions.
I saw it every step of the way.
In some cases, I felt right up front that something was going wrong.
I was in the original discussion for one of
these companies on the definition of hate speech.
You can imagine how that discussion goes.
You know exactly how the discussion went,
but I'll just tell you, it's like,
well, hate speech is anything that makes people uncomfortable.
So then I'm like, well,
that comment you just made makes me uncomfortable,
and so therefore that must be hate speech.
Then they look at me like I've grown a third eye,
and I'm like, OK, that argument's not going to work.
And then they're like, well, Mark, surely you
agree that the N-word makes people uncomfortable.
And I'm like, yes, I agree with that.
If our hate speech policy is people
don't get to use the N-word, I'm OK with that,
as long as it stops there.
But of course, it doesn't stop there,
and it slides into what we then saw happen.
So I saw that happen.
The misinformation thing, same thing.
The misinformation thing actually on social media
is a fascinating and horrifying thing that played out,
which is it actually started out to attack
a specific form of spam.
So there were these Macedonian bot farms
that were literally creating what's called click spam
or sort of ad fraud on social media.
They were creating literally fake news stories.
The classic one was the pope has died.
And it's like, no, the pope has not died.
That is absolute misinformation.
But the reason that this bot farm puts that story out is because when people click on
it, they make money on the ads.
And then that's clearly a bad thing, and that's misinformation, and clearly we need to stop
that.
And so the mechanism was built to stop that kind of spam.
But then after the election, we discovered
that anybody who was pro-Donald Trump was presumably
an agent of Vladimir Putin.
And then all of a sudden, that became misinformation.
And so the engine that was intended to be built for spam,
then all of a sudden applied to politics.
And then off and away they went.
And then everything was misinformation.
Culminating in objections to three years of COVID lockdowns
became misinformation.
So I saw that entire thing unspool.
I saw all the pressures brought to bear on these companies.
I saw the people who went up against this get wrecked.
I saw these companies try to develop all these trade-offs.
Obviously I would claim for myself that I tried to argue this kind of every step of
the way.
And by the way, I'm not the only one who was concerned about this.
And I'll just, I think we should give Mark Zuckerberg a little bit of credit on this
on one specific point, which is, you may recall, he gave a speech in 2019 at Georgetown in which
he gave a very principled defense of free speech from first principles.
And was, you know, he at that point was trying very hard to kind of maintain the line on
this.
Now, 2020, everything went like completely nuts.
And then the Biden administration came in and the government came in and they really
lowered the boom.
And so things went very bad after that.
But even Mark, who a lot of people get very mad at on these things, he was trying in many
ways to hold on to these things.
Anyway, it unfolded the way that it did.
I don't claim any particular courage.
I will tell you, basically starting in 2022, I saw some leaders in our industry really
start to step up.
And one that I would give huge credit to is Brian Armstrong, who's the CEO of Coinbase,
which is a company that we're involved in.
And you may recall, he's the guy who wrote basically a manifesto, and he said: these companies
need to be devoted to their missions, not every other mission in society.
Right, right. And so he declared there's going to be a new way to run these companies.
We're not going to have all the politics. We're not going to have the whole bring-your-whole-self-to-work thing.
We're not going to have all the internal corrosion.
We're going to have our mission, and then we're going to focus on that.
We're not going to take on all the world's ills.
And then he did this thing where he actually bought out of the company the activist class
that we talked about earlier.
And the way that he did that was with a voluntary buyout
where he said, if you're not on board with working
at a non-political, non-ideological company
that's focused on its own mission, not every other mission,
then I will pay you money to go work someplace
where you'll be able to fully exercise your politics.
There are a bunch of other CEOs that have been basically following in
Brian's footsteps more quietly,
but they've basically been doing the same thing.
A lot of these companies have turned the corner on this now and they're
working these people out.
Then, quite frankly, the big event, I think, is this election.
People have all kinds of positive and negative takes on Trump,
and this gets into lots and lots of political issues.
But I think that the Trump victory being what it was and being not just Trump winning again,
but also Trump winning the popular vote and also simultaneously the House and the Senate,
it feels like the ice has cracked.
Maybe the pressure for the ice to crack was building over two years, but
as of November 6th, it feels like something really fundamental changed
where all of a sudden people have become
basically willing to talk about the things
they weren't willing to talk about before.
Okay, let's go back to your manifesto.
So I wanted to highlight a couple of things
in relationship to that.
I had some questions for you too.
Tell me, to begin with, if you would, why you wrote this manifesto. Maybe let everybody know about it first: why you wrote it and what effect it's had.
Then I'll go through it step by step, at least to some degree, and I can let you know
what ideas we've been developing with the Alliance for Responsible Citizenship,
and we can play with that a little bit.
So, I'm 30 years on now in the tech industry,
in the US and Silicon Valley.
What I experienced between roughly 1994, when I entered, through to about 2012
was one way in which everything operated
and one set of beliefs everybody had.
And then there was this incredible,
discontinuous change that happened between,
call it, 2012 and 2014,
that then cascaded into what you might describe as
some degree of insanity over the last decade.
And of course, you've talked about a lot of aspects of that insanity.
But the way I would describe it is for the first 15,
20 years of my career,
there was what I refer to sometimes as the deal with
the capital D or you might call it the compact,
or maybe just the universal belief system,
which was effectively that everybody I knew in tech was a social liberal progressive
in good standing.
But operating in the era of Clinton-Gore, and then later on through Bush and into the first
Obama term, it was viewed that being a social progressive in good standing was completely
compatible with being a capitalist, completely compatible with being an entrepreneur and a business person, completely
compatible with succeeding in business. And so the basic deal was: you have the
exact same political and social beliefs as everybody you know. You have the
exact same social and political beliefs as the New York Times every day.
And their beliefs change over time, but you update yours to stay current.
And everybody around you believes the same thing.
The dinner table conversations: everybody's in
100 percent agreement on everything at all times.
But then you go succeed in business,
and you build your company,
and you build products, and you build new technology.
If your company succeeds,
it goes public and people become wealthy.
Then you square the circle of
social progressivism and entrepreneurial and business success
with philanthropy.
So you donate the money to good social causes and then,
someday your obituary says he was both
a successful business person and a great human being.
Basically, what I experienced is that deal broke down
between 2012, 2014, 2015,
and then imploded spectacularly in 2017.
Ever since, there has been no way to square that circle, which is if you are successful
in business, in tech, in entrepreneurship, if you become successful, you are de facto
evil.
You can protest that you're actually a good person, but you are presumed to be de facto
evil.
By the way, furthermore, philanthropy will no longer wash your sins.
This was a massive change, and it's still playing out,
but philanthropy will no longer wash your sins because philanthropy itself is unacceptable.
The belief goes: philanthropy is an unacceptable diversion of
resources from the proper way that they should be deployed,
which is the state. And so a private-enterprise form of philanthropy
is now considered de facto bad.
And so everybody in my world basically
had a decision to make, which was,
did they basically go sharply to the left on not just
social issues, but also economic issues?
And did they become starkly anti-business, anti-tech,
essentially self-hating in order to stay
in the good graces of what happened on that side?
Or did they have to do what Peter Thiel did early on and go way to the right,
and basically just punch out and declare that,
I'm completely out of progressivism,
I'm completely finished with this,
and I'm going to go a completely different direction.
Obviously, that was part of
the phenomenon that culminated in Trump's first election.
And so anyway, long story short,
the manifesto that I wrote is an attempt to bring things back to
what I consider to be a more sensible way to think and operate:
a big-tent social and political umbrella,
but one where tech innovation is actually still good,
business is still good, capitalism is still good,
technological progress is still good,
the people who work on these things are actually still good,
and we can actually be proud of what we do.
You said that something changed quite radically in 2017.
I'd like you to delve a little bit more
into the breakdown of this deal.
Like your claim there was that for a good while,
center left positions politically,
let's say philosophically,
were compatible with the tech revolution
and with the big business side of the tech revolution.
But you pointed to a transformation across time
that really became unmistakable by 2017.
Why 2017 as a year and what is it that you think changed?
You know, you painted a broad-scale picture of this transformation and also pointed to the fact that
it was no longer possible to be an economic capitalist, to be a free-market guy, and to proclaim allegiance to progressive
ideals; that became impossible.
And in 2017, what do you think happened?
How do you understand that?
Yeah, so different people, of course, have different perspectives on this, but I'll tell
you what I experienced.
And I think in retrospect, what happened is Silicon Valley experienced this before a lot
of other places in the country and before a lot of other fields of business.
And so I have many friends in other areas of business who live and work in other places
where I would describe to them what was happening in 2012 or 2014 or 2016.
And they would look at me like I'm crazy.
And I'm like, no, I'm describing what's actually happening on the ground here.
And then three years later, they would tell me, oh, it's also happening in Hollywood,
or it's also happening in finance,
or it's also happening in these other industries.
In retrospect, I think I had a front-row seat to this
just because Silicon Valley was first in.
Silicon Valley was the industry that went the hardest
for this transformation up front.
That's what we experienced in Silicon Valley.
And then there's the nature of my work: over this entire time period,
I've been a venture capitalist and an investor.
And so the nature of my work is I've been exposed
to a large number of companies all at the same time,
some very small, and then by the way, also some very large.
So for example, I've been on the Facebook board
of directors this entire arc, right?
And a lot of what I'm describing,
you can actually see through just the history
of just the one company, Facebook, which we can talk about.
But anyway, so I think I basically saw the vanguard of this movement up close.
And essentially what I saw was, it was really 2012, it was the beginning of the second Obama term,
and it was sort of the aftermath of the global financial crisis.
And so it was some combination of those two things.
So the global financial crisis hits in 2008, Occupy Wall Street takes off, but it's this kind of fringe thing.
Bernie Sanders starts to activate as a national candidate.
Some of these other politicians on the sort of further to the left
start to become prominent, start to take over the Democratic Party.
And then the economy caved in,
so we went through a severe recession between, call it, 2009 and 2011. By 2012, the economy
was coming back. People maybe weren't worried about being fired anymore, right? If people
think they're going to get fired in a recession, they generally don't act out inside the company.
But if they think their jobs are secure in an economic boom, they can start to become
activists. And so the sort of employee activist movement started around 2012. And then the
Obama second term, I would say the progressives in the Democratic Party kind of took more control,
kind of starting around that time.
And the Obama administration itself kind of turned to the left.
And so you started to get this kind of
activated political energy,
the activist movements in these companies,
where you had people who the year before had been
a quiet web designer working in their cubicle,
and then all of a sudden they're a social and
political revolutionary inside their own company.
And then by the way, the shareholders activated,
which was really interesting.
Like this is when Larry Fink at BlackRock decided
he was going to save the world.
And then the press activated.
And so all of a sudden, the same tech reporters
who had been very happy covering tech
and talking about exciting new ideas
all of a sudden became very accusatory
and started to condemn the industry.
So that started to pop around 2012. And then
what I saw, you might even describe it as a controlled skid that became an uncontrolled skid,
which was that energy built up in tech between 2012 and 2015.
And then, you know, basically what happened in rapid succession was Trump's nomination and then Trump's election,
his victory in 2016.
And I described both of those events as like 10Xing
of the political energy in this system.
And so both of those events really activated
very strong antibody responses,
which as you know culminated in like
mass protests in the streets right after the 2016 election.
And then of course the narrative then
became crystallized,
which is there are the forces of darkness
represented by Trump, represented by the right,
represented by capitalism, represented by tech,
and there are the forces of light represented by wokeness
and the racial reckoning and the George Floyd protests
and so forth, and it became this very clear litmus test.
And so the pattern basically locked in hard in 2017
and then continued to escalate from there.
So in your manifesto, you list some of these ideas
that were pathological, let's say,
that emerged on the left.
And I just want to find the quote. For example, you say technology
doesn't care about your ethnicity, race, religion, national origin, gender, sexuality,
political views, height, weight, etc., listing out the dimensions of hypothetical
oppression that the intersectionalist woke mob stresses continually.
Now, you point your finger at that, obviously,
because you feel that something went seriously wrong
with regard to the prioritization
of those dimensions of difference.
And that's part of the movement of diversity.
That's part of the movement of equity and inclusivity.
Let me just find this other, yes, here we go.
Our present society has been subjected to
a mass demoralization campaign for six decades against technology and against life under varying
names like existential risk, sustainability, ESG, sustainable development goals, social responsibility,
stakeholder capitalism, precautionary principle, trust and safety, tech ethics, risk management, degrowth. The demoralization campaign is based
on bad ideas of the past, zombie ideas, many derived from communism, disasters then and now
that have refused to die. And that's in the part of your manifesto that is subtitled "The Enemy."
The enemy you're characterizing there is a system of ideas,
and I guess that would be the system of woke ideas.
That system presumes, and correct me if I get this wrong,
that we're fundamentally motivated by power,
that anybody who has a position of authority
actually has a position of power,
that the best way to read positions of power
is from the perspective of a narrative
that's basically predicated on the hypothesis
of oppressor and oppressed, and that there are multiple dimensions of oppression
that need to be called out and rectified.
And the DEI movement is part of that.
And so you point to the fact that these are zombie ideas
left over, let's say, from the communist enterprise
of the early and mid 20th century.
And that seems to me precisely appropriate.
And you said you thought those ideas emerged
on the corporate front in a damaging way first in big tech.
You know, I probably saw evidence of that
most particularly in relationship to the scandal
that surrounded James Damore.
Because that was really cardinal for me
because like I spent a fair bit of time talking to James
and my impression of him was that he was just an engineer.
And I don't mean that in any disparaging sense.
He thought like an engineer.
And he went to a DEI meeting and they asked him
for feedback on what he had observed
and heard and James being an engineer thought
that they actually wanted feedback, you know,
because he didn't have the social skills to understand
that he was supposed to be participating
in an elaborate lie.
And so he provided them with feedback about their claims,
especially with regards to gender differences.
And James actually nailed it pretty precisely
for someone who wasn't a research psychologist.
He had summarized the literature
on gender differences, for example, extremely accurately.
And they pilloried him.
And I thought, that's really bad,
because it means that Google wouldn't stand behind
its own engineer when he was telling the truth.
And there was every attempt made to destroy his career.
Now, why do you think that whatever happened
affected tech first?
And what did you see happening
that you then saw happening in other corporations?
Yeah, so why did it happen in tech first?
So a couple of things.
So one is tech is just, I would say,
extremely connected
into the universities.
And so almost everything we do flows from the computer science
departments and then engineering departments
at major US research universities.
And we hire new graduates all the time,
and we work with the university professors and research groups all the time.
And so there's just a very, very tight, direct connection there. And so it's like, if an ideological
pathological virus is going to escape the university and jump into the civilian population,
it'll hit tech first, which is what happened. Or maybe tech and media first.
So that's one. And then two, I think,
is the psychological sorting that happens when kids decide
what profession to go into.
And what we get are the very high-openness people.
The highest-openness people coming out of college,
who are also high-IQ and ambitious,
go into tech, they go into creative industries,
or they go into media;
that's where they sort into.
And so we get the most open ones.
And by the way, also the ambitious, driven,
you'd say high-industriousness ones as well.
And that's the formula for highly effective activists, right?
And so we got the full load of that.
And then look, this movement that we now call wokeness,
it hijacked what I would call, at the time,
bog-standard progressivism, which is:
of course you want to be diverse,
and of course you want to be inclusive,
and of course you want everybody to feel included,
and of course you want to be kind,
and of course you want to be fair,
and of course you want a just society.
And that was part of the moderate belief set that everybody in my world had
for certainly the preceding 20 years.
And so at first it just felt like,
oh, this is more of what we're used to.
Of course this is what we want.
But it turned out what we were dealing with was
something that was far more aggressive,
a much more aggressive movement.
And then this activism phenomenon.
And then this became a very practical issue for
these companies on a day-to-day basis.
So you mentioned the Damore incident.
So I talked to executives at Google while that was
going down because that was so confusing for me at the time.
The reason they acted on him the way they did and fired him,
and ostracized him, and did all the rest of it is because they
thought they were hours away from
actual physical riots on the Google campus.
They thought employee mobs were going to try to burn the place down, physically.
Right? And at the time, that was such an aberrant expectation. There were other companies, by the way, at the same time that were having all-hands meetings that were
completely unlike anything that we had ever seen before, that you could only compare to struggle sessions.
There's the famous example: the Netflix adaptation of
The Three-Body Problem starts with this very vivid recreation of
a Maoist-era communist Chinese struggle session,
where the students are on stage and the disgraced
professor is confessing his sins, and then they beat him to death.
And you see the inflamed passions of
the young, ideologically consumed crowd that is
completely convinced that it's on the side of justice and morality.
Fortunately, nobody got beaten to death
at these companies on stage in an all-hands meeting.
But you started to see that same level of activated energy,
that same level of passion.
You started to see hysterics,
people crying and screaming in the audience.
And so these companies knew they were at risk from their employees
up to and including the risk of actual physical riots.
And that at the time, of course, was like a completely bizarre thing.
And we at the time had no idea what we were dealing with.
But in retrospect, it was through events like what James Damore went through
that we ultimately did figure out what this was.
Okay, okay, so let me ask you a question about that.
You know, it's a management question, I guess.
So, I had some trouble at Penguin Random House
a couple of years ago
after writing a couple of bestsellers for them.
I was contracted with one of their subdivisions
and they had a bit of an employee rebellion
that would be perhaps reminiscent of the sort of thing
that you're referring to.
And they kowtowed to them, and I ended up switching
to a different subdivision.
Now, it really made no material difference to me,
and I was just as happy to be with a subdivision where
everybody in the company, visible and invisible, was working to make what I was doing with them successful,
rather than scuttling it invisibly from behind the scenes. But my sense then was:
why don't you just fire these people?
And I'm dead serious about that.
It's like, first of all, I'll give you an example.
So we just set up this company, Peterson Academy online,
and we have 40,000 students now and about 30 professors,
and we're doing what we can to bring extremely high quality,
elite university level education to people everywhere
for virtually no money.
And that's working like a charm.
Now we set up a social media platform inside that
so that people could interact like they do on Twitter
or Facebook, et cetera, Instagram,
because we try to integrate the best features
of those networks. But we wanted to make sure that it was a civilized place.
And so the fact that people have to pay for access to it helps that a lot, right?
Because it keeps out the trolls and the bots and the bad actors who can multiply accounts beyond comprehension for no money.
And so the mere price of entry helps,
but we also watched and if people misbehaved,
we did something about it.
And we kicked four people out of 40,000.
And one of them we put on probation.
And that was all we had to do.
You know, there was goodwill
and everybody was behaving properly.
And like I said, there was a cost of entry,
but it didn't take a lot of
disciplinary action to make an awful lot of difference with regard to behavior.
And so, you know, I can understand that Google might have been apprehensive about
activating the activists within their confines, but
sacrificing James Damore to the woke mob
because he told the truth is not a good move forward.
And I just don't understand at all.
You see, and the same thing happened at Penguin,
at Penguin Random House.
It's like, you could just fire these people.
Like, there were people there who wanted to not publish
a book of mine that they hadn't even read.
You know, they weren't people who deserved to be working
at what's arguably the greatest publishing house
in the world.
So why, you alluded to it a little bit.
You said that people were taken by surprise, you know,
and fair enough.
And it was the case that there was a radical transformation
in the university environment somewhere between 2012
and 2016,
where all these terrible, woke, quasi-communist, neo-Marxist ideas emerged and became dominant
very quickly.
But I'm still wondering: why do you think that was the pattern of decision being
made, instead of taking appropriate disciplinary action and just ridding the companies of people
who were going to cause trouble?
Yeah, so there are a bunch of layers to it in retrospect.
And let me say that what you describe is what's happening now.
In the last two years, a lot of companies, at long last, actually are firing activists, and we can talk about that.
So I think the tide is turning on that a bit.
But going back in time, between 2012 and, let's say, 2022,
there's a full 10-year stretch
where what you're describing didn't happen.
I think there's layers. So one is, as I said,
just people didn't understand it.
I think quite frankly, number two,
a lot of people in charge agreed with it at least to start.
So they saw people who had what appeared to be
the same political ideological leanings as they did
and were just simply more passionate about them.
So they thought they were on the same side.
They agreed with it.
Then at some point, they discovered that they were dealing with something different,
maybe a more pure strain or a more fundamentalist approach.
At that point, of course, they became afraid.
So they were afraid of being lit on fire themselves.
And by the way, I think tech is starting to work its way out of this.
I think Hollywood is still not,
and my friends in Hollywood, when I talk to them...
Oh, not at all.
Not at all.
When I talk to people who are in serious positions
of responsibility in Hollywood,
after a couple of drinks and
in sort of a zone of privacy,
pretty frequently they'll say: look,
I just can't, it's still too scary.
I can't go up against this because it'll ruin my career.
So there is this group frenzy cancellation,
ostracizing career destruction thing, that's real.
But let me highlight two other things.
So one is, it wasn't just the employees.
It was the employees,
it was a substantial percentage of the executive team.
It was also the board of directors in a lot of cases.
So you'd have politically activated board members,
and some of these companies still have that, by the way.
It was also the shareholders.
You would think that investors in
a capitalist enterprise would only be concerned with
economic return,
and it turns out that's not true because you have this intermediate layer of institutions like BlackRock
where they're aggregating up lots of individual shareholders.
And then the managers of the intermediary can exercise their own politics using the voting power of aggregated small shareholder holdings.
So you have the shareholders coming at them.
Then, by the way, you also have the government coming at them.
This administration has been very aggressive on a number of fronts.
We could talk about a bunch of examples of that,
but you have direct government pressure coming at you.
You have the entire press corps coming at you.
So it feels like it's the entire world
bearing in on you and they're all gonna light you on fire.
And then that takes me to-
Well, and that does happen.
That does. Like, we should also point out,
that's not a delusion.
I mean, I think it's also the case
that the new communication technologies
that make the social media platforms so powerful
have also enabled reputation savagers
in a way that we hadn't seen before,
because you can accuse someone
from behind the cloak of anonymity
and gather a pretty nice mob around them in no time flat
with absolutely no risk to yourself.
And there's a pattern of antisocial behavior
that characterizes women, and this has been well documented
for 50 years in the clinical literature.
Like, antisocial men tend to use physical aggression,
bullying, but antisocial women use reputation savaging
and exclusion.
And it looks like social media, especially anonymous social media, what would you say,
enables the female pattern of aggression, which is reputation savaging and cancellation.
Now I'm not accusing women of doing that.
You've got to get me right here.
It's that there are different pathways to antisocial expression.
One of them, physical violence, isn't enabled by technology, but the other one, which is reputation savaging and exclusion, is clearly abetted by technology. And so that's another feature that
might have made people leery of putting their head up above the turret. You know, in Canada,
I'm still being investigated by the
Ontario College of Psychologists, and I'm scheduled for re-education if they can ever get their act
together to do that. And I've fought an eight-year court battle which has been extremely expensive
and very, very annoying to say the least. And I don't think that there's another professional
in Canada on the psychological or medical side
who's been willing to put their head above the parapet
except in brief interchanges.
And the reason for that is it simply is too devastating.
And so I have some sympathy for people who are concerned
that they'll be taken out because they might be,
but you know, by the same token,
if you kowtow to the woke mob for any length of time,
as the tech industry appears to be discovering now,
you end up undermining everything that you hold sacred.
I mean, you alluded to the fact that you'd hope that
at least the shareholders would be appropriately oriented
by market forces, by greed,
to put it in the most negative possible way.
And you'd hope that that would be sufficient incentive
to keep things above board
because I'd way rather deal with someone
who's motivated by money than motivated by ideology.
But even that isn't enough to ensure
that even corporations act
in their own best economic interest.
So it is a perfect storm.
You alluded to government pressure as well,
and so maybe you could shed a little bit more light on that,
because that's also particularly worrisome.
And it's certainly been something that's characteristic
and is still characteristic of Canada under Trudeau.
Yeah, so just a couple things on that.
So one is, I should just note, and I'm sure you'll agree with me on this,
there are many men who also exhibit that reputational destruction.
Oh, absolutely. Men will use it.
They typically don't in the real world.
But if the pathway is laid open to it on social media, let's say,
and there's a particular kind of man
who's more likely to do that too.
Those are the dark tetrad types who are narcissistic
and psychopathic and Machiavellian and sadistic,
lovely combination of personality traits.
And they're definitely enabled online.
Yeah, so we've had plenty of them as well.
Yeah, so the government pressure side,
so when this all hit, like I said,
I didn't, nobody I knew understood what was happening.
I didn't understand it.
And so I did what I do in circumstances like that.
And I basically just tried to work my way backwards
through history and figure out where this stuff came from.
And I think for pressure on corporations,
the context for this is that corporations are...
There's this cliche that you'll hear actually interestingly from the left, which is, well,
private companies can do whatever they want.
They can censor whoever they want.
Private companies have total latitude to do whatever they want.
And of course, that's totally untrue.
Private companies are extensively regulated by the government.
Private companies have been regulated by a civil rights regime imposed by the government
for the last 60 years.
That civil rights regime certainly has done many good things
in terms of opening up opportunities
for different minority groups and
so forth to participate in business.
But that civil rights regime put in place
this standard called disparate impact,
in which you can evaluate whether a company is racist or not
on the basis of just raw numbers
without having to prove that they intended to be,
in terms of who they select for their employees.
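To make the arithmetic concrete, here is a minimal sketch of a disparate-impact screen, using the well-known "four-fifths rule" from the EEOC's 1978 Uniform Guidelines on Employee Selection Procedures. The group names and counts are hypothetical, and this is an illustration of the general idea, not of any specific company's compliance process.

```python
# Minimal sketch of a disparate-impact screen using the EEOC's
# "four-fifths rule": a group's selection rate below 80% of the
# highest group's rate is treated as evidence of adverse impact,
# with no showing of intent required. All numbers are hypothetical.

def selection_rates(applicants, hires):
    """Selection rate per group: hires divided by applicants."""
    return {g: hires[g] / applicants[g] for g in applicants}

def four_fifths_flags(applicants, hires, threshold=0.8):
    """Flag any group whose rate falls below threshold * best rate."""
    rates = selection_rates(applicants, hires)
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

applicants = {"group_a": 200, "group_b": 100}  # hypothetical pools
hires = {"group_a": 40, "group_b": 12}         # hypothetical hires

print(selection_rates(applicants, hires))    # {'group_a': 0.2, 'group_b': 0.12}
print(four_fifths_flags(applicants, hires))  # group_b flagged: 0.12 / 0.2 = 0.6 < 0.8
```

Note how the flag is computed purely from outcome ratios, which is the "just raw numbers, no intent" point being made above.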
So companies, predating the arrival of what we call woke,
they already had legal and regulatory and political and
compliance requirements put on them to achieve things
like racial diversity, gender diversity, and so forth.
I grew up in that environment.
I considered that totally normal for a very long time.
I just figured that's how things worked,
and that was the positive payoff from the Civil Rights
Movement and from the 1960s, and that was just the state of play.
And by the way, it was, I think, manageable and good
in some ways, and kind of on our way we went,
like we could deal with it.
But basically, what happened was when woke arrived,
that regime was enormously intensified.
And what happened was a sequence of events;
there was literally a playbook, for example for DEI,
where activists and employees and board members would push you.
First of all, you had to start doing
explicit minority statistical reporting.
So you had to fully air in public any disparate impact,
any differences in racial, gender, ethnic,
or sexual representation relative to the overall population, in a statistical report you had
to update every year.
And of course, they would tell you, as long as you issue this report, you're fine.
Well, of course, that wasn't the case.
What followed the report was, okay, now you need what's called the Rooney rule.
And the Rooney rule basically says you have to have statistically proportionate representation
of candidates for every job opening relative to the overall population.
Right.
And again, what the-
So stop there for just a sec, because we should delve into that.
That's a terrible thing, because we can think about this arithmetically.
It's like you have to have proportionate representation
of all protected group members in all categories.
Okay, there's a lot of horror in those few words
because the first problem is those categories
are multiplicable without end.
And you see this, for example,
with the continued extension of the LGBT acronym.
There's no end to the
number of potential dimensions of discrimination that can be generated.
And then, so that's an unsolvable problem to begin with. It means
you're screwed no matter what you do. But it's worse than that when you combine
that with the doctrine of intersectionality, because not only do you
then have the additive consequence
of these multiple dimensions of potential prejudice.
So for example, in Canada,
it's illegal to discriminate
on the basis of gender expression.
Okay, that's separate from gender identity.
So now there's a multitude of categories
of gender identity, hypothetically.
I mean, the estimates range from like two to 300.
But gender expression is essentially
how you present yourself.
I think it's technically indistinguishable
from fashion, fundamentally.
And I'm not trying to be a prick about that.
I mean, I've looked at the wording,
and I can't distinguish it conceptually
from mode of self-presentation:
hairstyle, dress, et cetera.
And so that means you can't discriminate
on the basis of whatever infinite number
of categories of gender expression you could generate.
And then if you multiply those together,
I mean, how many bloody categories do you need before you multiply them together?
You have so many categories that it's impossible to deal with.
So there's a major technical problem
at the bottom of this realm of conceptualization
that's basically, A, making it
impossible for companies to comply,
exposing them to legal risk everywhere,
and, B, providing an infinite market for aggrieved and resentful activism.
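A quick sketch of the arithmetic being gestured at here: if each protected dimension has some number of recognized categories, the number of intersectional cells is the product of those counts, which explodes far past what any real workforce could proportionately represent. The dimensions and counts below are hypothetical illustrations, not legal categories.

```python
# Hypothetical illustration of the combinatorial explosion of
# intersectional categories: total cells = product of per-dimension counts.
from math import prod

dimensions = {               # all counts are hypothetical
    "race": 6,
    "gender_identity": 70,   # estimates cited range from 2 to 300
    "gender_expression": 50, # effectively unbounded if it covers self-presentation
    "sexuality": 10,
    "religion": 12,
    "national_origin": 30,
}

cells = prod(dimensions.values())
print(f"{cells:,} intersectional cells")  # 75,600,000 with these counts

# Even a 10,000-person company cannot place one employee per cell,
# so proportional representation across every cell is unsatisfiable.
```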
Yeah, that's right.
It feels like what we saw.
So reporting leads to candidate pools.
Candidate pools, the pressure then is, well, you need to hire proportionately according
to whatever these categories are, including all the new ones.
And then hiring means that then step four is promotions.
You need to promote at the same rate, right? And the minute you have that requirement, of course,
now any performance metrics are just totally out the window because you can't, right? You just have
to promote everybody identically, right? And that's sort of the slide into the complete removal of
merit from the system. And then, by the way, the fifth stage is you have to lay off proportionately,
right? And so you're bound on the other side.
And what happens is precisely what I'm sure you know happens and what you've seen happen:
a descent of the culture of the company into complete dog-eat-dog, us-versus-them;
the employee base starts to activate along these
identity lines inside the company.
These companies all created what are known by the incredible euphemism
of employee resource groups, ERGs,
which are basically segregated employee affiliation groups.
So now the employees aren't employees of your company;
the employees are members of a group who just happen to be at your company,
and their group membership, along whatever axis we're talking about, ends up trumping their roles as employees.
And then you have this internal descent into accusations, into fear. You have this incredible
tokenization that takes place, which is the classic problem of
affirmative action: any member of an underrepresented group is assumed to have gotten hired only because of their skin
color or their sex, which is horrible for members
of that group.
And so you get this downward slide.
Especially the competent ones.
It's terrible for the competent ones.
Exactly.
And so it's acid.
You're pouring cultural acid on your company,
and the entire thing is devolving
into complete chaos internally.
And what's happening is the activists and the press and the board and everybody else
is pressing you to do this.
And then the government on top of that is pressing you to do it.
And under this last administration, that reached entirely new heights of absurdity.
So let me take a step back.
Once you walk down this path and go through all those steps, I believe there's no question
you now have illegal quotas.
And you have illegal hiring practices and you have illegal promotion practices. And by the way, you also have illegal layoff practices.
I think under any reading of US civil rights law, which says you are not allowed to discriminate on the basis of all these characteristics,
you have worked yourself into a system in which you are absolutely
discriminating on the basis of these characteristics through actual hard quotas, which are illegal.
And so to start with, I think all of these companies that implemented these systems,
I think they've all ended up basically being on the wrong side of civil rights law, which
is of course this incredibly ironic result, right?
That they've all ended up with illegal quotas.
I mentioned Hollywood earlier.
Hollywood has gone all in for it.
They literally now publish their hard quotas.
The studios have these statements that say: by X date,
50 percent of our producers and writers and actors and so forth
are going to be from specific groups.
Again, you just read like the Civil Rights Act and it's like,
okay, that's actually not legal and yet they're doing it.
This administration, this last administration,
the Biden administration really hammered this in and they
put these real radicals in charge of groups like
the Civil Rights Division of the Department of Justice.
The ultimate, most bizarre expression of this was SpaceX.
One of Elon's companies got sued by
the Civil Rights Division of this Department of
Justice for not hiring enough refugees:
not hiring enough foreign nationals who had come in either illegally
or through a refugee path.
Notwithstanding the fact that SpaceX is a federal contractor and is only allowed in
most of its employee base to hire American citizens.
And so the government simultaneously demands of SpaceX that they only hire American citizens
and that they hire refugees.
And the government views no responsibility whatsoever to reconcile that.
You're guilty either way.
Right? And then, again, in general, companies are in this bind now where if they do everything they're supposed to do,
they end up in violation of the civil rights law, which they started out by trying to comply with.
And this has all happened without reason
and rational discussion.
This has all happened in a completely hysterical,
emotional frenzy.
And what these companies are realizing
is they're now on the other side of this
and there's just simply no way to win.
Well, there's another, there's an analog to that,
which is very interesting.
I mean, I started to see all this happen back in the 1990s, because I was at Harvard
when The Bell Curve was published, and I watched that blow up the department at Harvard, and
it scuttled one of my students' academic careers for reasons I won't go into.
Well, I was working with that student on developing validated predictors of academic, managerial, and entrepreneurial
performance.
And so we're interested in that scientifically.
What can you measure that predicts performance in these realms?
And the evidence for that's starkly clear.
The best predictor of performance in a complex job is IQ.
And psychologists tore themselves into shreds, especially after The Bell Curve,
trying to convince themselves that IQ didn't exist. But it is the most well-established
phenomenon in the social sciences, probably by something
approximating an order of magnitude. So if you throw out IQ research, you pretty much throw out all social science research.
And so that turns out to be a big problem.
Now, personality measures also matter,
conscientiousness, for example, for managers and openness,
which you mentioned earlier for entrepreneurs,
but they're much less powerful,
about one fifth as powerful as IQ.
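One hedged way to read "one fifth as powerful" is as variance in job performance explained, r squared. The validity coefficients below are assumptions in the rough ballpark of published meta-analyses on selection methods, not figures from the conversation:

```python
# Illustration of "one fifth as powerful," reading predictive power
# as variance explained (r squared). Both coefficients are assumptions.
r_iq = 0.50           # assumed validity of IQ / general mental ability
r_personality = 0.22  # assumed validity of a single trait, e.g. conscientiousness

var_iq = r_iq ** 2                    # 0.25 of performance variance
var_personality = r_personality ** 2  # ~0.048 of performance variance

print(round(var_personality / var_iq, 2))  # ~0.19, roughly one fifth
```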
Now the problem is that IQ measures show racial
disparities and that just doesn't go away no matter how you look at it. Now at the same time
the U.S. justice system set up a system of laws that govern hiring that said that you had to use
the most valid and reliable predictors of performance that were available
to do your hiring, your placement and your promotion,
but none of those could produce disparate impact.
Which basically meant, as far as I can tell,
whatever procedure you use to hire is de facto illegal.
Now, lots of companies do this, and I don't know why it
hasn't become a legal issue.
So you could say, well, we use interviews,
which most companies do use.
Well, interviews are not valid predictors of performance.
They're not much better than chance.
Structured interviews are better,
but ordinary interviews aren't great at all.
So they failed the validity and reliability test.
And so I don't think there is a way that a company
can hire that isn't illegal,
technically illegal in the United States.
And then I looked into that for years,
trying to figure out how the hell did this come about?
And the reason it came about is because the legislators
basically abandoned their responsibility to the courts and decided that
they were just gonna let the court sort this mess out.
And that would mean that companies would be subject
to legal pressure and that there would be judicial rulings
in consequence, which would be very hard on the companies
in question, but it meant the legislators didn't have
to take the heat.
And so there's still an ugly problem at the bottom
of all this that no one has enough courage to address.
And so, but the upshot is that, as you pointed out,
companies find themselves in a position
where no matter what they do, it's illegal.
I've had lawyers, employment law lawyers, literally write analyses of this
as I've been trying to figure it out.
And literally, you read the analysis, and it is
absolutely 100% illegal to discriminate on the basis of these
characteristics, and it is 100% absolutely illegal to not discriminate on the basis of
these characteristics. Both of those are true.
You mentioned interviews. Interviews are an ideal setting for bias because, even if you just assume most people like people who are like themselves,
is a member from a certain group going to be more
inclined to hire members from that group?
Probably yes, just if there are no other parameters.
So precisely, you want to get to quantitative measures
because you want to take that bias out of the system,
but then the quantitative measures are presumptively
illegal because they lead to bias through disparate impact. Yeah, and so maybe the term is Kafka trap, right? You end up in this
vise, and then everybody is just so mad that you can't even have the discussion. And so this is
the downward spiral. On the one hand, I think there's a lot of this that just fundamentally
can't be fixed, because a lot of these assumptions, a lot of this stuff, got baked in going back to the
1960s and 1970s.
So a lot of this is long since settled law and I don't know that anybody has the appetite
to reopen Pandora's box in this.
Having said that, this new administration, the Trump administration coming in, I would
say every indication is that the Trump administration's policies and enforcement
are going to flip to the other side of this.
And so one of the things that's very fascinating about what's happening in business right now
is a lot of boards of directors are now basically having a discussion internally with their
legal team saying, okay, we cannot continue to do the just overt discriminatory hiring
and employee segmentation that we've been doing.
We're not going to be permitted to.
And so we have to back way off of these programs.
And you're already seeing Fortune 500 companies
starting to shut down DEI programs.
And I think you're going to see a lot more of that.
Because they're going to try to come into compliance
with what the new Trump regime wants, which will
be on the other side of this.
But the underlying issues are likely to stay unresolved.
I think in practice and retrospect,
maybe this is too optimistic on my part,
but my time in business, 80s, 90s, 2000s,
it felt like we had a reasonable détente.
And although you ideally might want to get in there
and figure this stuff all out, as long as it's kept
to a manageable simmer, you can kind of have your cake and eat it too,
and people can kind of get along, and it's okay.
Maybe it's not a perfectly merit-based system
or maybe there's issues along the way,
but fundamentally companies worked really well
for a long time.
If you can work your way out of this sort of
elevated level of hysteria, and optimistically,
I would say that that's starting to happen.
And the change in legal regime that's coming, I think,
will actually help that happen.
Right.
So you're optimistic because you believe
that the free market system is flexible enough
to deal with ordinary stupidity.
But like insane malevolent stupidity is just too much.
Yeah.
Yeah, I think that's reasonable.
You know?
Well, I do think that's reasonable, because everything's a mess all the time and people can still manage their way forward.
But when you have a policy that says any
identifiable disparate outcome with regard to any conceivable combination of groups is
an indication of illegal prejudice, there's no way anybody can function in that situation,
because those are impossible constraints to satisfy.
And they lead to paradoxical situations
like the one you described Musk's company
as being entangled in, right?
That's just so frustrating for anybody
that's actually trying to do something
that requires merit that they'll just throw up their hands.
And so, yeah, yeah, yeah.
Okay, so I'm going to stop you there because we're out of time on the YouTube side, but
that's a good segue for what will continue on the Daily Wire side because we've got another
half an hour there.
And so for all of you watching and listening, join us, join Marc and me on the Daily Wire side, because I would like to talk more about,
well, what you see could be done about this moving forward
with this new administration
and how you're feeling about that.
I mean, you made a decision, I guess, early in 2023,
like so many people, to pull away from the Democrats
and toward Trump, strange as that might be.
And I'd like to discuss that decision
and then what you see happening in Washington right now
and what you envision as a positive way forward
so that we can all rescue ourselves from this mess
before we make it much deeper than it already is.
So for everybody watching and listening,
join us on the Daily Wire side.
And Marc, thank you very much for talking to me today.
I hope we get a chance to meet in San Francisco
in relatively short order.
And I'm also looking forward to continuing our discussion
in a couple of minutes.
Join us everybody on the Daily Wire side.
Good, thank you, Jordan.
["The Star-Spangled Banner"]