Factually! with Adam Conover - Why Search Engines Aren’t Unbiased with Safiya Noble
Episode Date: January 22, 2020. UCLA Professor Safiya Noble joins Adam this week to discuss how racism is expressed through supposedly neutral algorithms, the real-life "Minority Report" technology that's being used today in our cities, and why we all need to stop implicitly trusting technology just because it uses a lot of math.
Transcript
You know, I got to confess, I have always been a sucker for Japanese treats.
I love going down to Little Tokyo, heading to a convenience store,
and grabbing all those brightly colored, fun-packaged boxes off of the shelf.
But you know what? I don't get the chance to go down there as often as I would like to.
And that is why I am so thrilled that Bokksu, a Japanese snack subscription box,
chose to sponsor this episode.
What's gotten me so excited about Bokksu is that these aren't just your run-of-the-mill grocery store finds.
Each box comes packed with 20 unique snacks that you can only find in Japan itself.
Plus, they throw in a handy guide filled with info about each snack and about Japanese culture.
And let me tell you something, you are going to need that guide because this box comes with a lot of snacks.
I just got this one today, direct from Bokksu, and look at all of these things.
We got some sort of seaweed snack here.
We've got a buttercream cookie. We've got a dolce. I don't, I'm going to have to read the
guide to figure out what this one is. It looks like some sort of sponge cake. Oh my gosh. This
one is, I think it's some kind of maybe fried banana chip. Let's try it out and see. Is that what it is? Nope, it's not banana. Maybe it's a cassava
potato chip. I should have read the guide. Ah, here they are. Iburigako smoky chips. Potato
chips made with rice flour, providing a lighter texture and satisfying crunch. Oh my gosh, this
is so much fun. You've got to get one of these for yourself. And get this: for the month of March,
Bokksu has a limited edition cherry blossom box, and 12-month subscribers get a free kimono-style robe.
And get this: while you're wearing your new duds, you'll be learning fascinating things
about your tasty snacks.
You can also rest assured that you have helped to support small family run businesses in
Japan because Bokksu works with 200 plus small makers to get their snacks delivered straight
to your door.
So if all of that sounds good, if you want a big box of delicious snacks like this for yourself,
use the code factually for $15 off your first order at Bokksu.com.
That's code factually for $15 off your first order on Bokksu.com. I don't know the way. I don't know what to think. I don't know what to say. Yeah, but that's alright. Yeah, that's okay. I don't know anything.
Hey everybody, welcome to Factually. I'm Adam Conover and, you know, we all trust technology far too much. We know that humans are fallible, screwed up animals, but somehow we've all come
to believe that once something's done with an algorithm or a satellite, or if math is involved,
then that means it must be accurate, neutral, and unquestionable. You know, we might not know
what machine learning actually is, but
hey, it's got machine right there in the name, right? So that means it must work. And blockchain,
I don't even know what it is, but that's not going to stop me from shoving all my money into it.
We've all come to believe that the mere presence of complex math assures accuracy and that
technology has an authority that we should respect, if not obey.
But the truth is that technology, at the end of the day, isn't just made by fallible humans. It's
also used by fallible humans, us. And the mistake, the foul, if you will, that we're all making
is that we trust technology far too much. Take fitness trackers like your Fitbit. Fitbits use sensors to calculate the
distance you travel, count your steps, the calories you burn, and even how well you sleep.
They take info about that activity and they spit out graphs and charts on your phone.
And those graphs and charts make it seem very authoritative and accurate, right? I mean, hell,
this thing said I ran 3.26 miles. Look at those decimal places.
But the fact that it's able to spit out a number like that doesn't actually mean the number is accurate.
A 2018 meta-analysis that looked at 67 different studies
found that Fitbit devices accurately counted steps only half of the time.
And other studies show that fitness tracker estimates for calories burned
are all over the place,
and that tracking your sleep could actually make your insomnia worse.
You know, I've even heard of Fitbit wearers who run a marathon and then complain to the organizer that the course is wrong because their Fitbit said it was a different distance. Uh, no, maybe your Fitbit just sucks ass.
People are literally trusting the $70 piece of plastic on their wrists over the professional race organizers who laid the thing out with measuring tape. It's ridiculous. Or take
GPSs. You know, I think we've all been in the car with a driver who's insisted on following the GPS
directions, even though they were clearly wrong. Well, sometimes this can lead to absurd and
dangerous results.
In 2008, in England, a woman following her satellite navigation ignored a number of road warnings and then drove her Mercedes straight into a river.
Don't worry, she was fine.
Her Mercedes, sadly though, passed on.
And this last year, following a mudslide in Colorado, Google Maps suggested a detour that
stranded dozens of drivers on a dirt road that
had turned to mud. The app could calculate a route, but apparently it couldn't check the weather.
And our overconfidence in our own technology can even be fatal. Take self-driving cars. Just
because a Tesla can stay in a lane on its own doesn't mean it can necessarily stop on its own.
At least two drivers using
Tesla's autopilot feature have crashed into trucks obstructing the road in Florida and died. And at
least some observers of the industry believe that part of the problem in those cases is that the
drivers were led, whether by Tesla's marketing or their own wishful thinking, to trust the autopilot
feature too much and to believe that it had more self-driving
ability than it actually did. The truth is that as clean and clinical as computer code is,
using it does not make real human problems any less messy and complicated. The residue of our
dysfunction and our biases does not disappear when it gets transferred into code. Those biases just get code washed,
expressed through a technological system
rather than a social one.
And this can have shocking
and sometimes quite disastrous results.
Our guest today is an expert on how tech
and Google specifically reinforces those biases.
Safiya Noble is an associate professor
at the University of California, Los Angeles,
and the author of Algorithms of Oppression, How Search Engines Reinforce Racism.
Before we get to that interview, though, I just want to remind you of an upcoming tour date.
On January 30th, I will be doing an hour of comedy at the Irvine Improv in California.
If you live in SoCal and you don't like driving to LA, I hope you come check it out.
That's January 30th.
And now, here's my interview with Safiya Noble.
Safiya, thank you so much for being here.
Thanks for having me.
So you're a professor at UCLA.
I am.
You're a critic of the tech industry.
I am.
And you study what exactly?
I study the way that racism and sexism are deeply embedded in digital technologies,
especially search engines.
But I look at the way in which other things like artificial intelligence, algorithms,
broadly, these kinds of new technologies, predictive technologies that we use are harming vulnerable communities.
So let's unpack that first one, because I'm sure some people listening will say
racism in a search engine? What is this, some kind of projecting that you're doing? That's like saying
algebra is racist, right? It's like, well, this is just a neutral mathematical process.
How could there be racism? This is, we've got the, oh, it's infecting academia now.
We hear that whole, I hear that guy yelling in my head, right?
Right.
And so unpack that idea a little bit for us.
You got it.
All right.
So for sure, there are a lot of people who think that digital technologies or anything
that's driven by code is purely math.
And, you know,
let's just upfront say that's like reducing human beings to just cells. It's really, you know,
yeah, that's one way you can cut in on something like computer code and software and what software engineers do. It is math, yeah. Yeah, it is math, but that's not the point. The point is the way
that a whole human being system works
and engages with its environment. And when you look at something like a search engine,
search engines are really, quite frankly, advertising platforms, full stop. So if it's
an advertising platform, then you know that the number one imperative in that platform is money.
Yeah.
And money is going to determine and shape all kinds of things,
like what shows up on the first page of a search result.
So the first thing I try to do in my work is just explain to people that
when you go do a Google search, you're not doing fact checking.
You're not going to some type of credible arbiter that's gone through all the
paces to figure out if what's showing up at the top is the most credible or reliable. You're
looking at content that's been deeply optimized to the benefit, the profit imperative of Google.
That's the first thing you've got to get. Yeah. And to broaden that out a little bit, like, okay, it is math at the root, right?
But it's math that's been created by people and sculpted by people and an organization to serve a particular purpose.
Absolutely. To make certain kinds of things more viable.
So when I talk about racism and sexism being prevalent in search engines,
what I'm talking about is that racism and sexism are actually super profitable in our country and around the world.
So when I started my study several years ago, almost a decade ago now, I was looking at things like, what happens when you do searches on different kinds of women and girls' identities?
And it was interesting how overwhelmingly, when you did any kind of search on the keyword girls, especially when you put an ethnic identity marker with it,
black girls, Latina girls, Asian girls, it was almost exclusively pornography and hyper-sexualized
content that you got back. Why? Because porn sells and sex sells. And no one had any particular
regard for the fact that maybe Asian girls and black girls and Latina girls didn't want to be
represented by pornography. Or that they were using the search engine. Right. I mean, how dare
they? How dare they? Like, oh, I'd like to, like, who wants to meet, maybe, maybe a Latina girl is
like, wants to meet some Latina girls. Maybe I want to see some pictures. Yeah. Like what's happening with black girls hairstyles
right now. Whoa, that's not it. That's not it. Yeah. I see that. I see that point. That's how
you, is that how you came to the work? Yeah. That's how I came to the work. I had spent 15 years in advertising and marketing.
Really?
Yeah.
So I knew a lot about advertising.
You know the business.
I knew the business,
both from the client side and the ad agency side.
And as I was leaving the ad industry,
we were just starting to hire programmers
to come in and get this straight up,
help us game the system
to get our clients' products and services
to the first page of search engines.
Right.
Right?
I mean, we knew what the deal was, that we were writing copy in the public relations departments to figure out how it could look kind of like advertorial, right?
It's search engine optimization.
Yeah, straight up.
It's the oldest trick in the book, yeah.
Exactly.
I mean, this was like the early days of that.
So it was shocking to me when I left the industry and I went back to grad school, mostly to
atone for my sins of having worked in advertising.
It's fine.
Leave it.
That's when I saw how many people in academia and the public were thinking of this new company,
Google, as like the people's public library.
Yeah.
That's how they presented themselves.
Totally.
They're like, we're, you know, like the people's information portal.
We're going to make all human knowledge accessible was their mission statement, something like
that.
That's right.
Organizing all the world's knowledge and like in parentheses in English.
So then, you know, that's like weird, right?
Cause that's actually not the whole world's knowledge, but you know, it's fine.
Right.
If you want to just be specific.
It's a good point that it's like, it's almost like the SAT.
Like people treat the SAT like, or the illusion of the SAT is like, oh, this is completely neutral.
And it's just a way of evaluating everyone as a blank slate.
But in reality, like the rich kids are paying to get test prep for the SAT.
And so they're able to game the system.
That's exactly, that's one example of like what porn companies are able to do.
Or any company with money is able to like pay money to get themselves well positioned on Google.
And then sometimes Google pushes back and says, okay, we're not going to allow that kind of SEO anymore.
And it's like a cat and mouse game.
But like, that's like a huge part of Google. So let me,
let me try to guess what your argument is. Is that okay? Or to distill it, is that like, all right,
if once you're looking at Google that way, it's a search engine with an algorithm that's made by
people in order to advance Google's business interests. Hey, why is YouTube always on the
top? Cause Google owns YouTube. Vimeo is never as highly ranked. So maybe something's going on there. Also, there's a whole advertising
system. And then there's a whole system of people who are paying money in order to get their content
on Google. So now we're not talking about math anymore. Now we're talking about a giant human
system made up of people with competing values and incentives and abilities and their money and all that going together
sounds a lot like the rest of human society, in which, you know, there exists racism and sexism
and other forms of discrimination. And so, of course, those are going to exist
in the whole edifice of what we're now calling Google.
Yeah, it's really true. That's exactly what's happening. But here's the kicker.
When you start taking racism and sexism into something like a search engine or a platform like YouTube or other digital media platforms and social media, Facebook and so forth, now you're amplifying that racism to scale, to global scale.
So see, in the old days, let's say, of regular racism. The good old days of racism.
In the good old days of regular racism.
Old fashioned.
You were just an asshole and I couldn't stand you
and maybe you blocked me from getting a job.
Artisanal racism.
Just a homemade.
You had to do it person by person.
It was a slow movement of racism, right?
You'd be racist to your face.
You'd have an interaction with a racist.
I'm sorry.
You know, and then you could also take that racist to court.
Yeah.
If they blocked you from a job or they denied you admissions to a university based on your skin color, all these things, right?
There were many mechanisms, some of them more effective than others, not all foolproof.
But now when you start talking about amplifying racism and sexism and homophobia at scale,
guess what?
You can't take that algorithm to court.
You can't, what do you do when the algorithm is optimizing for the call for genocide of your people like the Rohingya, right?
Which is what's happened in Facebook.
So what are you doing now?
The UN said that Facebook had a role in the Rohingya genocide in Myanmar.
Go look that up if you missed the articles about that about a year ago.
But dangerous misinformation was being spread about that minority on Facebook, and it resulted in the deaths of many, many people.
That's right. That's right. So this is the kind of thing that I'm concerned with is
how does this amplification through these algorithmic practices harm people? And
what are we going to do in terms of our lack of public policy, not just in
the United States, but really globally to push back on these companies who basically
can spread disinformation, harmful propaganda without consequence.
Right.
And I think about it this way.
You know, if three guys rented some space in a strip mall and started building their chemical operation and they cooked up some drugs, let's say, and then packaged it up neatly and rolled it out at CVS and Walgreens on the weekend, that would never go down.
Yeah. But here we have three guys who can rent some space
and become software engineers
and roll out predictive analytics
and sell it to cities and states and governments.
And next thing you know,
it's a racial profiling system
that is sold to all kinds of people.
That's not-
Huge implications.
That's not hypothetical.
Startup culture, there are startups that are selling algorithmic tools to law enforcement in
cities around the country.
You bet.
Palantir and companies like that.
Palantir.
You know, I'm thinking about COMPAS and their recidivism prediction software that they sold
in Florida.
And, you know, Jeff Larson and Julia
Angwin did this amazing work for ProPublica a couple of years ago about how these criminal
sentencing softwares, in fact, were racially profiling and predicting African-Americans to
be four times more likely to be a threat to society, right. And you're talking about letting out white mass murderers,
people who are violent offenders, and then locking up, you know, young teenage African-American girl
because she borrowed the bike of the neighbor without permission. You see what I'm saying?
She should go to prison. And the dude who's like, I'm fully committed to violent crime for the rest
of my life. This is in the ProPublica piece? Yeah, this is in the ProPublica piece.
So, you know, when they did their research on what was happening with that software, it was amazing.
They found it was a few guys in a strip mall who'd worked on this software.
And when they brought all the evidence and they were like, hey, see what you do here?
Let us show you the 15,000 cases we went through
and how your software is actually racially profiling.
They were in total denial because what?
It's just math.
And then they finally had to come to terms with the results.
And I'm sitting here thinking,
see, we would never let the chemists roll out some drugs.
Right. Hey, it's just chemicals.
It's just chemicals. It's just chemistry, yo.
What's wrong with you?
No, but you put the chemicals together, so you're responsible for the results of them.
In a way that's killing people, right?
Or harming people or hospitalizing people or causing all of this harm.
And it's interesting to me how the tech sector evades all kinds of oversight in that way
for their products in ways that other industries, you know, we would never let them get
away with that. Yeah. And now they're getting to, so we're starting to have a moment where the
oversight is coming, right? California just passed a big data privacy law. You can imagine that
if not this administration, the next administration will start to see, you know,
maybe we'll see a federal law, right? For the first time, maybe we'll see America's GDPR
kind of law. But at the same time, you're starting to see companies like Facebook and
Google get ahead of it and say, you know, we want regulation and here's what the regulation should
be. That's right. Setting the terms. I mean, it's like the fox guarding the hen house. It reminds
me of when the fossil fuel industry was like, we got you guys, we got it. We're going to set the regulatory framework for fossil fuels.
But see, now there are parts of the world that are uninhabitable from global warming.
So I think we have a lot at stake here,
including the fact that when we think about a whole host of other social problems that are
happening, these platforms are implicated in the spread of disinformation, spreading disinformation about
climate change, about vaccinations, all kinds of things that have, again, real world consequences
for people. And these are the kinds of things that we need. We need serious money to research.
We need support. Those of us who do this kind of work, I'll tell you, there's only a handful of people.
The majority of people who study tech are enamored with it.
They're in love with it.
They want to talk about how amazing the possibilities of these predictive analytics are.
And, you know, I think about the things. I've got an eight-year-old son,
and I look at all the kind of new systems that are coming on board in K through 12 education,
right, where they're tracking kids, tracking every little dimension of how they learn.
And a lot of these technologies, you know, the foregone conclusion that we can see is that these will be tracking systems that tell us,
let's see who's worthy to go to college or not. Who has the capacity to do certain types of jobs?
Right.
It's like over-determining what the future will be and what your future will be.
It's the job placement test on steroids. It's the job placement test being run by Mark Zuckerberg,
where now there's an algorithm
that you don't have access to that's designed by someone in Silicon Valley and sold to your
school district that's been tracking you for your entire school career. That's terrifying.
It is terrifying. I mean, listen, all of us Gen Xers would seriously be unemployed
if anyone were tracking us through the 80s and 90s. I'm 100% sure about that. I mean, talk about the SAT,
which is already an incredibly flawed metric.
And, you know, but it had that promise of,
well, here's a number, right?
You're looking at all of these.
Who's the real audience of the SAT, right?
It's the admissions people who are looking at 10,000 applicants
and they get a number they can look at.
And if they want to, they can just filter by the number.
Yes, to screen out, not to screen in. I mean, it's so interesting because here I, okay,
I teach at UCLA. So I have all of the valedictorians of the United States for the
most part, right, who are applying. UCLA has the largest freshman kind of applicant pool
in the United States. I think we get about 100,000 first year applications,
okay, for freshman year. So, you know, we're not taking, we're taking, you know, a handful.
Yeah. Do you know how big the freshman class is?
I think it's between 4,000 and 6,000. It's not big.
That's extremely selective for a public university.
Extremely selective. UCLA is now the number one public research university in the rankings.
We beat Berkeley and everyone who went to Cal right now hates my guts that I had to say it.
But it's real.
It happened.
It's just live with it.
All right.
So here we are.
We're at UCLA.
When you look at the history of admissions, and I love that you brought this up because,
you know, in the old days, people just applied and went to college.
This is before we had even the UC system.
This is from your part of the world on the East Coast, right? The Ivy League. It's like the sons and daughters,
mostly the sons of the elite. That's who got to go to college, right? Then the country was growing
and more and more people were interested in higher education. And so you started to see these new kinds of ways of
filtering out and filtering in. So you have the rise of things like the letter of recommendation.
A lot of people don't know the history of the letter of recommendation was really about seeing
what network you were in. What was your social network? And did you have the right people,
quote, unquote, writing for you? This was actually a discriminatory practice to keep Jewish people
out of the university. If you look at the history, that was one of the reasons because too many
Jewish people were applying to universities, and universities wanted to figure out how to screen them out,
so they get the letter of recommendation. Okay. So then fast forward.
But if someone from the, someone who's got a season subscription to the Metropolitan Opera
writes you a letter of recommendation, then you're in society, right?
Solid gold. Exactly. So here you have, or your congressman, right? Or your state senator
wrote for you, you're in the mix. All right. So you fast forward to something like the admissions,
the SAT, the ACT, these kinds of tests, again, these become, you know, a belief that there's some type of metric that we can implement.
And if you don't score at the right level, you're out.
And of course, if you look, I mean, look, I'm from the 80s.
And I remember the like big pushback on the SATs in the 80s.
It was about the way in which they were racially biased.
Yeah.
Right. There's just certain words that don't get used in some households in America.
In some schools.
Yeah, in some schools, in some cultures, in some communities. I was on a phone call last night with
a couple of other professors, and one of them said, yeah, we just got to take it bird by bird.
And I started laughing. And then my colleague was like, yeah, bird by bird, you know, that's it.
I've never heard this phrase before.
What the fuck is bird by bird? You know what I'm saying? Like, that's not even a phrase that I
would ever roll up in my neighborhood or in my community or my family. That's not ours.
Yeah.
Okay. So, you know, what if bird by bird is on the SAT? You see what I'm saying? It's like a
framing of a certain swath
of middle-class, upper middle-class white America. The analogy test is entirely, yes,
very much like that. And especially the, it's also dependent so much on the kind of education,
the idea that it's some sort of IQ test is, is ludicrous because the, you know, there's this
long essay portion. Actually, I don't know how they do it now. When I took it, there was a long essay portion, et cetera. It was a very, and also I was a bad
student. I just tested well. You know, I did, I did good on it, but I'm like, I was good at video
games and it helped me on the SAT because I was like good at solving puzzles. It was like, it's
such a weird thing that's so specific, right? Yes. And has so many clear societal cultural factors in it that like result
in bias against some populations versus others. And that's, we thought about covering that on my
show, Adam Ruins Everything, and decided not to, 'cause we were like, everybody knows this already.
Like this is like too well covered. Well, now we're here, we've done it. Yeah. I mean,
you know, the logics are really similar and I think it's a good metaphor, you know, like to frame up what predictive analytics
will be like and are like right now.
So, you know, the way predictive analytics work, for example, in predictive policing right
here in LA and other cities (and LA is a big test market for predictive policing) is,
there's an algorithm that's going to tell the police, hey...
It's like Minority Report. Hey, a crime is going to be committed at Third.
That's exactly right.
Wow. Okay. So what do they do?
They take all of the historical police data for the city and they look to see
like, where have the majority of arrests taken place in LA.
And they use that as the baseline data that they put into the algorithm to predict where
future crime will happen. So let's say you live in a neighborhood that's over-policed historically.
It's hard to imagine. You know already who that is. It's the African-Americans, Latinos, the poor people. So police have been basically occupying your neighborhoods for 50 years.
Yes. Right. Let's we could go back to the origin story of policing emanating out of slave catching.
Oh, yeah. I feel like nobody is really ready for that kind of truth. But look, I think we can all agree:
The LAPD has a racist history.
Also, Rodney King.
It's a matter of record. It's true.
You don't need to prove this to us today, Safiya.
Okay.
I know it.
So that's the base.
All right.
So that's our baseline.
So if you take that data and you make that your baseline for then figuring out where
will future crime happen?
Well, guess what?
It's going to happen in all the places where you've over-arrested people in the past.
I mean, this is like just predicting the past directly into the future. Garbage in, garbage out. Racism in, racism out. It's like computer science 101 shit.
I know. And yet weirdly, people think that, you know, the data, in quotey fingers, is neutral, that it's unbiased.
And even now you have people who are willing, I mean, you know, a lot of people don't want to use the words I use.
I talk about these algorithms being oppressive, that they foment and bolster oppression.
You know, some people want to deal with like oppression light. So they want to
say things like, well, the data is biased, but we can unbias the data. That's like saying we can
unbias history. I mean, history has happened and that is the data. The data is what has happened.
So you can't unbias that, unless you want a redo of the last 350 years.
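(For readers who want to see the feedback loop Noble is describing in concrete terms, here is a minimal, purely hypothetical sketch in Python. The neighborhood names, arrest counts, and scoring rule are all invented for illustration; this is not any vendor's actual system. It only shows what happens when "predicted risk" is nothing more than a ranking of historical arrest counts.)

```python
# Hypothetical illustration only: made-up neighborhoods, made-up counts, and a
# deliberately naive scoring rule. Not any real predictive policing product.

historical_arrests = {
    "Neighborhood A": 900,   # heavily patrolled for decades
    "Neighborhood B": 850,
    "Neighborhood C": 120,
    "Neighborhood D": 95,
}

def predicted_hotspots(arrest_counts, top_n=2):
    """'Predict' future crime by ranking places by past recorded arrests."""
    return sorted(arrest_counts, key=arrest_counts.get, reverse=True)[:top_n]

def one_more_year(arrest_counts, patrolled):
    """More patrols in a place produce more recorded arrests there,
    regardless of the underlying crime rate."""
    return {
        place: count + (200 if place in patrolled else 10)
        for place, count in arrest_counts.items()
    }

data = dict(historical_arrests)
for year in range(1, 4):
    hotspots = predicted_hotspots(data)
    print(f"Year {year}: send patrols to {hotspots}")
    data = one_more_year(data, hotspots)
# Output: the same two historically over-arrested neighborhoods are flagged
# every single year. The past is simply replayed as the future.
```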
So what we have to talk about is like,
what is this society that we want?
How do we optimize toward that?
And that's where you get into a lot of competing values because there are some people in Silicon Valley
who want to colonize Mars and just get out.
You know what I mean?
I mean, that's one set of weird values.
You know, there are other people who believe strongly in this so-called meritocracy,
right? Because they've got the good SAT scores and they went to the elite colleges and universities
and they think they actually just earned their way in, right? On their own merit.
I don't understand people, because I got good SAT scores and I went to an elite college.
And here you are, now you get to sit with me.
So you know you're not, you bombed out.
Well, no, because that experience I know is bullshit.
Like I'm not, I don't, I know I didn't work particularly hard.
I was like an okay student.
You know, I'm very lucky to have the opportunities that I did.
Right.
And I, and you know, there's like, I'll give myself credit where credit's due, but I'm not sitting around going like, you know, I saw a dude in a, I used to work at a tech company.
CollegeHumor used to be part of a tech company.
And I remember walking through the parking lot and seeing a guy had like a, it was like a Maserati or like a fancy car like that.
And there's a very, there was a tech company that was really popping on another floor of the building.
I won't say which one.
No, I will.
It was Tinder.
And so Tinder shared the building.
Definitely popping.
And there was a guy, this was like 2015 or something.
Tinder was huge.
And there was a, there was a car in the parking lot.
Don't know whose, but the license plate was self-made.
S-L-F-M-A-D-E.
And I was just like, no, you're fucking not dude. Like I know everyone who
works for that company. You know, I know, I know where, what advantages you guys started with the
same ones that I started with, you know, that's why we're all in this building together. Right.
Um, and I don't, I have trouble understanding that, that mentality, like give yourself credit
where credit is due. Absolutely. You know, like I'm like, all right, the guy who also started
with my advantages, I did a little better than him. Cool. I'll, I'll give myself credit for like,
I outworked that guy by 10%, you know, or whatever. But you know, the idea of, of, you know,
every, every single thing I have is from the sweat of my brow is so transparently not true.
This is off topic, but I was just a little, a little rant from me.
No, I love it.
I mean, I feel like actually the license plate
should say hooked up.
Do you see what I'm saying?
Not just because it's Tinder, but also.
I mean, this is so real because the, you know,
the alleged meritocracy and the fraternity,
quite frankly, that is Silicon Valley,
where, you know, they hire from the same top five
engineering schools in the country and they hook up all their friends and they use phrases like
culture fit. Yeah. Right. As a justification for who's in and who's out. Yeah. You know,
culture fit means, you know, you're one of the homies. Yeah. Right. I mean, that's what that means. It's like, I could kick it with you all day, all week. Yeah. I could have fun with you. Hey, we'll have a beer in the afternoon. We'll play Smash Brothers, et cetera. And it seems like I can hang. Oh,
we're simpatico.
I like you.
We hit it off in the interview.
You're not some lady who has to go breastfeed.
You know,
you're not like,
you don't remind me of my mom.
She's not playing Smash Brothers.
She's always spending her time in that weird room.
Yeah.
What is she doing?
What is she holding?
We don't even know what her main is.
Like, is she a Kirby main or what?
We don't know.
She's always in that room.
I'm sorry.
You know,
I have bronchitis, and when I start laughing, I get a bronchitis cough.
Listen, this is what I'm saying.
This is one of the reasons why you don't see diversity in Silicon Valley.
You don't see old people.
I dare you to show me a person in Silicon Valley in a major tech company who's like 50 pounds or more overweight. I mean, it's like,
you got to be a marathon runner or something. I don't know when I go, I mean, I feel like I'm like
a normal sized woman, whatever that is. I'm like average woman. And I go and I'm like, I would be
here for a week and, and probably start trying to cultivate anorexia. Like, I don't even know
because there's like a certain profile. Maybe you have to live on
Soylent. I don't know. Like, there's a whole thing about a way of being in the world that's like
hyper-optimized in every way. And, you know, that optimization is for young people. It's for
young white and Asian people. It is for men. It is for people who are
really fit. So, you know, it is people who went to good schools in quotey fingers, whatever that
means. So, you know, if that's what we're optimizing for, because we think that those
are the most valuable people in our society and that that's the technocracy who should
make the decisions for all the rest of us.
It's a really narrow window.
And I'm-
Those aren't necessarily bad values in themselves to like,
you know, meditate and exercise a lot if those are things that you want to do.
I do some of those things.
But those are values that not everyone-
Yeah, that's right.
That's my worry: that, you know, there's a narrow framework of what is quote-unquote good in the world, and that framework doesn't allow for the breadth of our humanity. I went to a state school. I came from a working poor family. You know, I didn't get tracked into
top schools. I worked really hard to eventually go to the University of Illinois at Urbana-Champaign
when I was in my 40s and I got a PhD. And now I get to work at an amazing place like UCLA with
some of the most brilliant people in the world and some, you know,
that are cool. They're okay. And, you know, I don't, I know that it was going, for example,
to a place like Illinois that gave me, I went to the top ranked place in my field in information
science. And that's how I got a job at a place like UCLA. So I'm super clear.
I'm also a black woman. So I really understand like where I kind of got disregarded as a young
person and nobody saw a potential or a possibility in me to be smart, to be a contributor. I remember
my high school counselor saying, the best you'll ever do is trade school or the military. College is not for you. And I was a person who was dying to be an intellectual
and dying to be a researcher. And I was so curious, just had a curious spirit. So, you know,
for a person like me, I know it's not self-made. I know somebody gave me an opportunity and I got a chance to
be my best self. And you can't optimize for that, right? I was already optimized out of
the opportunity or the possibility as a young person. So this is the part that's so interesting
to me when we think about predictive analytics is like, is there space to just, you know, make mistakes?
What about that one year that you bottom out because, you know, your grandma dies or your
parents die or you got to move or you get evicted or whatever. And now you've destroyed your own
personal algorithm for success. And it neglects the idea that, you know, you're saying that,
yeah, you worked hard and you give yourself credit for that, but also you're aware of the advantage
and the chance that you were given
and the help that someone else gave you
that maybe a lot of folks in Silicon Valley
are not as aware of.
And if you're doing an algorithm,
because we were talking about the SAT,
the specter of the hyper version of the SAT
is you're being tracked by your school throughout
your entire life. And then you don't even need to take a test, right? They just have a big database
and the database says, we send these people to college and don't send these people. The end,
and no one even thinks about it, right? And all it does is evaluate those kids and say,
do those kids deserve, right? Are they self-made? Did they put the work in themselves?
And it doesn't ask, well, who is going to benefit from getting an opportunity that they might not
have otherwise had? The thing for me is like, I was not a good student. I was a B student,
right? And I applied to a small liberal arts college. I went to Bard College
and they didn't take SAT scores. And my GPA was not very good, right?
I got rejected from every other
like sort of liberal arts school I applied to.
But like, I wrote a good essay, you know?
And they were like, they just had this approach.
It was, you know, it was 15 years ago.
So they had a little more leeway to do it.
But no, actually it was 20 years ago.
They had less applicants then, but they looked at,
they were like, oh, we like this actual person. You know what I mean?
There's something in this actual person that we are going to invest into.
And because they invested in me,
like my life flowered after I went to college, like, like,
like intellectually and emotionally. And I was just,
I just became a different person. And I had other friends,
I had friends in high school who didn't have that chance, you know, and, um, and didn't have that same experience.
And I, and I was really aware that the, that something had been given to me that hadn't
been given to them.
Um, and, uh, and that was just getting into the school.
Like we were also able to afford it, you know. So, um, you know, that's just my own experience with, uh, you know, advantage and
disadvantage, et cetera, when I'm already a person who's very lucky in every other aspect of my life.
Imagine that for a kid who doesn't have that long list of advantages I already had.
And listen, let me just say, I love that you brought up the point about we could afford it,
because if we think that the predictive analytics of the future aren't going to look at
the feasibility of your ability to pay for college or pay for the training, right, or the
professional skill building, then, you know, we're crazy. Because colleges and universities want to
admit people who are going to finish. They're not interested in bringing in people who drop out.
They're really deeply invested in people finishing.
And part of that is financial.
So I think about my son and the way in which
the financial picture of my husband and me
is going to play a role
in whether he looks like a good quotey-fingers candidate
for an opportunity. So it's like all of these factors. I mean, if they had looked at my own,
my parents' financial background, they would be like, I feel like this is a person who has about
$5 and we should definitely not let her in. And I was hustling all the time to get through
Fresno State. I mean, bartending, you know, working retail. I worked at the mall. I did like
everything to be able to pay my tuition so I could go to school. And that wouldn't have really looked
right in the algorithm. Do you know what I'm saying? I couldn't have put my hustle into that analytic.
And, you know, I think a lot of people are like that. So the whole point of the critique
is to say, we need more human agency, more human decision-making, not kind of turning it over to
some type of static predictive analytic that uses some type of baseline that probably isn't your baseline, right?
Some type of standard that just decides.
And then we stand back and say like, oh, well, you know, the algorithm said.
And it's like the Apple card, you know, that just happened.
It was in the news like a few weeks ago where the guy who's the inventor of Ruby on Rails, which is a programming language,
got the new Apple card and he got 20 times the credit limit of his wife, even though his wife
had better credit and they, like, share all their financials. And, you know,
somebody on NPR interviewed me about that and she's like, you know, what do you think that is?
And I was like, well, you know, one generation ago, my mom's generation, she couldn't open a banking account without her husband or her father signing on it.
So the baseline of women's financial data is actually real thin over time.
You see what I'm saying?
Their credit and their finances were always tied to men being the primary kind of, like, financial resource.
Yeah.
So yeah, of course, if that
data is feeding an algorithm that the banking industry is using, it's going to give the men
20 times the credit. I mean, this is not even rocket science to me when we try to unpack what
we think is happening. And the fact that the algorithms, of course, are all black boxed and
all we can do is try to deduce from all the evidence that's mounting in the world, you know, makes it even tougher.
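(Again, a toy sketch to make the mechanism concrete. This is not Apple's, Goldman Sachs's, or any bank's actual model; the fields, weights, and numbers are invented. It only shows that when the length and depth of a credit history drives the decision, an applicant whose file is thin for historical reasons gets a lower limit even with a better repayment record.)

```python
# Hypothetical scoring rule: all fields and weights are invented for illustration.
# The point is only that history-heavy inputs carry historical bias forward.

from dataclasses import dataclass

@dataclass
class Applicant:
    years_of_credit_history: int   # thin for people long excluded from credit
    accounts_ever_held: int
    on_time_payment_rate: float    # actual repayment behavior (0.0 to 1.0)

def credit_limit(a: Applicant) -> int:
    history_score = a.years_of_credit_history * 2 + a.accounts_ever_held
    behavior_score = a.on_time_payment_rate * 10
    return round((history_score + behavior_score) * 500)

long_file = Applicant(years_of_credit_history=25, accounts_ever_held=12,
                      on_time_payment_rate=0.97)
thin_file = Applicant(years_of_credit_history=8, accounts_ever_held=3,
                      on_time_payment_rate=0.99)

print(credit_limit(long_file))  # 35850
print(credit_limit(thin_file))  # 14450
# The thin-file applicant gets far less credit despite the better payment
# record, because the model inherits the asymmetry baked into its inputs.
```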
Well, we got to take a quick break, but when we come back, I want to talk about
more specifically, we've been talking about these broader issues of inequity in tech and the world,
frankly, you have to describe discrimination in the world before we even get to it in algorithms.
But I want to get into more of the algorithm specifically when we come right back with more Safiya Noble.
Okay, we're back with Safiya Noble. So I want to talk about some specific examples about technological or algorithmic systems exhibiting discrimination in this way.
So, for instance, search engines use that example of black girls.
My understanding is Google has sort of like updated their algorithm in the years since.
And like you don't see exactly that problem anymore.
Although I tried it with DuckDuckGo and I did see, you know, there's porn in the first 10, you know.
DuckDuckGo is a simpler search engine that I very much like because it doesn't track you.
But, you know, it's like got a different algorithm.
Yes.
So are there other examples that people can go try themselves today in Google to see what you're talking about?
Well, sure.
I mean, think about election results in Google: the very first hit took you to
a disinformation site that said Donald Trump had won the popular vote, which is not an alternate
fact. It's not a fact, right? That's just straight up disinformation. So one of the things I think
people are going to really need to pay attention to is what type of content is getting optimized to the first page, really making sure they're looking at those URLs, seeing if these are credible news institutions and kind of fact-based organizations rather than, you know, people's blogs or these kinds of disinformation news sites. That'll be a really powerful place that people should be watching
as they're searching on different candidates, especially people who are challenging the
incumbent, Donald Trump. I think that we'll see a lot of disinformation that's going to be flourishing once the Democrats net out on their candidate. So that's a thing. Certainly, I think people are
still, you know, people send me searches all the time that they're looking at. So for example,
people in the book, I talk about what it means to look on various kinds of concepts. So concepts like beauty or beautiful, you know, those will often give you like a certain type of beauty standard that comes back.
You do an image search for it.
Image searches.
Yeah.
Image searches are really powerful because they tell you a lot about how the culture is defining ideas.
You know, so you can look up various occupations. I think that's always powerful. When you look at professors, for example,
you're not going to see women who look like me.
But let me ask you this. Okay. So if the argument is, as we've laid out, that, hey,
we've got racism in society, Society builds the algorithms, right?
The algorithms are built by these systems
where all right, you know, the advantaged folks
who, you know, maybe have that blind spot
are building the algorithm.
They build it right in, you know,
and then we see it in our lives, right?
Or, hey, yeah, there's, you know,
we've got racists on Twitter, right?
Well, what about when someone says,
well, hey, yeah, of course there's racists on Twitter.
There's racists in the real world.
You know, of course there's,
like the thing that you're talking about,
if you do a Google image search for professor
and you see a bunch of white guys in tweed jackets, right?
Well, that didn't, Google didn't create that, right?
Google image search didn't create that.
So why the focus on these companies and their products rather than, hey, why don't we go
like root out the racists?
Why don't we go change those ideas?
And then the search engine will change as well.
Why don't we, you know, get into it with the racists on Twitter and like shout them down
or, you know what I mean?
Yeah, I know what you mean.
I mean, here's the thing.
If the public looked at something like a search engine
and Google search and its products like,
hey, this is just reflecting the biases of society
or we know this is an advertising platform.
So people are going to get what's optimized for.
That would be one thing.
But what the research shows is that people relate
to search engines like they are credible, vetted, and reflective of the so-called truth.
So that's actually really different than, it's a different kind of moral and ethical
stance and standard that companies have to have if the public thinks of them as being vetted,
credible, fair in their representation of ideas and information and knowledge.
But unfortunately, that's not what they are. So I guess, you know, some of us think that
part of the way that we do our work is to help the public understand what the platforms are, rather than
saying that the platforms are just a reflection of society. I got to tell you, you know, I travel
all over the world giving talks around my book and my research. And I don't think that society
and the majority of people hold with such intensity, the kind of like gross stereotypes
that get represented in some of these platforms. Do you know, I think people are more nuanced
and I think there's something to be said about thinking of Google and other companies as like
the public good or serving the public interest
rather than thinking about them as what they are
kind of multinational
publicly traded companies
whose first priority is to
raise the stock value
and pay dividends to their
shareholders. That's actually their
first priority. So one of the things that I think
could be... They're not Wikipedia.
Wikipedia is at least... I'm sure Wikipedia has got its own problems.
You know it does. That's a different show.
Of course it does. But at the very least, it's a nonprofit.
Correct.
With volunteers, you know?
And we know that about Wikipedia, right? We know that it's like kind of subjective.
We know that it's crowdsourcing from a lot of different people. It still has gatekeepers who
are mostly men and mostly white.
But it also has articles about its own systemic biases right there on the site. You can go click on the problems with Wikipedia and you'll see it.
That's right.
That kind of transparency doesn't exist in something like a search engine.
Yeah.
The search engine, even the design of it, you know, I just try to explain to my students,
the fact that you have a rank order from one to millions in the cultural context of the West.
I mean, nobody puts a foam finger up at a game that's like, we're number 1,395,000.
I mean, we're number one.
So the idea that in a rank order, one through 10 are the top and should be trusted compared to the rest,
that itself is a kind of logic.
It seems definitive.
It's definitive.
I was talking to, when I was talking to our researcher, Sam, before we were preparing for
this interview, I compared it to like, you know, Pitchfork, the music review website,
and they give all the albums, they add a decimal point. They're like, we give it a 7.6, right?
And like at the end of the day, look, if you get a seven on Pitchfork, it's a good album.
If you get an eight, it's a great album.
In the opinion of the site, if you get anything lower than seven, it's a shitty album.
That's how you actually read Pitchfork reviews.
But they all say like 7.9.
Like it's got this like point in there.
It's like, what's the point of that?
What is the purpose of it?
To like communicate, oh, this is exact.
Like this is precise.
We've added a decimal point.
So you should really believe it
because we're the fucking experts, you know?
Yeah.
And-
It's not a regular seven.
Yeah, it's not a regular seven.
Yeah, we give it a seven.
Not like a B, right?
A B, which Entertainment Weekly uses,
that's like, I give it a B.
I liked it.
It was good.
That's my opinion.
Seven is like
definitive. Right. And that was like the whole, and that like gate that literally that gave
Pitchfork a lot of power. Like people started to take the rankings really seriously in music
to like an absurd degree where they like destroyed careers and stuff like that because of like the
number that they were giving people. And so that pledge of accuracy, or I talked in the intro about how a Fitbit will tell
you that you ran 3.12 miles, right? And that makes it seem exact. Unless you're like fist pumping.
Yeah, exactly. Because that really adds steps. Or if you run through a tunnel, well, that's GPS, if you're using a GPS watch. Not all Fitbits are GPS, but right. Like, depending on what part of your body you put it on, et cetera.
But that decimal point gives the illusion of exactness
and it makes you believe, well, it must be,
hey, it's the second decimal point.
Yeah, it's science, right?
I mean, this is the problem.
It's kind of like junk science, pseudoscience,
this type of trust in that.
And I think it also confuses and muddies the water
around what truth and facts
are. And that's the thing that really worries me about and why I study search engines in particular
is because they work like an arbiter of truth. You know, I come out of a field of library
information science. Okay, that's what my PhD is in. And
in the library science world, we are really very, very clear that the physical library can only hold
so much in a collection. We're clear about the limits. We know that there's a lot of subjectivity
that goes into what is kept and what isn't kept. And over time, like thousands of years, we have a whole training and a professional code of ethics that we teach librarians about things like fair representation.
We have something like a book like the Pernkopf Atlas.
I don't know if you know about this, but the Pernkopf Atlas used to be considered in the last century the definitive kind of illustrated medical book of the human body.
It had these beautiful drawings of the human body and the musculature.
And people in med school would use this because it was their best way of understanding the human body. And then we came to understand that the Pernkopf Atlas was illustrated by looking at the corpses of people who had been murdered during the Holocaust. Okay? So we can take a book like that, and we can put
it in the history of the Holocaust at the same time that we can put it in the medical illustrated literature. And we
have a subjective way of understanding it in a more complex and robust way. You cannot do that
in a search engine. You see what I'm saying? So this is the part of the challenge. You know,
The whole idea of hyperlinking and pointing to sites, this is, like, the origin story of search engines, was really about, um, if a lot of people point to your page and link to your page, it's credible and it goes up in the rankings, right? It becomes more visible. That was, like, the early origin story.
Yeah, yeah. Search rank. It was a good algorithm.
Yeah. Still is, I mean. But here's the thing: that was borrowed directly from library citation practices,
where in academia, for example, if you cite to me, to my work a bunch of times,
that's signaling that my work is credible. But see, what we know is that you might be citing
my work and saying like, that work is trash. Okay. You might actually-
This famously bad study by Safiya Noble.
Terrible.
Yeah.
Okay.
So we know exactly that the citation alone is not sufficient.
The pointing to the so-called science of the count, right?
How many times is not enough because it might be pointing to terrible things.
Again, we don't have that kind of nuance
and understanding in a search engine,
but those are the logics, right?
So guess what?
People spent, there are huge farms,
information farms that are optimizing content
all over the world to spread misinformation
and disinformation by hyperlinking a zillion sites, right?
To push content.
Search engine optimization.
That's what it is. And it's one of the reasons why something like the porn industry that has
more money than any industry, right? Is able to have, you know, hundreds of thousands of micro
porn sites that get really specific and really, really niche. And they're all interconnected and
hooked up with each other. And that helps push the content. And people spend their lives and
get paid trying to figure out how to game the algorithms and push that content up to the top.
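(One more toy sketch, this time of the early "links as endorsements" logic described above. This is a naive inbound-link count, not Google's actual PageRank, and every site name is made up. It shows how a farm of pages that all point at one target can outrank a page with a handful of genuine citations.)

```python
# Naive "links as votes" ranking: all site names are hypothetical.
from collections import Counter

links = {
    "university.edu/study":  ["archive.org/paper"],
    "newspaper.com/report":  ["archive.org/paper"],
    "blog.example/review":   ["archive.org/paper"],
}
# A "link farm": 40 sites created only to point at one target (and at each other).
for i in range(40):
    links[f"farm{i}.example"] = ["target.example", f"farm{(i + 1) % 40}.example"]

inbound = Counter(dest for dests in links.values() for dest in dests)

for page, votes in inbound.most_common(3):
    print(page, votes)
# target.example, with 40 inbound links from its own farm, ranks above
# archive.org/paper, which has only 3 genuine citations.
```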
And so this is the kind of thing that when the public is using these and they don't understand
that that's what's happening. And let me tell you, you know, who's got a lot of technical skill
in spreading disinformation and optimizing content are white supremacists and white nationalists.
Whoa, you got to look at the work of people like Jessie Daniels at CUNY and Joan Donovan at Harvard.
When you look at their work, you will not sleep at night. When you think about the technical skill
that white supremacists who are organized
to specifically spread disinformation and propaganda against people of color and Jewish
people, it's frightening. So if you don't know these things are happening, you're just like,
it's computer code, it's math, and it's science. And you're not, and the problem is that the design of something like Google or any of these sites specifically, like, desurfaces that, like, hides that stuff that's happening under the surface and makes it look super clinical and super exact.
It makes it look like that Fitbit readout or just a piece of technical equipment that feels extremely neutral when it's
hiding all of this human manipulation underneath.
Yeah.
That white opaque screen.
Yeah.
Nothing to see here.
Yeah.
Cute art,
a little doodle.
Yeah.
Google at the bottom.
You never look at that 13th page.
Oh,
you're like,
it's there.
So they really must've done their,
their homework.
They crawled it.
And you know,
Google only indexes about half of the internet anyway.
Really?
Yeah.
So, you know, the question is like, what is in the other half?
And of course, that's where sex trafficking and child sexual exploitation content and, you know, the dark web, like some of the worst, worst stuff is.
And we see that stuff bleed through.
Sometimes it gets optimized in, especially into like platforms like YouTube.
And this is, of course, where the work of one of your previous guests, Sarah Roberts, is so important, right?
The content moderators who are letting it in or not seeing it, not catching it, not knowing what it is.
And, you know, and we see the glimpses of that come through.
Let's talk about some other bits of tech in the time that we have left. What about like
facial recognition algorithms? I understand.
It's a no.
It's a no?
It's a no go.
You don't want to talk about them?
I mean, we can talk about it. I'll tell you, you know, the thing that's really interesting,
I just signed on to a letter that went to, I think it was addressed to the New York State Attorney General, because landlords of low-income housing in New York, there was one case in particular, were interested in installing facial recognition to allow people into the building,
which means if you're low income
and you live in this building,
you have to use the facial recognition software to get in.
Now, first of all, probably if you're low income,
you might be a person of color
because of historical discrimination.
And we know from the work of people like Joy Buolamwini at the MIT Media Lab that these facial recognition systems have low efficacy for people of color, especially women of color.
And of course, you know, women of color and children are the most likely to live in poverty or be low income in the United States.
And so here you have faulty technology that doesn't read women of color's faces, especially black women's faces, very well.
And that becomes the criteria by which you can get into your apartment.
Yeah.
And your friends.
Yeah.
And your family.
You're just going to have visitors.
And your coworker from work.
Now they've got to be programmed into the database so that the door can read them.
And if it doesn't, they can't come over.
They can't get in.
Yeah.
And it might not read their faces effectively.
That's right.
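As a rough illustration of what that means at the door, here is a small back-of-the-envelope sketch. The false-rejection rates and the number of monthly entries are hypothetical, not measurements from any real system or from the research mentioned above; the only point is that a higher false-rejection rate compounds over repeated entries.

```python
# Hypothetical numbers only: how often a face-unlock door wrongly rejects a
# resident at least once, given a per-attempt false-rejection rate and the
# number of times they come and go in a month.

def p_rejected_at_least_once(false_reject_rate, attempts):
    """Chance of being wrongly rejected at least once across `attempts` tries."""
    return 1 - (1 - false_reject_rate) ** attempts

attempts_per_month = 120        # hypothetical: roughly four entries a day
rate_group_a = 0.005            # hypothetical false-rejection rate
rate_group_b = 0.05             # hypothetical, ten times higher

print(p_rejected_at_least_once(rate_group_a, attempts_per_month))  # about 0.45
print(p_rejected_at_least_once(rate_group_b, attempts_per_month))  # about 0.998
```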
I remember reading an account by, and I'm sorry, I don't remember who it is because I read it a few years ago.
But it was by a programmer who worked, I think, for a startup that was designing early face recognition technology.
And their tech demo did basically what Snapchat does now, you know,
where it like puts, it puts a funny hat on you or whatever, you know,
on the screen you're,
you're looking at it and showing you your face with an alteration.
And they did a demo in like Times Square or somewhere big like that,
where people could come like try it out.
And it was like their first big opening.
They're like, oh, we're so excited for people to try our tech.
And then a black man walks in front of it and like tries to do it and just
doesn't work.
It's just like,
it doesn't pop on his head.
And the guy,
and the guy writing the piece was like,
my,
like his heart fell into his stomach.
Cause he suddenly realized in that moment that there were no black
employees at the company.
And so, and thus no black, no black testers.
Because they were testing it on their own faces.
Exactly. Because they had no black testers. And so they had just forgotten to,
you know, make sure it worked on black faces. And then the guy testing it was like,
oh, I guess it doesn't work. I don't know. Like wandered off, like didn't feel like he was
actually the recipient of racism in that moment. But, you know, it was just like this
cautionary tale. And that always stuck with me when I saw, you know, in the years since just
article after article about how this facial recognition feature does not work, or just works worse. Even if it works 10% less well, right, on certain types of faces, when that's you getting into your apartment, or something else important like that, or you being scanned by a law enforcement camera that is looking for a suspect of some kind, right? A mismatch.
Which, you saw, the ACLU did that study on the Congressional Black Caucus members.
Okay, so a few years ago, maybe a year and a half ago, the ACLU did a study with some researchers, and they ran the Congressional Black Caucus members' headshots, their congressional headshots, through a database that would do facial recognition matching.
And, you know, it flagged like a huge percentage of them as criminals that were in a criminal database.
Right.
Which, of course, is not true because you can't be in a criminal database and be a senator or a congressman.
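To see how a one-to-many search like that produces false hits, here is a back-of-the-envelope sketch with made-up numbers. It is not the ACLU's methodology; the database size and the per-comparison false-match rates are hypothetical. It just shows that comparing each face against a large database multiplies a tiny error rate into a real number of false matches, and that a higher error rate for one group means more of those false matches land on that group.

```python
# Hypothetical numbers only (not the ACLU study's data): expected number of
# people who get at least one false match when each face is compared against
# every entry in a large mugshot database.

def expected_people_with_false_match(people, database_size, false_match_rate):
    """Expected count of people falsely matched to at least one database entry."""
    p_at_least_one = 1 - (1 - false_match_rate) ** database_size
    return people * p_at_least_one

people_scanned = 535       # hypothetical: number of faces run through the search
database_size = 25_000     # hypothetical mugshot database size
rate_group_a = 1e-5        # hypothetical per-comparison false-match rate
rate_group_b = 3e-5        # hypothetical, higher rate for another group

print(expected_people_with_false_match(people_scanned, database_size, rate_group_a))  # about 118
print(expected_people_with_false_match(people_scanned, database_size, rate_group_b))  # about 282
```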
I mean, OK, so the fact that this software is so crude and I'm telling you, if you talk to people who do like machine learning and are working on these technologies, they'll tell you that the tech is so crude right now.
It's still trying to figure out like, is this table like that table?
Are these tables, right?
Is this cat like that cat?
Is it a cat?
So to think that the level of precision down to our human faces, right? And of course,
faces change over time. They age, they get different. And then again, that technology serves as some type of baseline for access. I mean, that's the question, like, are these
things going to be used as like, you know, arbiters or, you know, pathways?
And the thing is, they already are. You know, I saw an article a month or two ago and tweeted it out about, you know, here is a camera that is being sold in China to the government there, specifically being advertised that it can tell who is a Uyghur. You know, this is being done currently around the world.
And it's not hard to believe
that we'll see it in the United States.
If you believe that it's impossible for,
you know, the United States
to deploy the same kind of technology
that is being deployed there,
I think you're naive.
And the thing is, regardless of how crude it is,
you're right that so much technology like this is actually cruder than we think.
And we fill in the blanks because we want it to work.
Right.
So I always use the example of Waze. When people use the driving app Waze, it says it gives you the shortest route. I used to try to use it when I was still driving around LA, and I'd ride in the car with people and I would do it.
I'm like, this is not fucking faster.
It's not faster.
What it does is it tells you that it's
faster and it gives you a route that allows you to believe while I'm not on the main street,
I'm making crazy rights and lefts. So therefore it must be faster. And if you are therefore the
type of person who is anxious, if you think there might be a faster route, you're on the freeway
and you're like, there's gotta be a faster way. I want to know I'm going the faster way. I'm
freaking out here, right. I'm not that kind
of driver. I just, you know, when I was driving, I would just sit and be like, I'll be a little
late. It's fine. I don't want to stress out. But some people get anxious, right? And so Waze is an app that tells you you're on the fastest route and it relieves that anxiety, whether or not it is the fastest route. That, in my view, is what Waze is actually giving you, right? And so law enforcement will
buy a camera that says, hey, we can detect criminals, whether or not it can detect criminals,
because they really want a camera that can detect criminals. Or, you know, take another example I talked about in the intro, self-driving cars. People really want a self-driving car and they will buy one, even though it's not really self-driving.
And they will trust it to the point where they die, because they really want to believe in this thing.
Or kill a black person, because it doesn't recognize that a black person is a pedestrian walking in front of the car.
Right.
Because the sensors don't pick up their skin.
Absolutely.
Well, car.
I mean, I know there's too many shows.
We've talked about that case on this show before. But yeah, the fundamental thing, getting back to what I talked about in the intro and what we've discussed here, is that the biggest issue here, I don't think that we're going to eliminate racism from technology or from society tomorrow, right? I don't think your argument is like, hey, Google should eliminate all racism from everything it does tomorrow, right?
I mean, that would be great if it did.
That would be great.
But that's not going to solve the systemic problems of our society.
Yeah.
But I do think the whole sector has a responsibility to not foment the spread of myths and disinformation,
which includes, you know, graphic and gross stereotypes about people who are already
marginalized, who are already, right? Because what that does, I mean, you know, in places like
Germany and France, you cannot traffic in, like, anti-Semitism, for example, in a search engine. You cannot. No Nazi paraphernalia, no anti-Jewish kind
of content. And they spend a lot of money screening that kind of content out. So if you think it can't
be done, look at the cases of Germany and France. One of the reasons why is because in Germany and
France, they still have living memory of the role that propaganda played
in bringing about the Holocaust.
They actually know there's a relationship
between propaganda and disinformation
against vulnerable people
or against minoritized communities
and the desensitization that would lead to mass genocide.
So we have not reconciled the role that disinformation and propaganda against
minoritized communities has played in our own holocausts in this country against American
Indians, against African Americans, against a lot of people, indigenous people. So I think we're
going to have to, at some point, reconcile, like, do we want this? Do we acknowledge that there is a role that this kind of misinformation plays
in grossly stereotyping and, of course, that having real-world consequences?
And that's the reason to study it.
That's the reason to talk about it.
And to diminish, I think you're completely right,
and to diminish our trust in the systems, right?
That, like, the issue
again, with all these systems is that we trust them. We have a belief that those propagandistic
elements are not in them or that they're not discriminatory, that they're neutral,
that they're just math, right? And that seems to me to be the first step is to take away that trust and to see these things as like, no, your Fitbit is not a magical device that knows everything.
It's a piece of plastic that was designed by some dumb programmers who wanted you to buy crap.
And so maybe it'll do an okay job and if it helps you run, good for you.
But it's not perfect.
And Google is not an oracle.
Yeah, that's right. And it's also tied to whether you have healthcare
or access to healthcare, affordable healthcare. But see, the Fitbit just lets us believe that
it's because you didn't get enough steps in. That's why you're not healthy. So see what I'm
saying? It's like, you got to tie these things to bigger systems of the way that we live and
organize our society. But instead, it's like, well, it's on you. The reason why you're not
healthy is because you did not get enough steps in today. And I think that is that
hyper reductionist kind of way that these technologies orient us and acculturate us.
It can be dangerous. And let me tie that back to racism. Cause my big problem with the dialogue around racism that I was brought up with is that it's so individualized, that it's all about me.
It's like, well, no, all that means is was I meaner to a black person today than I was to a white person, right?
Did I say a racist word, right?
Did I do the one?
No, I didn't.
Problem solved, right?
That's all I have to care about.
And that's not what it is, right? Like, the more I learn, racism is redlining, right? Which is a system that was put in place. And whether or not you are racist today, everyone could
choose to not be consciously racist and that system will still exist. And that's what we're
really talking about. And I really thank you for coming on the show to talk to us about these.
You're awesome. It's so, it's just, I love being here and talking to you. I feel like you need to
come over and we need to have drinks.
I don't drink, but I'll have a seltzer.
Oh, that sounds great. I don't drink anymore either.
Congratulations. Well, good for you. We'll have some nice flavored La Croix.
Let's do it.
Thanks, Safiya.
Well, thank you once again to Safiya Noble for coming on the show. I also want to thank our producer Dana Wickens, our engineer Ryan Connor, our researcher Sam
Roudman, Andrew WK for our theme song, and I want to thank you for listening.
You can sign up for my mailing list or check out my tour dates at adamconover.net.
You can follow me anywhere you want on social media at Adam Conover.
Once again, I'll be at the Irvine Improv on January 30th.
I hope you come out.
And until then, we'll see you next time on Factually.
That was a HeadGum Podcast.