Young and Profiting with Hala Taha - Stephen Wolfram: AI, ChatGPT, and the Computational Nature of Reality | E284
Episode Date: April 15, 2024. Sponsored By: Shopify - Sign up for a one-dollar-per-month trial period at youngandprofiting.co/shopify Indeed - Get a $75 job credit at indeed.com/profiting Airbnb - Your home might be worth more than you think. Find out how much at airbnb.com/host Porkbun - Get your .bio domain and link in bio bundle for just $5 from Porkbun at porkbun.com/Profiting Yahoo Finance - For comprehensive financial news and analysis, visit YahooFinance.com
Transcript
Today's episode is sponsored in part by Yahoo Finance, Porkbun, Indeed, Airbnb, and Shopify.
Yahoo Finance is the number one financial destination.
For financial news and analysis, visit the brand behind every great investor, yahoofinance.com.
Build your digital brand and manage all your links from one spot with Porkbun.
Get your .bio domain and link in bio bundle for just $5
at porkbun.com slash profiting. Attract, interview, and hire all in one place with Indeed. Get a $75
sponsored job credit at indeed.com slash profiting. Generate extra income by hosting your home on
Airbnb. Your home might be worth more than you think. Find out how much at airbnb.com
slash host. Shopify is the global commerce platform that helps you grow your business.
Sign up for a $1 per month trial period at Shopify.com slash profiting.
As always, you can find all of our incredible deals in the show notes.
More and more systems in the world will get automated. AI is another step in the automation of things. When ChatGPT came out in late 2022, the people who'd been working on it, they didn't know it was going to work.
They didn't think anything exciting would have happened, and by golly, it had worked.
I think the thing that we learned from sort of the advance of AI is, well, actually, there's not as much distance
between sort of the amazing stuff of our minds and things that are just able to be constructed computationally.
This is the coming paradigm of the 21st century. And if you understand that well,
it gives you a huge advantage.
Unfortunately, whenever there's powerful technology,
you can do ridiculous things with it.
Having said that, when you say things like,
well, let's make sure that AIs never do the wrong thing.
Well, the problem with that is.
Young and Profiters, welcome to the show. We're going to be talking a lot more about AI in 2024,
because it's such an important topic.
It's changing the world.
Last year I had a couple of conversations.
We had William talking about AI.
We had Mo Gawdat, and I loved that episode.
I highly recommend you guys check out the Mo Gawdat episode.
But nonetheless, I'm going to be interviewing a lot more AI folks.
And first up on the docket is Dr.
Stephen Wolfram.
He's been focused on AI and computational thinking for the past decade.
Dr. Stephen Wolfram is a world-renowned computer scientist,
mathematician, theoretical physicist,
and the founder of Wolfram Research,
as well as the inventor
of the Wolfram computational language.
A young prodigy, he published his first paper at 15.
He obtained his PhD in physics at 20,
and he also was the youngest recipient of the
MacArthur Genius Grant.
In addition to this, Dr. Wolfram is the author of several books, including a recent one on
AI entitled What Is ChatGPT Doing?,
which we'll discuss today.
So we've got a lot of ground to cover with Stephen.
We're going to talk about what is AI?
What is computational thinking?
How is AI similar to nature?
What is going on in the background of ChatGPT?
How does it actually work?
And what does he think the future of AI looks like
for jobs and for humanity overall?
We've got so much to cover.
I think he's gonna blow your mind.
So Stephen, welcome to Young and Profiting podcast.
Hello there.
I am so excited for today's interview.
We love the topic of AI.
And I wanted to talk a little bit about your childhood before we get to the meat and potatoes of today's interview.
So from my understanding,
you started as a particle physicist at a very young age.
You even started publishing scholarly papers
as young as 15 years old.
So talk to us about how you first got interested in science
and what you were like as a kid.
Well, let's see, I grew up in England in the 1960s when space was the thing of the future, which it is again
now but wasn't for 50 years.
I was interested in those kinds of things and that got me interested in how things like
spacecraft work and that got me interested in physics.
And so I started learning about physics, and it so happened that the early 1970s were
a time when lots of things were happening in particle physics, lots of new particles
getting discovered, lots of fast progress and so on. And so I got involved in that.
It's always cool to be involved in fields that are in the golden age of expansion, which
particle physics was at the time. So that was how I got into these things. You know, it's funny, you mentioned AI. And I realized that when I was a kid,
machines that think were right around the corner, just as colonization of Mars was right
around the corner too. But it's an interesting thing to see what actually happens over a
50-year span and what doesn't.
It's so crazy to think how much has changed
over the last 50 years.
And how much has not.
In science, for example, I have just been finishing
some projects that I started basically 50 years ago.
And it's kind of cool to finish something that,
there's a big science question that I started asking
when I was 12 years old about how a thing
that people have studied for 150
years now works, the second law of thermodynamics.
I was interested in that when I was 12 years old.
I finally, I think, figured that out.
I published a book about it last year.
It's nice to see that one can tie up these things, but it's also a little bit shocking
how slowly big ideas move.
For example, the neural nets that everybody's so excited about now in AI, neural nets were
invented in 1943.
The original conception of them is not that different from what people use today, except
that now we have computers that run billions of times faster than things that
were imagined back in the 1950s and so on.
It's interesting.
Occasionally, things happen very quickly.
Oftentimes it's shocking how slowly things happen and how long it takes for the world
to absorb ideas.
Sometimes there'll be an idea and finally some technology will make it
possible to execute that idea in a way that wasn't there before. Sometimes
there's an idea and it's been hanging out for a long time and people just
ignored it for one reason or another. And I think some of the things that are
happening with AI today probably could have happened a bit earlier. Some things
have depended on sort of the building of a big technology stack,
but it's always interesting to see that to me at least.
It's so fascinating.
This actually dovetails perfectly into my next question
about your first experiences with AI.
So now everybody knows what AI is,
but really most of us only started to understand it
and use this term maybe five years ago max,
but you've been
studying this for decades even before people probably called it AI. So can you talk to us
about the beginnings of how it all started? AI predates me. That term was invented in 1956.
You know, it's funny because as soon as computers were invented, basically in the late 1940s, and
they started to become things that people had seen by the beginning of the 1960s.
I first saw a computer when I was 10 years old, which was 1969-ish.
At the time, a computer was a very big thing, tended by people in white coats and so on.
I first got my hands on a computer in 1972, and that was a computer that was the size
of a large desk and programmed with paper tape and so on, and was rather primitive by
today's standards.
The elements were all there by that time, but it's true.
Most people had not seen a computer until probably the beginning of the 1980s or something,
which was when PCs and things like that started to come out.
But it was from the very first moments when electronic computers came on the scene
that people sort of assumed that computers would automate thought,
just as bulldozers and forklift trucks had automated
mechanical work.
And "giant electronic brains" was a typical characterization of computers in
the 1950s.
So this idea that one would automate thought was a very early idea.
Now the question was, how hard was it going to be to do that?
And people in the 1950s and beginning of the 1960s, they were like, this is going to be
easy.
You know, now we have these computers, it's going to be easy to replicate what brains
do.
In fact, a good example back in the beginning of the 1960s, a famous incident was during
the Cold War and people were worried about, you know, US, Russian, Soviet communication
and so on. They said, well, maybe the people
are in a room, there's some interpreter, the interpreter is going to not translate things
correctly. So let's not use a human interpreter. Let's teach a machine to do that translation
at the beginning of the 1960s. And of course, machine translation, which is now finally
in the 2020s, pretty good, took an extra 60 years to actually happen.
People just didn't have the intuition about what was going to be hard, what wasn't going
to be hard.
So, the term AI was in the air already very much by the 1960s.
When I was a kid, I'm sure I read books about the future in which AI was a thing, and it
was certainly in movies and things like that i think then this question of okay so how would we get computers to do.
Thinking like things when i was a kid i was interested in taking the knowledge of the world and somehow cataloging it and so i don't know why I got interested in that, but that's something been interested in for a long time. And so I started thinking, you know, how would we take the knowledge
of the world and make it automatic to be able to answer questions based on the knowledge
that our civilizations accumulated? So I started building things along those lines and I started
building a whole technology stack that I started in the late 1970s, and now it's turned into a big thing
that lots of people use. But the idea there, the first idea there was let's be able to
compute things like math and so on. And let's take what has been something that humans have
to do and make it automatic to have computers do it. People had said for a while, when computers
can do calculus, then
we'll know that they're intelligent. Things I built solved that problem. By the mid 1980s,
that problem was pretty well solved. And then people say, well, it's just engineering. It's
not really a computer being intelligent. I would agree with that. But then at the very
beginning of the 1980s, when I was working on automating things like mathematical computation, I got curious about the more general problem
of doing the kinds of things that we humans do, like we match patterns. We see this image
and it's got a bunch of pixels in it and we say that's a picture of a cat or that's a
picture of a dog. And this question of how do we do that kind of pattern matching, I got interested in and
started trying to figure out how to make that work.
I knew about neural nets.
I started trying to get, this must have been 1980, 81, something like that.
I started trying to get neural nets to do things like that, but they didn't work at
all at the time, hopeless.
As it turns out, you know, you say things happen quickly, and I say things sometimes happen very slowly.
I was just working on something that is kind of a potential new direction for how neural
nets and things like that might work.
And I realized, I worked on this once before, and I pulled out this paper that I wrote in
1985 that has the same basic idea that I was just very proud of myself
for having figured out just last week.
And it's like, well, I started on it in 1985.
Well, now I understand a bunch more and we have much more powerful computers.
Maybe I can make this idea work.
But so this notion that there are things that people thought would be hard for computers, like doing calculus and so on.
We crushed that, so to speak, a long time ago.
Then there were things that are super easy for people, like tell that's a cat, that's a dog,
which wasn't solved, and I wasn't involved in the solving of that.
That's something that people worked on for a long time and nobody thought it was going to work.
And then suddenly in 2011, sort of through a mistake, some people who'd been working
on this for a long time left a computer trying to train to tell things like cats from dogs
for a month without paying attention to it.
They came back, they didn't think anything exciting would have happened, and by golly
it had worked.
And that's what started the current enthusiasm about neural nets and deep
learning and so on. And when ChatGPT came out in late 2022, again, the people who'd been working
on it, they didn't know it was going to work. We had worked on previous kinds of language models,
things that try to do things like predict what the next word will be in a sentence, those sorts of
things. And they were really pretty crummy. And suddenly, for reasons that we still don't understand,
we kind of got above this threshold where it's like, yes, this is pretty human-like.
And it's not clear what caused that threshold. It's not clear whether we, in our human languages,
for example, we might have, I don't know, 40,000 words that are
common in a language, like most languages, English as an example.
And probably that number of words is somehow related to how big an artificial
brain you need to be able to deal with language in a reasonable way.
And you know, if our brains were bigger, maybe we would routinely have languages with 200,000
words in them.
We don't know. Maybe it's this kind of match between what we can do with an artificial neural network
versus what our human biological neural nets manage to do. We manage to reach enough of
a match that people say, by golly, the thing seems to be doing the kinds of things that we humans do.
But what's ended up happening is that there are the things we humans can quickly do,
like tell a cat from a dog or figure out what the next word in the sentence is likely to
be.
Then there are things that we humans have actually found really hard to do, like solve
this math problem or figure out this thing in science or do this
kind of simulation of what happens in the natural world. Those are things that the unaided
brain doesn't manage to do very well on. But the big thing that's happened the last 300
years or so is we built a bunch of formalization of the world, first with things like logic that was back in antiquity,
and then with math, and most recently with computation,
where we're kind of setting up things
so that we can talk about things in a more structured way
than just the way that we think about them
off the top of our heads, so to speak.
That's so interesting.
And I know that you work on something
called computational thinking.
And I think what you're saying now really relates to that.
So help us understand the Wolfram project and computational thinking and how it's related
to the fact that humans, we need to formalize and organize things like mathematics and logic.
What's the history behind that? Why do we need to do that as humans?
And then how does it relate to computational thinking in the future?
There are things one can immediately figure out, where one just sort of intuitively knows, oh,
that's a cat, that's a dog, whatever. Then there are things where you have to go through
a process of working out what's true or working out how to construct this or that thing. When
you're going through that process, you've got to have solid bricks to start building
that tower.
So what are those bricks going to be made of?
Well, you have to have something which has definitive structure.
And that's something where, for example, back in antiquity, when logic got invented, it
was kind of like, well, you can think vaguely, yeah, that sentence sounds kind of right.
Or you can say, well, wait a minute, this or that, if one of those things is true, then
that or that has to be true, etc., etc., etc.
You've got some structured way to think about things.
And then in the 1600s, math became sort of a popular way to think about the world.
And then you could say, okay, we're looking at how the planet goes around the sun in roughly an ellipse,
but let's put math into that. And then we can have this way to actually compute what's
going to happen. So for about 300 years, this idea of math is going to explain how the world
works at some level was kind of a dominant theme. And that worked pretty well in physics. It worked pretty terribly in things like biology, in social sciences and so on.
People imagined there might be a social physics of how society works; that never really panned
out.
So there were places where math had worked, and it gave us
a lot of modern engineering and so on, and there were cases where it hadn't really worked.
I got pretty interested in this at the beginning of the 1980s and sort of figuring out how
do you formalize thinking about the world in a way that goes beyond what math provides
one, things like calculus and so on.
What I realized is that you just think about, well, there are definite rules that describe
how things work, and those rules are more
stated in terms of, oh, you have this arrangement of black and white cells, and then this happens
and so on.
They're not things that you necessarily can write in mathematical terms, in terms of multiplications
and integrals and things like this.
And so, as a matter of science, I got interested in: what do these simple programs, these
systems that you can describe as rules, typically do? And what one might have
assumed is you have a program that's simple enough, it's going to just do simple things.
This turns out not to be true. Big surprise to me, at least. I think to everybody else as well.
It took people a few decades to absorb this point. It took me a solid bunch of years to absorb this point.
But you just do these experiments, computer experiments, and you find out, yes, you use
a simple rule and no, it does a complicated thing.
That turns out to be pretty interesting if you want to understand how nature works, because
it seems like that's the secret that nature uses to make a lot of the complicated stuff that we see, the same phenomenon of simple rules, complicated
behavior.
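For readers who want to see the kind of "simple program" Wolfram is describing, here is a minimal Python sketch of one of his own favorite examples, the Rule 30 elementary cellular automaton. The update rule fits in a line or two, yet the printed pattern quickly looks complicated.

```python
# A minimal sketch of a "simple program": the Rule 30 elementary cellular
# automaton. Each cell is 0 or 1, and a cell's next value depends only on
# itself and its two neighbors, yet the overall pattern looks complicated.

RULE = 30  # the rule number encodes the new cell value for each of the 8 neighborhoods

def step(cells):
    """Apply one update of the rule to a row of cells (wrapping at the edges)."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31 + [1] + [0] * 31  # start from a single "black" cell
for _ in range(30):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Running it prints a growing triangle of cells whose right-hand side looks effectively random, which is the "simple rules, complicated behavior" phenomenon being described.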
So that turns into a whole big direction and new understanding about how science works.
I wrote this big book back in 2002 called A New Kind of Science.
Well, its title kind of says what it is.
So that's one kind of branch, sort
of understanding the world in terms of computational rules. Another thing has to do with taking
the things that we normally think about, whether that's how far is it from one city to another,
or how do we remove this thing from this image or something like this, things that we would
normally think about and talk about.
And how do we take those kinds of things and think about them in a structured computational
way?
So that has turned into a big enterprise in my life, which is building our computational
language, this thing now called Wolfram Language, that powers a lot of research and development
kinds of things and also
lots of actual practical systems in the world, although when you are interacting
with those systems you don't see what's inside them so to speak, but the idea
there is to make a language for describing things in the world which
might be, you know, this is a city, this is both the concept of a city
and the actuality of the couple of hundred thousand cities that exist in the world, where
they are, what their populations are, lots of other data about them, and being able to
compute things about things in the world.
And so that's been a big effort to build up that computational language. And the thing that's
exciting that we're on the cusp of, I suppose, is people who study things like science and so on,
for the last 300 years, it's like, okay, to make this science really work, you have to make it
somehow mathematical. Well, now, the case is that the new way to make science is to make it computational.
And so you see all these different fields, call them X, you start seeing the computational X field start to come into existence.
And I suppose one of my big life missions has been to provide this language and notation for making computational X for all x possible. It's a similar mission to what people
did maybe 500 years ago when people invented mathematical notation. I mean, there was a time
when if you wanted to talk about math, it was all in terms of just regular words at the time in
Latin. And then people invented things like plus signs and equal signs and so on. And that streamlined
the way of talking about math.
And that's what led to, for example, algebra and then calculus, and then all the kind of
modern mathematical science that we have. And so similarly, what I've been trying to do last 40
years or so is build a computational language, a notation for computation, a way of talking about
things computationally that lets one build computational X for all X.
One of the great things that happens when you make things computational is not only
do you have a clearer way to describe what you're talking about, but also your computer
can help you figure it out.
And so you get this superpower.
As soon as you can express yourself computationally, you tap into the
super power of actually being able to compute things. And that's amazingly powerful. And
when I was a kid, as I say, in the 1970s, physics was popping at the time, because various
new methods had been invented, not related to computers. At this time, all the computational
X fields are just starting to really hop
and it's starting to be possible
to do really, really interesting things.
And that's going to be an area of tremendous growth
in the next how many years.
I have a few follow-up questions to that.
So you say that computational thinking
is another layer in human evolution.
So I wanna understand why you feel
it's gonna help humans evolve. Also curious to understand the practical ways that you
are using the Wolfram language and how it relates to AI, if it does at all.
Let's take the second thing first. Wolfram language is about representing the world
computationally in a sort of precise computational way. It also happens to
make use of a bunch of AI.
But let's put that aside.
The way that, for example, something like an LLM, like ChatGPT or something like
that, what it does is it makes up pieces of language.
If we have a sentence like, the cat sat on the blank, what it will have done is it's read a billion webpages.
Chances are the most common next word is going to be mat.
And it has set itself up so that it knows
that the most common next word is mat.
So let's write down mat.
So the big surprise is that it doesn't just do
simple things like that, but having built the structure from reading all these web pages it can write plausible sentences.
Those sentences sort of sound like they make sense, the kind of thing that's typical of what you might read, but they might or might not actually have anything to do with reality in the world, so to speak.
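As a rough illustration of the "most common next word" idea (a toy stand-in, not how a production LLM is actually implemented), here is a sketch that just counts which word most often follows a two-word context in a tiny made-up corpus:

```python
# A toy "predict the next word by counting" model. This is an illustration of
# the idea, not how a real large language model works internally.
from collections import Counter, defaultdict

# Tiny corpus standing in for the billions of web pages a real model is trained on.
corpus = (
    "the cat sat on the mat . the dog sat on the mat . "
    "the cat sat on the floor . the cat sat on the mat ."
).split()

# For each pair of preceding words, count which word comes next.
next_word = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    next_word[(a, b)][c] += 1

print(next_word[("on", "the")].most_common())
# -> [('mat', 3), ('floor', 1)]: "mat" is the most common continuation of "... on the"
```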
That's working kind of the way humans immediately think about things. Then there's the separate whole idea of formalized knowledge, which is the thing that led to
modern science and so on. That's a different branch from things humans just can quickly
and naturally do. So in a sense, Wolfram language, the big contribution right now to the world of the emerging AI
language models, all this kind of thing, is that we have this computational view of the
world which allows one to do precise computations and build up these whole towers of consequences.
So the typical setup, and you'll see more and more coming out along these lines.
I mean, we built something with OpenAI back, oh gosh,
a year ago now.
An early version of this is you've got the language model
and it's trying to make up words.
And then it gets to use as a tool
our computational language.
If it can formulate what it's talking about,
well, we have ways to take the natural language
that it produces.
We've had the Wolfram|Alpha system, which came out in 2009; it's a system that has natural
language understanding.
We sort of had solved the problem of one sentence at a time, kind of, what does this mean?
Can we translate this natural language in English, for example, into computational language,
then compute an answer using potentially many, many steps of computation, then
that's something that is sort of a solid answer that was computed
from knowledge that we've curated, etc, etc, etc. So the
typical mode of interaction is that sort of a linguistic
interface provided by things like LLMs, and then using our
computational language as a tool to actually
figure out, hey, this is the thing that's actually true, so to speak.
Just as humans don't necessarily immediately know everything, but with tools, they can
get a long way.
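A hedged sketch of that division of labor is below. The function names, the "COMPUTE:" convention, and the numbers are all made up for illustration; they are placeholders standing in for a real language model and a real computational engine, not actual APIs.

```python
# A rough sketch of the division of labor described above. Both functions are
# hypothetical placeholders: one stands in for a language model, the other for
# a precise computational engine such as Wolfram Language.

def ask_language_model(prompt: str) -> str:
    # Placeholder: a real system would call an actual LLM here.
    if "Computed result:" in prompt:
        return "It is roughly 5,570 kilometers from New York City to London."
    return "COMPUTE: great-circle distance from New York City to London"

def compute_with_wolfram_tool(query: str) -> str:
    # Placeholder: a real system would evaluate the query with the computational engine.
    return "about 5,570 km"

def answer(question: str) -> str:
    draft = ask_language_model(question)
    if draft.startswith("COMPUTE:"):  # the model decides it needs a precise computation
        result = compute_with_wolfram_tool(draft[len("COMPUTE:"):].strip())
        draft = ask_language_model(question + "\nComputed result: " + result)
    return draft

print(answer("How far is New York City from London?"))
```

The design point is simply that the linguistic interface drafts and interprets, while the "solid answer" comes from the computational tool.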
I suppose it's been sort of the story of my life, at least.
I discovered computers as a tool back in 1972, and I've been using them ever
since and managed to figure out a number of interesting things in science and technology
and so on by using this kind of external to me superpower tool of computation. The LLMs
and the AIs get to do the same thing. So that's the core part of how the technology I've been
building for a long time most immediately
fits into the current expansion of excitement about AI and language models and so on.
I think there are other pieces to this which have to do with how, for example, science
that I've done relates to understanding more about how you can build other kinds of AI-like things.
But that's sort of a separate branch.
Let's hold that thought and take a quick break with our sponsors.
Young and Profiters, I don't know about you, but I love to make my home some place that I'm proud of.
And that's why I spent a lot of time on my apartment trying to make it my perfect pink
palace all set with a velvet couch, an in-home studio and skyline views of the
city. And while I love my apartment, I can get really sick of it.
I can get really uninspired. And if you work from home,
you know exactly what I'm talking about. But the good news is,
like many of you guys,
I'm an entrepreneur and that means that I can work from anywhere.
And finally I decided to make good use of my work flexibility for the first time.
This holiday break, the sun was calling my name, so I packed my bags and my boyfriend
and we headed to Venice Beach, California.
We got a super cute bungalow and we worked from home for an entire month.
The fresh air and slower pace helped to inspire some really cool new ideas for my business.
And now I'm hitting the ground running in Q1.
Airbnb was the one that helped me make these California dreams come true and in fact Airbnb
comes in clutch for me time and time again.
Whether it's finding the perfect Airbnb home for our annual executive team outing or
booking a vacation where my extended family can fit all in one place.
Airbnb always makes it a great experience.
And you know me,
I'm always thinking of my latest business venture.
So when I found out that a lot of my successful friends
and clients host on Airbnb,
I got curious and I wanna follow suit
because it seems like such a great way
to generate passive income.
So now we have a plan to spend more time in Miami
and then we'll host our place on Airbnb to earn some extra money whenever we're back on the East Coast.
So I can't wait for that. And a lot of people don't realize they've got an Airbnb right under their own noses.
You can Airbnb your place or a spare room if you're out of town for even just a few days or weeks.
You could do what I did and work remotely and then Airbnb your place to fund your trip.
Your home might be worth more than you think. Find out how much at Airbnb.com
slash host that's airbnb.com slash host to find out how much your home is
worth. Hey, Yap fam,
starting my LinkedIn secrets masterclass was one of the best things I've ever
done for my business.
I didn't have to waste time figuring out all the nuts and bolts of setting up a
website that had everything I needed, like a way to buy my course, subscription offerings, chat
functionality and so on because it was super easy with Shopify.
Shopify is the global commerce platform that helps you sell at every stage of your business,
whether you're selling your first product, finally taking your side hustle full time, or making half a million dollars from your masterclass like me.
And it doesn't matter if you're selling digital products or vegan cosmetics. Shopify helps you
sell everywhere from their all-in-one e-commerce platform to their in-person POS system. Shopify's
got you covered as you scale. Stop those online window shoppers in their tracks
and turn them into loyal customers
with the internet's best-converting checkout.
I'm talking 36% better on average
compared to other options out there.
Shopify powers 10% of all e-commerce in the US,
from huge shoe brands like Allbirds
to vegan cosmetic brands like Thrive Cosmetics.
Actually, back on episode 253,
I interviewed the CEO and founder
of Thrive Cosmetics, Karissa Bodnar,
and she told me about how she set up her store with Shopify
and it was so plug and play.
Her store exploded right away.
Even for a makeup artist type girl with no coding skills,
it was easy for her to open up a shop
and start her dream job as an entrepreneur.
That was nearly a decade ago.
And now it's even easier to sell more with less
thanks to AI tools like Shopify Magic.
And you never have to worry
about figuring it out on your own.
Shopify's award-winning help is there
to support your success every step of the way.
So you can focus on the important stuff,
the stuff you like to do because businesses that grow grow with
Shopify,
sign up for a $1 per month trial period at Shopify.com slash profiting.
And that's all lowercase. If you want to start that side hustle,
you've always dreamed of. If you want to start that business,
you can't stop thinking about if you have a great idea, what are you waiting for?
Start your store on Shopify.
Go to Shopify.com slash profiting now to grow your business no matter what stage you're in.
Again, that's Shopify.com slash profiting.
Shopify.com slash profiting for $1 per month trial period.
Again, that's Shopify.com slash profiting.
Young and Profiters, we are all making money.
But is your money hustling for you?
Meaning, are you investing?
Putting your savings in the bank is just doing you
a total disservice.
You gotta beat inflation.
I've been investing heavily for years.
I've got an E-Trade account, I've got a Robinhood account,
and it used to be such a pain to manage all of my accounts.
I'd hop from platform to platform.
I'd always forget my Fidelity password,
and then I have to reset my password.
I knew that needed to change
because I need to keep track of all my stuff.
Everything got better once I started using Yahoo Finance,
the sponsor of today's episode.
You can securely link up all of your investment accounts
in Yahoo Finance for one unified view of your wealth.
They've got stock analyst ratings.
They have independent research.
I can customize charts and choose what metrics
I wanna display for all my stocks
so I can make the best decisions.
I can even dig into financial statements
and balance sheets of the companies that I'm curious about.
Whether you're a seasoned investor or looking for that extra guidance,
Yahoo Finance gives you all the tools and data you need in one place.
For comprehensive financial news and analysis,
visit the brand behind every great investor, Yahoo Finance.com.
The number one financial destination, Yahoo Finance.com.
That's YahooFinance.com.
Honestly, you're teaching us so much.
I feel like a lot of people tuning in
are probably learning a lot of this stuff for the first time.
But one thing that we all are using right now
is ChatGPT, right?
So everybody has sort of embraced ChatGPT.
It feels like it's magic, right?
When you're just getting something
that is giving you something
that a human could potentially write.
So I have a couple of questions about ChatGPT.
You alluded to how it works a bit,
but can you give us more detail
about how neural networks work in general
and what ChatGPT is doing in the background
to spit out something that looks like it's written by a human?
The original inspiration for Neural Networks
was understanding something about how brains work.
In our brains, we have about roughly 100 billion neurons.
Each neuron is a little electrical device,
and they're connected with things
that look under a microscope a bit like wires.
So one neuron might be connected to 1,000 or 10,000 other neurons in one's brain, and these neurons, they'll have a little electrical signal and then
they'll pass on that electrical signal to another neuron. And pretty soon, one's gone through a
whole chain of neurons and one says the next word or whatever. And so this electrical machine,
lots of things connected to things, that's how people imagine
that brains work.
And that's how neural nets work: an idealization of that, set up in a computer where one has
these connections between artificial neurons, usually called weights.
You often hear about people saying, this thing has a trillion weights or something.
Those are the connections between artificial neurons, and each one has a number associated
with it. And so what happens when you ask ChatGPT something, what will happen is it
will take the words that it's seen so far, the prompt, and it will grind them up into
numbers. And it will take that sequence of numbers
and feed that in as input to this network.
So it just takes the words.
More or less every word in English gets a number,
or every part of a word gets a number.
You have the sequence of numbers.
That sequence of numbers is given as input
to this essentially mathematical computation that goes through
and says, okay, here's this arrangement of numbers, we multiply each number by this weight,
then we add up a bunch of numbers, then we take the threshold of those numbers and so
on. And we keep doing this and we do it a sequence of times, like a few hundred times
for typical ChatGPT-type behavior, a few hundred times, and then at the end we
get another number, actually we get another collection of numbers that represent the probabilities
that the next word should be this or that. So in the example of the cat sat on the, the
next word has probably very high probability, 99% probability to be mat and 1% probability
or 0.5% probability to be floor or something.
And then what ChatGPT is doing is it's saying, well, usually I'm going to pick the most likely
next word.
Sometimes I'll pick a word that isn't the absolutely most likely next word, and it just
keeps doing that.
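Here is a drastically simplified numerical sketch of the pipeline just described: words become numbers, the numbers are multiplied by weights, summed, thresholded, and the final numbers become next-word probabilities. The vocabulary, layer sizes, and random weights are all made up for illustration.

```python
# A drastically simplified sketch of the arithmetic described above. A real
# model has billions of weights and hundreds of layers.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "floor"]
word_to_index = {w: i for i, w in enumerate(vocab)}

rng = np.random.default_rng(0)
W1 = rng.normal(size=(len(vocab), 8))  # "weights": connections between artificial neurons
W2 = rng.normal(size=(8, len(vocab)))

def next_word_probabilities(prompt_words):
    x = np.zeros(len(vocab))
    for w in prompt_words:               # grind the words up into numbers
        x[word_to_index[w]] += 1.0
    hidden = np.maximum(0.0, x @ W1)     # multiply by weights, add up, apply a threshold
    scores = hidden @ W2
    exp = np.exp(scores - scores.max())  # turn the final numbers into probabilities
    return exp / exp.sum()

probs = next_word_probabilities(["the", "cat", "sat", "on", "the"])
for word, p in sorted(zip(vocab, probs), key=lambda pair: -pair[1]):
    print(f"{word:6s} {p:.2f}")
```

With untrained random weights the probabilities are meaningless; it is the training described next that makes them sensible.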
And the surprise is that just doing
that kind of thing, a word at a time, gives you something that seems like a reasonable
English sentence. Now, the next question is, how did it get all those, in the case of the
original ChatGPT, I think it was 180 billion weights, how did it get those numbers? And
the answer is, it was trained, and it was trained by being shown all this text from the web.
What was happening was, well, you've got one arrangement of weights. Okay, what next word does that predict?
Okay, that predicts turtle is the next word for "the cat sat on the." Turtle is wrong. Let's change that.
Let's see what happens if we adjust these weights
in that way.
Oh, we finally got it to say mat.
Great, that's the correct version of that particular weight.
Well, you keep doing that over and over again.
That takes huge amounts of computer effort.
You keep on bashing it and trying to get it.
No, no, no, you got it wrong.
Adjust it slightly to make it closer to correct.
Keep doing that long enough and you've got something which is a neural net which has the property
that it will typically reproduce the kinds of things it's seen.
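A minimal sketch of that training idea follows, using a single layer of weights and a toy word list of my own choosing; real training involves vastly more data, weights, and compute.

```python
# A minimal sketch of the training loop described above: show the model a word
# pair, see what it predicts, and nudge the weights slightly toward the correct
# next word, over and over. Toy data, toy model.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]
idx = {w: i for i, w in enumerate(vocab)}
pairs = [("the", "cat"), ("cat", "sat"), ("sat", "on"), ("on", "the"), ("the", "mat")]

W = np.zeros((len(vocab), len(vocab)))  # one row of weights per input word

for _ in range(500):
    for prev, nxt in pairs:
        scores = W[idx[prev]]
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        target = np.zeros(len(vocab))
        target[idx[nxt]] = 1.0
        W[idx[prev]] -= 0.1 * (probs - target)  # adjust slightly toward the right answer

print(vocab[int(np.argmax(W[idx["on"]]))])  # after training: "on" is most likely followed by "the"
```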
Now it's not enough to reproduce what it's seen because if you keep going writing a big
long essay, a lot of what's in that essay will never have been seen before.
That particular combination of words will never have been produced before.
So then the question is, well, how does it extrapolate?
How does it figure out something that it's never seen before?
What words is it going to use when it never saw it before?
And this is the thing which nobody knew what was going to happen.
This is the thing where the big surprise is that the way it extrapolates
is similar to the way we humans seem to extrapolate things. And presumably that's because its
structure is similar to the structure of our brains. We don't really know why when it figures
things out that hasn't seen before, why it does that in a kind of human-like way. That's
a scientific discovery. Now we can say, can we get an idea why this
might happen? I think we have an idea why it might happen. And it's more or less this,
that we say, how do you put together an English sentence? Well, you kind of learn basic grammar.
You say, it's a noun, a verb, a noun. That's a typical English sentence. But there are
many noun, verb, noun English sentences that aren't really reasonable sentences, like, I don't know, "the electron ate the moon."
Okay, it's grammatically correct, but probably doesn't really mean anything except in some poetic sense.
Then what you realize is there's a more elaborate construction kit about sentences that might mean something.
And people have been intending to create that construction kit for a couple of thousand years. I mean, Aristotle, at the time when he created logic, started
thinking about that kind of construction kit. Nobody got around to doing it. But I think
ChatGPT and LLMs show us there is a construction kit of, oh, that word, if it's "blah ate blah," the
first blah better be a thing that eats things. And there's a certain category of things that
eat things and it's like animals and people and so on. And so that's part of the construction
kit. So you end up with this notion of a semantic grammar of a way, a construction kit of how
you put words together.
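A toy sketch of that "construction kit" idea, with a couple of hand-picked word categories standing in for a real semantic grammar:

```python
# A toy version of the "semantic construction kit" idea: noun-verb-noun grammar
# alone accepts "the electron ate the moon", but a check on hand-picked word
# categories rejects it, because an electron is not the kind of thing that eats.
EATERS = {"cat", "dog", "person"}       # things that eat things
EDIBLE = {"fish", "apple", "sandwich"}  # things that can be eaten

def grammatical(subject, verb, obj):
    return verb == "ate"                # any noun-verb-noun with "ate" parses fine

def semantically_plausible(subject, verb, obj):
    return verb == "ate" and subject in EATERS and obj in EDIBLE

for sentence in [("cat", "ate", "fish"), ("electron", "ate", "moon")]:
    print(sentence, grammatical(*sentence), semantically_plausible(*sentence))
# ('cat', 'ate', 'fish') True True
# ('electron', 'ate', 'moon') True False
```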
My guess is that's essentially what ChatGPT has discovered.
And once we understand that more clearly, we'll probably be able to build things like
ChatGPT much more simply than its very indirect way of doing it, having this neural net and
keeping on bashing it and saying, make it predict words better and so on.
There's probably a more direct way to do the same thing.
But that's what's happened.
And this moment when it reaches human-level performance, it's very hard to predict when that
will happen.
It's happened for things like visual object recognition around 2011, 2012 type timeframe.
It's hard to know when these things are going to happen for different kinds
of human activities.
But the thing to realize is there are human-like activities, and then there are things that
we have formalized where we've used math, we've used other kinds of things as a way
to work things out systematically.
And that's a different direction than the direction that things like neural nets are
going in.
And that happens to be the direction
that I've spent a good part of my life trying to build up.
And these things are very complementary in the sense
that things like the linguistic interface that
are made possible by neural nets feed into this precise
computation that we can do on that side.
How does this make you feel about human consciousness and AI potentially being
sentient or having any sort of agency?
It's always a funny thing because we have an internal view of the fact that there's
something going on inside for us. We experience the world and so on. Even when
we're looking at other people, it's like it's just a guess.
I know what's going on in my mind; it's just some kind of guess what's going on in your mind, so to speak. And the big discovery of our species is language,
this way of packaging up the thoughts that are happening in my mind and being able to transmit them to you and have you unpack them and make similar thoughts, perhaps, in your mind, so to speak. So there's this idea of: where can you imagine that there's a mind that's operating?
It's not obvious. Between different people, we can always make that assumption. When it comes to other animals, it's like, well, we're not quite sure, but maybe we can tell that a cat had some emotional reaction which reminded us of some human emotion and so on. When it comes to our AIs, I think that increasingly people will have the view that
the AIs are a bit like them. So when you say, well, is there a there there? Is there a thing
inside? It's like, okay, is there a thing inside another person? You know, you say,
well, we can tell the other person is thinking and doing all this stuff, but if we were to look inside the brain of that other person, all we'd find
is a bunch of electrical signals going around, and those add up to something where we make the assumption that there's a conscious mind there, so to speak. So I think we have always felt that our thinking and minds are very far away from other things
that are happening in the world.
I think the thing that we learn from the advance of AI is, well, actually, there's not as much
distance between the amazing stuff of our minds and things that are just able to be
constructed computationally. One of the things to realize is this whole question of what thinks, where is the computational
stuff going on?
And you might say, well, humans do that, maybe our computers do that.
Well, actually, nature does that too, when people will have this thing, you know, the
weather has a mind of its own.
Well, what does that mean?
Typically, operationally, it means
it seems like the weather is acting with free will,
we can't predict what it's going to do.
But if we say, well, what's going on in the weather?
Well, it's a bunch of fluid dynamics in the atmosphere
and this and that and the other.
And we say, well, how do we compare that
with the electrical processes
that are going on in our brains?
They're both computations that operate according to certain rules. The ones in our brains
we're familiar with; the ones in the weather we're not familiar with. But in some sense,
in both of these cases there's a computation going on. And one of the things that was a
big piece of a bunch of science I've done is this thing called the principle of computational equivalence, which is this discovery, this idea that if you look at different kinds
of systems operating according to different rules, whether it's a brain or the weather,
there's a commonality.
There's the same level of computation is achieved by those different kinds of systems.
That's not obvious.
You might say, well, I've got the system and it's just a system that's made from physics, as opposed to the system that's the result of
lots of biological evolution or whatever, or I've got the system and it just operates
according to these very simple rules that I can write down. You might have thought that
the level of computation that will be achieved in those different cases were very different.
The big surprise is that it isn't. It's the same. And that has all kinds of consequences.
Like if you say, okay, I've got this system in nature, let me predict what's going to happen in
it. Well, essentially what you're doing by saying, I'm going to predict what's going to happen is
you're somehow setting yourself up as being smarter than the system in nature. It will take
it all these computational steps to figure out what it does, but you are going to just jump ahead
and say,
this is what's gonna happen in the end.
Well, the fact that there's this principle
of computational equivalence implies this thing
I call computational irreducibility,
which is the realization that there are many systems
where to work out what will happen in that system,
you have to do kind of an irreducible amount
of computational work.
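One way to see the distinction in a few lines of Python: for a "reducible" rule you can jump straight to step n with a formula, while for something like Rule 30 (shown earlier) no general shortcut is known, so you have to run every step. The specific rules here are my own toy choices.

```python
# "Reducible" versus "irreducible" computation, in miniature. For the first
# rule there is a formula that jumps straight to step n; for Rule 30 no such
# shortcut is known, so we have to run every step.

def reducible_state(n):
    # Doubling each step: step n is simply 2**n, no iteration required.
    return 2 ** n

def rule30_after(cells, n):
    # Rule 30 again: grind through all n updates, one at a time.
    for _ in range(n):
        cells = [
            (30 >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % len(cells)])) & 1
            for i in range(len(cells))
        ]
    return cells

print(reducible_state(100))                          # jump straight to step 100
print(rule30_after([0] * 10 + [1] + [0] * 10, 100))  # must actually run 100 steps
```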
That's a surprise because we have been
used to the idea that science lets us jump ahead and just say, oh, this is what the answer
is going to be. And this is showing us from within science, it's showing us that there's
a fundamental limitation where we can't do that. That's important when it comes to thinking
about things like AI, when you say things like, well, let's make sure that AI's never do the wrong thing.
Well, problem with that is,
there's this phenomenon of computational irreducibility.
The AI is doing what the AI does,
it's doing all these computations and so on.
We can't know in advance, we can't just jump ahead and say,
oh, we know what it's going to do.
We are stuck having to follow through the steps.
We can try and
make an AI where we can always know what it's going to do. Turns out that AI will be too
dumb to be a serious AI. And in fact, we see that happening in recent times of people saying,
let's make sure they don't do the wrong thing. We put enough constraints. It can't really
do the things that a computational system should be able to do, and it doesn't
really achieve this level of capability that you might call real AI, so to speak.
We'll be right back after a quick break from our sponsors.
Young and Profiters, I've been a full-time entrepreneur for about four years now. And I finally cracked the code on hiring.
I look for character, attitude, and reliability.
But it takes so much time to make sure a candidate
has these qualities on top of their core skills
in the job description.
And that's why I leave it to Indeed
to do all the heavy lifting for me.
Indeed is the most powerful hiring platform out there
and I can attract, interview and hire all in one place.
With YAP Media growing so fast,
I've got so much on my plate.
And I'm so grateful that I don't have to go back
to the days where I was spending hours
on all these other different inefficient job sites
because now I can just use Indeed.
They've got everything I need.
According to US Indeed data,
the moment Indeed sponsors a job,
over 80% of employers get candidates
whose resumes are a perfect match for the position.
One of my favorite things about Indeed
is that you only have to pay for applications
that meet your requirements.
No other job site will give you more mileage
out of your money.
According to Talent Nest 2019, Indeed delivers four times more hires than all other job sites
combined.
Join the more than 3 million businesses worldwide who count on Indeed to hire their next superstar.
Start hiring now with a $75 sponsored job credit to upgrade your job post at indeed.com
slash profiting.
Offer is good for a limited time.
I'm speaking to all you small and medium-sized
business owners out there who listen to the show.
This is basically free money.
You can get a $75 sponsored job credit
to upgrade your job post at indeed.com slash profiting.
Claim your $75 sponsored job credit now
at indeed.com slash profiting.
Again, that's indeed.com slash profiting
and support the show by saying you heard about Indeed on this podcast.
Indeed.com slash profiting. Terms and conditions apply. Need to hire? You need
Indeed. Yap fam, I did a big thing recently. I rolled out benefits to my US
employees. They now get health care and 401ks. And maybe this doesn't sound like a big deal to you,
but it was surely a big deal to me
because benefits were like the boogeyman to me.
I thought for sure we couldn't afford it.
I thought that it was gonna be so complicated,
so hard to set up, lots of risk involved.
And in fact, so many of my star employees
have left in the past,
citing benefits as the only reason why.
And here I was thinking that we couldn't afford benefits
when it's literally not that expensive at all
and you actually split the cost
between the employee and the employer.
I had no idea.
I found out on JustWorks.
JustWorks has been a total lifesaver for me.
We were using two other platforms for payroll,
one for domestic in US, one for international.
We had our HR guidelines and things like that,
employee handbook on another site,
and everything was just everywhere.
Now everything's consolidated with JustWorks,
a tried and tested employee management platform.
You get automated payments, tax calculations,
and withholdings with expert support anytime you need it.
And on top of that, there's no hidden fees.
You can leave all the boring stuff to Justworks
and just get to business.
And with automatic time tracking,
it has made managing my international hires
a little bit more soothing for my soul that I know that they're actually working and they're tracking their time.
I mean, it's really hard to manage remote employees.
It's easy to get started right away.
All you need is 30 minutes.
You don't even have to be in front of your computer.
You can just get started right on your phone.
Take advantage of this limited time offer.
Start your free month now at justworks.com slash profiting.
Let JustWorks run your payroll so you don't have to.
Start your free month now at justworks.com slash profiting.
Next, I wanna talk about how the world is gonna change
now that AI is here being more adapted by people, it's becoming
more commonplace, how's it going to impact jobs? And also, if you can touch on the risks
of AI, what are the biggest fears that people have around AI?
More and more systems in the world will get automated. This has been a story of technology
throughout history. AI is another step in the automation of things. When things
get automated, things humans used to have to do with their own hands, they don't have to do anymore.
The typical pattern of economies, like in the US or something, is 150 years ago in the US,
most people were doing agriculture. You had to do that with your own hands. Then machinery got built
that let that be automated.
And the people, you know, it's like, well, then nobody's going to have anything to do. Well,
it turned out they did have things to do because that very automation enabled a lot of new types
of things that people could do. And for example, we're doing the podcasting thing we're doing right
now is enabled by the fact that we have video
communication and so on. There was a time when all of that automation that has now led to the kind
of telecommunications infrastructure we have wasn't there and there had to be telephone
switchboard operators plugging wires in and so on and people were saying, oh gosh, if we automate
telephone switching, then all those jobs are going to go away.
But actually what happened was, yes, those jobs went away, but that automation opened
up many other categories of jobs.
So the typical thing that you see, at least historically, is a big category, there's a
big chunk of jobs that are something that people have to do for themselves.
That gets automated, and that enables what becomes many different possible things that you end up being able to do. I think the way to think about this is really the following:
once you find an objective, you can build automation that does that objective. Maybe it takes a hundred years to get to that
automation, but you can in principle do that.
But then you have the question, well, what are you going to do next?
What are the new things you could do?
Well, that question, there are an infinite number of new things you could do.
The AI left to its own devices.
There's an infinite set of things that it could be doing.
The question is, which things do we choose to do?
And that's something that is really a matter for us humans because it's like you could compute anything you want to compute.
And in fact, some part of my life has been exploring the science of the computational
universe, what's out there that you can compute.
And the thing that's a little bit sobering is to realize of all the things that are out
there to compute, the set that we humans have cared about so far in the development
of our civilization is a tiny, tiny, tiny slice.
And this question of where do we go from here is, well, what other slices are now
possible, which things do we want to do?
And I think that the typical thing you see is that a lot of new jobs get created around
the things which are still sort of a matter of human choice, what you do.
Eventually, it kind of gets standardized and then it gets automated, and then you go on
to another stage.
So I think that the spectrum of what jobs will be automated, one of the things that
happened back several years ago now, people were saying, oh, machine learning, the sort
of underlying area that leads to
neural nets and AI and things like this, machine learning is going to put all these people
out of jobs.
The thing that was sort of amusing to me was that I knew perfectly well that the first
category of jobs that would be impacted were machine learning engineers, because machine
learning can be used to automate machine learning, so to speak.
And so, once the thing becomes routine, it can be automated. And, for example, a lot of people love to do programming,
low-level programming. I've spent a lot of my life trying to automate low-level programming, in other words, with the computational
language we've built, which people are like, oh my gosh, I can do this. I can get the computer to do this thing for me by spending an hour of my time.
If I were writing standard programming language code, I'd spend a month trying to set my computer
up to do this.
The thing we've already achieved is to be able to automate out those things.
What you realize when you automate out something like that is people say, oh my gosh, things
have become so difficult now.
Because if you're doing low-level programming, some part of what you're doing is just routine
work.
You don't have to think that much.
It's just like, oh, I turn the crank, I show up to work the next day, I get this piece
of code written.
Well, if you've automated out all of that,
what you realize is most of what you have to do
is figure out, so what do I want to do next?
And that's where this being able to do
real computational thinking comes in,
because that's where it's like,
so how do you think about what you're trying to do
in computational terms so you can define
what you should do next?
And I think that's an example of the low level,
turn the crank programming.
I mean, that should be extinct already,
because I've spent the last 40 years trying
to automate that stuff.
And in some segments of the world,
it is kind of extinct, because we did automate it.
But there's an awful lot of people where they said,
oh, we can get a good job by learning C code, C programming,
C++ programming, or Python or Java or something
like this. That's a thing that we can spend our human time
doing. It's not necessary. And that's being more emphasized at
this point. The thing that is still very much the human thing
is, so what do you want to do next, so to speak?
It's a good story. Because you're not saying, Hey, we're doomed.
You're saying AI is going to actually create more jobs.
It's going to automate the things that are repetitive and the things that we
still need to make decisions on or decide the direction that we want to go in.
That's what humans are going to be doing sort of shaping all of it.
But do you feel that AI is gonna supersede us in intelligence
and have this apex intelligence one day
where we are not in control of the next thing?
I mentioned the fact that lots of things in nature compute.
Our brains do computation, the weather does computation,
the weather is doing a lot more computation
than our brains are doing.
So if you say, what's the apex intelligence in the world?
Already nature has vastly more computation going on than happens to occur in our brains.
The computation going on in our brains is computation where we say,
oh, we understand what that is and we really care about that.
Whereas the computation that goes on in the babbling brook or something,
we say, well, that's just some
flow of water and things. We don't really care about that. So we already lost that competition
of are we the most computationally sophisticated things in the world? We're not. Many, many
things are equivalent in their computational abilities. So then the question is, well,
what will it feel like when AI gets to the point where routinely it's doing all sorts of computation
beyond what we manage to do?
I think it feels pretty much like what it feels like to live in the natural world.
The natural world does all kinds of things.
Occasionally, a tornado will happen.
Occasionally this will happen.
We can make some prediction about what's going to happen, but we don't know for sure what's going to happen, when it's going to happen, and so on. And that's what it will feel like to be in a world where most things are run with AI. We will be able to do some science of the AI, just like we can do science of the natural world, and say, this is what we think is going to happen. What is going to be the infrastructure of society, and already is to some extent, but will grow, is more and more things that are happening automatically as a computational process. But in a sense, that's no different from what happens in the natural world. The natural world is just automatically doing things. We can try and divert what it does, but it's just doing what it does.
For me, one of the things I've long been interested in is how the universe is actually put together. If we drill down and look at the smallest scales of physics and so on, what's down there? And what we've discovered in the last few years is that it looks like we really can understand the whole of what happens in the universe as a computational process underneath. I mean, people have been arguing for a couple of thousand years
whether the world is made of continuous things,
whether it's made of little discrete things
like atoms and so on.
And about a bit more than a hundred years ago,
it got nailed down.
Matter is made of discrete stuff.
There are individual atoms and molecules and so on.
Then light is made of discrete stuff, photons and so on.
Space, people had still assumed was somehow continuous, was not made of discrete stuff.
And the thing we kind of nailed down, I think, in 2020 was the idea that space really is
made of discrete things.
There are discrete elements, discrete atoms of space.
And we can really think of the universe as made of a giant network of atoms of space.
And hopefully in the next few years, maybe if we're lucky, we'll get direct experimental
evidence that space is discrete in that way.
But one of the things that that makes one realize is it's sort of computational all
the way down.
At the lowest level, the universe consists of this discrete network that keeps on getting updated and it's kind of following the simple rules and so on.
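(A tiny illustrative sketch, in Wolfram Language, of the "simple rules, applied over and over" idea. This uses Wolfram's well-known rule 30 cellular automaton as a stand-in; it is not the actual network-rewriting rules of the physics project he describes:)

(* Run a simple local rule for 100 steps from a single black cell and plot the result. *)
ArrayPlot[CellularAutomaton[30, {{1}, 0}, 100]]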
It's all rather lovely. But that's computation everywhere, in nature, in our AIs, in our brains. The computation that we care the most about is the part that we, with our brains and our civilization and our culture and so on, have so far explored. That's the part we care the most about. Progressively, we should
be able to explore more. And as the computational X fields come into existence and so on, and
we get to use our computers and computational language and so on, we get to colonize more of the computational universe.
And we get to bring more things into, oh, yes, that's the thing we humans talk about.
I mean, if you go back even just 100 years, nobody was talking about all these things
that we now take for granted about computers and how they work and how you can compute
things and so on.
That was just not something within our human sphere.
Now the question is, as we go forward with automation,
with the formalization of computational language,
things like that, what more will be within our human sphere?
It's hard to predict.
It is to some extent a choice.
There are things where we could go in this direction,
we could go in that direction. These are things we will eventually humanize. It's also if you look
at the course of human history and you say, what did people think was worth doing? A thousand
years ago, a lot of things that people think are worth doing today, people absolutely didn't
even think about. A good example, perhaps, is walking on a treadmill.
That would just seem completely stupid to somebody from even a few hundred years ago.
It's like, why would you do that? Well, I want to live a long life. Why do you even
want to live a long life? Well, because, whatever; you know, in the past, that might not even have been thought of as an objective. And then there's a sort of whole chain of, why are we doing this? And that chain is a thing of our time. And that will change over time.
And I think what is possible in the world will change. What we get to explore out of
the computational universe of all possibilities will change. There will no doubt be people who you could ask the question, what will
be the role of the biological intelligence versus all the other things in the world?
And as I say, we're already somewhat in that situation. There are things about the natural
world that just happen. And some of those things are things that are much more powerful than us. We don't get to stop the earthquakes and so on.
So we already are in that situation.
It's just that the things that we are doing with AI and so on, we happen to be building
a layer of that infrastructure that is sort of of our own construction rather than something
which has been there all the time in nature.
And so we've kind of gotten used to it.
It's so mind blowing,
but I love the fact that you seem to have like a positive attitude towards it.
You know, we've had other people on the show that are worried about AI,
but you don't have that attitude towards it.
It seems like you're more accepting of the fact that it's coming whether we like it or not.
Right. And to your point,
we're already living in nature,
which is way more intelligent than us anyway.
And so maybe this is just an additional layer.
Right, I'm an optimistic person.
That's what happens.
I've spent my life doing large projects
and building big things.
You don't do that unless you have a certain degree
of optimism.
But I think also what will always be the case
as things change,
things that people have been doing will stop making sense. You see this in the intellectual
sphere with paradigms in science. I built some new things in science where people at
first say, oh my gosh, this is terrible. I've been doing this other thing for 50 years. I don't want to learn this new stuff. This is a terrible thing. And I think you see that; there's a lot in the world where people are like, it's good the way it is. Let's not change
it. Well, what's happening is in the sphere of ideas, and in the sphere of technology, things change.
And I think to say, is it going to wipe our species out?
I don't think so.
But that would be a thing that we would probably think is definitively bad.
If we say, well, you know, I spent a lot of time learning how to do, I don't know, write,
I don't know, I became a great programmer in some low-level programming language.
And by golly, that's not a relevant skill anymore.
Yes, that can happen.
For example, in my life, I got interested in physics when I was pretty young.
And when you do physics, you end up having to do lots of mathematical calculations.
I never liked doing those things.
But there were other people who were like, that's what they're into.
That's what they like doing.
I never liked doing those things.
So I taught computers to do them for me. And me plus a computer did pretty well at
doing those things. But it kind of went and automated that away. To me, that was a big
positive, because it let me do a lot more, let me take what I was thinking about, and get the sort
of superpower to go places with that. To other people, that's like, oh my gosh, the thing that we were really good at, doing all these kinds of mathematical calculations by hand and so on, that just got automated
away.
The thing that we like to do isn't a thing anymore.
So that's a dynamic that I think continues.
But having said that, there are plenty of ridiculous things that get made possible by,
you know, whenever there's powerful technology, you can do ridiculous things with it.
And the question of exactly what terrible scam will be made possible by what piece of AI, that's always a bit hard to predict.
It's a kind of a computational irreducibility story of this thing of what will people figure
out how to do, what will the computers let them do, and so on.
But I'm, yes, in general terms, it is my nature to be optimistic, but I think also there is
kind of an optimistic path through the way the world is changing, so to speak.
Well, it's really exciting.
I can't wait to have you back on maybe in a year to hear all the other exciting updates
that have happened with AI.
I end my show asking two questions.
Now, you don't have to use the topic of today's episode.
You can just use your life experience
to answer these questions.
So one is, what is one actionable thing
our young and profitors can do today
to become more profitable tomorrow?
And this is not just about money, but profiting in life.
Understand computational thinking.
This is the coming paradigm of the 21st century.
And if you understand that well,
it gives you a huge advantage.
And unfortunately, it's not like you go sign up
for a computer science class and you'll learn that.
Unfortunately, the educational resources
for learning about computational thinking aren't
really fully there yet. And it's something which, frustratingly, after many years, I've decided I
have to really build much more of these things because other people aren't doing it. And it'll
be another decade before it gets done otherwise. But yes, learn computational thinking, learn the
tools that are around that. That's a quick way to jump ahead in whatever you're doing,
because as you make it computational,
you get to think more clearly about it,
and you get the computer to help you jump forward.
Where can people get resources from you to learn more about that?
Where do you recommend?
Our computational language, Wolfram language,
is the main example of where you get
to do computational thinking.
There's a book I wrote a few years ago
called Elementary Introduction to Wolfram Language,
which is pretty accessible to people.
But hopefully in another, well, certainly within a year,
there should exist a thing that I'm working on right now,
which is directly an introduction to computational thinking,
but you'll find a bunch of resources around Wolfram Language that explain more how
one can think about things computationally.
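(One small, hypothetical example of what "making a question computational" can look like in Wolfram Language; the particular question is just an editorial illustration, not something from the episode:)

(* How many days from today until February 29, 2028? *)
DateDifference[Today, DateObject[{2028, 2, 29}]]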
Whatever links that we find, I'll stick them in the show notes. And next time, if you have
something and you're releasing it, make sure that you contact us so you can come back on
Young and Profiting Podcast. Stephen, thank you so much for your time. We really enjoyed having you on Young and Profiting Podcast.
Thanks.
Oh boy, YapBam, my brain is still buzzing
from that conversation.
I learned so much today from Stephen Wolfram
and I hope that you did too.
And although AI technology like ChatGPT
seemed to just pop up out of nowhere in
2022, it's actually been in the works for a long long time. In fact, a lot of the thinking behind
large language models has been in place for decades. We just didn't have the tools or the
computing power to bring them to fruition. And one of the exciting things that we've learned about
AI advances is that there's not as big a gap
between what our organic brains can do
and what our silicon technology can now accomplish.
As Stephen put it, whether a system develops
from biological evolution or computer engineering,
we're talking about the same rough level
of computational complexity.
Now, this is really cool, but it's also pretty scary.
Like, we're just creating this really smart thing that's gonna get smarter. And I asked him the question, like, do you think AI is gonna have apex intelligence and take over the world? I record these outros a couple weeks after I do the interview, and I've been telling all my friends this analogy. Every time I talk to someone, I'm like, oh, you want to hear something cool? And I keep thinking about this AI,
if it does become this apex intelligence that we have no more control over,
he said, it might just be like nature. Nature has a mind of its own.
That's what everybody says. We can try to predict it.
We can try to analyze nature. We can try to figure out what it does.
Sometimes it's terrible and
inconvenient and disastrous and horrible and sometimes it's beautiful. It's so
interesting to think about the fact that AI might become this thing that we just
exist with, that we created, that we have no control over. It might not necessarily
be bad, it might not necessarily be good, it just could be this thing that we
exist with. So I thought that was pretty calming
because we do already sort of exist in a world
that we have no control over.
You never really think about it that way, but it's true.
And speaking of AI getting smarter,
let's talk about AI and work.
Is AI gonna end up eating our workforce's lunch
in the future?
Stephen is more optimistic than most.
He thinks AI and automation
might just make our existing jobs more productive
and likely even create new jobs in the future.
Jobs where humans are directing and guiding AI
in new innovative endeavors.
I really hope that's the case
because us humans, we need our purpose.
Thanks so much for listening to this episode
of Young and Profiting Podcast.
We are still very much human powered here at Yap
and would love your help.
So if you listened, learned and profited
from this conversation
with the super intelligent Stephen Wolfram,
please share this episode with your friends and family.
And if you did enjoy this show and you learned something,
then please take two minutes to drop us a five-star review
on Apple Podcasts.
I love to read your reviews. I go check them out every single day.
And if you prefer to watch your podcast as videos, you can find all of our episodes on
YouTube. You can also find me on Instagram at Yap with Hala or LinkedIn by searching
my name. It's Hala Taha.
And before we go, I did want to give a huge shout out to my
YAP Media Production team. Thank you so much for all that you do. You guys are the best. This is your host, Hala Taha, aka the Podcast Princess, signing off.