L&D In Action: Winning Strategies from Learning Leaders - The Real-world Impact of AI: Education, Ethics, and Adaptation in the 4th Industrial Revolution
Episode Date: October 10, 2023
We very well may be living through a historic inflection point. The point at which artificial intelligence grows from an esoteric machination to a tool that revolutionizes how most of society does its... work. But what industries and aspects of our day-to-day lives will AI impact most? And what should we watch for as that impact begins to be felt? This week, Thomas Bergen and Marc Zao-Sanders join the show to discuss AI's impact on education, work, and our lives at large.
Transcript
You're listening to L&D in Action, winning strategies from learning leaders.
This podcast, presented by Get Abstract, brings together the brightest minds in learning and development
to discuss the best strategies for fostering employee engagement, maximizing potential,
and building a culture of learning in your organization.
This week, I have two guests: Marc Zao-Sanders, CEO and founder of Filtered, and Thomas Bergen,
CEO and co-founder of GetAbstract.
Thomas started GetAbstract nearly a quarter of a century ago and has since become a renowned
expert on leadership and learning in his own right.
Marc is a strategist turned learning technologist and writer specializing in productivity and habit formation. His company, Filtered, is a tech firm dedicated to helping organizations get the best
return on their L&D spend. This conversation initially took place as a live session during
the penultimate week of getAbstract's #GetAI campaign. This campaign, which is live
through December 2023, made about 100 summaries and other
learning resources totally free on the Get Abstract platform. The goal is to demystify
the fourth industrial revolution by making high quality information on artificial intelligence,
machine learning, and automation a little more accessible. Visit getabstract.com/getai if you
want to check it out yourself. Now, let's dive into the show.
Hello again, everybody. Thank you so much for joining us today for our second to last session of the Get AI campaign.
We're talking about the real world impact of artificial intelligence today.
Today, I have with me Thomas Bergen and Marc Zao-Sanders. Thank you guys for joining me
today. Nice to be here, Tyler. We are going to jump right in with some questions about all the
things that are happening right now in the world that are developing new technologies, as well as
those things that have been around for a while already, the things that are kind of subtly in
the background, working in our businesses and in our everyday lives and the AI that's there.
But there was one specific topic that Thomas was most passionate about.
So I'd kind of like to start there, if you don't mind, Thomas.
You talked about Khanmigo and the fact that in the future, we could all potentially have our own little AI tutors. And the fascinating thing to me is that everybody born after November of last year is, theoretically, you know, after ChatGPT, after this rise of chatbots. And it almost feels like, could this generation, in fact, be born into a world where we have these virtual, digital, artificially intelligent learning assistants? So I'd love to
hear your take on what's going on there.
One thing I was always interested in is tutoring, tutoring children to help them learn faster. And science has shown that if you have one-to-one tutoring, the probability that you are an above-average or even a good student is dramatically higher. I think it shifts the curve two sigmas to the right if you have one-to-one tutoring.
And this is a very intensive job. It takes a lot of energy, a lot of time. And we even started a little tutoring company at home during COVID, and it's still running. It's a nice endeavor. My son started it.
But when I saw Khanmigo live for the first time, Khan himself was presenting it in St. Gallen at the conference where I was speaking. I said, that's it, that's exactly what I was looking for. That's exactly what helps students, mostly K-12 so far, to democratize access to tutors somehow. And I'm a little bit sad that they are now charging 20 bucks per month for that service. I would rather have seen it given away for free. But obviously it's much cheaper than what you would normally pay for access to a tutor, and obviously that's only a first step.
Marc, what do you think?
So, I'm going to take the other side. I mean, of course, we're really taken with the developments recently, I am personally, my company is, we've incorporated GPT-4, so we're fans as well. But to just offer an alternative take on tutoring: one of the benefits of having a tutor, which I agree is
the most or one of the most powerful ways of learning, is that the tutor can pick up on
all sorts of subtle cues. So, you know, body language, timing. It's not just that sort of
the text and the words that are being uttered.
Now, of course, AI might get to those sorts of inputs in the not-too-distant future, but right now it can't do that. So do you think it's huge because those inputs are going to be coming, and so we'll get all of those benefits as well, or do you think there are some limitations that might be felt for some time to come?
Marc, I don't know, but here is what I definitely think about that:
In some respects, an AI tutor, the way I've seen it right now, is inferior to a personal tutor that really helps over years, for example, if you have a personal relationship with the tutor and it's growing.
But what you see on many tutoring platforms, where you pay 20, 25, 35 bucks an hour to have a tutor, is that very often the tutor changes, and he doesn't really get to know you.
And it's often only an online interaction; 90 percent of tutoring right now is done online.
And those subtle signals that you could perceive if you are together with a tutor in the same room are mostly not perceived by the other side
because you're sharing a screen that you're working on.
Your picture is a very small one.
I think AI doesn't have such a big disadvantage.
And the advantage that AI has: if you're working closely with it from the very beginning, it starts to understand how you tick, what the problems really are, and it goes back to the basics of those problems. And it doesn't just offer you the solution. Sam Altman said in an interview that they had to change the way it works, or rather Khan, together with Sam Altman, had to change it: they don't want it to show the solution, they want it to nudge you in the right direction. And it's working nicely. I had the pleasure to use it, and I have to tell you, I would have experienced school totally differently. I was not in a family that had access to tutors, so I had to learn things myself, and sometimes it felt very difficult. This would have helped me tremendously.
I really like the use case. Just this morning I
was teaching my son some maths and I went through the problem with him, guiding him,
and then he tried to confirm something, some understanding with me. Actually what he was doing
was giving an alternative example, but I didn't notice that. And I just wonder: not only are there some things where AI is going to catch up with human beings, but even with a private tutor, it's not like we tutor perfectly. We have our imperfections, and there are some of those the AI might pick up on. I wondered, actually, after my son and I had that interaction: wouldn't AI have picked that up?
I even felt a bit as a dad, like, well,
there's some sort of limit to my help
and computer code might've done a better parenting job
in that particular instance, just that.
Yeah, it is really fascinating.
And presumably you think that Clippy and Siri and smart speakers and all of those were just false starts. I mean, they were technological advancements, but they didn't get to a level, a kind of critical mass of quality, that we could really use. And now we have hit that critical mass, and now it takes off.
You know, obviously, I'm a little bit older than you guys, and I watched Star Trek back in the day. And I was so fascinated by this AGI in the background that was guiding people, helping people to be better, that was a coach, helping the captain to really make smart decisions. He still had to make the decisions himself, but there was always an AGI, a general intelligence, that helped him make smart decisions.
And I truly still dream that whatever we are doing is heading in that direction, that it helps us to be better humans.
And it doesn't replace us at the end of the day, but it helps us being better.
And in tutoring, I think there is an application that could really do that. I was dreaming of having school like two and a half thousand years ago in Athens, when you were discussing problems with Socrates. And he never gave you an
answer, he always just asked questions. But those questions led you to vastly improve your
own thinking. And I hope it's heading in this direction.
Yeah, I've noticed that a lot of the investment that's gone into AI apps for kids over the past 10-15 years has produced some quite smart, adaptive stuff, but it's really quite limited, confined to closed domains of knowledge like arithmetic. Arithmetic is pretty closed, a little bit like chess or spelling. That's also closed: there's only a certain number of words you're going to test a child on. But with this, you can now expand to really any topic. And actually the question becomes what we actually want to restrict, especially among young people. So yeah, despite showing some cynicism at the start, I agree with you, it's a very, very good use case. It's really
positive. Sam Altman, in an interview that he gave earlier this year, suggested that a private tutor may be the best and most valuable of all the many use cases of GPT-3.5 as it was then, and now GPT-4. And I think he was actively looking for a partnership with Khan Academy because he believed this would be a very early positive use case that would help to reduce the fear that GPT will obviously create in many respects.
Yeah, it'd be nice to see it proved soon, you know, with hard data.
At the moment, it's more like, well, we can see the capability, so it must do a lot. We're still in the first year, so we're not going to have exam results and hard data to prove the efficacy of this new technology, but that will come quite soon. It's still, what, ten months since they launched ChatGPT? They launched it in November of last year, but we're still within the first year.
I've seen some
studies now with classes in the United States that compared how fast children were absorbing the knowledge they needed to understand, and the results are astonishing. You will teach differently. I'm sure private schools will adopt this immediately, because it's just so superior. It's amazingly much better. It's more than one sigma.
Similar things are being used in organizations, in companies, in businesses.
I mean, maybe not quite like a tutor because as adults, you know, we're probably less inclined to need that sort of thing. But there is support in like workforce intelligence for identifying the best learning
opportunities and the best moments for learning opportunities when you're actually in the flow
of work, which is a huge thing in the L&D world. And also identifying your opportunities to grow
and to advance and to develop partnerships with certain people and to collaborate and to learn alongside or with others. And this to me just feels like the next step up, one that probably isn't quite as advanced, because we obviously don't take learning in our organizations as seriously as we do in the education industry and when we're younger.
But I'm seeing these things start to bubble up where organizations are actually taking in AI to help their people learn.
And I haven't had the chance to use these sorts of things. But I mean, what do you guys think about that? Is there a future where learning is as serious in our organizations as it is for our children because we've instituted this sort of AI guidance?
I think it'll be a lot more serious than it has been.
But I would just say that, you know, in education, that's what the kids are doing when they're zero through 18.
That is 100 percent of their time. I mean, there's some play, obviously, as well.
Whereas work is work, and learning feeds into work. But, you know, it's always going to be a proportion of the total time rather than everything. I think one of the other studies that's come out recently, which you might be hinting at, Tyler, is this Boston Consulting Group and Harvard study of knowledge workers, specifically BCG consultants, who were given some tasks. They were theoretical tasks, I don't think it was a real-world problem, but they were designed to be as real-world as they could be. A shoe factory, I think it was. Yeah, and some market sizing, writing a press release, the typical consultant, knowledge-worker material. And
ultimately, I think there were three findings. So one was that the consultants were 25% faster
at completing their tasks, the ones that had access to generative AI. And I think it was
GPT-4. They were 40% better. Now, how do you decide better? Well, the way they judged it in
the study was a mix of human and AI markers. And the AI markers and the human markers agreed
very, very closely, which is interesting in itself. And then the other thing that came from the study was that if you looked at the lower performers and you add AI to their tool set and higher performers and add AI, both improve, but the lower performers improve more.
So there's a leveling that happens through the application of AI in this particular context.
That study has gone down really well, and I think it was one of the first of its kind to demonstrate that AI alongside an ordinary knowledge worker (this term copilot is used quite a lot) can really enhance the quality of output.
I mean, speaking for myself as a user,
I do see it as a constant companion, a co-pilot now.
I have done for about a year, actually, since before we had ChatGPT. Because, I mean, if it's useful, why would you not? It's just one of the advantages, and now I would say that I'm even semi-dependent on it.
You know, if you took that away, I would feel vulnerable, weaker.
That's in work performance. You know, one small and silly example is my children. We speak Spanish at home, and the children, because we lived in the United States for five years, speak English with each other. So now that we came back to Switzerland, German was always a big task for them, for all the letters they have to write to the tax authorities or the military service. They always came to me and asked, Papi, can you help me? It doesn't happen any longer.
The quality of the letters improved so dramatically,
it's absolutely amazing.
Actually, Copilot starts to roll out more widely to enterprises on November 1st, I'm seeing.
And there's a lot that's going to be out there. Ultimately, the way they're pitching it, it seems, and I read that Microsoft blog piece on it, which came out a few weeks ago, I think they want a built-in assistant that understands the way you work: all of your scheduling, all of your little tasks, all of the apps and things that you use. The thing that really stuck out to me is that it's going to understand your data, the location of your data, your files and
all of that. They did a very specific study where they figured out how much time people spend
searching for files every day, and it's too much. Also, this does bring to mind concerns for me like
data privacy and that sort of thing, because that's always sort of a concern when it comes to AI.
But, yeah, just what this is ultimately going to do really seems like, you know, are we all going to have assistants sort of at our disposal that just know where everything is as if we had an office?
But it's just, you know, our virtual workspaces, that sort of thing.
So we're moving toward a very different work world, that's for sure.
One obviously big problem in the learning industry, for many companies, is how do we bring our young leaders to lead successfully? It's one of the biggest transformations in your business life, from not being a leader to being a leader for the first time, with a group of three, five, ten people. How do you do that? We see that many are failing at this. And I can really imagine that if you had a learning companion next to you to help, not only a human one, but one that analyzes how you interact in emails and everything, that could help tremendously.
And obviously, Microsoft is in a very, very, very good position to add those functions
to their AI tools.
What do you think, Thomas, of the tension, which Tyler hinted at, between privacy and functionality and convenience?
Because we all want the functionality and convenience
of something doing a really good job for us, with us,
and so to enhance our performance.
But we do start to feel a little bit uncomfortable
about it knowing intimate details.
I mean, some of it is quite intimate.
You know, when I'm doing something, with whom,
what files I'm working on, how much I'm working.
And if you think about who has access to this data,
I imagine it's a combination of Microsoft
and whoever the employer is.
I mean, in the case of the employer,
the interests won't always be perfectly aligned
with the employee.
There is a tension there.
So I wonder if regulation and privacy issues
will be a resistant force to the onset of this technology or whether
we're just going to go for convenience and the functionality is just so good that we
take the privacy hit and go with it. What do you think?
I was already surprised how much Instagram and Facebook and TikTok know about so many people.
So we seem to be willing to give up our privacy quite easily if the benefits are clear.
But I'm surprised about that in some aspects.
I wouldn't have thought that 10 years ago.
And I believe that from the moment the tools are very powerful, many employees will themselves say, look, it makes me so much better in my work. It just has to stay within the corporation, or something like that. I don't want spillover effects where that data becomes publicly available on the web; there needs to be some kind of fencing. But within my company, I will be willing to use it. And if you as an employee don't feel comfortable with that solution, you will always have the possibility to opt out. At least that's how I would like to see it.
And then you have questions, though. Let's say you've got two employees, and one of them is using AI and one of them is not, and the one with AI is doing better, their performance is enhanced, just like the studies are showing. That's presumably then going to be reflected in compensation and pay and what have you. And you do run into some potential ethical issues: well, if I took this stance for ethical reasons, because I don't want to give up my privacy, I'm being penalized in my career.
I wonder if that will have an impact as well.
I think there are a lot of questions
that haven't come up yet.
People are just so excited about this brand new thing,
and we don't really know how best to apply it.
There are a lot of very good ideas.
So over the next many years, this is all going to shake out one way or another. But I think there are a lot of unresolved tensions that will need to be resolved soon.
Marc, I even believe that we will adapt the structures of all our organizations to those new possibilities. You know, when you're leading a team now, five, ten, twelve people, that's about the max you can lead, up to twenty, but that's really it. It could be that with those tools, that's totally different. It could be that we suddenly see totally different organization models emerge, because you can lead differently.
You know, the way our organizations are organized is still based on Taylorism at the end of the day, where you say: this is your job, and I control you. Obviously it's not always like that; you try to empower, you give targets, management by objectives and everything. We are a little bit further on than Taylor, but it is still somehow a job of giving tasks, giving power, controlling and empowering. And it could be that with the help of AI we can come away from that quite dramatically.
I think that's true
in the workplace. I think it might even affect dynamics at home. It's also maybe most obviously true in the classroom for kids, where there have been moves toward a flipped classroom and different kinds of models. But if everyone's got a personal tutor that has near-infinite knowledge, the structure of classroom education can and probably should change. People have been saying that for a long time; I imagine you've had thoughts along those lines, Thomas. You know, this is imperfect, but how do you change a system that we've had for centuries? There's so much inertia. You need a very big change to give us the data and the bravery to change something so important, the education of our kids, and this might be that technology.
As I said before, I believe that places where private education is possible, those institutions will experiment with this early on, will sooner or later find a way to organize differently, and will have much better results. Because here in Switzerland we have a lack of teachers; we have a real problem finding enough teachers. Education is organized at the community and state level in Switzerland, not by the central government for all of Switzerland. So I could imagine that some communities and states become early adopters in some classes and do experiments. I wouldn't be surprised; Switzerland is quite well known for doing things like that.
And maybe the role of teachers changes, where they're more supervising the teaching that is going on in a classroom. So, yeah, the supply-demand dynamics might be shifted as well.
It's a little bit like we're watching a film, and a really exciting thing has happened.
And we're all just on the edge of our seats.
What's going to happen next?
I feel that a bit with this, that we're watching and observing, although we're part of it too. It's very hard for me to divorce what's happening from what has already happened
with technology that I grew up with. So like social media, which has been a big part of my life, using Instagram and Facebook and all those things for a while now.
And the paradigm for me when it comes to ethics and regulation, as we were initially talking about, or the example that I think of most, comes from my marketing background: I understand that
Microsoft advertising succeeds very well with a certain niche.
And that's generally older people who are kind of using those machines because Mac and
the Apple MacBooks have kind of taken over the younger generations.
But enterprises are very susceptible to Microsoft ads because of the
distribution of Microsoft computers and OSs to companies at large. That's where they still
succeed very well in their marketing and sales is with enterprises. So that's kind of what we're
talking about here. The OS is ultimately, it's like an office, it's a virtual office. And what
they're seeking to do is take all the things that you do in your day and put it into the computer. And in so doing, Microsoft and Apple, too, but anybody who creates an OS has created a system of advertisements that you see there.
It's usually a travel destination or something like that, because it's the background of the screen where you log in, and it says: want to learn more about this cool destination? You can click that and it will take you right there in Microsoft Edge, and then from there you're in Microsoft Edge, you're automatically using the Bing system, and now you're getting Bing ads. And to me, this is how the advertising system subtly infiltrates our world. We're all kind of aware of this now, and we can switch to whatever search engine or whatever browser we want to use.
But I'm thinking like is the next generation of technology that starts to come into our work life, this AI system and like copilot, is that going to figure out a way to take that data of when we're doing what at what time and with whom and start sticking little ads in there?
They're going to figure out a way to do that as well, much like they have with the OS.
And I sometimes fear that's maybe the next step. So do you guys ever have thoughts about this? When AI is taking up so much of our attention and it's helping us so closely with our work, is that sort of influence of advertising, the things that we see on social media and television and all of our media, something that we have to think about from a regulatory standpoint? Is that where we're going as well?
It's got to get paid for, right? So, you know, we can't get these technological benefits
without it being paid for somehow. And that's why almost all of us opted into Google and the web and search, and we do pay for it through Google Ads and some other revenue streams that they have. And obviously, like you said, the same has happened with social media, and there's controversy there. But really what's going on is that you're letting this technology get closer. The reason it can help you is that it's getting closer to your brain, to your mind. That brings the benefits. That also brings a big opportunity to get you to change your behavior in some way. So unless it can be economically resourced without that, I think the same thing will happen. Why would it be any different from Google Ads, social media, Facebook ads? This is the same thing: it's getting closer to our minds.
Yeah, Marc, what you are describing is a possible way, but we have to be very careful. Because with AI, the influence is not like Google Ads, where you search for something and you know the first five or ten links are sponsored, and on the right-hand side you have ads, and they know what you are interested in. It could be that the influence is exerted at a much psychologically deeper level: the information and the solutions you see will lead you over time in the direction of this or that. And that would, in my opinion, undermine a big value of AI. If at the end there's a product or a political agenda somehow embedded into the large language model, that would be very sad for me. I would really struggle then to use it the way I've been using it until now.
But even now, Thomas, Microsoft owns almost half of OpenAI. They can ring-fence the work in theory, but there are also some design decisions that go into the training data. So what training data is available? Is it the kind of training data that is going to benefit Microsoft and Bing and LinkedIn Learning and some of the brands and companies that lie within Microsoft, or not? I guess what I'm saying is that even now there are questions about bias in all sorts of respects, one of them being commercial bias, and potentially bias toward Microsoft. And it's quite hard to test for this sort of thing. So I completely agree with you that what I would like as a user is a pure, agnostic technology that really is on my side and trying to help me, without the motivations of others. But when others have engineered that technology and others are paying for it, I just wonder if that's the way the world works.
Obviously, it seems that sooner or later, every big company will have their own AI as well.
When we are looking at the possibilities in Switzerland, we have many pharmaceutical companies, and they don't want to share their data, their insights. But they want to train a large language model, because it seems that one of the big opportunities we see is helping to guide research through insights you gain with the help of AI. And they don't want to share those large language models with others. That's why they spend those 50 to 100 million to ramp up their own. That's why some companies producing those chips, Nvidia, for example,
just had an incredible run this year. And one professor at the University of St. Gallen came to us and said, Thomas, could you help us, for example, to establish a test scenario to check whether all those large language models are still producing the right knowledge? Because we are creating all this new content, and you have to test it with fresh content. And I think one of those tests will soon be related to this. I don't know if we are heading in this direction, it was just an idea, but one test will be: how strongly is this large language model, this AI, influenced by corporate interests or government interests, or is it still independent?
Yeah, so detecting bias. Detecting bias, absolutely. Because over time,
this could happen without you even wanting it to happen, just by the content you are using
to train that large language model. But you have to be aware of that, otherwise you can't use it properly in a corporate environment in the long run.
But it is very hard. And I'm saying all these negative things, though, like I said at the start, I'm a real enthusiast and an adopter, personally and professionally, of these technologies.
But I play chess. I've played chess since I was a kid. And if you're a chess player, especially over that period of time, the 80s, the 90s, and then the last twenty-something years, one thing we've noticed is that computers were worse than the best human players. Eventually they overtook them in the late 90s, and now you don't even have matches between the best computers and the best human beings. One of the changes that enabled really rapid improvement of chess-playing computers was using neural networks, moving away from rules-based engines, which are basically understandable in human terms.
They'll say, well, this position is good because I've got more space for my bishop and my king is safe.
And that's exactly how a human being might rationalize the chessboard.
But when they moved to a neural net, basically the AlphaZero model, one, it was a lot more effective.
But two, you just couldn't tell why it made the moves
that it made. And it's exactly the same with all of these. So yes, we want to detect bias, but it's
practically impossible in a neural net, almost by definition. You're giving it training data,
you've got a very sophisticated neural net which may have ways of improving itself
and you let it run and hopefully you like the results.
But if you don't like the results, then on what basis are you judging that?
Because you can't go back into the rules and say, oh, there's this calculation here
and we just need to make that adjustment.
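To make Mark's contrast concrete, here is a toy rules-based evaluation of the kind he describes: every term is a human-readable rule you can point to, which is exactly what a neural-net evaluator lacks. The piece lists and weights are illustrative, not from any real engine.

```python
# A toy "rules-based" position evaluation: each term is an inspectable,
# human-readable rule (material, mobility), so you can say exactly which
# calculation produced the score. A neural-net evaluator has no such terms.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(white_pieces, black_pieces, white_moves, black_moves):
    """Score a position from White's point of view, rule by rule."""
    material = (sum(PIECE_VALUES[p] for p in white_pieces)
                - sum(PIECE_VALUES[p] for p in black_pieces))
    mobility = 0.1 * (white_moves - black_moves)  # more space, small bonus
    return material + mobility

# White is up a knight but slightly cramped:
score = evaluate(["Q", "R", "N", "P", "P"], ["Q", "R", "P", "P"],
                 white_moves=20, black_moves=25)
print(score)  # 3 (extra knight) - 0.5 (mobility penalty) = 2.5
```

The point of the sketch is the transparency: when the score is 2.5, you can trace it to the knight and the mobility count, which is precisely the kind of adjustment Mark says you cannot go back and make in a neural net.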
Yeah, it's no longer if-then.
You know, when we started programming it was basic: if this happens, then do that. Those neural networks are programmed differently, and the results will be different. But what's important for me is
for the user to understand what biases exist. You know, if you're talking to
a person, and that's no different than me talking to Mark or to Tyler, I would like to know who they are. What is their
personal view on some important topics? Do they believe, in the background, that human rights are
relevant or not? And what values drive Mark to take
a decision. And sooner or later we need to know from our AI what values and what ideas
are driving it to produce insights.
And we need to measure that somehow, because otherwise
the outcome could be totally different
in one environment than in another. And you need to understand that,
otherwise we as mankind won't be able to improve. Yeah, and there's one issue with
the bias, that we've just both been talking about. But there's also the issue of belief, too much belief in what the AI is producing. In fact, coming back
to that Harvard and Boston Consulting
Group study, one of the findings was
that there was a certain group of human users
that started to just rely on the outputs from the AI.
And because the outputs are very often really believable,
it's not just that there might be some bias in there;
people are not really checking.
After a while, you just say, okay,
I'm just going to let these answers go. And that, I think, is a problem for a
couple of reasons. I mean, one, if the AI is getting it wrong, you don't spot it. But also, in terms of
who we are and what we're doing as human beings, if we let the AI take over the thinking and we're just a passenger,
you know, there's a question of purpose then. Absolutely, that's a very good
example, I like it, Mark. I think the article even said that the results
were 20 or 22 percent inferior for the people who used AI, compared to the other group, in the cases where the AI
was not trained with content that helped to solve the problem. But the users couldn't detect that
the AI was missing that information, because it made it up.
And this is obviously very crucial, very, very crucial.
Yeah, the hallucinations, which are not just hallucinations,
they're very convincing hallucinations, which is the worst kind.
So that's why the co-pilot framing fits at the moment,
because it's exactly that:
you can't fully trust the other pilot.
You need to stay aware,
but there are some benefits too.
So let's talk about the organizational side for a little bit.
We have about 15 minutes left.
The two of you are founders and leaders of businesses
and you're engaging with this every single day.
Thomas, I see in our chats how sort
of fervent you are about keeping up to date with what's happening in AI, how many questions you get
and how it's impacting businesses. And I think mostly what I see in the news when I'm looking
at other leaders that are advising the economy and the industry is a lot of concern. The SoftBank
CEO out in Japan said something very out there the other week,
saying, you know, if you do not learn rapidly as an organization or as an individual,
you will fall behind and you will in fact be forgotten, a pretty,
you know, macro social warning there. But a lot of different authors and writers
are saying, like, you need to be, as an individual,
willing to learn, willing to adapt here more than ever in the past.
And as organizations, you need to move fast on these things, because other organizations are going to move fast.
And it just feels like a self-fulfilling prophecy, where we all need to take serious,
rapid action.
We have an abstract on a book called Rewired. The author says that we should have about 70 to 80 percent of our digital talent in-house very soon. So we shouldn't be outsourcing as many things; we need to bring those things in-house so that we have more control over the data sets and more visibility into the things we've been talking about with regard to the AI we build. Machine learning operations will be very important. So
incorporating, or having, people who actually ensure that the AI is developing correctly as
it goes when we build things. I think about McKinsey. Many months ago,
shortly after ChatGPT 3.5 was announced, they announced their own sort of internal AI that
takes all their hundreds of thousands of cases, all the information in there, and turns it into its own little large language model.
And they allow their consultants to use that, much like probably what BCG was doing.
But for other organizations of different sizes, you know, do you guys agree that we have to be moving really, really fast here, that we probably have to be bringing people with AI knowledge in-house?
I know, Mark, you know,
you've had an AI tool for a while. You even kind of went up against it much like you were talking
about AlphaZero and chess. But what do you guys think? Do we have to move fast and do we have to
really think about this in a very serious way? Yeah, I think we do have to, I mean, I know that
Thomas and GetAbstract are. I know that we are.
And we're talking to a lot of companies that are. I mean, I would also just say, though, that I remember when the Internet became a thing.
I was just leaving university then and it was really becoming popular.
And there was this talk, you know, that there would be these two tracks:
there are the people that adopt it, learn more, learn from each other, and just
become this sort of hyper-educated class, and then there would be everyone else. And that was a wrong
analysis, because the internet was so useful, not just useful but entertaining, to everyone, that it had wide-scale, near-total adoption.
We as human beings, we adapt to what we need to do.
It's a survival instinct.
So I think if this really is as groundbreaking and useful,
it won't be that there are some people that just ignore it.
It'll be that everyone just comes along.
Of course, some people will be faster, some companies will be faster, but in general, the direction of travel will be very much towards this version of AI
and the various developments that come.
So, yeah, we're moving fast and I'm enthusiastic,
but I do think that everyone will come along, not even consciously,
but because it just is a much better, more effective way of going about your life.
But do you feel that some companies will be left behind if they don't move faster and keep up with the competition?
I think a lot of people are more concerned about that these days.
I don't think it's absolute.
I don't think that if they're six months too late,
then they go out of business.
If they're six months too late,
that will cause them some issues down the line,
and they'll be behind.
But just like some bad business decisions will cause you
to lose out to some of your competitors.
And then they will level up.
And some maybe don't move fast enough at all,
and they will actually go out of business.
But I think that'll be the minority, because the evidence will be so...
One of the differences between this technology
and the AI developments we've seen over the last 25 years
is that you've got hundreds of millions of people using this.
You never had that with any of the other developments of AI.
So just in terms of pure numbers,
it's kind of inescapable,
and obviously it's continuing to grow.
So I think this is one that human beings are opting into, and there'll be some winners and some losers, but everyone's going to come along.
Thomas, what do you think?
There are some different points that you have to look at.
First of all, or at least, I try to look at it from different angles.
One is the individual and the other one is the company, okay? I think people,
individuals, mostly the people who can use it, at least I see it in schools now,
they use it, or many, many do. I think ChatGPT was the fastest-adopted application in the history of IT: within three months, I don't know how many were using it, but it was an adoption rate that was
just mind-boggling, because you immediately felt it makes your life easier in many aspects.
So people will immediately adapt, and they are using VALL-E
and all the other tools that are out there,
and you see new stuff coming out daily now.
Obviously, if you are a software engineer, you're using it,
because it increases your productivity tremendously. So that's one aspect,
and I believe many fears are not even coming from this side. I think the
adoption on the corporate level, if it
has an impact on your business model, if you are now suddenly in a competitive
situation with what AI can do, and at what level. Imagine that India was the
service center for the world. We introduced, two or three
months ago, a Q&A AI; the first application at GetAbstract
was a very easy application, just a Q&A application.
We fed it with all the questions we had
over those 20 years, with all the answers we had.
And guys, it was immediately so good,
so good at the cases people working for GetAbstract used to handle, like when somebody can't
put in his credit card number, which is really a problem. Really a problem. And for some
nations that are now heavily in this servicing business, adaptation will be quite a tough one. And you've made me think, actually,
you know, you've reminded me, Thomas, that, as I was saying before, well, everyone's
going to pick it up, but there are certain industries where you need to be so on top of this. Like
some of the customer services, and automating that,
it's really a big part of,
or going to be a big part of, your industry.
I mean, possibly within 12, 18 months,
massive, massive change.
So there are things where you've really,
really got to be on top of this.
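The kind of Q&A support application Thomas describes, answering new questions from twenty years of past questions and answers, can be sketched as a simple retrieval step. The data and the overlap scoring here are purely illustrative; a production system would use embeddings or a proper search index.

```python
# Bare-bones sketch of a support Q&A bot: retrieve the stored answer
# whose historical question best overlaps the new question's words.

HISTORY = [
    ("I can't enter my credit card number.",
     "Clear the payment form cache and retry."),
    ("How do I reset my password?",
     "Use the 'Forgot password' link on the login page."),
    ("Where can I download my invoice?",
     "Invoices are under Account > Billing."),
]

def tokens(text):
    """Lowercase word set, with trivial punctuation stripped."""
    return set(text.lower().replace("?", "").replace(".", "").split())

def answer(question):
    """Return the stored answer for the most word-similar past question."""
    q = tokens(question)
    best = max(HISTORY, key=lambda qa: len(q & tokens(qa[0])))
    return best[1]

print(answer("My credit card number is not accepted"))
```

Even this crude word-overlap version shows why a system fed with two decades of real support traffic gets good quickly: most new questions are close variants of old ones.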
Massive changes, yes.
And if you're not doing it,
you are out of business.
I believe one of the industries directly affected as well is the learning industry, to be honest.
This, what's happening, is disruptive for the learning industry.
And we will see solutions that are so much better adapted to help people to learn faster in schools and in corporations.
This will be mind-boggling, truly mind-boggling. So if you don't adapt to that,
I cannot see a big future for you within the next two, three years. In learning
specifically? In learning specifically, yes. Yeah, I was going to ask you that question:
what sort of timescale
are you thinking that's over? You think two, three years? Yeah, I think that's probably right.
My impression, you know, from the people in learning and development that I speak to,
and because we're a tech business, we're generally talking to those that are more
interested in AI, but in general, they say there's a lot of interest,
but where it can be deployed best, how to trust it, the questions are still there, 10 months on.
So I think 2023 has been very much more about awareness and getting people a little bit more comfortable with the idea and obviously some experimentation.
I think 2024 will be a year where there are some deployments that are starting to really get some traction and possibly the following year, you know, some major changes.
You know, some businesses that have been big businesses in our industry may not exist anymore.
Simple as that.
That probably won't be in next year so much, but I think 2025.
So similar sort of timescale to you, Thomas.
And as well, for us at GetAbstract, we move now heavily into micro-learning
and into helping groups to learn together more
efficiently. Human interaction is still very relevant on the one side of our
business model, where we use our abstracts and all we have learned. We
just launched this AI yesterday, or the day before yesterday, trained with all
our abstracts, as a search function,
or a search-AI function.
And then on the other side of our business model, obviously, sooner or later, somebody
needs to figure out what knowledge on the web that's available for free is really fake-free
and what is not.
So what we have been doing for 25 years for books and podcasts and videos,
we will just do now for all the content that's produced out there and say,
look, that's certified.
We tested it.
We looked at it.
But we have to adapt.
We have to adapt.
We do have one really good question from one of our audience members about
learning more individually.
And if we can circle back to
that, I think that's a good way to finish up. Jillian asks: if we move into
using AI in education, how do we keep from training our children to become too reliant on, and too trusting
of, the AI, and, you know, becoming passive learners in that sense? So how do we teach our
children to keep that co-pilot mentality and really question whether the information is accurate and actually makes sense? We talked about how Khanmigo is being built with that concern,
moving people toward the answer, but, you know, different disciplines, that's going to be a much
bigger challenge, especially sort of, you know, the non-positive sciences, the less hard sciences,
abstract things. But ultimately, what do you guys think about that? How do we make sure that our
children don't just become sort of complacent passive learners if they end up using a tool
like that? I'll say two quick things, Thomas, if it's all right that I jump in. One is that, in the immediate
term, we can point out some of AI's failings, like, you know, these hallucinations. Sometimes
it doesn't get the logic of quite simple maths right.
So there's just some examples where it's not working out here.
And then longer term, I think we do need to reconceive what the strengths of human thought
and human capability are.
But once we do that, then we stress that those are the areas where we really need to
retain our autonomy; that's our strength relative to AI, and we make that known.
So I think that's part of the education and that would be part of how education changes
over the coming months and years is understanding where we're good and where AI is much better.
And of course, there's more and more where AI is better, but it won't be
everywhere. What I would like to add as well is the following. You know, the
learning part of school, or of university or college, the learning part: what
Pythagoras is, what a squared plus b
squared equals c squared means, how to calculate a triangle, and
all that. This will be done very efficiently with the help of such tools,
I think, and I spent thousands of hours in school learning it.
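The Pythagoras example Thomas gives is exactly the kind of mechanical calculation such tools handle instantly; as a quick illustration:

```python
# a squared + b squared = c squared: solving for the hypotenuse of a
# right triangle, the rote calculation a tutoring tool can do in an instant.
import math

def hypotenuse(a, b):
    """Length of the side opposite the right angle."""
    return math.sqrt(a**2 + b**2)

print(hypotenuse(3, 4))  # 5.0
```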
But what this allows us, at the end, is that it frees up time to have much more meaningful discussions in school.
And suddenly there are a lot of Socratic discussions again, that type of schooling.
And there, obviously, one important part will be what is true and what is a lie.
And philosophy, which was moved out of many curriculums, will suddenly again be an important part: how to discuss and reason,
how to detect what's right and wrong, how to develop our society, how to raise
children. Discussions, and the ability to talk to each other, which we have totally lost in many of our societies. It's not taught at school, but we will suddenly have the time to do so.
If we are more efficient at the one part, we will have time for the really
important part, where we feel so much pain right now that it's not working.
So I'm very, very, very positive on the possibilities, how society can move, in what direction it can move if we use it properly.
It's like every invention. A hammer can be used to put a nail into the wall and fix a house, and it can be used to hit somebody on the head.
It has several possibilities, but
I truly believe we as mankind are capable of using it properly and taking full advantage of it.
Fingers crossed, yes. Thank you for that message, Thomas. That's a great way to wrap up, I think. Thank you so much, everybody, for attending. We are finished now. And just for your knowledge, we do have one more session next week. It's going
to be with Stephen Miller, author of Working with AI. So we're going to do that next Friday.
This session will also be made available. All of the sessions that we've done
over the course of Get AI will be made available as videos subsequently, probably
in about a week or two, and you'll receive emails about that. But again, thank you so much for joining us,
Thomas and Mark. It was a pleasure chatting with you guys, and I will talk to you all offline.
Thank you, everybody.
Bye.
Thanks, guys.
You've been listening to L&D in Action, a show from Get Abstract. Subscribe to the show in
your favorite podcast player to
make sure you never miss an episode. And don't forget to give us a rating, leave a comment,
and share the episodes you love. Help us keep delivering the conversations that turn learning
into action. Until next time.