L&D In Action: Winning Strategies from Learning Leaders - L&D Sentiment 2024: AI Top of Mind, Concerns about Resources, and Big Hopes for Showing Value
Episode Date: March 26, 2024
It's that time of year again: The results of The L&D Global Sentiment Survey are in! And this year, to no one's surprise, Artificial Intelligence reigns supreme. In fact, AI "won" the primary question, "What will be hot in workplace L&D this year?" by an unprecedented margin, earning more than 21% of selections from the 3,200+ respondents. But of course, we want to dig deeper, so Donald Taylor joins us to discuss the free-text responses he received, how respondents are feeling about AI (and why), the change in significance of social learning, a hopeful focus on showing learning value, and much more.
Transcript
You're listening to L&D in Action, winning strategies from learning leaders.
This podcast, presented by Get Abstract, brings together the brightest minds in learning and
development to discuss the best strategies for fostering employee engagement, maximizing
potential and building a culture of learning in your organization.
In this episode, I welcome my first returning guest, none other than Donald H. Taylor.
As it is the end of quarter one, Don and I have our now annual chat about his L&D global
sentiment survey, where he asked thousands of practitioners what will be hot in workplace
L&D this year.
If you haven't checked out the survey yet, the points of emphasis are unlikely to surprise
you.
We're talking about AI folks, but I promise we do our best to provide cogent analysis
that delves deeper than common speculation.
Of course, there's much more on the table
in this conversation as we spend time
looking at the open-ended responses given in the survey,
which communicate concern, curiosity,
and plenty of hopeful perspectives
on achieving new learning goals.
If you don't know Don,
he's a globally renowned L&D thought leader,
chair of the Learning Technologies Conference, author, consultant, keynote speaker, and king of the cool LinkedIn
headshot.
Let's dive in.
Hello, and welcome to L&D in Action.
I'm your host, Tyler Lay.
And today I'm joined by Donald Taylor.
Don, it's great to have you on once again.
Thanks for joining me.
Great to be back, Tyler.
We spoke about this time last year,
maybe a little bit later in the year to chat about the global sentiment survey that you do annually,
one of the most important resources in the L&D world. And I'm excited to chat about it once more
because we have some pretty radical changes, maybe predictable, you know, maybe expected radical
changes. But we're going to go over, you know, a few of the different questions that you have on there, the main one being, you know,
what's hot in L&D, as well as some of the free text responses. We'll spend some
time on those. I would like to kick off by asking: what happened that so many
more people responded to the free text question? Previously, about 40% of
people chose to answer that optional question. Now you are at 94% this year.
I took the survey this year because I felt, you know, as if I deserved to, having done
this podcast for a year now. There was no $50 Starbucks card.
There's no Amazon gift or anything like that.
But nearly everybody answered that free text response.
So what the hell do you do to get more people to share?
Are they more eager? What's going on there?
The question is, what's your biggest L&D challenge
this year?
And I think the answer is there for you straight away.
People had a lot more on their minds
that they wanted to share.
Actually fewer people this year answered the survey
than last year, about 18% fewer.
And I think the reason for that is that the headline question is what will be hot next year? Everybody
knew what the answer to that was, AI. People weren't too crazy about answering it, but the
people who did come back to the survey, those people had stuff to share and they really wanted
to share their concern. Whereas last year and the year before, 40% of people answered the question,
this year was 94%, which is over 3000 people sharing what troubles they've got this year.
And by the way, if I gave everybody who answered it, 3000 people, a $50 Starbucks card,
that's $150,000. Tyler, that's a lot of money.
I can't afford to sponsor to that degree, but I'll buy you a coffee next time I see
you.
Cheers.
Appreciate it, Don.
I assume you ran these free text responses through some sort of AI tool.
I believe you've done that in previous years.
No, no, no, actually.
No, it goes through an algorithm.
It's not very complicated, but there are things that sift through it.
But I've not found yet, and each year I look,
an AI tool which does things to my satisfaction. So what I do is, it's a slightly laborious
methodology. I know what the keywords are because there are tools you can use to count the words.
I count the words, find out what the most popular words are, just determine which ones of those are
important or not important. So the word learning
is not important because it doesn't tell us anything about the answer, whereas the word
time is slightly ambiguous, but is important, and the word budget certainly is important.
So we break these words out, we search for those words, and using those words automatically,
with these algorithms, categorize something like 64% of them.
Not everything can be categorized automatically.
And then we go through and try to categorize everything else manually.
That gets it up to about 84% that we can categorize. So it's an algorithm-and-human-intervention-based approach
rather than what I would love is being able to just press a button and everything
fall into their categories nice and neatly.
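The word-counting and keyword-matching approach Don describes can be sketched roughly as follows. This is a minimal illustration only: the keyword lists, category names, and sample responses here are invented assumptions, not the survey's actual taxonomy or data.

```python
from collections import Counter
import re

# Hypothetical keyword-to-category mapping; the survey's real nine
# categories and keyword lists are not reproduced here.
CATEGORIES = {
    "resources": {"budget", "cost", "fund", "finance", "resource"},
    "technology": {"ai", "technology", "tools"},
    "delivery": {"delivery", "attend", "training"},
}

def top_words(responses, n=10,
              stopwords=frozenset({"learning", "the", "and", "to", "of", "or"})):
    """Count the most frequent words, skipping uninformative ones
    (e.g. 'learning', which tells us nothing about the answer)."""
    counts = Counter(
        w
        for r in responses
        for w in re.findall(r"[a-z']+", r.lower())
        if w not in stopwords
    )
    return counts.most_common(n)

def categorize(response):
    """Return every category whose keywords appear in the response;
    a single comment may fall into more than one category."""
    words = set(re.findall(r"[a-z']+", response.lower()))
    return sorted(cat for cat, keys in CATEGORIES.items() if words & keys)

# Invented sample responses for illustration.
responses = [
    "AI",
    "Not enough budget or time to do this job",
    "People don't have time to attend the training",
    "Showing value to stakeholders",
]
auto = [r for r in responses if categorize(r)]
coverage = len(auto) / len(responses)  # the uncategorized rest go to manual review
```

Responses the keywords don't catch (here, "Showing value to stakeholders") are exactly the fraction that gets categorized by hand, mirroring the jump from roughly 64% automatic to 84% total coverage Don mentions.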
Give it a year. I think the next one you'll be able to do that. I do believe that AI came out as
the most popular term this time in the free text, right? I saw a post or two where you mentioned
that. Can you tell me what it was? Was it excitement, apprehension, critique, description of the
current use? Are people jaded? What's the attitude? What's the feeling around AI in this free text response?
So the main question, what's going to be hot this year? The response was AI. It was AI by
an enormous amount. So 21.5% of people responding said AI is going to be the hottest thing this
year. And in the past, if you got between 10% and 13%, you were top of the pile.
That was an enormous change, and it was uniform across every group and so on.
Okay.
AI is super hot.
It's also the first time that we've had AI be not just what was topping the question
about what was hot, but also topping people's concerns.
It was easily the thing people were most concerned about.
60 people answered just by saying AI. That was all they wrote in that box.
They doubled down for you.
Yeah, really. And by the way, of course, when I say AI, we're searching for AI,
also IA, because of intelligence artificielle and all the Romance language variations on that.
A.I. and artificial intelligence. We have to find all those terms.
But yeah, AI came out as being the thing people are most concerned about.
And I think that was what was driving it.
And if you look at the way we group things together,
there's a group of words and a group of responses which tell us
that the comment is about technology.
Of the nine categories that we put these responses into, technology was
number nine last year. This year, it was number one.
Interestingly, it wasn't just AI. The term AI was the most popular of all terms this year.
But it wasn't just AI that drove that category. Actually, the word technology
was more used this year as a concern than in previous years.
It's almost like being worried about AI has driven a general
concern about technology as a whole. I think that sort of sums up people's attitude this
year towards AI and technology that it's large and it's unknown and it's a bit scary.
In this vein, AI can also represent or sort of cover many of the other options in that question number one.
So adaptive and personalized learning, for instance,
a handful of these, even just like proving value,
proving the value to the business, utilizing AI
to demonstrate the value of learning,
to hire leadership, that sort of thing.
AI kind of seems like it can be a catch-all
or it can be a solution to what's actually on the mind of these folks. Do you think that that has to do with
why AI is also just kind of taking over in this case?
Yeah, for sure, for sure. Because it's ill-defined and because it's a catch-all, as you say,
and because it can power so many things, it took votes away from a number of other things. There's no question about it.
But interestingly, there's a bundle of things which together either rose up or didn't fall down very much this year, which are distinct to the rest of the table.
So, yeah, AI shot up and the noise of that seems to have drowned out everything else. But the other things that rose up this year
were personalization.
And undoubtedly, that is the top use of AI
in L&D's mind at the moment.
Sorry, after creating content faster.
So after content, personalization's the main thing.
And then learning analytics.
Now, both of those things actually rose up this year.
And skills-based talent management,
which relies on AI typically, fell by 0.1% so almost
nothing. So there was a chunk of options which held their ground and then everything else pretty
much fell away. So I think yeah, you're absolutely right that it's a really mixed bag of options,
that we have things which are methodologies, things which are technologies,
things which drive other things
and which other things can rely on.
I'm interested in what's going on in people's heads.
What are they excited about and enthusiastic about?
And there's a clear trend when you look at it
from that point of view towards the idea of data and skills
as being what people are thinking is important right now
versus delivery mechanisms.
Whereas, certainly in 2016,
if we go back to when we first settled on having a list
of 16 options to choose from, you go back that far,
delivery, creating content and delivery was much more,
I think, on people's minds as being hot.
Now it's all about the data, all about the skills.
So I think you're right, AI is a weird,
catch-all term to have, but I think if you look beyond that
to how the table's performing, it does tell us
what's on people's minds.
Technology more broadly, of course,
I think also covers all of these things.
And in one of the segments of your reporting on this,
I don't know if it was the first announcement that you made
when you completed the survey
and you released that first report,
but you had a few different categories
that represented what those free text responses
broadly fell into.
And one of them was resources, I believe.
I think that was a large one.
Can you clarify what exactly that means?
Is that time and budget, as you mentioned before,
is that those plus other things?
What exactly do you mean by resources?
Yeah, it is time. It's time. I mean, time is ambiguous. I actually have to go through anything
that mentions the word time and put it in one place or another and sometimes in both, because
a comment could fall in more than one category. But if somebody mentions the words resource,
cost, fund, finance, budget, or variations of that. So it could be financing, budgeting, or whatever.
Then that's a resource issue.
And time either falls into that or it falls into other issues,
normally around delivery.
So it's either people don't have time to attend the training,
that's a delivery issue, or I don't have time to do this job,
that's a resource issue.
And actually one of the really common themes that came up in the comments this year was that old idea of, oh, I'm being asked to
do the same amount with half the resources or one phrase was twice as much with half as little.
And that was a really common theme that came through this year that people feel under-resourced.
Even though people are demanding a lot, it's not like people are
saying we're being underused.
There's a lot of demands being made, but the resources aren't there to support it.
So this gives me pause and a reason to think about what AI represents to learning
and development professionals right now.
Because to me, what I've seen, and I think we talked about this last time on our call
is that, you know, AI really took over media, social media in particular, you know,
LinkedIn. It really won the algorithm, and people are still talking about it on a
regular basis. It's a very important topic. It is minimally understood by the
masses, you know, just defining it alone, but utilizing it and understanding it,
more importantly, within an
organization. And I just want to dig a little bit deeper to try
to identify what exactly it is that causes so many people to
just write the word AI in their free text, in addition to
respond AI is the top thing. Is it that many of these folks have
used AI themselves, even if it's independently or sort of like
for their own individual work with chat GPT?
Is it because it, like I said, won social media
and algorithmically was just like so pervasive
and just abundant on LinkedIn and Twitter
and all those spaces?
Is it, you know, just that it's a viral target?
Is it because so many services have promised it?
I'm sure you saw whenever you went to a live event
like I did, like a conference,
every single booth had AI on it. And, you know, whether it was because there's a new application
or because we're developing a new application, every ed tech provider is trying to create, or has already
created, something that is arguably AI, or AI-enabled, that sort of thing. So
have most people actually used AI with success? Is that why they're
saying that AI is what's hot? Or is it because it's just been so pervasive and it's this sort
of self-fulfilling prophecy of it's everywhere, we feel it, but we're not quite using it yet?
Between this survey and the AI in L&D survey, what do you think resulted in
such great excitement about it in their responses here?
It's important to remember that this survey asks people what they feel. It's
not about what they're doing. Now, Egle Vinauskaite and I have done our work trying to
establish what people are actually doing with L&D and AI. But here, we're just asking people,
what do you think? What do you feel? And I have come to the conclusion after 11 years
of doing the survey that there's a thing called the wordscape: it's like the landscape
that surrounds us and affects us unconsciously.
Or to put it another way, as you say, if AI wins the algorithm on LinkedIn and
elsewhere, then you can't avoid it and you're
going to mention it. So the example I use of this is firstly, yes, on LinkedIn, it's all pervasive
and Ethan Mollick, who we've all got a lot of time for, does a great job following AI and explaining it
to people; he's an associate professor at Wharton, I think. And he said in October, I can't keep up.
Now, if he, whose job it is to keep ahead
of this, can't keep up, the rest of us are doomed. So there was that real feeling that
we were just being hit by this deluge of stuff online, but it wasn't just online. It was
also in press, it was on TV, on radio, if you're listening to it. I know this is a podcast,
but I'm going to hold up for
Tyler the front page of a newspaper in the UK which, if you're in the US,
is a bit like the National Enquirer. So it's the sort of newspaper that would normally have a soccer
star or a pop singer or perhaps a member of the royal family or some quirky story about a TV soap
on it. But here's one-
Trustworthy news.
Yeah, exactly. So here's a story.
It's a story from October.
And it says, Psycho Scumbag: chatbots aren't all bad.
And the story is illustrated with a Terminator-like skeletal
AI robot on the side.
Yeah. Of course, with the glowing red eyes.
Yeah.
Normally you'd expect a humorous story about a
sausage roll or something on the cover of this, but we've got a
Terminator instead. That was one story they ran. I checked, and I
was able to find very quickly ten stories that the Daily Star ran
last year. It wasn't really reporting on
AI. It was a bit of scaremongering about AI. But that's all part of this pervasive wordscape.
If you walk past these in the gas station or in the supermarket or your newsagents and
you're just going to get yourself out a chocolate bar or whatever, you see this stuff, it just
sinks into you. You can't avoid it and
AI becomes part of the wordscape. So it was so pervasive, we couldn't escape it. We will never,
I don't think, ever see anything like this again. I say that now. I don't think we'll ever see
anything hitting 21.5% on the survey. It is the result of an entire year of those words being in every channel that you pay attention to.
Doesn't mean people are doing anything with it. Doesn't mean people are using it. Doesn't
mean people understand it. It just means people are reflecting it in their sentiment, which is
what the survey is all about.
Okay. Well, you showed me that from the UK. I've seen plenty of
similar newspaper articles, in addition to just what I see on LinkedIn. But the LinkedIn stuff is for sure global.
It's coming from people in every country. Yeah, I have, you know, for sure, connections in every country. But geographically,
how did this break down for you? AI was pretty popular,
it seems like, across the board, but was this, you know, all over the world?
I think you also check by sector and that sort of thing.
Do you have any percentage breakdown as to where it was most popular, where it was least, or was it
pretty even across the board? What can you say about that?
The very, very quick top-level view is
that it was absolutely unanimous worldwide. So we have 10 regions of the world. It was number one
in each region. But there were also 10 places people can work: they can work in L&D, they can work
for a vendor, they can work in education. It was number one in each of those sectors.
There are 16 key countries
where most of the voting comes from.
It was number one in 15 of those.
Oh, what's that like?
Yes.
Thailand had reskilling and upskilling as number one.
And generally in Southeast Asia,
AI did not score as high as reskilling and upskilling. So in Thailand, reskilling and
upskilling was high. In fact, of all of the regions, Southeast Asia was the area where there
was the least interest. It hit 16% there. The US was the area with the greatest interest, where it hit
23%. Now, the interesting thing about that is that that's a shift from last year, because last
year the score was not nearly as high in the US.
In fact, the areas that led interest in AI last year in 2023 were Turkey, strangely,
and Israel, which were places...
I mean, Israel has a very strong startup sector.
There's a lot of government investment in IT. Turkey
was a surprise to me. Now, that didn't follow through this year. This year, it was like
a leveling up. I mean, Ireland, for example, which has a very strong tech sector, had a
score of 4.3% for AI last year. Sorry, 4.2% for AI last year. This year, it was 20.3%.
And it was almost like all of the European countries,
all of the countries which were voting
at different levels for AI last year,
everybody rose up to more or less the same level,
which was about 20% or more.
So it was almost like it had taken a bit of time
to catch on in a few places,
but then everybody got on board
and everybody was
voting for it. But the US leads the way, which is interesting. US leads the way, North America leads
the way and US leads the way. And Southeast Asia is kind of at the other end of that. And in the
middle is Europe and UK, which make up together about half of the voting. Sorry, I should just
be clear. When I say Europe and UK,
I'm not making a political point.
It's just that geographically,
it's easier for me to say,
well, we've got the European votes,
we've got the UK votes,
and they're big enough to be differentiated.
Okay.
Are there any other significant percentages
that you would like to throw out there?
You mentioned Thailand's number one
was upskilling, reskilling, I believe.
That's some kind of a thing.
Yeah, that's a bit crazy.
Yeah.
Anything else from the UK or any other high voting populations that were unique,
even if it was something that was below AI, but any big jumps in a specific country or a specific region?
Well, Italy was the first time we've had a good result from Italy this year.
And of all the European countries, that was the country that voted the most for AI with 27%, which I don't know. I mean, in Europe, I associate UK a bit with AI
because we have a strong startup sector,
Sweden certainly, the Baltics as well.
Italy, I can't explain.
I haven't done my webinar with Italy yet.
Every year I really look forward to doing my webinars
with the individual countries,
chatting through the results with them.
I'm doing that tomorrow, Tyler. So I'll be able to send you an email tomorrow saying exactly why I think Italy did so, voted so heavily for AI.
No, there are no other crazy percentages that really stand out geographically.
I do think that what we're going to see is a come down next year, a sort of hangover from the craziness of AI this year.
And I'm very much looking forward to seeing
what happens when people wake up
after the AI binge drinking
and how big is the hangover?
What are they thinking?
Which options then recover and which don't? Because one of the options that really took a hammering this year, or a collection of options
that took a hammering this year, was the value options. So there are three options, showing value,
consulting with the business, and performance support, which had been almost level for five
years, in fact, trending slightly upwards, which is unheard of.
And this year they all took a hammering by about the same amount.
It can only be due to AI's popularity.
So if popularity of AI sinks next year, will they recover?
Let's see.
You also mentioned that there was sort of an end to the pandemic effect where things
like social learning and collaborative learning
have officially kind of gone down in favor of tech-focused things. I'm curious as to whether
you think this is an identification that AI and technology can actually serve us, potentially
serve us so greatly in the way that we learn, that we need to do less of this collaborative learning,
that we can actually optimize more precisely and in a more nuanced way with adaptive and personalized learning.
And we actually don't need as much social learning
or we don't want as much.
Or if it's just a natural,
like AI is going to be more important this year
and we will come back to social and collaborative learning
and that will kind of over time remain as important
as we've already decided it is.
Because if there's one thing that I've learned
through this podcast and speaking to 40 something guests
so far, almost everybody agrees that some form
of social learning, collaborative learning,
if done correctly, cohort based, if done correctly,
it seems to work really well.
Whether it's you learn something
and then you teach your colleagues,
you teach your colleagues your own sort of realm
of expertise, whether you're learning in a group,
sort of like a classroom setting,
the university system
is still somewhat popular out there. It seems to me like it
works from all the experts that I've spoken to. So I'm hoping
that it's just kind of a recognition that AI is hot this
year, and that this isn't a we're getting rid of the social
learning aspect. What do you think about that? I'm sure it's
not going away entirely. But are people less enthusiastic about
social and collaborative learning now that we have potentially these great tech solutions to the fore?
Look, it's a great question, and you frame it really well, Tyler. What had happened was that
collaborative slash social learning had been going down for five years, and then 2021 came,
and it lifted up, and 2022 came, and it lifted up even more. This is during the pandemic.
Then, and during that period, we saw these votes going up
and we thought, well, what's going on?
Then afterwards, it fell quite a lot
the first year after the pandemic.
And then this year it fell again by the same amount.
Quite dramatic drops.
Again, it's important to notice
this is not people sitting down,
thinking through a complex series of ideas
before making a vote. The thing is designed to be a pulse check on sentiment. At least half the
people answer this in two minutes or less, and that includes writing into the free text answers.
Right? So it's very much, how do I feel about this right now? So people aren't having this whole
debate in their heads typically around, is it AI? What should I do?
It's just how do I feel about it?
Which is the point of the survey.
And I think there are two things going here.
One is the absolute sense during the lockdowns that we need people.
And it wasn't reflected in we need people for learning.
Possibly it was.
But it was more we need people.
And so anything about collaborative, I'm going to take.
I think it was more we need people. And so anything about collaborative, I'm going to tick. I think it was that simple. Plus going back to the wordscape idea, during lockdown, the word social was being
mentioned everywhere, social distancing. So if you see the word social on something, it has more
resonance for you. Sounds a bit weird, but I'm pretty sure that had an effect as well. And then
afterwards, you've got the contrast with AI. Suddenly, AI is sucking up everybody's attention.
And this thing, which was really important,
at a visceral level, people really felt it.
That had gone, and suddenly, there was a new kid in town,
and all the money was going on that.
So you express a wonderful set of contradictions and ideas.
I don't think all that was going through people's heads.
I just think it was, we need people to, oh, look at that.
I honestly think it's as simple as that, as the switch in voting.
But I have brought you, Donald H. Taylor, onto the show to tell me authoritatively what
you think is going on.
So, I didn't realize this: if social learning was actually going down for the several
years prior to COVID, I mean, that's still people's feelings and still just a broad indicator,
but does that give you a reason to think?
And just based on your experience in the field
and all the companies that you've worked with
and the people you've spoken to,
do you feel as if social learning is decreasing?
Collaborative, you know, people-based learning
is going to decrease over time.
And now perhaps even more rapidly,
while we try to figure out how to do like hyper-personalized and adaptive learning with all this tech? Do you think it's going to go down?
Collaborative and social learning took off in 2006 with the publication of Jay Cross's book,
Informal Learning, the Natural Pathways and something. I can't remember the whole subtitle.
Okay. That really drove interest in the whole idea of social collaborative informal
learning for a number of years. And prior to 2016, the option of collaborative social learning was
on the up and it peaked in 2016 and it started coming down. The question is, are people talking
about it? No, people aren't talking about this right now. And if people aren't talking about it,
it's not going to be reflected on the survey.
Doesn't mean it's not important, it is.
It absolutely is.
And I've literally just before we spoke
was talking to a fabulous startup
who's doing great work with getting people to learn
by sharing stuff together.
And there are a number of people in that field
I could mention who are doing really good work.
So there's interest in it and they've got customers.
If they've got customers, it indicates that people are interested in this idea.
So if it's coming down the survey, all that means is that people don't think it's hot.
So it could be one of two things.
Either, well, people aren't talking about it so it's not hot, or no, we've got this
one covered.
We know what's going on with mobile delivery, for example, which had had a steady downward progression until last year when I took it off. So I think it could be one of those two
things. I also kind of think that when the AI, I'm not going to say bubble bursts, because I don't
think it's ever going to happen. I think AI is going to become part of our everyday lives,
like electricity is. If you look at what happens in one or two years time when we are able to use
AI, like we use electricity to power things, what would it be powering? Well, you could
do an awful lot of personalization and supporting people in their learning and pairing people
up in the right groups using AI for social learning. So I think it may well see a resurgence.
By the way, personalization had also
been on a downward trend since 2016. It'd been going downwards for seven years until last year.
It picked up this year and that's on the back of AI. No question about it. AI goes up. People think
AI personalization, yes, and they give it a tick in the box. So as we see more applications and
more practical uses of this weird amorphous thing called AI,
which nobody could put their finger on, as soon as that starts resolving into different things
which actually are useful, one of those is going to be social and collaborative learning,
and I expect to see an uptick.
One example perhaps of this is AI being involved in upskilling and re-skilling.
You recently did a talk, I think it was sort
of like a webinar type thing, a talk on AI as a tool for re-skilling and upskilling.
Again, this is an example of how AI kind of is a catch all for some of the other options
on there.
It can be used to achieve X, Y or Z. What is the sentiment around that right now?
Is AI a valid tool for teaching? Not in and of itself, obviously it has to be combined with
something, but do we trust it enough to enable our teaching to optimize and make our teaching better?
For skill mapping and assessing, you know, workforce intelligence, that sort of thing,
I think has been around a little bit longer than a lot of other realms of AI for defining and creating
learning paths, something like that. How deep are we into trusting AI with specifically
upskilling and reskilling right now, that realm of teaching?
Well, I mean, that's a whole podcast in its own right. It's a great question, Tyler. Let
me give you the general sentiment, then let me tell you what I think. So I think generally
people are super excited about the idea of skills because it's something the World Economic
Forum has been talking about.
We know going back to January 2020 that their report on the reskilling revolution was what added rocket fuel to the whole debate about reskilling and upskilling.
So we know that's huge. All right. Okay.
We know that managers and executives love the idea of skills because it sounds like something you can put your finger on and say somebody can do this or can't do that and therefore that will help us plan for the future.
Great. Okay, so executives are on board, everyone's talking about it because the World Economic Forum
is talking about it and it looks like AI can help us understand skills better. Great. So the general
sentiment is at last we've cracked the skills code
and we can understand or infer the skills people have
in the organization by looking across the digital footprint
and understanding what skills they have.
Now, people are doing this
and they're doing it quite successfully and I'm delighted.
I am concerned that this idea of AI fueling
what I call skills-based talent management has
been oversold. And there's a risk that at some point, because the claims have been so
high, and the expectations raised so much, at some point there'll be a reckoning where
it just proves a little bit difficult to actually make this work. And for me, the key issue
here is that
if you really want to use skills-based talent management,
if you want to help people develop in their jobs,
recruit people and so on,
you need to have quite a sophisticated view of skills.
You can't really say someone can speak French or not,
someone can play the piano or not.
What level can they do that to?
That's where it starts getting difficult
because unless I've got something which tells me,
right, your level two looks like this
and level three looks like this,
and to get from two to three,
I need to do these things to improve my ability.
Unless you've got that, you don't have what you need
to drive a proper skills-based talent management approach.
And it becomes a bit too simplistic and binary.
In some cases that works,
but I think for the sophisticated needs
that people are looking for, it's not enough.
My concern then is that we're gonna have a bit of a slump
in enthusiasm for this in the future.
So can we do it?
Yeah, I think we can.
It can be done on a simple level, very effectively.
The more sophisticated hopes that we've got, I worry, won't be met. I'd be delighted to be proved wrong, because it would be great for everybody if this could
be a tool that finally cracks the difficult issue of identifying the skills people have
got and need.
Yeah.
Thank you for articulating that because my first two episodes on this whole podcast were
Simon Brown and Ravin Jesuthasan.
And we talked at length about skills
and what they mean in the AI future.
I think it was Simon, who was previously at Novartis,
who described how they had identified something
like 5,000 skills that had some sort of significance
in their workforce intelligence system.
And that made my head spin.
I was like, that seems like far too many.
And I can imagine what many of those are.
And I can imagine they break it down into like,
if you're a programmer, like different coding languages.
And I started to think about my own job.
I'm like, I'm sure podcasting is one of those skills.
And I started to think like, what does that really mean?
It's not a binary, like you have this skill or you don't.
It's like, maybe you've done this thing once or twice, and that is enough to say that you are capable of doing it. But are you good at it? And if this is
a business, this is the world of capitalism, are you good enough at it to promote your company or to benefit your company in such a way that it grows? Or maybe you're not good enough
at it, but you can convince others that you are, and then it brings the company down. So that to me
has always been the big question: how do we go beyond identifying skills to actually
quantifying and qualifying them, and the strength that we have, and all of that?
And being able to do that is good in its own right because it enables people to know where
they are. But for me, the key point of skills-based talent management is to be able to give managers a tool to have a dispassionate conversation with employees, to say, Jane, I know you want
to get into this job, but I can tell you, you haven't got this skill at this level.
And I can tell you that because these are the behaviors that we expect to see.
It could be soft skills, could be hard skills, whatever.
Let's say you're doing webinars.
One of the skills you need to be a level three webinar host is the ability to respond rapidly and fluently to questions and chat, and you're not doing that fast enough
at the moment.
Maybe you can do it.
Let's put you through a training program.
Let's give you a stretch assignment, whatever, to get you there.
That is what ideally skills management is about.
It's about those conversations that are between manager
and individual to help them develop.
And everything comes down to that very simple
but crucial part of management.
If we don't have that observable behavior,
then I don't think it's as useful as it should be.
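The skill model Don describes here, levels defined by observable behaviors with a path between them, can be captured in a small data structure. The following is a minimal, hypothetical sketch; all skill names, levels, and behaviors are invented for illustration, not taken from any real taxonomy:

```python
from dataclasses import dataclass

# Each level of a skill is defined by observable behaviors, so a manager
# can point to concrete gaps rather than a binary has-it / lacks-it flag.

@dataclass
class SkillLevel:
    level: int
    behaviors: list[str]  # observable behaviors expected at this level

@dataclass
class Skill:
    name: str
    levels: list[SkillLevel]

    def gap_to(self, current: int, target: int) -> list[str]:
        """Behaviors still to demonstrate to move from `current` to `target`."""
        return [
            b
            for lvl in self.levels
            if current < lvl.level <= target
            for b in lvl.behaviors
        ]

# Invented example matching the webinar-host scenario in the conversation.
webinar_hosting = Skill(
    name="Webinar hosting",
    levels=[
        SkillLevel(1, ["runs a session from a prepared script"]),
        SkillLevel(2, ["handles slides and polls without support"]),
        SkillLevel(3, ["responds rapidly and fluently to questions and chat"]),
    ],
)

print(webinar_hosting.gap_to(current=2, target=3))
# prints the level-3 behaviors the employee still needs to show
```

The point of the structure is exactly the conversation Don describes: `gap_to` turns "you haven't got this skill at this level" into a concrete list of behaviors a manager and employee can discuss.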
In most cases, I think this comes back to, you know, where do humans continue
to play a part in the AI enablement?
And this is a really good example of managerial oversight.
And I think it also emphasizes the importance of maybe reconsidering what our
managers actually do on a day-to-day basis and how serious they are about
promoting their people.
John Boudreau, who is the co-author of many books with Ravin Jesuthasan, whom I just mentioned; we spoke about this on a live webinar
several months ago. And I think one thing that's going to happen is that the profile
of what a manager is and looks like is probably going to change over time,
and their ability to utilize these tools and taxonomies and that sort of thing will probably
evolve as well. Is that fair to say?
I think you're 100% right theoretically. I hope we get there sooner rather than later. My concern
is that it's a complicated thing and changing, in particular changing the tacit assumptions about what people do in the workplace, like
what do managers do, is a huge ask because we just assume, without it being spelt out
by anybody, that their job is to keep people in line and to feed information upstairs and
to do a bunch of other managerial things. Increasingly, that business about managing
the flow of information is not part of their job, but absolutely ensuring
that they've got the right talent in the organization, staying in the organization, growing in the
organization. That's an increasingly important part of the job for the future. And I personally
believe the organizations which succeed best are going to be those which take that view
of management seriously. Yeah, I agree. Speaking of feeding information upward, inevitably each year on the survey,
I'm pretty sure you're getting responses, especially in the free-text option,
and this is also represented in showing value: folks who say that they want to prove
that learning is more important than executives think it is, or
is in fact the most important thing to a successful business. That learning is just absolutely
critical, because we see it each year now with layoffs and that sort
of thing. Unfortunately, resources is always one of the concerns, but there were a couple of people
in one of the posts that you sent out on LinkedIn talking about mindset
or just the value of learning.
That was the focus of their free-text response.
So I'm curious if you suspect that these people who respond in this way to
the survey see this as a communication challenge of theirs, where
they have to find a better way to demonstrate to their leaders that learning
is valuable, and that they already have the resources, they just need to
communicate it better and maybe find a few cases that will really prove
it to the leaders. Or is it not yet possible to demonstrate because we have
limited ways of showing our ROI and analytics and data, but perhaps with new
technology, perhaps with some sort of AI application, it will then become easier
to convince executives that that's the case? Or is it a third option, something like that? Do you
have any sense what people are thinking?
I have a sense of what people are thinking and I have a sense of what the reality is.
So my sense of what people are thinking is: showing the value of learning is difficult.
And that's true. It's not insuperable. It's not impossible. But the words value, impact, ROI, investment,
metrics, data, all go into one big bucket called data, analysis, and impact.
But the key word in that whole bucket is value, which is the one that gets chosen the most.
And people want to show the value and they think it's difficult.
They don't say why they think it's difficult. My assessment
is that most people think it's difficult because they're missing some magic formula.
That's not the case, of course. It's very easy to describe how you show the value of learning
in a monetary sense. It's also easy to describe how you show the value of learning in a practical
sense, which is you've just got to meet the business demands.
The business demand is not, give me a training course. The business demand is performance focused. Something's not right. We will make it better. And as a result of that, performance will be improved.
The problem is that L&D typically is looking for a way to get that information without having to go through a difficult conversation with the manager or the executive to say,
you say you want another training course, why? What's the problem we're trying to fix here?
That difficult conversation is where you actually get to the real issue of the value of learning. So I think that people undoubtedly consistently say it's a problem.
And I think the problem is partly because the executives don't understand the value of learning
and don't see it as a strategic thing, but it's also partly on learning and development,
not taking the right approach in quizzing and asking questions and
listening to the right answers to be able to deliver something that provides value to the
organization rather than something which provides content to the employees. Do you ultimately think
that technological advances will just make that easier for the LND leaders to communicate?
that easier for the LND leaders to communicate?
No, I don't think it'll make it more difficult. I think it's a red herring.
I think that it is too easy to say,
ah, we'll be able to do it once AI can let me.
No, I think you can already have that conversation.
You can already map the impact.
You don't need any tool to help you.
What you need is to be able to understand
what people aren't telling you
when they answer your questions
and to drill down further into that.
It's nothing to do with technology.
Okay.
Got a couple more questions for you.
I wanna look back at what we spoke about
in our last conversation, again, about a year ago.
That was a year ago.
You can't expect me to remember that.
And I will remind you, we spoke of Boulton and Watt, the original steam engine makers. I believe you talked about them in your book. Yes. And what happened was, you
know, when the steam engine was invented, society
advanced in a very serious way. And many companies were just
moving much faster and growing more rapidly with that new source of power. And what it seemed to me
is that those with existing resources to handle such technology advanced much
more rapidly and you know got a bigger share of the pie is kind of a simple way
to put it. And I posited a sort of organizational capacity curve when I spoke with Egle about the
AI in L&D report that you guys put out, which is that the same thing could happen and maybe was
happening on that chart. She wasn't totally sure. I don't think you can know based on this specific
survey and how you guys did it. But it seems like there was a handful of folks that were in the
implementation and utilization phase. There were plenty that were in experimentation and that sort of thing.
But ultimately, it seemed to me like, yeah, the larger businesses,
the incumbent tech companies, which not only have bigger budgets but probably have tech folks
within them, are just going to be more likely to be early adopters of AI tools and
test them out. They'll have a safe method for testing them and just even be more successful
at testing and then fully implementing those things. And to me, it's a question of whether
those companies will suddenly see a rapid series of successes that will result in them getting a
larger slice of the pie
because they're more capable of testing out
more complex and more serious AI tools.
When I asked you this last year,
it was just several months after ChatGPT,
so many AI tools were kind of under development.
We're still not in any sort of, you know,
AI has fully taken over phase,
but I'm curious if you've seen any changes
that make you think that those more capable,
larger organizations with greater tech presences,
kind of like the steam engine question, will be able to run away with their success because they've adopted, they become early adopters of AI or even early innovators in AI.
Do you think that might be happening or could happen?
I think it's a great question. I think the ability is one thing. And undoubtedly, if people have the tech savvy, they'll be able to
do what you describe, adopt, and so on. But there's another part
of that: if an organization has the technical nous, that's
actually probably correlated with a pro-tech, pro-experimentation, pro-change mindset. And the mindset is probably as important
as the technical ability,
because adopting AI at the most basic level
probably isn't very complicated.
Most of the barriers that Egle and I identified
in terms of people adopting AI
weren't about how do we do it technically,
it was about how do we make sure our data is secure. People are
absolutely afraid. I was on a webinar earlier today where people are saying, I am petrified
of using AI because I don't know what I could do wrong. Now, if you've got that sorted out,
and it's not usually a technical question, it's just a matter of guidelines and understanding
what you should and shouldn't do. If you've got that sorted out, then you're ready to go.
But actually, it's the whole mindset behind,
I think there is a way to understand
what's right and wrong here,
and I know who to go to to check it and find it out.
It'll be so-and-so in the tech team.
And if you've got that mindset
and the ability to go and do it,
then yeah, you're more likely to adopt it.
So I think you're right that they'll be more widely adopted. I don't think it's a technical thing. I think it's a mindset
thing. Of course, if you adopt it more rapidly, you'll also make more mistakes. So you have to
have a mechanism for making sure the mistakes are spotted and tidied up as soon as possible,
or they may be suboptimal rather than mistakes. So yeah, there's a lot going on there. Will they
necessarily steal a march? That depends, doesn't
it? Because you could adopt AI for a bunch of frivolous reasons that don't help anybody at all.
So you have to have the business link in with the whole adoption thing as well. So it's more than
just, hey, the tech companies are going to take over the world. It's the ones which have a good
integrated view of the world, which allow a good rapport and relationship between
the business facing and the tech side that are going to do best out of this.
That was my question: are tech companies going to take over the world? So I'm glad you came out
and said it where I wouldn't. All right. Final question for you. We have the LinkedIn Workplace
Learning Report released recently, and I don't want to get too deep into it because that's a
whole other podcast. It's a good report. Everyone should read it.
Yes. Their introduction, the first headline says L&D powers the AI future.
It's just a quick paragraph after that.
But one of the sentences, as AI reshapes how people learn, work and chart their careers,
L&D sits at the center of organizational agility, delivering business innovation and critical skills.
What you just described briefly in there was, you know, how some tech people might play a really big role in this sort of innovation and adoption,
understanding more complex tools.
But ultimately, AI isn't actually that hard to adopt.
And that's kind of the point: these tools eliminate some of those
underlying tech-capability necessities.
So what do you think? Does L&D actually power the AI future? Are we really at the center of
organizations being able to adopt and learn and understand and successfully implement AI?
L&D has never been at the center of any organization unless it's an educational establishment.
So the idea that a technology will suddenly move it to being in the center, I'm
afraid, is nonsense.
Is LinkedIn just flattering L&D folks with this statement?
No, no, they're reporting what they've seen.
And, you know, they've, and it's a good report and people should read it.
But I think, I cannot agree that AI is suddenly transforming L&D because that's not what I see.
It's not what I get from talking to people. The one word that people use in 2023 when I was talking
about AI was overwhelmed. People felt inadequate in the face of a torrent of stuff coming at them
about AI. The idea that suddenly AI has transformed the role of L&D is wrong. Now,
has it transformed the potential for L&D? Yes, it has. It has. But it doesn't happen automatically.
I think that's the key point. L&D is faced with an opportunity here, using AI tools, and I'd go
wider than that and say, using the wide range of data tools, including analytics,
including AI, that are available to us as a result of a variety
of things, but including cheap processing power, SaaS,
and the ability to deal with data like never before.
The result, which includes AI, the result of all that
is that we can do an awful lot more than we
ever could in the past. But you've got to take advantage of it. You've got to step through the
door and be unafraid to make things happen using these new tools. So I don't actually agree with
the headline, but what I'm going to say is it's based on a premise that learning and development
is ready to take advantage of it. And my understanding is that most people aren't. There's a small number
of innovators and early adopters. I have no doubt, Tyler, that all the people listening to this podcast
are innovators and early adopters ready to grasp the nettle and seize the day. There are an awful
lot of people who are overworked,
there's too much going on, they're one person trying to service 10,000 people, they haven't
got the time and the bandwidth to make this happen. And that's the place where I'm afraid
L&D is not going to be transformed by AI because they just don't have the resources to make
it happen.
Okay, well, Don, thank you so much. Before I let you go, I agree with you on that statement,
unfortunately. I opened up this report and I was just like, okay, LinkedIn, thanks for
the motivation, but I don't see it that way. Especially the word overwhelm has been heavy
in my conversations as well.
And I'm not trying to have a go at LinkedIn. I think it's a good report. I think people
should read it and keep
themselves abreast of what's going on. But I can't agree with the interpretation,
which is that, hey, suddenly we're the center of things, because we're not.
Yeah, I fully agree. That doesn't mean that it's not something that we take action on,
of course.
Yeah, exactly.
So, Don, where can our folks learn more about you? And if there are any specific action items that you would like to throw out there today,
since the survey just came out and I know you're doing lots of things lately,
but you know, your website, anything else, where do you want people to go?
You know what, I always say it's the end of a presentation or the end of a podcast,
but hopefully the beginning of a conversation.
So the thing I would love people to do is to connect with me or to follow me on LinkedIn
so we can keep having the conversation.
I'm sure in the show notes,
you'll have the links to the report and to everything else,
but please, the one I really want you to click on
is the one which will be the link to me on LinkedIn
so that we can continue having the conversation.
Yes, we will include both the link to your LinkedIn
and to the report and probably just to your website as well.
Put them all in there.
But again, thank you for joining me.
Wonderful conversation for everybody at home.
Thanks for joining us.
We will catch you on the next episode. Cheers.
You've been listening to L&D in Action, a show from Get Abstract.
Subscribe to the show and your favorite podcast player
to make sure you never miss an episode.
And don't forget to give us a rating, leave a comment and share the episodes you love.
Help us keep delivering the conversations that turn learning into action.
Until next time.