L&D In Action: Winning Strategies from Learning Leaders - Practical AI Insights: Understanding and Adopting Generative AI for Learning Leaders
Episode Date: April 9, 2024

Last episode we discussed the "feelings" of thousands of L&D pros, via Don Taylor's Global Sentiment Survey. Nearly one out of every four respondents had AI top of mind, though the sentiments varied from speculation to confusion to excitement, and few addressed application. Enter: Ross Stevenson, an L&D veteran who has been filling an ever-growing gap by regularly educating the L&D world with practical insight on using generative AI and AI-enabled learning tools. In this episode, Ross shares his advice for getting started individually and organizationally with AI in learning.
Transcript
You're listening to L&D in Action, winning strategies from learning leaders. This podcast,
presented by Get Abstract, brings together the brightest minds in learning and development
to discuss the best strategies for fostering employee engagement, maximizing potential,
and building a culture of learning in your organization.
This week, my guest is Ross Stevenson. Ross is chief learning strategist at Steal These Thoughts,
a brand he founded to help L&D pros
improve their performance with technology.
Having previously worked at Tesco, Trainline, and Filtered,
Ross has a breadth of experience
as a learning technologist and developer
with industry-leading organizations.
I discovered Ross through his AI for L&D crash course after I Googled the words "AI for L&D crash course."
It was immediately clear Ross had marketing chops given the SEO victory.
More importantly though, he is among the few L&D thought leaders who have dedicated significant
time and energy to teaching practical steps for working with AI.
I was eager to bring Ross on to help cut through the speculation and conjecture around AI in favor of some
actionable insights. Let's dive in.
Hello, and welcome to L&D in Action. I'm your host, Tyler Lay,
and today I'm speaking with Ross Stevenson. Ross, thanks for
joining me today. It's great to have you on the show.
Thanks for having me.
My first question to you, very simply: you write a lot about AI
in the learning world,
specifically generative AI.
People have been dabbling with ChatGPT for a while now, Copilot, various other resources.
It seems like you've tested out a lot of them yourself.
I am curious what you think the greatest value add for gen AI is right now,
specifically for learning professionals and learning leaders.
Is it the power it has for the individual
to optimize one's own work?
Is it something systematic,
something that can be applied organizationally right now?
What is the greatest value add that you can describe
or that you have seen so far for gen AI in the learning world?
Yeah, so it's probably a bit of both of those.
So I think from the L&D professional
and L& D leader standpoint,
evidently there is that first instinct, as in most industries, of how can I optimize this stuff? How can I do it at speed, maybe at higher quality?
So, you know, that's definitely kind of one part of it.
And a lot of my conversations with companies
usually start off in saying,
how do we optimize what we're doing?
How do we streamline what we're doing?
We don't want to end up doing an extra 20 hours
of admin a month.
I've heard this ChatGPT thing can do that.
So can it? And how can we do that?
So I think in the L&D pro side,
there is the optimization of the workload.
There's also understanding the digital technology itself, as in, how is it gonna reshape the careers of people in that industry?
And that's also kind of a good point to touch upon.
I think for the end user, or what people would call learners in most fields, what I see with a lot of the case studies at the moment, and even being at companies where they're doing a lot of these tests, is, one, personalized learning.
So everyone's always spoken about personalized learning in the industry, you know, definitely for the last decade.
No one's actually got the solution to that.
That's usually ended up being,
oh, we'll put their name in an email
or their name on a course.
That's kind of the level of personalized learning, or using recommendation systems in many of the LXPs and LMSs.
Whereas if we look at
systems like ChatGPT,
like Copilot, like Perplexity,
large language models for the first time
can actually give that personalization to say,
well, this is the role I do,
this is the age bracket I sit in,
this is the industry that I sit in,
these are the specifics that I need to know
at my stage of the career.
I think that's a really... not a beautiful thing, but an incredibly valuable thing to be able to do.
And with the current technology that we have had, we've not been able to crack that at
all.
No one's been able to say, how can we personalize it to that level?
So that is one.
I think the second bit is, obviously Josh Bersin kind of coined this back in 2018,
learning in the flow of work.
And my observation is, I don't think he meant learning in the flow of work as in everyone leaves their screen to go to an LMS or LXP and then moves between different software applications to find the information.
I think he genuinely meant: if I could stay on one screen and get all the information I needed, or if I could be in one place and learn the things I needed, that'd be great. And if we look at large language models, particularly Copilot,
that sits in a lot of people's browsers.
For me, that really is that kind of learning
in the flow of work.
And I've seen it in organizations
where they've been using Copilot
and they'll sit there
and it's not necessarily a training session.
They might be in a meeting.
They might be talking about some complex topics
and people are using Copilot in the meeting to say,
okay, so we'll look at this framework.
You know, what are three kind of key things
that I need to understand about this framework?
And they're learning kind of in that moment
where it's happening.
Now the industry doesn't really kind of coin that
as quote unquote learning,
but it really is people saying, oh, okay,
I'm able to move my kind of performance gap here
and saying, I didn't know about this thing.
And now I do know about this thing.
I know a bit more about that.
And if I wanna go and delve deeper, I can potentially do that
with a human if I want to.
So yeah, in sum, for L&D people, it's definitely optimization, using new technology to maximize what we're already doing with human capability.
And then for the user, these are just the kind of use cases we're looking at now: a hundred percent personalized learning, really getting that learning in the flow of work, and then building the infrastructure around that, because we're still so early in this journey.
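To make that personalization point concrete, here is a minimal sketch of how a learner profile might be folded into a large language model request. It assumes the OpenAI Python SDK with an API key in the environment; the model name, profile fields, and helper function are illustrative assumptions, not anything Ross prescribes.

```python
# Hypothetical sketch: personalizing an LLM answer with a learner profile.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

def personalized_answer(question: str, role: str, industry: str, career_stage: str) -> str:
    """Fold the learner's context into the system prompt so the model
    tailors its explanation instead of returning generic content."""
    system_prompt = (
        f"You are a workplace learning assistant. The learner is a {role} "
        f"in the {industry} industry, at the {career_stage} stage of their career. "
        "Tailor every explanation and example to that context."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(personalized_answer(
    question="What should I know about stakeholder management?",
    role="junior L&D specialist",
    industry="retail",
    career_stage="early",
))
```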
My first job was in publishing coming out of college about a decade ago.
I worked for one of the major American publishers. I was in sales, selling textbooks essentially to university professors and courses, and the most important thing we were doing at the time was transitioning to digital: going from, you know, hey, here's a big old physical textbook, to here's an ebook version of that, but also here are additional resources to help your learning, to change things up pedagogically.
And adaptive and personalized learning
were huge at the time.
Every competitor of ours and my company at the time
was trying to create the best, awesomest,
coolest adaptive personalized learning tool.
And I remember very specifically,
most of them took information exclusively from that company's existing content base, if you will. So, you know, from the textbooks that they had made for themselves, they kind of just took the resources and planted them on top of those textbooks. And a lot of the professors that I spoke to weren't so interested in that sort of thing. They were also just kind of disinterested in the competitive nature of publishing at the time, and they were becoming more interested in these sort of open-source resources: utilizing things that were, you know, free to their students, but that also took information more broadly instead of from a specific set of experts, that sort of thing.
And what this has me thinking about, and you've written about these things as well, is that a lot of really high-level AI implementations seem to come from organizations that have a really strong capacity for content. McKinsey and Lilli, for instance, which you've written about. I remember reading about their creation of Lilli probably a year ago at this point, where they essentially took their existing precedent, their sort of giant library of cases and research from past work, and they turned that into sort of an LLM.
And they made from that a tool that they could refer to.
And I'm just curious what this says about most organizations,
because it feels like those who have really strong existing
content bases can always do that next thing in technology,
where you can take the adaptive, personalized,
or just the really strong database play,
because you have all this information.
You can put a new product, a new tech product on top of that,
and then roll that out as something new, or just use it for your own internal optimization.
So my question here is, with all this in mind: what can your average organization learn from somebody like McKinsey, or, I can't remember the other one that you wrote about as well, maybe one of the banks, I think. What can a regular old organization that doesn't have this sort of record-keeping history or precedent learn from the processes behind what they did there? Or, you know, what can we actually learn from exactly what they achieved? And then how can we pursue perhaps the same thing with maybe more limited resources?
Yeah. I think the first thing to call out would be that the people listening probably haven't got the billion-dollar budgets that are sitting behind some of those big companies.
But I think that the first thing is, is data, right?
I think you've quite rightly pointed out that these large language models only work well with good data in. If you push shit data in, you're going to get shit out, and that's pretty much the end of it. And I think that's where we see a lot of things
not going well at the moment: people don't really understand it. And it's not really L&D's world, to be fair, but at least partner with technology people in the business to understand what the content architecture is that sits under it. So businesses have two options really
when it comes to the data bit.
It's either: do we just buy a very standard pro license to these tools and allow people to, to an extent, not run wild, but give them guardrails in terms of what you can input in company data?
Now, I think that is probably more advantageous
to a lot of people in startups and scale-ups, where they're more happy to put in some data, maybe some of it confidential, maybe some of it non-confidential, and then to use the sources that ChatGPT, Claude, and Perplexity already have. The other option for these people is
to then say, well, you know, we've got the money, and we're
in an organization like 10,000 people plus, we'll go to enterprise level.
We'll get an enterprise tool.
And fundamentally what that means is,
you could go to OpenAI today and say,
I want to select the ChatGPT Teams model,
where basically it's a blank slate,
you get the technology,
and then you have to provide the training database
for that version of their model
and the same for the enterprise level as well.
And then what you have there is what, in the machine learning world, they call fine-tuning, where basically you fine-tune the model with your company's data.
So, as an example: if in some strange world an L&D team decided, we're gonna get a small large language model to help us with skills identification and skills development in the organization, you could buy one of those kind of off the shelf, without any database of its own, have your own skills database, and plug it in to do that.
Now, a lot of people are not gonna do that.
And there are very few people even doing it
at the moment in time,
because some of it is slightly complicated, and there are other risks in there in terms of: who owns the data? Where does the data go back to? How is it being housed, on different servers and in whatever countries? So there are huge kind of implications that come with that.
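The fine-tuning Ross describes runs through a vendor's own training pipeline, but the lighter-weight pattern most teams use to "plug a skills database in" is retrieval: look up the relevant records and pass them to the model as context. Below is a minimal sketch of that idea, assuming the OpenAI Python SDK; the skills records and the naive keyword lookup are hypothetical stand-ins (a real system would use embeddings and a vector store).

```python
# Minimal retrieval sketch: grounding an LLM answer in company skills data.
# The records and keyword lookup are invented stand-ins for illustration.
from openai import OpenAI

client = OpenAI()

SKILLS_DB = {  # hypothetical company data
    "data analysis": "Levels 1-4 defined; assessed via quarterly project review.",
    "stakeholder management": "Levels 1-3 defined; assessed via 360 feedback.",
}

def retrieve(query: str) -> str:
    """Naive keyword lookup; returns matching skill records as context."""
    hits = [f"{k}: {v}" for k, v in SKILLS_DB.items() if k in query.lower()]
    return "\n".join(hits) or "No matching records."

def answer_with_company_data(question: str) -> str:
    context = retrieve(question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using ONLY these company records:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer_with_company_data("How is data analysis assessed here?"))
```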
So in terms of the learning, it's definitely: you need to get your data in order. You need to be really clear that if you buy this tool, it's not a case of, let's just give it all the crap we've already got and hope it makes sense of it. I mean, it will definitely make some sense of it, in some ways.
So, you know, I made a joke, I think, on our call, that someone called Dorothy has got a secret SharePoint somewhere where she's locked up all the key processes for the finance team. You could connect that into the large language model, and it will be easier to find. But it still doesn't take away from the fact that the actual architecture of your data in the organization is probably not good. And you could probably use a large language model to help you restructure that.
So a hundred percent, you know, you want to look at the data element.
I think the next element that makes these things successful is that there are
people in these organizations who are already
advocates of this technology.
And they're already working with, in the examples that we were talking about, OpenAI, which is obviously the big sexy company doing a lot of generative AI tools at the moment.
So for some of those companies, it may be wise to partner with someone who is a specialist in that field, where you can buy technology from them and they can help lead you in that way as well. But advocacy in the organization means really understanding what the opportunities are with, if we're talking about conversational AI in particular, large language models, and what the limitations are as well. The reason why I say that is because, I must admit, every call I've had this year with a client has in some way ended up in unrealistic expectations of what the technology can do.
And I think that needs to be put into check immediately.
And we can see in those case studies where, you know, McKinsey were very clear on: we have three specific use cases. One is to give better access to our research for clients and also internally.
And it's to allow people internally to use that information in human conversations and
being able to do that at speed and quality.
What they weren't saying was: oh, we want to bring in this large language model to automate all these things it can't do, to solve all these mission-critical problems at the C-suite level. That is not going to go very well.
So having that advocacy, having a real understanding of what your use case is, defining the problem, and making sure that technology can solve that problem, is the other main component of that.
I mean, the last component of this is gonna be
how it's introduced into the organization.
I mean, L&D can learn a lot from this, as they can learn a lot from the marketing world, in terms of how you are actually positioning this in the organization. And the worst thing you can do, and I've seen this with a lot of L&D tools over my time, is you just send that one email one day and say, hey, by the way, we bought this thing. We've called it this. Here it is. Here's a quick FAQ on a PDF, go and figure out how to use it. That's not
the approach these organizations took. They brought people on board when they were acquiring
the technology, they had people on board to kind of look at what's the data governance?
How do we introduce this to the organization? What are some of the ways that we can do that?
So the final point would definitely be: do not sleep on how you actually introduce this and how you help people understand how to use it. Because although, if you go on social media, it feels like everyone is using generative AI and they're all optimizing their workflows and making millions of dollars, the reality is it's not actually like that if you go to most organizations. And I'm still shocked by this: when I say to people, how many of you have used a conversational AI tool, it's nowhere near as many people as I'd expect.
I might be in a room of 200 people,
and I would say most of the time,
it's only about 20% of people that have even tried it.
They might have heard of it, but they've never tried it.
So there's that issue of how you move people from what they've seen in the digital world to how do I actually apply that practically and meaningfully. So in some sense, those would be the core bits that I would pick up from those case studies.
And in the Ross Stevenson canon, I believe Dorothy is also the one with too many cat pictures in her SharePoint.
She is. Yeah. Yeah. Yeah. I did my research.
So I think you make an interesting point there about how to roll it out, because I've worked in marketing for many years now. And just understanding how much it takes to get somebody's attention away from whatever they're doing and focused on something new, even when that thing could help them greatly:
is so much more than an email
or even a single line of communication.
You gotta hit them from multiple angles.
You gotta have champions who are advocating for the thing.
You gotta have it fully fleshed out
and demonstrate it in some cases.
And there's just so much to do there.
I do wanna follow up and just ask,
we're talking about data and putting data into these tools.
What kinds of things can we put in there?
So I'm thinking, can we utilize
our existing learning content libraries?
How effective is that sort of thing?
Because in a lot of cases,
learning teams are working with outsourced
and also internally developed content libraries,
which I do think ends up limiting those sorts of things. Are there solutions as to how we can use all that data, all that content? Just best practices, I assume, if you have some sort of a Confluence page or whatever with, you know, all of your company's documentation and that sort of thing.
You know, do you have any specific advice for what to utilize and how to actually do it safely and effectively? I know you don't want to talk too much about governance and ethics and all of that, but people have been concerned about putting their data into these systems: is it going to get stolen, is it going to get reused, whatever. Do you have anything quick to say about just the data that you are putting in?
Yeah, definitely. I think from an organization's perspective, the number one rule is just: buy an enterprise tool. Don't even chance going with, it's not even open source, but what we call the freemium tools, or even the ones where you pay $20, $30 a month. It's not worth it. You know, Samsung is a really big one over in South Korea, where there was an issue where they allowed people to use ChatGPT and it went absolutely horribly wrong when staff started feeding it sensitive financial data and loads of other confidential information.
So the main thing you can do in terms of the data element is buy an enterprise tool, where it is not sending data back to the supplier. And on the supplier side, make sure you're with one of the big suppliers. So, you know, make sure you're with a Microsoft, an OpenAI, an Anthropic, one of the big ones.
Don't be going with Fred on Google, who says, I've built my own large language model and I'll give it to you for half the price and let you do whatever. Because, you know, who knows where that data is going? So it comes down to supplier selection. And of course, a lot more AI companies now are getting certifications in America and the EU in terms of their security standards, which is obviously very favorable to look at as well.
But from a data standpoint, for any kind of smart, forward-thinking company, it's just: go for the enterprise tools. Because you can't account for human error, and it doesn't really matter how many upskilling programs you do, how many times you tell people the human is always going to be in the loop, there's always an error. You know, poor Dorothy, who I'm picking on today: if she's sitting there and puts in a huge financial report for the year, just thinking, oh, I can do this, it's fine, and it's actually not fine, you don't really have control over that. Whereas if you have an enterprise tool, then you're safe, because it's locked down.
So, you know, in sum, I'd say: look, buy an enterprise tool and pick the right supplier.
And don't be going out with any old person who has popped up in the last six to 12 months, because you go online and everyone's got an AI company, everyone's got a large language model that they've tailored or whatever.
So go directly to the source. And back to our case studies, the McKinseys and all that lot, they were working with OpenAI or they were working with Microsoft.
So talk to those companies who know what they're doing.
From Microsoft's standpoint, they've got the infrastructure, they've got the history. Like I say, it's not gonna be someone's mate around the corner who started doing this and wants to bring people on board. So yeah, better safe than sorry.
Yeah. You know, Don Taylor, my last guest on the show, when he and Egle Vinauskaite did the AI in L&D report, they had three different categories of barriers that they went over. I don't remember exactly what they were, but looming large in at least two of them, I remember pretty clearly, were issues of security and governance. And even just internally, it seemed like executive teams had decided against using anything until they felt safer. That kind of seemed to be the theme: hey, let's just pause on this until we know for sure that our data will be safe and we're not breaking any rules here. It seemed like there was just a lot of insecurity, like mindset insecurity, about it all.
And I think we're clearly moving in the direction of these large companies getting certified and approved and that sort of thing. So I think over time we'll probably see more rapid adoption, but...
Oh, definitely, definitely.
I think those companies, to be honest, could do better.
Oh, yeah.
At actually saying what's on offer, 100%. You know, from a marketing standpoint, there are so many companies that I speak to that aren't even aware of enterprise versions of Copilot or ChatGPT, or a Teams version. So it's even at that standpoint that maybe some of those fears would start to actually be settled, if there was more of that and less of, on social media, someone going, here's 25,000 prompts that you need to know or your life is over by tomorrow. I mean, that's how it goes in life: it takes time. I think there's a big curve in terms of moving from what some people might say feels kind of quite gimmicky and quite unsafe right now to actually being more productive and progressive. But we'll see, we'll see how that goes.
Speaking of 25,000 prompts, that's my next question.
So on Steal These Thoughts, you talk a lot about prompting
and how important that is
from an individual user perspective.
It is clearly one of the more sort of coveted skills that we can utilize with gen AI right now.
I just wanna go over that really quickly.
So there are prompt libraries out there
from large organizations.
You talk about a few of them.
You have your own little lessons on prompting,
but what do you say about teaching prompting
at the organizational level?
If an L&D person wants to go in and make sure that their folks understand how to use these tools most effectively and really
optimize their use, what is the best way to help somebody understand prompting for that
purpose right now?
Yeah, I think there's a couple of things. The first thing is you kind of have to unbundle the way people use Google right now, because what people try to do with these tools is behave like they would with Google and just give some keywords. And then, as we both know, what happens is it's the Hunger Games fight for whoever's got the best SEO on the day, and it pushes up the article where those keywords appear.
That's not how these tools work. And that's the first thing,
because people go to me, oh, this tool is rubbish, because I put in, you know, "help me find this" and it didn't do it. It's a completely different thing. You've got a deterministic system in Google; you've got a probabilistic system in generative AI. So that's hard at the beginning, because people have been used to this frame of inputting information into digital tools for 20, 25 years. So you kind of have to help people reframe that a little bit.
I think the second bit is to then help people look at,
I always say, imagine it like an intern.
It's your digital intern.
When an intern joins your team,
they don't know what they don't know.
So you want to provide context,
you need to provide specificity.
You also need to provide constraints, so it doesn't just go off and do loads of crazy random things, and to try to help stop hallucinations.
And what I've also seen, in research papers that have come out, as an example, is that it usually takes about eight prompt interactions until you can get a pretty decent quality response.
And that makes sense because if you think about
if you're conversing with the tool
and it's trying to understand you more,
trying to understand the task
and the context behind that task,
then once you kind of get to those eight prompts,
it can do a far better job of helping you
versus just, you know, I'm gonna give you a couple of sentences. But again, it's all contextual.
It all depends on what you're doing
and how big the task is. If you're asking one of these large language models to do kind of like
multiple steps, then you want to break that down. You want to be really clear on the context and the
tasks. Now, if you're just doing something very simple, like what's the weather today in Portland
or something, then obviously that's a different thing. You're not gonna go crazy on that.
So I would finish off on that.
Prompting is definitely important,
but you have to also look at it as it is a conversation.
And what I've found with prompting, in my own personal use and with teams and doing training, is that it encourages people to think more critically and to think clearly. And what I mean by that is, and you might understand this also being a marketer and a writer as well: clear thinking is clear writing, and clear writing is clear thinking. And I think the same about talking to generative AI tools: you need to be clear in yourself on what you're trying to achieve. And when you're inputting that into this tool, it actually challenges you to structure it in a clear way so the tool can understand. I can't prove this, of course, but I think in some quasi way it kind of helps you
probably get better at human communication because then you think, oh, if I have to give all of that
information over to artificial intelligence,
what must other humans need
when I'm having a conversation with them
when they don't know something about the topic
and I need them to help me with something?
So I think there's definitely a byproduct: prompting can hopefully lead to clearer thinking, more critical thinking.
And I think those bits are needed.
You're not gonna get good responses
unless you spend time on: am I structuring this clearly so it can be understood?
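As a concrete illustration of that digital-intern framing, here is a sketch contrasting a Google-style keyword query with a prompt that carries context, specificity, and constraints, then keeping the full message history so each follow-up refines the previous answer, the way a conversation converges over several prompt interactions. It assumes the OpenAI Python SDK; the prompts are invented examples, not Ross's own material.

```python
# Sketch: structured prompting plus iterative refinement.
# Assumes the OpenAI Python SDK; the prompts are invented examples.
from openai import OpenAI

client = OpenAI()

# A Google-style keyword query gives the model almost nothing to work with:
weak_prompt = "onboarding checklist sales"

# The "digital intern" briefing: context, specificity, and constraints.
strong_prompt = (
    "You are helping an L&D team at a 500-person software company. "  # context
    "Draft a 10-item onboarding checklist for new sales hires, "      # specificity
    "covering their first 30 days only. Plain language, no jargon, "  # constraints
    "and flag anything you are unsure about instead of guessing."
)

# Keep the whole history so each follow-up refines the previous answer.
messages = [{"role": "user", "content": strong_prompt}]
for follow_up in ["Make items 1-3 specific to CRM training.",
                  "Now shorten the whole list to fit on one slide."]:
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})
    messages.append({"role": "user", "content": follow_up})

final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(final.choices[0].message.content)
```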
I'm glad you started that segment off talking about unbundling Google a little bit. I have been thinking a lot about other AI skills, if you will, other things that we need to understand about machine learning and how AI has already impacted our lives. And I've learned a lot about how social media platforms work, and a little bit about search engines as well, in terms of their algorithms and the machine learning behind those tools. And it just seems like it's growing ever more important that we understand, at least superficially, how all of those things work: how audiences are targeted with content, and how advertising and outside forces influence the things that we use regularly in our daily lives to understand the world, to learn, and to consume content in particular.
So I just want to ask you, are there other AI-oriented
or machine learning-oriented skills
that you think that we should develop,
whether it's gen AI or if it's just
sort of other categories of understanding what it
is that we're working with?
To me, it's getting a broad understanding
of how algorithms work and how they impact what we consume and what your market consumes, whether you're
in marketing or if you're in another, you know, field within your company. What do you think about
this sort of like new era of skills? We're talking all about, you know, skills these days in general.
So what are those skills that you think are most critical for people to develop going into the future?
On specific machine learning and AI skills I've got a view, and I've got a view on the human side as well. I think fundamentals are essential. I mean, if you're using these tools,
and it still shocks me, because many people don't really understand how Netflix's recommendation system works, or why you get shown certain posts on TikTok. You know, how do these algorithms keep pushing out stuff? Inherently, in society, it's in your best interest to figure out why it is that I go onto Google and search for gardening pots, and then my YouTube turns up and suggests about 20 gardening-pot videos, and then I jump over to Amazon and start getting pushed the same thing. There's all this structure in the background. And it's really no different with generative AI
where, you know, people still get confused.
They say AI, but generally what they mean is generative AI
or a large language model,
but they'll keep using the word AI.
But obviously artificial intelligence is the main umbrella, which was coined back in 1956 at the Dartmouth Conference.
And there are many different subsets
that sit underneath artificial intelligence.
Obviously, machine learning being one of them,
generative AI being part of the machine learning family.
Now, I'm not saying people need to be
machine learning experts, but just knowing that
and then understanding the model of,
as we spoke about earlier,
Google being a deterministic system,
which basically means if you go into Google and say,
I wanna fly from London to New York,
I wanna do it this time on this airline,
that'll be factual.
If you do it 20 times out of 20 times,
you'll get the same response.
If you were to do that with a generative AI tool, it doesn't work that way, because it's a probabilistic system, which means it's a probability engine. There's a great talk, and I'll send you
the link afterwards, from one of the heads of AI products at Spotify, where he breaks this down; in essence, people make this more complicated than it actually is.
And what these large language models do is they try to guess the next word, because they've got all the world's data and they're going into that data. And they're saying, well, if I say "how", the next word might be "are", and the next word might be "you". What it's trying to do is look at what would be the preferred response based on the input, not always the factual response. So that's why you have these hallucinations.
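That deterministic-versus-probabilistic contrast fits in a few lines of code. The toy sketch below uses invented probabilities (a real model scores a huge vocabulary using the full preceding context), but it shows the mechanism: a lookup returns the same answer every time, while sampling from a distribution can return different words on different runs, which is the root of both variety and hallucination.

```python
# Toy illustration of a probabilistic next-word step versus a lookup.
# The probabilities are invented; a real LLM computes them over a huge
# vocabulary from the full preceding context.
import random

# A deterministic lookup always returns the same thing, like a Google query:
FLIGHT_LOOKUP = {"London->New York 09:00": "BA117"}
print(FLIGHT_LOOKUP["London->New York 09:00"])  # identical 20 times out of 20

# A probabilistic model scores candidate next words and samples one:
next_word_probs = {"are": 0.80, "is": 0.12, "was": 0.08}  # after the word "how"
for _ in range(3):
    word = random.choices(list(next_word_probs),
                          weights=list(next_word_probs.values()))[0]
    print("how", word, "...")  # the preferred response, not a guaranteed fact
```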
Now, when I explain that to people, they start to go: oh, okay, so that's why I get wrong answers. It's not because there's some kind of dark force at work where it doesn't want to give me the right information and is making things up out of spite. If you can help people understand these are the reasons why these things happen, much like we have a common understanding of SEO best practices and, you know, cookies that track you and all of this different stuff. And we know why those things happen
because we have that common understanding.
We need to have that same common understanding with generative AI: let's spend half an hour on the fundamentals of generative AI, why it does these things, and why you as a human then need to use skills like analytical judgment on any of the outputs to say, is this correct?
I'll go and check those references; I'll make sure they are what the tool actually says they are. Instead, the other narrative is people generally go to me, oh, it's rubbish, oh, it doesn't do what I want, because their frame of reference is so different at the moment. Their frame of reference is Google, when their frame of reference should be: well, this is how generative AI works.
That will take time,
but the first thing I do with any company
is we don't talk about prompting,
we don't talk about tools.
I'm literally just like: what is your understanding of how generative AI tech works today, and where does that sit within artificial intelligence at large?
If the answer is, "Ross, I have no idea what you're talking about," the immediate thing is: right, we need to focus on the fundamentals.
we need to focus on the fundamentals.
We're not talking about the rest.
So fundamentals, 100%.
You don't need to be an expert.
You do need to be savvy.
Of course, I think more skills will come over time with this, as we get into this era of what I suppose we call AI agents at the moment, where more people are taking large language models to the next level. Currently these tools are very good at one or two tasks, but what people really want is something that can multitask.
So we could say as an example, you have an idea,
you wanna feed it to an AI agent,
and then that agent would take that idea,
it would expand it, it would turn it into a first draft of a blog post,
it would build social media posts,
and it would do all that in multi-steps
without you having to do anything
and then send it to you for review.
That will definitely be something that comes up in the future, and people will need to understand: how do I have more of a builder's mindset?
And if I'm trying to optimize my workflows, how do I build the right components for that, in that world, with that technology? But that's probably a year or two away from really coming into the mainstream domain.
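A rough sketch of the multi-step agent pattern being described: one idea passed through a chain of model calls, each step feeding the next, with the human reviewing only at the end. It assumes the OpenAI Python SDK; the step instructions are invented for illustration, and real agent frameworks layer planning, tool use, and error handling on top of this bare chain.

```python
# Sketch of a simple multi-step "agent" pipeline: each stage is one model
# call whose output feeds the next. Step instructions are hypothetical.
from openai import OpenAI

client = OpenAI()

def step(instruction: str, material: str) -> str:
    """Run one stage of the pipeline on the previous stage's output."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"{instruction}\n\n{material}"}],
    )
    return response.choices[0].message.content

idea = "Why L&D teams should learn basic marketing."
expanded = step("Expand this idea into a detailed outline:", idea)
draft = step("Turn this outline into a first-draft blog post:", expanded)
posts = step("Write three short social media posts promoting this post:", draft)

# Everything above ran without human input; review happens only here.
print(draft)
print(posts)
```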
But I think, outside of AI and ML even, on the human skills side, I can't say enough about really doubling down on our best human characteristics: critical thinking, emotional intelligence, analytical judgment, and, you know, bias detection as well.
I think bias detection is going to be a really big thing with these tools.
You see already in popular media where we're talking about, you know, upcoming elections
in America, people are very worried around deep fakes,
around information coming in from large language models.
You know, I was talking to a company the other day
and we were just having a discussion around how voice cloning's got so good now that people are starting to use kind of like passwords or keywords with family members, to make sure it's their actual family member they're talking to, not some voice clone that someone's created to call them.
So there are bits in there in terms of the human side: we need to really get good at those to get better at using generative AI, because generative AI is beautiful in its opportunities, but it's also given us new problems that we need to teach people about.
How do you have the right skills to navigate some of those things like, you know,
deep fakes or voice cloning,
things that we all thought were kind of sci-fi,
maybe six, seven years ago,
but now really are our reality.
And we're gonna have to deal with: is that a real picture? Is that a real person who wrote that? Where are the sources?
So it's a different layer of human skills for sure. But again,
I think if you connect the fundamentals of how does generative AI work to that, I think that picture
becomes a lot easier to understand. Because if you haven't got the fundamentals, you're not going to understand why there's bias, why there are deep fakes, why there's all of this stuff that I need to watch out for. So at this moment, those are the things that I would definitely encourage people to invest in. No doubt. You know, I wouldn't even look at tools or optimizing workflows if you don't get the basics.
I fully agree with getting this sort of baseline understanding, and I
wouldn't be surprised if a lot of organizations are sort of skipping that.
The LinkedIn Workplace Learning Report came out in the last month or two, I think, and they break down in several ways kind of where we are. AI is at the center of it all, it seems. I think it's even in the title of the report. They're talking about AI, and they're saying that L&D is now sort of at the center of organizations thanks to AI, because it gives us the opportunity to educate our folks on what's happening.
I think Don Taylor and I are a little bit skeptical
as to sort of how serious that claim is to be taken.
But at the end of the day,
they give a few different ways of viewing sort of how
we are using AI, learning about it,
and generally upskilling ourselves
for, you know, the fourth Industrial Revolution, for the future of technology, and for AI.
They observe three different stages that organizations can be in for an upskilling or re-skilling
process.
The third stage, the final stage, as they put it, is the measurement stage.
The middle stage, of actually sort of doing, is called the activation stage.
And I think before that is just kind of like research.
And 95% of companies are in those first two stages.
About half are in the activation stage
of re-skilling and up-skilling,
which sounds like great news.
But only 4% and 5% of organizations, in 2022 and 2023 respectively, were in that measurement stage.
And it seems like this is a way that LinkedIn decided
to frame how reskilling and upskilling works.
But I do think that's a really critical thing to think about,
is once we've determined what skills we're
going to change or augment or
add to our organizations, how are we measuring the efficacy of that very serious change?
In the future, we're going to be dealing with a lot of jobs that don't currently exist. My job pretty much didn't exist when I was studying in school, and that wasn't even, you know, amid some sort of huge technological innovation. But ultimately, I'm curious if you have any observation about this phenomenon where very, very few companies are in that measurement stage. I'm wondering if it's because we are so desperately trying to predict what skills are actually going to be needed that this upskilling and reskilling process has a degree of guessing, of just kind of throwing spaghetti at the wall and hoping that something sticks and has efficacy for our organization. And a lot of companies are seeing, in that activation stage, that the skills we're actually upskilling or reskilling toward maybe aren't as valuable right now as we initially thought they would be, or wanted them to be,
and there just isn't enough security around like,
this is what we need to learn
that will have value for our organization in the future.
So let's do it, let's assess how it went
and then adapt from there.
It feels to me like maybe we are just doing a little too much guessing with that reskilling process, and it never really gets to the measurement stage because we're just unsure.
I don't know, do you have any thoughts
on why so few companies are in that measurement stage?
A lot, a hundred percent. I think you're hitting the nail on the head. To be quite honest, it's like, I use an analogy here in the UK of spray and pray, and every company I've been at, that is the thing.
And that is the big problem in our industry.
And it's funny, right?
Cause I do feel like in some ways we are very blinded by the shiny new object of AI right now.
And we continue to forget about the classic conundrums we still face.
And measurement is such a huge thing in this industry.
It has been for the last few decades.
And, you know, to your point, it is because generally leaders just turn up and go: well, I think we need to do these four things this year. I haven't got any data to prove that, but because I'm a leader, we're going to do that. And then the L&D team is going to execute on that.
And unfortunately that happens too much.
Like, the amount of people I speak to where they're like, oh yeah, my leadership team have given me a top-down message to say we need to do these five skills this year. I'm like, okay, so why? Aren't they working currently? What's the data from the organization? And unfortunately, generally people go: well, I didn't ask that, and I haven't got that data, but we have to do it because they told us to do it.
And I get it, there's that friction, because you don't wanna be the bull in a china shop. And it's a bit much, when you're at a more junior level as well, to go and say: no, Mr. C-suite leader, I'm not going to do that, because, you know, we've got this data from our organization.
That is one of the problems.
I think the second problem, to what you just said as well, is that we have grand visions of the skills that we want people to have, and we think they should have them, but we don't really have any industry-wide, agreed-upon data collection or interrogation method for that. Everyone's kind of off doing their own thing, which may work, which is great, but a lot of them don't work. And what I've noticed in my career for the last 15 years is I can't really recall a time I sat in a room
and anyone naturally has actually gone,
well, how are we gonna measure this?
So how do we know this is successful?
So if we're gonna spend the next five years
on a human skills transformation project,
and we identified these 10 skills,
how do we know that's been successful
after those five years?
I think a lot of that is because you generally don't know: there are some hard data points that you can get, but there are a lot of other data points that you can't get as well, in terms of, well, what are people doing with this stuff?
How do we show that someone's improved
their critical thinking?
And unfortunately it becomes too much of a Da Vinci Code-style novel problem to solve, where people are like: well, we don't really know, but we'll figure it out as we go along. And to your point, no one does figure it out; we kind of just keep going, we do this stuff, and then what happens is we very much fall into the trap of vanity metrics.
And then we'll do this thing of X amount of people
had a touch point on this content or this workshop.
So they must have learned this thing.
They're going to get better.
And then we ask no questions when performance reviews
come around and nothing has changed.
And then people in the business unfortunately end up in all those kinds of scenarios.
So I think the measurement piece is still huge.
And it's good to talk about, because right now with AI, it's the same thing I say to people all the time: if you're going to introduce a large language model to your organization, what is your barometer for success with this? What is your criteria? What are you looking at to say, at the end of the next year, if you've gone and spent 2.5 million pounds on a large language model integration, how are you going to justify that to stakeholders, outside the vanity metric of 80% of the organization logged on once?
I think that's our big issue: we don't really know how to assess skills effectively yet. And that's across the board. There are loads of frameworks out there, but in terms of something that's agreed upon, that L&D people aren't fighting about online or behind closed doors, it's really difficult.
So I mean, I haven't got a solution for it. I can tell you I've spent many hours as a head of L&D cracking my head against the wall, trying to get senior leaders to agree on just three metrics of what success looks like.
So there's a lot playing in there.
It's not just the L&D pros kind of in control of that.
It's also the organization and the constraints
that you have in the organization,
predominantly with leaders,
when you're setting a strategy
alongside the business strategy to say,
well, okay, this is cool: as an organization, we want to develop a growth mindset, as an example. But what are the markers around that, and how do we measure that? And what generally happens, like I say, is that that conversation just gets lost as we go on, and then, as you were talking about, you get to the end of a program and people are like, maybe that was successful, maybe not. And I'll find the article and send it to you as well.
It's from a couple of years ago, from the Financial Times here in the UK. They went to the government here in Whitehall, where all the PMs are, and they basically said: so, how much have you spent on training in the last five years? And they were told: we spent 180 million pounds on training in Whitehall, but we can't tell you on what, and we can't tell you how effective it was. I was reading this article and I was just like, as a taxpayer, it's wonderful to know that we're paying all this money but you can't attribute it anywhere. So it's an industry problem. And, you know, people will listen to this and they might
think, well, to be quite honest, maybe we should solve that before looking at AI. And I'd put it to them: yeah, you're probably right, because we're going to fall into the same regime again, where people go, we can build these AI skills. Amazing. But how are you going to measure that? Where is that return on investment? Are you going to end up in that same situation as well?
So there is a lot to unpack there. I think no one has a golden solution to that. I think there very much needs to be an industry-wide, meaningful conversation on what that looks like in practice.
As we spoke about with AI, a lot of people love to talk about research. Research is great, but what we need is companies that are actually doing this, or experimenting with it, so we can understand. That's why your question early on was very good, around McKinsey's process: what was their process? Something has been successful, and they've been able to put a financial or performance metric on it. How did they do that? And then how can we systemize that? So, more of that, and probably less of the dubious speculation
in conferences where people are like,
we need to do more measurement.
And it's like, all right, Captain Obvious, how do we do it? I don't know, but I wanted to tell you that. I wanted to tell you we need to do more measurement.
So yeah, that would be my two cents on it really.
McKinsey really is kind of the paradigm of measurement.
I believe their slogan is still
what gets measured gets managed.
And to me, that's perfect, because it's an acknowledgement that you can't always achieve what you want, but you can manage everything that you have with the right numbers. And such a large consultancy acknowledging that, and having that as their focus, I think is really, really critical to keep in mind, especially now that they've done what they've done with AI.
So that's a very important point.
We're running up on time here.
I wanna ask you a couple more questions.
So you wrote a very long piece
on the structure of a modern L&D team.
At the start of the piece, I want to quote you here. Somewhat cryptically, you write: "Perhaps the first question should be if we even call it an L&D team anymore, but I'll leave that debate for another day."
Ross Stevenson, the day is today. We're debating now. What did you mean by that exactly? And how can I play devil's advocate on the other end?
Yeah, definitely.
My whole thing there was, I mean, people get angry when I talk about it. That's the thing.
I figured that's what you were getting at.
Maybe, yeah, maybe you'll understand this more
as a marketer because I am a big
believer in the right branding and using that to position your products and your brand correctly
in the business. And I don't think we do that well. I think the example I gave in that article is,
if I said the words procurement, finance, sales to you, you'd have a pretty good understanding of what they do and their contribution to the organization. But when you say learning and development to people,
I kind of find there's this quasi-cult-like guru type thing going on, where it's like: well, I'm not really sure what they do. I know they give me some compliance training that I don't want to do, but I'm not actually sure what their contribution is. I mean,
the problem with that is that it creates a really bad brand. So
when you want to do something that is actually going to
uniquely position the organization to improve
financially and improve from a performance perspective, it's
really hard to do that because people just don't know what you
do. They don't know your USP. And I mean, if I had my way, I would just literally call teams performance teams, like learning and performance teams. But people don't want to do that; they want to keep it traditional.
And I mean, the issue that we have in our industry is that
we're very much shackled by the educational model.
So people are very familiar with school systems and university
and college systems worldwide. We've then replicated that system in corporate and called it L&D. So it then becomes very hard for people, because they associate the difficulties at school or college, with exams and having to pass and doing tick boxes, with L&D. So for me, there's very much
that branding problem. And I think it's about whatever you want to call it.
It's just about repositioning that brand.
How can you and your organization say, this is who we are.
This is what we do.
This is how we contribute value.
And this is how we do it.
I mean, if you could do that and do it well, I think you'd do good things in your organization. But I've yet to come across an L&D team that has done it well enough. All these great things that they're doing, that are gonna help the business optimize and make more profit, that are gonna help people in their careers because they're gonna build skills they can use at the organization and beyond: they don't get that engagement, because the branding is not right.
So for me, it's like, how can you get that branding right?
It could be anything: it could be learning and performance, it could be whatever you want to call it. I've seen people call them performance institutes. You know, people still call them training departments. The main thing is how you get your house in order in terms of your brand and what you do in that company. And you want that to be very similar to other parts of your organization.
If you say sales or marketing, people know.
People know what they do.
They know what they contribute.
They know how that builds the organization.
With L&D, they don't.
And that's the great shame because to my point earlier,
we spend all this money on programs
and we can't measure it.
Maybe one of those problems is
because people don't actually understand
how what the team does contributes to an individual
and to the organization as well.
So yeah, my main thing would always be,
and people will hate me saying this,
but look at the way marketing teams and marketers do it.
How do they position their brands?
How do they build that brand
and get people really clear on what their USP is? There's a reason why that
stuff works. There's a reason why when we see Apple advert, we
buy from Apple, we know what Apple does, we get super
excited because Apple's doing something like, doesn't matter
what Apple does, they can release a chair tomorrow, and
people will get excited. But it's because they built that
brand where, you know, it's you don't have to get to that cult
like level with Apple, but you have to invest in saying, I say who you are, what do you do?
How do I know as an individual, how you contribute to me?
If someone says to me, I'm going to work with marketers, I'm like: okay, these are the people that are going to come in, they're going to help me get really clear on my messaging, they're going to help me get that messaging out, so we can bring in more revenue, more customers, whatever it is.
You say L&D to someone, and they're like: what does that person do? How are they gonna help me? They're gonna give me a PowerPoint, they're gonna give me, you know, something online somewhere.
So I think that is a big thing to look at.
But I've been banging on about that for years, and I think a lot of people see it as a contrarian take. You know, a lot of the pushback I get is: why should we do that? We're not marketers.
I'm not asking people to be marketers. However, we should have a holistic view: I think sales and marketing are a huge part of what we all do as humans, in any aspect of our work. So I'd encourage people to do it. Whether they listen to me is probably, you know, another question.
Well, you are a marketer and I've read your website front to back.
You definitely are using some of these tactics.
My last question to you is what do you do as an educator, as somebody who's
also teaching people and convincing people to read and learn from you?
What are the things that you do?
Do you have a few quick tips for how to get people to engage with your content?
Oh my, yeah. I'd love to tell you that I have some playbook or some strategy, but I honestly don't.
I've just picked up stuff here and there
and learned from different people.
You have good headlines, you know,
a little bit of scarcity,
a little bit of exclusivity here and there,
making it accessible in terms of, you know,
three tips, five tips, numbers.
Oh God, yeah, yeah. I mean, I would say that's increased especially in the last two years, and I could pinpoint why. So if you want to learn from the people who I learned from: I read a book called, I think, The Art and Business of Online Writing, by Nicolas Cole. And I also read a book from Ann Handley called Everybody Writes; I think it's the newest edition. They seriously upped my game in terms of copy, in terms of headlines, in terms of understanding emotion, power words, all of that stuff. On the SEO game, there are tools that can help you with that. And online, people like Brian Dean, who I've studied for many, many years, probably the last seven, eight years: I picked up a lot from him in terms of SEO. But from the conversational standpoint, I like to think the reason why people read my newsletter, and I ask people this because I'm still confused about why thousands of people read it, is generally because I
write like I talk, so I don't change my nature. You know, I use a lot of memes, I use a lot of GIFs. You know, my bloody newsletter is called Steal These Thoughts. I haven't called it Ross Stevenson's Learning Academy or some lame name like that. I've tried to purposely put myself on the point of: I am not that. Someone gave me the greatest compliment when they said, you don't look like a nine-to-five guy. I've got long hair, skinny jeans, and tattoos. I am most certainly not a nine-to-five guy. Thank you, that's a compliment, I'll take it. But that's what I try to produce in my own kind of marketing, and what I try to show people: that is who I am. So I might drop some random Taylor Swift GIF in my newsletter when I'm talking about products and all
this kind of stuff. And I think the key for me is just: yes, write in your style, but be clear on the game you're playing. We're all playing in these kinds of games, and one person is not really, systematically, going to shift that system. I am not going to change the world of SEO or the attention economy. But what I can do is understand how that world works and how I can apply it to my work in a way that I am comfortable with, that I would write, and that, you know, makes sense for me.
So, like I said, those are the resources that I've learned from.
You know, they would give you a lot in terms of crafting headlines, crafting good copy.
I mean, the main thing is, if you go look at an article from me five years ago, and then an article from me in the last 12 months, I would like to say the superiority of now is amazing. But that's continuous learning. I try to practice what I preach:
if I want to get better at these things, I need to go and seek people who are good at these things
to understand how do I do that?
And that's what I do myself pretty much.
And like most people, I'm always a work in progress.
So I'm most certainly not the best marketer.
I've kind of fallen into this strange world
of growth marketing recently
with all the stuff that I'm doing.
And I love learning more.
So I love building copy and landing pages and courses
and all this kind of stuff.
But I think, in sum, it's much like the AI point: you need to go out and experiment and explore. You need to go and do. Go online, do those things, experiment. You'll come up against what I have: you've got to look at my old stuff, I've had some terrible, daft headlines, I've made stuff that's too SEO-optimized, and I've made stuff that's so curveball that I don't think anyone would understand it apart from my own mind.
So yeah, it's just learning how to do it.
And it will take time.
It's not going to be this thing where you go online to YouTube and someone says, become an overnight marketer, and you do it. It's not going to happen. It is years and years and years, as you know, of working, learning, doing all this stuff, and growing. So that is the cheesy answer, but that is my honest answer in terms of what you can do.
I appreciate those recommendations, though, the names and the books. But in addition to that, I want to make sure that our listeners check you out as well. So: stealthesethoughts.com. Any other places, any other spaces that you want listeners to come and find you, to reach out, to contact you, to view you? You do a lot of these appearances. What else would you like to pitch right now?
Yeah, so the only thing would be that I'm on LinkedIn.
I play the social game on there.
So you might see stuff, you might not,
depends how much the algorithm loves me on that day.
If you enjoy the kind of thoughts that I'm putting out there, you can have a weekly newsletter, which is also called Steal These Thoughts. Of course,
that is my attempt at fighting the algorithm. So if you actually want to see my stuff and
learn more about all these bits that we've talked about, you know, come and join us there
for a conversation. Happy to have you.
Wonderful. Well, Ross, thanks for joining me today. This was a great conversation. Hopefully I can have you back on sometime. For everybody at home, thanks for joining us.
We will catch you on the next episode.
Cheers.
You've been listening to L&D in Action,
a show from Get Abstract.
Subscribe to the show and your favorite podcast player
to make sure you never miss an episode.
And don't forget to give us a rating,
leave a comment, and share the episodes you love.
Help us keep delivering the conversations
that turn learning into action.
Until next time.