L&D In Action: Winning Strategies from Learning Leaders - AI in L&D: Barriers, Progress, and Success Stories in Learning, One Year Since the Launch of ChatGPT
Episode Date: January 16, 2024. Artificial Intelligence has made a noticeable impact on the world of learning, especially since the launch of ChatGPT and the advent of generative AI. However, the degree to which AI-enabled tools are being implemented and delivering positive impact for organizational learning remains unclear. To help determine what kind of progress has been made in the world of L&D, Egle Vinauskaite, co-author of AI in L&D: The State of Play, joins us on this week's episode.
Transcript
You're listening to L&D in Action, winning strategies from learning leaders.
This podcast, presented by getAbstract, brings together the brightest minds in learning and
development to discuss the best strategies for fostering employee engagement, maximizing
potential, and building a culture of learning in your organization.
This week, I speak with Egle Vinauskaite.
Egle is co-author of AI in L&D: The State of Play.
This report, co-authored by Donald Taylor and presented as a component of his Global
Sentiment Survey, takes a deep dive into the presence of artificial intelligence in our
L&D systems and processes, and looks at the responses to such interventions among practitioners.
Egle currently serves as Learning Strategist and Director at Nodes.
Since earning her Master's in Human Development and Psychology from Harvard,
she has served as a Learning Consultant and Advisor, Instructional Designer,
and Educational Advocate at no fewer than 15 different organizations.
In 2020, Egle was named a Gold Winner of the Learning Awards' Rising Star Award. In this conversation,
we focus largely on the AI in L&D
Report, which was sponsored by getAbstract. Let's dive in. Hello, and welcome to L&D in Action. I'm
your host, Tyler Lay, and today I'm speaking with Egle Vinauskaite. Egle, thank you so much for
joining me today. It's great to have you on the show. Thank you for having me. You are the co-author of the AI in L&D: The State of Play report with Donald Taylor.
I've had him on the show previously.
He was one of my earlier guests.
We discussed his global sentiment survey and a handful of other things.
But I've looked up your work and the things that you've done, and I decided that I think
you're a more valuable person to speak to about this specific report.
Don't tell him I said that.
But I'm very happy to have you on to go over some of the findings that you guys
have come up with and kind of see where we're at right now. Because I went to DevLearn earlier this
year and I also went to ATD. I've been to a lot of events where you have a lot of vendors, a lot of
people talking about AI. There's a lot of vendors who seem to be putting AI into their products.
There's a lot of individuals working in L&D, learning design folks, and even leadership type people in learning who
are looking for something. For a long time, it felt like people were looking for something that
was really going to make a big change in their organization that comes from artificial intelligence.
There are those who kind of have a good idea where they're at, and it really feels like this report
kind of puts that into perspective in a really effective way. So thank you for the
work that you've done on this, first of all. And I would like to start off just by asking,
as a co-author of the report, was there a specific finding or piece of insight that
surprised you the most? Oh, the one that definitely surprised me was how many organizations are not using AI in L&D. And the number, well, the
percentage was around 40%, which is a lot given that it's been a year since ChatGPT was first
released and AI has been everywhere, super prominent in conferences, in articles on LinkedIn,
social media, and so on. So a huge chunk of those respondents
said that they had experimented with AI,
but they hadn't implemented anything,
which for me raises the question of why, what happened?
Was it the lack of knowledge
about how to make a business case,
lack of skills to use AI and to draw value from it?
Was it the lack of trust in the technology itself?
Were these some sort of organizational constraints? So yeah, that's definitely what surprised me the most and
what makes me want to dig deeper into that. Yeah. And so you actually do list three classes
of barriers as to the things that are making this difficult for organizations. We are going to dive
into those at some point for what it's worth. So I think we'll come back around to it after we talk about some of the important findings that I discovered.
But for some context and some clarity, would you mind just kind of describing
how the survey was done? How many people, kind of who you reached out to and how you ultimately
collected the data? So we had 185 respondents, people from L&D. We reached them through our
collective LinkedIn networks, as well as
emailing them and sending out a newsletter, asking for responses. And we had a good selection of
people both on the vendor side as well as internal L&D, across seniorities from more junior-level
or more specialist-level people to senior leaders. We also had a few charities represented
as well. So it was a good spread across the industry. Yeah, I think one of the important things that
I noticed is how you sort of demarcate who is responding to what so you don't just give like,
you know, this is the most common response from this multiple choice question. But you say this
is the most common response from key role players and from non key role players from
other people in L&D. And if you don't mind, if you can
recall, what were the ways that you actually broke it down and looked at individuals differently?
And also, I think in some cases, it was organizational versus freelancers, that sort
of thing. Can you elaborate on those distinctions that you made a little bit?
Yeah, so not just freelancers, but vendors as well. So one breakdown was whether you were
in-house or whether you
provide services for L&D. So that's one breakdown because obviously motivations as well as
imperatives to use AI might be different. If you are a vendor, for example, an e-learning provider,
for you to take up and use AI, it's a bit of a no-brainer in comparison to someone who works
at a large organization with very
complicated decision-making processes and perhaps less at stake when it comes to creating content
fast. So that was one. Another breakdown we had was key decision makers, so senior level people
versus, say, specialist designer level people. Because again, generally speaking, the conversation
about AI and L&D, it's happening at various levels. So it's everywhere from how do I script
something to how do we create a skills-based organization enabled by AI. So it was important
to see what matters to various levels of people, how that differs, so that we could have some nuance in our analysis.
Yeah, that was very eye-opening for me to see those delineations. And I will get into why
shortly. I want to start off with the where are we now question component of the report,
which is sort of like the first thing that we address. And you already alluded to that by saying
that, you know, most people, I think it was 40% are saying that they haven't really done anything yet, that they haven't integrated any AI tools.
There are, it looks like, six responses to the question, how would you describe your progress in using AI in workplace L&D?
And those responses are degrees of how integrated the AI is.
They range from no intention to already sort of extensively
integrated into work. And the two most common responses, AI is integrated into some parts of
our work. That's actually the most common response at 35%. So that's actually pretty high up there.
And then the second most common response is one of those in that no category, which is experimenting,
but not implementing. So just to give some of the clear numbers, hopefully paint
a picture for the listeners. That's interesting to me because in between those two things is
piloting and testing. So actually sort of integrated at a pilot level. And then of course,
there's also the sort of extensively integrated option that's at the top level of that. But
these two options around like the high 20s to 30% to me almost indicate what types of tools make the most sense
for your organization, but you have to be able to test it safely, put budget into that. And I think
of large organizations with substantial budgets as pretty easily capable of this sort of thing.
But when you get into smaller and even medium organizations where budget is limited,
sometimes the capacity to actually assess and determine what could really help us from this burgeoning novel technology
is also much more complicated. And to me, that's why like piloting and testing might not be as
prominent because even those organizations don't really have the capacity sometimes in people
to actually put something into work if they don't really feel very strongly that it's going to end up working out because it could be an inefficient or budget costly type endeavor.
So the experimenting but not implementing phase seems like what those organizations are getting
into. Piloting and testing is kind of a step beyond that. And my prediction is that a lot of
large organizations, they probably have budget to not really have to worry too deeply about their
piloting and testing
and they can actually just kind of start with some tools that have, you know, good social proof, that
have good data behind them, and that they feel confident with. And in some cases, I feel like they
have people who can make strong assessments; large organizations have more tech-savvy people who can
say, this is most likely to work. So what I'm seeing is just sort of a natural curve of organizational
capacity. Again, I could be sort of fictionalizing, but when I was speaking to people at these live events and,
you know, on this show in the past and just my guests and the L&D people that I meet,
it does seem like certain organizations are just much more capable than others of pulling in
technology quickly and throwing it in there. It's still something that ends up being in, you know,
some parts of the work as opposed to extensively integrated. But it does seem like there's a curve of organizational
capacity. Am I sort of in line with this based on what you've seen? What do you think? What really
resulted in the distribution of these answers? Yeah, so it's difficult for me to say and to
draw any conclusions like that based on our survey, because the sample was too small to
explore the reasons behind it, and we
didn't ask, frankly, the questions to actually do that justice. But I think you're onto something
when it comes to organizational capacity. I just would not necessarily say that the distinction is
large versus small organizations, for example. Obviously, budget is a huge thing, but it's not
just that. But speaking from my personal observations over the last year,
I would say that different sized organizations face different challenges.
For example, large organizations may be able to afford AI,
but their decision-making processes are much longer.
And they are often done at the strategic business level and not at the L&D level.
They have legal reviews.
They have IT integration issues,
because their tech stack is just so huge and there are additional security requirements and
things like that. And perhaps even a more pronounced fear of reputational damage. So
there are lots of potential headwinds there. And on the other hand, we have ChatGPT and Claude,
which are free, GPT-4, which is very affordable
at a small scale.
So smaller organizations, they're generally more nimble and perhaps more free to experiment
and innovate as long as they don't use AI for confidential content.
So they have things going for them as well.
And there were a lot of very interesting applications that came out in our
survey that were shared by people who are either consultants, freelancers, or people working at
consultancies, which are obviously not your multi-billion corporations. So I would say that
budgets and organizational capability, they're definitely considerations, but it's
a much more complex challenge than just the cost.
Yeah, absolutely. And like I said before, we'll get into the barriers after we go into some of the
specific responses that I thought were most interesting. So most respondents expect faster
creation of learning content to be the biggest benefit of AI in their organizations or,
you know, in L&D. I do think that there are a couple of directions that this response represents,
faster creation of learning content. But to be more clear, you guys asked the question,
you know, what will the greatest benefit be? I believe it was a multiple choice question,
and there were about six or seven. I'll talk about some of the others as well. But
that faster creation of learning content was the predicted or observed biggest benefit overall by
the entire population of responses. There are some notes there that we'll probably get into as well,
like key role players versus others. But that seemed to be the most popular answer. And I think
of a few things. Improved accessibility of knowledge and info and synthesis thereof is
something that can result in faster creation of content. If you have some sort of a librarian type
AI or if you have some sort of search improvement tool that can get you to better knowledge faster,
I think that helps with this sort of thing with content creation, course creation. I think you can also have something like
ChatGPT create scripts for you and simplify information into course formatted content.
And I think that both of those things have risks. I think that sometimes people want to abbreviate
the logistical parts of their work and, you know, make things move faster from
this is knowledge that I've gathered to it is now a presentable piece of some sort. It's now
a presentable asset, whether that's a course or a piece of content that goes live on the internet
for marketing or something like that. Generative AI has made people, you know, make things faster.
And I think in a lot of cases, it's shown itself to be a little bit dangerous in the current state, at least, you know, early chat GPT, that sort of thing.
But also just finding the information and using AI to synthesize maybe more quickly.
There are probably some steps that might be eliminated, like using certain kinds of subject
matter experts, creating the content with SMEs, you know, is something that could be
simplified or expedited, perhaps. But anyway, I see some risks here. And I'm sure the people that are responding
to this question in that way also are somewhat aware of the risks. But I just want to go over
this. I do agree that there are some risks with increased speed of content creation.
And what are the things that we should also watch out for from your perspective?
Yeah, when it comes to risks, definitely. And we have some commonly understood risks that we've been talking
about for at least the last half a year. So the first one being the quality of outputs: text outputs,
video and audio outputs, especially in non-English languages and when it comes to working with
specialized content. So there is that. The other one is quality of sources. So asking the question,
is the information AI is trained on, is the information that AI is drawing from, is it
accurate? Because checking the content came up as one of the blockers to using AI: it's just frankly
too time-consuming to check that the content is accurate and written the way it was supposed
to be written. And for a lot of people that just made the whole use of AI not worth it. Another related consideration
and risk is potential copyright infringement. This is something that does come up quite often
among larger organizations. So yeah, so these are your, I'll say, everyday risks. But just as you
were saying, exactly: is creating faster content always the right thing?
I love this question.
And there are a few things to add here.
It's like the first is the risk of outsourcing too much to AI.
And I would say that don't leave AI to do the entire job.
Guide it and QA it and know what good looks like.
For example, right now, we have the ability to script videos really fast. But that doesn't mean that you should do an entire hour long synthetic video
course. It's just an awful user experience. So you really need to know what you're doing,
what you're asking the AI to do and just not get too crazy with it. And more broadly,
a major risk that I see is focusing too much on content creation in the first place. Because the "faster horses" analogy has had a real renaissance in the past year, I would say that we can create things faster. But are we creating the right things, which I think is something that you were also talking about?
Yeah. I mean, synthetic video is something that I've seen. I think you're probably referring to Synthesia and similar AI tools where, you know, we can make avatars of ourselves
that are very lifelike and sort of make courses like that. And that's a good way to not have to
summon a subject matter expert for, you know, two days of eight hours of shooting for a long form
course. You can summon them for about an hour and, you know,
get that job done by just having them record their face and voice and then putting a script into AI. And then something comparable to that person actually creating a live recorded course can
then be given to learners. And there's a question of quality. There's a question of efficiency and
budget and, you know, how much it actually saves and helps. But just to look on the positive
end of all of this as well, what are some other examples, if you have any, of expedited course
creation from sort of like the free form answers that you saw in the report that you received?
Was there anything else that was notable about how people are actually creating or sourcing or
developing content faster? I mean, sourcing, creating, designing, developing,
there were a lot of answers, over a hundred of them.
So it's difficult to sort of give you a standout
because obviously there were some really nifty examples of,
okay, I need to upskill myself on some sort of subject matter.
So instead of asking AI to just give me what I need to know
about like a framework or technique related to this topic, I use AI as a conversation partner
to better understand the nuance behind it. So things like that, or for example, one thing that
I thought that, oh, that's an interesting idea was integrating AI into, I think it was Adapt,
the authoring tool, so that you can answer as a free text entry to whatever question. It didn't
go into too much detail of what it actually looked like, but pretty much instead of giving
this templatized feedback to everyone, they integrated AI in the background so that it
generates feedback based on your answers, which I think is a really simple but, it appears, really powerful use of AI beyond just actual content creation of the "script me a video" sort.
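A minimal sketch of what that kind of integration could look like, assuming an OpenAI-style chat completions API; the model name, prompts, and helper function are hypothetical placeholders, not the respondent's actual Adapt setup:

```python
# Minimal sketch only: generate personalised feedback on a learner's free-text answer,
# in the spirit of the authoring-tool example above. Assumes the official openai Python
# client (pip install openai) and an OPENAI_API_KEY in the environment; model name,
# prompts, and function are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def feedback_on_answer(question: str, model_answer: str, learner_answer: str) -> str:
    """Return short, personalised feedback instead of one templated response for everyone."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0.3,
        messages=[
            {"role": "system",
             "content": ("You are a tutor. Compare the learner's answer with the model answer, "
                         "say what is right, what is missing, and suggest one improvement. "
                         "Be concise and encouraging.")},
            {"role": "user",
             "content": (f"Question: {question}\n"
                         f"Model answer: {model_answer}\n"
                         f"Learner answer: {learner_answer}")},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(feedback_on_answer(
        question="Why pilot an AI tool before rolling it out?",
        model_answer="To test value, risk and fit on a small scale before committing budget.",
        learner_answer="Because it is cheaper.",
    ))
```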
Yeah, that sort of integration is really interesting.
Personalization of learning was the most expected benefit among key role players, key role L&D respondents.
So those who are sort of like higher on the decision making totem pole, perhaps, and in leadership roles.
I want to spend some time with that.
There's a handful of ways that adaptation and personalization have been introduced over time.
Did you get a sense of, you know, what those were?
Are you familiar with those things yourself?
I just want to talk a little bit about that option as well.
Yeah, so I think that when they were answering that question, what they meant were third
party curation and adaptive learning tools.
So, you know, the tools that assess your knowledge and serve or adaptively create content
based on your knowledge gaps or your interests, your goals, and so on.
So when they were answering that question, I would assume
that's what they had in mind. Lots of organizations do some version of that, especially since content
curation has become part of many LMSs. But myself, I personally view personalization as a larger
topic beyond curation and adaptive learning.
Another thing I consider to be a flavor of personalization
is co-pilots and assistants for internal knowledge management,
where I ask a question and the assistant gives me a contextual answer
based on company's internal data wherever it's stored.
So how to book a holiday,
what services are most suitable for
this particular client, which personalizes your experience. It really does help for you to perform
in your work in a customized way. So I find these applications very intriguing. A few respondents
were experimenting with this at the time of the survey. None in our survey had deployed it at scale. And outside of the report,
I'm getting a sense that many are spooked by the data privacy and accuracy implications of
such assistants. And I think that they're waiting for first movers to show the way, to tread the
path; for first movers to suffer before they jump into the pool themselves. In fact, make all the
mistakes, make the headlines, and, yeah, learn from their mistakes. And then there's
this third flavor of personalization, which is skill development simulations. For example,
AI coaches or conversation bots. And why do I consider them to be personalization?
That's because when you say something in your natural language, they give
you feedback and respond to you based exactly on what you said, picking up your mistakes, your style
of language, the context that you presented in your answer. Which, when you think about what
is personalized learning, that's exactly what that is. So that's a step change from the multiple-choice
e-learning scenarios that we have had so far.
So yeah, that's personalization for me.
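As a rough illustration of the internal-knowledge-assistant flavour of personalization described above, here is a minimal sketch; a real deployment would use embeddings and a vector store over approved company content, whereas this toy uses keyword overlap and simply assembles the grounded prompt that would be sent to whatever model the organization allows:

```python
# Illustrative sketch only: the "internal knowledge assistant" idea in miniature.
# Toy in-memory documents and crude keyword-overlap retrieval stand in for a real
# embedding-based search over company content.
DOCS = {
    "holiday-policy.md": "Book holidays in Workday at least two weeks ahead; your manager approves.",
    "client-services.md": "For enterprise clients, lead with the advisory package, then analytics add-ons.",
    "expenses.md": "Submit expenses within 30 days with receipts attached.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by crude keyword overlap with the question."""
    terms = set(question.lower().split())
    scored = sorted(DOCS.items(),
                    key=lambda kv: len(terms & set(kv[1].lower().split())),
                    reverse=True)
    return [f"{name}: {text}" for name, text in scored[:k]]

def grounded_prompt(question: str) -> str:
    """Assemble a prompt that asks the model to answer from internal sources only."""
    context = "\n".join(retrieve(question))
    return (f"Answer using only the internal sources below. "
            f"If they do not contain the answer, say so.\n\n{context}\n\nQuestion: {question}")

if __name__ == "__main__":
    print(grounded_prompt("How do I book a holiday?"))
```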
Then there are a handful of other answers in that question.
What are the expected benefits of using AI in L&D?
Improving efficiency and reducing costs is also a big one, actually very close to first
place.
But I think a lot of people kind of understand where that might come from.
I would like to hear if you have any great examples of that.
But other interesting options that had fewer respondents: facilitating
information discovery, providing extra skills practice, identifying skills, providing extra
knowledge testing.
So these are things that you've already alluded to, I think.
But, you know, having some sort of a coach in that.
Any other interesting free text responses that revealed anything here that you can recall that you'd like to address?
Oh, yeah.
One example that comes to mind is using AI to generate future capabilities.
So you take an industry's, what they call, future-state report, a detailed report including supply chain partners, stakeholders, and technologies involved in what that future state of the industry is going to look like, and ask AI to generate a set of capabilities, and then
align that AI-generated set to your existing database, find gaps, ask AI to fill these skill
gaps on the list, and so on. So the final list was obviously checked by a human, but the respondent reported that 90% of capabilities in the final set were AI-written and they required minimal adaptation.
So I think that's a really interesting way of using AI, especially in the context of skills-based organizations where the discussion is going these days.
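A minimal sketch of that kind of capability-generation workflow, assuming an OpenAI-style client; the model name, prompts, and the crude gap-matching are hypothetical placeholders, not the respondent's actual pipeline, and the output would still be reviewed by a human as described above:

```python
# Illustrative sketch only: ask an LLM to propose capabilities implied by a future-state
# report, then diff them against an existing capability database to surface gaps.
# Assumes the openai client and an OPENAI_API_KEY; model and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

def propose_capabilities(future_state_report: str) -> list[str]:
    """Ask the model for one capability per line based on the report."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[
            {"role": "system",
             "content": ("From the industry future-state report, list the organisational "
                         "capabilities it implies, one short capability per line, no numbering.")},
            {"role": "user", "content": future_state_report},
        ],
    )
    text = response.choices[0].message.content
    return [line.strip() for line in text.splitlines() if line.strip()]

def capability_gaps(proposed: list[str], existing: list[str]) -> list[str]:
    """Crude alignment: anything proposed that has no exact match in the existing database."""
    existing_norm = {c.lower() for c in existing}
    return [c for c in proposed if c.lower() not in existing_norm]

if __name__ == "__main__":
    report = "Supply chains become AI-orchestrated; partners share live demand data..."
    proposed = propose_capabilities(report)
    gaps = capability_gaps(proposed, existing=["Demand forecasting", "Supplier relationship management"])
    print("Review with a human before adding to the framework:", gaps)
```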
Yeah, that's really fascinating. 90% is an encouraging number. There's also mentioned
in one of the notes that you made as the authors, using AI to scan internal knowledge resources to
help contextualize learning. I'm curious if you think that these efforts are likely to stack on
top of each other where existing learning experiences pull in things
from those knowledge resources and result in future learnings. They result in some sort of
lesson and then that itself becomes a lesson, kind of like precedent, legal precedent, historical
precedent within an organization's library, that sort of thing. I think of McKinsey's Lily,
which was announced,
I think, back in August of last year or something like that. It sounds a lot like a librarian that just pulls from their, you know, hundreds of thousands of cases that they've successfully
or unsuccessfully done. So, you know, just kind of scanning through everything like Google and
looking for the right keywords, pulling out, you know, evidence and tactics and
whatever it is from specific cases, specific accounts, and then, you know, giving that as
a learning resource, you know, oh, you're working on something in this world in this industry.
Here's what we've done here. Here are some specific examples that you might want to read
up on to understand, you know, what kind of tactics you could use in this case. Do you think
that other kinds of companies outside of, you know, this sort of big consultancy space and those who can essentially develop their own tool, do you think they'll be
able to achieve this with maybe different kinds of data sets as well? Is this something that more
and more companies are going to have like a sort of an AI librarian that pulls on their knowledge
resources and helps people learn using those internal things? I think we need to be discerning
in terms of what kind of content lends itself to such applications
and in what context. Because there are a few questions that you need to ask yourself as an
organization. First, do you need that content to always be accurate? Because inbuilt
in the way AI works, there is always a margin for error. And the second one is, do you intend
to keep the human in the loop?
And a librarian is a good use case because what it does, it pulls up resources and excerpts that
might be useful for you, perhaps to support your learning, to give you examples, to give you
scenarios that you can consider, or even clarifications like a virtual teaching assistant.
But you, as a human in the loop, you take it from
there, right? So when it comes to McKinsey case studies, it is, I guess, a more advanced version
of what you've always been able to do; it's just a really supercharged search. At the core of it,
obviously, it's much more sophisticated. Now, content creators and repositories have been
playing around with this already. But in short, if getting something wrong isn't the end of the world,
and the human is an active participant, regardless, there are meaningful and impactful
cases of using AI in L&D for this, like a librarian of sorts. Now, if you need AI to serve you complete, highly technical and precise procedures that you intend to use
outright, the probability for error that AI inevitably introduces, however small, might be
too high. For example, for drug prescription or legal cases, that sort of content. So I think
it's about understanding the limitations and the risks and whether your context calls
for it.
And in very many cases, a human needs to be present to actually double check, okay, so
what exactly does this case say?
I actually have to read it in full to understand the nuance and don't just trust what's being
served to me. Yeah, ultimately, though, I think it's really important to think,
will AI simplify organizational capacity to preserve knowledge in an effective manner? So
I've seen plenty of organizations that, as they sort of make their digital transformation,
one thing that they didn't quite think of is the ability that that gives them to just record all that their employees know about the minutiae of their jobs, but also about
the industry and the work that they're doing collaboratively with others and with partners,
and that sort of thing. You know, digitizing everything really increases the capacity for
knowledge preservation and utilization. And I see what you're saying about just, you know,
having a human involved there. But my initial thought about, like, can this stack that knowledge?
If we do find a way to more or less accurately pull the resources that make the most sense in
different cases, do you think that's a noble goal for us to strive for, to have a librarian that
can in fact do that, and to use AI for knowledge
preservation? Do you see those things as primary goals of AI in the near or deeper future?
I think this is definitely a noble goal to strive for, but right now it is about figuring out
the details of how that is going to work in a way that actually achieves the purpose and doesn't
put the organization in hot water.
So as of now, you do think that's probably more risky than it is a good idea overall?
I mean, as an industry right now, we are in a situation where everyone is experimenting.
And that's the thing, McKinsey, what they're doing, it is a good experiment to see how it
works and to see what the limitations of that are and how to use it,
because they are not doing anything for the end user who's relying on the accuracy of their librarian to perform a job with very high stakes. So that's a great use case. And it is about
various organizations seeing how they can make that work in their context, and some are going to
benefit greatly from it. Yeah, I mean, I like McKinsey as a first mover
in this case that, you know, we'll see how that goes. They have a lot of data. So I think that's
going to be a good case study if and when we can observe how it has worked. I'm not sure how much
data they'll be willing to reveal because it is an internal tool, but I'm sure that with their
position in the market as a consultative global leader, they will probably hope to, you know,
share some things that demonstrate not only what happened here, but their prowess in that space with AI and everything.
So I guess we'll kind of see if they report on that. So let's get into the barriers, then the
things that make it hard to adopt AI. There are three categories that you go over, three classes
of barriers, technology, business, and individual. Do any of these stand out as the most common? Is
there anything that really felt like it was most commonly referred to or free text described,
that sort of thing? So there were standouts within categories. So for example, within technology,
data privacy was definitely at the top, sharing proprietary or sensitive data with AI tools,
that was coming across really clearly. And that's been the case for the last year in
most of the conversations that I've had with learning leaders. Among business blockers,
I would say for large companies, it's compliance. In some cases, it's outright restriction when it
comes to AI tools. But in other cases, it's just a lack of clarity about what is acceptable that prevents people
from actually using and trying to imagine how they can apply these tools in their work.
And for smaller companies, it was often the cost of time to proficiency with AI.
So something that we talked about in the beginning, which is about even if you can
access, if you can afford the licenses for AI,
it is about getting to a point where the AI output is worth the effort.
It doesn't require as much editing, as much checking, and it results in time savings.
So there is that.
And at the individual level, it's simply trust.
Trust in AI sources, trust in AI's outputs, even trust in that AI is going to
keep the data safe, which strongly relates to what I just talked about when it comes to technology
barriers. So I would say these are the standouts. When you talk about compliance, is that also
trust-based? You know, from like an IT perspective, is compliance generally like
the IT folks, they make the decisions as to
what kinds of tools need to be included or what kind of tools are safe for an organization,
and there still is a lack of trust and a knowledge that data privacy is not fully
secure at this point. When you say compliance, what exactly do you mean there?
Yeah, so in the survey, what you're talking about was a different category. It's related. It's also another one of the business blockers, which is related to IT. But when I say compliance, I mean just organizational
or industry regulations about whether we can use AI and how we can use AI. When it comes to blockers,
there were either restrictions and saying that as the organization, either we decided that we're not deploying it
right now, or we're going through a process to decide whether we're doing that. But in the
meantime, you cannot use it on your own devices; we're not creating instances of GPT for you. Or it's
the organization not communicating that at all. And people are being like, okay, so what are we
allowed to do? I don't want to get in trouble. I don't want to embarrass myself and the organization.
Okay, so let's back up then and maybe define AI a little bit. Because in the report,
there is a brief note about the saturation of the term, which I think we've all experienced
at this point. I mentioned the conferences that I went to. If you've been on LinkedIn for even
five minutes in the past year, you've probably seen a dozen
people posting about their thoughts on AI and in some cases, like, you know, how they're
using AI and what they're doing maybe with chat GPT or a bigger tool.
It has gotten the SEO treatment.
It has been hyped up.
The terms AI and artificial intelligence
have both been utilized as a way to create
brands. You know, it's so hot right now that people are just sort of champing at the bit.
I think it's slowed down, but for a while people were champing at the bit to utilize that to
further their own platform and to further their own product or whatever it is. And it was mentioned
in the report that people are seeing it among vendors basically everywhere. And it's really started to obfuscate what AI actually means and what it actually is and which tools are truly utilizing AI.
This happened to me very seriously when I went to ATD.
It just felt like every booth was like, hey, artificial intelligence now.
And you could tell that not all of it was like a serious application.
And in some cases, it didn't feel like artificial intelligence at all.
So how do we address this, the issue of hype and oversaturation and obfuscation of the definition of AI?
And what frameworks can we actually use to identify what truly is artificial intelligence?
Josh Bersin has proposed three categories for AI solutions in HR, which I find quite useful in helping me think about and sort of categorize: okay, I see this thing, what kind of AI is it, if it is AI?
So his three categories are added on AI, built-in AI, and built-on AI. So if you think about
AI that's added on, you can think of some application which has some generative AI that helps you create a title for your post or some image or to generate some content.
So you know what I'm talking about right now.
A lot of applications have the sort of thing where this little magic wand that you click and it suggests some content for you.
So that's what he calls AI, which is
added on. So the other one, the AI that is built in, we can think about it as something like content
recommendations, where the engine helps you recommend content. So that's an example of built
in AI. And applications that are built on AI, so AI-native applications, they are the latest generation.
And you think about some talent platforms, especially the ones that are operating in
the SBO-related, skills-based organization-related field, which are built on advanced AI models.
And they do things like they create a workforce graph and they allocate skills to where they're
needed in the organization.
So the entire application is just built on AI.
So like workforce intelligence and talent intelligence platforms, those sorts of things.
Yeah. So he has some examples here.
Applications such as Eightfold AI, Gloat, Beamery, SeekOut as built-on AI, sort of AI-native applications.
And I think when you were talking about your conference experience, this is something
that I share as well. One spurious claim that some vendors make is that they are powered by AI when
it's actually just a little algorithm that doesn't learn itself. It's just an algorithm. Or even more
often, they use AI, but they use it in the most rudimentary way. So kind of technically, yes.
So for example, they integrate generative AI to
generate draft content on a topic without much fine tuning and pretty much do the same thing
that you would be able to do just using your good old ChatGPT yourself. And some vendors don't even
have in-house machine learning expertise. But the way they are positioning their product,
they make it sound like some other vendors that have
worked with a team of machine learning engineers for a decade on proprietary models. And these are
not the same. But I think because of the hype, vendors, they're forced to be noticed; as you say,
it's an SEO issue. To be noticed, they have to put AI in their name, because otherwise they're just going to be
overlooked. Maybe sometimes through gritted teeth, I would imagine, because suddenly a lot of products just changed their slogans.
And when it comes to hype, I think it all comes down to buyers, meaning internal L&D professionals
educating themselves to be able to discern what type of AI or what kind of application and depth of AI a vendor is talking
about. On top of that, obviously, you can collaborate with your IT team or hire professionals
to advise you. Obviously, these are not mutually exclusive, but I think there is no going around
the fact that you just need to know what kind of products you're buying. The powered by AI thing, I've definitely seen that
a handful of times. And I can just like see the marketing team in a Zoom meeting. Like,
how do we convey this to the customer? What's the verb that we use in our new slogan? And
I just know that they were thinking exactly what you just described. You know, "powered by"
isn't necessarily untrue, but it doesn't actually make any claims about
the strength or degree to which this is a proprietary algorithm. So "powered" is a perfect
word. I can just like see that happening in those marketing meetings. And I saw that myself and my
first thought was like, yeah, but like, what does powered mean? You know, so I get what you're
saying there. The report also includes this common statement about evaluation data. It's very hard to
determine an ROI in L&D. And in this case, you guys describe, you know, good evaluation data,
strong evaluation data for learning solutions as just generally hard to come by. And AI,
it has limited capability to address that right now outside of drawing some insights from, you
know, a large swath of open-ended questions and a couple other small things. Do you think this is a function that
will expand and make that evaluation data better just by virtue of how we can actually look at
open-ended responses and then have AI pick out keywords and really synthesize that data?
Can we expect better data analysis just as AI grows, or does that come from the learning design itself? What do you think we
have to do there? When it comes to AI in learning and learning measurement, I think people generally
have expectations that are way too high. They think that AI is going to be like magic, that
it's just going to come in and finally we have all our ROIs. But the thing is that measurement
goes beyond learning design. Good evaluation starts
with good data, period. AI can analyze data sets impressively well, but you need good data
in the first place. And I would say that impact measurement starts not with AI, but with L&D
asking different questions. So how do we connect our work with the business? What performance
indicators are we moving? Do we even know what KPIs people that we're serving care about? How do
we get access to that? How do we automate that access, perhaps integrate those databases so we
can measure and iterate often? So when you think about it from that perspective, AI is but a workhorse for the last mile.
Now, at the more granular learning design level, continuous measurement in learning
design has been possible for a long time, much like with website analytics, even before
AI.
Again, it's more a matter of intent and forethought than the advent of generative AI. And it's called data-informed
learning design. And someone like Lori Niles-Hoffman has been talking about that for years.
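As a small illustration of AI as that "last mile" workhorse, here is a minimal sketch of theming open-ended learner comments with an LLM, assuming an OpenAI-style client; the model name and prompt are placeholders, and the point above stands: it only helps once the underlying data is good.

```python
# Illustrative sketch only: summarise recurring themes across free-text evaluation
# comments. Assumes the openai client and an OPENAI_API_KEY; model and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

def theme_feedback(comments: list[str]) -> str:
    """Group learner comments into themes with rough counts and one representative quote each."""
    joined = "\n".join(f"- {c}" for c in comments)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[
            {"role": "system",
             "content": ("Group these learner comments into recurring themes. For each theme give "
                         "a name, an approximate count, and one representative quote.")},
            {"role": "user", "content": joined},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(theme_feedback([
        "The scenarios felt realistic but too long.",
        "Loved the coach, hated the quiz.",
        "Too long overall; the practice sections were the best part.",
    ]))
```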
Great. Okay. We're running up on time a little bit. So I do want to ask you before I let you go
about the next steps section at the end of the report, because you guys did something really
cool, which is you don't just give the information and say, hey, go do your best with what we have here.
You give a little slide on where to start,
what comes next, and also a bunch of resources,
which is really nice too.
Just, I'm wondering if the Josh Bersin thing is in there,
but there's a handful of things
for sort of keeping up what's going on,
or if you're beginning with AI,
what can you read and go through?
So I'd love to, if you don't mind, look at the next steps.
So you talk about, first of all, understanding where you are, identifying your
goals with AI, as you just sort of mentioned, and then finding your own use cases for AI. So,
you know, wherever you would like to start within that, if you could give some sage words of advice
for our listeners. Yeah. So here I will start by saying that if it was even half a year ago,
the advice would have been: experiment with ChatGPT, see what's going on, sort of get yourself comfortable, play around.
But now I think it's time to adopt a more structured and focused approach, especially in 2024, when things are getting real.
So yeah, the three steps, understand where you are, identify your goals with AI and find your own use cases.
The first one is about doing an audit of the systems and the tools that you have in place.
Some of them may already have AI integrated.
Mapping out your key processes, understanding and mapping out the skills in your team.
Perhaps asking them about the experiments that they have already done using AI even outside of work. Maybe they have some ideas or some examples that they can share with
the wider team and obviously looking at and clarifying use policies in your organization.
So that's the first step, just taking stock of where you are as your team, as your organization.
The second one is identifying your goals with AI, which is
really not that hard because you already know what your most persistent challenges are in your flow
of work. So you identify those challenges, you look at your KPIs, you prioritize them based on
importance and value, and then you identify, looking at that map, you identify where AI can support you with
these tasks. And after that, once you identify these little snippets, you set up simple experiments
with a goal and success criteria, even as simple as, okay, so I want to cut my whatever, whatever
content production time in half. I'm going to be using AI for that specific
process for a few weeks, and then we will reconvene and discuss the results. Something easy like
that. And once you're done with that, the third step is finding your own use cases. And I know that a
lot of organizations and individual professionals, they're looking for someone else to show them the way, as we talked about, and work out the best practices. But as of right now, we don't have best practices yet.
So I would say get together with your team to consider the use cases that might apply
in your own work. Discuss what tools are proving useful to you, perhaps speak with other departments in a company
about how they have been using things.
For example, people in marketing may have a lot to share
that might help you in content development.
And finally, experiment with how AI can help you
address these key challenges,
solve these case studies that you come up with,
and do it in a
perhaps structured way at least a few times. Perhaps a workshop, perhaps a challenge,
perhaps a hackathon. Everyone's talking about hackathons, but one interesting approach to that
that I've seen is as a team setting a challenge that we are going to see how much we can push AI
to achieve this goal,
say for a month, completely asynchronously.
And then after a month, we're getting together and see that,
okay, so who solved it?
Who managed to conquer the challenge and to use AI in that way? So things like that that don't necessarily require
two full days of hackathon to build something useful.
And before I let you go, one last question here.
I forgot that I actually saw a quick video of yours on LinkedIn where you reminded everyone of the importance of motivation
in this whole process. When it comes to learning, we still need to remember that motivation is
largely internal. I mean, there's both intrinsic and extrinsic motivation sources, but the technology will never be enough on its own to get people to learn and to make them learn in a more engaged
manner. At the end of the day, we still have to think about the sources of motivation. So
I guess I would just quickly ask you, and don't feel like you have to go too deep on this, but
do we always have to keep in mind the motivation from sort of a personnel relationship management leadership standpoint?
Or do you actually see that AI might really make a play for motivating people to learn better just by, you know, the drastic improvement of quality of content and how it can identify what really excites us?
What do you think about that?
Oh, you want a quick answer, huh? Fundamentally, learning is a cognitively engaging and difficult activity. And content alone does not motivate you to learn. It might motivate you to have a look at,
okay, so this looks like a cool video, or there might be that aspect. But ultimately, AI creating
some sort of shiny, amazing content is not, in my opinion, going to motivate people.
We are still going back to the fundamentals of what is relevant to that person and using AI to shape the learning experience or to serve the resources or whatever to address that relevance, to address that pain point. And when it comes to organizational learning as well, even if you think about yourself,
as I said, there's obviously a little inkling of, oh, this looks interesting.
I'm going to have a look.
But for you to dedicate the effort required to learn, you need to both feel that this
is relevant for you, that this is something that you need to do.
And we like when someone else cares,
when someone else cares about us and how we develop.
So communicating that
and showing us the way
and supporting us in a human way,
I think is still very important.
Although I do see how at scale,
AI does help bring learning
perhaps to more people. But I think the most
quality learning is going to be done with human intervention. Absolutely. I agree. Okay. Well,
Egle, thank you so much for joining me once again. This was a fantastic conversation. I think all
that we've said today will probably change quite drastically in a matter of months and, you know,
especially within a year and beyond. So I would love to have you on the show if there's another report that comes out or just to discuss
the state of things once any new major interventions have taken place. So again,
thank you. Before I let you go, can you just let our folks know where they can find you on
the internet, where they can find the report and any other information that you'd like to
share about your work and your brand? The best place to find me is LinkedIn,
but I guess just look up the way you spell my name to find me: E-G-L-E V-I-N-A-U-S-K-A-I-T-E.
Is that it? Yes. Yes. Just rolls off the tongue. Perfect. And the report is on Don Taylor's
website, I believe, donaldhtaylor.co.uk. Yes, yes, that's the one. Okay, perfect.
Again, thank you so much for joining me
and for everybody at home,
thank you for joining us as well.
We will catch you on the next episode.
Cheers.
You've been listening to L&D in Action,
a show from getAbstract.
Subscribe to the show
and your favorite podcast player
to make sure you never miss an episode.
And don't forget to give us a rating,
leave a comment,
and share the episodes you love.
Help us keep delivering the conversations that turn learning into action. Until next time.