L&D In Action: Winning Strategies from Learning Leaders - Buffer Zone Learning: AI-enhanced Feedback for Judgment-free Knowledge Demonstrations
Episode Date: February 27, 2024

What is the most effective way to assess learning among employees? What is the most efficient way? What's the most affordable? What's... the best way? It's likely that the answer to each of these questions is unique, and could even differ from organization to organization. If you're anything like the rest of us, with too much data and not enough time, you may be hoping that AI will eventually step in and support the learning evaluation process, addressing at least most of these concerns simultaneously. This week, we speak with the CEO of Bongo, Josh Kamrath, whose service offers a unique perspective on the efficacy of AI-enhanced learner feedback.
Transcript
You're listening to L&D in Action, winning strategies from learning leaders.
This podcast, presented by Get Abstract, brings together the brightest minds in learning
and development to discuss the best strategies for fostering employee engagement, maximizing
potential, and building a culture of learning in your organization.
This week I'm speaking with Josh Kamrath.
Josh is a thought leader, risk taker, and chief executive officer.
He leads the charge at Bongo, a video assessment solution that enables experiential learning
and soft skill development at scale.
Josh began his career in sales selling middleware, eventually discovering Bongo in its infancy
and taking on a commission-only cold-calling position to sell the product.
Just over a decade later, he has risen to the C-suite, leading the now-venture-backed
company into the future.
Josh is also a father, world-traveller, and chicken owner.
Now, dear listener, if you are familiar with present industry trends or for instance follow
Don Taylor's Global Sentiment Survey, you know that AI is top of mind for much of the
L&D industry right now.
While Get Abstract doesn't partner or work with Bongo directly, and this podcast also
doesn't endorse any other products or services, Josh and the brand represent an important
node in the AI-enabled product conversation.
This is a complicated and sometimes fraught topic, as you will see, so I implore listeners
to continue
doing research on what it means to integrate AI into the learning process. This includes
experimenting with new tools, including the new Get Abstract AI. You can ask it anything at all
and you'll receive a thorough series of recommendations based on curated authoritative
resources. Now, let's dive into the conversation with Josh.
Hello and welcome to L&D in Action.
I'm your host Tyler Lay,
and today I'm speaking with Josh Kamrath.
Josh, thank you so much for joining me today.
It's great to have you on the show.
Yeah, thanks for having me.
I've had a career in education,
in some fashion, digital education for almost a decade now.
It actually hurts to say that I've been working
for almost 10 years now,
but I've worked in
the publishing industry where digital learning to supplement books was really big.
I've been a content creator making long form courses for bestselling books, and now I work
with Get Abstract.
And I've always seen these learning discrepancies with different populations, obviously.
So in the university level, there's something really important about retention
and proper studying habits.
In the workplace, it's generally about learning
and then applying. That seems to be sort of the biggest
gripe that I hear from learning and development folks.
How do we take learning and put it into action?
But retention is always an issue too.
Just how do we learn effectively?
How do we learn the right things?
There are many, many big questions, but the big one that I've seen since I've been working in L&D
is learning and applying. And what I see is when it comes to systematic training in organizations,
assessment tends to be superficial. It takes too long to view the metrics that represent
whether people have applied what they've learned, or there just isn't really any assessment at all.
You learn something and then you're sent off to do the thing or figure out how that's
going to impact your job.
If it's something simple like training for compliance or whatever, that's a different
story.
But when it's something that really requires assessment, it just feels like there's always
something missing.
So I want to ask you, why is this?
Do you have any idea?
Is it a poor function of learning design?
Is it poor management and oversight of the learning? Is it something else? What do you think?
I think there's not any one particular item, but my argument would be it's a lot of different
elements that all kind of combine together to create that hardship. And like you mentioned,
depending on the application or the specific
context, there's different elements that matter and make it easier or more difficult or challenging
environments. If I had to narrow it down to one thing in particular, though, I would say
it takes a lot of time typically, whether it's in the education sector or the corporate learning
sector. It just takes a lot of time to measure knowledge.
And it's like you mentioned, to measure the application of knowledge.
Do you think it takes a lot of time to measure and assess the actual efficacy of that learning
itself in addition to the amount of time that it takes to learn?
Is learning just a long process and we've been trying to squeeze it down too much?
We all know how attention spans are shrinking and everything,
but do you think that's becoming an issue as well?
I think in some capacities, yes,
but there's been standardized testing for a long time
and there's psychometricians that basically apply statistics
towards the actual assessments.
But that, again, it takes a lot of time.
And with adding psychometrics,
it takes a highly educated,
someone with a very specific skill set
to apply efficacy towards question sets.
And in my opinion, for again, some specific contexts
that's like super, super important, right?
If you're using a standardized test
or trying to measure knowledge through an assessment
to get a job
or have some kind of high consequence output, then efficacy is incredibly important.
But oftentimes, from my perspective, being able to use the knowledge and like you mentioned,
apply that knowledge is ultimately what's most important.
You can be the sharpest tool in the shed, but if all that knowledge is locked away and never actually used or applied, then it's not really adding a ton
of value to the world.
So your product, Bongo, I'll start with this. My girlfriend is a doctor and we have talked
a good amount about her dissertation process and how she earned her PhD. And I actually
had an even longer conversation more recently with her close friend who just finished
defending her dissertation and
Bongo kind of feels like it's taking a page from that sort of dissertation defense, based on the research that I've done.
You know, it seems like what you're asking people to do, or what the product does, is it asks people to present their knowledge as a video,
primarily, but literally present it themselves, as people speaking about the topic, to demonstrate that they not only have learned something,
but they understand the importance of it and how to apply it and that they have a deeper
understanding as to how that knowledge will impact what they do.
Is this where learning needs to go?
Do we all need to just take everything that we learn more seriously, not as a proper dissertation, but do we need to take what we learn, teach it to others, present it?
Is that kind of what we have to start doing if we want to really apply effectively and
grow based on our new knowledge?
Of course, I'm a little biased because I think experiential exercise and having a use case
with a teach back is an incredibly effective and powerful form of both
demonstration of knowledge, but also it moves the needle quite a bit on combating the forgetting curve as well and
You know, I think a dissertation defense
we actually have that as a use case for a number of doctoral programs in the education sector.
And I think that is spot on in terms of the culmination
of what Bongo is capable of, both being able to have someone
go through that experiential exercise
and just go through the workflow of putting their knowledge
into action, but also getting feedback,
whether it's human driven feedback or AI feedback,
both on the delivery, as well as the contextual,
are the concepts correct?
That's, in short, what our product is or does.
I think with a dissertation defense, typically that's a huge amount of knowledge and a much
more drawn out presentation.
That's a great use of our product, but we find it a lot more frequently being used to just
demonstrate, can that learner, that person give constructive criticism effectively?
Or if they read a product release notes, are they actually able to articulate that knowledge
around that new feature set? So it can be big and complex, like a dissertation defense,
but oftentimes it's just a 30-second or 60-second little video snippet of a skill demonstration or knowledge demonstration, and
helping that learner or that person really hone whatever skill they're trying to demonstrate.
So I want to challenge you a little bit here because you mentioned like mitigating the
forgetting curve. And what I've heard endlessly since I was in school is that
we're not studying the right way.
We haven't been taught how to study correctly, and I think we take those habits into our adult lives when we're learning.
You know, in school, I was a big crammer.
I would sort of study toward the end, but that was mostly because I was doing two degrees simultaneously,
and it was just a lot of work.
But by habit I'm a crammer, and I tell myself that under pressure is where I excel.
And we all know that's not really the way to do it.
It doesn't feel right and science has demonstrated that distributed practice is a better way
to do things.
And there are a lot of different systems established for what the most effective learning is.
And unfortunately, I feel like a lot of test taking in school and in the education industry
doesn't
support that because we just aren't taught how to do it and because of other factors,
you know, that impact how students do things.
And now as adults learn, you know, I'm not sure how much the habits really change, but
to me, having a system whereby you present your knowledge still in some cases can support
that sort of habit where it's like, all right, I got to make sure that I know this stuff
for the purpose of this presentation.
You know, like how can I demonstrate through this presentation that I actually know this thing?
So I'm just curious as to how you feel about that. You know, if we're studying toward a specific goal,
which is to demonstrate our knowledge in this thing, but not necessarily to apply it, I mean,
theoretically that is the goal beyond the goal, but that initial goal is I got to pass this sort of like video assessment to demonstrate that.
So what are your thoughts around that idea?
Yeah, I guess that is like an interesting take on just the assessment community or their workflow.
And, you know, I was the same way back in school where I would cram the night before and, you know, as soon as I would take the test, I would effectively forget that information. But that's just not an effective way to learn.
The desired outcome of going to school is that you both learn facts and figures, but
also learn concepts.
And when you're cramming the night before, maybe you have a couple concepts that you
can hit home on an assessment or a traditional test.
But math, for instance, often times, you know, you need the foundations
and the building blocks and, you know, having different concepts kind of come together is
how you do more advanced math, for instance. That may be more of a straightforward, like,
linear example, but I think it's very relevant for, like, economics or really any body of knowledge.
So I guess just kind of speaking to my experience: although that's how I went through university, cramming the night before,
reflecting back on how I learned all of that knowledge, it wasn't cramming the
night before that actually taught me, if you will, or where I learned. It was
sitting in front of the class and participating in the dialogues in the
actual classrooms or completing case studies.
So again, doing the things where I personally learn best.
And actually that's where most folks have the most effective learning take place is by actually doing.
And I suppose I'd be the first to say that Bongo isn't going to be the silver bullet
or the end-all be-all of assessment.
But it is a new way to facilitate assessment, especially observational assessment.
So like in the workplace, the most common form of observational assessment is the manager
sitting behind you, watching over your shoulder as you make a phone call or do something.
And obviously that does not scale at all.
It's always one on one.
And that's really where Bongo,
especially with our AI capability,
where we've really spent a lot of engineering and focus,
is trying to scale observational assessment.
And certainly that example in the workplace
is very relevant.
But again, going back to me in university,
having the conversations in class and having
that discussion take place, that again is like a one-on-one kind of interaction or maybe
one-on-two kind of interaction might be taking place in front of the rest of the class.
But again, me actually participating is how I personally learned the best.
Bongo allows every single student to actually participate
in those open-ended questions or answering those open-ended
questions.
And again, it's both about the assessment of can that person
actually apply what they learned or what they know.
But also, it's about going through that experience
of having to articulate yourself or have a response.
And of course, it's awesome to learn at a university or, you know, in a one-week intensive
onboarding kind of session.
But ultimately, what's going to be most important in your workplace is actually taking the knowledge
and being able to just do something with it.
Yeah.
I want to dive into the AI question soon enough because that's one of the major reasons I brought
you on is the AI capability.
But I do want to just ask about the importance of optimizing learning while it's happening. So you're talking about
you know, actually having dialogue and having conversations, especially in a university setting.
But I think that's also really critical in the corporate learning sector too,
is not having people be isolated. And I think especially then learning just becomes like another
task that you have to accomplish. Whereas when it's group or cohort based, it's just it's much more serious.
And I think you learn five times as much when you have conversation around it.
But ultimately, what are the most important things that we have to do to optimize learning
while it's happening so that we reduce that time to assessment or that time that it really
takes to determine if the learning was effective, and then improve it from there? How do we actually optimize learning live, in addition to these
sort of like conversation things or digging more deeply into the dialogue and the conversation?
How do we really optimize learning live? For sure. I would say getting into a
groove of practicing, again, kind of going back to when we were in college, you read through
chapter seven and there's always question sets at the end that are multiple choice or a lot of
times they're free response. And to be honest, I would just kind of blow those off. But, you know,
that's not a good thing. They're there for a reason. And it turns out whenever I would be
struggling in a course or a specific area, actually going through those
practice questions was a good way to reinforce the lessons of that chapter. So that's just a good,
I guess, best practice. But getting into a workflow or, yeah, just a flow of being able to
practice what you're learning and practice applying what you're learning. Both help reinforce that skill. With the AI capability,
it creates a judgment-free zone of practice.
So what I was talking about with me raising my hand frequently in class, most people aren't
like that.
Most people aren't willing to raise their hand for every question and want to participate.
The only reason I was is I didn't want to do homework and study.
I knew if I participated, I would actually learn the materials, but that's a reality
that people don't want to feel like they're judged by their professor, their manager,
their peers.
A huge outcome of adding AI to our tool is that people don't feel like they're being
judged when they're being evaluated or getting feedback from AI.
So having an environment that's conducive to practicing skills or practicing application
of knowledge leads to much better outcomes.
I had not thought of it that way actually.
That's really remarkable, because one thing that I've heard more recently, and I've been told this for a long time, but I think it was Lauren Waldman, the Learning
Pirate, who was on my show somewhat recently.
She mentioned that we spend a lot of time learning when we're like not in the right
headspace for it, you know, when we are dealing with stress or under duress or something else
in our lives is impacting how our brains are functioning and it's just never going to be an effective
setting unless we can really calm ourselves and in some cases that means that we need to be in a certain
season of our life to really be the best learner that we can be or it means that we need to practice
meditative things and just learn how to settle ourselves into a better learning space.
But the idea of eliminating observers, for instance,
or the pressure of competition with your classmates
or with your other coworkers and that sort of thing,
and using AI almost as like a buffer in there
is very intriguing.
I will say though that at the end of the day,
a human probably does have to come in and observe the scores
and say, hey, what does this mean about your performance
and that sort of thing?
But is that the point, that just adding that singular buffer of AI into the actual moment of presentation actually makes a big difference in how people present what they're learning?
Is that what you're saying?
Yeah, no, that's exactly what I'm saying.
And I would absolutely agree that from our perspective at Bongo, we view humans being in the loop still as a critical element.
So there's that.
And then additionally, having the judgment-free zone, you know,
allowing for exactly what you said, people to get in the right headspace
to be able to apply what they know.
We didn't foresee that in the crystal ball.
We originally were adding the AI to dramatically reduce the need for human evaluators
to spend a huge amount of time on evaluation.
And one of the unforeseen outcomes that our users report back to us is that they just
feel like they end up practicing a lot more.
And because they're practicing a lot more, it's like three and a half times more with
AI compared to human evaluation alone.
Reinforcing those skills or reinforcing that knowledge ultimately leads to better
outcomes.
Okay, but here's the thing.
ChatGPT is about a year old now, and that was built using reinforcement learning from
human feedback.
This feels a lot like reinforcement learning from AI feedback for humans right now.
That kind of feels like the direction that we're going. And that's a big step, you know,
like suddenly deciding what metrics AI alone
should utilize to determine
how well somebody has learned something.
The metrics of, I mean,
there are a million things that you could choose.
And I think Bongo has been the arbiter of that
in the case of the product.
You know, you're deciding exactly what those metrics are.
Why don't we have that conversation then?
What are the things that you think are important to measure?
Whether Bongo is doing that, I think let's speak more empirically.
What are the things that theoretically, if AI is going to act as a coach or as an
assessment system of some sort in an ideal world, whether Bongo is achieving that
right now or not, maybe in the future even.
What are the things that AI needs to be able to observe about humans who are presenting
or demonstrating knowledge for it to be an actual valid coach?
Sure.
If you would, imagine it in two different buckets.
So there's the delivery.
So how fluidly someone's articulating themselves, those are actually quite easy to define,
or at least a lot easier to define across the user base.
So things like confidence versus nervousness, how informative or persuasive were they?
And we present that not as a definitive score, but more on a range.
So in some cases, you want to be more informative or you want to be more persuasive.
I would say in almost all cases, you probably want to be more confident, but
how it's presented back to the user is effectively a range on a specific
paradigm.
Again, that, with a bunch of engineering, is pretty straightforward to define. The
contextual business, though, of, like,
did that person actually get the concept correct? Are they accurate in their information?
That is orders of magnitude more complicated. And there's a number of different ways to approach
it without trying to get too technical. We've actually taken the philosophical approach of
keeping the human in the loop.
So the instructional designer or the manager, the person authoring the assessment, they
effectively submit source material.
So again, it could be chapter seven, it could be product release notes, it could be an awesome
example of someone doing a customer service interaction or a sales call. And the AI consumes that source material
and then generates learning objectives.
Like what are the key aspects
and the critical components
that are probably most important?
It presents that to the instructional designers
so it still keeps the human in the loop.
And then that instructional designer or manager
uses their human intuition to basically validate, yes,
this is important, no, that's not important.
And that's where the evaluation effectively comes from.
So the AI is effectively doing, in a way,
like the prompt engineering to do the evaluation around
the content and contextual nature of the presentation.
So we believe in, again, keeping the human in the loop. Just dumping all knowledge,
like the entire knowledge base, and letting AI figure it out, in our opinion,
is just not a very responsible way to go about the calibration of measurement or calibration
of the assessment.
And our way, again, philosophically, is a lot more pointed to the assignment in question
and relevance to what that question set specifically is.
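To make the workflow Josh describes concrete, here is a minimal sketch, purely illustrative and not Bongo's actual implementation; every name in it (Objective, draft_objectives, human_review, evaluate) is hypothetical. Source material goes in, an AI pass drafts learning objectives, a human validates them, and only the validated objectives drive the per-assignment evaluation.

```python
from dataclasses import dataclass

@dataclass
class Objective:
    text: str

def draft_objectives(source_material: str) -> list[Objective]:
    # Stand-in for the AI pass that consumes source material (chapter
    # seven, product release notes, a model sales call) and drafts
    # candidate learning objectives. Here: one per non-empty line.
    return [Objective(line.strip())
            for line in source_material.splitlines() if line.strip()]

def human_review(drafts: list[Objective], approved: set[str]) -> list[Objective]:
    # The instructional designer or manager stays in the loop: only
    # objectives they validate survive into the evaluation rubric.
    return [o for o in drafts if o.text in approved]

def evaluate(transcript: str, rubric: list[Objective]) -> dict[str, bool]:
    # Stand-in for the per-assignment AI evaluation; a naive keyword
    # check here. Nothing is retained once the assignment is done.
    return {o.text: o.text.lower() in transcript.lower() for o in rubric}

notes = "Feature X supports bulk export\nFeature X requires admin rights"
rubric = human_review(draft_objectives(notes),
                      approved={"Feature X supports bulk export"})
print(evaluate("So with feature x supports bulk export, any admin can...", rubric))
# {'Feature X supports bulk export': True}
```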
And you can also kind of think of it as, like, whereas consuming all knowledge, like the entire
knowledge base, that's like a hard drive, our philosophical approach is more like RAM, where it's learned
for that assignment and then forgotten after the assignment is complete. There's a bunch
of benefits from a security standpoint of our zero-shot approach, as it's called
in the AI world. That's what it is today. I think the other part of your question, in
terms of like, from a future state, what is good. So today, generally, everything's centered around what's being articulated.
So what is that learner verbalizing?
And that's what's being measured in terms of like, they're correct or they're incorrect.
In the hopefully near term future, we're targeting later this year, being able to incorporate
visual components into the equation as well.
That's not necessarily just like eye contact or facial expressions.
We've actually already had that in a development environment that we've been testing on.
When we talk about incorporating visuals, it's more like, where's that person clicking
around on the screen?
So being able to expand the use case of, like, hey, learner, you know,
not just demonstrate that you know how to greet a customer or, you know, how to argue
your dissertation, but actually show how to use this product. And the tool, you know,
again, theoretically, it's not in production yet, will know the proper click path and whether
that person's actually doing the thing on the computer the right way.
I don't want to open up a can of worms, but I think I'm going to open up a can of worms
here because one thing that has been an AI point of criticism is, I mean, it's been
like more heavily like discriminatory things that accidentally pop up with certain tools
and I don't want to say that's the direction I'm going right now, but what I'm thinking
is that the way that people present themselves varies radically.
Like between you and I, the way that we speak, the way that we gesture, the way that our
voice intonates, the octave ranges that we utilize, and even my own example of myself
personally on my podcast versus me on the Get Abstract Instagram.
When I'm on the Instagram, I'm talking like this and I'm using different levels of my voice
to really make it seem like I'm an authority
on this topic that I'm talking about
because it's also fun, it's different.
But I would argue that like those are different levels
of demonstrating understanding.
And I've also met with many, many different,
you know, public presentation coaches,
and did Toastmasters for a while.
And, you know, you can learn how to present more effectively and be more convincing.
You know, that's a whole thing in our society is like influence.
That's a huge thing that determines the path of nations, you know,
who becomes the leader of this country and you can fake it, you know,
you can fake understanding something pretty effectively.
So how can AI do this?
And it's not just, you know, how do we suss out faking it?
But how do we determine that, like, okay, maybe this person,
they're a nervous person.
Some people are more nervous than others.
And how do we determine that they still really know this really, really well?
Because I do think that there are a lot of people who are like super introverts
that have really elite levels of knowledge.
So how do we make up for these differences?
Yeah, no, that's a super good question. And we score on different aspects. The delivery,
so how many filler words are used and how nervous or confident you are, those items
we currently don't incorporate into the actual grade, if you will. That's left to the human. So we present the information,
and almost always that information, confidence versus nervousness, for instance,
is used to radically improve the grading or evaluation rate. The human
evaluators can score that part much, much faster. Where we do score, and we actually call that smart scoring,
that score is on the actual,
did they get the concepts correct?
Because, exactly as you're insinuating,
you might fumble along,
but then still get the answer correct
and the answer is still correct.
So, you know, we basically discriminate
in terms of what we're scoring on. The other
complicating factor beyond what you described is different cultures have different definitions
of good, even within the same language. And that's been another large technical obstacle,
I guess. Now it's more of a feat that we've been able to tackle. But being able
to auto-calibrate our tool and tune the AI based off people's dialects and the specific languages they're
talking. So, for instance, both of us are Americans. But if one of us was like a Kiwi from New Zealand,
or like Australia or somewhere, the tool, you know, you have to speak for 12
or 15 seconds or so, but it'll actually detect that there's a different dialect of
English being used. And then it recalibrates itself based off that dialect. And, you know,
I guess you'd mentioned Toastmasters. We actually initially used TED Talks as the initial calibration for the delivery.
But again, we would be advocating this is still just a tool, right?
It's not the end-all, be-all.
It's working towards making the evaluation happen faster, facilitating an experiential exercise,
and hopefully radically reducing the need for time being spent on the human evaluator.
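As a rough illustration of the delivery-side signals Josh mentions (filler words, pace), here is a toy sketch, assuming a simple transcript-based approach; the filler list and the delivery_signals function are hypothetical, not Bongo's actual scoring. The point is that these signals are surfaced as raw observations for a human reviewer rather than folded into a grade.

```python
# Toy illustration: delivery signals computed from a transcript and
# reported raw (not graded), so a human interprets them in context.

FILLERS = {"um", "uh", "like", "basically"}  # assumed list, for illustration

def delivery_signals(transcript: str, duration_seconds: float) -> dict[str, float]:
    # Normalize words, count fillers, and derive pace and filler rate.
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    filler_count = sum(w in FILLERS for w in words)
    return {
        "words_per_minute": round(len(words) / (duration_seconds / 60), 1),
        "filler_rate": round(filler_count / max(len(words), 1), 2),
    }

print(delivery_signals("So um this feature uh lets you export in bulk", 12.0))
# {'words_per_minute': 50.0, 'filler_rate': 0.2}
```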
But you had mentioned GPT coming out about a year ago.
Just within that year, it's continued to accelerate and advance.
And that's, I think, a really exciting thing is, and certainly our perspective is, we want
to leverage these tools to make life easier and make evaluation more authentic in terms
of does that person know or are they
BSing? But we also don't want it to completely replace a human evaluator. We still want them
to be in the loop.
Of course. I actually was first thinking about Carmine Gallo, who wrote the book Talk Like TED,
when I'm thinking of examples that come to mind. Carmine was formerly a news anchor, and then he turned into a public speaking coach,
you know, at a really high level, and I shot a video with him, and nobody sounds as authoritative
as a news anchor when it comes to knowledge.
You know, the cadence, the confidence, the lack of filler words, it's all perfect, you
know, and you can be taught those things, but also is it fair to establish that as the grade
to strive for? Do we actually want to sound like TED Talks or are there other ways that
we can demonstrate knowledge that is still authoritative and sufficient to demonstrate
truth? That's a whole other conversation. Maybe I'll have you back on the show to get
into that.
You actually talked about internally at Bongo having specific configurations for different
things.
The news anchor configuration, where your bar is extremely high and you have to
present perfectly, but right now it's still more generalized.
I would also maybe mention, although we certainly are centered around video assessment
and trying to validate knowledge and facilitate observational assessment at scale, a huge part
of that, and certainly our DNA, is helping all the learners improve on what they're being evaluated on.
Right? So there's an assessment aspect, but there's also other workflows that are in our tool
that are really
just centered around helping that person improve and learn.
And watching yourself say like those filler words 15 times in one minute, that's one
of the best ways to iron that out.
And that's really, again, in our DNA and ultimately where we started was more of helping people practice their presentations
and where we're evolving, and really what the AI capability is enabling, is having there be more
of, like, a hard-and-fast assessment kind of capability. Yeah, I mean, that was going to be my
conclusion previously anyway is just that it can't hurt to be able to review your presentation to see yourself
and to want to improve on what you see.
Even if we are going in a specific direction like emulating TED Talks, whereas I think
there could be a more varied range of what successful presentation looks like, I don't
think it hurts to have that review option and to have some sort of a grade and assessment
that goes with it.
So, that's something to think about for sure.
I met with the people from Synthesia AI when I was at DevLearn a couple months ago in
Vegas.
They do relatively lifelike AI avatars based on a video snippet and a voice snippet of
you.
They will take that and then they can take a script and turn it into a relatively stiff
but pretty lifelike looking video of you as if it was you presenting.
And it's pretty amazing.
The directions that AI is taking us, this rapidly, are very exciting.
But at the same time, I do want to ask you as I ask these people, what are your thoughts
on how involved humans need to be
in most of our processes?
Because these represent things where, like,
it's a lot, you know, we're literally
eliminating ourselves by
replacing ourselves identically
with video, or in this case
trying to emulate what a person would do
as assessment, which you've already
mentioned, you know, there's a valid thing about
putting a buffer between an individual learning and a pressure inducing professor or teacher or
system like that.
But still, you know, this is another example of removing people, reducing the time that
people need to spend on something like assessment.
So what are your general thoughts on this, not just from a Bongo perspective, an organizational
perspective, but societally?
Like where does the buck stop
where we actually don't wanna remove humans
from these processes?
Yeah, I would argue that humanity
needs to continue to evolve.
And I think a great example is, you know,
when Excel came out, my dad's an accountant.
You know, I'm sure that people were like, holy smokes,
like this is gonna replace all the bookkeepers
and all the accountants.
All that it really did ultimately is increase the expectation and the expected output.
You would never think of trying to run a company with a paper ledger anymore.
Now it's just an integral part.
Excel is an integral part of every business.
My opinion or perspective is AI tools are going to continue to work their ways into
the different facets of how we live our lives.
A big part of most of our lives is working.
I guess that would be my answer there, but with the example that you gave, to my understanding,
they're leveraging AI heavily, very similar to how most LMSs are, making it really easy to create
content and to democratize content creation, essentially. Whereas, Bongo, we've asked ourselves,
like, is that the direction we should go? Ultimately, at least initially, we chose, instead of making it easier to create questions,
if you will, to leverage that kind of capability to do the actual, was that
person correct?
Can they take the content that they learned from and actually do something with it?
So again, a little bit of a different philosophical approach on how we're trying to apply AI, but I think it's just going to continue to evolve and get more powerful and
capable. And just like back in the late 80s or early 90s or whenever Excel came out, the people
that continue to try to do it the old way and aren't willing to adapt or change, those are the
ones that are probably going to get left behind. But I also pretty much adamantly would say it's not going to come and replace everyone
at least for decades.
I'm not so concerned about replacing people.
I'm concerned about something along the lines of the degradation of knowledge.
I don't want to say it's that severe, but the path that we're on, again, I've worked
adjacent to the education industry in some
way, and I have seen even in like the three years that I was in sales at a major publishing
company, the speed with which we are abbreviating our books, the speed with which we are creating
faster working, adaptive online digital education tools, the speed with which we are able to
learn something by googling it and allowing whatever
the algorithm has determined to be the top answers to teach us.
The way that we learn is just constantly changing.
And that's kind of what I'm talking about.
Like, where does the buck stop in terms of, like, how we determine the knowledge that teaches
us in the future?
So I like that what you've described is you're not removing the human from the process.
It's still very much reliant on an expert
who is creating the knowledge base,
the source material, who's choosing
or creating the source material
that goes into what learners
have to then present on via Bongo.
Like I like that system a lot,
but the ability of AI to determine how accurately somebody
is presenting that information back is another step in the direction of we are letting an
algorithm, a non-human source, determine how well we know something.
And this isn't even as severe as something like, you know, if I Google the ingredients
of a cake, like I'm probably going to get a few different
answers. Like, that to me is a wild thing. That, like, you know, I mean, it's not that wild,
but an algorithm is telling me the answer to questions, you know, where can I find this
bird?
Yada yada, whatever.
An algorithm is going to tell me the answer by favoring one source on the internet because
of X, Y, Z.
And in this case, an algorithm or a computer program is going to determine whether I'm
correct and accurate enough and that I understand the concept because of XYZ.
You know what I'm saying?
Like because of an input to an algorithm.
And that to me is the question that I'm asking is like, where does the buck stop in terms
of allowing a piece of code to determine our knowledge as opposed to something passed down
between people more directly,
which we're already very deep into this, as I said, because of how we learn in Google and everything.
But this does seem to me to be a pretty serious step if we're starting to determine whether we actually know something
via the code itself. Like now the code is saying whether we're correct. You know what I'm saying?
Totally. No, and I would say the reality of what you're describing, like the abbreviation of knowledge
into oblivion, that problem represents a massive opportunity for tools like Bongo.
Oh, sure, algorithms might make it easier to access information and make it easier to consume
it with the abbreviations.
But at the same time, there's tools like Bongo that work towards measuring: does that person,
did they actually learn, or did they just read the CliffsNotes and
actually not read the book?
So if that's the problem, maybe another way to answer it is anytime throughout history,
there's been better weapons that
have been invented.
Several years or several decades later, better armor also comes about.
We make guns and bullets, like all of a sudden tanks come out.
Then armor-piercing bullets, and then reactive armor.
Maybe that's a warfare analogy, but I think it's a similar analogy towards, you know,
how we're approaching it: there's tools like GPT, or, you know, there's lots of different AI capabilities,
where they're algorithmically making either
content authoring or access to content easier and faster. Then at the same time,
there's tools like Bongo trying to measure, did that person actually learn, or can they actually apply what they learned?
If the desired output is application of knowledge, you know, that's, for a lot of
organizations, what's most important, and being able to measure that is effectively, like, the
angle we're going.
I see your analogy about weapons and armor, but where are we in the modern world with
weapons right now? Like, our weapons could destroy humanity. Like, that's pretty well known, that, you know,
nuclear warfare has been something that is capable of erasing us from the planet if it went wrong
because that's how strong our weapons are now. And my point there is that the only thing that has
kept us in it, despite the power of our weapons that we've
decided to create, is some sort of realization among us that said, okay, we kind of got to
chill.
Like, yeah, we're still going to have wars, but like now we have, you know, the Geneva
Convention and we have like expectations for what we're actually allowed to do.
We have regulations around it and we also have the powers that be ostensibly working
very hard.
And that's kind of my concern: like, will knowledge reach that point of oblivion, as
you said, where we actually have to set regulations?
I mean, the internet is already regulated in many ways and that sort of thing.
But are we going to have to identify, all right, our knowledge has grown so futile and
maybe even just fake news and misinformation is an example of this.
Like knowledge has become so fraught and futile in the way that we consume it
that we really need to rethink how we're doing this entirely.
I actually think we're a lot closer to this point of no return.
And I'm not saying that AI is the reason for this.
I think it's been happening since before AI, but I think you have an important perspective
on this as somebody who creates a tool that has some sort of power on that.
You know what I'm saying?
It's a big question.
Totally.
And the fake news problem, the first several months, when all the fake news became a big
thing, it was pretty difficult to distinguish, okay, is that real? It was, like, very believable.
And then it made people question, like, okay, like, is that real or is that, like, bull?
So, like, it was a big thing, a big dangerous thing that got talked about a ton, and there were almost certainly a lot of bad things that happened from it initially, but humanity adapted effectively and
combated it. And maybe there's a little bit of regulation that took place, I think with AI and
nuclear weapons analogy, the consequences are probably harsher and can happen much faster.
I would agree with that. And maybe initially, again, just sticking to the analogy, how it was combated was the mutually assured
destruction.
So then that led to a bunch of agreements and effectively regulation.
But now, as technology continued to advance, you look at the Iron Dome system in Israel,
there has to be systems like that, or lasers that are on planes that shoot missiles out
of the sky.
Those are forms of effectively armor that people invent and come up with.
To loop it back around to your question about knowledge and access to it, the world's adapting
and evolving super, super quickly right now.
I think that's one of the reasons it's scary for a lot of folks. And I think of the companies or organizations that try to be responsible with it, and again, I like to
think Bongo is leveraging this kind of capability responsibly, and being really cautious with wanting
humans to continue to be in the loop and continue to be, like, the final validator, if you will.
I like to think hopefully those are the
ones that will prevail because people will look at that as they're acting responsibly.
But I'm sure, certainly, there's going to be bad actors who leverage AI to fully democratize content
or to create bad pathogens or something. With those things that have incredibly terrible consequences,
like maybe there should be some regulation,
but yeah, it's a really sticky question.
And then for the next few election cycles,
you know, hopefully that's what gets talked about.
The system, like our society, you know,
works towards self-regulating and trying to figure it out.
I do appreciate that you want to keep humans
at the center of this and in what you're doing with Bongo.
I think that is very critical.
I noticed that you hesitated a little bit when you were saying that you hope that those
are the ones who will prevail because I think we all know where the money tends to flow
in some of these cases.
The big promises that are made about certain tools that end up just being a little bit
more powerful than we envisioned initially.
But at the end of the day, it's good to have you on, who can speak frankly about it and have
that as the goal of your product as well. So I appreciate that.
Right on. Well, I guess thank you. But the other thing with that is, Bongo, we have somewhat
of an unfair advantage in that we have a decade of tens of millions of human evaluators that have
graded people's presentations already. So we're a few steps ahead in
terms of trying to emulate humans as opposed to somebody who's just starting out today.
Like, sure, they might be able to leverage the AI the same way, but they don't have that
moat around them or that kind of, like I said, unfair advantage with human validation already.
So I guess I'm just maybe qualifying myself a little bit by saying, it's not like I just
have that altruism of like, oh, we want to keep humans in the loop.
I do fundamentally think that's what is best, but I also think that it's a unique advantage
that Bongo has compared to some of our competitors.
Yeah, absolutely.
I see that too.
Well, we're running up on time here.
Before I let you go, can you just let me know where our listeners can learn more about you,
your work, and also just Bongo too?
Oh, sure. So, yeah, I would say
if anyone wants to just connect with me on LinkedIn,
it's just Josh Kamrath with Bongo.
I'd say if you're interested in Bongo,
just go to our website, bongolearn.com,
and we have a lot of information, both about our product,
but also just about how to measure knowledge and take this new type of capability and apply it into your
corporate or educational settings.
Cool.
Well, Josh, thank you so much for joining me.
I wasn't expecting to get into post-apocalypse discussions today.
Haven't done that on the show yet, but I do appreciate it because ultimately we are looking
at some big changes coming in the world and I think it's an important thing to discuss in some form or fashion.
So thank you for being the one to do that with me.
Yeah, absolutely.
Thanks for having me again.
Yeah.
And everybody else listening at home, thanks for joining us.
We will catch you on the next episode.
Cheers.
You've been listening to L&D in Action, a show from Get Abstract.
Subscribe to the show in your favorite podcast player to make sure you never miss an episode.
And don't forget to give us a rating,
leave a comment and share the episodes you love.
Help us keep delivering the conversations
that turn learning into action.
Until next time.