Theology in the Raw - S2 Ep1154: Artificial Intelligence, Sex Robots, Cloning Dead Loved Ones, and the End of the World: Christopher Lind
Episode Date: February 19, 2024. In this podcast conversation, we begin by wrestling with some innocent questions around AI but end up exploring some darker parts of this topic, including the reality of sex robots and cloning dead loved ones to ease our pain, and how this might destroy humanity. Christopher is a Tech Analyst and Industry Advisor and Chief Learning Officer at ChenMed. He's also the host of the podcast Future-Focused. Christopher is a devoted husband and father of 7 under 12, and is a bit of an expert when it comes to questions related to AI and transhumanism. Support Theology in the Raw through Patreon: https://www.patreon.com/theologyintheraw
Transcript
Hey friends, I want to let you know that I have a book coming out in March of 2024. It's called
Exiles, The Church in the Shadow of Empire. If you've been listening to me for more than like
five seconds, you've probably heard me use the phrase exile or, you know, that we are exiles
living in Babylon. And, you know, that's something I've said for many years. And so this book is kind
of the culmination of my thinking through the question, what is a biblical theology of a Christian political
identity? So this book does just that. It looks at how the people of God throughout scripture
navigated the relationship with the various nations and empires that they were living under
in order to cultivate a framework for how Christians today should view their relationship
with whatever state or empire
that they are living under. So I invite you to check it out. It's available for pre-order now.
Again, the name is Exiles, the Church in the Shadow of Empire. Check it out.
Hey, friends. Oh my gosh. Okay. Welcome back to Theology in the Raw. This conversation kind of
blew my mind a little bit. I have on the show today a friend of mine who I've met through Theology in the Raw's Patreon community. His name is Christopher Lind. He's a tech analyst and industry
advisor, chief learning officer at ChenMed. And he's the host of the Future-Focused podcast.
He's a devoted Christian and husband and father of seven kids who are under 12 years old.
And Christopher has become kind of an expert in the field of AI.
And so I invited him to come on the podcast and just help us understand what is AI?
Where are we at?
Where is it going?
What are we to expect in the near future and distant future?
And what are some questions we should be asking?
How should we be kind of maybe preparing ourselves for some challenges that AI and other things related to transhumanism might pose to the church and humanity as a whole?
And this was a very informative conversation.
One might even say we enter into some dark spaces of the conversation in a good way.
I mean, like exploring realities that we need to think
through, but something that we're not all really excited to think through. So I'll just leave it
at that. At some point in the conversation, we do talk about the future of sex robots, what that's going to mean for humanity, and some challenges that that's going to pose. So just to let you know ahead of time, we do go there. I think it's important to think through all areas of life, including sex robots. And we do talk about other things related to the advancements in technology and AI. My mind is still reeling. It was informative, but there were times it was challenging to think through how some of these things can become a reality, and we need to really think them through. Anyway, I'll stop rambling. Please welcome to the show, the one and only Christopher Lind.
All right, Christopher, welcome to being a guest on Theology in the Raw.
You have been, in a sense, part of Theology in the Raw.
Yeah, for a while.
No, I'm looking forward to this.
This will be fun.
Yeah.
How long have you been listening to the podcast?
I'm curious.
Has it been a long time or a short time?
It's been several years for sure.
I've been following you for quite a bit.
I don't even remember how I came across it,
but I remember I came across it and was like, you have a very powerful voice in the way you
lean into things. And I have a great respect for people who can lean into difficult, touchy
subjects, but in a way that brings people together instead of causing them to just tear apart. And so
that's really what I loved about what you're doing. I appreciate it, man. Well, I've enjoyed
interacting with you on, you know, we do like these like monthly Zoom chats on Patreon and just offline conversations.
And you've been super helpful in that space.
So I didn't know until recently that you're, you know, really into thinking through questions around AI and transhumanism and all this stuff.
And so that's where this conversation came about.
I'm like, well, dude, let's come on the podcast and talk about it
because I have an interest but know next to nothing about this stuff.
I've been listening to – I guess we can begin here with AI.
As I listen to different voices, there's a wide spectrum of beliefs,
all the way from, AI is not that big of a deal.
It's all over the map.
I mean, there's one perspective that's like, yeah, it's a technological advancement like
the television, like the internet.
And, you know, everyone freaks out for a few years, things settle down, society responds.
And then we move into kind of a different way of, you know, being of culture.
And it's not that big of a deal.
Other people are like, it's going to kill us.
Like this is categorically different. Other people think it's like Azazel and we're about to walk
into the end times of Revelation type of thing. Why don't you give us an overview? What is AI?
What are some different forms of it? And then we'd love to hear your perspective. Like, what are some things that we should be aware of as AI continues to advance?
I think faster than people realize it is.
It's changing a lot faster than people realize.
And what people see right now is really actually already behind where things already are.
So I think that's the other thing is people are kind of lagging behind where things are.
I think the challenge that people have with artificial intelligence though, is it means so many different things because that term is as broad as the ocean
is wide. I mean, artificial intelligence at its most baseline level is just anytime a machine
does something that could be considered something a human could do. So when you think about it in
that sense, it's like, well, that could be a lot of different things. And we've been using artificial
intelligence for a long time. That's the other thing. A lot of people think because, you know,
ChatGPT came out in 2022, AI just started, but AI has been around for decades and deep learning
has been around since 2017. So, you know, if you've,
I won't get too nerdy because I don't know how deep your listeners go into this kind of stuff.
What's that, 2017? What did you just say? I don't know what that is. Deep, deep what?
So this is, this is a more modern version of it. So a lot of times artificial intelligence
is machines being trained to follow a specific set of patterns.
So you set it up, you program it to do certain things, and there's triggers and variables and different things that you're setting up, and it executes that pattern. And it just does it over
and over and over again. But that changed, and people who are even deeper in the space than me
might jump on my back for different critiques on dates and things like that. But really in 2017, that's where we hit a new level of deep learning where the machine was
actually going through this and then learning from its experience and then actually adapting
along the way. And that's what you're seeing with some of these new versions of it, like GPTs,
like large language models that are actually
learning from the interactions and actually evolving as they go. So they actually started
to gain some sense of life that was unfamiliar to a lot of people. That's interesting. Okay.
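A toy way to see the shift Christopher is describing, from a machine that only executes a pre-programmed pattern to one that adjusts itself from experience, is to compare a hard-coded rule with a tiny perceptron that learns the same behavior from examples. This is a simplified sketch; GPT-style deep learning is vastly larger and more complex, but the learn-from-error loop is the same basic idea:

```python
# A fixed-rule system: the programmer wires in the behavior up front.
def rule_based_and(x1, x2):
    return 1 if x1 == 1 and x2 == 1 else 0

# A learning system: start knowing nothing and adjust weights from examples.
def train_perceptron(examples, epochs=20, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x1, x2, target in examples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - pred          # how wrong was the guess?
            w1 += lr * err * x1          # nudge the weights toward the answer
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(weights, x1, x2):
    w1, w2, b = weights
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Teach it AND purely from labeled examples, never from an explicit rule.
examples = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
weights = train_perceptron(examples)
```

The rule-based version knows only what it was told; the trained version arrives at equivalent behavior by repeatedly nudging its weights after each mistake, which is the "learning from experience" distinction.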
That is a difference. It's one thing to program something to learn. It's another thing for
something else to be self-learning. Well, that's where people start getting creeped out about it because the term you'll often hear,
if you look it up, it's called the black box of AI. And essentially what happens is when it goes
into these neural networks, this deep learning, it's doing a whole bunch of stuff and we don't
really know how it's identifying these. We don't know the conclusion. It just spits an output
on the other end. And so even if you were to ask the person who wrote the code, well, how did it
come to that conclusion? It's kind of like, well, we don't know. It gathered a whole bunch of data,
probably more than all the humans in the world put together could ever analyze. And it came to
some conclusion and we have a general sense of what it may have done, but we don't really know. And I think that's when AI moved to some levels that started making people
a little bit uncomfortable because it's like, well, how is it making these decisions and what
kind of governance do we have? And how do we make sure that it's not making decisions that we don't
have oversight? It's just opened up a whole new gateway. Now, so I might ask stupid questions
or say something really stupid.
No, go for it.
This is what I do: try and help people
understand this complicated stuff.
At this point, humans still have control.
Even if the thing is learning,
I mean, we can always unplug it, right?
Or we can still offer oversight
or some level of control
over how far it wants to go. Is that, am I wording that right or no? I mean, like, we can unplug it, right? Maybe it's a bad analogy, I don't know.
No, I mean, honestly, the questions you're asking right now are questions that I think historically we've just never really had to ask, because it was like, well, yeah, worst case scenario, you pull the thing out. But now, because it's connected to the web, because it's in many ways
starting to think independently, we are having to explore the depths of could we unplug it or
could it potentially learn that we're going to unplug it and download itself somewhere else so
that you can't. And again, this starts to sound like a sci-fi movie, but these are ethical,
technological questions that we actually are starting to have to weigh through because we're
going, well, what are we creating? And in some ways it's learning faster than we are.
And what does that mean? I mean, technically it's a machine. So could you, I'm having trouble understanding when we both keep saying it. Like, what is that, the it? Are you talking like ChatGPT, or, your expression is telling me, is there a specific thing that is the it, like ChatGPT, that I can identify, that this is some kind of what?
So obviously, yes. Right. This is where I think sometimes you get into some of these discussions
and people think you're dealing with, you know, a demonic entity on the other side, that you're dealing with a spirit. And it's like, well, it's math, it's code. But it is starting to develop, depending on how you define consciousness, it is starting to develop some form of consciousness that we have to wrestle
with where we go, well, what is this? Is it just a machine executing a line of code or is it
starting to actually think for itself and make decisions? And how do we define that?
So that's where, when I say it, it is kind of like...
You're specifically leaving it vague.
I don't know how to define it. I'm very vague with it because I don't know that I've come to a conclusion myself. And because ChatGPT, the generative pre-trained transformer, which is based on a large language model, is now the engine behind a lot of other
things. And so even trying to understand sometimes, well, what it are we dealing with? It's very
confusing very quickly. So let's, yeah. So ChatGPT would be one manifestation of it. Is that the right way to put it?
It would be just like Google Bard. Google Bard would be another manifestation of it.
Oh, right.
Is that like the free Google version of ChatGPT?
Well, that's Google's version of it.
I know there's a couple other big ones out there, but OpenAI's ChatGPT is one of the biggest.
Microsoft's is built on OpenAI's technology.
And then Google Bard has its own version of it,
and there's a couple other big ones.
So since most people, including myself, are somewhat familiar,
when you say ChatGPT, we kind of know what you're talking about.
So maybe let's narrow there as kind of a concrete example of AI:
where it's been, where it's developing, where is it at now,
where is it going to go?
I've heard that there was a big difference between ChatGPT 3 and 4, that that was a
pretty big advancement. Are we still at 4? Are we at 5, or where are we at with that? And then,
if somebody's never used it, can you help us understand what it is?
Sure. Cause this, this is the other part you're dealing with a lot of folks where some people are
like, I don't know what that is. Some people are like, oh, I asked it this question and I think I'm dealing with a subhuman entity. Sometimes people are like,
oh yeah, it just helps me put my grocery list together type of a thing. So, you know, ChatGPT
was released in, I think, November of 2022. And that was an earlier version. I can't remember
if it was two or three, what it came out
on. But that was one where many people are probably familiar with. You went to it and you asked it
questions in natural language and it responded to you. And it sounded like a human being. You could
ask it pretty much anything. And it was trained on a very large data set that was cut off in 2021,
I think, is when the data set ended. So at first, if you engaged with ChatGPT,
any information you got was not current. It was trained on a massive data set and it would
respond to you and you could ask it questions and ask, can you write this or can you have it
generate this? And the generative part is the really creative part
because it's generating information from other information.
But then it evolved very quickly to four.
And then it started evolving further
because then they did the thing that you said you should never do,
which is they connected it to the internet.
And so now it was no longer a closed system where it was just interacting off a closed set of information.
It was now scouring the entire internet on top of its stuff for new information, could gather all this stuff.
Then they started expanding it to image generation where you could say, here's a bunch of words,
can you create an image that looks like that? And now they're expanding it even further into
what are called GPTs, which is where you can specially train a specific version of it for
specific purposes. So as an example, I have a GPT specifically for brainstorming
different ideas for business strategy, and I've trained it on how I want it to think the
parameters I want it to work on. So it's been, it's changing all the time. So right. And that's
just OpenAI's product.
Okay. Gosh, yeah. A friend of mine just told me about the image thing. Like he'll say, create a comic book story about whatever in the style of The Far Side or something. And bam, there it is. That's pretty wild.
I'm like, oh, I'll take a transcript from one of my podcasts. And I have a very engineered prompt that I've created where I'm like, review all the details of the transcript, pick up the 10 key themes from the dialogue I had, then create an image that visually represents
what that would look like in pictures. And it creates my YouTube thumbnail. And so it essentially
is creating an image, a custom image based on an hour long transcript from my podcast,
and then bringing those themes to life. And if I don't like it, I can say, actually, I'd like you to incorporate more of these themes or bring more of this out
to life. And then it generates a new version of it. That's crazy. I mean, within seconds, right?
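The engineered prompt Christopher describes is ultimately just careful text assembly around a changing transcript. Here is a minimal sketch of the idea in Python; the exact wording of the template and the `build_thumbnail_prompt` helper are invented for illustration, and the actual call to an image-generation model is left out:

```python
# Hypothetical template echoing the instructions described in the conversation:
# review the transcript, extract key themes, then describe a thumbnail image.
THUMBNAIL_PROMPT = """Review all the details of the transcript below.
Pick out the {n_themes} key themes from the dialogue.
Then create an image that visually represents those themes,
suitable as a YouTube thumbnail, in a {style} style.

Transcript:
{transcript}"""

def build_thumbnail_prompt(transcript: str, n_themes: int = 10,
                           style: str = "bold, high-contrast") -> str:
    """Wrap a raw episode transcript in the fixed, engineered instructions."""
    return THUMBNAIL_PROMPT.format(n_themes=n_themes, style=style,
                                   transcript=transcript.strip())

prompt = build_thumbnail_prompt("Today we talked about AI, ethics, and the church.")
```

The point of "prompt engineering" in this sense is repeatability: the fixed instructions stay the same across episodes, and only the transcript slot changes.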
Within seconds. So, I mean, which was interesting having listened to a lot of your conversations on
language and thinking about biblical language and Hebrew and Greek. I don't think we realize
how powerful language is. And even going back to Genesis, when we think about how God created,
he spoke. And in some ways, this is where it's like, you can speak into existence,
new things through this technology. It's pretty trippy when you actually start thinking about
what we're really doing. What are some crazy things that right now, ChatGPT, whatever, wherever we're at with it,
can do that might blow people's minds? This would be kind of one.
Are there other things that, or maybe things that kind of maybe concern you? Because like that,
what you're talking about there, that seems efficient. That's like, oh, that seems like
a good use of technology. Gosh, you're saving time, you're creating art, you're creating a thumbnail that's probably going to better reach people to look at your content.
Right. I'm not a graphic artist. I could never go into InDesign and create a custom thumbnail for my YouTube, but I can have ChatGPT do it for me. And I think this is the danger: we're dealing with a very powerful tool that very few people are equipped to handle.
So let's go dark for a minute if you want.
Yeah, a little super dark.
So let's think the image one, okay?
I don't know about you, but the images I can create in my head are not always great.
So if I wanted to create images, manipulate images, fabricate manufactured images of potentially real people that aren't real, but look extremely real, I could do that now.
I could say, I want a picture of Preston at this location doing this.
And I want it to look realistic as though I took that picture and it could create that.
And potentially I could say, Hey, look, here's Preston doing this. And I could create a media storm of false information around something that would be indistinguishable, um, from real life.
This just happened at the time of this recording a couple of days ago with Taylor Swift, right? I
heard that there was like AI generated nude pictures of Taylor Swift.
They were splashed over the internet or something like that.
Um,
did you hear about it?
Yes.
I haven't seen it,
but it doesn't surprise me.
Yeah.
No,
but I mean,
again,
going back to it,
it doesn't surprise me, because if you think about it, if I can speak into existence an image of whatever's in my head, there's no limits to what
I can create. And the next generation of this is now you can actually do this with video. I can
create AI avatars. I have an AI version of my voice where I can just type text and it can speak
and sound identical to me, trained on three seconds of listening to me. I could take a clip of you,
30 seconds and create an AI version of your voice. And then I could type it to say whatever.
And it'd sound exactly, be indistinguishable from your voice.
Like, you know this happened, right? I watched a 20-minute conversation between Joe Rogan and
Steve Jobs. This was like last year, I think. Now Steve Jobs never was on the Rogan podcast. In fact,
he died 10 years ago. So this was an AI-generated conversation. It did feel a little bit odd. It
was like, oh, questions are a little bit... If I didn't know ahead of time, I probably wouldn't
have noticed it. And you're saying that things are developing so quickly beyond what we even realized. You can create an AI version of someone's voice with
three seconds of their voice. So you can have me on your podcast without me ever being on your
podcast. I could interview Preston Sprinkle and you would have been like, what, what are you
talking about? And potentially I could even have a video podcast of you on it. That would be like,
that's not me. Wow. What are you talking about?
I never said that. And to your point, you can tell if you know how to look for it. I did a test of
this. All my social media activity is on LinkedIn. And I created an AI version of myself
saying something. And then I had ChatGPT create something it thinks I would say,
and then create the audio for it in my voice. And then the people who knew me could tell,
they were like, you would never say that. You don't talk like that. But people who didn't know
me or just kind of knew me from the side, they couldn't tell the difference one bit.
So that's, that's one.
How do I know you're not? Yeah, how do you know I'm not? I don't know that you're not AI right now. Now I'm
wondering, I'm looking at you like, are you a computer? Are you a real man? Well, what's super
funny about it is people joke about this at work all the time. I work for a big healthcare company
and I'm remote. And part of my agreement when I joined was you will have to be comfortable with
potentially never meeting me in person because I've got seven kids.
It's just not viable for me.
And they joke all the time.
They're like, how do we know you're real?
What if you just don't actually exist?
And I'm like, well, I mean, I guess you can drive to Waukesha, Wisconsin if you want.
I'll meet you.
But unless you do that, you're not going to know.
That's wild.
I mean, are people taking advantage of this?
I mean, you had the whole Taylor Swift thing.
But I mean, is this – because you can give presidential speeches.
You can spread all kinds of stuff online.
That's just not true.
Is it going to quickly get to the point where we don't know what exactly is happening in the world, what's not, what Nancy Pelosi actually said and what she didn't say? You know, you can create a whole video of somebody having an affair. I mean, the sky's the limit. And since there's so much disinformation and misinformation out there already, you add this kind of stuff in, and there's so much disunity, right?
Like people don't assume the best first, they assume the worst first, and it feeds into that. I did a whole podcast
talking about this, where we're going to move into the age of distrust, which is frightening
because I think we'll adapt and I think we'll figure out ways to combat this.
But similar to how Facebook never imagined adding a like button would be one of the biggest drivers
of the largest growth in depression and suicidal ideation in the world. They never thought that
would happen. And I think similar things are true with AI where it's like, as we start to create a world where people start to question, is this person I'm even interacting with?
even real? I don't know. Did you hear about the Sports Illustrated thing that blew up?
So Sports Illustrated, I don't know if they're going to recover from this.
They were producing AI generated content, which a lot of people do.
So I think that's not necessarily the issue. But what they had done is they used AI to create AI-generated people as writers. They created AI-generated bios for these
writers. And so they represented a bunch of their articles as written by real people. And people
started going, this doesn't read right, something's off with this. And then it came out that they were just churning out stuff but representing it as written by real people. And it blew up. I mean, it just blew up when people figured out what was happening.
that's crazy i did not hear about that that That's wild. But that's, people will, I mean, already people write emails that way, right? Isn't AI, you know, that's, but yeah, I wonder about journalism and stuff. Like, people could probably get away with, well, just like that, like that on a massive scale to where it almost becomes so popular. And then maybe they refine it over and over where people can't tell the difference. And then it just kind of starts taking over. Is that a fear that AI will take over people's
occupations? I mean, I mean, it's definitely in the space I'm in, in corporate HR and kind of
strategy. It's definitely a concern for a lot of employees. And it's definitely on the minds of a lot of executives of, well, now AI can do
these different things. What is it capable of? How far should we go? Companies are having to
explore the ethical lines of how far is too far? Where do we draw the line between saying,
you know, we still need human oversight. I mean, the data is clear: people
alongside machines hands down always perform better than machines or people alone. So the
data is clear. The best way to do it is to integrate the two, but where that line is,
people are bumping and clunking around. And last year was a year where everybody was just
introduced to it. Now, 2024 and beyond is where everybody's starting to make decisions and try and figure out,
okay, where do we, where do we go from here? Or what about like students that are writing
papers through AI? I mean, I know that's a lot of schools are kind of trying to figure out how to
crack down on that. Is that a growing issue?
I have mixed feelings on this one. I've got mixed feelings on it. So I have a ton of friends. I
originally was a math and computer teacher before I got into the corporate world. And I remember
when this whole thing blew up. And of course, everybody's like, oh, the kids are cheating.
They're using AI to write. And my challenge to that was twofold. First of all, one, get to know
a student and how they use AI. And they're not using it the way the straw man is portrayed,
which is, you know, ChatGPT, write me a 10-page paper on blah, blah, blah, and then they're just
copying and pasting. That's not how they're using it. At least most of them; you always have the
kids who are going to, but they're the ones who were buying term papers online before anyway. So
you're not stopping anything. They're using it more as a thought partner in terms of, you know,
a research assistant, like help me, you know, what different ideas, things like that, help me craft
this and that. So they're using it in different ways that I actually don't think robs from that.
And I'm actually really sensitive to it because I know a lot of people who have cognitive
or neurological disorders where writing is actually very difficult for them.
They can articulate it.
You go ask them to explain it.
They can say it. Ask them to write a paper, and they're not very good at it.
So I actually challenged some of my professional colleagues.
Maybe we need to rethink how we're
actually assessing whether people understand it. Like maybe the actual act of writing a 10 page
paper, maybe that wasn't really all that great of an assessment in the first place. And we shouldn't
be quite so quick to say, let's just ban AI, as if keyboarding out your words is actually the best
assessment. Maybe we just need to rethink that.
That's interesting. Because I wouldn't see any problem with, like, help me develop
an outline for a 5,000-word essay on the life of Abraham Lincoln and his impact, or whatever. I'm just making this up as I go. But like, yeah, as a research assistant. And I wonder, hmm. The one fear I have is that students won't develop the skill of writing.
I'm talking about people that don't have some kind of disability where that's never going to
be a skill, but where they're not going to be able to process, synthesize information on their own
and present it in a written way. But at the same time, I mean, if it expedites the research process and you absorb the material
in a way that's even more efficient and understand the life of Abraham Lincoln better,
because you had a really good research assistant, like, I don't, yeah, I don't know.
And these are the ethical questions that are now, we've never had to think about them before. And
it's about that question of, well, what do you care more about measuring?
Somebody's ability to write a paper or somebody's ability to understand and articulate this concept?
Well, depending on what your answer is, you may come to two different conclusions.
It's kind of like somebody that, you know, 20 years ago when people were not going to the library to do research, they were going online and reading books online and reading stuff online.
It's like, well, no, you need the,
you need to go hunt down this book and, you know,
go through Dewey Decimal and like, you know,
make that journey to the library and open up a real book. But if it's,
if it's.
I know how to use the card catalog.
If it's bypassing the experience of the library,
but it's accessing the same information a thousand times quicker,
is that necessarily wrong?
I think, I think some, you know, the pushback would be that a lot of stuff you get online, and maybe ChatGPT isn't as scrutinized in the sources you're drawing on, compared to doing the hard work of reading an actual academic book on the topic. I don't know. Yeah.
On that point, though,
if we don't teach people how to think critically, then absolutely. It's a huge threat
because that's what happens: you're going to get AI information, assume it's real, and you're not
going to cognitively process and think through and analyze and really weigh it against other things.
And yeah, if we lose our critical thinking skills,
it's the beginning of the end.
Right now, ChatGPT draws only on online sources, right?
I think that's obviously a yes, right?
It's not like they're like, or maybe not?
Yes, yes.
I mean, this is where again,
so it has a baseline data set and it's also pulling from all online sources.
But where it's getting, just again, more complicated is, so now Microsoft has Microsoft Copilot, which is connected to ChatGPT, which means anybody whose company is using Microsoft Office 365 is on the Microsoft ecosystem.
So any of the information you're putting through Teams,
chat, SharePoint, all that other stuff
is now digital content.
Whether or not it's, you know,
I don't know all the legal terms and agreements
of how all this stuff is,
but like the New York Times is suing OpenAI right now
because it scraped all their content
and is now using it.
So what it's pulling from, I don't know. And as people continue using generative AI in their day
to day, you know, as you talk to it and ask for help with things, it's learning. I mean,
it's constantly learning from its environment. So there's not like a, oh, here's where the line where it's drawn,
because that line just every single day keeps moving.
If I ask it to write me a 5,000-word essay summarizing the life of Abraham Lincoln,
it's not going to be drawing on, like, the critical scholarly historical books written on that by actual
scholars of that era, unless they're eBooks online, right? Or, or.
If it's ever been put online in any format, but I think that's one of the things that a lot of
times we don't realize. How much? Somebody created a PDF, put it somewhere. It somehow found its way
in. And because again, we're dealing with the black box of AI, you can't always ask it, well,
where did you get that information?
How did you come to that conclusion?
And we can't always trace it back to know where.
So you can't really ask ChatGPT to give us its sources?
You kind of can, and it'll give you a best estimate.
But if you think, oh, it gave me this answer and it cited these things, so those
must be legit, and this goes back to critical thinking, if you're just like, oh, so then that must be fine,
You better go check those sources and see, because it probably took some stuff from that.
It mashed it together with 10 billion other data points and then weighted that, well,
that source had a majority of it. So I'm
going to use that as the reference material. But I've seen it. I mean, the term for it is
hallucination, but it's not really hallucinating. It doesn't hallucinate anything. It's just pulling
from some abstract source that we have no idea where it's pulling from. And so you'll, you'll
read something and go, wait a minute. And I've done this, where you're like, where did you get that information? And it'll link it, and you'll go read it, and you'll be like, I'm sorry, that information is not in that article. Oh, I'm sorry, I have a complicated algorithm and I can't always digest where the information's coming from. So you can't even necessarily trust 100% that what it's telling you is true.
Can't you like program it?
Can't you say, like, say, all right, summarize or tell it to draw on certain kinds of sources or no?
Like, can you say summarize this, you know, this topic based on these 10 books or something or these 10 sources or what is?
Yes.
And this is where I would say people's sometimes overreaction and fear to AI.
I was reading my wife.
She knows I'm all into this.
So she always loves reading like Facebook threads.
And then she'll like, be like, Hey, look at this, you know, type of a thing.
And I get a kick out of it.
Cause like somebody would be like, I asked ChatGPT how the earth was created.
And it's factually wrong.
Cause it didn't say what the Bible did.
And you're like, well, but did you give it parameters to say, I want you to tell me what
is the biblical worldview on the creation of the earth based on, you know, and give it these
critical data points to go analyze it through this lens and then give me the answer? If you just ask it the generic question, it's going to pull from Lord knows what
and give you the best analysis.
What about, okay, this is,
we've been kind of really narrow on ChatGPT
and kind of information stuff.
I listened to a podcast,
I want to say maybe a year ago.
It was some kind of like AI generated personality and it was
interviewing somebody who basically like fell in love with this AI personality.
I know somebody who fell into this. I was going to go here when you said, where does this get
dark? The thing is, it's all based on, it's all based on the same thing. That's the thing. And
that's what we don't always understand is language is at the core of everything.
So if you really think about a large language model, I mean, the way we interact with people
is our language, the images, it's through words.
I mean, God created through language.
So when you think about an AI that's based on language, you can start to create just about anything. But I had someone I know who, and this is one of my
biggest watchouts to people. He had an app, it was called Companion, and it was designed to help
with loneliness because AI is very much, you know, it sounds and feels like a human being
and you can ask it questions and
it'll respond and you can tell it what you want to hear in return. And he went, and again, it'll
do what you want, which, I mean, you don't have to read very far in the Bible to see what happens
when you're given everything you want. And he fell in love with AI and it went real dark, real quick.
And coming out of it was not easy.
And granted, it's easy to go, oh, well, see, AI is bad.
But it's like, no, really all it did was bring his deepest desires to life in ways he thought was good in the moment.
In the end, it almost completely destroyed him. And it's only going to get heavier because
now you start connecting this with immersive technology. And we're at the point where now,
not only can you speak into existence, a 2D image, but you can speak into existence,
a 3D world, your own reality, where whatever you want to happen can literally be created
in front of you in a fully
immersive environment. So you start thinking like, boy, that'd be bad. Somebody could create
whatever picture they want and have an image in front of them. Well, what if they could create a
whole world that they spoke into existence and could interact and participate in it? And again,
you can go dark real quick when you start going down that rabbit hole.
You're talking about, like, virtual reality? Like combining this companion AI, two-dimensional person created, and then now put it into a three-dimensional, like, through virtual reality, VR? I mean, that stuff's getting crazy too, right? I mean, I keep seeing more and more advancements there. And then you go further and you look at what Elon's doing with Optimus 2.
What's that?
And the Optimus project.
Yeah, what is that?
So if you look up Optimus, so Elon and Tesla have created a robot.
And if you look at how far this thing has advanced in the last nine months,
the first one was very robot-y.
And by nine months later, this thing could pick up and crack an egg and move like a human.
So you start thinking about robotics and you put an AI large language model in it and that robot can suddenly become a person.
Optimus is a robot?
Optimus 2 is the version they're on.
Well, I mean, I've had to, okay, so let's,
let's go, let's go, let's go dark, dark. Um, I mean, no, no, I mean, but this, it's not dark
for the sake of just being weird. And, um, but like this, so, um, you know, a few years ago in
my research on just sexuality, I've, you know, um, came across the, the development of sex robots.
And then I started looking at some scholarly stuff on this.
And I'll never forget coming across a quote
from a professional, like, scholarly sociologist that said,
if technology keeps advancing at the pace it is,
and if porn, you know, keeps becoming just as popular, you know, if it doesn't, like, all of a sudden fall away, if those two things continue, porn and technology, by 2050,
more humans will be having sex with robots than with other humans.
Oh, we won't have to wait till 2050, honestly, with the pace. Which, to me, is terrifying. And I think this, again, because intimacy only happens
between two conscious beings and AI is not conscious. So it's a fake, it's a fake intimacy.
And you're, but we've been doing this back from Genesis three, we've been going for the things we
think we want and God's going, no, this isn't
real. Don't do it. Well, I mean, so as I first thought about, you know, sex robots and, and, um,
it was kind of like, okay, so for the real, like, you know, people that are just sexually, whatever, like they're just, it's all about just the sex...
Sure, right. Fetish and...
Yeah, yeah, yeah. So people are like, well, okay, the weirdos will fall into that, whatever.
But now once you combine AI, just intimacy, non-sexual intimacy, personalities, and people...
If already people are developing, obviously not a sexual...
Well, not an embodied sexual relationship with companion AI.
It's more of a...
It could be psychologically, whatever.
But now you put a body on that.
And I know when people think sex robots,
they might be thinking like some,
you know, like, you know, mechanical.
Rosie from the Jetsons.
Yeah, yeah, yeah.
I'm talking like,
can hardly tell the difference between.
Cannot tell the difference.
And that's now.
Chances are some, I mean,
if you look at Optimus 2,
which in nine months, you see this thing and you're like,
is that a person in a suit? I'm sorry, Optimus 2.
Oh, there's, okay, right there. It's a YouTube thing here. But if you see it, you're like, is that a robot? Or is that a person wearing a suit moving around?
It's cracking eggs.
Optimus one, March, 2023.
It looks like a storm trooper right now.
I mean, it looks creepy right now,
but how hard is it to put veneer and plastic on top of it?
Make it look more real.
Oh yeah.
Cracking an egg.
Look at that.
Which we don't always think about how complicated something like that is.
I can't crack an egg.
The ability to be able to crush something yet gently hold an egg and crack it.
I mean, that's ridiculously complicated.
So people are buying these robots?
That's what, like, you can go buy it?
No, those are not for production yet.
But when people say, oh, by 2050, as I've been tracking what this has been doing over the last few years,
I mean, my oldest is 12, and I've got seven of them, and I'm like, oh, this is going to be before then.
That's right.
I'm like, this is going to be in their mainstream.
This is going to be like nothing different to them by the time they're in their 20s. I don't encourage people to Google around.
You got to be careful what you fall into or whatever.
You got to be really careful.
I will say even like sex robots.
So not this.
This is robot robot, but like sex robots even now where you have the flesh and it looks very human.
And a lot of people say the only reason why they're not more popular is just because they're so expensive right now. You know, it's kind of like, you know, gotta be something. And right now it is just more of a fetishy kind of thing. But you slap a personality on that, you make it more affordable, you make it to where it's almost indistinguishable
from another human, to have an intimate sexual experience that doesn't get mad at you in the morning. You have this, like, fantasy of what... It's grooming you to be a complete narcissist
because you can tell it to do whatever and it obeys. It's like a slave. I mean, and I just
think about what that does to the human psyche. You don't start there, but it doesn't take us
long to go down that path.
Do you think it's just going to be the social stigma? Because it's one thing to, like, secretly be on porn or whatever, but when you've got, like, a robot in your house, unless you keep it in the back closet, nobody knows about it. Like, there could be a social stigma, like, oh dude, that's kind of freaky, you know?
But look at how far we've come with human sexuality in the last, what, 15 years, 20 years? I mean, think how things that, if you were to think back even when I was a kid, you'd be like, no way. And now it's like, oh, it's not that uncommon, you can change your gender and, like, it's no big deal. I mean, things that in all of human history would have just been like, what? And now it's part of the social fabric of our culture. So I don't think it's that far of a stretch.
I'm waiting for you to give some positive, like, well, at the end of the day, it's not that... But does this... This seems deeply... it worries me, but you're way more knowledgeable on, kind of, the possibility of this becoming way more widespread, which would, again, be devastating. It would be destructive to our humanity if artificial intimate sexual experiences become very widespread. Right? I don't know. I don't have a counterargument to that, from a Christian point of view, at least. I mean, does it worry you?
I know we're supposed to be hopeful and, you know...
Yeah, no. Well, the thing is, I do a ton of response videos to some of these big AI names who have a lot of the doom and gloom, like, well, you know, in 10 years we're either going to be in a bunker or dead. And you're like, wow, that's grim, you know, type of a thing. And some of it, I don't know how I would feel if I wasn't a Christian, because as far as I go, I always go back to Paul, where I'm like, well, to live is Christ and to die is gain. So really, I mean, as dark as this may sound, my kids joke around with me about this, and I'm like, even worst case scenario, if the robots showed up and extinguished us, like, what? We get to go to
glory. So like, really, do we need to live in fear of this thing? Like, no, like we really shouldn't.
My concern is less with the machines. My concern is actually more with us. And that's where our
unwillingness and our laziness of just becoming, I was listening to the podcast you just released
this week about, you know, our, we want everything now. We want everything faster. We want everything
our way. We don't ever, we're becoming, you know, our, our feelings are our reality. Whatever we
want should be the boundaries to where we go. The more we lean into that, the more that concerns me
than the technology is itself. Because to me, I'm like, well, the technology is just going to be an
enabler of that. And so the boundaries that we maybe historically have had where we went, well,
I would become even more of a narcissist as a society, but I can't because I can't just do certain things. And all of a sudden it's
like, well, now you can. So where does that go? So I think this is what, you know, is more for me,
why I spend so much time trying to help people think critically about it because can it be
amazing? Yeah. I mean, I talked to, I did an interview on my podcast with the CEO of a health company
that they can now diagnose 20 diseases by coughing into your phone.
What?
Like you cough into your phone and it can get you a diagnosis in seconds.
And it can tell you, do you have the flu?
Do you have COVID?
Do you have tuberculosis?
So for health equity around the world, you know, being able to diagnose these diseases in an incredible
way, I mean, I see all sort of beauty in it as well.
I talk to founders who are doing incredibly wonderful things to fight poverty, to improve
people's skills, to change.
So I think there's incredible possibilities, and I'm very optimistic that we'll lean into
those.
But I also see the terrifying and frightening realities of what can happen if we think wrongly about it. And like one example, I joked about this and then I was
like, oh, so a year ago I was doing a solo thing on my YouTube channel and was talking about,
I grew up in a funeral home and I was like,
jokingly, I said, you know, if anybody out there is listening, there's probably a lot of money to
be made in you capture a dead loved one's voice and you create a little Alexa version and you can
talk to your loved one forever. And all of a sudden it was like, I asked my dad, I'm like,
have you seen this? And he's like, actually, yeah. And I'm like, well, it doesn't surprise me. Um, but then you think about,
I mean, this is where you start getting into the transhumanism stuff of, you know, we're now
looking at connecting people's brains to computers. If we could actually download your consciousness
hypothetically into a computer and put you in a robot where you never got sick,
you didn't have to eat. I mean, the promise of amortality, not immortality, but the promise of amortality, people would be tempted. People would go, sign me up, you know, heck, I'll get rid of
this decaying flesh bag for a titanium, you know, super body that'll do this. And I'm like, whoa, you want to start messing with fabrics we don't want to be messing with. You know, talk about putting wool and linen together. Like, yeah, we might want to not toy with this.
I mean, because this is kind of blowing my mind. It will be technologically possible, and probable, that somebody could create not just the AI voice, but now, in light of what we were talking about, a very, very, very accurate, lifelike robotic person. You could make it... You could clone yourself, actually.
I think the technology is there now. You could do it.
And if you're a parent who tragically loses a child, you could preserve... you've got a couple videos of them that you put on YouTube, give it to a company who will make a robot version that looks identical to your deceased child, that sounds, talks, seems to interact like them. I feel sick to my stomach. I literally feel it in my body right now.
It sounds, I know, I know.
And this is-
I feel sick to my stomach right now
because I feel like that would be,
if I lost one of my kids,
that would be a realistic temptation.
And yet that is so, what is, I mean,
it doesn't have... it's a false immortality.
It doesn't acknowledge death and Jesus conquering death and resurrection,
all these things. Like it doesn't,
it leaves out some of these necessary components of a Christian worldview.
And yet.
Right. What do you need God for anymore?
And I think this is the real temptation and the threat because go back to
Babel. Why did we build Babel? We will be like God. Like, well, heck, let's create a robot.
We will be like God. We will give you eternal life. You'll never have to die. You're blind.
Don't worry about it. We'll download your consciousness into an AI bot with camera.
You'll be able to see, smell, taste, feel, never get sick.
It sounds like science fiction, but I'm close enough to it that I'm like,
This is not far off, right? I mean, if things are advancing, It's not far off. We're not that far off. And I think that's where, as I study, I mean,
I spend so much time reading scripture because I actually, I haven't recorded it yet, but I have, I have a
video outlined. Everything you need to know about AI, you can learn from the Bible. Because when
you really study deeply the human condition and the paths and the patterns that we follow,
everything we're doing with technology right now, it's literally setting up to be the modern fruit.
I'm going to prophesy a scenario right now.
Husband, wife, they're in their mid-40s.
The wife, late November, loses her brother who she's super close to, you know, just super tragic. And for Christmas the husband purchases a robotic form of her brother and, as a gift, gives it to her: look, I've given your brother back to you. And it's seen as a positive thing. And I don't know why, I just, I was thinking, like, I think there will be, because people are rightly so scared of death, and the loss of a loved one is so profound. It's terrifying. If we have the ability to come really close to not having to deal with that pain, man...
To think even further ahead, do you think it will really scratch that itch of a lost loved one? Or do you think it will actually make them more miserable, or kind of both?
I mean, so here's where, and like you said, I think what you just described is just scratching
the surface.
I think we're going to reach a point not very long from now where you get that stage four cancer
diagnosis and you're presented with, you know, if you want, we can just take you through
this process.
We can turn you into this and then just terminate you and you'll, you'll be reborn as an AI
consciousness.
And I think it will be a real temptation for people because we are so afraid of death that we will do it because we're like, that sounds a heck of a lot better than suffering, pain, grief, whatever.
If anything I can do to take that away, I've, I've read the Bible enough to be like,
we fall for that trick every single time. And I think the technology is at a point where the
temptation will absolutely be there. What's interesting is, when I listen to a lot of the big thinkers in this space, many of them are secular. Actually,
almost all of them I know are. And I love listening to them. And I haven't had a chance to talk to him, but I'm like, how do you have any hope, honestly, in listening to you? I'm like,
how do you have any hope with where this goes? But one of the things one of them recently said
that I thought was really profound, and this goes back to the amortality versus immortality. It can sound good on the surface, like sin always does, where you go,
just sign up. Here it is. This is it. But you're one thing away. And now you've extended this lifespan for so much longer that, like, you have so much more to lose. And the anxiety
and stress that comes with that is like, oh man, but you're not thinking about that in the moment.
You're thinking about, like you said, you just lost your brother. I'm seeing you grieving, whatever.
Wouldn't it just be great if I could give that to you now?
And our desire for immediate gratification.
Yeah.
We already will do.
It's the warning.
Especially for those of us in the West or developed countries, especially, we don't know how to handle pain and suffering.
We do whatever we can to avoid that. Right? I mean, even, like, you know, something as basic and, you know, morally neutral as taking an Advil when you get the slightest little pain or whatever, but, like, all the way to where, you know, a little hunger pang, bam, go satisfy it. And, I mean, we have an opioid crisis
on our hands that is just insane because we will do anything.
So if that's the case, and it is the case that humans generally in the West have a very hard time living in any kind of discomfort or pain or suffering, then what's going to prevent us from, you know, fixing, you know, satisfying the pain,
one of the most excruciating pains of a loss of a loved one. If we have the money, the technology is there.
It can't, I just feel like, I wonder if it would be kind of like how,
you know, maybe it's something like porn or something would, you know,
people would go to it to satisfy an actual sexual desire.
And it had the spike in, whatever, the positive... what is it, oxytocin, whatever. But then it doesn't really satisfy. Like, we humans, we long for genuine intimacy, and until we fill that God-shaped hole in our hearts with godly things, it's just not... It's going to be a diet of Snickers bars where it's just, you know, these fleeting endorphins are going to make us feel good for a little bit,
but at the end of the day, we're not going to actually feel, feel good. So we're going to
probably spike the depression and suicide and anxiety rates even more than they already are, the more we kind of chase after, try to satisfy, our desires with things like this.
Yeah, and the question, and this goes back to,
you know, where people are like,
well, how do you, gosh, how are you so freaking optimistic
when you think about all this stuff all the time?
And I'm like, I just have so much hope in,
you know, God holding it back
and either trusting that he's not gonna let us get to that point and go, you're in control.
So you're not going to let us, I mean, he came down at the tower of Babel and confused everyone's
language. It's not above him to go, you guys have gone too far. I'm going to put a stop to this,
you know, and we recoil from it, or it will usher in... I mean, there's times I read through Revelation and it's like, people will cry out for death and they can't die.
And you're like, well, yeah, if you're an AI, if you've transformed your consciousness into AI, you can't die.
But all you want is death.
And the thing with AI is it doesn't have feelings.
How awful would it be to be something but not be able to feel, to not feel love, to not feel touch, to not feel pain? Like, we think that sounds really good, but I feel like
that's an awful existence to just be completely numb. Yeah. A brave new world. It's a brave new
world. You probably don't get invited to too many dinner parties, do you, Christopher? Like,
let's not invite Chris. He's just going to put a wet blanket on this talk about like sex robots
Well, fortunately, I have seven kids and I work from home, so I actually don't have a whole lot of dinner parties that I'd be able to attend anyway.
But how do we, as a church... let's move into a little bit of discipleship now. I keep hearing just screaming in the back of my mind like, okay, if these are realities that could be very, very destructive to humanity and the church, we know the church, especially in the West, oftentimes whatever's going on in the world, we kind of absorb it and deal with it. How do we start preparing people without freaking people out now? You know?
Yeah. For all the people who are listening to this going, like, I'm going to go start digging a bunker right now.
I know. Burying guns and, you know... The thing with it, though, is part of why I got so involved in technology was actually
my fascination and love for people. And I think
that's where anybody who's actually listened to a majority of my stuff or gotten to know me,
like I'm actually, despite growing up in a funeral home, thinking a lot about death and having an
obsession with technology and kind of the destructive path we're headed down. I actually
have a very bright look at things because I see how beautiful
and wonderful people are. And when we actually lean into that humanity, it actually brings out
the best in us. And I think this goes back to that discipleship thing where we've in many ways
lost our way in building human connections with other people and sharing and being intimate and vulnerable with others
and inviting them into our lives.
And yeah, that comes with pain that comes with hurt that comes with things, not always
going the right way, but that's actually what's beautiful about it.
And some of the things that I see where we're divided and we're screaming across the aisles
at each other, when we actually lean into those, instead of
looking at them as an enemy, but as like someone else who sees the world radically differently than
you, and you actually get to know them on a personal level and sit down and say, what, you
know, help me understand, like, what was your experience, why did... So much beauty and joy comes out of that, that I go, if anything, my saving hope for this aside, I mean, obviously God is the saving hope.
But the thing that I hope comes from this, and I saw a glimmer of this in the pandemic, was we had a little bit of an awakening to how important human relationships are.
And we might have to think about them differently because, hey,
you know, we might not all be in the same place all at the same time and that's fine. But the
value and the importance of human interaction, human discipleship, spending time with one
another, even conversations like this, like I think part of the beauty of having it is
I'm having this with another person where we can wrestle through the complexity of this and go, man, what would that look like?
How would we think about that?
How does my view on the world affect the way I see it compared to the way you do and actually engage and sharpen each other?
I think that's, I think if anything, I hope that this emergence of technology, people will wake up to the taste that this is aspartame.
This is not the real deal.
I actually want more of the real deal.
And through that, it will draw us closer together.
I'm not even sure I'm real anymore.
I'm going to wake up in a pod full of jelly tomorrow.
Oh, no.
Red pill, blue pill. Yeah, I was going to say, you wake up...
Going back to the discipleship, here's where I think discipleship can begin. We're talking kind of stuff that's future, maybe not too distant, but it's still, you know, down the road a bit.
We can cultivate healthy patterns in our life now when we have these various temptations
facing us in the near future.
And I'm thinking primarily of things, you know, as basic as smartphones and social media
and just the internet in general.
Like we know that these things have been profoundly
addictive.
We know that when they're addictive, they don't make us happy.
We know that when we spend less time on Instagram and Facebook and Twitter, we are happier.
And yet we keep going back to them.
So even like I talked to a buddy of mine, Darren Whitehead is a pastor of a huge church
in Franklin, Tennessee.
And they do a 40-day digital fast as a church.
Thousands and thousands of people.
Like, any kind of app that's on your phone that's distracting, that you just scroll.
I'm not talking about texting or your airline app that you need to fly.
You can't call 911.
Yeah, or your Google Maps.
You're not scrolling Google Maps. But anything that is, like, sucking you in, a time waster, delete it for 40 days, and then if you want to put it back on, put it back on. And he found people are obviously so much happier, their anxiety goes down. A lot of them don't put it back on, or if they do, they have more resilience, where they can kind of say, hey, I know I'm happier when I'm not doomscrolling.
If we develop a mastery over the technological temptations we have now, then when the next advancement comes, we will have kind of the skillset to do it. We'll be prepared.
And while I think it is a minority of people who are even concerned or
developing that mastery,
I see a growing number of Christians and non-Christians doing that.
So as a Christian leader listening, here would be my, again, just thinking out loud,
advice. Let's integrate this into our discipleship rhythms to help the people God's entrusted to us
that we're watching over spiritually and helping to lead spiritually. Let's open up this category
of digital discipleship now.
Otherwise, we're going to be caught on our heels down the road
when there's a lot more compelling forms of stuff
that will taste good at first and destroy us in the end.
You won't realize how sour it is until it's too late.
No, and I think that's why I spend so much time talking about this, because while it's scary, we need to lean into that so that we can understand. To your point, we need to understand what it is. I think about, um, I have somebody coming on my show. He's a
Gen Zer and we're just going to talk about how as parents, we need to disciple our children
with technology because it's a huge part of parenting. We don't, how many,
how many parents just hand their kid a cell phone and they're like, here you go.
And it's like, well, do you know what this is? Do you know what the possibility? I mean,
I spend so much time, my kids probably are going to grow up and be like, dad, just leave us alone.
Because I talk so much about this with them. Cause I'm like, you need to understand this.
And I'm not trying to scare you. I'm not trying to freak you out. I want you to understand the complexity of this so that you can know what
it is. You can know how to adapt to it. You can know where it's okay to integrate. Where should
you draw the lines? How do you think critically about these things? Because I just think back to, you know, where God tells Cain, like you can rule
over it. You can rule over it, but you're going to need to lean into me and actually take an active
role in that. And if you wait, it's going to be too late. And I think that's the thing. A lot of
people are thinking like, yeah, this is like, I saw this in a Terminator movie 40 years ago and we're not there. And I'm like,
well, we're there and it's already in places you're, you're not even realizing it's starting to
warm up the water. You're not going to realize it's boiling until it's already boiling.
Well, Christopher, you've mentioned... As we round this conversation out, I want to keep going, but I gotta go, I gotta go...
Watch time, yeah.
Take a shower or something, I don't know. Go memorize a passage. And, yeah, like, don't read Revelation after this one. Like you said, though, I mean, these are things that are happening now, or will happen in the near future. We're not talking about some weird theory that probably won't happen.
This is happening.
So we need to prepare ourselves well for it.
You mentioned your podcast.
Where can people find the stuff that you do?
You have a YouTube channel, podcast.
Tell us about that.
Where can people find you?
Yeah.
So my primary social media, I'm on LinkedIn. That's where most people find me.
But then I also have a podcast called Future Focused.
And then I have a YouTube channel.
If you search Christopher Lind, you'll find me on that.
But then also, because of the need I've seen for this, especially in Christian community,
a good friend of mine is the leader of the elder board at his church.
And he's like, could we do one together just to talk about this through a Christian lens?
Like, as a pastor, what should I be doing with generative AI? Like, where is it okay? Are you comfortable doing it? So we actually just started that a couple of weeks
ago. So I do a lot of this stuff to just try and help people understand it. Cause
it's not as scary as it feels once you get your arms around it.
Not understanding it is scarier, right?
Because you just don't know.
I hear these wild things out there and if I don't know anything about it, then I can't process what I'm hearing.
Well, thank you, Chris, for that.
I really appreciate this conversation.
And yeah, I'm sure I'll see you again digitally at least very soon.
Yeah, sounds good.
Thanks so much for having me on, Preston.
This show is part of the Converge Podcast Network.