Employee Survival Guide® - Negative Impacts of AI on Employees and Working

Episode Date: April 17, 2024

Could the very tools designed to enhance our productivity in the workplace be silently shaping a future of bias and invasion of privacy? Join me, Mark, as we delve into the profound impact AI is having on employment, from the boardroom to the break room. Along with insights from industry consultants, we unpack the transformative effects on hiring practices, highlighting the unseen biases lurking within AI algorithms. We confront the unsettling reality of how these systems could perpetuate discrimination and examine their role in employee surveillance, questioning the trade-off between efficiency and ethical practice.

In a world where AI's judgment can influence your career trajectory, understanding its reach into performance evaluations and mental health assessments is crucial. Our discussion traverses the spectrum from the benefits of AI, such as personalized support and early symptom detection for mental well-being, to the darker side of increased scrutiny and emotional surveillance. We dissect the delicate balance between leveraging AI for good while safeguarding against its potential to exacerbate workplace stress and breach the sanctity of personal data.

Finally, we grapple with the complex relationship between trust and technology as AI surveillance becomes an unwelcome fixture in our professional lives. I emphasize the pressing need for self-awareness and proactive measures in protecting our digital footprints from prying algorithmic eyes. The responsibility to navigate these murky waters lies not only with employers and regulators but with each of us as individuals. As we sign off, I urge you to stay vigilant and informed, for the AI-driven workplace is not a distant future; it's here, and its implications are profound.

If you enjoyed this episode of the Employee Survival Guide, please like us on Facebook, Twitter and LinkedIn. We would really appreciate it if you could leave a review of this podcast on your favorite podcast player such as Apple Podcasts.
Leaving a review will inform other listeners that you found the content of this podcast important in the area of employment law in the United States. For more information, please contact our employment attorneys at Carey & Associates, P.C. at 203-255-4150, www.capclaw.com.

Transcript
Starting point is 00:00:00 Hey, it's Mark here, and welcome to the next edition of the Employee Survival Guide, where I tell you, as always, what your employer definitely does not want you to know about, and a lot more. It's Mark, and welcome back. Today we're going to touch upon a topic that I've been thinking about for some time, and you've been confronted with it day in, day out: artificial intelligence and working. Pretty complex area, and I'm wading in
Starting point is 00:00:37 because I don't think many other people are. I think we're really early in the process. I'm using Google Gemini, and I'm sure you're using ChatGPT and other tools to play around or draw pictures, art, et cetera. But my focus is really on what is gonna happen in terms of AI in the workplace affecting you as employees and executives.
Starting point is 00:01:04 And I may be a little all over the place today, but I did try to organize my thoughts. For example, we're gonna talk about hiring and discrimination, and about my favorite, performance monitoring and surveillance. Other topics include AI decision-making, and how about mental health. And the weird one I came up with as well is AI as a harassment amplifier.
Starting point is 00:01:34 People would use a bot to go after you. And then employee trust was really a good one too, because that dawned on me: people really have low employee engagement, so they have low trust of employers these days. So let's just dig into it. As a preface, I see a lot of cases, a lot of fact patterns, and I'm just a very curious person.
Starting point is 00:02:01 So I went out and just looked, and yeah, I used AI to research for this podcast because I wanted to see what it was doing. Because, you know, if you want to understand what AI is, well, I was watching a podcast with Elon Musk the other day from a fellow in, I believe, Norway. And Musk said that we're running out of data, and the implication was that the AI is so fast at picking up all available data. He even talked about how all books in the world that were ever written have already
Starting point is 00:02:41 been analyzed in terms of the machine learning, and then photos, TV, podcasts, you name it. And so it's going to run out of data. That's pretty alarming if you think about it. So getting back to what AI produced in terms of its relationship to itself in employment law: I typed it into Google Gemini to come up with issues, then I looked at the topics and asked, were they accurate or not, based on my experience? And I wanted to just share with you what I discovered
Starting point is 00:03:24 and also what I'm gonna just talk about in terms of the reality check of what these different topics are, what's gonna happen to us as employees, how employers are gonna react to this issue, how they're gonna handle it. So let's just dive in, sorry, segue. So the first one is hiring and discrimination.
Starting point is 00:03:51 AI-powered recruiting tools can inadvertently perpetuate bias if the data they are trained on contains historical patterns of discrimination. That's probably the biggest issue most people think about: the human input into it. How do the programmers, the coders, prevent the AI's machine learning from replicating bias? So this could lead to cases involving employment discrimination based on race, gender, age, disability, et cetera. And employers will need to be very careful about how they use AI in the hiring process
Starting point is 00:04:19 to avoid legal trouble. Good luck with that. I see it's fraught with issues. If anybody has a recent college graduate, they know that their son or daughter has been interviewed by a computer. The computer is doing all the work. It's the screening process. That's more common these days than not. So it's not something that I experienced when I was ever interviewed for a job, but I never really interviewed for a job because I've been doing this all my life. But that's happening at a very quick pace.
Starting point is 00:04:50 So one question is what the AI interview process is like and what it's looking at: facial recognition, how you're reacting, whether it's looking at your nervousness in the video, et cetera. The problem with discrimination in hiring is that the AI system used in recruiting often is trained on historical data.
Starting point is 00:05:16 If the data reflects past biases, for example, fewer women or minorities hired in certain roles, the AI algorithm may learn and replicate these patterns, as I indicated. This can lead to qualified candidates being overlooked simply because they don't match the biased historical model the AI is using. We could see discrimination cases where rejected job candidates sue employers alleging disparate treatment, for example, that they were intentionally discriminated against due to their membership in a protected class. Disparate impact is another form of discrimination
Starting point is 00:05:49 theory we use, where a seemingly neutral AI hiring process disproportionately screens out individuals in a protected category. So the defense challenge, meaning for the employer, will be proving the AI system is fair. This may involve demonstrating that it doesn't have an adverse impact on protected groups. That's complex, especially with less transparent AI models. You have to understand that the developers don't really know what's in the black box, and I'll get to that in a second. This is happening so quickly. And we all have, at least I have this,
Starting point is 00:06:29 the Schwarzenegger film approach, when the machines take over the world. You know, the concern that AI becomes too smart and it begins to eliminate the humans. But I digress again. And so there's discrimination in the hiring process, a possible issue that we might have. It may be happening now. How would you know if you're being discriminated against in the hiring process by the AI bot? You really wouldn't, honestly. The law is very slow to pick up
Starting point is 00:07:00 on current events. And so, you know, you wouldn't find out, unless somebody wrote about it, like I would write about something where I found a little tweak here and there in my discovery of cases, in terms of discovery of data in actual legal cases, and we came up with something. You know, you would learn it that way, but it's a very slow process from the legal standpoint
Starting point is 00:07:22 to bring these issues to the forefront. So I think it's a wild west of discrimination in the hiring process. You just pray that they're going to do it correctly in terms of the coding of what they're looking for. But again, I have really mixed feelings about that, that humans put data into computer code that can generate the replication of the bias. Onward. Number two, performance monitoring and surveillance.
Starting point is 00:07:47 My favorite. This was a really big topic when we all went into our own homes during the pandemic, and we discovered, and reporting showed, that a lot of employee monitoring took place. It's been going on for a while, but it came out. So AI can be used to monitor employee productivity, communication, and even physical movements.
Starting point is 00:08:08 This raises significant privacy concerns and can lead to cases focused on unlawful surveillance, on reasonable expectations of privacy, and on the creation of a hostile work environment. So AI-augmented monitoring tools go far beyond traditional performance tracking. They may analyze employee email and communications for, get this, sentiment and potential disloyalty. Think about that.
Starting point is 00:08:35 You know, you're writing an email; everything you do at work, everything you touch, can be analyzed in a millisecond by a computer to determine whether you're, you know, loyal, or whether something may be happening to you, maybe mental health. So employee monitoring is a huge concern here in terms of the AI issue. We don't know, you know, whether the government is regulating how companies use this technology to, you know,
Starting point is 00:09:01 the technology to, you know firm with employees. We know there are keystroke monitoring software. I always laughed at the one where the, you can buy a little device and move your mouse and make sure that the mouse is moving to trick the computer for detecting low productivity. But the facial expressions and body language on Zoom calls and
Starting point is 00:09:25 things of that nature, all of that data, emails, Slack, texts, visuals from Zoom, all of that gets dumped into the data, because the AI bots are so hungry for more information, and you can just think about the insidious nature of what it's looking for. Even your physical location, where you are. Some people want to work remotely from wherever these days, and maybe that comes into play. So what types of cases are we looking at in terms of this aspect of performance monitoring and surveillance?
Starting point is 00:09:56 You get, obviously, the invasion of privacy issue. Employees may argue unreasonably intrusive surveillance violates the right to privacy in the workplace. I agree, because we had this issue come up when we all went to Zoom during the pandemic. There were other things happening in the household around us that we could see. We got the cat, the pet, et cetera, but there were also conversations between spouses, family matters, serious issues. So how far the employer's reach can go to monitor people is really a serious question.
Starting point is 00:10:30 I always argued that with remote working and employee monitoring, you have this issue of violation of trust. We have these little things on our computers, our laptops; we can turn off the screen, but can you really turn off the microphone? I mean, it's not likely. So I'm going to give you a warning: if you have a computer device from your work, shut it down if you want some privacy. Because leaving it open is like letting Alexa listen to your conversations with your spouse.
Starting point is 00:10:59 Remember it's there, and remember they can listen in on everything you're doing. I don't mean to scare you, but it's a reality. And by the way, if you got an ad targeting you after working or talking in front of Alexa, it's no joke. It's actually doing that. It's listening to you and it's spinning advertising back to you.
Starting point is 00:11:19 It's not a coincidence. Next issue, data protection violations. The collection and storage of vast quantities of employee data by AI systems will raise issues regarding data protection laws like the GDPR. And I apologize, I didn't look up the acronym, but the issue here is, I thought about the health concerns issue. What if the AI is working through the laptop and listening to you having a conversation about your cancer diagnosis, and you didn't tell anybody at work?
Starting point is 00:11:48 Where does HIPAA come into play there? How does the computer know to shut off when it hears a medical issue like that? We're not being told anything. Is your employer telling you what the constraints are gonna be when it hears a medical issue happening across its devices? I mean, certainly they're not gonna have, you know, a screen come on and say,
Starting point is 00:12:07 sorry, Susan, but you can't talk about that in front of this computer because we have to observe your HIPAA rights. I mean, that's not going to happen. Employers are their own private governments. They do what they want in their own ways, and it's all secretive. And we as employment lawyers try to enforce the rights of people against employers because employers are terrible. They just don't want to observe the ethics or morals of these issues.
Starting point is 00:12:36 They claim that they do. But in reality, they're sitting there listening to you right now, listening to this podcast. You know, how about that? On your own time, at home. Third issue, gig work and independent contractor status. Let's take a quick break. It's Mark, and we have a new product for you. It's called the Employee Survival Guide, at Employeesurvival.com.
Starting point is 00:13:02 And it's a site where you can obtain PDF products that I created myself. I was spending too many hours, way too many, researching and writing about, for example, the performance improvement plan and beating them, and a second one about negotiating severance agreements, two of the most important topics that we see in terms of the web traffic and podcast traffic we have. So check out EmployeeSurvival.com and see if it can help you, and you don't need an attorney to use it.
Starting point is 00:13:28 Thank you. This one came about in terms of the Gemini-related feedback I got. AI platforms often manage gig workers and freelancers. And the AI bot said in response, legal battles are likely to emerge around whether these workers are truly independent contractors or if they should be classified as employees, and the rights and benefits that come with that status. Now, I saw that feedback from the AI device. This is Google Gemini.
Starting point is 00:14:03 I'm like, you know what, what I just read to you didn't really make too much sense in terms of why that's an issue. Well, we understand what gig workers are, we understand what independent contractors are, but the legal battles, the AI device is not smart enough yet, it will be, to tell us why this is really a big issue. It's saying often managing whether these workers are truly independent contractors. I mean, it's not being more specific. So you're really at the advent of, in this case, Gemini and its inability to learn about particular areas, but it will catch up. It'll probably listen to this podcast and reform itself to give a more explanatory analysis about the roles of independent contractors, or why the AI device is maybe
Starting point is 00:14:56 interfering or doing something with the independent contractor. So I found that that particular concept, the gig work and independent contractor feedback from AI, didn't actually tell us anything, I guess, at this point. And I'm struggling to help you interpret what happened, because it didn't really tell us much about anything; the AI device is designed to learn, but it's not producing yet. It will. The fourth thing is automation and job displacement. Well, here we get something really sound and concrete.
Starting point is 00:15:29 You know, I read in the Wall Street Journal, and I think it was picked up in the New York Times as well, that on Wall Street, you know, certain levels of workers in the investment community at the very low end are getting eliminated. I think it was Goldman Sachs that eliminated a range of individuals at the low end. So if you're an investment banker trying to start out, AI is doing your job for you. There's no more preparing decks and spreadsheets and the like.
Starting point is 00:15:56 That's all being done by AI. So automation is happening, and job displacement. That's a good example that just recently occurred. And so the AI, Gemini, said to us: as AI automates more tasks, job losses will occur. There could be cases related to severance packages, retraining obligations, and the overall responsibilities companies have towards displaced workers.
Starting point is 00:16:21 Again, another example of Gemini not producing a response that is adequate to explain the topic of automation and job displacement: the issue of severance package cases, retraining obligations. Well, retraining, yeah, maybe to retain your workers and have them do something more that AI devices can't do, of course. And then overall responsibilities, I don't know, it didn't really explain it. So another example where AI gets it wrong, or at least doesn't provide enough explanation. So I included these because they were just,
Starting point is 00:16:55 they just kind of stuck out, being nonsensical sometimes in terms of their explanation. The next one is algorithmic decision-making. Here we're talking about the coders, the human people who code, and I'll tell you the response I got from Gemini: when AI systems make decisions about promotions, terminations, and disciplinary actions, there is the potential for bias and unfairness. Lawsuits may challenge a lack of transparency in these AI systems, demanding explanations for the decisions that heavily affect employees. So I included this one because we all know it's data in, data out; you can have bad data in, meaning biased data from human individuals.
Starting point is 00:17:38 And then next, think about this. The AI device is learning from a wide range of things. It's going to learn everything. It's going to learn what the Ku Klux Klan was. I think Google tried to change its AI device to do certain things as a way to not offend people. But the AI machine learning device is going to pick up and learn about biases in American history, world history, etc. And it's going to bring that into its algorithmic equations. And it's going to make decisions about your job. And are they gonna get it right? Is the employer gonna be transparent about it?
Starting point is 00:18:30 If the employer says we were convinced by Accenture or whomever to have AI in our workplace, and by the way, Accenture has a large team of people trying to promote AI in the workplace, that's what they do as consultants. Well, what about the actual box itself, in terms of putting the code in, what it's learning, and how you control for bias? And so the fear is that the bias is being pumped into the
Starting point is 00:18:59 algorithmic equation and thus it's going to impact you adversely. Again, next one, number six, AI decision-making, or AI-driven decision-making. Here we get into examples of performance evaluations or promotions or disciplinary actions and terminations. Think about this; in all honesty, it's very important. Companies love to automate things, and so you're probably experiencing an automated performance evaluation. You know, your manager is, you know,
Starting point is 00:19:31 maybe creating something of an input, and then you're getting a question back from an AI device and you're having to feed into it. And it's gonna analyze the time it takes for you to do it. It's gonna analyze your pattern of communication, what you say, the language you use, and what it can potentially mean. It will interpret that. And then it'll assess other aspects. Performance reviews can now include your emails and your work on various products. And if you received a performance improvement plan
Starting point is 00:20:02 or a negative performance review, where it goes into various line items with accusations of poor performance because you need improvement, it's going to put various facts in. Well, now we're going to see that the AI device is going to grab from your available work product, you as the employee, and then from your 360 review by your coworkers. Remember those? Yeah, they still happen too. And that's all being fed into the performance evaluation, and the AI device is going to help the manager rate you, but maybe it'll just rate you itself. That's pretty scary. And it's going to be pretty smart about it too, because it's looking at every piece
Starting point is 00:20:41 of data you've ever created about your performance. And who's smarter right now, in terms of your memory versus the computer? The computer is going to memorize every single email you wrote and every single deck you wrote, etc., and maybe have more data than you. That's scary. It means that you have to get on top of your game now, because it's starting now. I'm talking to you about it on a podcast, and what concerns me is that performance evaluations are gonna get uglier. We know that they don't work themselves, but we know why they're used.
Starting point is 00:21:14 They're used to basically get rid of people, but maybe that's gonna get more intense. The next thing is promotions. What about an AI system identifying candidates for advancement based on a wide range of factors, potentially including social media activity and personality assessments? I included this one because we have different generations of people who put out social media.
Starting point is 00:21:35 We have people who put out LinkedIn. We have people who put out, let's say, YouTube or TikTok, you know, at various ages. I think of the classic example, two things actually. I heard a story yesterday while I was driving, where there were YouTubers who were posting about vaping. There was apparently a large phenomenon of people just video recording themselves vaping, teenagers generally. And, you know, it's stuck on YouTube, you can't take it off, so it's there, and the machine learns it, and as you grow older, you know, it's factored into the MO of who you are as an employee potentially, and the computer
Starting point is 00:22:22 has a wide range of reach to understand you. So it's like, well, first of all, never put that stuff on YouTube or anywhere else for that matter. But that's a different population of workers, who live their lives in the iPhone generation. I have three kids like that. And so, you know, well, I don't know if they're putting a lot of data out there, but they're on Instagram or whatever it is.
Starting point is 00:22:48 And so the concern is that all that data that they put out there is going to be used to evaluate their performance potentially, because the machine learning is learning from everywhere. It's very scary. So for anybody of any age putting out data in the social media sphere of any sort, it's going to be used to qualify you for performance evaluations or promotions. And how about disciplinary action? Maybe you're fitting a pattern, that you engaged in some form of insidious bullying of somebody on the internet, or shaming, or whatever it was, and then it's brought into the workspace, because the computers were told to go out there
Starting point is 00:23:25 and see what you do and speak very clear to you that when you do a background check on the individual, do you know that background check companies will search your social media profile? I only know personal experience because I've asked for them in terms of background checks. I don't know who I'm dealing with sometimes. And so I didn't ask the background check folks to do that,
Starting point is 00:23:48 but they went ahead and did it. I say that as an example, because now you have the AI devices doing what? Much, much more, quantitatively bringing in data all about you. So we have this kind of reckoning of your prior activities and what your future activities will be in terms of putting information out there. Sometimes you want to put information out there because
Starting point is 00:24:08 it's job relevant; maybe it's LinkedIn and you want to put something out. Or if you're a younger employee, you know, you've just been fired and you go viral on your TikTok and you want to share that, because you're 20-something and, you know, the company just got rid of you in such a way. These are recent examples where young employees have done this and it's gone viral. And in one example, the CEO of the company had to apologize to the individual
Starting point is 00:24:37 for the way the termination was handled. So really, caution in terms of AI-driven decision-making and the availability of data that you're putting out there, at work and even outside of work. As you can hear, it's quite crazy, what we're dawning upon.
Starting point is 00:25:01 So let's move into section number seven, the challenge of the black box, the AI black box. Not to be redundant here, but one significant challenge is that many complex AI systems, especially those using deep learning, are considered black boxes. This means that even the developers may not fully understand how the AI arrives at specific decisions. I mean, that's pretty crazy. It presents issues with transparency, potential for bias, and accountability. So if an employee is terminated or denied a promotion
Starting point is 00:25:35 due to an AI assessment, it may be impossible to provide a satisfactory explanation of why that decision was made. The lack of transparency goes against notions of fairness and due process. I pause here because employers are always going to try to create a defense about why someone was let go, and they're going to build a case.
Starting point is 00:25:54 If they can't understand why an AI device did this to a terminated employee, et cetera, that's a problem for employers. And I almost want to sit there and wait and pause and watch the landscape develop, because if they want to use this product, they've got to control the product and they've got to know what it's going to do.
Starting point is 00:26:13 And you and I can sit here and watch as they screw up, because they're going to make these screw-ups, and I'm going to bring you these examples once they happen. So transparency is not something employers are going to want, because they're not transparent with you now. Are they? No. And that's all I ever talk about, to raise your awareness. I'm being transparent because that's the way I see it. Employers don't want to be transparent because that's not the way it works. They want it hidden from you. The very
Starting point is 00:26:41 essence of this podcast is telling you what your employer does not want you to know, including this podcast. So transparency and AI are in conflict. But because employers have to justify their actions to a court, or why they made a decision, they can't tell the judge, your honor, the AI bot did it. Well, no, Mr. Employer, you've got to explain it, because that's the law. So there's a conundrum there: they want it and they want to use it,
Starting point is 00:27:08 but it's going to get out of control very quickly. Transparency? I doubt it. Okay. The next one is very similar: potential for bias. Even if the AI developers themselves have no discriminatory intent, hidden biases in the training data, or even the very features the AI
Starting point is 00:27:24 selects for analysis, meaning historical data or current data, news, New York Times, Wall Street Journal, you name it, may also lead to unfair outcomes. Detecting the bias within a black box system can be extremely difficult. Yeah, I'm waiting for this one too. You're going to do some type of audit trail, some type of accountability to ensure that there's no bias there. I think it's just black box for a good reason because it's going to probably get a lot of employers in trouble. You'll have a lot of class actions and I'll be looking for it because, you know, who else should look for it? The federal government? Not likely. So it's up to you and I to police the situation.
Starting point is 00:28:11 And why not? You know, this is kind of the dawn of the employee era, where the employee actually matters and, you know, employers realize they need employees, instead of the other way around, where they can just sit there and abuse them. That's changing, but it's very slow. So potential for bias is extremely important. Hidden bias, even though companies say we don't have it, and they're going to claim that there's no bias there. We're an EEO employer, et cetera, an equal employment opportunity employer.
Starting point is 00:28:42 But it's going to be the Wild West to watch this develop. Now let's talk about something really important, another topic: AI and workplace mental health and well-being. This one popped up and I said, I'm going to pause and just ruminate on this with you. So AI is poised to influence employees' mental health in positive and negative ways. And the potential benefits are this: personalized support.
Starting point is 00:29:11 Now think about it: AI-powered chatbots or apps could provide tailored advice on stress management to you. Early symptom recognition, saying, you know, Susan, I think you might be exhibiting the patterns of a diagnosis, maybe major depression or anxiety, something like that. So you think about that happening to you. And then access to resources that will be helpful to you.
Starting point is 00:29:36 So that's positive. That sounds likely to happen, I think, right? But so it's monitoring how I'm talking with you right now and assessing, is Mark having some DSM-5 issue? The DSM-5 is the Diagnostic and Statistical Manual. To assess, you know, is there something about his tone and intonation, you know, he might be feeling kind of blue today. It's something of that nature you think about. But, you know, it's the laptop listening to you constantly. Proactive intervention: the AI could analyze communication patterns, as I just described, and behavioral data to identify employees at risk of burnout or mental health decline, allowing for early intervention. Well, again, makes sense. I personally would never use
Starting point is 00:30:18 that on my employees at work. That's a matter of their own personal privacy, but some employers may decide that this is a relevant area for them to create apps and help employees, like under an employee assistance umbrella. But again, it's pretty strange and unusual, though maybe we recognize it as a positive. The risk aspects are self-evident. So, increased work pressure: AI systems setting productivity goals
Starting point is 00:30:50 and monitoring employees' activity constantly could exacerbate stress and anxiety. I mean, who are you working for? You're not working for the man any longer, or the woman. You're working for the bot. And the bot doesn't have this consciousness about you, about, oh, I think Mark looks a little overworked right now. His tasks and timing are actually slowing down. Maybe we should give him a break or
Starting point is 00:31:11 something like that. Think about Amazon, I'm sure it does exist there, in the warehouse. I mean, the package fillers, you know, they're working constantly. And there are lawsuits about this issue. And people are burning out and describing the warehouse as a very difficult place to work. But yet you and I both need our packages at our doorstep, and it's amazing that it happens every day. But someone in that pipeline of logistics is probably experiencing some level of stress and being monitored through it. Why wouldn't Amazon do that? Of course they would. So, increased work pressure. How about emotional surveillance? Employers may deploy AI tools for sentiment analysis of emails, I've been discussing that, and facial expression
Starting point is 00:31:52 monitoring of workers' emotional status, which raises serious ethical and privacy concerns. I mean, folks, when you step into the workplace, you have some privacy related to HIPAA and when you go to the bathroom. Other than that, you've got no privacy, and employers basically run the place like a private government, like I tell you. So, you know, emotional surveillance, that's basically stepping over the line. And your employer is going to do this. I have examples in performance reviews where the manager doesn't really identify factual issues or examples, but goes after the subtleties
Starting point is 00:32:38 of how you reacted or spoke or interacted with your team. You know what I'm talking about. You've seen and heard about this. That's an emotional surveillance scenario that's happening now, but think about it happening in a magnitude of a hundred, you know, using computers to do the work that humans can do and putting that information in the hands of managers
Starting point is 00:32:58 to make decisions. I personally know of a case at MetLife. I did a podcast on it years ago. The woman was working remotely, I think during the pandemic, and they told her she had a negative performance review. It was a race case, but they said it was because, well, they assessed basically her emotional intelligence. This person had a PhD and worked at a very high level. And they basically, you know,
Starting point is 00:33:26 marked her down because she had emotional intelligence issues, because they surveilled her. They actually told her, and we produced this later on in discovery, these, like, you know, five different items that she had failed, all based upon how she reacted or said something, et cetera. So it's quite a serious issue. It does happen, and it's going to get worse. How about algorithmic bias in mental health assessment? AI systems used to predict mental health concerns could be based on biased data,
Starting point is 00:33:55 leading to misdiagnosis or unfair treatment of employees. I love this one, because you want to have a learning machine, and it's going to feed in the data it needs to learn: all of the DSM-5, okay, get that, plus any articles, research reports, all of that, because it's consuming everything off the internet. And so then you have this issue of unfair treatment because it regarded you as having a disability, which you either have or don't have. And that gets into the Americans with Disabilities Act, because that's the act that controls and governs, you know, mental health and well-being at the workplace.
Starting point is 00:34:33 So you could have a potential situation where too much data goes in, feeds this junk back to the manager, and the manager reacts to it and, you know, basically terminates you. So that does happen. People do get fired when their mental health status is disclosed, because some employers, you know,
Starting point is 00:34:53 don't understand it and they react to it. So it can get even worse than that. Privacy concerns, violations, yeah. Collection of sensitive biometric and emotional data by an AI system will raise alarms about employee privacy and potential misuse of this information. Yes. All day long.
Starting point is 00:35:12 Run. You know, this mental health and well-being at work is probably going to be the largest issue that comes up out of this, because of all the surveillance that's happening. And if you or your coworkers have mental illness of any sort, from nominal to severe, you're kind of put on notice right now, because you have to manage yourself with your employer, which is like, wait a minute.
Starting point is 00:35:41 I have to think about how I write something or how I talk about work in general and I'm being assessed. I mean, folks, that's what's happening now. That's what they're gonna do to you. They're going to, and it already does happen. And it's actually Bridgewater Associates is the largest hedge fund in the world, down the street from my office here.
Starting point is 00:36:02 They actually instituted the principles that Ray Dalio had, and they monitored the way that people spoke, and they rated them. I don't think they do it to the extent that they used to, but they still operate with the principles over there. But they used to operate with
Starting point is 00:36:22 two iPads sitting there, and they'd rate each other while having a business discussion, on tone, intonation, whatever it was, on being effective or transparent or whatever they wanted it to do. It's been done already. And so now you're going to speed that process up, with AI doing it in a way that is not apparent to you,
Starting point is 00:36:44 done through maybe laptops, because it's probably easy to program that. Because the device in front of me has a microphone on it, has a camera on it, and it has all the things you need to pick up. It maybe has a sensor to pick up my blood pressure if I push the little fingerprint icon on the dashboard here. So enormous privacy issues can come up
Starting point is 00:37:08 because of what the AI device is coming up with as it monitors you. This leads to the next issue of discrimination based on mental health. If the AI system flags individuals with potential mental health struggles, could this lead to unfair treatment? Yes.
Starting point is 00:37:24 Missed promotions? Yes. Termination? Yes. That sounds insidious, but that's what currently happens today. We just don't have an AI system doing it. Humans do this to people. That's why you have mental health cases under the Americans with Disabilities Act and state law in the courts. And there have been thousands of them, because humans do this to one another. If you become unhealthy, you're going to get fired or mistreated. Not to say that all employers do this, but humans are not nice folks at work. And if you're not a healthy person at work, you're a lesser-than person. And it's called dehumanization. If you don't know what that means, learn it. It happens all the time.
Starting point is 00:38:12 I think about the person at, again, the Amazon warehouse, who's experiencing physical problems and not able to move the packages along, et cetera. They're being less productive, and they're being monitored for the number of packages they can get through in a minute, an hour, whatever. And it's not about them being human, with emotions. It's about them as a machine. Okay. So it happens.
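The warehouse-rate monitoring I just described reduces to a few lines of code once the scan counts are logged. This is a sketch with made-up numbers, the quota and the worker data are hypothetical, but it shows how mechanically a system can flag a human as underperforming.

```python
# Hypothetical productivity-quota flagging: compare each worker's
# scans-per-hour rate against a fixed quota. All numbers are invented.

QUOTA_PER_HOUR = 120  # assumed required scans per hour

def flag_underperformers(scans: dict[str, int], hours: float) -> list[str]:
    """Return the IDs of workers whose hourly rate falls below the quota."""
    return sorted(
        worker for worker, count in scans.items()
        if count / hours < QUOTA_PER_HOUR
    )

shift = {"W-101": 1000, "W-102": 890, "W-103": 1210}  # scans in an 8-hour shift
print(flag_underperformers(shift, hours=8.0))  # only W-102 falls below quota
```

Note what the code never sees: an injury, fatigue, a bad day. The rate is the whole picture, which is the dehumanization point above.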
Starting point is 00:38:35 You need to be aware of this. So I won't get into the new responsibilities for employers. The feedback I got was a duty to monitor the AI impact, and then reasonable accommodations. I'll leave that for another day. Here's another topic: AI as the harassment amplifier. The news story today is about teenagers,
Starting point is 00:38:58 who can be downright mean, using AI currently. I think there's some ban on production of sexually explicit content if you ask an AI to do it, but there are deepfakes. Currently, and this is really sad, teenagers, mostly females, are being deepfaked in terms of their nudity, and it's being put out there. That's the story. So now let's
Starting point is 00:39:28 bring that into the workplace. Number one, that deepfake that's out there is going to remain out there for that individual. That's an identity issue: somebody has basically stolen their identity, and it's going to follow them in the future in their employment. But in the workplace, there's the potential that the AI could excel at pattern recognition and data analysis. Let's say you're an employee and you like someone, and you basically reach out online and start to analyze their social media. You feed it, basically, into an algorithm.
Starting point is 00:40:05 Essentially you create your own learning machine, and you go out there and gather everything about the person you adore, that you want to have a relationship with, and you look at all of their sensitive personal information any way possible. And you're mining it, even your communications with them. So you can get into a scenario where you have
Starting point is 00:40:31 harassment of any nature, it doesn't have to be sexually based, where the coworker is using an AI device to develop a scenario to basically harass you online and potentially at the office. So there's a deep well of potential, you know, misbehavior by coworkers, and also by companies, who are responsible for managing this issue as well, because it's their work environment. So you've now taken the work environment, stuck AI into it, and given free range to the entire world to use AI, and of course they're using it.
Starting point is 00:41:10 The statistics from the first week after the first chatbot came out, it was insane how many people went on it, and I did too. I'm sure you did, to find out what the heck's going on. But now employers are responsible for their work environment, with AI externally being brought into the workplace to harass employees.
Starting point is 00:41:31 I don't think we have early examples of that just yet, but you can imagine someone's social media presence being used in some way to extract some vengeance or some level of harassment. Again, humans are mean individuals. So that's an issue. You could have the deepfake issue happening to a coworker at work. It's possible,
Starting point is 00:41:57 because you can tell a device to do that, and it will. Next issue: employee trust in AI is a hot topic. Okay, if you're not already, you should at this juncture of the podcast episode feel quite uneasy about this new topic of AI in the workplace. There was already a trust issue beforehand; now it's getting more intense. And so employers have to manage this issue, and they're
Starting point is 00:42:27 not going to tell you how they're managing you. They may bring it up at work. If you're an outside consultant working in AI for corporations, you know a lot more than the average employee. But employers are not telling employees what they're using AI for at work. If they do, let me know. Send me an email. I want to know, like, the earliest signs of what's happening. But the employee trust issue is poor to none at this juncture. It's measured as employee engagement, and I think it's in the low 20s to 30s percent, depending upon age. So that's pretty low. I mean, that's really low. That means a lot of people are unhappy at work. And so you then throw this AI issue on top of it,
Starting point is 00:43:11 then you have the lack of trust from the black box issue, because even employers don't know what the heck is going to happen in the future, and they're implementing these technologies into your workspace. So how do you gain trust in the process? I thought about this before I started to include this topic, but,
Starting point is 00:43:32 you know, think about yourself, your role in this process. And it's always going to come back to your role in this process. What are you doing for yourself to protect yourself in any way against any misuse of AI? We can think about basic examples. I can close my machine down,
Starting point is 00:43:53 turn the little screen thing off. I can turn the machine off, the laptop, whatever, and I've got some privacy at home. And I go about this lifestyle asking, am I in a safe, private space from my employer? So you can do that. The next thing you can probably think about is your professionalism when you're at work. Obviously, you don't want to be baited into an argument by anybody or raise your voice. You want to avoid that stuff. It's pretty commonplace. When you write your emails or when you're having
Starting point is 00:44:23 conversations, don't drop the F-bomb on individuals. So we're going to get back to maybe a political correctness aspect that we have since moved away from, but maybe that comes back. So think about things you can do for yourself to protect yourself, because your employer is not going to do this. They may say that they are, but they're going to want these tools because they can't resist. They're being sold this technology by consultants. It's at a scale you don't understand. It's going to overtake the workplace. In order to control AI,
Starting point is 00:44:55 you can control how it interacts with you by making choices and being aware and observant of what's happening around you. Now, you're just trying to do your job, but now you're going to settle with this next level of observation and protectionism of yourself. Well, yeah, you got to do that because your employer is not going to do it for you. And so, if you're ever vigilant about gaining information about work, now's the time, because this exponential effect it's going to have upon you is going to get insane. Meaning that AI is going to gather all data about you in any way, shape, or form.
Starting point is 00:45:31 And we're talking current, active data on projects you're working on, whatever it is, feeding into their system. It's watching you. And that's crazy. And you have to just be vigilant about what your boundaries are and what you're saying, and turn things off when your work-related device is off. People have phones. I mean, that's a work-related device. And what do you do with it? Do you put it in some type of safe where you can't hear it?
Starting point is 00:45:59 Do you turn it off? We don't turn it off, but we should. So I've just prompted you to think about all these things that are coming to mind as I'm talking with you, things that are quite serious and insane, and I don't want to trust employers to do the right thing. So the aspects eroding employees' trust in AI are issues like the black box problem I just talked about. These systems are opaque. You don't understand them. Employers don't understand them. Big problem. Fear of job loss, because the AI device is going to eradicate the early-stage investment banking jobs
Starting point is 00:46:39 that people get out of college. So, fear of job loss. Let's say you're in your 50s and you already have a fear of job loss because you're going to age out in your prime earning years, but now the AI device is maybe going to take some of your job responsibilities away. You should see that on the march before it happens to you. Likewise, you should notice when some of your job responsibilities
Starting point is 00:47:05 are given to younger workers. Usually that's a telltale sign that you're being, you know, put out to pasture. Privacy concerns, extensive workplace monitoring, as I discussed: big issue. Perceived bias: a good one we talked about. So you already have a lack of trust and low employee engagement, and employers are still trying to figure the stupid thing out. Just type the phrase employee engagement into a Google search, and you'll see what I mean. You can't get past maybe, I don't know, 10 pages of Google if you want to, if pages are even a thing anymore, but they're all consultants out there trying to pitch this thing called employee engagement.
Starting point is 00:47:43 It's just everywhere. So you'll find nothing in terms of employee engagement that actually helps employees, meaning something like this podcast helping you. You won't find that topic. You'll find employers and consultants pushing their information out there, doing a lot of SEO to push employee engagement like it meant something. But with employee engagement at 30%, I mean, they're obviously not doing their job. They're making a lot of money, but they're not doing their job. So how do you build trust in the AI processes now imposed upon you,
Starting point is 00:48:17 making things transparent and explainable to you? Well, that's not gonna happen. Why would employers wanna do that? The next topic is involving employees. I've been talking about that for a while, involving employees in the performance review process, but they don't want to do that either. No input from employees other than making a widget better.
Starting point is 00:48:38 That's fine. But no involvement in the AI implementation process. Why would employers want to do that? Ethical and responsible AI, with AI being so new, is a topic that maybe policymakers will think about implementing. But try typing into an AI like Gemini
Starting point is 00:49:07 and asking it for anything of a sexual nature, a picture, an image; it won't do that. So there's some level of concern and holding back, whether the companies themselves are implementing it, or maybe there's a little bit of policymaking as new federal sanctions come out, but I don't know yet. So trust in AI,
Starting point is 00:49:32 I don't think there's going to be any employee trust for a long time. It's more going to be the opposite: run for the hills, folks, this shit's happening to you in real time. And the problem, my sincerest thought about this, is that no one is thinking about it. No employee is actively thinking that this is going to happen around them. That's the purpose of this podcast episode, because it's happening now.
Starting point is 00:49:59 I've only touched upon a small scale of what's happening to you. I gave you a broad overview, but it's already here. And so what are you going to do to protect yourself? I've mentioned some common concerns, just data privacy, et cetera. But don't put your data out there. And I know you want to be social on social media, but it's going to come around to kick you in the ass hard, and it's going to affect your job. Employers don't care.
Starting point is 00:50:31 They're just going to want to have this machine learn everything it possibly can, because it's doing that now. All right, so I won't talk any further. I think you got a hard taste of what we're seeing. I'll talk about it more as I research further into this, but it's here, and it's going to come across in sideways manners, and it's going to affect you and your job.
Starting point is 00:50:51 And I'll try to bring these things to light so you're aware of them, and I'm on the front lines because I'm concerned about it. And who else is on the front line with me? The federal government and the courts. Somebody else on the front line with us? You. But we don't talk about you, because I don't want to talk about you, the employee.
Starting point is 00:51:10 But you're here because you're listening. And if you're pissed off, freaked out, whatever, cause I'm bringing this to your attention, then good. And now you have more information to deal with to protect yourself. So with that, have a good week. I'll talk to you soon. Thank you. If you like the Employee Survival Guide, I'd really encourage you to leave a review. We
Starting point is 00:51:33 try really hard to produce information for you that's informative, that's timely, and that you can actually use to solve problems on your own at your employment. So if you'd like to leave a review anywhere you listen to our podcast, please do so. And leave five stars, because anything less than five is really not as good, right? I'll keep it up. I'll keep the standards up. I'll keep the information flowing at you. If you'd like to send me an email and ask me a question, I'll actually review it and
Starting point is 00:52:00 post it on there. You can send it to mcaru at capclaw.com.
