Simply Solving Cyber

Simply Solving Cyber - Shelley Jackson

June 28, 2023 Aaron Pritz

What risks does the rapidly increasing use of AI bring to the table?

Aaron Pritz and Cody Rivers sat down with Shelley Jackson, Partner at Krieg DeVault LLP, to chat about AI risk and what you can do about it.

Aaron Pritz:

Thanks for tuning into Simply Solving Cyber. I'm Aaron Pritz.

Cody Rivers:

And I'm Cody Rivers.

Aaron Pritz:

And today we're here with Shelley Jackson. She is a partner at Krieg DeVault. She's the chair of the Labor and Employment Practice, heavily focused on privacy and security, and newly appointed chair of the AI Task Force. We're excited to dive into that and hear your perspectives as a lawyer in that space. So welcome, Shelley.

Shelley Jackson:

Thank you, Aaron. Thank you, Cody. I'm excited to be here.

Aaron Pritz:

Awesome. So let's get right into it with AI. So maybe from a law practice standpoint, how is AI or related technology coming up in your practice? Like what questions are coming your way, or what issues are you having to deal with?

Shelley Jackson:

Well, I would say just like any other organization, both as a law firm and then also as a business, it seems that AI is suddenly everywhere, even though it has been coming for many years. This is not something entirely new, but it feels like it's really exploded and advanced at a level that we haven't seen previously. And so we have lots of clients asking us questions, and those questions can range across a variety of things. They can be business-related questions like intellectual property rights as it relates to generative AI, for example. I get a lot of questions, because I am in the employment law space, about data privacy and security as it relates to artificial intelligence that may be used to make employment-based decisions. We have some guidance coming out from some of the regulatory bodies; the Equal Employment Opportunity Commission has recently issued guidance about using algorithmic decision-making tools, including artificial intelligence, in making employment decisions, so we're seeing a lot of that. I think also just entering into agreements and contracting with different organizations that are providing, say, software-as-a-service technology: they're incorporating AI in various forms, whether it's generative or not generative. We're doing some deep dives into what exactly this technology is doing, what product is coming out of it, and what risks or benefits are attached to that product. And that's really a consideration, like I said, not just for clients, but it's part of the business environment that we're all finding ourselves in.

Aaron Pritz:

Yeah. So linked to employment screening or making offers, I remember probably almost 20 years ago, when I was in an internal audit function at a corporation, doing some audits of HR providers that provided screening tools. So AI has changed, but algorithms and weeding through thousands or hundreds of thousands of resumes isn't new. I'm just curious what's different in that space and what's changing. What do people need to think about, or do differently, or be aware of?

Shelley Jackson:

Well, I think one of the things that you've seen in that space is particularly an explosion of technological capabilities. Those capabilities allow employers to get more bang for the buck. In other words, I can go through a larger group of resumes. There are the basic sorts of screening technologies that might be used to screen out applicants based on keyword searches or things like that; that's been around for a really long time. Artificial intelligence is coming in to do things like profiling potential candidates, creating the profile of the ideal candidate, and taking that one step further by creating new content based on harvesting information or pulling that data in, in different formats. And I think that's one of the areas that's really important for employers to understand. On the outside, it may sound like just a slick new feature that can be added, but it may be adding, let's say, an artificial intelligence component where it is actually creating new decision making about a candidate, or perceiving information about the role that's open, say profiling a particular role. Some of those behaviors, and some of the data that it can be collecting, matter, especially if it's about an individual, let's say a candidate or a current employee.

Cody Rivers:

Sure.

Shelley Jackson:

There could be regulatory impacts in terms of how is that information being collected and used? Is it creating issues of, for example, reaching discriminatory results?

Cody Rivers:

Yeah

Shelley Jackson:

Not intentionally. The vast majority of these tools are not set up to create a discriminatory impact, but it is what we would call a disparate impact, meaning that there's not a specific discriminatory action that's occurring, but because of the way an algorithm or a decision-making process or artificial intelligence is employed, you're getting to a result that creates a disparate impact, or an increased impact, on one particular group that may be protected under applicable law. So that's one of the things that we're seeing.
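
(For readers who want to see how that kind of unintended impact is commonly quantified: one long-standing yardstick is the EEOC's "four-fifths rule," which compares each group's selection rate to the highest group's rate. The sketch below is a minimal illustration with made-up group names and numbers; it is not a tool or example discussed in the episode.)

# Minimal illustrative sketch of the four-fifths rule (assumed groups/counts).
def selection_rates(outcomes):
    """outcomes maps group name -> (number selected, number of applicants)."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """True per group if its selection rate is at least 80% of the top group's rate."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: (rate / top_rate) >= threshold for group, rate in rates.items()}

# Hypothetical results from an automated resume-screening step.
example = {
    "group_a": (48, 120),  # 40% selection rate
    "group_b": (18, 75),   # 24% selection rate -> 0.6 of the top rate
}
print(four_fifths_check(example))  # {'group_a': True, 'group_b': False}: possible disparate impact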

Aaron Pritz:

So before we move on from this topic, let's flip the tables, 'cause we've talked about employers. I've heard in the last two months from senior leaders at two companies that they've started to see an influx of very similarly worded resumes and cover letters and communications, where I'm assuming they inputted their resume and said, make this better for company A. And in both cases I heard and discussed the outcome, where they're like, flush 'em all. So I guess, from your perspective, less of a legal issue, but on the employee side, as people are gravitating to how do I make this tool work the best for me, is there any impact from a legal standpoint as you start to represent yourself through what a bot is saying about you?

Shelley Jackson:

Absolutely, that's a great question, and I think it's something we all have to individually wrestle with, not even just as organizations. I think the bottom line is we have to remember what artificial intelligence and other technologies are, and what they are not. They are not a substitute for human thinking, human discernment, and human intelligence. They are artificial by designation. They are essentially a really sophisticated computer program that's been taught, or that can learn.

Aaron Pritz:

To predict the right words, not even thoughts.

Shelley Jackson:

Exactly. It has the perception of being very sophisticated, and in many cases it is. But what we also find is these pockets of inaccuracies, or places where the data somehow fails the process. In other words, it's pulled the wrong data, it's relying on the wrong data, or it doesn't have the data, so it makes something up. This extends beyond candidates for a job; I think it applies to professionals in general. You have to understand what it is that's being created, and whether you have a process to ensure that it's accurate. There are a couple of higher-profile cases right now where lawyers in my profession have submitted things. In one case, an attorney submitted a brief that was written by ChatGPT, and ChatGPT, in its effort to give that person what they wanted, created fake case law. And then it blew up and became a little sensationalized news story.

Aaron Pritz:

The world's worst shortcut.

Shelley Jackson:

That's exactly right. When you're moving quickly through space, whether you're trying to submit lots of different applications for jobs, you're trying to screen lots of applicants, or you're trying to get work product out the door for whatever your profession is, it can be a really great tool, but it's just that: it's a tool. It is not a substitute for the human element, which I think is a really key part of this. And if you do any work in this space, and I imagine you see this all the time as well, there's not a substitute for a human being getting in there and using their expertise and their skill and their training and their education and whatever it is they've put together to create their professional abilities. There's not a substitute for that. You can roughly approximate it, you can help the process, you can help prepare a draft, for example, but there is no substitute for that individualized feedback from a human perspective.

Cody Rivers:

Yeah. I'm thinking about putting "Cody is six foot tall" on a lot of...

Aaron Pritz:

Smart enough to say, uh, it's Cody. I don't think so.

Cody Rivers:

I don't think it is, man. We'll just see what happens.

Aaron Pritz:

Yeah. The last thing on GPT and the fails: I went to a healthcare conference a couple months ago and got a lot of follow-up emails just from being there.

Cody Rivers:

This is great.

Aaron Pritz:

Uh, and one of the articles had HIPAA spelled like "hippo," which, if you're in privacy, is about the worst thing you can do to blow away your credibility. I had somebody tell me the female version of hippo is "hippa," like Spanish, which is not Spanish, but anyway, it was spelled wrong throughout. And after reading about half of it, I'm like, this is clearly written by a ChatGPT-like thing. And to your point, Shelley, no human who was an expert looked at it to say, hey, this is wrong, or the basic spelling of the thing I'm trying to share thought leadership on is way off. So I emailed the company, and they didn't acknowledge my feedback, but they updated it within about 30 minutes.

Cody Rivers:

Yeah, they did send a follow up.

Shelley Jackson:

That was quick marking mode.

Cody Rivers:

That was great. That was great.

Shelley Jackson:

It's impressive quickness.

Cody Rivers:

I have a question going back to employees and policies and them keeping up with AI. What do you think around there as far as employees using it, employers using it, but then you've got these old policies that were written years ago and are hopefully used, but not always used. So what are you seeing there?

Shelley Jackson:

Well, I think one of the things that happens is that we have technology advancing at such a rapid pace. This is not new to the AI discussion; this is part of, I think, just generally managing risk. Your policies, of course, are only as good as either the paper they're written on or the database that they sit in if they're not living, breathing parts of your organization. So when you're bringing in, let's say, a new technology, it's really important to think about whether we have policies in place that govern, or that give instruction as to, how we will use this technology, how we will make this technology work for us, and what the limitations are. There's a lot of really sophisticated, great opportunity out there to use technology in a way that will really leverage information, leverage data, leverage time. It helps you create more time. But I think the policy piece of it is just being intentional: do we have the framework set up to use this great new technology, and do we understand the risks that come with that technology? I feel like with artificial intelligence, and technology in general, the two run in parallel: there's huge risk, but there's also fantastic opportunity. And as I view it, this is not "do we want to engage with AI?" This is just "how do we do it?"

Cody Rivers:

Yeah.

Shelley Jackson:

And how do we grow our organization? How do we update our policies? Do we make sure that our employees are trained? Having training sessions, doing tabletop exercises when you're looking at risk, putting together scenarios for people, testing people, giving them some sort of opportunity to exercise those decision-making skills that can be driven by policy.

Cody Rivers:

Yeah.

Shelley Jackson:

And then also staying up to date, because at the same time you've got all of this innovation going on on the technology side, you've got lots of legal updates. You've got regulatory items coming up, and we have a new privacy law here in Indiana that just got passed and will take effect in the next couple of years; I think it goes into effect in 2026. So we're gearing up for the future. And again, there are ways to do this, and no one's gonna be perfect at it. These are big issues and big items, but I do think being thoughtful and proactive can really head off a lot of potential challenges along the way, just because you're staying in front of it and you're thinking, okay, great, we've got this great new technology. Let's figure out how it works for us. But let's also make sure that our work population, our employees, are empowered to use it. It's an empowerment issue. Creating consistent policies that track the current legal landscape and the regulatory landscape is an empowerment tool for employees, because it gives them the boundaries they need to effectively leverage this very powerful technology that's available to them now.

Cody Rivers:

Very cool, very cool.

Aaron Pritz:

I'd like to pivot to a new segment of our show called, and let's say it together: fun facts. Fun facts. Yes. So let's start out easy. What was the first concert you ever went to?

Cody Rivers:

Live concert?

Shelley Jackson:

I was thinking about this earlier. I don't know if it was Debbie Gibson or Public Enemy, which is an interesting brand.

Aaron Pritz:

Those are the bookends of the options that you can have. It's great. And like we talked about when we warned you that the segment was coming, can you please recite one of the choruses from either song? No, I'm just kidding. I won't put you on the spot. That's okay. Let's think, any other fun facts? What do you do for hobbies when you're with family?

Shelley Jackson:

Well, I spend a lot of time with my family. We are crazy about our pets. We have a dog and a cat, and I enjoy spending time with the dog and doing stuff with him. But recently, as a family, none of us are golfers, but we decided that we would like to become golfers. Nice. And so the four of us, we have two adult sons, and my husband and I, we go out onto the golf course and we are terrible at it. We choose the last tee time, and the golf course pro shop pro has taken, I think, pity on us. And so he's very kind to us when we come in. He helps get us set up with the golf clubs that we need and any of the materials. We did that all last summer and, frankly, we did it in a very low-pressure environment, and it turned into this great bonding opportunity for our family. That's awesome. And now we like to do it every few weeks in the summer and just spend time together. Nice. And not worry about, well, we do get frustrated.

Aaron Pritz:

Everyone does. It's golf.

Shelley Jackson:

Yes, we do get frustrated, but it's okay because we're all together and we're driving the golf carts and we're having a good time.

Cody Rivers:

That's like a sport where you can have 17 bad holes, but you get one good shot and you're like, I could do this.

Shelley Jackson:

We've seen some questionable techniques too. One of my sons decided he was gonna do one-handed, one-armed golf swinging, which, okay, is a thing. I did not know this.

Cody Rivers:

Really?

Shelley Jackson:

Apparently

Cody Rivers:

I didn't know this either.

Aaron Pritz:

I can't imagine that's gonna help my game, yes. Uh, give us a fun fact that maybe 99% of your colleagues would've never heard.

Shelley Jackson:

Well, I have like one recyclable fun fact that they probably all have heard. Okay. So if they hear it, they'll be like, Shelley's using her fun fact again. But my one fun fact that I use is for when you're in an icebreaking scenario, and they tell you that you have to say something interesting about yourself, but you have like 30 seconds to figure out what that is.

Cody Rivers:

Yeah.

Shelley Jackson:

I always tell the story that back in the day, and this will age me, there were these commercials for OnStar, which was a very early form of, I mean, it's still there, but you basically could call for emergency help when you were in your car.

Aaron Pritz:

Yep.

Shelley Jackson:

Yeah. And we had OnStar in one of our vehicles, and I actually had a collision involving a deer with my grandson, who was about one at the time, in the car. So it was really scary. Everyone was fine. I think there was a little damage to the car, but everyone was fine. Then, maybe a couple months later, OnStar contacted me and said, we wanna do a commercial and we have a recording of it. Would you like to hear it?

Aaron Pritz:

Oh, was that an instant yes for you? Did you have to reflect a bit?

Shelley Jackson:

The recording was very embarrassing.

Aaron Pritz:

Okay.

Shelley Jackson:

Because it was semi-panicked. I've got my little kid in the background, I've just hit a deer, I'm worried about the deer.

Aaron Pritz:

Did you say, show me the money, and then press forward?

Shelley Jackson:

Well, we talked to them and they were glad to put together a commercial that's called Deer Damage. So that's the name of the commercial. It aired a few times.

Aaron Pritz:

We're going to YouTube right after this.

Shelley Jackson:

I have tried to find it. I haven't. I have a recording somewhere in my house.

Cody Rivers:

Have you asked ChatGPT yet? Have you asked the AI to find it?

Shelley Jackson:

I have not asked ChatGPT, and I won't.

Aaron Pritz:

So from a legal standpoint, at the point where you had the accident, is there now a deer crossing sign? Are there any precautions that would set the right framework of prevention?

Shelley Jackson:

To my knowledge, I do not believe there is a deer sign in the area.

Aaron Pritz:

Is there a lawsuit opportunity? Could you get paid twice? I'm just kidding. This is not your professional opinion.

Shelley Jackson:

It's just that this was like two decades ago.

Aaron Pritz:

Yeah. What's the term? The statute of limitations. The statute of limitations might be up.

Shelley Jackson:

It expired many years ago.

Cody Rivers:

We got it. So we've got all these fun things. We've got the golfing family, we've got the OnStar story. Give us the Shelley story. How did you get into this practice? What did you enjoy? You know, kind of your story. Well, let's start there.

Aaron Pritz:

Yeah. Both the attorney side and the privacy and technology side.

Cody Rivers:

Yeah.

Shelley Jackson:

Okay. So the attorney side, which came before the privacy side: I was actually a teacher. I was a high school English teacher and loved my students and loved teaching English, but I knew at some point that I wanted to go to law school.

Cody Rivers:

Yep.

Shelley Jackson:

At one point it made sense for our family to make the decision to take the leap and try law school. So I actually took a year's leave of absence from my teaching position, with the idea that if law school was really terrible, I could just come back and say, boy, I dodged that bullet. But I went into my first year and I really just have never looked back. I've been in the legal community now, oh gosh, it will be like 17 years. And so I've had an opportunity to build a career over that time that has spanned a lot of different things. But as it relates to privacy and security, what I found is that as a young attorney I intersected with it, maybe I would say, incidentally. In other words, I did some med mal defense and some pharmaceutical defense very early in my career, so we intersected with it in that it was a part of the litigation; we needed to be aware of it for things like producing documents and reviewing information. So it started in a really incidental way. But what I found is that over time I really enjoyed helping clients navigate decisions about information, personal information, and it expanded from just healthcare-related items into a broader array of consumer data, employee data, all kinds of different data. And so I started to develop an interest in it, and at one point I just decided that it was time. I'd been in a law firm for several years and decided that I'd like to explore that opportunity, and I became a chief privacy officer. So I decided to leave external legal practice in a law firm.

Cody Rivers:

Yeah.

Shelley Jackson:

And be in-house, with one dedicated client. In that role I was both an assistant general counsel who had responsibility over human resources and litigation, and the chief privacy officer, which was a global role. It was a really interesting, exciting, fun role. That's where I met Aaron and Tim, and some of your team members. So that's how I first got to meet Reveal Risk. But it was a great experience, and ever since, there's been a couple of different things. First of all, it's about how we create opportunities to handle information and data in a respectful manner, and it's hard. It's about being respectful in the way that we use information that we have about people, but it's also a highly regulated area. It's an interesting area. Sometimes people's eyes glaze over, right? Privacy. But if you dive into it, it's really interesting how all of the different frameworks work together. It's an up-and-coming area. It's a rapidly changing area. We have new laws all the time.

Cody Rivers:

Yeah.

Shelley Jackson:

And then, because I spend so much of my time in the employment space, privacy is so closely connected: how you treat employee data, how you deal with candidate data, how you deal with your employee data and former employees, but also how those employees are empowered to treat data in a compliant manner and in a way that really contemplates the best use of that data.

Cody Rivers:

Yes.

Aaron Pritz:

So what happens on the AI task force? That was done with no filter, by the way. What happens on the AI task force, and what are some of the top tips or good conversations you're having with clients about AI?

Shelley Jackson:

Well, we're looking for someone to do like voiceovers as we begin our meetings.

Aaron Pritz:

Sign me up.

Cody Rivers:

He also plays the drums, so he can put in this little snare beat, have...

Shelley Jackson:

Like a whole entertainment segment. So, one of the things we discovered at our firm, and I think this is happening everywhere, is we're doing a ton of this work. Our clients have really interesting, sometimes very complex, questions and issues that arise with respect to managing artificial intelligence and making sure that they are thinking about all angles of how to leverage, and then also manage risk as it relates to, artificial intelligence. So the idea of our task force is simply to make sure that we are serving clients in the areas that are important to them, and also in the areas that maybe they're not as aware of, but where we can help bring awareness and help them manage risk. So much of what I do in a day is managing risk, right? There's the legal compliance piece, but a lot of times we know the legal compliance piece; we just need to figure out how to effectively manage that risk. How much of our resources do we devote to X activity versus Y activity? How much of a benefit do we get from leveraging this new AI technology versus simply using what we already have, and how much risk are we creating here? So we're seeing it in every area. I see it a lot in employment and human resources issues. Certainly healthcare is a big place, because that's a highly regulated area and you've got lots of apps that can tell you all kinds of things about you from a health perspective. You gotta think about how you're using that data, how that data is being processed, and how the inputs are set up to create any sort of generative AI outcome. If they're generating new content, how is that being pulled? What are the security and privacy controls? Then there are business negotiations and discussions, and who owns AI output. There are some lawsuits pending right now about who owns something that's been generated through AI when it was created based on examples of real artists' work, and whether it's a derivative work if that happens. So there's a lot of discussion happening in that area.

Aaron Pritz:

So what I'm hearing, or this is my kind of wrap-up of that, is you've asked AI to come up with the agenda for your task force.

Shelley Jackson:

That's it. No, no. That's not what it is.

Aaron Pritz:

client demand.

Shelley Jackson:

I will not lie, though: in any meeting that involves AI, it is not uncommon for someone to utilize AI in some way, just to use it as an example.

Cody Rivers:

Yeah.

Shelley Jackson:

Because again, we are a business that is navigating really sophisticated new technology that frequently uses AI, and thinking about, as lawyers, what is our role in exercising that AI and using it. So I think it's very client-driven. It's based on what we're seeing from our clients, but it's also about what value we can provide to clients by maybe looking around a corner that they're not yet experiencing, and staying ahead of it rather than all of us running to keep up with it.

Aaron Pritz:

Maybe we've got time for one more question. Give us some of your top tips for AI, from the risks that are out there or the guidance that you're giving.

Shelley Jackson:

Yeah. Well, especially framing it for you, since you do so much in the cybersecurity space: when I think about it from a data privacy and data security perspective, there's so much sophistication to what is out there. There are so many new opportunities, and they are incredible. Again, this is not "do we utilize AI?" This is not "do we utilize technology?" It's how do we do it, and how do we manage risk? So when I think of the big-picture issues, number one: what is the legal and regulatory framework that you're in, and are you doing something, or changing something about what you're doing, that could impact that?

Cody Rivers:

Yeah.

Shelley Jackson:

So, you know, if you're on top of that, that's gonna help manage your risk at the outset, rather than implementing something that seems super cool and then going back and thinking, ooh, I wish I would've known that this was an issue. A good example of that is the EEOC regulating the use of decision making as it relates to candidates and employees, and holding employers responsible for decisions that AI makes. If that decision turns out to have a discriminatory impact, even if it's not intended, if it's just an adverse impact created by the decision-making process or the generative AI, that can result in some additional risk. But if you think about it on the front end and take a look before you're using the technology, you can help manage risk. Second of all, I think it's just to have conversations, and I always believe employees are your number one, your best weapon, right? They are your best advocates. They are out there doing the work of the business day to day. And so think about what it is that your employees need from a business standpoint. What do employees need to feel empowered? Whether that is understanding policies, staying ahead of policy development, clear communication of expectations, or giving employees a voice to say, hey, I see this great opportunity here, can we explore this, and sort of outlining what those risks are. So I think that's the second. And then the third, I think, is just to be curious. There's so much going on and there are so many opportunities here, and I think being curious, doing a lot of reading, and staying up to date on what the opportunities are in your industry matters. There are so many opportunities to learn, and you don't have to learn the hard lessons.

Cody Rivers:

Yeah.

Shelley Jackson:

You can learn from other people who learned the hard lessons, just by being engaged in that industry or community.

Aaron Pritz:

So with AI: be careful about what you feed it, what the implications are, and how you're using what you're getting back.

Shelley Jackson:

Yeah.

Aaron Pritz:

I almost think that's like an analogy to raising kids. Like, I accidentally said a bad word. What did I feed my kids?

Cody Rivers:

Whoops.

Aaron Pritz:

What were the implications? Well, we won't go into that. And then what did I get back from them? Well, a lot of crap. So that wasn't the word I used either. All right. Well, thanks for coming on, Shelley. It's been great to have you here and discuss these topics. Really appreciate you coming in, and talk to you soon.

Shelley Jackson:

My pleasure. Thank you so much.

Cody Rivers:

Awesome. Thank you. Bye.