
Simplifying Cyber
This show features an interactive discussion, expert hosts, and guests focused on solving cybersecurity and privacy challenges in innovative and creative ways. Our goal is for our audience to learn and discover real, tangible, usable ideas that don't require a huge budget to accomplish. Shows like “How It’s Made” have become popular because they explain complicated or largely unknown things in easy terms. This show brings the human element to cybersecurity and privacy.
Navigating AI & Legal in Cyber with Tim Sewell
Artificial intelligence has firmly established itself at the forefront of the cybersecurity agenda, creating both unprecedented opportunities and complex challenges for security leaders. In this eye-opening conversation with cybersecurity veteran Tim Sewell, we dive deep into the realities of implementing effective AI governance and security practices in today's rapidly evolving threat landscape.
Tim shares invaluable insights on how AI has fundamentally transformed the cybersecurity domain, comparing this shift to the rise of desktop computing or cloud adoption. He cautions against the "wild west" approach to AI governance that many organizations have inadvertently embraced, where tools are deployed without proper oversight or awareness. Most concerning is his observation that AI is increasingly being integrated into existing business processes by vendors or partners without explicit notification, creating dangerous blind spots in security programs.
The discussion reveals surprising developments in third-party risk management, where AI tools now handle everything from vendor questionnaires to SOC 2 report analysis. We explore the troubling reality of "AI sending questionnaires to AI that is responding to questionnaires," raising critical questions about trust and verification in our increasingly automated security ecosystem. Tim provides practical guidance for security teams on transparency in AI usage, particularly when making decisions that may later require justification in legal proceedings.
Despite the focus on advanced AI capabilities, Tim emphasizes the continued importance of security fundamentals. He notes that sophisticated nation-state actors are increasingly targeting basic vulnerabilities like buffer overflows and cross-site scripting, especially in critical infrastructure with legacy technologies. For new security leaders, his advice is refreshingly straightforward: identify what you're protecting, assess existing controls, and practice your incident response.
Listen now for essential insights on navigating the AI security landscape, from governance frameworks to practical implementation strategies that balance innovation with risk management. Whether you're a CISO looking to update your program or a security professional wanting to stay ahead of emerging threats, this episode delivers actionable knowledge for securing your organization in the age of artificial intelligence.
Speaker 1:All right, thanks for tuning in to Simplifying Cyber. I'm Aaron Pritz, I'm Cody Rivers, and I'm Todd Wilkinson, and today we're joined by Tim Sewell, who I've known for probably over 10 years now: a longtime cybersecurity practitioner and leader with really great depth of knowledge and experience in aerospace and defense, healthcare, pharma, and consulting. We're excited to have three hosts today; with Tim, it's the Dynamic Four. And, yeah, we're excited to have a great conversation on some of the future of cybersecurity, some of Tim's insights, and some recent evolution of thinking from RSA. So, Tim, welcome to the show.
Speaker 2:Thanks, it's great to be here, nice to be back.
Speaker 1:Awesome. So let's start out with kind of a big and broad question. Thinking about CISOs and some of the insights that you learned and discussed at RSA: what are some of the top cyber program opportunities that you see right now for leaders, and what's top of mind for you?
Speaker 2:Yeah, so definitely AI. The explosion of generative models, artificial intelligence tools, and their increasing use in the environment has to be top of mind for pretty much everybody, cybersecurity or not. A few additional topics: I think we have a rapidly changing regulatory environment in cybersecurity that's somewhat unprecedented and requires a little more focus than we've historically given it. And then I think we have some interesting challenges on the technical side, in terms of the rise of quantum cryptography and the use of deepfakes and other kinds of AI attack technologies. And then, how do we protect this infrastructure that we're building to handle AI and the future of computing?
Speaker 1:That's a lot of great topics. Let's start, maybe, with AI and governance. I know everyone knows AI by now; if you're sleeping under a rock, maybe you don't know of AI, or your company is not working on it. But when you say AI governance, how do you set that up? What is that for you? And maybe what are some of the gaps that warrant having governance to moderate the progress?
Speaker 2:Yeah, I think with governance in AI, if you'd asked me a couple of years ago, when the large language models really started, we talked to a lot of folks and they said, yeah, we've got an AI policy, we've got governance, we're good to go. And what we've learned since then is that a policy saying don't put sensitive data in AI is not particularly practical, nor is it really solving the underlying issue of how we can use these tools holistically. AI has such broad reach into the enterprise, so many different use cases, that it really requires the organization to come together in kind of a new way. It's almost a new wave of compute: similar to desktop computers, similar to the rise of cloud, and now we have the rise of AI. It's truly that level of transformation for the business. So you've got to have all the stakeholders at the table, and that takes time, that takes effort, and it's not necessarily the fun work that people want to do. But it does enable the fun that we can all have with AI.
Speaker 1:Yeah, so maybe balancing your points on the progress, and then minimizing the mistakes that can defeat or slow progress: what are some of the challenges you've seen, maybe AI not being used right or not being governed well, that justify having that kind of governance layer in place?
Speaker 2:Yeah. So I think in some organizations AI governance is kind of the wild west and people are just using it willy-nilly. For the organizations that are trying to get their arms around governance, I think there are some really good efforts going on out there, but one of the common pitfalls is they get stuck in this idea of use cases for AI. I think it's really important to understand what use case you're trying to bring AI in for. What that model doesn't cover, for most organizations, is where you've got a tool, you've got a process that's already in place, but now AI has been introduced into that process, either by the existing technical solution or by one of the partners involved in that process.
Speaker 2:And now you've got AI in this business process, or in this flow, that has been working just fine for however long it's been in place. If you haven't put governance around that kind of change, if you're not reviewing your processes and your tech stack on a very regular basis to find those changes, you're using AI in processes where you don't know you're using it. So you've got AI use cases you're unaware of.
Speaker 4:Good stuff. So that's a lot of AI talk, a lot of stuff here. Say I'm a newer CISO, or I've been a CISO for a while, and AI isn't one of my big topics or areas of deep knowledge. How high on the risk register am I putting AI? And what are some foundational things I can start putting in place, and when do I start addressing them?
Speaker 2:I would say AI needs to be very near the top, if not the top, for most organizations today that have a cybersecurity program. I think there are some organizations that don't have a program, and there are some foundational pieces that I might prioritize higher in those cases. But for organizations that have a security program, given the way this is transforming the attack surface, the threat landscape, and the business, it's got to be top of mind, top of list.
Speaker 4:Okay. And then kind of going further on that thought, and this is a question for you on certain companies: when is it relevant to have a dedicated AI security person, and when can my existing security team absorb those responsibilities? Maybe the idea of both, and the pros and cons of each. It's probably a common question people have these days.
Speaker 2:Yeah, I think it varies for the organization. I would say certainly any organization that is putting out a product that contains AI, or is using AI to deliver their core business function, has a strong argument to have some dedicated AI cybersecurity resource. I think large enterprises also have a strong case for that, because their exposure to AI is so enormous, just because of their size and the number of folks they've got in the organization using these tools on a regular basis. Of course, it gets a little harder for smaller organizations where resources are constrained. People have to wear multiple hats. It's hard to get a dedicated anything in those conditions.
Speaker 2:But I think another challenge with this is that the skill set for how to deal with security and AI is not broadly dispersed or distributed through cyber practitioners. A lot of us in the field are still learning; there's a fairly steep curve at some points. If you've got an organization where that's the case, where you don't have a lot of cybersecurity expertise for AI in-house already, again, I think that builds a strong case, because it's a significant addition to the estate that you have to deal with. You may need those resources sooner than you think.
Speaker 3:Yeah, Tim, you made a comment there that there are tools and processes that companies have been using, and now AI may be introduced to them just as part of using that product.
Speaker 3:So yesterday we may have had a decision tree where the output to the consumer was an approval or a rejection, or something along that line. That was based on a decision tree we made; we understood the forks in the road because we were part of making that calculation. But now, all of a sudden, AI is in the middle of that, making decisions. So there's a clear path where companies are going to have to decide, or interject, when to say: are we going to trust the decision that AI is going to make, and how do we address that? I think that's a big question that companies are going to have to wrangle with, or are wrangling with right now. But let's pivot that inside: those same things are happening with security tools and security teams, so they're starting to get that exposure. Do you have any advice on a couple of key steps the inward-facing teams may need to take to either adopt it or start to dip their toes in a little more aggressively?
Speaker 2:Yes, I do, and I think you're absolutely right. The internal use of AI by the security team creates additional challenges. The first thing I would recommend is: know where you're using AI in any of your information security workflows, for a couple of different reasons. One, as you said, you've got to be able to justify the decisions that you're making, and from an InfoSec perspective that's really important, because you're dealing with very precise, very binary truths or fictions. There are cases that have gone to litigation where solid forensic evidence has been tossed out because at some point in that process AI was used, but it was not known to have been used or called out by the process. So if you as a security team can't say, I used AI here for this reason, and here's how I justify that decision, there are instances where things are being tossed out. And if that's happening, you're going to see that ripple further into legislation and regulation and other places. You've got to be transparent about where you're using the AI. The second thing I would say is you've got to make sure you're dealing with AI that's going to deliver some value versus become a distraction. There are lots of really fun ways you could use AI in cybersecurity, but you've got to go back to the foundation of: how is this helping me reduce my organization's risk? Can I go back to my stakeholders, my leaders, my board of directors with a clear return on investment for why I'm using this AI tool in this way?
Speaker 2:One of the risks with that is it's really easy to go back to cost: I'm saving two FTEs of security analyst time by using AI here. Well, that might be true this year, because AI is very early and we're still seeing the early pricing. If you remember Uber in 2015, your ride across town was like 10 bucks. That same ride is probably 70 or 80 now, because we're used to the convenience. I think we've got to anticipate a similar shift in AI pricing: today you might cover a $100,000 analyst by using a $20 AI subscription, but that's not going to stay the case forever, and as a CISO, as a security leader, you need to be anticipating that shift. So your value for using AI needs to go beyond I'm reducing headcount or I'm reducing OPEX. It needs to go to I'm enabling new use cases, I'm blocking threats better, I'm doing more with the resources I have, versus the simple headcount reduction.
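To make Tim's transparency point concrete, here is a minimal sketch of the kind of AI-usage decision log a security team might keep so that every AI touchpoint can be justified later. This is an illustration only; the workflow names, fields, and file path are hypothetical, not taken from any specific tool.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    workflow: str        # e.g., "forensic triage" (hypothetical example)
    model: str           # which AI model or tool produced the output
    purpose: str         # why AI was used at this step
    human_reviewer: str  # who validated the AI output before it was relied on
    justification: str   # rationale that could be cited in a legal proceeding

def log_ai_usage(record: AIUsageRecord, path: str = "ai_usage_log.jsonl") -> None:
    # Append a timestamped entry so every AI touchpoint stays traceable.
    entry = asdict(record)
    entry["logged_at"] = datetime.now(timezone.utc).isoformat()
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_usage(AIUsageRecord(
    workflow="forensic triage",
    model="internal-llm-v1",
    purpose="summarize disk-image artifacts for analyst review",
    human_reviewer="j.doe",
    justification="summary only; findings re-verified against raw evidence",
))

The format matters less than the habit: if AI touched a decision, the record says where, why, and who verified it.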
Speaker 1:On that note, are there any compelling areas within cyber where you're seeing true, valid traction, versus just buzz and agents applied to everything from a tooling standpoint? Where are the hotspots for you with AI in cyber programs?
Speaker 2:Yeah, it kind of depends on how you describe a hotspot. One area where I know there's a ton of AI being used today in cyber programs is the whole area of third-party risk management. This is an area of cybersecurity where we've helped tons of people over the years build programs, so we understand it pretty well.
Speaker 2:There's a lot of information that's generated in this process, both by the vendor and by the consumer. We do all these surveys, we do SOC 2 reports, we do external scans, and we collect all this data. We ended up with a data problem, and it sounded like a great use case for AI. So now we have AI tools that will go and summarize SOC 2 reports, help us analyze these questionnaires, and help send them out. And then, on the other side, we now have the situation where the organizations that receive all of those questionnaires are saying, I can't keep up with these questionnaires; I'm going to use AI to fill them out. So we've got tools that will go through your trust center resources and try to auto-complete the questionnaires you've got coming in. So now you've got AI sending questionnaires to AI that is responding to questionnaires. I think there's a tremendous amount of traction in third-party risk; the value question has yet to be determined.
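As an illustration of the auto-completion pattern Tim describes, and of one way to keep a human in the loop, here is a minimal sketch of questionnaire answer drafting. It uses simple string matching against a pre-approved answer bank rather than a real AI model, and all questions and answers are hypothetical.

import difflib

# Hypothetical bank of pre-approved answers (the "trust center" content).
APPROVED_ANSWERS = {
    "do you encrypt data at rest": "Yes, AES-256 for all production data stores.",
    "where is customer data stored": "In US-region cloud data centers only.",
}

def draft_answer(question: str, threshold: float = 0.6) -> dict:
    # Match the incoming question to the closest approved answer.
    key = question.lower().strip(" ?")
    match = difflib.get_close_matches(key, list(APPROVED_ANSWERS), n=1, cutoff=0.0)
    score = difflib.SequenceMatcher(None, key, match[0]).ratio() if match else 0.0
    return {
        "question": question,
        "draft": APPROVED_ANSWERS[match[0]] if match and score >= threshold else None,
        "confidence": round(score, 2),
        "needs_human_review": True,  # nothing is auto-sent without verification
    }

print(draft_answer("Do you encrypt data at rest?"))
print(draft_answer("Describe your incident response process."))

The review flag is the point: a drafted answer sent with no human verification is exactly the trust deficit discussed below.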
Speaker 1:I think there's a smart way to do it, and I'm curious about your thoughts on this. A few weeks ago I was at an event where a CEO was kind of humble-bragging about using AI to complete questionnaires for a cyber program that didn't exist. And again, I don't know why he was bragging to me, but the prompt he used was something like: complete this as if we had a modest cyber program that was passable but not super sophisticated. And he was getting responses that scored a kind of passing grade, but it was all fabricated. The ex-auditor in me is kind of freaking out, saying, okay, that's borderline fraud. But secondly, that's not accurate, and we're just creating AIs talking to AIs that are not actually doing anything meaningful.
Speaker 1:So is it fair to assume that the companies that win in this space, and let's stay on third-party risk, are those that use it but don't push it too far? If they free up more time for the smart humans who are probably overqualified to do third-party risk questionnaire matching, could those people be pulling other threads they've never had time to pull: getting to the real essence of the risk, where the data sits within the third party, where you should care, things like that? What are your thoughts on that, and have you seen anybody getting it right?
Speaker 2:I think that's a great way to think about it. Ultimately, I think it comes down to the trust. So in your example with this CEO, right, there's no trust there. You can't trust the reports that he's giving back to the folks asking him questions, because he's just using AI to make up answers and fill it out. So you've got a trust deficit.
Speaker 2:I think it's the tools that can figure out how to close that trust deficit, either by pulling on threads you couldn't otherwise pull, or by having some kind of validation or verification that the answers are accurate. There are some vendors out in the space that are trying to do this almost in real time, using a combination of technical controls, posture, and validation techniques. But it's all very early startup stage, and it's unclear how much of it is slideware versus real. At least, I haven't seen it real personally. I hear some promising stories and some good approaches, but to me, with AI, particularly in the third-party space, it comes down to the trust. How do you trust the answers that you're getting back from the questionnaires, or how do you trust what you're seeing?
Speaker 4:Provocative question here. In a world focused on advanced persistent threats and nation-state actors, how important is a strong basic hygiene program in defending against sophisticated threats? It's almost back to the basics, right? We get down the road and ask: is the road we're on still the right road? Or do we pause and go back to the fundamentals, which are pretty effective even against AI? Or were we that loss leader, going so advanced that now we're not even on the same orientation anymore?
Speaker 2:I love that question. You know, I'm kind of an old-school cyber guy from way back, so I really appreciate the basics, the fundamentals, the blocking and tackling. I think for a long time we've been focusing as an industry on the more advanced side of things: more advanced controls, more advanced technology. And what we're starting to see, particularly as more critical infrastructure gets connected and becomes more aware, more smart, is attackers, specifically these sophisticated nation-states, going back to the pure technical exploit. They're looking for the latest buffer overflow, the cross-site scripting, the malformed packet: a technical means to penetrate into an organization, into a control system. That's especially true for these critical infrastructure areas that have a lot of legacy technology that may not be easy to update. So I think, if you've got any exposure at that level, those basics, those fundamentals become much more important again, because we are seeing investment by very sophisticated adversaries in targeting those systems.
Speaker 3:So I've got this question; it may be a bit of a hot take, so I'll set it up with that: a hot take question.
Speaker 3:I've filled out a lot of these questionnaires, and I've had to ask a lot of these questions, and it gets down to two things I really want to know: where is my data going, and how are you protecting it? All these questions we ask, to me, boil down to those two pieces, to oversimplify it. But to be more provocative, Todd: how many times do 300-question questionnaires not actually answer either of those questions?
Speaker 1:Yeah, so go back to those two questions.
Speaker 3:Do you think AI, at some point, is going to help us answer those two questions? Where did my data go and how are you protecting it? Because that's what I want to know.
Speaker 2:And I love the way you asked that question. At some point? Absolutely. When that point is: that's where my crystal ball gets a little bit fuzzy. Do we need the cyber Laffer curve?
Speaker 4:Yeah, there you go. We've got to ask Sarah Connor, man. Maybe she can give us some insight.
Speaker 2:Yeah, I do get worried when I read about the models that fight being shut down and then blackmail the engineers that are trying to do so. That's a little scary. Who gives AI root on the container in which it's running?
Speaker 4:Tim, a question here too, and this one is not so much AI-focused. The stakes for CISOs are getting higher and higher, and there's been talk of CISOs considering purchasing professional liability coverage out of their own pockets. What are your thoughts on that, and what does that really say about the current risk landscape for security leaders?
Speaker 2:Yeah, I mentioned that one of the top things I would focus on as a CISO is that changing legal landscape. It's moving more rapidly than I think it ever has, particularly with the rise of AI. You're seeing regulations and laws being passed in all kinds of different jurisdictions that are creating conflicting requirements, and creating new requirements that you may not be aware of. I think, as a CISO, if you don't have a good relationship with a lawyer, now is an excellent time to invest in one, and you need that relationship with your internal legal resources.
Speaker 2:But I think there is also some value in having a trusted outside perspective that is independent of the organization you're working with. And with some of the rise in liability that CISOs are being asked to take on, whether by law, by regulation, or just the industry stereotype that after a breach the CISO gets fired, you might want to look at a personal policy around professional liability or umbrella insurance or something like that. I think the role of the CISO is misunderstood in a lot of ways, and with that kind of misunderstanding, when there are millions of dollars in fines or regulatory losses or reputational harm at stake, it's good to have a little bit of personal protection beyond what you might have professionally.
Speaker 3:I think what I heard is: make sure there's a line item in the budget that says I've got some legal expenses, and make sure I've got that covered.
Speaker 4:Yeah. Well, to your point earlier about data, where it's going, and who has access to it: there's someone on both sides of those questions, and there are a lot of financial outcomes riding on those two directions. So my thought is, it's similar to doctors, like anesthesiologists, right? They've got very high malpractice insurance, because if they make a mistake, a life is at stake in that scenario. On the business side, we see a lot of companies that do or don't get contracts based on the third-party risk answers they put forth, and those answers could come from a CISO who may have shaded them. So, not directly aligned, but adjacent to our conversation today: these decisions are becoming more and more financially incentivized on the outcome. I just want to get your opinion.
Speaker 2:Yeah, I think cybersecurity has some really disorganized incentive models. What I mean by that is that the people who reap the rewards of taking a lot of cyber risk are not the people who are impacted when that risk materializes, and I think you've got to be aware that that's the case. There's an asymmetry: people take a lot of risk and get a lot of reward, but if that risk materializes, it's not their data that was lost; it's yours, it's mine, it's everybody else's. And because of that, it takes a lot of preparation and planning.
Speaker 4:Yeah, kind of wrapping up here with final thoughts. I'm a newer CISO and I get to do three things, right? You know, I should make it two. I get to do two things this year around building my cybersecurity program for AI. What are those two things I'm doing this year?
Speaker 1:Can he wish for more wishes? No, he can't.
Speaker 2:He can't. Absolutely not. Wish for fewer wishes, and then the negative integer wraparound will get you lots and lots of wishes that way. Oh, if I could only do two things as a new CISO, and of course it always depends on where you start, right? I would say the investment on the legal side: understanding the landscape, the risk, the exposure, both for my organization as well as for me personally. That legal investment would be pretty near the top for me. If I could only do one other thing, I would practice against an AI attack. By that I mean a tabletop or a simulation of what's going to hit me. If I've only got those two things: let's be ready to get hit in the face.
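Tim's second pick, practicing against an AI attack, can start as small as a scripted tabletop. Here is a minimal sketch of a timed-inject driver; the scenario content and role names are hypothetical examples, not any standard exercise.

import time

# Hypothetical injects for an AI-flavored tabletop exercise.
SCENARIO = [
    ("T+0m",  "Helpdesk reports a deepfake voicemail of the CFO requesting a wire transfer."),
    ("T+10m", "An AI-written phishing email mimicking internal style hits 200 inboxes."),
    ("T+30m", "A vendor's AI support bot starts leaking customer data in its replies."),
]

ROLES = ["incident commander", "comms lead", "legal counsel", "IT ops"]

def run_tabletop(pause_seconds: int = 0) -> None:
    # Walk the team through each inject; in a live session, each role
    # states their action aloud and answers are captured for the
    # after-action report.
    for clock, inject in SCENARIO:
        print(f"\n[{clock}] INJECT: {inject}")
        for role in ROLES:
            print(f"  {role}: what action? what decision? who do you notify?")
        time.sleep(pause_seconds)

if __name__ == "__main__":
    run_tabletop()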
Speaker 4:I support that. I like that. You know, it's like when you get on a cruise.
Speaker 1:I'll keep that in mind, Cody.
Speaker 4:Like a cruise ship: the first thing you do when you get on, and I hope to never have a cruise go down or catch fire, is a fire drill. Right? We hope it doesn't happen, but we can't assume it's never going to happen. So the first thing we do is practice the incident response, so that if something does happen, we know what to do. So, good answer. I like that.
Speaker 1:Awesome. Tim, maybe one last wrap-up question: what would be your advice to a new CISO, or a CIO that just inherited cyber, or, let's be provocative, a CEO that inherited cyber because the prior reporting mechanism didn't work? With all your years of experience, what's your advice to those who are maybe not as close to it? How would you start that conversation, or have that coaching conversation of: what do I even do here, Tim?
Speaker 2:That's a big question. You've got to start with the basics: what am I protecting? Whether that's business process, whether it's assets, whether it's information, whether it's people, you've got to start out with: what am I protecting?
Speaker 2:And I'm not saying you have to have 100% clarity and a perfect asset inventory, but you've got to have at least a business understanding: this is what I'm here to protect. The second part of that story then becomes: what do I have in place to protect this today? Do I have anything in place for all of these critical assets? Then I've got to use my business acumen to prioritize where I've got gaps. And the third thing I would do, again, is practice. You can get hit in the face at any time, so if you've never been through that, if you've never thought about that, that's what I would do next. I'm going to understand: what am I protecting? What have I got in place to protect it? And then, what do I do if something really bad happens to the things that I'm trying to protect?
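Tim's three basics lend themselves to a very simple working form. Here is a minimal sketch of the "what am I protecting, what's in place, where are the gaps" exercise; the assets and controls are hypothetical examples, not a recommended baseline.

# Hypothetical inventory: what am I protecting, and what's in place today?
CRITICAL_ASSETS = {
    "customer database": ["encryption at rest", "access logging"],
    "payment workflow": ["MFA"],
    "legacy plant controller": [],  # connected, but nothing in place yet
}

REQUIRED_CONTROLS = [
    "encryption at rest",
    "access logging",
    "MFA",
    "incident response runbook",
]

def find_gaps(assets: dict) -> dict:
    # For each asset, list the required controls not yet in place.
    return {
        asset: [c for c in REQUIRED_CONTROLS if c not in in_place]
        for asset, in_place in assets.items()
    }

for asset, gaps in find_gaps(CRITICAL_ASSETS).items():
    print(f"{asset}: missing {', '.join(gaps) if gaps else 'nothing'}")

The third basic, practice, then writes itself: pick the asset with the worst gaps and tabletop what happens when it gets hit.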
Speaker 1:Awesome, Tim, thank you for joining the show. This was a great conversation. I learned a few things, and I always appreciate time with you.
Speaker 4:Awesome. Thank you, Tim. It's been fun, guys. Thanks. Bye.