Simply Solving Cyber

Simply Solving Cyber - Tim Sewell

September 12, 2023 Aaron Pritz
Transcript
Aaron Pritz:

Welcome back to Simply Solving Cyber. I'm Aaron Pritz and I'm Cody Rivers. And today we're here with the elusive Tim Sewell, the CTO and co-founder of Reveal Risk. My co-founder and business partner here at the company.

Cody Rivers:

Very elusive.

Aaron Pritz:

Tim, how are you doing today?

Tim Sewell:

I am thrilled to be here finally. Yeah, it has been a long time coming.

Aaron Pritz:

We'll get into why you were Mr. Elusive on this specific show, nothing else, here in a bit. But let's start, uh, how we always do, by giving the audience a chance to get to know you a little bit more. So start with telling us how you got into cybersecurity, and maybe your defining moment of when you knew it was your permanent spot.

Tim Sewell:

Yeah. So I've been in cyber for a long time. It's always been an area that's fascinated me, all the way back to when I was a kid. For my ninth birthday, when you think about the kind of presents that you would get, maybe a Nintendo or some roller skates or something, I got the Orange Book. Which is the TCSEC, the Trusted Computer System Evaluation Criteria.

Aaron Pritz:

So, a page-turner there.

Tim Sewell:

Oh, it's kind of known as the hacker Bible from the late eighties, early nineties. Very hard book to get at that point in time.

Cody Rivers:

Hardback or paper?

Tim Sewell:

It's paper. And it is a very bright orange cover.

Cody Rivers:

Okay, excellent.

Tim Sewell:

First edition, orange book.

Aaron Pritz:

Don't want to lose it.

Tim Sewell:

There's a whole series of them, the Rainbow Series of books on computer security, but the Orange Book is kind of the core foundational one.

Cody Rivers:

Interesting. Okay.

Tim Sewell:

And, how did I know that I wanted to do cyber as a passion kind of for my career, for my life? Um, I do love this stuff, uh, but it really started out with the video games. So early on you could make modifications to the game and, suddenly your characters would no longer die from dysentery in Oregon.

Aaron Pritz:

What kind of video? Oh, Oregon Trail. Okay. Nice. Old school. Apple IIe?

Tim Sewell:

Yeah. Yeah. Apple IIe early, early Packard Bell. Good times.

Aaron Pritz:

Nice. So then tell us about your journey through cyber. So it started with games and orange book and kind of being interested in the topic. Where did you start and how did you progress?

Cody Rivers:

And it wasn't called cyber then.

Tim Sewell:

Oh no, no. It was still called computer security, or information assurance, which is kind of what the government term for it was. Uh, and they tried to extend it beyond digital information, to actually talk about protecting paper copies of information. So, information assurance.

Cody Rivers:

You're doing this thing and you're liking it, and you say, this is cool, I like doing this. And then how's the story go?

Tim Sewell:

Yeah. So I kept pursuing that. I went to undergrad for computer science and focused on information assurance, and got right into defense. So I literally was working, while cyber was still underground, in the underground at U.S. STRATCOM for a while, and...

Cody Rivers:

How far underground?

Tim Sewell:

That's classified.

Aaron Pritz:

Do you take an elevator, or a ladder, or a spiral staircase?

Cody Rivers:

Firepole would be way more fun, man.

Tim Sewell:

Firepole would have been cool, but no, it's actually a series of ramps that go down because they have to be able to take like truckloads of food down there. When they close the door, they have to be able to survive for like two years.

Aaron Pritz:

Now I'm thinking of the show Silo.

Cody Rivers:

I was thinking of Interstellar, when they're going on its side and going around.

Aaron Pritz:

Okay. Well, what do you do underground? What did you do underground? And most importantly, how did you get out?

Tim Sewell:

Getting out was actually kind of interesting. Underground, I was the accreditation lead for a bunch of systems, which means I was the guy responsible for making sure all of the paperwork was in order, that the systems had been properly tested, configured, installed, were adhering to all the security best practices, and so forth. And that process was very documentation-intensive and typically took a very long time.

Aaron Pritz:

Sounds not fun.

Tim Sewell:

Well, you know, it wasn't. Uh, so I applied some of my computer science background. I thought, oh, I bet I can automate some of this. So I wrote a bunch of macros, because at this point in time you could still have macros in Word documents; they hadn't yet become, you know, the criminal vehicle of choice. And I turned what used to be a multi-month process into days to weeks, which I thought was fantastic, and my customer thought was fantastic. And my boss said, well, that's great, you've just automated yourself out of a job. But it worked out, 'cause I got to go to California. They sent me out there to work on some interesting stuff that's still in orbit, I think. And I got to continue to apply my talents and interests and do really cool stuff in cybersecurity for a long time in the aerospace, defense, and intelligence community. They were heavily focused on cyber, and so it was a great time to be there. And as I was doing that, we started to see banks and other commercial entities become more concerned about their technical security, with the rise of the internet and e-commerce and everything becoming connected.

Cody Rivers:

So at this point, was it starting to become called cyber? I mean, were we now in the continuum of the development of cyber on the commercial stage?

Tim Sewell:

Yeah, it was starting. We had a lot of debates about, was it still information security? Was it cybersecurity? Were those two things different? Was information assurance still in there? You had folks talking about electronic warfare broadly, which includes things like signals intelligence and radar jamming. But the terms have evolved over the years, and they're going to continue to evolve, because we keep learning more and more. You know, when I started out in security, you could learn all of security. It took four or five years of dedicated study. You'd take things like compiler construction, network security, a little bit of maybe identity management, some application development, and you could credibly call yourself a security person, because you knew all of computer security. It's impossible to do that now. The field has exploded. There are so many disciplines and sub-disciplines, and so many different applications of all these different technologies, that no one person can keep up with all of it.

Cody Rivers:

So many channels, so many different mediums to get to and fro.

Tim Sewell:

Exactly, exactly.

Aaron Pritz:

So then post defense, where did your career take you after that?

Tim Sewell:

Yeah. So I got tired of making things that blew people up. Started thinking I should do things that help put people back together. So I went up to a little hospital in Minnesota called the Mayo Clinic. Worked there for a few years and helped them do some really interesting things in terms of transforming the way that healthcare approaches network security and medical device security. They do some fantastic work up there, and I was really proud and privileged to be a part of it.

Cody Rivers:

That's excellent.

Aaron Pritz:

And then we got to work together in pharmaceutical industry.

Tim Sewell:

Yeah. Yeah. So the one challenge with Rochester, Minnesota, is it gets quite cold. So, you know, one day it was 52 degrees below zero, and I got a call from a recruiter that said, hey, have you heard of Eli Lilly? I said, yeah, that's south of here. We should talk. So I moved here to Indiana, uh, in about 2016, and worked at Eli Lilly for a little while before leaving in 2018 to co-found Reveal Risk here with Aaron.

Cody Rivers:

I never asked, how'd you guys first meet at Lilly? Was it over, like, crossing paths in the lunch cafeteria, or did he have a pudding snack that looked good? Do you want to share that?

Aaron Pritz:

Probably in a meeting, but I think we did have some lunches and coffees.

Tim Sewell:

We did. I think I had 20-30 lunches with people in my first month at Lilly. Uh, they did actually a really good job at making sure that I got to meet many of my peers and folks that I would need to work with, uh, in my role there, as part of my onboarding.

Cody Rivers:

Awesome.

Aaron Pritz:

So I guess back to the question of your defining moment, as you went through, was there a point in which you knew? Hey, you weren't going to pivot to something else or cyber was kind of where you, I mean, you were passionate from the beginning, but what was that defining moment?

Cody Rivers:

It's a good question.

Tim Sewell:

It's almost like I've just always assumed it. It's been such a core part of who I am and who I've always been. I don't know that I considered much else. I mean, I've always liked business too, so this has been a good decision over here.

Cody Rivers:

Kinda nailed the two things.

Aaron Pritz:

Nice. Well, as I alluded to when we kicked off, you have been a little bit elusive on the podcast front, uh, for reasons we won't get into, but let's just say important things have always come up. Uh, clients, which, clients always come first. But every time, we call it the Tim Curse: when we try to do a podcast with Tim, something interesting comes up. And, uh, no excuses today, you're here. But, gotcha, we did actually prepare, Cody and I. We've been doing a lot in AI, and we created an AI-based deepfake Tim, which we didn't have to use today, but I think we should introduce him. What do you think?

Cody Rivers:

You know what? I think it's nice to have him say a few words. Okay, well, let's do that.

Tim:

Hi everyone! I am Tim-bot, the A.I. robotic clone of Tim Sewell... and see, I even mispronounced my own last name, like many people that real Tim meets. That's OK. I like many things in cyber and in life. Networking with hundreds of people per day, and putting myself out there playing guitar and singing in my barbershop quartet. Oh wait, none of that is true, and I have indeed hacked real Tim.

Tim Sewell:

Okay. Well, you know, a little flat, but, uh, you know, imitation is the most sincere form of flattery. So thanks a lot, guys.

Aaron Pritz:

The intonation's a little off, but the voice, I definitely can tell that that's Tim. All right. Well, on the topic of AI, Tim, you've been doing AI before the recent resurgence of AI through ChatGPT, or what we know now. Every app has an AI bolt-on. But give us your thoughts. What has AI been to you in cyber, uh, maybe pre- and post- the commercialization of it here recently? What's that landscape?

Tim Sewell:

Yeah. So I was working with what I would consider early AI in cyber back in the mid-to-late 2000s. The idea was, how can we leverage technology better to help the computer defenders really do their job? How can it be a force multiplier? The challenge in cyber is often so much data, so few analysts, such a niche set of expertise. We have to have better tools to help us find the needles in the stacks of needles that are hidden under haystacks. So it's been a force multiplier for the cybersecurity industry for a long time. What we've seen recently is AI becoming accessible much more broadly. Tools like ChatGPT, the new AI art tools like Midjourney. You've got general users making things with AI, and it's become really a flashpoint for culture at this point.

Cody Rivers:

Definitely a buzzword now.

Tim Sewell:

And it is everywhere. So of course, in the computer security realm, you've got attackers, you've got defenders. The attackers love AI. Generative AI helps write amazing targeted phish. It helps create self-mutating malware. It'll analyze code and find flaws, and then it will help you write exploits for those flaws. It's a tremendous performance boost for the adversary community. On the defender side, similarly, we can use generative AI to write better awareness content. We can use it to more quickly analyze large amounts of data to discover the anomalies that are caused by bad behavior. So it's a little bit of an arms race. The attackers will figure out something, then the defenders will figure out a counter. The defenders will figure out some really cool detection, the adversaries will find an evasion.

Cody Rivers:

Using the same tool is kind of wild. It's like this double-edged sword.

Aaron Pritz:

Absolutely. Is it just me, or until the last year or so, maybe less than a year, had AI been overused as a marketing jargon term? Like, everything has AI, and really nothing reflected that it had AI. So again, I was kind of writing it off, but you were doing stuff, maybe AI before it was called AI. Talk to us about what you really consider AI, and then what's the shell of hype that a marketer might put on something to create more intrigue about their product.

Tim Sewell:

Yeah, so maybe it's easier to answer the second half of that question, what just gets called AI. Anything that the computer does for you, somebody is going to call it AI. It's, "Oh, the computer figured out this pattern for me." Well, okay, you put in some rules and some conditional if-then-else statements, some logic stuff, and it came out with an answer. And anytime you put that input in, you're going to get that answer out. It's not really AI.

Cody Rivers:

It's not thinking, it's just executing a set of steps.

Tim Sewell:

To me, it becomes AI when that output becomes less deterministic. So ChatGPT is a great example of this. You go to ChatGPT and you give it a prompt and you ask it a question. It'll come back with an answer. And if you hit regenerate at the bottom of that answer and give it the exact same prompt, it'll come back with a completely different answer.

Aaron Pritz:

There's not a defined set of multiple-choice answers, right?

Tim Sewell:

Exactly. There's not a defined output based on a specific input. And that's what really makes it kind of feel magical, right? It feels like the computer is doing some thinking for you. It's still doing cluster analysis and grouping, and it's putting these words together because it sees these words together a lot in its corpus of training data. So it's still a computer, it's not really thinking, but it feels a lot more like it is now, because you're getting these non-deterministic outputs from your input.
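[Editor's note: the distinction Tim draws here, fixed rules returning the same answer every time versus a model sampling its next word from a probability distribution, can be sketched in a few lines of Python. This is a toy illustration with hypothetical names and data; a real LLM samples from a learned distribution over tens of thousands of tokens.]

```python
import random

# A rules engine: the same input always produces the same output (deterministic).
def rules_engine(alert_count):
    if alert_count > 100:
        return "escalate"
    elif alert_count > 10:
        return "review"
    return "ignore"

# A toy "language model": the next word is *sampled* from a probability
# distribution, so the same prompt can come back with different answers.
NEXT_WORDS = {
    "the attacker": [("pivoted", 0.5), ("exfiltrated", 0.3), ("escalated", 0.2)],
}

def toy_lm(prompt):
    words, weights = zip(*NEXT_WORDS[prompt])
    return random.choices(words, weights=weights, k=1)[0]

print(rules_engine(50) == rules_engine(50))          # always identical
print({toy_lm("the attacker") for _ in range(200)})  # multiple distinct outputs
```

Hitting "regenerate" is effectively calling `toy_lm` again: the same prompt goes in, and a freshly sampled continuation comes out.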

Aaron Pritz:

Do you think AI will ever be, in the near term, sentient? I know there's been people claiming that it already is. What are your thoughts there,

Cody Rivers:

Jarvis?

Tim Sewell:

I think it's an interesting philosophical point, and one that we're going to have to struggle with as a society for a while, because I don't think we have a really good definition of sentience. I think this is going to force us to create one, and then we'll have to decide if our computer AI reaches that threshold or not.

Aaron Pritz:

Makes sense. Well, we've had a couple discussions on AI. It's really hot right now. It's hard to avoid. But maybe let's pivot for this discussion a little bit more into deepfakes. And Tim, you and I were out at DEFCON and Black Hat, and specifically at DEFCON, we saw a talk on that with a live demonstration. Do you want to talk about that, and then we can unpack kind of where that leads us to think the deepfake society or space will go from here?

Tim Sewell:

Yeah, so the presentation that we went to was a live demo, and basically pulled together a few different open source tools to create a model of an individual, including their visual likeness and their voice using surprisingly small amounts of source content.

Aaron Pritz:

Yep. So it was the guy's real CEO, right? And that was one of the ones he mocked up.

Tim Sewell:

Yes. Yes. One of them was his real CEO, and incredibly convincing.

Aaron Pritz:

Didn't demo with that one, probably for good career reasons.

Cody Rivers:

Man. So they're taking AI and creating realistic images and people and video.

Tim Sewell:

Yeah. And the cool part, or maybe the scary part, for this one is, during the live session, he's standing up there on stage and you can see it's him, and you see the camera on him, and he starts to run his script, and you see the camera image transform into, uh, Jeff Moss, who is the founder of Black Hat and DEFCON and a very well-known figure in that community, but...

Cody Rivers:

That's wild.

Aaron Pritz:

Not only looked like, but he got the audio replica in real time.

Cody Rivers:

So how do you, man, that's wild. So then, what are some things out there to detect it? What things can AI not do yet?

Tim Sewell:

Yeah. So as impressive as the real-time capability is, there are still some limitations. We'll see how long these limitations last, but: side-profile views. So if somebody turns to the side on camera, oftentimes that will confuse the model. It will blur, or it will have some sort of distortion or glitch. Similarly, nuanced facial expressions can be a challenge. Somebody will be grinning and their cheeks will still drop, because the model is not well articulated that way. As far as vocals? You can find some odd intonation for things like laughter, things that are not necessarily spoken words. You might not get a lot of good samples to create a vocal model out of.

Cody Rivers:

What about, like, interacting? So if it's like on a phone call, or maybe it's even, to your point, via prompt, which you see on LinkedIn nowadays with a lot of that stuff there, what are some, like, interaction flags?

Tim Sewell:

Yeah. So how do you know you're dealing with something that's generated by an AI? AIs don't get jokes, so they're not very good at humor. And they sometimes drop a lot of context. Again, they're finding groupings of words or groupings of concepts or ideas that they see frequently in their training data. And so if you're asking questions in a bit of a roundabout way, it'll get some weird clustering, some weird responses.

Cody Rivers:

No, no humor. A little bland. He sounds like I dated this person in college.

Aaron Pritz:

Didn't we all? Yeah. My mind goes to social engineering, especially with the real-time piece. 'Cause Cody, you were asking about fully generated, like a person that doesn't exist saying things. If I've got a camera trained on me, using some of the same technology that was demoed and released open source to the broader community, I could call you on Teams or Zoom, emulate whoever I trained the model to emulate, and have a conversation posing as them. The other thing I was thinking, on the glitching, and this just makes it harder: you think about Teams and Zoom, the background blur, or the art backgrounds. When you turn to the side, you get glitching normally, so I think that's almost a mask that makes it even harder to detect on these video call platforms. So Tim, thoughts on social engineering? Do you think that the attackers are already jumping in with this? Do you think it's early? Where do we think we are with this?

Tim Sewell:

Oh yeah. I think we've already seen examples of these techniques being used to create big political debates and arguments. You'll see politicians making statements that they didn't actually make, but it's almost impossible to tell.

Cody Rivers:

Yeah.

Tim Sewell:

And we've also seen cases where executives have been impersonated via video chat channels, telling their staff to do things that they wouldn't ordinarily do. But hey, it's my boss on video telling me to do this. I guess I'm going to go buy those gift cards after all.

Cody Rivers:

Yeah. He's on video telling me this is pretty real.

Aaron Pritz:

Yeah, gift cards are never a real request, until they are, right?

Tim Sewell:

I was gonna say, didn't you ask me to buy some gift cards?

Aaron Pritz:

I think for a conference or something. So, touché. On the election front, or really any big public debate, then we get into influence campaigns and trying to sway perceptions. We've obviously already seen cases of that, or negative ads that show things that were faked or whatnot. Are we going to see more of this? Is this an unfortunate status quo? Or do you think there's going to be some regulation to try to prevent that, so people have decision-making capability based upon reality?

Tim Sewell:

I think we're going to see an almost unending stream of it, and figuring out what to trust is now a very hard problem, because it's so easy to create compelling, realistic, fake content. And there are a lot of social institutions that are susceptible to that kind of an attack, or that kind of a threat.

Cody Rivers:

Yeah.

Tim Sewell:

Elections are obviously one, and that is terrifying, really, when you think about how social media can influence an election or a public policy conversation. Then you apply the ability to fake what your opponent says or does, in a compelling way, in a society that's already challenged with fact-checking.

Aaron Pritz:

You mean fake like this?

Tim:

Hi!!!! I'm back, everyone! I just want to put in a plug for the 2024 presidential campaign of Statler and Waldorf. I love those Muppets. They have great coaching and feedback, and tell the truth without backing down.

Tim Sewell:

That was a little too good.

Aaron Pritz:

So I guess, maybe turning to the future: what recommendations around AI do you have for cyber leaders, and then maybe company owners and executives? Two-part question, because the remit or the angle is different. We've got listeners that are cyber professionals, and those that might be small business owners or executives of companies. Like, what do people need to be focused on now to get ahead of this, or to deal with it?

Tim Sewell:

Yeah, there are a lot of policy and legal questions around AI and intellectual property. Those are going to take years to work through various legal channels to get to resolution. And I am not a lawyer, so I don't feel too qualified to speak on them, other than to say that they exist, and they are real, and they will cause challenges for all organizations, either the organizations trying to use AI or trying to prevent the use of AI for something or other. I think, from a technical perspective, to use AI safely, you've got to think about ways that AI can be attacked in and of itself, and how you defend the AI in and of itself.

Cody Rivers:

Yeah.

Tim Sewell:

So poisoned training data is a real threat. If you can give the model bad data, it's going to learn bad data. Similarly, if you can control the inputs to the model, or change the inputs to the model, it will generate bad outputs. So if you can control how somebody is interacting with their AI, and change words or change prompts or change things, you can force the AI to do things it's not supposed to do.

Cody Rivers:

Kind of, because you're feeding the library of options. The salt, or the thing you're putting in there that's not right, will come out, because it doesn't know.

Tim Sewell:

Exactly.
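[Editor's note: the poisoned-training-data threat Tim describes can be illustrated with a deliberately tiny sketch. The data and labels are hypothetical, and a nearest-centroid classifier stands in for a real detection model: the model has no notion of "right," it just averages whatever its training labels claim, so attacker-supplied points labeled "benign" drag the benign centroid toward the malicious region.]

```python
# Toy data-poisoning sketch: a nearest-centroid classifier learns whatever
# its training labels say, so injected mislabeled points shift the decision.

def centroid(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def train(samples):
    """samples: list of ((x, y), label) -> {label: centroid}"""
    by_label = {}
    for point, label in samples:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def classify(model, point):
    def dist2(label):
        cx, cy = model[label]
        return (point[0] - cx) ** 2 + (point[1] - cy) ** 2
    return min(model, key=dist2)

benign = [((1.0, 1.0), "benign"), ((1.2, 0.9), "benign")]
malicious = [((5.0, 5.0), "malicious"), ((5.2, 4.8), "malicious")]

# Attacker injects points that look malicious but carry "benign" labels.
poison = [((5.1, 4.9), "benign"), ((5.3, 5.1), "benign"), ((4.9, 5.0), "benign")]

clean_model = train(benign + malicious)
poisoned_model = train(benign + malicious + poison)

probe = (4.2, 4.1)  # suspicious-looking traffic
print(classify(clean_model, probe))     # malicious
print(classify(poisoned_model, probe))  # benign -- the bad data won out
```

The same shape of attack applies to the prompt-manipulation case Tim mentions: whoever controls what goes into the model steers what comes out.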

Cody Rivers:

Gotcha. So, man, is there hope for the future? I hear a lot of things AI can do that are nefarious, but I think there's a lot of great things too. Talk about maybe the hope for the future.

Tim Sewell:

Yeah. So we're in the infancy of general computing AI, similar to the launch of the internet, or the networking of computers. Very early on, there was a lot of concern about that too. What is that going to mean? How are we going to handle that? And I think this is a similar period of societal inflection as we adopt more AI solutions. It's hard to tell. I like to look to science fiction, because I think it actually does a pretty good job at giving you a vision of at least potential futures. So I like to be a bit of an optimist and look to a Star Trek-like future, where AI removes a lot of the drudgery from human existence and frees up people to pursue arts and-

Cody Rivers:

The nuance versus the kind of mundane?

Tim Sewell:

Pursue what I would call the truly human endeavors, get rid of all the drudgery. Of course, science fiction will also tell you that the AI, as soon as it becomes sentient, is just going to launch all the nukes and kill us all. So that would be "The Terminator" version.

Cody Rivers:

There we go.

Aaron Pritz:

Where does "Minority Report" fit in? It's kind of dystopian.

Cody Rivers:

That's the precogs. That's actually-

Tim Sewell:

That's a future in which AI can ostensibly do that, right? I mean, there are people saying, "Hey, if I feed all the financial data into an AI, it should be able to predict the stock market, right?"

Cody Rivers:

That was the precogs, man. That was the little human people thing. So, speaking of AI, what is probably your favorite science fiction AI? I mean, I think of Knight Rider, you've got Terminator, you've got Jarvis from Marvel Comics, from "Iron Man."

Tim Sewell:

I'd probably have to go with Data.

Cody Rivers:

Yeah, that's a good, that's a good one.

Tim Sewell:

Data, the android from "Star Trek: The Next Generation." Probably my favorite. I think about some other ones that I really enjoyed that were maybe a little more nefarious. There's a book by Daniel Suarez called "Daemon" that has a really interesting AI component. I do find a lot of the science fiction books about AI really are trying to be warnings about how AI could be abused. And then I find organizations or companies or venture capitalists grab those same books and say, hey, we should go build this. It's like, no, did you read the book? It said that was a bad idea. We shouldn't do that. Uh, "I, Robot" is a good example of that. You know, you have the three laws of robotics and how that can ultimately lead to, um-

Cody Rivers:

Now you're asking the right questions. There you go.

Tim Sewell:

Exactly.

Aaron Pritz:

Cody, I'm going to use the question you asked Shelly: what's a fun fact, or something that no one knows about you, or very few people know about you? A fun fact, hobby, or Tim-ism that you can unveil here today. We missed this one in prep, so apologies in advance. Kristen, here's the edit part.

Cody Rivers:

Did you meet any cool celebrity on a wild happenstance?

Tim Sewell:

You know, I did meet Jane Goodall once. We were on a flight together from Philadelphia to San Francisco. We just happened to get seated next to each other, and I look over and go, "Are you Jane Goodall?" "Yes, yes, I am." We had a lovely conversation. She's a charming woman.

Cody Rivers:

That was from Philadelphia to San Francisco? That's a long flight.

Tim Sewell:

That was a good long flight.

Cody Rivers:

Any good conversations you can share?

Tim Sewell:

We talked a lot about the sandhill crane migrations in western Nebraska. I'm from Nebraska, and she goes there every year to watch the migrations. They really are quite spectacular.

Cody Rivers:

What a time, man, on this little flight.

Aaron Pritz:

Any closing thoughts, Tim, you want to leave with our listeners? What recommendations do you have for them? Kind of protecting their company given you've been doing that for over 20 years. What's your number one recommendation for focus?

Tim Sewell:

I think it's still pretty consistent: if it sounds too good to be true, it probably is, especially when you're looking at an AI solution right now. There's a tremendous amount of hype, and there's a lot of unknown. So I would say continue to be vigilant, continue to do your due diligence, and don't believe everything you see.

Aaron Pritz:

Good words. Thanks, Tim, for joining the show. Have a good rest of the day and weekend.

Cody Rivers:

Yeah, thank you. Glad we finally got you here. Man, it's probably the hardest 15 feet to get you, but we got you for episode one, so we appreciate this.

Tim Sewell:

All right. It's been fun guys. We'll do it again.

Aaron Pritz:

See ya.

Cody Rivers:

Bye.