The Human Element: AI, Deepfakes, and the Evolving Threat Landscape with Perry Carpenter

TL;DR

  • Defending against AI-enabled scams can be bucketed into two areas: detection-based (humans spot scams using past experience, tone, and other conversational cues) and process-based (tools, gates, and workflows that block scams before they cause damage).
  • AI should be leveraged to automate tedious manual activities. At the same time, its output should be verified before that knowledge is amplified or acted on.
  • Security programs should be measured by impact, focusing on outcomes and behavior-based assessments rather than attendance alone. Role-playing is a key exercise to help employees understand the attacker's mindset.

Transcript

Host: Hi everyone, this is Purusottam, and thanks for tuning into the ScaleToZero podcast.

Today's episode is with Perry Carpenter. Perry is a multi-award-winning author, podcaster, and speaker with a lifelong fascination for both deception and technology. As a cybersecurity professional, human factors expert, and deception researcher, Perry has spent over two decades at the forefront of exploring how cybercriminals exploit human behavior.

I'm pretty sure we'll touch on some of these areas today. But before we kick off, Perry, welcome to the podcast. Anything you want to add to the journey that I just described?

Perry Carpenter: Yeah, I'll just say that, number one, thank you for inviting me. But on adding to the journey: it was not a straight, well-thought-through route that I took to get here. It's been, you know, fumbling my way into the position that I'm in now. And so, just as a way of encouraging people who are starting their cybersecurity journey: you don't have to have it figured out at the very beginning.

What you do is take the jobs that you can get, hold on to the things that you're interested in, find ways to integrate your interests into whatever job you have, and let that express itself. Career and life are almost like an escape room.

One little thing that you do unlocks another possibility that unlocks another possibility. And so that's the way that I approach everything. And it has let me really integrate the passions and the interests that I have into whatever job that I have at the time.

Host: That's a great start to the podcast. Honestly, we often feel that we need to know everything before getting into any domain, security included. But you cannot figure everything out until you get your hands dirty, right? You get into it with a beginner's mindset. So yeah, that's a great start. So before we go into the security questions,

Perry Carpenter: Yeah. Right.

Host: One of the things that we ask all of our guests and we get unique answers to is what does a day in your life look like?

Perry Carpenter: Yeah. Yeah, I have a weird job in that I kind of got to create my own role seven and a half, almost eight years ago when I came to KnowBe4. My title has changed over the years. It started as Chief Evangelist and Strategy Officer, and that was just to show the multiple domains that I would be able to touch within the organization.

And right now it's changed to Chief Human Risk Management Strategist. And that's because the market has shifted a little bit, and terms like Chief Evangelist are a little out of vogue these days, when before they used to be very, very common. But really, when you look at a day in my life, you can't look at a single day. You almost have to look at a week or a month.

And it's because of the varying areas and scopes of responsibility that I have. One that's really easy to point to, because it's what everybody sees, is the evangelist portion: getting out and working with the media, doing things on LinkedIn, giving presentations, all the outward things. But then on the inside of the organization, I either lead or integrate with a number of research efforts. Because the organization I work for has access to so much data, and very unique behavioral security data, we have opportunities to get insights out of it and either deliver those insights to the rest of the world through published research, or use them internally to make our products better or to deliver new products.

So I work in that area. Then I also work with internal product development on taking what I'm hearing out in the rest of the world and helping focus products around that, and with marketing on the messaging around things. On both the product side and the marketing side, there's always what I would call the last mile: understanding what the product does, and then being able to talk about the value of the product in a way that a potential customer will understand and appreciate.

So you're not just talking features and functionality; you're talking about problems and the things this product was developed to address, to make somebody's life a little bit easier and to alleviate some of their stress.

Host: That's a lot going on in a week: understanding what's happening in the market, translating that for your internal teams, and crafting the communication or messaging for the external and internal world. It sounds like a huge task list. Hopefully we can cover some of those things today in the podcast.

Perry Carpenter: Yeah, right. Absolutely.

Host: Today's focus is primarily on AI and human risk management: why people fall for scams, and how cybercriminals are exploiting human nature for their benefit. So to kick off our episode, let's start with a technology that has been on all of our minds for the last couple of years, which is AI, and understand its impact on security.

We know that AI has pros and cons, of course. Can you help our audience understand how AI can be a powerful tool for enhancing security?

Perry Carpenter: Yeah, and I think when we talk about AI, we have to realize that AI has been around for a long time, since the 1950s. And we've seen it manifest itself in different ways, from things like your recommendation algorithm on YouTube, to the way that Spotify and other streaming services incorporate insights and predictions, to the way that marketing and tracking work.

You know, all of that has been very algorithmic; the learning for it has been basically numbers and recommendations. And that's how we've seen it manifest more than anything else. The difference in the past couple of years has been the emergence of what's called generative AI. The seminal paper for that was written in 2017 by the folks at Google Brain, before it became part of Google DeepMind. They wrote a paper called "Attention Is All You Need," and out of that paper came this thing that we call the transformer model. And the transformer model is the backend for generative AI.

And so with generative AI, the critical difference is that it can simulate human creativity in ways that hadn't previously been possible with AI. And that means there are interesting new ways of doing analysis, interesting new ways of doing things like simplification, aggregation, and the writing of text. And of course, there's all the video generation, the audio generation, the image generation, and so on.

So the multimodality is really interesting, in that it does all of this in very human-like, creative ways.

And that opens up a Pandora's box of both good things and potentially dangerous things. On the good side, creators like you and I can have access to capabilities that were really out of our reach before. We no longer have to look at a blinking cursor on a screen for hours wondering what's the first sentence in my document going to be. Cause I can use ChatGPT or Claude or Google's Gemini.

And I can brainstorm with it and say: I've got an idea about this thing, I need 20 initial ideas just to help get my brain started. So that becomes really, really easy. Or I need the perfect image for this; now I can go to DALL-E or Midjourney or Flux or something like that, put in my text-based idea, and out comes something potentially useful. Same thing for video and audio: I can have all my favorite people doing or saying things that they never really did or said. And that's really, really great for unlocking creativity. It's really, really dangerous for unlocking the potential for scams.

And that's the way I think about it: every tool in every generation has the same thing. The innovation of a hammer: I can use it to pound in nails and construct something, or I can use it to tear something apart, or I can use it to bludgeon another human. Same thing with fire: I can use fire to warm a house, or I can use fire to burn a house down. AI is the same way. It has that dual purpose built in.

And what I say over and over and over again is that every tool will mold to the hands and the intentions of the person who picks it up. And I think generative AI is the same.

Host: Mm-hmm. Yeah, you're spot on that AI has been there all along; the Facebook news feed is all AI, right? I was listening to a podcast with Facebook's CTO, and he mentioned that the news feed was essentially the first AI product out there. But you were not interacting with the AI system directly; there was a system behind the scenes. Now with ChatGPT, Claude, and other systems, you are in a way interacting with AI directly. So that's where it becomes more visible.

Now we spoke about the benefits for creators, security folks. What are some of the risks that you see? Like, do you see any new threats or vulnerabilities that AI can introduce into the cybersecurity landscape?

Perry Carpenter: Yeah. What I see is that many of the same types of scams and threats that have existed for a long time can be expressed in new and different ways. And so the thing I keep coming back to is that there's not the creation of something new; there's a new and innovative way of expressing the oldest tendencies that we've had as a species around deception: how to trick somebody into giving me something, or how to trick somebody into believing something.

And so for me, it all comes down to the fact that I can use AI for scams or deception. And if I'm a cybercriminal picking something up, or if I'm just a scammer that's been out there regardless of what technology exists, I'm after one of two things: money or minds. I'm either out to take money from somebody, to scam them out of something that they had, or I'm out to influence them into doing things that benefit me. And that could be believing things so that you're affecting the results of an election, or believing something so that you can start a riot in the streets, or have an economic outcome.

So that's what generative AI is able to do. And it's able to do it at a scale and with a precision that is new. So now I can be halfway around the world and scam somebody in a language that I've never learned, and I can do that in a way that sounds close enough to a native speaker of that language, because I can use different translation technologies. Or I can think of the perfect thing that I want somebody to believe, and now I can generate an image that matches that and the right narrative. And I can do that, again, at scale. So my N changes, right? It's no longer one-to-one. It's one-to-as-many-as-I-want, with a diversity where anything I can think to prompt for, I can create that kind of diverse outcome for.

And so it can be different population sets. It can be different hooks for the emotional bit of the scam or the emotional bit of the deception. And I can do that with precision segmentation. I think that's the thing that's different.

Host: Yeah, that's a very good example that you highlighted. And AI-generated spam, like spam messaging and spam emails, has been on the rise as well. We reached out to a common connection, Deb Radcliff, who shared a question around this particular topic. The question is:

If it looks like the CEO asking finance to transfer money, and if it sounds like the CEO, then the finance person will send the money, right? This is commonly called business email compromise. The same applies to social media, where we believe some piece of misinformation or click on something. What role do human factors play in these deepfakes, how are they shaped by AI, and how can we prevent some of these attacks?

Perry Carpenter: Right. Yeah, the biggest way to prevent it is around the actual doing of the thing. And I'll back up a second. One of the things that I don't advocate for is the thing that would be easiest to talk about first, which is: how can I tell if something's a deepfake? You'll see cybersecurity people get on TV and say, well, with deepfake technology, you can look at maybe the hair, or you can look and see if there are weird fingers or some kind of weird modification like that, or something that doesn't track right.

That's good, but that's only good if the scammer is lazy. If they roll another generation, or they use a slightly better piece of technology, or the technology just evolves for another month, all of that goes out the window. So for me, I don't really want to advocate for any of that, because eventually the fakes will be perfect.

The other thing is, we've been conditioned to deal with compression artifacts. We see bad video all the time, stuff that's grainy and looks blurry. And so our minds are already conditioned to wash all that out and just say, well, it's a bad internet connection, whatever. So for me, I will assume that the technology will get good enough to fool any of us.

And then I want to say: what is the way that I can deal with the fact that you and I will be fooled by this? And then it goes back to very old-school security controls: dual-control processes, a second factor that needs to be pulled in, looking at the behavior patterns, the voice patterns, the turns of phrase that somebody uses that are hard to emulate. So it's one thing for somebody to be able to make a really good copy of your face and your voice.

It's another thing for them to mimic your speech perfectly. The texture of the voice I can clone really easily; all the little things that make any of our voices unique are a little bit harder. And it's even harder to know the words that we would use, the phrases that we would use, or the common experiences that you and I might refer to with each other.

And so if we got on the phone and for a second I thought, well, maybe I should verify that I'm really speaking to the person I believe I am: you could ask about a common memory. You could ask for the secret ingredient in your famous chili recipe, or something like that. Something the scammer wouldn't know.

And then that bit becomes a way of authenticating the person. Or, assuming you can't do that, maybe you're in a culture where that kind of pushing back isn't allowed, or in a situation where things are just moving really fast. Well, then that process-based workflow comes in. It's: I believe that I need to transfer this money, but in order to do that, I have to put in this ticket, and then there's a secondary authentication that needs to happen, where they reach back out to the person who supposedly initiated it. You're putting gates in front of it.

And I think that more old-school, security-based way of doing things, adding friction, adding gates, is the thing that we're going to have to rely on. It can't just be that if you trick me well enough, I can click a button and send $25 million. That's not acceptable in today's society.

In the same way, I would say that if a user in an organization clicking on the wrong link takes down the entire organization, there are multiple failures, not just that one user who got tricked into clicking on something. There are network issues around segmentation, things you could have done with application sandboxing, endpoint protection that should have been updated a little bit more. It's not just the person who decided to click the link that's at fault at that point.
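To make that process-based gating concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration, not any real payment system's API: the idea is simply that a convincing deepfake of one person is never sufficient, because the transfer cannot execute until a ticket exists, an out-of-band callback has confirmed the request, and someone other than the initiator approves it.

```python
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    """A wire transfer that must pass several independent gates."""
    initiator: str
    amount: float
    ticket_id: str | None = None      # gate 1: formal ticket filed
    callback_confirmed: bool = False  # gate 2: out-of-band callback to the
                                      # supposed initiator on a known-good number
    approvals: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # Gate 3: dual control; the initiator cannot approve themselves.
        if approver == self.initiator:
            raise PermissionError("initiator cannot approve their own transfer")
        self.approvals.add(approver)

    def can_execute(self) -> bool:
        # No single deceived employee can complete the transfer.
        return (
            self.ticket_id is not None
            and self.callback_confirmed
            and len(self.approvals) >= 1
        )

req = TransferRequest(initiator="ceo@example.com", amount=25_000_000)
assert not req.can_execute()           # a perfect deepfake alone gets nothing
req.ticket_id = "FIN-1042"             # finance files the ticket
req.callback_confirmed = True          # callback verifies the request
req.approve("controller@example.com")  # a second person signs off
assert req.can_execute()               # only now can the money move
```

The friction is the point: each gate is cheap for a legitimate transfer and expensive for an attacker, because every gate runs through a channel the scammer does not control.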

Host: Yeah, I like how you structured that. So to summarize, there are two areas, right? One is the human or detection aspect, where you don't just look at the picture and believe it's your CEO; you also try to find out whether it's genuine, based on the voice, the tone, or your past experiences, things like that. That is one area.

The other area is around the process, where, if you get an email from the CEO, you should not just send the money; rather, you need processes in place, like MFA and other multi-factor controls, that block you from taking such an action on impulse. So both sides need to play a role in staying secure. Love that. So, we spoke about the attack side of things, right?

Today, a lot of security researchers and subject matter experts are focusing on AI's role in security operations and how it can improve them. So what do you see? How can AI augment human analysts to improve the efficiency and accuracy of security operations, like the process part that we spoke about?

Perry Carpenter: Yeah, yeah. Well, this is an evolving area right now, because generative AI specifically isn't perfect. No AI is perfect, and generative AI is less perfect than even old-school, algorithm-based, computational AI.

And so what I would say right now is that anybody looking into the market needs to be able to speak with a vendor and get an understanding of what they mean when they say AI. Are they talking about glorified decision trees? Are they talking about algorithms and recommendations based on computational methods? Or are they talking about something that is creating and building generative reports, or something like that?

And once you understand the different flavors of AI that may be integrated into a potential product, then you can start to evaluate the efficacy of each of those. And you also understand the limitations of those.

So what I would say is there's always going to be a combination of old-school and generative AI going forward. When it comes to things that use analytics-based AI, bring that into a generative framework, and then write a report based on it, what you realize is that you've probably cut several hours of work out of an analyst's job. Which is great, because that frees the analyst up; they're not doing the part of the job that they hate. But it also means that when it generates that report, you need to go through it with a fairly fine-tooth comb and spot any time it makes what could be a factual assertion or an assumption-based assertion. And you need to go verify all of that, because AI can hallucinate, and it does hallucinate, and it can get facts wrong, and it can get mixed up the same way that you and I can get mixed up.

But the biggest problem is that when we lean on the generative AI system as a crutch a little too much, right now at least, those factual errors and hallucinations can just slip through. And then somebody will pick up on it on the other side and go: Perry generated this report; it's clearly wrong on X, Y, and Z; Perry's an idiot. And then you get disciplined, or somebody thinks badly of you because of it.

Or the even worse scenario is that it flies through under the radar and then people make wrong decisions based on that. And so for me, this is a growing area. There's great promise. It will save us all time and money, but we have to understand what the current state of capability is, what the current weaknesses are. And then we dive in and we work with that understanding so that we can capitalize on the time savings, the effort savings, the mental load savings. But then put the effort into making sure that the stuff is right before it gets passed to the next stage.

Host: Yeah, so there are two points that I gathered from this. One is that you can obviously automate some of the tedious activities of your SOC analysts and security team members using AI. But at the same time, trust but verify: even though you used a tool and got some output, do not accept it blindly. Verify it and make sure it's right before you apply it or share it with others.

Perry Carpenter: Yeah. The way that I would think about it is: when AI is creating something, it is like a really, really smart intern. It's like you've hired somebody with a master's degree, or an even higher level of capability, but they don't yet know all the intricacies and the weird things about your business and the way that you do things. And so when you have a smart person making assumptions, sometimes that's more dangerous than when you have somebody that you don't trust making those assumptions.

And a smart person making assumptions can make those assumptions in a more believable way, too, because they're using terminology and inferences that can sound very, very plausible. It's like when a really knowledgeable security expert has just given a presentation, and then you get to the Q&A and somebody asks them a question that they don't really understand or know about. But they're smart enough that they can BS their way across the room with it.

And they're not going to say "I don't know"; for whatever reason, they decide not to say "I don't know," and they just kind of go off their assumptions. AI is kind of like that. And with the products that we bring to market, when something is an unknown and the AI hallucinates its way through it, we need to find ways for the products to document that and say: these things don't necessarily connect, but here are plausible assumptions based on them; go fact-check it.

So the key right now is having a human in the loop and knowing the potential deficiencies that are in the products.
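As a rough illustration of that human-in-the-loop idea, here is a hypothetical sketch in Python: a generated draft is scanned for assertion-like sentences, and the report is released only after a person has verified each flagged claim. The flagging heuristic, names, and data shapes are illustrative assumptions, not any real product's behavior.

```python
import re
from dataclasses import dataclass

@dataclass
class Claim:
    sentence: str
    verified: bool = False  # flipped only by a human reviewer

# Crude heuristic: numbers, CVE IDs, and attribution verbs suggest a
# factual assertion that a human should check against raw telemetry.
ASSERTION_MARKERS = re.compile(
    r"(\d|CVE-|\bconfirmed\b|\boriginated\b|\battributed\b|\bexfiltrated\b)",
    re.IGNORECASE,
)

def flag_claims(draft: str) -> list[Claim]:
    """Split an AI-generated draft into sentences and flag likely claims."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", draft) if s.strip()]
    return [Claim(s) for s in sentences if ASSERTION_MARKERS.search(s)]

def release(draft: str, claims: list[Claim]) -> str:
    # The report leaves the queue only when every flagged claim is verified.
    if not all(c.verified for c in claims):
        raise RuntimeError("unverified claims remain; route back to the analyst")
    return draft

draft = "Traffic originated from 3 internal hosts. Further review is ongoing."
claims = flag_claims(draft)   # only the first sentence gets flagged
for c in claims:
    c.verified = True         # analyst checks it against the actual logs
print(release(draft, claims))
```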

Host: Makes sense. So we spoke about the challenging side of AI, and we spoke about how it can be used to get better in your day-to-day, things like that. Can we use AI for prediction and prevention from a cybersecurity or cyber-attack perspective? What are your thoughts on that?

Perry Carpenter: Yeah. Yeah, I definitely think you can. When it's doing the prediction and spotting anomalous patterns, that is the type of AI that has been around for decades and has continued to improve. So really over the past five to eight years, there's been more and more advancements in those spaces, especially in the security field.

There's been a ton of great work for decades on that in things like social media algorithms and content curation, recommendations on services like Netflix, and all that. But people have really brought those kinds of insight-creating algorithms into the security field, and we're seeing that expressed as products in the SOC and other large-scale analysis.

And then what you're going to see is this slathering of generative AI on top of all that for the initial report creation. And so you're going to see a lot of really old school technology that's doing the brunt of the work and then kind of a presentation layer that's being curated by generative AI.

Host: Yeah, that's a good summary of it. One question that comes to my mind: we spoke about humans and AI, and you highlighted that humans should always be in the loop, right?

From a security or SOC analyst's perspective, or a security team member's perspective, what new skills or competencies do you think security professionals should have so that they can thrive in this new AI-driven world?

Perry Carpenter: That's a really good question. And I think the skill that I would say, and you hear people that work with AI say this all the time, is AI is not going to replace our jobs. What will replace our jobs is somebody who's embraced AI and uses that to work at a capacity level that's higher than somebody that shuns AI.

So learning what AI is, how it works, and what the strengths and weaknesses of different flavors of AI are, is going to be a non-negotiable. And you're going to need to understand it in at least two areas.

One is generally what is AI capable of. So if you go natively to ChatGPT, or Claude, or something else, how do those products work? You have to understand that. So you understand the ins and outs of the core technology that's being layered into or woven into everything else.

And then you're going to need to understand how those bits of AI functionality make their way into the products or the types of work that we do within each of our career silos and how they get integrated into the product sets that we have and what unique opportunities that brings and what unique threats that brings.

And when I say threat in this context, I mean, what can it make me assume wrong? What are the blind spots that the AI has? And where do I bring my human intelligence into that so that I can get a best-of-all-worlds approach of this, where the AI is helping me be better than I am, and I am helping the AI be better than it would be naturally.

Host: Okay, makes sense. That makes a lot of sense. Now, we have been speaking about humans and AI to understand this human-centric nature a little better. We started with people falling for scams and criminals exploiting them. Now, with all of this knowledge, a lot of us are aware of what types of scams are happening and how cybercriminals exploit us. But even then, we are still falling for these scams. Where do people go wrong? And how are cybercriminals still able to exploit humans?

Perry Carpenter: The easiest thing that a cybercriminal can do is prey on the fact that we're emotion-driven machines. If I can weave fear or authority or curiosity or urgency or something like that into a message, then I've already started to win, because we as humans react to things. We don't necessarily process things, slow down, reason through them, and then come out the way that we would hope we do. We tend to be very stimulus-response, stimulus-response.

And then only when we start to feel something a little bit in our gut and go, wait, maybe I should think about that, do we slow down. That's when you can start to untangle where the scams are. But the scammers are really capitalizing on the fact that we are stimulus-response machines.

And then add to that, where generative AI comes in even more, is not only can you have the emotional component, but it's way easier now for scammers to wrap that emotion in a plausible story that has the right image, that has the right video, that has the right voice, that has the right language and everything else.

And when you're wrapping emotion, the stimulus, into a story or a narrative that works with a worldview somebody has, or with an expectation they already had, well, then it becomes even more powerful. So generative AI bringing all those things together is the new opportunity for scammers.

Host: So, you earlier mentioned humans should be in the loop. And one of the things you're now highlighting is that scammers often play on our nature, our emotions, which gives them new attack vectors or new ways to scam. And we often hear security leaders say that the weakest link in security is the human, right?

So there are multiple aspects to it. Humans get exploited, and there are also insider threats sometimes. Looking at both of these, how can organizations respond to these emotion-driven attacks or insider threats to avoid data leaks or IP leaks, things like that?

Perry Carpenter: Yeah. So the first thing that I would do, and this is above even like the base level strategy on how to react to those things, is I would adjust my thinking.

So an organization that's saying a human is the weakest link is already wrong. Because if a scammer has made it through your secure email gateway and has landed in front of a human, well, then the email gateway was already weak. So is the human clicking on that weaker than the email gateway that failed?

And if the human clicking on something takes down the organization, then were they really the weakest thing or were they just a link in the chain and every other link that could have prevented something broke as well?

So I don't say humans are the weakest link. I say humans are a critical link, a critical layer within the security stack. But if you're having IP leaks because a human clicked on something wrong, then everything else in that security chain failed as well. And if we're going to look at the end user and blame them for that, we need to be way more introspective and say: wait, we're paying how much for this email gateway? How much for this application sandbox? How much for this endpoint protection?

And they all failed as well. So why would I look at Bob and blame Bob? I can't do that. So that would be the first thing. And then I would start to say: if a human clicking on something is causing all this havoc, what additional layers should I put in so that it's not the human's "fault," and I'll use that in quotes? Because a human is just going to be a human.

And they're working with the design paradigm that was given to them. Every link in an email was explicitly made to be clicked on; they're just following the design pattern. Every USB drive that was given to them was made to be plugged in; that's the design pattern. They're just following the design pattern and the paradigm that was given to them. The problem is that the ecosystem they've been put in is inherently insecure. That's not their fault. That's the ecosystem's problem.

But at the same time, because the ecosystem is fundamentally broken, we have to strengthen the human while we're trying to strengthen everything else. And the way that we strengthen the human is through a combination of efforts: things like traditional training and awareness; things like phishing simulations that try to strengthen muscle memory; and things like the multiple systems of control, or gates, that we talked about, where if a human does something like initiating a money transfer, there's a process that double-checks to make sure that's supposed to happen.

It's really a collective way of doing it. And then I would say in today's society, there needs to be an added layer of media literacy or digital literacy of understanding how scams take place, how they weaponize our emotions, and how narrative gets weaponized.

And one of the things that I do, which you could do in the traditional security world with phishing scams or in the broader world of disinformation and misinformation, is to put any human that I can in the position of the scammer and say: all right, now it's your turn, write a phishing email. Or: now it's your turn, write a piece of disinformation. What's the clickbait headline? What's the perfect image? What is the thing that you want somebody to do or believe because of it?

And as soon as you do that with somebody a few times, they view the world entirely differently and they scroll their social media feed a little bit differently or they look at their email differently. And so I want to do those things because that starts to break somebody out of stimulus response.

Host: Mm-hmm. So, role-playing, where you're not only acting as the victim, and I'm using "victim" in quotes, but also acting as the attacker, so you can understand both mindsets. And that helps you be better aware of what could possibly go wrong, or what type of messaging you might receive from a potential scammer, right?

Perry Carpenter: Yeah, yeah, exactly. If somebody is going to scam me, how would I create the perfect scam that would trick me? Like in the book that I wrote, FAIK: the last three chapters are a whole bunch of exercises like that. And if you're working in a family group or a book club: create a piece of disinformation that you think the person across the room would fall for. Then don't release it on the internet, but show it to them and say: if you saw this in your Facebook feed, what would you think? What would you do? What would your initial reaction be?

And I think we should go through some of those exercises. Number one, they're a little bit fun, and they do make you view everything a little differently afterward, but not in a way where you're scared; you feel a little more empowered. And I think showing people the tactics of scammers is important, but we have to do it in ways where they don't feel scared or helpless. We have to do it in ways where they feel empowered.

Host: Yeah, that's spot on. And I love how you connected this back to those two different approaches, right? The detection-based approach, where you teach or train the human so that they're better able to identify phishing attacks or scams, and at the same time improving the process side, so that your employees or team members face fewer such scenarios in the first place. So, keeping a balance across both areas.

So, you spoke about training and improving employee awareness. One of the questions we got from Dustin Lehr is around that. The question is: how can we move people from simply being trained and aware of security best practices into taking proactive action to follow them?

Perry Carpenter: And I assume by proactive actions, they mean things like going and reporting suspected incidents, or better educating those around them. That gets to be a company culture initiative. One of the books that I wrote a couple of years ago was all about that aspect of things. And so when you look at security, the things that we're actually trying to affect are, number one, somebody's mindset. But more than that, it's somebody's behavior patterns. Because I can have all the right mindsets and pass all the right tests around security, but if, in the moment, I am just that stimulus-response machine, then my mindset may not be enough to save me. But I can influence a habit, because, you know, everybody has habits that they're not thinking about all the time.

And you can build a habit pattern in one of at least two ways. One is to drill the mindset and have somebody intentionally create the habit through repetition, through exposure. Or you can build a habit through simple exposure, conditioning, and social pressure, to where, just because it's been modeled by everybody around them, people naturally do the secure thing. And so from a security awareness and security program management perspective, we have to do both. We have to work top-down through the knowledge side; there are lots of reasons why, and one of them is just compliance checkboxes.

But the other, more important thing is to work the culture side, the social pressure side, to where people naturally do the right thing. And when you're working the culture side, you can also make an effort to have people actually care about it, because it's a more community-based reason for doing things. It's not just "security tells us this"; it's "this is the way things are done here." People might or might not understand all the reasons why, but they do understand that it's important.

And once they understand that it's important, and that it's not just for the security team but for everybody, then they might take that extra action: to let somebody know something, to look out for somebody, to report an incident, to talk about a disconnect between what the policy intends and the behavioral reality. All those things start to come in.

Host: Makes sense. One question that comes to my mind with this: I see the value in doing the training, making people aware, improving the tooling, and things like that. But all of this comes at a cost, right? So how can businesses and organizations measure and report on some of these improvements? What KPIs do you generally recommend?

Perry Carpenter: So I'll tell you, I'll start with the ones that I don't recommend, and then we can work into some of the things that could be valuable to measure. Number one, people will measure things that are easy to measure. And so when it comes to security awareness, they typically measure things like how many people have completed the security module I sent out, or how many people came to the security training I ran.

I mean, those are fine, but they don't measure whether people actually care about what you're doing or will do better. So they are a metric, but they're not a helpful metric in the long run. I don't really care about the number of eyeballs on screens or butts in seats. I don't think that tells me anything about my security program. More than anything, what it tells me is: can I force people to open something, or can I force people to attend something?

What I care about is, does an organization understand the behaviors that will lead to a security outcome, either a positive outcome or a negative outcome. If they understand the behaviors that will lead to those things, then they should build KPIs around that. Figure out what the baseline is, and then figure out what program they want to put around influencing the behavior, and then measure the before, the intermediate, and the after.

And so you can do that with things like phishing. That's why the entire industry coalesced for so long around sending out phishing tests and getting click rates for people who would open and click on things. That's easy to measure, and it's behavior-based. But we can take that same mindset. So we can collect that.

That's a good KPI. We can take that mindset and say: if people are looking at potential phishing emails, the thing we want them to do is not click, and to replace that behavior with reporting it. And so you end up with all these phish-reporting buttons, which is great, because now you're sending stuff to the SOC and letting threat investigators look at things. So that's a positive behavior.

We can take those mindsets and shift them into other behaviors that we might want to measure. So say everybody's back at work in the office, and one of the things you've realized is that people need to use the shredding bins more, because paper can disappear, and it's got company secrets or private information on it, and people aren't doing that. Well, I should put a behavior-based metric around whether people are doing that. That's a little bit harder than measuring click rates.

But what I can do is I've got a shredding vendor. And I understand that they will probably bill me by weight. And if they're not billing by weight, then I can weigh the bins before they're sent out to the shredders. I can understand the volume of shredding that's being done. And so I use that as the baseline. I send out my initial campaign, whatever that is. And then I measure that after a month. Has the volume increased? If it has, then I've had a positive effect on that.

If it's the same, well, then I need to change tactics. We can do the same kind of measures for anything, for tailgating or whatever. It just takes being creative and building the right KPI around what is the thing that will show whether that behavior is happening or not.
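As a small sketch of that baseline-campaign-measure loop, the snippet below compares a before and an after number for a few behavior-based KPIs. The metric names and figures are made up for illustration; the shape of the measurement is the point.

```python
from dataclasses import dataclass

@dataclass
class BehaviorKPI:
    name: str
    baseline: float        # measured before the campaign
    after: float           # measured after a month
    higher_is_better: bool

    def improved(self) -> bool:
        if self.after == self.baseline:
            return False   # no movement: time to change tactics
        return (self.after > self.baseline) == self.higher_is_better

kpis = [
    BehaviorKPI("phishing simulation click rate", 0.18, 0.09, higher_is_better=False),
    BehaviorKPI("phishing report rate",           0.22, 0.41, higher_is_better=True),
    BehaviorKPI("shred bin volume (kg/month)",    120.0, 175.0, higher_is_better=True),
]

for k in kpis:
    change = 100.0 * (k.after - k.baseline) / k.baseline
    verdict = "positive effect" if k.improved() else "change tactics"
    print(f"{k.name}: {change:+.0f}% -> {verdict}")
```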

Host: Yeah, I love the shredder example: comparing the weights before and after and seeing the impact. So, to summarize: measure for impact, be outcome-driven rather than participation-driven. I think earlier this week I was doing a security training where I had to keep a window open playing a security video.

I didn't have to look at it; I just had to have the window open. That's all it took to check off my security training. So I see the value of the point you're making.

Yeah, so the last question I have is, again, around AI. AI uses a lot of data for training, and then it does prediction, or it helps with our writing, our coding, or various other tasks.

How do you see data confidentiality being protected? Because as an employee, I can just take all the rows from my table and send them to ChatGPT: write me a summary of this. With that, my objective is fulfilled, but at the same time I've sent all of my organization's confidential data to ChatGPT. So how do we keep a balance? How do we ensure that data confidentiality isn't compromised?

Perry Carpenter: Yeah, I think this is a problem that we talk about a lot that has been thought through in other areas over the past 20 or so years because we've been dealing with SaaS for a while. We've been dealing with multi-tenant architectures for cloud-based storage and things like that where we've not had control of our own data in lots of ways.

But we've been dealing with that through contracts. And so the way that we will deal with that with AI will be very similar. And so there's always two ways of doing things. One is you have something local or on-prem that's fully under your control, or you have something outside of your organization that's under somebody else's control.

When it's fully under your control, you're not having to think about contracts or anything like that. You understand the data sovereignty; you understand all the pathways in and out of it. But when it's outside of your control, you're taking things on faith, and you're managing it through risk assessments and contracts. And ChatGPT or Claude or anything else is the same.

So when our organizations don't give us official ways to deal with the things that solve real problems for us, we have to know that there's shadow IT out there: people using third-party tools to solve real problems. AI is the same.

If we don't have an official way for our organization to use AI, then we will have shadow AI. We'll probably have some of that no matter what. But the easiest way to get the behavior that you want and to get the outcome you want is to make that the easiest way to do something.

And so you do need to have official accounts, or paths into the AI systems of your choice, ones that you've vetted. You need to have contracts with them. You need to understand the data protections you want from those third parties. And then you need to educate your employees: when you do want to throw all of that into ChatGPT, here's how you do it.

Here's the account that you use, or the portal that you go to, so that you get the outcome you want as the employee, but the organization gets the data protection it needs to fulfill all the regulatory and other best-practice requirements it has.

So that's how I would do it. And with most of those types of systems, OpenAI or Anthropic or Google, as soon as you move to that paid type of subscription, that's the word I'm looking for, you have those data protections built in, and you can also bring your legal team in to look at all the terms and conditions to make sure you're protected.

So that's what I would recommend. And then, more than anything, you're trying to make the corporate way of doing it the easier and more productive way than the private way that you or I might use ChatGPT as a side channel to get something done.
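One way to read that in engineering terms: make the sanctioned path the path of least resistance. Below is a deliberately tiny, hypothetical sketch of an egress rule that nudges known public AI endpoints toward a vetted internal gateway rather than hard-blocking them. The hostnames and policy are assumptions for illustration, not recommendations for any particular product.

```python
# Vetted endpoint, covered by contract and data-protection terms (hypothetical).
APPROVED_AI_HOST = "ai-gateway.internal.example.com"

# Public consumer endpoints an employee might reach for (illustrative list).
KNOWN_PUBLIC_AI_HOSTS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def route_ai_request(host: str) -> str:
    """Decide what an egress proxy does with a request to `host`."""
    if host == APPROVED_AI_HOST:
        return "allow"  # official account, terms reviewed by legal
    if host in KNOWN_PUBLIC_AI_HOSTS:
        # Nudge, don't just block: send the user to the sanctioned portal
        # so the easy way and the safe way are the same way.
        return f"redirect:{APPROVED_AI_HOST}"
    return "allow"      # not AI traffic; out of scope for this rule

print(route_ai_request("claude.ai"))  # -> redirect:ai-gateway.internal.example.com
```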

Host: Yeah, I absolutely love your response on this, because we have had, and still have, shadow IT challenges in security, and now we have shadow AI. The solution you gave is spot on: the organization provides a path so that employees can adopt AI, a recommended way of using ChatGPT, rather than not allowing them to use it at all, which would eventually force employees to find their own workarounds. So yeah, that's a great recommendation on the data protection that can be built in while working with any third-party GenAI-based service.

Yeah, so that brings us to the end of the episode. That's a great way to end the episode.

But before I let you go, I have one last question. Do you have any reading recommendations? It can be a blog or a book or a podcast, anything.

Perry Carpenter: So, for my stuff: if you're interested in the security awareness side, I have two books there. One is called Transformational Security Awareness; the other is The Security Culture Playbook. If you're interested in AI and deception, AI scams, or even the history of scams, then my new book FAIK, spelled F-A-I-K, is out there, and that's great for that. It's also written for a mass-market audience, not just security people.

I also have a new podcast called The FAIK Files that looks at all of those types of things. Outside of that, I really love behavior science, and so the work of BJ Fogg, the creator of the Fogg Behavior Model that you hear a lot of security people talk about: B = MAP, behavior equals motivation plus ability plus a prompt to do the behavior.

His work is really great. He has a book called Tiny Habits that distills a lot of his behavior-based work into something that's really easy to understand. On the podcast side, I love the Hacking Humans podcast by the CyberWire. And of course, Darknet Diaries is a staple favorite for everybody. I could go down the road of several other podcasts, but I think I'll leave it at that for now.

Host: Yeah, Darknet Diaries, I love that as well, the stories they share about scams and how people worked through them, things like that. That's a great recommendation. So with that, thank you so much for joining and sharing your insights on AI, human emotions, the impact of AI on humans, and how humans can work with AI. We covered a lot of topics, so thank you so much for joining and sharing your insights.

Perry Carpenter: Thanks so much for the invite.

Host: Absolutely. Thank you to our audience for watching. See you in the next episode.