Rethinking the Framework: Addressing Inherent Cybersecurity Risks with Gretchen Ruck

TL;DR

  • GenAI tools are good at generic information, but they often lack contextual awareness. Build contextual awareness into your GenAI tooling to get the maximum benefit of this technology trend.
  • Data privacy and Privacy Enhancing Technology (PET) play a critical role in an organisation’s GenAI strategy. Sufficient guardrails should be put in place to keep privacy top of mind.
  • Inherent risks are difficult to understand, and some cybersecurity frameworks gloss over them. But they are equally important to the success of cybersecurity programs.

Host: Hi, everyone. This is Purusottam, and thanks for tuning in to the ScaletoZero podcast. Today's episode is with Gretchen D. Ruck.

As a trusted advisor to established boards, senior executives, and investors, she focuses on helping her clients examine and address the impacts of cyber threats, data privacy regulations, and technology risks on their overall strategy and security program performance.

She has held key roles at leading advisory, research, and financial services organizations and is a frequent writer and speaker on topics ranging from emerging technology trends to cybercrime to inclusiveness and advocacy.

Gretchen, thank you so much for taking the time and joining us for the podcast.

Before we start, do you want to briefly share about your journey?

Gretchen: Sure. Thank you so much for having me on today. I really appreciate it. It's great to speak with you. Yeah.

So I started out with one of the big, at the time, eight advisory and consulting firms, which is dating myself, back when there were eight of them; there are now four. Basically, through the nineties I was working more generally in IT. Around the year 2000, I started with a global financial organization, and the year that I started, that group was being audited. While we were being audited, I was watching the auditors and thinking to myself, wow, I'd really like to do what they're doing.

So I wanted to join that group, which really shifted my focus to risk, and to cybersecurity in particular. And from there, I've worked a lot in financial services and in consulting companies in general, doing a lot of work around everything dealing with risk related to technology and data: cybersecurity, privacy, some work in fraud.

And over the last 10-15 years, as my career has progressed, I've worked more and more, I think, with the top end of organizations, the executive suites and such, where it's harder to communicate what it is that matters, why it matters, why cybersecurity is relevant and something they should care about.

So that's been a big part of my experiences and journey for the last 10-15 years as I've branched out and done more. And yeah, in the last couple of years I find myself doing more things like this, which is really great: being able to share thoughts with others through podcasts, writing, and speaking. So this is really cool. Thank you.

Host: Yeah, thank you. And I see that you touched on a few areas which we want to go into in today's podcast.

But before we start getting into security questions, we generally ask this question to all of our guests and we get unique answers.

So what does your day look like?

And what I mean by that is, apart from work, how do you start your day? Do you have some rituals? How do you wind down? Because that's also a key part of life nowadays.

Gretchen: Yeah, thank you. You know, for a lot of people, of course, COVID has had a big impact on what their day looks like, right? Some people have had to return to the office, which is unfortunate, but everybody else has gotten to recoup that time, you know, the time spent yelling from inside their car at traffic they can't control, or whatever the case is. So we're able to really use the time more for ourselves or family.

So, you know, for me and in my household, we start the day doing things with the kids, because that's our family at the moment. So we do children things, take care of getting the kids out the door. And then for me, it's usually reading through my different news feeds and honestly just trying to figure out: has anything changed since yesterday in the world that we're facing?

So usually that's a big part of the day for me. I usually try to do things like work out, but it's usually more in the middle of the day or at the end of the day.

But for me, those are part of digesting what I've done for the day and trying to keep myself alert. Doing the best work I can means trying to do something every day that just gets me out of the chair, because I guess the negative of remote work is that it's really easy to go, oh wow, look, I've gotten my 10 steps for the day instead of 10,000 or 2,000. So making sure that's part of my day has always been a big component of this remote-work lifestyle.

Host: Makes sense. And I think one thing that I like from your answer is that you start with an information diet, as in some of the newsletters or publications that you follow. You read up to get up to speed on what has happened in the world of cybersecurity while you were asleep, in a way.

So one of the things that I saw in your interests: you must have seen many technology trends throughout your career.

Today, we are looking at a new technology, which is GenAI, right? These new technologies often bring tools and ideas for cybersecurity, and also challenges.

With GenAI, more and more customers are adopting it, and it is quickly becoming a topic of debate and concern as well.

So my first question would be: how do you envision GenAI playing a role in overall cybersecurity as its adoption rises?

Gretchen: I mean, that's a great question. And I like that as you brought up the topic, Puru, you said AI, then you kind of corrected to generative AI, because AI and machine learning have been part of what we've been doing in our field for quite some time.

But AI in our field has generally been like, hey, we want to make sure we do a better job of protecting our network and stopping intruders from getting in, and so we might use AI for that. But that's not the general-purpose AI that we're seeing today. Those things are great; they do what they're supposed to do, and they work in a very closed environment.

But this is very different. And this is kind of cool, too, because I feel like when we all started adopting really cool smartphones, businesses had to adapt and really go, hey, consumers want this; I guess we have to work this into how we do our work too. So smartphones were driven more by consumers than by businesses in their growth.

And it seems like the excitement around GenAI has to some degree also been driven more by consumers than by folks in the profession. I mean, people are professionally excited about it, but I think most folks are reading about it from more consumer sources before they get into it themselves, just to see, well, what would it look like if I combined this with that, whatever it is that they're doing.

So I know that my first time playing with it was just goofing around with my kids, you know. And I think for a lot of people that's how it started, which is cool too.

So it's funny: for cybersecurity, AI is something we talk about a lot. And it seems like we oscillate between it being something huge that could have the potential to transform a lot of what we do in our field, and it transforming all the things we think of as risks into bigger risks, maybe even new risks we didn't anticipate, you know, black-swan-type risks. So everybody's caught up in that, and there's so much sensationalism around it.

You know, we're still at this period where it's cool and it's exciting, but now what? It takes a lot more energy to run it. Is it worth it? What are we actually getting out of it? As a tool for good, like I said, these things, AI overall and machine learning, have been part of what we do. They're part of our tool set, and that's not going to change. With generative AI in cybersecurity, we could talk about the risks, but first let's talk about how it's being used by our colleagues.

On the generative, more general piece, I've seen people using it to write policies and to draft other parts of their program documentation. And I just kind of cringe when I see that, because you can give a prompt to generative AI, and you can take that prompt and give more detail or constrain it, but it's still using very general information. And if I were still an auditor and I was looking at one of these policies, I'd be able to tell that it was something pretty generic.

When you're auditing these things, you're always like, well, how does this really work for you? Yes, this is a good model, but I don't know if this is really right for you. Can you show me how it's really right for you, for your organization? So that's how generative AI is being used by our colleagues. And it's being used by attackers in more ways than we probably know about, and certainly in lots of creative ways right now.

Host: Yeah, so I think it lacks that context, right? You have general information, but it lacks awareness of the context.

Gretchen: Yeah, absolutely. You know, I think that for now and for the foreseeable future, AI is not going to create any new risks for cybersecurity. What it's going to do is magnify the existing vulnerabilities and all the different challenges and attack types we see now. So it's going to make them come faster, look more savvy, be frankly stronger attacks, but it's not going to be a new, unthought-of vector for attack. That could be generations away, if at all; it's really not thinking of new things.

It's just making it much easier to create a convincing dialogue to phish somebody. It's doing a great job at helping attacks be undetectable. I could give lots of examples, but I'm sure you get the idea. There are a lot of ways that a creative and lazy attacker can use this stuff to really do a number on folks more efficiently than they've ever done before.

Host: Yeah, that's a good point. The same tool set that we have access to for improving our cybersecurity practices, attackers also have access to, right? And maybe even better. So there are both sides of GenAI. You touched on some of these challenges.

How do you see this becoming a pain for cybersecurity practitioners? Or do you not see that happening?

Gretchen: It's going to happen, right? Of course it is. It already is, right? But instead of saying, is it going to become a challenge for cybersecurity practitioners, I'd say, how is it going to be a challenge for organizations overall? Because cybersecurity doesn't operate as a silo. It doesn't operate by itself. It takes the entire organization, just like the impacts affect everybody in an organization, right?

So, as a general statement, cyber practitioners are going to be collaborating with leadership, with business departments, with people that are not technical but are very influential, to understand: how can our business use this without negative effects? How can our business avoid things that might make us look bad in the press? And, you know, cybersecurity working with users and customers to make sure everybody understands, well, what is the risk? Before, we've always talked about cybersecurity training and risk awareness.

But this takes it from here to here, right? Because the need for those things is just so much greater when the attacks become so much more savvy. We need to work together to escalate things that look a little funny, like, wow, my computer's getting a little weird. Maybe before you'd go, meh, not my problem. But maybe now people will escalate these issues. And maybe people will think of themselves more as somebody who's going to be helping and coordinating how we respond to these events to see what's really going on, because it's just going to be that much trickier.

Host: Right, right, totally. And to follow up on that: one of the challenges that we face with GenAI is that it's sort of a black box today. We do not have enough knowledge of how it works, what data it has been trained on, what type of data it has access to, and things like that, which begs the question of transparency, right?

So can you shed some light on the concept of explainability in AI and why it is important when using AI for cybersecurity tasks?

Gretchen: Yeah, yeah. I mean, I feel like this subject gets plenty of press time because when AI, GenAI, fails, it fails spectacularly, right? There are stories about an attorney who used cases that never actually happened as reasoning in front of a judge. These things are just problematic, for sure. AI was doing its best to help, I'm sure, but fact checking would have been really helpful there. So, you know, again, in business it really counts to understand the why component of this stuff, right?

So, you know, when one of my kids says, I'm going to go jump off the roof of the house, and my answer is no, don't do that, and they go, why? If I'm in a hurry, I say, well, because I said so, don't do that. And the kid may still do it. It doesn't empower them to think through it, it doesn't give them anything to work with, it doesn't help them apply this situation to any other situation. Right?

So all of those are very analogous to the business, right? If this is the answer, we just take it as the answer coming down from some mighty source. We don't know if it's the right answer. We don't know how to apply this answer to make ourselves a better organization. Using these canned answers from a black box damages trust; you don't know whether these things are really reliable or not. It just stymies how we get folks to think rationally.

And the thing is, think about Google and Google search. We type in, what is the cutest dog breed? Or we type in, what's the name of that new attack that did X, Y, and Z? And there's an order to the results. And Google has a black box that determines that order. So they have promoted the order that they think is appropriate.

And it might not be the right order for us, and frankly it might not even be the correct answer. Maybe there's a new answer that isn't that popular yet, so it doesn't pop up at the top of the search. We just don't know, right? So, I mean, these things become problematic.

Or think about filter bubbles. I don't do Facebook, but a lot of people get their news from it, and Facebook acts more like an echo chamber. They pick up on your past reading habits, your interactions, where you stand on the political and economic spectrums, and they feed you things they think you want. So you're not making a choice about what news you follow up on or read more about; they're making that choice for you, which creates a lot of divisions. These things are problematic.

And this gets more into explainability. Some people don't want a lot of detail, but I think everybody wants reliability and wants to know what's inside their hot dog. People use hot dogs as an analogy: it tastes good, I don't care what's in it. Well, you probably do care what's in it, and if you say you don't, maybe you should. So I think it's very, very important.

Host: Mm-hmm.

Yeah, and I love your example where you highlighted that somebody cited a case which doesn't even exist. And I think some of the organizations who provide GenAI capabilities say that it's the creativity part of the model as well, right? Where it can think outside of the box and give you some answers. But at the same time, you have to do some fact checking before you can even use some of the information that you get from these models.

One of the things that many programmers do when they ask a question and get a response is write unit tests to figure out whether the solution or the program provided by the LLM is even valid, because sometimes those are straight-up invalid, right? So yeah, I agree with you that there should be some fact checking, even when you get information from these models.
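
To make that practice concrete, here is a minimal sketch in Python. The `parse_semver` function stands in for code an LLM might produce (the function and scenario are hypothetical); the unit tests are what pin down whether the generated solution is actually valid before anyone trusts it.

```python
import unittest

# Hypothetical example: "parse_semver" stands in for code an LLM produced
# when asked to parse "1.2.3"-style version strings. Treat it as untrusted.

def parse_semver(version: str) -> tuple[int, int, int]:
    major, minor, patch = version.strip().split(".")
    return int(major), int(minor), int(patch)

class TestParseSemver(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(parse_semver("1.2.3"), (1, 2, 3))

    def test_handles_whitespace(self):
        self.assertEqual(parse_semver(" 10.0.1 "), (10, 0, 1))

    def test_rejects_garbage(self):
        # A straight-up invalid answer from the model fails here, not in production.
        with self.assertRaises(ValueError):
            parse_semver("not-a-version")

if __name__ == "__main__":
    unittest.main()
```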

To double-click on it, another challenge that comes with this is user privacy. So how can AI-powered security solutions be designed to respect user privacy while effectively protecting against cyber threats?

Gretchen: You know, that's a great question, right? And I think there are a lot of layers to that question, because anything that there's this much excitement about just grows tremendously. And so, while there are lots of guidelines, there are no standards and no really solid laws that say don't do this, don't do that. I mean, there are directives, but how they're going to be fulfilled is a different matter.

So you can't force somebody to do it the right way or the wrong way, whatever that even means yet. So these things are an issue. When people are creating these generative AI models, they are generally pulling in anything they can consume. If it's out there, they're going to om nom it up. And so what they're trying to do is put some sort of curtailment on what the output looks like. If you think about cybersecurity, we can control the input to a model, we can control how a model processes, and we can control the output.

So they're really using output controllers to constrain it. Even though they're pulling in all the information there is, I can say to a model, what do you think about Gretchen Ruck? And I've done that, because, you know, I was curious. And it says, there's not enough information about that person to come to a conclusion. The reality is there's plenty out there if you really want to look.

So there's a constraint in place telling it not to do that. There are ways people trick these things, and that's fine.
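
To make the idea of an output controller concrete, here is a minimal sketch, assuming a simple pattern-based redaction layer sitting between the model and the user. Real guardrails are far more sophisticated; the patterns, block list, and refusal message below are purely illustrative.

```python
import re

# Illustrative output controller: the model's raw answer passes through
# these filters before anything is shown to the user.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")          # US-SSN-like strings
EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
BLOCKED_SUBJECTS = {"gretchen ruck"}  # hypothetical "don't profile people" list

def filter_output(prompt: str, model_answer: str) -> str:
    # Subject-level refusal: constrain what the model may opine on.
    if any(subject in prompt.lower() for subject in BLOCKED_SUBJECTS):
        return ("There's not enough information about that person "
                "to come to a conclusion.")
    # Pattern-level redaction: scrub anything that looks like personal data.
    scrubbed = SSN_PATTERN.sub("[REDACTED]", model_answer)
    scrubbed = EMAIL_PATTERN.sub("[REDACTED]", scrubbed)
    return scrubbed

print(filter_output("summarize this record",
                    "Contact her at jane@example.com, SSN 123-45-6789."))
# -> Contact her at [REDACTED], SSN [REDACTED].
```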

But the thing is, this is not really protecting privacy at the end of the day. It's one component of what's really needed. What it comes down to is that there need to be rules and regulations about what these models can consume: what's intellectual property, what's considered too private to put into the model.

These things are important. You know, there are principles in security about need-to-know. Is this something that's vital? Or is this something collected because you might want it later? Clients make that mistake. Don't keep it because, well, that could be interesting; keep the data set that you need for your purposes. And I know GenAI is supposed to be so big and general that, well, we might need everything tomorrow, today. But that's not always appropriate. If ethics say, hey, let's not talk about everybody's personal issues, then maybe don't collect that, because those ethics are not going to change so much!

So again, and we could talk more about it, but there are a lot of aspects of PET, privacy enhancing technology, that can go into this. Privacy enhancing technology is kind of a new, emerging field. I think you've heard of it, but what it includes and how you categorize it is still up for debate. I spoke about it two years ago at Stanford; I did a talk about types of privacy enhancing technology, what's effective now, what's on the cusp of being effective, what's theoretical. There are a lot of ways to do this stuff better.

You know, like big data 10 years ago: everybody was like, oh, cloud, big data, digital transformation, and people were just eating up data. So this is not the first time we've said, well, that's too much data. These are things we need to really think a lot about when we're talking about GenAI.

Host: Yeah, agreed, totally. And thank you for bringing up privacy enhancing technology. What we'll do is, when we publish the episode, we'll tag that as well so that folks can go and listen to your Stanford talk if it is publicly available.

Gretchen: Yeah, yeah, it's on YouTube, absolutely.

Host: Love that. So yeah, we'll tag that.

So one of the things that you mentioned earlier is that generative AI is consumer-facing, right? It was not designed only for enterprises, where regular consumers wouldn't interact with it. It exploded because regular consumers started using it a lot, right? And that is something we cannot miss, right? As humankind, we often try to play with new technologies, and that comes with privacy challenges and that whole set of challenges. And there is a new term coming up when focusing on privacy, which is differential privacy.

So for our audience, can you highlight what differential privacy is and what role it plays in the case of generative AI?

Gretchen: Sure, absolutely. Just to give a broad view and work my way into something more specific: again, privacy enhancing technology is a concept for what technologies are out there that can do more to protect privacy. There are a lot of ways you can do that. For me, I break it down into four general categories.

The first is technologies used to protect data: things like encryption, things like decentralizing data, keeping certain data separate. Then you have how you manage data as part of privacy enhancing technology.

This is putting in rules for governing data: having data governance in place, having a data classification schema in place, things like that.

Then you have areas around data control, controlling data, which is filtering: what things can we allow through? Again, with generative AI, which results is it okay for us to put out? That's a filtering function; you also have data discovery and things like that as part of it.

And the fourth part is about altering data. This is: what can we do to make the data a little less problematic by changing it? You can anonymize data. You can also do what's called pseudo-anonymizing data, which is like masking it. So if you have a bunch of fields in records, one of the fields is my name, the other fields are other features, and you don't need my name, it just happens to be in there. You can replace my name field with a token of it. It doesn't really reveal the name, but it doesn't get rid of the field either; that's how you pseudo-anonymize.
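
As a minimal sketch of that masking step, assuming a keyed hash as the tokenizer (the field names and secret below are made up): the name field is replaced by a stable token, so records can still be linked on it, but the name itself no longer appears.

```python
import hashlib
import hmac

# Keep this key separate from the data set: anyone holding it could
# re-tokenize known names and re-identify people.
SECRET_KEY = b"example-only-rotate-me"  # hypothetical key

def pseudonymize(value: str) -> str:
    """Replace an identifying field with a stable, keyed token."""
    digest = hmac.new(SECRET_KEY, value.lower().encode(), hashlib.sha256)
    return "tok_" + digest.hexdigest()[:12]

record = {"name": "Gretchen Ruck", "role": "Advisor", "leave_days": 4}
record["name"] = pseudonymize(record["name"])
print(record)
# {'name': 'tok_<12 hex chars>', 'role': 'Advisor', 'leave_days': 4}
```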

But then you can also simulate data: make data that looks like a proper data set, but there's something artificial about it, so to speak. One option is synthetic data. Developers who are making new versions of software for their company want to test it to make sure it really works right, didn't break anything, new features work. So they look for a data set to test it on. They always want to use the real data that their company has, but that usually can be problematic. Instead, can you create fake data, a synthetic data set that feels real but isn't? It's tough to do.

You can also do something called generative adversarial networks, which is fine. But the technique you just brought up, which is part of altering data through simulation, is differential privacy. What it is in this case: you take a data set. It could be everybody's HR file, so a record for every person is in there. And there are some things somebody maybe wants to look at with AI: they want to see who's needed a certain kind of medical care, or who's needed to take time off for medical reasons, something like that. And they want to protect the data. So what they do is inject noise into the data.

They've got a thousand people, or 500,000 people, depending on the size of your company. And so they may alter some of the data's fields, like changing Gretchen's record to say that Gretchen did take medical leave, even though that's inaccurate. And they might add some employees that didn't exist. They'll add a Gretchen Grock, I don't know, make something up and give them some false data.

So most of the data is real, and some of it has been made into what's called noise, in other words, nearly real. But the idea is that even with whatever percentage of noise is inserted, the overall results of your research will be precise. It won't be 100% accurate, but it'll be precise enough to use for whatever it is we're trying to learn. And if you look at any individual in that data set, there's a chance that that data may have been altered.

So the idea is that it uses statistics to protect individuals, because for any individual's record, if somebody actually gets the data this was trained on, they go, well, I don't know if Gretchen really did do this, because I know some of these were altered. Whether it's 2% or 10%, that's a different matter.
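
What Gretchen describes, flipping some individuals' fields so that any single record is deniable while aggregate answers stay usable, is close to the classic randomized-response mechanism. Here is a minimal sketch with made-up HR data and a 10% flip probability:

```python
import random

def randomize(flag: bool, p_flip: float) -> bool:
    """Flip an individual's sensitive yes/no field with probability p_flip."""
    return (not flag) if random.random() < p_flip else flag

def estimate_true_rate(noisy_flags: list[bool], p_flip: float) -> float:
    """Recover the population rate from the noisy data.

    If q is the observed rate, then q = true*(1-p) + (1-true)*p,
    so true = (q - p) / (1 - 2p).
    """
    q = sum(noisy_flags) / len(noisy_flags)
    return (q - p_flip) / (1 - 2 * p_flip)

# Hypothetical HR data: about 20% of 100,000 employees took medical leave.
truth = [random.random() < 0.20 for _ in range(100_000)]
noisy = [randomize(t, p_flip=0.10) for t in truth]

# Any one noisy record is deniable, but the aggregate stays precise.
print(f"observed rate:  {sum(noisy) / len(noisy):.3f}")          # ~0.26
print(f"estimated rate: {estimate_true_rate(noisy, 0.10):.3f}")  # ~0.20
```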

But people have kind of glommed onto differential privacy as, hey, this is really the thing we're going to hang our hat on. NIST has a very recent paper on it. But the thing to remember, if you read the NIST paper, if you think about this at all, is that it's really for research at this point. Meaning, hey, we're researching this as a tool to see if it really works as well as we think. It's got limits to how well it works.

There's a very famous saying that was made popular by Mark Twain: there are three kinds of lies. Lies, damned lies, and statistics.

Differential privacy is a statistical model for creating privacy. When somebody's handling my privacy, I'd prefer they just delete my data rather than statistically make it less reliable. Because somebody casually observing the data isn't going to think, well, there's a little doubt about this; they're going to assume it's probably all accurate, no matter what the noise is.

And it costs money and takes time to insert the noise. You have to do lots of testing to make sure it still works.

I mean, I don't feel great about it. I think this stuff should be used, but probably sparingly, for certain research techniques, for one-time usage. You know, if you're under HIPAA, there are all these protections in place that are required.

But there are also exceptions for certain kinds of data sets being used in models one time only. And again, if you leave something with differential privacy open for future uses, the results may not be good and the privacy may not be good; but for the one use that you've defined, it might be fine.

So differential privacy is kind of a standout discussion amongst PETs, privacy enhancing technologies. I think another one is confidential computing, frankly. But then, like I said, tried-and-true techniques like encryption and things like that are also still really important to this.

Host: The example that you gave with the introduction of noise, where, let's say, my data is altered for some evaluation or whatever it is, makes me scared that even though maybe I have never gone to a doctor in the last year, there is a record somewhere in the database which says that I have. And that is scary, right?

Even though it's not accurate data, some might consider it accurate because it's in the database, or because the tool I'm looking at is showing me that information. So maybe I consider that as accurate.

Gretchen: Well, for me, a more scary version of that is that a lot of companies are interested in adding self-identification to their diversity, equity, inclusion, and belonging programs. And so they'll ask people to self-identify as maybe being gay, being transgender, and actually lots of other things as well. And this data is typically kept in an HR file. This is an example where data should be decentralized, frankly, because of the vulnerability that this data creates. If you have an international company, they ask this question in the US, and maybe that's fine.

But if you have employees in another country where being gay could result in going to jail or something worse, that's a problem, right? So playing around with this data and telling people, well, it may not be as accurate as you think, statistically speaking, won't save somebody's life, and that's something to think about.

Host: Yeah, we started with a small example, but the example that you gave makes it a lot more real: it could impact someone's life if the noise that is introduced is taken as truth by someone else. It could create more problems than it solves for us.

I hope the NIST research that you spoke about gives us better results so that we can build frameworks around it and have better guardrails.

Gretchen: But it's not a silver bullet, right? There are no silver bullets. And NIST aims to give people guidelines and help people understand what good probably looks like. And that's great. I mean, again, there are EU directives around this too that are fantastic.

But there's no single solution. We can't just do this without giving them...

Host: Yeah, absolutely. So there are two things that you touched on in the previous response, NIST and HIPAA, which are some of the cybersecurity frameworks that are widely adopted, depending on which vertical you are in.

But according to you, inherent risks are not handled really well in them. Why do you feel these frameworks fall short when it comes to inherent cybersecurity risks?

Gretchen: So, you know, when I was giving my journey, my background, I mentioned that in the last 10 to 15 years I've had the pleasure of working more with boards, with investors, with attorneys, with non-technical stakeholders that are very influential.

And, you know, when we present security concerns and security shortfalls to folks at these levels and in these positions, they just sort of turn off. They become sort of vacant, because to them it's, I've heard this before. I don't know what you're saying now. You're always asking for more money, and I'm not doing that. They just turn off.

So the thing is, what's important in cybersecurity should be important to everyone, because it affects all of us in so many ways. And we need to use something such as inherent risk as our means for creating a perspective that people can grasp and go, wow, okay, that does make sense.

Inherent risk is a concept that's unfortunately largely absent from the different models you're going to see out there, and there are reasons why. I can certainly give examples of the ways it's missed. But just to begin with: what is inherent risk? Well, what is risk? Risk is the potential damage that could result from threats, both intentional and unintentional ones, that could exploit vulnerable assets.

An asset can be a system or it can be data, and an exploit could break it, could make it not operate the way you want, could make it less confidential, whatever the case is. So that's what a risk is.

So inherent risk is basically the risk to an organization in the absence of any kind of security safeguards. When you put safeguards in, what you get is called the mitigated, or residual, risk. So: what would be most damaging if it happened to an organization, related to data or technology? Those things are cybersecurity risks. We're not asking what the biggest cyber risks are, but what the biggest risks related to data and technology are that could in fact be treated, mitigated, whatever you want to say, by cybersecurity.
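
Her definitions translate into a simple relationship that is easy to show in code. A minimal sketch, with purely hypothetical dollar figures and percentages, just to show how the two quantities relate:

```python
# Illustrative numbers only: inherent risk is the expected damage with
# no safeguards in place; residual risk is what remains after controls.

impact = 5_000_000      # worst-case loss from the scenario, in dollars
likelihood = 0.30       # chance of it happening this year, unmitigated
control_effect = 0.80   # fraction of the risk the safeguards remove

inherent_risk = impact * likelihood
residual_risk = inherent_risk * (1 - control_effect)

print(f"inherent risk: ${inherent_risk:,.0f}")   # $1,500,000
print(f"residual risk: ${residual_risk:,.0f}")   # $300,000
```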

So that takes you from inherent risk to something we usually talk about more. Take the example I was thinking about, the NIST Cybersecurity Framework, the CSF we call it. Version 2.0 just came out, and there are some pretty big changes. It focuses more on governance, more on big-topic issues. So I was excited about that, because I do a lot in risk and I figured it would be covered more. Some aspects of risk are, but what isn't covered is much of anything about inherent risk. They have one control, out of the 100-plus controls in there, that mentions that threats and vulnerabilities, likelihoods, and impacts are used to understand inherent risk and inform how people prioritize and respond to risks.

Okay, cool. But then there's nothing in there that says what that means. What's an inherent risk? And for a definition, the NIST material refers to something outside of NIST, and it's easy to gloss over.

And there are other things within NIST that do cover risk, and they don't cover it either. If you look at one that people use for risk a lot, FAIR, the Factor Analysis of Information Risk, there's a big institute, a big program, and they take a bottom-up approach to creating a view of risk, which sounds great. And in their materials they do say, this is what inherent risk is, and they give a good definition. But then they basically say, that's too hard; so instead, let's look at the current level of risk, with the future goal as maybe the mitigated one. So they throw out the definition that's useful and say, let's start with this.

And the reason that drives me crazy is because, no: if you need to be able to think about what's important to protect, and if you need to evaluate comparatively what's more and less important, you need to understand the impact on the organization. Saying, here's what's protected today versus how much more protection to give it doesn't give you any feel for what the organization prioritizes or how much risk really matters to the organization. It does none of that. So it creates this huge gap that absolutely should be part of these systems.

Host: So I want to dig deeper into it. Say an organization neglects some of these inherent risks, because they're very easy to gloss over, as you highlighted.

What are some consequences that organizations might face while managing their cybersecurity risk?

Gretchen: Great question. And conveniently, I just published another article last week that deals with inherent risk and looks at some of these factors. So I would absolutely suggest viewers have a look at that, where we go into more detail about what the problems are, why this is a really important topic, and how you do it.

So, what are the consequences? Well, basically, cybersecurity has been around for some time, and usually in organizations that are less mature, it's a compliance function. Did you do it? Did you not do it? If you didn't do it, you're in trouble.

So that's what we're used to. Lacking compliance has gotten a lot of people fired. But it's not a very strong way of thinking about things or keeping tabs on what's going on. From there, as groups mature, they start looking at the technology and go, hey, what's the risk to my individual systems? Is somebody going to break into them? What does that look like?

And so, you know, that's a nice level, and it's something that a lot of organizations have reached. But the ultimate goal should be to be a partner to the business by communicating, hey, what is the value that cybersecurity provides to the business, and how can that value be used to drive the business?

In this view, we're more than a cost center; we're a value-added center for the business. And you can never get there if you don't look at inherent risk. The compliance mindset, and the looking-at-the-risk-each-system-has mindset, don't really future-proof your systems either. They look at what's enough today. They don't look at, well, we might be going after this new market, or our business is going to introduce this new line, and these are the things that are changing. It does nothing like that. So it doesn't help people understand, well, does that change our risk profile? What does that do? And when I say value, I mean real financial value.

Because, you know, if you're going to reach somebody, you want to hit them in the pocketbook, hit them where it hurts, but also explain it to them in a way they can understand: there's this much potential for damage if nothing is done; mitigated, there's this much damage; and if you do this new thing, it could go up or down. So this is a big part of what you're missing out on in an organization. And it stops organizations from progressing, from becoming more efficient with cybersecurity, from getting their budgets to be justifiable, from, you know, making everybody their ally.

Host: Mm-hmm. That makes a lot of sense. One thing that you touched on, which I wanted to touch on as well, is the age-old debate of compliance versus security, right? Some folks are very compliance-heavy; if you are in a regulated industry, you have to be compliant with some of the frameworks. Others are very security-focused. Finding that balance sometimes is a challenge. So yeah, thanks for bringing that up.

Gretchen: But to be clear, it doesn't always have to be separate, though. Because if you look at New York's DFS, their financial regulator, they specifically point out in the regulations that boards of directors need to be educated in a manner that lets them exercise good oversight, good governance, of cyber risk. In other words, if once a year you're giving a presentation, because you've been asked to get in front of the board, and you're like, yeah, these are the things we're working on now, we're going to try to do better, that doesn't give them the ability to really make decisions, or help you make decisions, about risks and which ones are acceptable and which ones are too much.

So there are regulators, not many, but some, that are trying to push folks: hey, you have the compliance mentality, but guess what? Compliance is changing to include this. So that's kind of a very positive thing to see.

Host: Mm-hmm! Yeah, thank you for highlighting that as well. You're absolutely right: these are not two different spectrums. They work together to improve your overall security, which makes sense.

Now, let's say I understand the value of inherent cybersecurity risks. As an organization, what key considerations should I think about and prioritize when addressing these inherent cybersecurity risks?

Gretchen: So therein lies the rub. The reason folks don't do this often is that they find they're getting confused by different answers. They find that cybersecurity is so complex there's really no way to put a name on these things. It's hard. And folks are busy with their daily jobs. There are just so many reasons why folks don't do this, and I think they don't have the confidence to necessarily bring it up.

A lot of times now, the way we're looking at cyber risk, measuring it, talking about it, is bottom-up. We go, hey, these are the things we do well. Some people might ask, well, if you were attacked, would those things that you do well be enough to prevent the attack, to detect that you're being attacked, or to really do the right things to recover fully from it?

So maybe people get to that level, and that's better than just saying, hey, do we follow a framework? But we need a top-down piece of it, which this introduces, which says: okay, so maybe you're vulnerable to attack, but is that attack really going to hit us where we're most concerned? And where we're most concerned, we usually have a good idea.

If a company has a business continuity program, if they've got disaster recovery plans, their organization has identified: what are the critical systems that make our business run? What are the business processes that are critical to serving our customers? What are the pieces of data that we need to make sure we can restore, we can use, we can't get wrong?

These are all part of those plans, and this is part of what we need to get there. It's not something you can overlook, and yet this is the part folks overlook. As I talk about in my paper, what I've done is take inherent risk and break it down into pieces that anybody can understand. I've created a model to make it simple.

I've created five categories of risks that organizations face; depending on what they do, these could be more or less important. You have things like espionage, right? It could be state-sponsored espionage, it could be corporate. You have some sort of abuse of personal data. You've got business disruption or destruction of property or data. You've got something dealing with endangerment: endangering people, deceiving people into taking other actions. And you've got financial crimes.

So I have five categories, and for each of those I create two risk scenarios that can be used to drill into what a business does and how they work.
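
Purely as an illustration of that structure (the category names are taken from this conversation; the scenario wording is partly paraphrased and partly hypothetical, and the real scenarios live in her paper):

```python
# Hypothetical encoding of a five-category inherent-risk catalog with two
# scenarios each, as described in the conversation. Scenario wording for
# the last two categories is an illustrative guess, not Gretchen's text.
INHERENT_RISK_CATALOG = {
    "espionage": [
        "state-sponsored espionage",
        "corporate espionage",
    ],
    "personal data abuse": [
        "exposure of customer/employee data",
        "disregard for privacy rights / exploitation of data",
    ],
    "business disruption": [
        "destruction of property",
        "destruction of data",
    ],
    "endangerment and deception": [
        "endangering people",
        "deceiving people into taking other actions",
    ],
    "financial crimes": [
        "theft or fraud against the organization",
        "theft or fraud against its customers",
    ],
}

# A workshop with leadership might simply walk the catalog and rate each
# scenario's unmitigated impact on the business.
for category, scenarios in INHERENT_RISK_CATALOG.items():
    for scenario in scenarios:
        print(f"[{category}] {scenario}: rate impact if unmitigated (1-5)")
```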

And that's really all it takes to fully investigate this bit of inherent risk. It's not a ton of work: a couple of meetings with your high-level folks, maybe your board. It's really just a quick process to get everybody on the same page. All the work you're doing all along feeds it. This is just a way to create a shared journey and get people using business terms.

If we pick one of the five categories, personal data abuse: the first scenario I have there is exposure of customer or employee data, so exposure of personal data. It means we didn't do enough to protect it, somebody broke in and stole it; one way or another, there's too much data out there. That's one of the scenarios. We can think of a lot of reasons people in certain consumer-facing businesses care about it. If you're in retail, you care about it. If you're doing business-to-business work and you never touch personal information, then it's not important to you. But for some it is.

The other scenario I have in that category is disregard for privacy rights, or exploitation of data. Just because you can find a public source for data doesn't mean you should use it. If I haven't consented to you capturing this information about me for marketing purposes, and pairing it up with information you may already know about me so you can try to sell me stuff I don't want you to sell me, that's abusive, and it's often illegal as well.

So those are scenarios in that one category, the one focused on personal data. And this is what's lacking in this conversation: a model for grounding the discussion, and a reminder that the only way to really express to people what our risk is, is to ask, well, what's most valuable to us? Maybe it's our customer data, maybe it's our secret sauce, whatever it is.

Then we look at how an attack could be successful or not based on our risks, and how much it exposes those very specific things we're talking about.

Host: So what we'll do is, when we publish the episode, we'll also tag the new paper that you have written, so that folks can go in and read how you have structured it by different areas. They can benefit from it and use it for communication with leadership, and also for prioritization work when it comes to addressing inherent cybersecurity risks.

Gretchen: Okay. Yeah, I mean, to me it's a paper I wrote because it's a tool I developed and have been using, and nobody does this right. When I saw that the NIST CSF did nothing here, I thought, this is ridiculous; it's time to create something that people can use.

Host: Yeah, absolutely. And thank you for doing that, and for walking through the different types of inherent risks you highlight, so that this works as a brief overview of what you cover in the paper. We'll definitely share that with our audience.

Before we go to the next section, which is rating security practices, I have a question from one of our friends, Jeffrey Wheatman. His question is:

What's your opinion on the difference between cybersecurity, risk management, and privacy?

We touched on all of these. So yeah, what's your opinion on the differences between these three areas?

Gretchen: So cybersecurity, the way we use it now, is an organizational control function. Human resources is an organizational control function: it makes sure that you have the right people, getting paid and taken care of.

And cybersecurity is a control function too. It makes sure the right people get access to things and the right things are protected. In a perfect world we wouldn't need cybersecurity, but it exists to protect things: to protect knowledge, to protect data.

So I see cybersecurity as one of the tools in the toolbox that helps to protect organizational interests, such as privacy, such as managing fraud, such as protecting users from doing stupid things.

But the thing is, it's a tool. It's a control in the whole system. And risk management is really the means to understand which of these tools needs to be applied in what circumstances and in what combination, the tools being, again, things like HR, things like finance, things like cybersecurity, to make sure that we're fulfilling the needs of consumer privacy, the needs of protecting folks against fraud, etc., etc. So that's kind of how I see it.

Host: OK, thank you for sharing your response. We'll share it with Jeffrey as well so that he gets his answer.

Gretchen: I think he knows the answer. I've known him for a very long time, since before he was gray, which he is now. So he's just trying to make it fun. That's fine!

Host: Yeah, we love that. The next section is Rating Security Practices, where the idea is that I will highlight a practice and you rate it from one to five.

Rating Security Practices

So the first one is, Conducting Periodic Security Audits to Identify Vulnerabilities, Threats and Weaknesses in Your Systems and Applications.

Gretchen: So as a former auditor, I'd love to say five, but I think this should probably be capped at a three. So I'd say three. The reason why: it's absolutely mandatory. There are so many times I've been to organizations that say they have an annual vulnerability assessment or penetration test, and they give you last year's, and you go, wow, did you fix these? And they go, nope, nope, let's do this year's. So that stinks. It is a really important control, but the reason I would cap it at three is that it's not a key, protective control. It doesn't stop something from going wrong; it just tells you what went wrong and why. So things like this that are reactive and not protective aren't going to be your top controls.

Host: OK, that makes sense. The second one is usage of strong passwords that contain a mix of uppercase and lowercase letters, numbers, and symbols, changing them frequently, and avoiding using the same password for multiple accounts.

Gretchen: It's better than not having any password, but beyond that, it's not good. So a one or two, maybe, right? Everybody in our field knows, and I wish everybody in the world knew, that multi-factor authentication is the thing. So this alone is not strong; it's not that helpful. And it's amazing to me that there are still banks that don't offer customers multi-factor authentication, or if they do, it's bad multi-factor authentication.

So, you know, bad passwords are just the worst. Everybody in our field knows this, but it has finally gotten out through other sources that changing passwords regularly for no reason is terrible. It forces people to leave their passwords lying around and come up with less memorable things that they have to store somewhere. And God forbid you just stored them in a password vault, because a lot of those have turned out to be the weak link.

So let's not make people change them unless there's a really good reason, and let's just keep pushing the MFA, multi-factor authentication, bandwagon.

Host: Yeah, absolutely. MFA has become like a standard nowadays, right? Like a de facto standard that everyone should follow.

The last one is granting users unrestricted access to systems and applications so that we can move fast and new capabilities can be rolled out.

Gretchen: I would definitely say one, except for me personally. Whenever I join an organization, of course I want that. I'd say, no, no, I want the unrestricted access, I want to learn about us. But the reality is, it's definitely a bad practice. All organizations sit on a spectrum between allowing a lot of things and restricting a lot of things. And, you know, for some organizations, collaboration and ingenuity are an important part of what they do.

Like, if you're working in a university setting, you put in few restrictions because you want people to know what's going on and try new things. It's not like working for the Department of Defense or something, which is the other end of the spectrum.

But overall, unrestricted access should never be the case.

Host: Makes sense. Thank you so much. That brings us to the end of the podcast. But before I let you go, one last question: do you have any recommendations for our audience? It can be a blog, a book, a podcast, anything.

Gretchen: Yeah, I'm gonna say: read something other than things in our field, please. As much as I like to start the day by seeing what the news is, especially as it relates to our field and the new regulations, all that good stuff, I've got various books on my nightstand and I can tell you none of them have to do with cybersecurity. If they do, it's probably because it's a science fiction book or something. So no, get out there and find new things to read.

If you don't read fiction, you're just missing out, because that's the stuff that really makes you think "what if," and it's the stuff that really drives new ideas in our field. Find what that is for you. I listen to so many podcasts; I think one of my favorites is called Lightspeed, and it's just a science fiction podcast. They have new science fiction stories all the time. And I like it because it's all about the what if, right? It's about what's next and what the future could be.

So much of science today is based on the science fiction of the past. So do yourself, and all of us, a favor: keep your mind open to new things by reading fiction.

Host: I love that recommendation, because most of the time we read so much around our work that we never detach from it. So this is a good way to detach and be imaginative and creative as well. So yeah, that's a great suggestion. With that, we come to the end of the podcast.

Thank you so much, Gretchen, for joining and sharing your journey and your insights with us.

Gretchen: Absolutely. Thank you. Thank you very much for having me on. This was a pleasure.

Host: Absolutely. And to our audience, thank you so much for watching. See you in the next episode. Thank you!