From Reactive to Proactive: A Conversation on Modern Threat Detection
TL;DR
- For detection engineering, understanding your network architecture, application ecosystem, and the people on your network plays a major role.
- Before treating an alert as a false positive, wait for at least three occurrences: two could be a coincidence, but three is a pattern. Marking an alert as a false positive on the first occurrence could be misleading. Constantly re-evaluate, and verify the audit logs for over-corrections.
- The best way to stay ahead of, or close to, the latest threats is via security advisories. This can be done via RSS feeds, Discord servers, Reddit, Hacker News, following threat researchers, and more such sources.
Transcript
Host: Hi everyone, this is Pursottam, and thanks for tuning into the ScaleToZero podcast. Today's episode is with Reanna Schultz. She's the founder of CyberSpeak Labs, a platform dedicated to fostering community engagement through collaboration. She hosts the podcast Defenders in Lab Coats, where she delves into cyber threats. Throughout her career, she has worn many hats, all the way from endpoint security engineering and detection engineering to leading a SOC team. She holds both a bachelor's and a master's degree in cybersecurity, serves as an adjunct professor at the University of Central Missouri, and frequently speaks at events, sharing her knowledge and best practices around cybersecurity.
So Reanna, thank you so much for joining us on the podcast today.
Reanna: Thank you guys for inviting me. I was really, really excited for this. Highlight of my day.
Host: Thank you! Before we kick off, I just did a brief introduction. Do you want to add anything to your journey or anything that you want to highlight about your career?
Reanna: Yeah, and I always say one of the cooler things about cyber is just the unique path that everyone takes. When I first got into cyber, it was completely an accident. I didn't know anything about computers. I didn't even know anything about programming in my upbringing. I grew up in a very farmish town. I always joke that there were more cows than people in the area I lived in.
And eventually when I went to college, I thought I was going to go more on the law enforcement side. And if you have not seen me in person, which, you know, it's hard to tell on camera, I am very short. I am five foot two. So I'm super small. So it worked out that I didn't go be a police officer. But I will say one of the cool things, especially at universities here in the States, is you take different courses for electives.
And one of the courses I took was Python programming. And I ended up enjoying Python programming more than anything in the world. And eventually, that's how I accidentally stumbled into cyber. And I got very, very blessed with my first career. It was at a Fortune 50 company. And that was because I did a lot of networking. I didn't have certifications; I couldn't afford them at the time. In fact, I was serving at a Buffalo Wild Wings, which is basically a fried chicken place, to pay for my studies.
But that allowed me to build my network and really feel my way out into the community, since I couldn't afford the certifications. And that eventually led me to my first job. And eventually I got really tired of corporate America.
And I decided to go into a startup company. So I went to the MSSP side and worked in that space for a little bit. And I will tell you, working at an MSSP is way more stressful than working in corporate America, in my opinion, just because you have to remember all the different SLAs and customer requirements and all these different environments. And you know, with corporate America, of course, there's the stress of,
hey, this company's worth X amount of money and you're the sole decider if something good or bad happens. I mean, there's different stress, but I realized the MSSP wasn't for me. I really missed big corporate America. I really missed the different policies and the rules. And essentially I enjoyed a bigger collaboration area rather than just a company of 50 or a hundred people. So I ended up finding my way back into corporate America.
And they hired me as a technical analyst. So I did a lot of the detection engineering at my current role. And then eventually I found my way into leadership. So I lead a section out of our security operations center. And I always say every day, it's a fun day working in a SOC.
Host: Yeah, that's a very unique journey into cybersecurity. And you touched on detection engineering, right? Nowadays, with the threat landscape evolving rapidly, you always have to be on top of things. So it's a very demanding job, in a way, rather than stressful; stress, I guess, is part of the job. But it's a very demanding job where you have to stay up to date and you have to be prepared with playbooks and things like that.
So today we'll talk about detection engineering a little bit, and AI, how it can be used or how it can help in the day-to-day. So let's kick off, right? You have been in the consumer electronics space for some time. Considering there is high-value data, like athletes' performance or user location, and you have to respond to security incidents quickly as well, how crucial is real-time threat detection in fast-paced environments?
Reanna: Yeah. And I always like to step back a little bit when we talk about threat detection, because it's going to be different for each company. Some people often ask me, hey, how do I even start detection engineering at my business? How do I start that threat detection? And I always tell them to look at a framework called the Pyramid of Pain. And it is exactly what it sounds like: the higher you climb the pyramid, the more painful it gets. I always tell people to start at the base of the pyramid.
And this focuses on indicators of compromise, such as IPs, hashes, and domains; go and hunt for those in your own environment. And this is a way for people to get comfortable with their SIEM tools. They get comfortable knowing essentially the lay of the land, because you can't do true detection engineering, in my opinion, if you don't understand your own network architecture.
Just because you don't know where the blind spots are. Like, for example, if you don't have endpoint logging, it's gonna be really hard to detect threats at an endpoint level. Understanding the gaps and where things are can really easily be done at that base level of the pyramid. And the reason it's also the foundation: it's not only easy to learn your environment there, but indicators of compromise are very easily changeable. If we're hunting on hashes, for example, and a threat actor updates a file or recompiles it, that hash is completely different now. And so that old indicator is no longer applicable.
And a lot of times, especially, I would say, in fast-paced companies, if they're truly being targeted, then within probably six months to a year those indicators of compromise aren't really applicable anymore. And that's just me being very gracious with those.
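The base-of-the-pyramid hunting Reanna describes, sweeping your own environment for known-bad hashes, can be sketched in a few lines of Python. This is a minimal illustration, not any particular team's tooling, and the hash in the IOC list is a placeholder (the SHA-256 of an empty file), not a real indicator:

```python
import hashlib
from pathlib import Path

# Placeholder IOC list -- in practice this would come from a threat feed.
# (This is the SHA-256 of an empty file, used purely as a stand-in.)
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def hunt(root: Path) -> list[Path]:
    """Return every file under root whose hash matches a known IOC."""
    return [
        p for p in sorted(root.rglob("*"))
        if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256
    ]
```

As she notes, this kind of indicator goes stale quickly: one recompile and the hash no longer matches, which is exactly why the pyramid pushes you toward behaviors.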
As you continue to climb up the pyramid, eventually we get away from indicators. And this is where we start looking at behaviors or tactics, how malware or even APT groups really behave. And if people are like, I don't know what these different tactics are, well, you know, look at the MITRE ATT&CK framework. This is a great framework where you can start looking at, hey, this is, you know, BitLocker ransomware or something along those lines. You can look at the different tactics on this framework and say, okay, so if this execution is hit, followed by this source process, and maybe this malware is coming out of an LNK file, you know, these are different behaviors we can essentially detect on.
And people often come back to me and they're like, hey, cool, I can look at this framework, I can look at the Pyramid of Pain all day. I don't know how to essentially get a mature model in my business. I understand these foundational concepts. I understand slowly building on these. How do I take this and how do I make it scalable to my program? Especially if you work in bigger businesses, there's more footprint and data coming in and out. Eventually, threat hunting is crucial. And some people will rely very heavily on their security tools.
And I always like to point out about your security tools, especially EDR and XDR platforms that really promote quote-unquote AI hunting and behavior: they only know threats as fast as they scan them. And this is why it's really important for us as defenders to have these jobs and to understand these front lines of defense.
Because if we're putting all of our money into our security stack and a threat actor slightly changes the behavior of how they deploy something, then your security stack isn't going to know that's a new type of malicious threat in your network until it starts seeing a trend or a signature update gets pushed. So, kind of circling back: how do I build that detection engineering to make it scalable to my program?
And I always say start by building relationships with your security vendors, or even with those in the community who work in similar industries as you. Even though a lot of the time, you know, malware might be changing, or now we're talking about, hey, what types of threats are specifically targeting my business, if you really build those relationships with your security vendors, I guarantee you they have a whole research team dedicated to emerging threats, or even customer databases with people who are working in similar industries as you are.
And a lot of the time, for free, they're more than happy to share this trend analysis, or it might even be part of your current contract with them that they do provide this. And people forget to ask about this part because they're so busy trying to implement and integrate the security tool that they forget how to scale it as they continue to grow as well.
So I always say focus on the Pyramid of Pain, understand the MITRE framework, and then start building relationships with those in your community and also with your vendors, because now you can start tailoring your detection engineering to threats or APTs that are targeting similar industries.
Host: That's a great start to the episode, I feel. We started with the Pyramid of Pain and we are going one step at a time. One question that Ross Young has asked, which touches on this area, is: what type of things do you look for when you are detecting bad actors? And how do you go about it?
We spoke about the Pyramid of Pain and the different vectors that someone can use. Do you look for any particular signals? Or, yeah, how do you go about it?
Reanna: Yeah, so there are a couple of ways we can do this. I always say you have to know your network and the people that make up your network before you try to even find anomalies, like understanding the different business units within your company. You know, most companies probably have HR, finance, payroll, an IT department, right? And if we want to focus on IT, these people probably have admin accounts. And so now we're starting to map out what their footprint is like in the business.
And so if you think about admin accounts, admin accounts probably shouldn't be browsing the internet. They probably shouldn't be opening port 22 externally everywhere. So this is where we start trying to find what our potential gaps are within our business and what would be abnormal for these different types of users.
And as we scale to the technology that we have today, a lot of, I would say, SIEM and SOAR vendors now are promoting machine learning. I always like to say machine learning is very different from gen AI. So be very careful when you hear a security vendor pop out the words AI security. It's most likely machine learning, and they don't know how to use that term technically correctly.
But this is where machine learning can also be used in your environment, if you have tools that support it. A very easy example: I work in Kansas City, right? So I work in the Central Standard Time zone. So it's probably normal for people to work nine-to-five job hours within my location. Now we can use that machine learning to find abnormalities within the existing data set.
Because now I might be able to detect a user who is logging in at probably 3 a.m. Central Time, or maybe I'm trying to find a large amount of data being exfiltrated outside of normal business hours. And we can see this in many different breaches throughout history. For example, Home Depot had one in 2014 where a threat actor got into the network, used compromised admin accounts, and was exfiltrating data outside of business hours because it would potentially go unseen and undetected.
So that's where machine learning can be used to find those abnormal data points within general day-to-day behavior as well. So that's one side. Now, if we're talking about actual malware itself, it's really important to know how the technologies are integrated in your environment.
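The off-hours login detection Reanna describes can be sketched with nothing fancier than counting which hours a user normally logs in at. This is a toy baseline on invented data; real SIEM and UEBA products do far more, but it shows the shape of the technique:

```python
from collections import Counter

def build_baseline(login_hours: list[int]) -> Counter:
    """Count how often a user logs in at each hour of the day (0-23)."""
    return Counter(login_hours)

def is_anomalous(hour: int, baseline: Counter, min_seen: int = 3) -> bool:
    """Flag an hour the user has rarely or never been seen logging in at."""
    return baseline[hour] < min_seen

# Synthetic history for a nine-to-five user (hypothetical data).
baseline = build_baseline([9, 9, 10, 11, 13, 14, 16, 17] * 20)

is_anomalous(3, baseline)   # 3 a.m. login: flagged
is_anomalous(10, baseline)  # mid-morning login: normal
```

The same counting idea extends to bytes transferred per hour, which is how the after-hours exfiltration pattern she mentions would surface.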
I like to use e-commerce platforms, since I feel like a lot of companies have a checkout page and stuff like that. What accounts are on the backend of this e-commerce platform, and how are they used? Are you using third-party services? And so now we start thinking about all these other integrations, such as, hey, if we have logins, are they able to use MFA, or do they have SSO enabled? Because maybe a threat actor can use multiple accounts to log into the service. And then they can probably use their own API calls to go through a checkout process and buy a bunch of products and then sell them on eBay for triple the price. So it's about having that abnormal data set and understanding what that program and application is designed to be used for, and then understanding what wouldn't be normal. So getting into that, I would say, almost reverse mindset, and really looking at those different behaviors that would be outside of the scope of the user or the application and technology.
Host: Makes a lot of sense. So now, like, we touched on different types of threats and how to go about them. Now let's say I'm going to my management and telling them that we need to build a threat detection capability for our organization. What are the biggest challenges that you have seen organizations face, so that I can go prepared to my management?
Reanna: That is, that is a loaded question. I always like to say whenever you're pitching something, especially if it's cyber, you have to be a salesperson. You have to have data to show proof that this is value-add. And I hope people are listening to this and I hope that they're sitting down, because this is breaking news: cybersecurity does not make a company revenue, we protect revenue. And so we are known to constantly say no but ask for more, and we're very expensive.
So whenever people are like, how do I get buy-in from management? How do I get buy-in from even my C-suite? Most of the time, if you're at the point where you're considering detection engineering, this is an awesome revelation for the department, because now it's that next step of maturity. And a lot of times people are already working with pen testers, whether it's an internal pen test team or an external pen test team. If they did that pentest and you did not get any detections on it, that is an eye-opener for security management.
Host: I thought I'm all secure, like there is no finding.
Reanna: Yeah. And a lot of people don't think about that, because they're like, we did our pentest, we did our findings. No, there's more than that, right? Now you're talking about the defense side, and this is how we get purple teaming. I hate that buzzword, but this is how we get purple teaming: because now we're taking our security findings. That's probably related to vulnerabilities with a server, because the owner refuses to patch because it might break something, right?
And then now we're showing that it is vulnerable, and we can take it and exploit it. And we don't get detection alerts because our security stack can't find it. And it's not because the security stack failed; we go back to, the security stack only knows the signatures that it scans with or is provided updates for. And now there is a huge gap, and you can probably tie this into revenue loss. If this was absolutely true, now you have data to go back to your management and say, hey man or gal, we could not find this. And this is a huge gap, because all of XYZ confidential data and reputation is hosted through this service. So if you're okay with the risk of us not being able to craft and create our own alerts, that's a different conversation we might need to have.
And then another thing I always like to tell people: you can go in with a problem, but what is the solution behind this problem? So if you're going in with, hey, we have these gaps in everything, then this is what we can do with our existing tools to create alerting. And a lot of times, EDRs have their own custom-built option in their baseline contract for people to create their own detection engineering rules. Or if you already have a SIEM, most of the time it has a security section where you can create alerts, or administration alerts, outside of the data that's being pulled into the SIEM as well.
So that is my TED talk on that. Is that the correct answer? Probably not, but that is the best advice I can give people, because once you start creating your own, it can be a little bit scary. Well, you'll usually get false positives right out of the box. But it can be scary because now you're finding the skeletons in the closet that people have probably ignored or accepted throughout the years.
And whenever I walk into my job, or if I'm doing my own detection engineering alerts, I walk in with the mindset of: my company is already hacked. How do I find them? And I tell people my job is professional hunter, you know, except in a more, I would say, animal-rights-approved way.
Host: So that's a very different way of looking at your job, in a way. So you touched on false positives; we'll come to that. That is one of the topics we want to talk about as well.
But before I go to that: often in the podcast, what guests say is, as a security team, when you speak with leadership or other departments, you have to tailor your conversation. But you used a very different word, right? It's a very different mindset, that you are a salesperson trying to sell security to your organization, in a way.
So one of the things when it comes to detection engineering, one of the things that organizations do, is playbooks. You were in the SOC team, so you might have built predefined playbooks and you might have used them as well. So one of the questions that we got, from Evgeniy from Canada, is: how do you create playbooks for detection? How do you improve them? What is the feedback mechanism? How do you know that they are doing a good job?
Reanna: Yeah, for sure. Again, right, going back to know your land, know your business environment, who makes it up. Because this is where you start building out your low, medium, high, or, like, everyone's-hands-on-deck type of alerts. And I always tell people: if this is true, what is the impact to the business? If a user clicked on a phishing email and went to a scareware website to download a fake antivirus, right, what is the potential data loss and brand and reputation loss if this event turned out to be successful?
And so I always tell people to start there, because you want to make sure, whenever a high or critical-severity alert fires, that it has weight with your security operations team or whoever's looking at these alerts. Because when you see those, that is all hands on deck: we need to probably start opening up our playbook, we need to start doing containment, you know, running forensics, whatever that incident response procedure looks like.
Now, I'm not saying that every time a low or medium fires, right, it's going to be dismissed. I'm not saying that at all. I'm just saying that when these do hit, it is huge brand and reputational loss. Now, if we're talking about false positives and stuff like that, I always tell people that work in a SOC: you guys are not SOC monkeys, meaning you're not coming in pressing buttons and closing stuff, pressing buttons and closing stuff.
There is so much more to the security operations side that people don't really sell. You have to have customer service. You have to know how technologies work. You have to be literally the SME, the subject matter expert, of everything going on in the business, because when people ask security questions, most of the time they're going back to the security operations center, because these are your frontline defenders. These are the first people to show up on an incident.
It's similar to firefighters or police officers, right? If there's a fire or a crime, police officers and firefighters are going to be the first people to show up. That's your security operations center for cyber events. Now, I often get asked: hey, what if we are doing alerts and I'm starting to see false positives? The more false positives, obviously, the more it brings down the purpose of the alert. And when we make playbooks, I always encourage people in my community to put down: what is the purpose of the alert? Was there historical action that drove the creation of this alert? Like, did this come out of an incident? Is this something that was part of a threat advisory from CISA or Microsoft or whoever?
Because when we bring on new analysts, or, you know, after we've slept for a few days, we're gonna forget essentially what half of our alerts do. And so when we open up that playbook, it's really, really important to have that tribal knowledge documented.
So that way, when the next person opens it, they're like, oh, I remember why we had this now. And it also gives a reason that, if it's not firing correctly, we need to realign back to the purpose of that alert as well.
And my rule of thumb is: if you've had three false positives in a row, meaning the alert fired and the activity was not actually malicious, that is a real false positive pattern. Some people use terms like true negatives or benign positives, where the alert fires as expected but the outcome is not malicious. It could be things like a developer doing something developer-y, where it might not be standard practice. And so this is where you might have to work with the developer and be like, hey man, you're the only one kicking off this alert. What are you doing? Okay, well, you probably shouldn't be doing that in PowerShell, those types of conversations, and just really promote those secure practices.
Because again, the business doesn't know unless we educate them. And that's essentially our job. But I always go with the rule of thumb of three because two is a coincidence. Three is a pattern in my eyes.
And this is where you can go back and start, you know, testing potential exclusions or reworking the logic of the alert. And I always tell people, again, that's totally fine if you go and do this, but you need to have an audit log in your playbook, or wherever you might keep that audit. Because someone might go, hey, I haven't seen this specific alert fire for six months. It used to be noisy and I haven't seen it in a while. Well, hey, we excluded basically all of Microsoft's suite of applications, so, you know, obviously it's not going to fire. And so that's why that audit log is very important, because then everyone will know: okay, this change was made by whoever on this date. Okay, we're going to revert that change back because it wasn't a good change. So, you know, we might tweak it or whatever the case is. Yeah.
Playbooks are good for, again, running through instructions, but they're also good at providing the reason for the alert and keeping those audit logs as well, to make sure that alert is value-add.
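The playbook fields Reanna keeps coming back to, the purpose, the origin, and an audit trail of tuning changes, could be captured in something as simple as the structure below. The field names and the incident ID are invented for illustration, not taken from any real playbook format:

```python
import datetime

# Hypothetical playbook entry; field names and IDs are illustrative only.
playbook_entry = {
    "alert": "encoded-powershell-command",
    "purpose": "Detect base64-encoded PowerShell, a common delivery technique",
    "origin": "Created after internal incident IR-0042 (hypothetical ID)",
    "audit_log": [],
}

def record_tuning(entry: dict, author: str, change: str) -> None:
    """Append a dated, attributed tuning change so the next analyst can
    understand (and if needed revert) why the alert went quiet."""
    entry["audit_log"].append({
        "date": datetime.date.today().isoformat(),
        "author": author,
        "change": change,
    })

record_tuning(playbook_entry, "analyst-a", "Excluded signed Microsoft Office binaries")
```

The point is less the data structure than the discipline: every exclusion gets a date and an author, so "why did this alert go silent six months ago?" has an answer.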
Host: Yeah, I love the audit log part, because often what happens is when you get alerted and there are, like, hundreds of resources which are getting impacted, and you are just trying to resolve the threat, sometimes you say, yeah, exclude some of these, or most of these, from the check, and then the threat is done. But it's actually not done, right? If you look at the audit log,
maybe some of the resources should not have been excluded. So yeah, the audit log definitely plays a major role. And before I go further, I liked the three-alert approach. Often we have seen that even on the first alert, sometimes we feel like this is a false positive, but maybe we should wait for some consecutive alerts before we mark them as false positives.
Reanna: Yeah. And I would say one of my most grind-my-gears moments, one of the things that irks me, is when people are like, oh, it's a false positive, so I just closed it out. And I'm like, why are you closing alerts? Why are you doing the same thing expecting a different result? I'm pretty sure Albert Einstein called that insanity. Like, closing false positives, you're going to go insane, man. I was like, fix it.
Host: So the question is: let's say, for an enterprise where there is a high volume of activity happening, and maybe attackers are trying to attack as well, how do you balance immediate threat detection with potential false positives in your environment? How do you quickly determine whether to pay attention to an alert or to ignore it? How do you find that balance?
Reanna: Yeah, and it also depends on the business, right? And the number of resources you have. If it's a low-resource, high-impact day, meaning there's a lot of alert volume, it's just a busy day, and there are not many people working the queue.
From my professional experience, and something I also promote to people: there should always be a way you're tracking alerts that need to be tuned. Because if we are all hands on deck because there's just so much volume for that day, and things need to be tuned out, make a note of it. I don't know if people have an internal ticket system that they track internal issues with, or a storyboard or a scrum board or whatever the case is. Add those potential tuning alerts to those scrum boards and come back to them when you have time. Especially if it's a very high-volume, fast-paced type of day and you get to the point in the day where things eventually slow down, I would take those potential tunings and just throw them on the backlog for right now.
Because again, you want to document the fact that this needs to be tuned, but you don't want to forget about it when you go home and have a sugar-free Red Bull and watch TV. And you just want to be a vegetable on the couch.
When you do have that, right, and especially if another analyst sees that false positive, having it documented somewhere on a backlog can be very beneficial, because now they can refer back to it. Especially if you close that alert, you most likely should be able to link that backlog item to that alert. And that way, when it opens back up, or if you have something similar, the analyst can look at historical tickets or alerts and be like, okay, yeah.
All right, yep, this is the exact same activity. I can make a fast decision based off of this, close it out, and move on as well. So at least there is an audit for backlog tracking. And then there are also references for the rest of the team, or even yourself, as you continue to go on through the day.
Host: And also, if another analyst sees it, they can upvote the backlog ticket, right? So that it gets higher in priority whenever you are doing backlog grooming, and you can pick up the most-seen alerts, maybe ones that, because of the load, you had to move to the backlog.
Reanna: Yep. Yep. Yeah, and it does happen, you know, especially if you're dealing with like in the middle of an incident and then all of a sudden maybe your EDR pushed a bad signature and you got a thousand alerts for nothing. You're like, are you serious? You know, like I am, I am knees-deep in thick soup right now. And you decide to blow up with 500 alerts on like Microsoft essentials. I'm like, nah, man, you know, I don't have time for this today.
Host: Yeah, yeah. That would be painful, right? Like you are already dealing with a sort of threat and you have 500 more alerts to look at. So it adds to the stress that you have already for the day.
Reanna: Yep, yep, yep. Just a normal day, right?
Host: So yeah, a follow-up question that comes to my mind is: how do you stay up to date with the latest, let's say, detection capabilities while the threat landscape is evolving that rapidly?
Reanna: Yeah, and there are so many cool answers to this as well. I always tell people, find the platform that speaks to you, that you can use very well. I will say threat intelligence can be very expensive. There are a lot of really cool threat intelligence providers out there. I won't name names, but there are a lot of really good, sophisticated ones that you can pull feeds in from as well. Again, APIs cost money, licensing costs money, so having a really good, cost-effective threat advisory source can really, really help. A lot of, I would say, public news outlets like BleepingComputer or The Hacker News, or even threat researchers, have their own RSS feeds that you can pull in and have an internal, I would say, threat intel aggregation.
I've seen some people create their own scripts where they'll read through an article and scrape any TTPs that are found based off of specific keywords, or even IOC listings, off of a regex that they created. And then they can essentially do whatever they want with that, you know, in the network or whatever the case is. Some other platforms I also recommend: as weird as it sounds, Reddit. There is a lot of good stuff on Reddit, and I think it's because it's so unfiltered. But, again, it just depends. There are many Discord groups people like to be part of. On LinkedIn, follow a lot of good cyber researchers. Same thing on Bluesky and X, because these people are the ones posting the threat advisories. And it's also kind of cool to go on WikiLeaks just to see what got leaked out there recently, and you're like, huh, that stinks for that company.
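The scraping scripts Reanna mentions usually boil down to a couple of regexes over the article text. Here is a minimal sketch; the patterns are deliberately naive (real feeds often "defang" indicators, e.g. `203[.]0[.]113[.]7`, which this does not handle), and the sample advisory text is invented:

```python
import re

# Naive patterns; real-world scrapers also handle "defanged" indicators.
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
SHA256_RE = re.compile(r"\b[a-fA-F0-9]{64}\b")

def extract_iocs(article_text: str) -> dict[str, list[str]]:
    """Pull candidate IPv4 addresses and SHA-256 hashes out of free text."""
    return {
        "ipv4": IPV4_RE.findall(article_text),
        "sha256": SHA256_RE.findall(article_text),
    }

# Invented advisory snippet (203.0.113.0/24 is a documentation-only range,
# and the hash is a dummy 64-character value).
sample = (
    "The loader beacons to 203.0.113.7 and drops a payload with SHA-256 "
    + "a" * 64 + "."
)
extract_iocs(sample)
```

Output like this can then feed whatever comes next: a blocklist, a SIEM watchlist, or just an internal notes page.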
Host: So you touched on many sources: Reddit, X, Bluesky, following threat researchers, and things like that. How do you automate it? Otherwise, it sounds like following a lot of people and creating your own backlog of reading just to stay up to date.
Reanna: Yeah. And again, I really, really, really recommend RSS feeds for a lot of this stuff. There is an application that I also recommend, especially if you're building, I would say, a threat intel program on a budget and you don't really have engineering mindsets on the team that's building it, you know, people that might not have a development background. There is an application called If This Then That (IFTTT).
And it's really good for, I would say, taking the backend mess and making it user-friendly through buttons. And, you know, I want this to go here, or if I see this post, then post here, that type of logic as well. And again, the more you use it, obviously there's going to be a cost because you're using their services, but it's a good start, if you find someone that, in my opinion, is very active on these platforms and posts very frequently.
You can definitely create your own logic in that app and then pull it into your platform and essentially build your own internal Wikipedia of information that might be beneficial to you and your team.
Host: And for organizations building detection capability, would you recommend a similar approach or a different approach?
Reanna: I would use a similar approach, and I'm just going to add a little bit to that. If we're using detection engineering, or even findings from our security tools, you should be able, after, I would say, probably 60 to 90 days (90 days for sure), to start seeing a pattern within your own security operations team. So hey, if I've seen this alert fire X amount of times and it's a true positive, this is true malicious activity.
Maybe there is a gap here in our current GPO settings, or in how we're doing application security code review, or whatever the case is, because now I'm identifying that we're seeing targets of this type of threat trending for months on end. Or even phishing. Phishing is also a great metric, especially if you're trying to build your security awareness program. Hey, the SOC has seen X amount of phishing mimicking this service provider that we might have relations with, or that people use day-to-day.
Hey, security education, let's give you an example of this. Remove the threats, and that way you can share it with the business, because this is also a trend that we're seeing.
So I would say these metrics are more than just bringing weight to our security program. It's also: what can we do to ensure the business stays secure? And the business is also getting that elevated knowledge, because they too should be seeing these new types of trends and threats. They're not security professionals. They're not gonna go online and be like, wow, this is so cool. No, it's our job to take the coolness and make it more user-friendly, and more of an awareness campaign as well.
Reanna: I mean, we could sit here and read security news all day and eat popcorn and have the time of our life. And some people are like, that's boring. These are just a bunch of zeros and ones. And you're like, yeah, but it's interesting.
Host: I think this is where the sort of sales part comes into picture, Like translating it to the other departments so that they are also equally excited to either give you the budget or give you the resources so that you can secure the company, right? Secure the organization.
Reanna: Yes. Yes. And it still blows my mind. We're heading into 2025, and the amount of people I see in public that just scan random QR codes on, like, a light post. I'm like, what are you doing? Stop that. Stop scanning random things on your phone.
Host: True, So now we touched on the detection side, like how do you do detection, having playbooks and things like that, and understanding false positive and true positive. The very next step that comes after that is fixing it. Let's say if it is a true positive, you need to work with either your vendors or engineering to get it fixed, right?
So one of the questions that we got from Marius Poskus is: detection normally leads to fixing, and then to more detection, which means there is a constant cycle of firefighting without maybe seeing a lot of tangible results. How do we get away from that?
Reanna: I hate to say it, but I don't think we will ever get away from fixing other people's problems. I wish. I will say, you know, it also depends on how we present the findings. Very similar to pen test reports, right? It might be this fat PDF of a hundred pages of findings, and then it is someone's job to take that and sell it to whoever needs to patch their server, and explain why being unpatched is bad, because it led to X, Y, Z.
And again, a lot of the times people aren't trying to be malicious. They're not trying to bypass security, because, you know, as much of a delight as we are, we also want to make sure that people are able to do their jobs and aren't constantly being bombarded by us.
I have learned that providing simple education as to why this is bad helps. Like, hey, even the phrase, this is known to be attributed to known malware, because this malware has been seen doing X, Y, Z, and you could prevent this by implementing this or that. Having that solution provided takes away a lot of the stress of telling someone they need to fix their stuff. Most of the time they don't own why they need to fix it, because to them they're doing their job, they're following company policy, or whatever the case is. But they don't know until you give a high-level explanation and provide that insight. Or it even goes back to people just downloading random GitHub repos that aren't really GitHub repos, and then the next thing you know, it's lit up like a Christmas tree on a SOC dashboard and you're like, this person again.
And just providing: hey, this is how you can tell if something's authentic, this is how you should be downloading this type of application, this is where you should be going for this type of service. So will we ever get away from that risk remediation type of phase? No, I don't ever foresee that happening, unless we just take the internet away from the business.
No. That's just some of my general guidance, right? At the end of the day, we're just trying to educate our users. We're trying to make sure the business is still making revenue. And we're also going home and being able to relax because, you know, we got someone excited about a security principle.
Host: So one of the challenges that SOC analysts often face is the volume of alerts, because that often leads to stress. So, if you have one, do you follow a runbook? How do you prioritize and investigate alerts? Or how does your team manage their own stress levels and alert fatigue?
Reanna: Yeah. And I always tell people, well, it starts with: how do I know if I have fatigue? How do I know if I'm getting burned out? And there's multiple stages of burnout, to the point where you're probably crying at your desk because you can't stand work, or you forgot your password that one day and it's just a complete meltdown. You know, that's the extreme case of burnout.
But I tell people you start feeling burnout or alert fatigue when you're just going through the motions and just clicking buttons and closing stuff. And you're not really taking that second step to be like, is it this or that? No, you just go, okay, and move through it. I said this a little bit before: why are you doing the same thing over and over again and expecting a different result? I think it's very important, especially for a manager that's overseeing a team and for an individual contributor, to have honest conversations about how they're feeling.
And if a manager ever uses that against an individual contributor, or essentially doesn't listen, doesn't try to provide solutions, it's going to be very hard to have high morale and good culture on that team, because working a SOC is very stressful. Very, very stressful. People don't realize the weird hours. Sometimes you have to miss holidays. Sometimes you get called in when you're with your family.
It's literally a first-response situation. And one solution I always advise: are you alerting on multiple things that could be combined into a single alert? Is it a lot of the same behaviors? Can that be consolidated into a single alert, rather than multiple ones for each stage of a potential attack chain? You'd be surprised how many people will be like,
we had 30 alerts and they were all related to a user logging in from an abnormal location multiple times. Okay, well, that's more of a story, rather than a user just being seen in Russia. Well, maybe they have family in Russia, I don't know. But can it be combined into something that tells more of a story, adds value, or captures that potential threat you're wanting to hunt? Again, go back to that playbook with the purpose of the alert as well.
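The consolidation idea above (30 abnormal-location alerts becoming one story) can be sketched in a few lines of Python. The field names and the one-hour window below are assumptions for illustration, not any particular SIEM's schema:

```python
from datetime import datetime, timedelta

def consolidate(alerts, window=timedelta(hours=1)):
    """Collapse repeated (user, rule) alerts within a time window into one
    summary alert carrying a count and a first/last-seen span."""
    out = []
    for a in sorted(alerts, key=lambda a: (a["user"], a["rule"], a["time"])):
        last = out[-1] if out else None
        if (last and last["user"] == a["user"] and last["rule"] == a["rule"]
                and a["time"] - last["last_seen"] <= window):
            # Same story continuing: extend the summary instead of paging again.
            last["count"] += 1
            last["last_seen"] = a["time"]
        else:
            out.append({"user": a["user"], "rule": a["rule"],
                        "first_seen": a["time"], "last_seen": a["time"],
                        "count": 1})
    return out

# 30 abnormal-location logins, two minutes apart, for the same user.
t0 = datetime(2025, 1, 1, 3, 0)
raw = [{"user": "jdoe", "rule": "abnormal-location-login",
        "time": t0 + timedelta(minutes=2 * i)} for i in range(30)]
print(consolidate(raw))  # one summary alert instead of 30
```

Most SIEMs expose this natively as correlation or aggregation rules; the sketch just shows the logic behind them.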
Another thing I always advise people, because I too was once in the hot seat. I might seem a little bit older now, but at some point I was working the day-to-day queue, and it can be a lot. It can be mentally exhausting. You're switching from potentially one environment to another, whether it's an endpoint, then you're back to networking logs, and then you're looking into Active Directory for X, Y, Z, and then you have to talk to this emotional person. It's a lot. It's emotionally exhausting. And I always tell people, why did you go into this field? What made you passionate, right? Maybe you can start correlating your passions to your day-to-day.
So for example, people that are like, well, I enjoyed the pentesting side of things, but I didn't want to be a pen tester; I wanted to go hunt for the pen testers. Okay, that's awesome. So let's take your passion and maybe find it in your day-to-day. And this is where you can start maturing your alerts around those passions, because now you can start aligning to them, understanding different tactics, procedures, and behaviors, and creating those more mature, more defined alerts as well.
Or maybe they can work with the pen testers and actually test some of the alerts and see if they are truly value-add. You know, why are you alerting on this if it's just gonna be a bunch of junk? What is the risk if you just completely remove this from your environment? Or are your security tools already preventing this from happening? Why are you creating an alert if you know it's just gonna be blocked? Because now it's just more work for everyone.
I always tell people to also review the alerts that are coming in and see if action needs to be taken. If you're getting just informative alerts, what is the point if your SOC isn't doing anything? Because now this is just noise in the queue, and they're not even gonna look at it. Same thing, I've seen some companies where they'll get specific alerts for everything that was blocked at the firewall.
And I'm like, you can't do that. Pick your specific firewall categories, like C2, phishing, malware, maybe a couple of the adult content categories. Things that are going to be value-add, that could potentially be violating policy, and then look at those. You shouldn't be getting alerts for people that are trying to go to social media in the middle of the day. You're just burning out your team with that. What is the SOC supposed to do? Send an email and be like, hey, I saw you going on YouTube, stop it? Like, what are we doing?
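To make the category-triage point concrete, here is a toy Python filter. The category labels are hypothetical; match them to whatever your firewall actually emits:

```python
# Hypothetical category labels for illustration; real firewalls
# each have their own category taxonomy.
ALERT_WORTHY = {"c2", "phishing", "malware", "adult-content"}

def should_alert(event):
    """Page the SOC only for blocked categories with real risk or policy
    weight, not every social-media block in the middle of the day."""
    return event.get("category", "").lower() in ALERT_WORTHY

blocked = [{"category": "C2"}, {"category": "social-media"},
           {"category": "Phishing"}, {"category": "streaming"}]
print([e["category"] for e in blocked if should_alert(e)])
```

In practice this filtering usually lives in the firewall's own alert policy or the SIEM ingestion rules rather than a script, but the decision is the same.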
Host: Yeah, yeah. So I think you touched on multiple things, like having clear communication, at least with your leader, because often, even though you're feeling burnout, maybe you're not able to share it with your immediate leadership or team so that you can at least get some breather, right? So having that clear line of communication. Then maybe working with vendors so that you can get rid of the informational alerts, as you highlighted. If they are not a risk, then what's the point of having them sent to the queue? Because somebody has to spend time analyzing them, which adds more to the stress than reducing it. And I think one of the areas you touched on earlier is around exclusions: doing exclusions and checking your audit log, whether you have done it before or need to do it, should also help with reducing the queue and the overall stress of SOC analysts. So yeah, some good insights there.
So one of the questions that Jeevan Singh has asked, and which is slightly in line with this, is then how do you scale your detection programs? There are thousands or millions of ways attackers can attack your system.
How do you scale your program and prioritize which directions that you want in the first place?
Reanna: Yeah. And this is a really good question. And honestly, it's common. I always tell people, do a true gap analysis on your environment. And what does that mean? I always go back to the MITRE ATT&CK framework, because that's, in my opinion, the industry standard across the entire globe. They have a free tool that you can download off of their GitHub, and you can essentially deploy it in your environment. It layers the entire enterprise. They also have one for mobile devices, for mobile security, but they have one for the entire enterprise as well.
And so really look at the list of different TTPs that your security vendors will publicly publish (hey, these are things that we typically look for) and map it. And eventually you can even map the different TTPs that you have created in house, so your custom detection queries, as well. And then you can honestly start truly seeing the different types of gaps that you might not be detecting on. And this is where you start asking the question: why are there gaps? Maybe you don't have logging in these specific areas. Maybe the business can't afford things such as a web application firewall, or stuff like that.
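The mapping exercise described here can be approximated in a few lines. In this hedged sketch, the technique inventory and rule names are made up for illustration; a real program would feed in vendor-published coverage and in-house detections:

```python
def gap_analysis(relevant_ttps, detection_map):
    """relevant_ttps: ATT&CK technique IDs seen in threat/vendor reporting.
    detection_map: rule name -> technique IDs that rule covers.
    Returns (covered, gaps) so new detections can be prioritized."""
    covered_all = set()
    for techniques in detection_map.values():
        covered_all |= set(techniques)
    relevant = set(relevant_ttps)
    return relevant & covered_all, relevant - covered_all

# Hypothetical inventory for illustration only.
threat_reporting = {"T1059.001", "T1021.001", "T1566.001", "T1048"}
our_rules = {
    "ps-encoded-cmd": {"T1059.001"},
    "phish-attachment": {"T1566.001"},
}
covered, gaps = gap_analysis(threat_reporting, our_rules)
print("covered:", sorted(covered))
print("gaps:", sorted(gaps))  # e.g. RDP (T1021.001) has no rule yet
```

MITRE's ATT&CK Navigator does this visually with layer files; the sketch is the same set arithmetic underneath.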
So now you can start truly seeing the areas where you need to focus your security program. And this is why defense in depth is really important, because maybe some businesses are really focused on securing the endpoints and the servers, but they forget that network layer as well. So maybe you don't have any network alerts looking for proxies. Maybe you don't have anything really detecting potential inbound RDP, or an FTP server hosted internally.
So these are areas that you can start looking into because now you can start developing roadmaps for your program and really decide where's our true threat. And some people I've seen actually start taking that ideology and compare it to their pen test reports as well and see what were the high risk findings on these pen test reports.
And this is where the detection engineers will go back to their internal MITRE map and go, yeah, we don't have anything over here, so now this is more of a priority because of that finding. I also tell people another way you can look at it is your vulnerability assessments. See what critical and high vulnerabilities are in your environment and whether they can be patched or not. And if they can't, what detection rules or areas do you have in place so that if there was a potential exploit (maybe it's an external-facing database server), are you able to detect it? Are you able to find it? So really see what your vulnerability scans say.
And then I guarantee you there's also gonna be a published exploit associated with it. And that's where you can start recreating it in your own environment and trying to generate potential hits, because that's also gonna be an additional type of finding.
Host: Okay, so there are multiple areas that you test on, like the network, following the MITRE methodology, and also looking at vulnerability management programs, and using all of that to write the detection rules for your organization.
So far we touched on external attackers, what they can do and things like that. Often organizations face insider threats as well. So do you see them differently? What are the biggest challenges you have seen in detecting and even responding to insider threats versus the outside attacks?
Reanna: Yeah. I always like to say there's two different types of people, in my opinion, in the business. There's people that want to believe there's no insider threats in their organization and everyone gets along, Hakuna Matata. And then you have the other set, where everyone's just paranoid that everyone's stealing data or whatever the case is. And again, right, it's going to depend on the organization.
But I always tell people insider threats should always be taken seriously. Start with your security education program, right? When you go through onboarding, I always think it's important that people not only recognize potential phishing emails, but also recognize, hey, my coworker might be high risk, because I saw him in the back of a McDonald's with another person, and I heard him talking about work, about a project we're working on that's not supposed to be shared. There's different types of behaviors.
And I always tell people to really think about what types of sharing platforms are allowed in the business, such as OneDrive or Google Docs or an external hosting platform, because this is where you can start looking for those unauthorized platforms as well, for potential data leakage or data access. And also really understand abnormal user behaviors, right? Do we have John Doe logging in in the middle of the night, and now we're seeing him download 3,000 emails, or we saw him do X, Y, Z, right?
And so I really think it's important, before people even talk about insider threat, to not only build relations with their senior leadership within IT or security, but to also really build those relationships with HR. Because if a business really wants to tackle this, it is so important to align with HR policies and what the legal team has, because again, you don't want to be like, I caught you, and then find out they technically, according to policy, did nothing wrong. So really sit down and have a conversation with HR: hey, our team is wanting to expand roles and responsibilities to insider threat. This is what we mean by insider threat. These are some common things the industry sees. Is this something you guys would potentially be interested in exploring with us? Just because we're trying to really protect our business. It's like when the Nintendo Switch 2 got released: there were so many posts about the Nintendo Switch 2 before Nintendo even released it. My entire X feed was flooded and I was like, well, that stinks.
Host: Hmm hmm… I like that how you phrase it. It's like working with, let's say, HR or other departments to make sure they also understand what does it mean by insider threat. And when you take action, they are not surprised.
Any tips you have how to involve other departments when it comes to building, let's say, detection rules or even involving them in the overall security program, designing the security program?
Reanna: I feel like in a lot of businesses, especially the bigger the business, security often becomes the hidden shadow department. Some people call us basement dwellers. As you can see, I don't stay in a basement; I have very bright, happy sunlight. And that's because some people might be introverts and they don't want to talk about the security program. But you have to sell your security program to the business for it to be successful.
Because again, we don't make revenue, right? We have to provide value-add through incident response, or how we, I would say, deploy our security stack to the business. Because if you think about CrowdStrike and that big blow-up that happened, I guarantee you a lot of security departments were probably walking on thin ice, because that was their security tool. That was their ugly baby that everyone got to see on national news.
But it is absolutely crucial to build those different relationships, especially if you're providing remediation, if you're providing risk insight and you're asking people to constantly fix their technology. It is so important to have a very value-add relationship. And I've seen some departments actually do tabletops with different business units, right? They might do a live pen test in front of them on their application to show, hey, this is how easy it is for me to get access to your server, and these are things you can fix.
And so a lot of the times people essentially need to see it to believe things can happen, because a lot of times it's just a brush off the shoulder, like, yeah, whatever, if it happens, it happens. And it's like, no, dude, this is very, very serious stuff. So having that small insight can carry a lot of weight as well.
Host: Makes a lot of sense. One of the questions that comes to my mind: early on you touched on ML versus GenAI and things like that. So now, since we are in the GenAI era, I have to ask this question. How does AI, whether machine learning or GenAI, help or impact how we do threat detection today?
Do you see it helping? Do you see it affecting? I know that even attackers have access to it, right? Not just the folks who are detecting it. How do you see that impacting?
Reanna: Yeah. And this is something I've been kind of dabbling in on the side with my business: taking these different feeds, so we talked about BleepingComputer and Hacker News, and feeding them into a GenAI model to essentially take that article and create those detection queries for us. And then we can cater and tweak it a little bit so it fits the needs of our business.
But in a lot of ways, you know, we can use GenAI to build our platform, to really even summarize data. Again, right, this is still in the development phase. I have my own Discord server, and Discord offers their own internal AI features and stuff like that. And so I was able to take a BleepingComputer or Hacker News article and feed it to my little AI bot.
And I'm like, hey, how can I take this and give it back to my business? How can I provide value-add and insight? Sure enough, it gave me a whole spiel: hey, this could be a security awareness message; these are typically the industries this threat actor group has been seen targeting; it's really important that you implement X, Y, Z security features. And then you can also ask for the TTPs or the MITRE ATT&CK techniques out of it, and sure enough, it gives them to you.
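As a sketch of the article-to-detections workflow being described, the function below only builds the prompt; the model call itself (Discord's AI features, a hosted LLM, whatever your team uses) is deliberately left out, the `rule_format` default is an assumption, and anything the model drafts still needs engineer review before it goes anywhere near production:

```python
def build_detection_prompt(article_text, rule_format="Sigma"):
    """Wrap a threat article in a prompt asking a GenAI model to summarize
    it, list MITRE ATT&CK TTPs, and draft candidate detection logic.
    rule_format is illustrative; use whatever query language your SIEM speaks."""
    return (
        "You are helping a SOC turn threat reporting into detections.\n"
        "1. Summarize the article for a security-awareness message.\n"
        "2. List the MITRE ATT&CK technique IDs it describes.\n"
        f"3. Draft a candidate {rule_format} rule for each technique.\n"
        "Flag anything uncertain so an engineer can review before deploying.\n"
        "--- ARTICLE ---\n" + article_text
    )

prompt = build_detection_prompt("Example write-up of a phishing campaign...")
print(prompt)
```

Keeping the prompt construction in code like this makes the pipeline repeatable: the same structure gets applied to every article the feed pulls in.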
I think AI is good and bad, right? There's a lot of good and bad things. I think AI is really going to help at least the security community automate this stuff at a faster pace, because threats are evolving so fast now, with how technology is moving, that we need to evolve just as fast. I always say we will probably never be ahead of the threat actors, because we have to know how to defend against the new threats. We have to see the threats first.
But we can at least try to swim in the same lane as them, as close as we possibly can. And I really do believe AI is gonna take us to at least that next step as it's going on.
Host: That's a great example that you gave, right? Like how you used AI for taking an article and getting insights of it, which you can use for your own business to improve your detection engineering. So keeping that in mind, like one of the questions that Jeevan has asked is, what does the future detection engineer look like?
Reanna: Yeah, and I think that's one of the most cool things about detection engineering. It doesn't have to be a direct path. I've worked with some people who were network administrators before coming into cyber, and they really understand how the network flows because now they're able to create queries off of that.
I really think, first and foremost, foundations are absolutely key when we're talking about technical skills, right? Knowing how to parse logs, knowing how machine learning works, knowing the environment in general. And I always say, if you're looking for a certification, Blue Team Level 1 is always a good one. There's also CompTIA Security+ to really understand those foundations as well.
And of course, if you're already working in an environment, if you're not a college student and you're already working, see if your security vendor offers training for their platform, for those custom creations. Most of the time they do allow it. And a lot of the times they will also offer it for free for your team, because again, it's them selling and pitching their product. I always say it's very value-add to have those vendor-customer relations. Absolutely.
And then another thing: if you're really curious, look into the MITRE ATT&CK framework, because I've talked about it throughout this entire podcast. MITRE has free training on their tool as well. They talk about threat intelligence, they talk about how to map TTPs, they talk about all these other things out there. And of course I always harp on soft skills, such as writing and communicating, because you're going to be the one talking to people in the business about why what they're doing is not good, or why this finding is crucial.
And then you're gonna be the one writing the playbook. And I feel like the longer you're in the field, the more technical your writing gets, and you become the person speaking a different language because now you're so technical.
And so I always recommend people write it in a language you could pitch to a 10-year-old, because that's going to be universal; anyone and their mother can understand it. I also encourage people to take their documentation and give it to their entry-level analyst and see if they can understand it. Because if they can't, I guarantee you the rest of your team will not understand it, or even your intern. So those are always my recommendations when thinking about these things.
Host: Yeah, that's a great tip, like writing as if you're explaining it to someone who has no idea or who is just getting started. Yeah, that's a very good tip. Another thing that you touched on, like MITRE tools, yeah, those are absolutely amazing. Organizations should definitely use that. And on the soft skills, I think you highlighted another one is selling, right? Like not only tailoring your conversation, but also selling it to other parts of the organization so that they also see value of the new initiative, let's say, that you want to bring in or the detection engineering policies that you are building.
So one last question. This is from Jeevan Singh. So you spoke about blogs, So there are several organizations who have started building their own SIEM systems, Rippling being one. And they have written a comprehensive guide on how to build SEM and things like that. Do you think industry is moving in that direction? Are there any benefits of thinking in that direction?
Reanna: Yeah, I mean, it also depends, I always go back, it depends on your environment. I think one of the benefits if you really utilize a SIEM is that you're able to scale across all of these different big data sections. And that's essentially what your SIEM is. It's organizing big data to make it into a readable format. And you're taking information literally across from different types of platforms. Like you have your firewall logs, you might have your endpoint logs, your exchange logs, whatever.
Whatever that flavor of ice cream is, it's putting it in a way that you can read across it. And so a lot of the times, when we start mapping a specific attack chain, you might need those different layers of those different types of logs, right? You might need to look at, say, the Windows firewall logs on the endpoint. Did it make it out to our internal firewall? Did we see it on the edge? And so now you're able to bring value back to what you're alerting on, depending on how successful that attack chain truly is. Because again, that's also going to provide insight into how critical that alert could be, right? If it was successful going all the way out through the different layers in our business,
eh, you know, probably not a good day. But if it was stopped at some point, then you're like, okay, so maybe this isn't going to be a higher critical, right? Clearly there's still something malicious in our network, but it's not successful. So now we could probably bring the severity down to a medium or something like that.
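That severity-downgrade idea (the chain is still malicious, but a layer stopped it) can be sketched as a tiny triage helper. The layer names and the two severity levels here are assumptions for illustration:

```python
def chain_severity(stages):
    """stages: ordered (layer, blocked) observations along an attack chain,
    e.g. [("endpoint", False), ("internal_fw", True)].
    Still malicious either way, but if any layer blocked it we can triage
    it lower than a chain that made it all the way out."""
    return "medium" if any(blocked for _, blocked in stages) else "critical"

print(chain_severity([("endpoint", False), ("internal_fw", True)]))  # stopped inside
print(chain_severity([("endpoint", False), ("edge_fw", False)]))     # made it out
```

In a real SIEM this would be a correlation rule spanning the endpoint, internal, and edge log sources, but the scoring decision is this simple underneath.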
So to me, I really think it's absolutely valuable. I always encourage people to build their own home lab to really understand more about logging and how logging works. It's not gonna be something that's going away anytime soon, in my opinion. I perceive this to be a forever ongoing thing. If you do wanna build your own home lab, there's a couple of vendors. Elastic is one, where you can have your own community license, essentially, and just build your own server through your own VMware, however you wanna do it. There's plenty of YouTube videos out there. And then Splunk has their own community license as well.
Splunk can be very expensive in the corporate world, but there's a lot of free stuff. And again, right, reach out to your vendor. If you do have a SIEM in your environment, reach out to your SIEM vendor and see if they offer some sort of security detection training as well, because that might bring insight to you, or whoever really wants to try this in their business.
Host: Makes sense. That's a great way to end the security questions.
And we are at the end of the podcast. But before I let you go, one last question: do you have any reading recommendations? I know that you touched on BleepingComputer, Reddit, and many areas. Any reading recommendations? It can be a blog or a book or a podcast or anything.
Reanna: Yeah, absolutely. So depending on what you're looking for, if you're looking more for self-improvement or leadership types of stuff, I always recommend Brene Brown. She does a lot. One of the recent books I read, I I forgot the title, but one of the recent books I read, she talks about becoming vulnerable and taking risk. so knowing it's okay to fail because that's how you grow as a person and that's how your journey can potentially inspire others.
Host: Is it Dare to Lead?
Reanna: Dare to Lead, yep, that's it. That was a fantastic read; I recommend it for anyone. It brought a lot of insight to me. Now, technical books: there's this one book that's stuck with me since the dawn of time, and it's called The Seven Sins of Software Security. It brings a lot of insight to secure coding. And of course, I read that book back in 2016, so I'm guessing there's gonna be some newer editions from that now, with how we do development. And I always like to promote my newsletter. It's a lot of community-driven content on there: people talking about how they passed a specific exam, maybe a breakdown of an APT or a threat group, or even just some general life coaching or leadership advice.
Host: So what we'll do is when we publish this episode, we'll tag all of these resources.
So thank you so much, Reanna, for joining and sharing your knowledge. I felt like we went quite deep into detection engineering, how to work with it, and how to fix things. We touched on a lot of ideas. So yeah, thank you so much for coming to the podcast.
Reanna: Thank you so much for inviting me. It was fun!
Host: Same here, same here. Thank you to our audience. Thank you so much for watching. See you in the next episode. Thank you.