Auto Remediation on AWS with Lily Chau
TLDR;
- There are two essentials to focus on in cloud security: strong security foundations/baselines as preventive measures, and remediation of drift as a reactive measure.
- The biggest challenge with remediation programs is buy-in from other stakeholders such as Engineering, DevOps, and Leadership. To get that buy-in, show the current MTTR versus the future golden standard.
- Prioritization of remediation correlates with the security maturity of the organization, across areas like misconfiguration, threat intel, and attack surface, among others.
Transcript
Host: Hi everyone, this is Purusottam and thanks for tuning into the ScaletoZero podcast.
Today's episode is with Lily Chau. She's a little blob inhaling copious amounts of food and is often seen riding a warp star. She is a silent spirit using lots of grunts, shouts and cheery elongated monosyllables. She was previously known as a Platypus caretaker. Lily, welcome to the podcast. Thank you so much for taking the time and joining us.
Lily: Yes, thanks, Puru. It's a pleasure to be here with you today. But yeah, so basically, if you didn't get that, I'm a Kirby. And yes, I used to run PlatypusCon, the hands-on-workshops-only hacker conference, back in Sydney. And now I'm your friendly neighborhood security janitor at Roku. So I break things, I fix things. Sometimes I pretend to be a developer.
Host: I love that intro. So before we start, I generally ask our guests to share about their journey. Do you want to highlight your journey? How did you get started, and what do you do nowadays?
Lily: Yeah, so I'll try to be brief because my journey is actually fairly boring.
In short, I'm a hacker who has pivoted to building security tools, because it's frustrating being on the reactive side of security instead of the proactive side, which has a much higher impact on the company.
Host: Yeah, absolutely.
What does your day-to-day look like nowadays?
So we generally ask this question to our guests, and we get unique answers because no two people's days are ever the same. How does your day-to-day look?
Lily: Yeah, so let's see. I wake up and I reply to about 50 Slack messages, but I don't read emails.
So when you're that low on time, you essentially have to choose what to be bad at, and I made a conscious decision to be bad at emails so that I can focus on other things. Then I spend a good amount of time reminding people to fix security issues, or informing people that, hey, I'm introducing this new technical control that might cause disruptions, so fix your systems.
And I'm actually creating a nagging bot to help me out with all of this. I also tackle some tech debt, but most of my time is dedicated to more complex project work that typically spans like two to three months of time.
Host: Okay. So one of the things that I have seen with guests is that some of them, when they wake up, do no Slack and no email until they have, say, gone to the gym or for a run. Others do everything first and then go for a workout. And you have found a balance: you will do Slack, but not email.
So that's a, that's a unique way of starting your day.
Lily: Yep.
Host: So today we are going to talk about what you highlighted: fixes. We are going to talk about remediation, particularly in AWS. So let's dive into it. This year at BSides SF, you presented the talk "Wizbang Lambda Fix: Where AWS Misconfigurations Meet AutoFixit Antics," focusing on remediation, particularly auto-remediation. Before we dig deep into the topic, let's start with some basics.
How does remediation and auto-remediation, in particular, play a role in organizations' cloud security?
Lily: Right, so first I want to dial back and say: when tackling security issues in AWS, you have the first track, which you should absolutely do first, the secure defaults track, and then you have the auto-remediation track. Secure defaults is where you set up AWS Organizations with CloudTrail, GuardDuty, and least-privilege IAM roles. It's where you set the service control policies, and it's where you have thousands of secure-by-default templates.
So if there are any issues, it's in the code, which you can sample, block and correct via drift detection.
The second track is about controlling the manual click operations that happen outside of approved infrastructure-as-code workflows. So we do this via both scheduled and real-time remediation of these manual actions.
Some examples are quarantining EC2s that are manually spun up, and cleaning up the residue left behind by manual deletion of Elastic Beanstalk environments or load balancers, which can introduce subdomain takeover vulnerabilities.
So in a nutshell, I would say auto-remediation is about fixing security issues that arise from manual actions. That's as opposed to the traditional bug fixing where you file a Jira ticket and you're done, or you send a Slack alert and you're done. Auto-remediation is about going beyond Jira and beyond Slack and actually fixing the issue.
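As a rough illustration of that manual-EC2 quarantine track, here is a minimal Python/boto3 sketch. It assumes an EventBridge rule forwarding CloudTrail RunInstances events to a Lambda, and the approved IaC role names are placeholders; it is a sketch of the idea, not the actual implementation discussed here.

```python
# Hypothetical Lambda sketch: triggered by an EventBridge rule on CloudTrail
# RunInstances events, it tags instances launched outside approved IaC roles
# so a later workflow (or a human, after a Slack ping) can quarantine them.
import boto3

ec2 = boto3.client("ec2")

# Assumption: the role names your Terraform / CI pipelines assume.
APPROVED_IAC_ROLES = {"terraform-deployer", "ci-cd-runner"}

def handler(event, context):
    detail = event["detail"]  # CloudTrail record delivered via EventBridge
    principal = (
        detail.get("userIdentity", {})
        .get("sessionContext", {})
        .get("sessionIssuer", {})
        .get("userName", "")
    )
    if principal in APPROVED_IAC_ROLES:
        return  # launched through IaC, nothing to do

    instance_ids = [
        item["instanceId"]
        for item in detail.get("responseElements", {})
        .get("instancesSet", {})
        .get("items", [])
    ]
    if instance_ids:
        # Tag rather than terminate, so humans can review before quarantine.
        ec2.create_tags(
            Resources=instance_ids,
            Tags=[{"Key": "security:manual-launch", "Value": "needs-review"}],
        )
```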
Host: So the first thing you touched on and highlighted is secure by default, right? And I love that, because most companies just try to be reactive: hey, there is something popping up, we'll fix it. But they don't go back and set the foundation right.
So in a way, with secure by default, you're getting the foundation right. On top of that, if there are drifts, you're fixing them using auto-remediation. You touched on IaC as well, so we'll cover that in a bit. But yeah, I love that you started with the basics: you have to have your foundations. A follow-up question that comes to my mind is,
What benefits do I get by doing auto-remediation for cloud?
Lily: Yeah, I mean, essentially you're finally fixing that security issue, but you're also fixing it at scale while streamlining the remediation process. So we don't need to worry about resource and time constraints within teams, and we don't need to worry about developers needing strong security expertise, given the growing number of AWS services and all the possible complex misconfigurations they can have. Yeah, it's tough to be a developer.
Host: Totally, totally. I love that you highlighted the time constraints, because when it comes to developers, their primary focus is to ship new features. On top of that, we are asking them to do security things, and now auto-remediation is another thing, right? So prioritization is often a big challenge for security teams, and if we bring auto-remediation into that bucket as well, that adds another layer of challenges. Now when I am doing the prioritization, let's say I am in the security team,
What factors should I consider to determine which area to focus on for my remediation?
Lily: Yeah, so prioritization is hard, and also knowing what to fix is hard. So I will detour and say, let's say you buy a cloud tool and you get millions of findings, but then you get stuck. Like,
What do I do with these findings?
I'm like, hey, I need to remediate them, but how do I do that? Well,
What do you remediate that will make a significant impact?
because we are time and resource-constrained. And then when you look across the industry, what you start to notice is that the only thing people fix and the only thing that people like jump up and down about are public S3 buckets. So you basically bought a tool to fix public S3 buckets.
So yes, I get that: knowing what to fix and what to prioritize is hard. Or when you do know what to prioritize, you might know how to fix it for one system, but how do you fix it systemically for the whole organization?
So when prioritizing, you need to choose, based on your security maturity, the categories you want to fix. Is it security misconfigurations, cost optimization, threat detection, reducing attack surface, or placing guardrails on policy violations?
Lily: So I'll break down the high-impact ones. For security misconfigurations, you want to focus on preventative controls, or detective controls, for critical vulnerabilities such as server-side request forgery (SSRF), remote code execution, and credential exfiltration. So that's anything related to IMDSv1 in your EC2s, auto scaling groups, AMIs, and EKS. This is the single best thing you can do to reduce the risk of SSRF attacks.
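For illustration, a minimal boto3 sketch of that IMDSv1 remediation for running EC2 instances, assuming the describe-instances metadata-options filter is used to find offenders; auto scaling groups, launch templates, and AMIs would need their own handling.

```python
# Minimal sketch: find EC2 instances that still allow IMDSv1 and require
# IMDSv2 tokens on them. Error handling and exclusions are omitted.
import boto3

ec2 = boto3.client("ec2")

def find_imdsv1_instances():
    ids = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "metadata-options.http-tokens", "Values": ["optional"]}]
    ):
        for reservation in page["Reservations"]:
            ids += [i["InstanceId"] for i in reservation["Instances"]]
    return ids

def enforce_imdsv2(instance_ids):
    for instance_id in instance_ids:
        ec2.modify_instance_metadata_options(
            InstanceId=instance_id,
            HttpTokens="required",      # reject IMDSv1 (token-less) requests
            HttpPutResponseHopLimit=1,  # stop tokens being forwarded off-host
        )

if __name__ == "__main__":
    enforce_imdsv2(find_imdsv1_instances())
```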
And then the next example is Route 53, because you want to prevent subdomain takeover and hosted zone takeover. So one example: if the only records you have for a subdomain are NS records, so nothing else, no TXT records, no CNAME records, no A records, then you just want to delete the whole thing. And that mitigates the risk of a future DNS zone takeover.
Host: Yes, yes, I love that you very specifically highlighted the NS records, right? Often we ignore that: hey, there is only an NS record entry, maybe it's OK. But it makes sense, makes a lot of sense. Please continue.
Lily: Yeah, it's one of the few examples where, yes, you should clean up your things because they can lead to a vulnerability. And it's so weird: it's not used, but you don't know that, and that's exactly why you need to clean it up.
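A rough sketch of that Route 53 hygiene check: list a hosted zone's record sets and flag subdomains whose only records are NS records. The zone ID and apex below are placeholders, and whether to delete automatically or just alert is a policy choice.

```python
# Flag subdomains in a hosted zone that consist solely of NS records
# (dangling delegations), which are candidates for deletion.
import boto3
from collections import defaultdict

route53 = boto3.client("route53")

def find_ns_only_subdomains(hosted_zone_id, zone_apex):
    records_by_name = defaultdict(set)
    paginator = route53.get_paginator("list_resource_record_sets")
    for page in paginator.paginate(HostedZoneId=hosted_zone_id):
        for rrset in page["ResourceRecordSets"]:
            records_by_name[rrset["Name"]].add(rrset["Type"])

    flagged = []
    for name, types in records_by_name.items():
        if name.rstrip(".") == zone_apex.rstrip("."):
            continue  # the apex legitimately has NS and SOA records
        if types == {"NS"}:  # a delegation with nothing else behind it
            flagged.append(name)
    return flagged

# Example with a placeholder zone ID:
# print(find_ns_only_subdomains("Z0123456789EXAMPLE", "example.com"))
```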
The other category is threat detection. For that, there is always a possibility of false positives, so you want to make sure you require a Slack user response before you move your workflow on to remediation. And you want to focus on threats that are high signal, low noise.
So some examples: CloudShell. If you detect someone downloading secrets, more likely than not it's an attacker. Honeypots: in our environment, we place AWS honeypot keys everywhere, so when they are triggered, it's because of a human, and that EC2 should be quarantined via security groups.
And one thing to note is that indicators of compromise for threat detection quickly go out of date, so it's important to focus on the attacker tactics that cause the most damage, such as credential exfiltration.
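A minimal sketch of that containment step, assuming the detection side (a honeypot key trigger plus a Slack confirmation) has already decided the instance is compromised, and assuming a pre-created quarantine security group with no inbound or outbound rules; the group ID is a placeholder.

```python
# Cut a compromised instance off the network by swapping all of its security
# groups for a deny-all quarantine group, while keeping it for forensics.
import boto3

ec2 = boto3.client("ec2")

QUARANTINE_SG_ID = "sg-0123456789abcdef0"  # placeholder, pre-created per VPC

def quarantine_instance(instance_id):
    # Replace every security group on the instance with the quarantine group.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        Groups=[QUARANTINE_SG_ID],
    )
    # Tag it so responders can find it quickly.
    ec2.create_tags(
        Resources=[instance_id],
        Tags=[{"Key": "security:status", "Value": "quarantined"}],
    )
```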
Host: So I love how you broke it down, right? Misconfigurations, threats, and then subdomain takeover, attack surface, and things like that. And I was thinking it would be just, I'll just click a button and everything is fixed, but it looks like there is more work. So since there is a lot of work to be done, right, when it comes to remediation, for organizations who are investing in it,
How do they measure the impact, and even the return on investment, of these remediation programs?
Lily: Yeah, so you basically want to compare the results of the two tracks. So the secure default track and the auto remediation track. So you want to measure the increase in secure default setups, which should go up ideally. And simultaneously, you want to observe a decrease in policy violations from your auto remediation of manual click operations. So that's kind of the dual-track approach to ensure that you're comprehensive and effective enough for the cloud.
So for track one, the metrics: you want to track how many bug classes you've eliminated and the percentage of coverage across your code base. For example, let's say you've remediated the bug class cross-site scripting across your code base. You also want to track the adoption rate of your infrastructure-as-code templates with predefined guardrails, and of the predefined CI/CD templates that scan for and block critical misconfigurations from being pushed to production.
And I think it also helps if you have a bug bounty program or penetration testing engagements to help verify that: hey, the system was compromised, but in that compromised state, in that virtual machine, in that pod, with that gained user role, can you actually do anything?
Host: That's a good way to validate both areas, right? For secure by default, you see an increase in the number of policies that you are defining, or you're getting better at the foundation. At the same time, is the number of remediations going down?
Because if they continue to go up, that means your defaults are not where they should be, right? makes sense.
You touched on IaC pipelines a little bit, so I want to dig deeper into that area. In June, I was at fwd:cloudsec, and there was a birds-of-a-feather session, which is an off-camera discussion session, around why auto-remediations can't work. And one of the points discussed was: as an organization, we follow IaC. Like you touched on, you should do more things with IaC. So the question was, as an organization, we follow IaC for managing cloud resources.
Why should we focus on auto-remediation in that case? Shouldn't we be investing more on the IaC front rather than the auto-remediation front?
What's your take on that?
Lily: Yeah, so as I was hinting at, tackling security in the cloud does require a dual approach: focusing on track one, security through defaults and guardrails, alongside track two, which is addressing the policy violations, the non-conformant resources, via auto-remediation. So yes, you should definitely be investing heavily in infrastructure as code as part of track one.
Track two is about auto-remediating manual click operations outside of infrastructure as code. We see it all the time: edge cases where technical controls are too strict, or technical controls that don't cover every scenario. So let's take a really simple example.
Like, why are people spinning up EC2s manually? We have the secure-by-default Terraform template and the predefined CI/CD template for people to spin up EC2s. So why are they still doing it manually?
And then you hear things like, I just want to spin up something really quickly, like for test purposes, or I'm new to AWS, I'm more familiar with Docker and Kubernetes, and I still don't understand this service mesh golden standard that you're pushing on me.
So there are always edge cases, even for things you don't think about; even encryption everywhere has an edge case.
So basically, if you're going to spin up an EC2, which you know is going to happen, you just want to make sure you are doing it securely. So while you learn the golden standard, the service mesh and make a plan to migrate, let's make sure the manual click operations don't introduce vulnerabilities that we can't live with. That's the track two that I am talking about. So remediating or reducing the impact of vulnerabilities introduced via manual actions.
But automation is also about containment of the most common compromise types: quarantining an EC2, quarantining credentials, applying public access blocks during a compromise. And automation is also about remediating inactive resources.
So removing inactive IAM roles after a predefined time period, removing unused privileges, and disabling any kind of security credentials that have been found to be compromised or leaked.
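Two of those containment actions sketched with boto3; the user name, access key ID, and bucket name are placeholders supplied by whatever detection fired, so this is illustrative rather than a drop-in remediation.

```python
# Deactivate an access key reported as leaked, and apply the S3 public access
# block to a bucket during a compromise.
import boto3

iam = boto3.client("iam")
s3 = boto3.client("s3")

def disable_leaked_access_key(user_name, access_key_id):
    # "Inactive" keeps the key for the investigation but stops it working.
    iam.update_access_key(
        UserName=user_name,
        AccessKeyId=access_key_id,
        Status="Inactive",
    )

def block_public_bucket(bucket_name):
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
```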
Host: Ah… So sorry continue.
Lily: Yeah, so I was going to say, for track one, there's also a piece for auto-remediation as well. You do need to automatically correct deviations from the known-good state in production and restore the original configuration defined in your infrastructure as code.
And that applies to correcting IAM drift in AWS and also correcting IAM drift in Kubernetes. So, you know, you can have really odd drift on both sides, technically.
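One common way to implement that kind of drift check is to run terraform plan on a schedule and treat exit code 2 ("changes present") as drift. The sketch below assumes a local working directory and leaves the alerting or re-apply step as a placeholder.

```python
# Scheduled drift check: `terraform plan -detailed-exitcode` exits 0 when the
# live state matches the code, 1 on error, and 2 when changes (drift) exist.
import subprocess

def detect_drift(workdir):
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    if result.returncode == 2:
        # Drift detected: open a ticket, ping Slack, or trigger `terraform apply`
        # to restore the configuration defined in code.
        print(f"Drift detected in {workdir}:\n{result.stdout}")
        return True
    result.check_returncode()  # raise on real errors (exit code 1)
    return False
```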
Host: Hmm. So you highlighted that even when you have the best setup, like secure by default, IaC templates, everything in place for your developers, sometimes they might spin up servers that are slightly out of compliance. I'm using the term compliance broadly, but out of your standards, right? Baseline standards.
How can we avoid that, so that your developers don't even do it? Do you think that's not possible, because sometimes they want to move fast and they just do it, or are there better approaches that organizations can take so that the number of such issues is reduced?
Lily: Yeah, so the thing is, it's always going to happen. So for example, for us: if it's a production AWS account or GCP project, then we only allow changes via our infrastructure as code, so you can't push manual changes.
And the only way you can have manual updates is in sandbox environments. So then what ends up happening is people put their production workloads in sandbox environments. Or, in the production environment, you still need break glass in case something happens, something breaks down, so it's like, okay, you need to jump into this break-glass account to restart or edit something manually. So unfortunately, it's what you were saying: you do need to strike a balance between security and speed of features.
Host: Right. I was hoping that we would get there someday, where there is no manual work and it's all IaC, but I guess we are far from it right now. Another question that was raised during that session was that auto-remediation cannot be applied to all types of assets and all types of misconfigurations, especially when, let's say, AWS rolls out a new service; by the time you write the auto-remediation and things like that, it takes a bit of time to catch up.
And if that's the case, if we are not 100 % covered, should we look at better alternatives than auto-remediation?
Lily: Right. So this kind of goes back to needing both tracks: the track one secure defaults and the track two auto-remediation for manual actions.
So track one can cover all your assets if they are in infrastructure as code, and that's the aim of track one. The aim of track two is to provide good enough coverage against high-impact compromise.
So you don't want to waste time remediating low-hanging fruit as part of track two, for example. You also want to focus on issues you know have high true-positive rates. So to kind of reiterate: ultimately, prevention significantly reduces the gap between the large attack surface and our limited remediation capabilities. But since there will always be edge cases, auto-remediation is there, and it's there to activate in those exceptional circumstances where immediate corrective action is necessary.
Host: Ah, that makes sense. That makes a lot of sense. A follow-up question that I have on that is: let's say I'm now convinced that I need to invest in auto-remediation, and as an organization we want to start building the auto-remediation framework.
What should I be prepared for?
What kind of challenges I might face while designing the auto-remediation program or even while working on it?
And if you can provide any examples that you might have seen at Roku, that will help connect the dots.
Lily: Yeah. So my biggest challenge was probably company buy-in, and I'm sure I'm not the only person in the industry who was getting pretty fed up with knowing all the issues in your environment while nothing changes year over year. It's the same vulnerabilities, and they're not going down; if anything, they're going up. And when new incidents happen, it's the same issue coming up over and over again.
And yeah, despite all your best efforts, like writing a super detailed Jira ticket outlining exactly how to remediate step by step, the timely resolution of that issue is still a problem. So this is where you collect metrics to show how horrible our mean time to remediate is. And with that, it becomes a lot easier to convince people:
Hey, maybe you should give the security team a shot. You know, let us take the reins and start applying remediation ourselves because yeah, security professionals are uniquely positioned to identify, understand and also remediate those configurations.
Another challenge is balancing prevention versus auto-remediation, or track one versus track two. So yes, prevention is the only thing that can significantly close the gap between your attack surface and your small remediation capacity.
So yeah, at first it may seem like the best approach is to deploy as many auto-remediations as possible to maximize your coverage. But you should be focusing on building preventative measures and remediations that cover enough of the attack surface that it's unlikely a developer will make a beginner mistake, and unlikely an attacker will find a path through your network without triggering at least one of your alarms. So yeah, ideally, remediations shouldn't fire unless something goes really, really wrong.
Host: So you highlighted buy-in, right, not so much technical controls and things like that. What other stakeholders do you see yourself interacting with as part of this process?
I thought it would be just the security team, right? I'll just work on auto-remediation scripts and we are done. What other stakeholders do I need to work with?
Lily: Yeah, so the security team, the cloud infrastructure team, and all your developers across the organization. So the security team will drive the success of the auto remediation efforts. So outlining the streamlined remediation process for each issue.
The cloud infrastructure team will ensure that the Terraform, StackSets, IAM, and least privileged permissions are set up and configured to perform automation across all AWS accounts.
And so for developers, you essentially want to get their buy-in because it's their workflows that are being impacted. So yeah, the goal is to enable them to ship code faster. But with the Track 1 secure defaults, they can ship that code faster without needing to worry about infrastructure and without needing to worry about security. So yeah, basically making the secure way easy for them and the insecure way harder for them to do.
Host: Love that. So we spoke about challenges. We spoke about who are the stakeholders. Now, since you have implemented this,
What recommendations do you have for organizations who want to start working on auto remediation?
Lily: Yeah, so my recommendation is to collect metrics for the mean time to resolve, and to collect metrics for the track one secure defaults in comparison to the track two auto-remediation. You also want to show the company that you are moving toward the golden standard, and that could just be the generic secure-by-default standard, or Istio microservices as the standard, or some other form of golden standard.
So then you are basically showing progress in three areas. So for new services they automatically adopt that service mesh architecture for better availability, scalability, security. For existing services, you want to show that you're migrating to that golden service mesh architecture while ensuring uninterrupted service for business continuity.
And then you have the progress on everything else, new and existing, that bypasses your service mesh adoption. So that's spinning up EC2s manually, containerizing your applications with Docker and Kubernetes, or using AWS Lambda or other platforms-as-a-service, such as Heroku, to deploy applications.
Yeah, I mean, while it's technically possible to securely deploy applications in those non-approved environments, it's uncommon. And more often than not, such practices are manual and they introduce security vulnerabilities.
Host: Right, right. I like how you structured it. First of all, define where you want to get to, the golden standard. And then you have metrics so that you find out your current state and how you are progressing; that way you know whether you are getting value out of the process or not, and what type of value you are getting out of it as well. So yeah, I really like how you structured it.
So one of the things you mentioned in the prioritization is around threats. And cloud security threats are constantly evolving. So keeping that in mind,
How can auto-remediation solutions for AWS keep up with the latest security vulnerabilities?
Like, you touched on IMDSv1. So similarly, let's say some other vulnerability comes up. How do you stay up to date, and how does the auto-remediation program stay up to date with these things?
Lily: Yeah, yeah, threats are indeed constantly changing, so staying ahead requires a dynamic and proactive approach. To keep the auto-remediations updated, you need, number one, continuous monitoring and threat intelligence integration. You want to regularly integrate your threat intelligence feeds and security advisories into your monitoring systems, and that will help in identifying new vulnerabilities and attack patterns.
Number two is AI-driven analysis. So you need to leverage AI and machine learning to analyze logs and detect anomalies. And that will help determine with a really high confidence that a compromise or a zero-day has happened. And those AI insights can also help inform your automation strategies.
And the third is regular updates and patching. Ensure that your auto-remediation scripts are regularly updated to include fixes for newly discovered vulnerabilities.
Host: Speaking of AI, how do you see the future of auto-remediation? Do you see AI helping us a lot, let's say in writing auto-remediations or even enforcing some of the baselines? How do you see AI being used in the remediation space?
Lily: Yeah. So I think the future of auto-remediation on AWS is well-positioned to become much more sophisticated with AI. Advanced AI and machine learning will play a greater role in identifying and remediating security issues in general. Those future auto-remediation solutions will not only be able to detect anomalies but possibly predict potential threats based on patterns and behaviors, which allows for preemptive auto-remediation actions.
I think we can expect more self-healing systems. So basically where you automatically correct vulnerabilities and configuration issues without human interventions because of AI and those systems can use predictive analysis to anticipate and address problems before they even become threats.
Host: That would make our life super easy, right?
Lily: Yes. I think I also envision better integration with the DevSecOps pipeline. It will play a more important part in the pipeline, ensuring security is baked into every stage of the development cycle, and it will streamline all the processes of deploying secure applications and infrastructure.
And probably one last thing is policy-driven remediation, so that your organization can define the policies and they would just be enforced automatically.
Host: Yeah, that makes sense. So we spoke about how AI will make our lives easy, right? We'll get better remediation, policy-driven remediation, and some of the self-healing aspects of it. At the same time, there are challenges with AI as well, right? There could be more security risks coming from AI. So what do you think about that? Do you even worry about it? Or are you like, no worries, AI will do it for me?
Lily: Right. Yeah. So I guess overall I am very excited about AI as it applies to security. So yeah, maybe I'm not too worried when you can have so many more benefits. I mean, we can have enhanced threat detection, where AI can analyze your traffic, your user behavior, and your logs to identify anomalies that may indicate a security threat.
And that means quicker identification and remediation of potential breaches. The systems will be more adaptive, because maybe AI can learn from past incidents and adapt the security measures accordingly. So as new threats emerge, the system becomes more resilient and continuously improves its defenses. And one of the challenges in security operations is the high number of false positives.
So AI can significantly reduce these by better distinguishing between benign anomalies and actual threats. That reduction would help security teams focus on genuinely critical security issues, and maybe automate the incident response process from detection to containment to remediation. So yeah, basically with AI, I think organizations can shift from a reactive to a proactive security posture. Predictive analytics may also be able to forecast potential attack vectors and allow teams to better adapt their defenses before an attack occurs.
Host: I'm glad that you're not worried about AI taking over everything and driving us nuts.
Lily: I mean, how cool would it be if that were the case? I think it would be amazing.
Host: Let's hope that doesn't happen in our lifetime.
Lily: Yeah.
Host: So yeah, that brings us to the end of the security questions. One last question around security: generally, most organizations follow some sort of cybersecurity framework, right? So I'm curious, is there a particular cybersecurity framework that you have used or worked with, where you like the policies and the overall program? Any thoughts on that?
Lily: Hmm. So I guess a good starting point, and a boring one, is the NIST framework as a base reference point. If you know nothing, it provides a good holistic view over all areas, from detecting to responding to and recovering from incidents.
However, it is very important to customize it to your company and take a more risk-based approach to the framework. And also pay attention to recent significant threats, because these are what need to be addressed first. So I would say, yeah, NIST has been helpful to me as a base roadmap for improving our security posture, but you do need to be flexible and adapt to new threats as they come out.
Host: Love that. Love how you structured it. Yeah, thank you for that. So yeah, that brings us to the end of the security questions.
Rating security practices
This is where you have to give a rating between one and five, one being the worst and five being the best. You can add context on why you gave a particular rating. So let's start with the first one. The first one is:
Conduct periodic security audits to identify vulnerabilities, threats, and weaknesses in your systems and applications
Lily: Okay, so I'm interpreting your security audits as manual. And for that, I would rate that as a three out of five.
So basically my stance is we want to solve as many problems as possible with secure defaults. And in order to get the best value from a security audit, it needs to identify advanced or complex bugs not like the low and medium stuff, because yeah, we're not trying to identify every bug. Bugs will always exist, especially given the rapid pace at which new threats emerge.
And we don't have the resources to fix every bug that an audit is going to identify. So unless the manual security audits are finding really complex, advanced bugs, I would deprioritize them.
Host: Okay, makes sense. The second one is provide training and awareness programs to employees to help them identify and respond to potential security threats.
Lily: Yeah, so I would give this a two out of five. So while it's a widely common practice, there are limitations to this approach. So, okay, when it comes to phishing, like users will always be vulnerable. Like even the most seasoned security professionals can fall victim if they get caught on a bad day. So yes, it's essential to teach security basics to users.
So that's knowing how to scrutinize a domain, verify the sender's address, and be wary of unexpected requests for sensitive information or money. But beyond those basics, detecting sophisticated security threats requires a level of technical knowledge that most users don't have. So expecting users to be able to identify and respond to them is a little unrealistic.
So I think this is where it's more important to have technical controls. Things like antivirus, email filtering, and automated threat detection systems are far more reliable at identifying and mitigating security threats continuously and at processing large amounts of data, where a human can easily miss something.
Host: Yeah. Yeah. One example is the spam filters in email, right? We often don't even see how many spam emails there are, and if we did get those emails, we would not be able to distinguish whether they're spam or not, because attackers are super smart as well. So yeah, totally: training plus tooling need to go hand in hand.
The last one: DevOps practices are needed to move fast and deploy code to production; we'll focus on security later, security is not the most important thing right now. What's your rating?
Lily: Haha. So, I would probably give this a two out of five, if you already got my drift. Yeah, it's understandable that security practices might slow things down, but it is important to strike a balance between that speed and security to ensure long-term success and stability.
So to balance security and speed, there are a few strategies that don't hinder the developer that much. A shift left, to incorporate security early in the development process. Automation, in terms of scanning and blocking security checks in the CI/CD pipeline. And then your secure-by-default code templates. If you bake all your guardrails into the code templates and into the pipeline, developers can just focus on shipping code in the application layer without needing to worry about security and without needing to learn about the infrastructure layer.
Host: So that's making their life easy, right? They're just focusing on their own work in a way. They're not spending a lot of cycles thinking about security best practices and doing things like that because those are abstracted in a way for them. Yeah, love that. So that's a great way to end the section.
But before we end the episode, I have one last question, which is any recommendation that you have, reading recommendation, it can be a blog or a book or podcast or anything.
Lily: Okay, so I'm going to go a little bit out of the box and recommend a book outside of security. That's Dare to Lead by Brené Brown. The book talks about the idea that effective leadership, and also good professional and personal relationships, require vulnerability and empathy.
It's a very practical book, giving actual insights and strategies with concrete examples, which is kind of what we spoke about previously: without someone listing specific examples, it's really hard to know what to fix, what to think about, and what to prioritize.
Host: Thank you so much for sharing this book recommendation. I'm a huge fan of Brené Brown as well. I watched her TED Talks; the first one was, I think, The Power of Vulnerability, right? And that was so impactful. When I saw it, so many things made sense, because as humans, we sometimes don't think about vulnerability that much.
We don't want to feel vulnerable even. So it talks about how it's a superpower and not something you should be afraid of. So thank you so much for sharing that book as a recommendation.
Lily: Yeah, thanks. I can't believe it... I was very surprised that you know of her, and agree, so I'm so glad. We are not robots in this industry.
Host: Yeah, absolutely. Yeah, totally. And yeah, that's a great way to end the episode as well. Thank you so much, Lily, for taking the time and joining with us and sharing your knowledge and insights with us.
Lily: Thanks for having me!
Host: And to our audience, thank you so much for watching. See you in the next episode. Thank you.