The Secrets of Product Security with Anshuman Bhartiya
TLDR;
- The key to application and product security is understanding the risk appetite of the organization. And to understand this, the organization needs a unified approach to risk quantification that utilizes context.
- A key trick for incorporating security practices into the SDLC is to work with secure-by-default libraries and frameworks: golden images, IDE plugins, Git wrappers, etc. This ensures insecure code does not fall through the cracks.
- Culture is key to application and product security. One key aspect of it is having empathy towards end users and engineers. This helps with building relationships with other stakeholders and building relatable and successful security programs.
Transcript
Host: Hi, everyone. This is Purusottam, and thanks for tuning into ScaleToZero Podcast. Today's episode is with Anshuman Bhartiya. Anshuman currently works as a tech lead of application security team at Lyft. Prior to that, he has worked at product-based enterprise companies like EMC, Intuit, and Atlassian.
He is also the co-host of the Boring AppSec podcast, and you can learn more about him from anshumanbhartiya.com! Anshuman, thank you so much for joining in today's recording.
Anshuman: Yeah, happy to be here. Thank you so much for inviting me.
Host: Absolutely. Before we kick off, do you want to add anything to your journey, on top of what I just shared?
Anshuman: No, I think that was pretty comprehensive. I'll keep adding bits and pieces as we go through our conversation.
Host: Sounds great. So before we dive into the security areas, one of the things we do with all of our guests is ask them what a day in their life looks like. And we get unique answers based on the position you are in, the team you work with, the domain you are in, and things like that. So yeah, what does a day in your life look like?
Anshuman: Yeah, as a tech lead of the AppSec team at Lyft, my current day-to-day is pretty much all over the place. I spend anywhere from one to three hours in meetings, and these meetings are generally with engineering peers, with other security organizations within Lyft, and with our own team members. I also spend some time strategizing about what we're going to be working on this year, next year, the next three years, and so on and so forth.
I also run my own projects, in the sense that I'm still an IC at the end of the day. I don't really manage a team, even though I'm the tech lead of one, but I have my own projects, and I try to unblock my team as much as I can. Running my own projects means I have to go through all the phases of requirements collection, speaking with engineering peers, and working with other folks as well. So yeah, it's all over the place. I also do some industry outreach, as you've already seen. I have my own blog, and I do some speaking as well.
So yeah, there's no fixed schedule as such. I kind of go day by day, and that has generally worked well for me so far.
Host: Awesome. So sounds like you do a lot of collaboration with other teams and it's unique that you do IC and also you are a tech lead, right? So it's a unique position. Hopefully we can touch on some of those ideas, what challenges they bring and what benefits they bring as well, right? So today our focus is application or product security.
So you have worked at many organizations with various positions in cybersecurity. Before we kick off, what does product security mean to you? And there are many other terms also which are used, right? Like application security, general cybersecurity, InfoSec. So how do you define product security?
Anshuman: Yeah, I think that's a great question, and something I've seen folks in the industry ask quite a lot in the past year or so. And I think that's happened because in the modern world, software is basically code, right? It gets deployed continuously, with engineers shipping and deploying hundreds and thousands of PRs on a daily basis.
So the gap between software being built and software being deployed has reduced drastically compared to how it was before. I feel that's also why the term product security has evolved, right? In my opinion, what product security means depends on the company you're working for. And I'll give you an example.
I worked for a healthcare startup, Thirty Madison. At Thirty Madison, we used to actually build healthcare products. We had treatments for different kinds of chronic healthcare conditions, and we had a fully functional supply chain covering where these things would be manufactured and how they would get shipped. We also had a technology platform where we offered these user experiences: whoever wanted our products would come to our website and go through the entire process, right?
So in a company like Thirty Madison, the definition of product can mean the applications where we host these services, the infrastructure, or it could also mean the physical products themselves, right? So I think it kind of depends. And again, I've been with Lyft for the past couple of years, and Lyft's product is basically the app, plus a few other services, right?
So the term product is loosely defined here. So I feel like it kind of depends from company to company, but the way I see it is how do you secure something that you offer to your customers? Now, it can be a physical product, it can be a software product, and how do you secure the entire process from the point where things are getting built to the point where it's actually being served to customers?
So it could mean that you cover application security aspects, where you focus on secure coding principles and activities like threat modeling, design reviews, and so on. But it could also mean that you're focusing more on the cloud infrastructure where you're actually going to deploy these things. So yeah, security is integral in pretty much every phase, and depending on what your company is doing, you could define product in different ways.
Host: That's a very unique way of looking at product security. I had a slightly different thought: any product that you're building, it has to be secured. But it's not just about the product itself, right? If you have infra, if you have services, you have to keep all of those in mind as well.
So do you think because of this gap in understanding, there are any challenges while implementing the security? Do you ever see that?
Anshuman: Yeah, I think one of the biggest challenges, at least in my opinion, is the understanding of risk to an organization. What I mean by that is that the vulnerabilities surfaced by an application security program are fundamentally very different from the vulnerabilities surfaced by a cloud security program. In the cloud security world, you have containers and all of this fancy infrastructure, and these containers have CVEs.
CVEs are basically identifiers for vulnerabilities that have been found in third-party libraries or something else you use in your container. Now, if you compare that with application security, application security vulnerabilities don't necessarily have CVEs. If you're building something internally and you introduce an injection vulnerability, let's say a SQL injection, it doesn't necessarily have a CVE. So how do you calculate the risk, which is impact times likelihood, for each of these vulnerabilities when they're fundamentally of different kinds?
So I think building a unified risk framework in an organization, where you have a shared understanding of the risk brought in by either application security vulnerabilities or cloud security vulnerabilities, has been a challenge. And generally, the SLAs for how soon you expect engineering teams to address these vulnerabilities are also different. So in my mind, that is essentially one of the biggest challenges in the vulnerability management space, where there are different kinds of scanners, right? You have application security scanners, you have cloud security scanners, and all of these scanners produce so much output: true positives, false positives.
And if organizations don't have a fully functional pipeline with a good way of funneling all of these vulnerabilities into one place and then doing the correlation, prioritization, and contextualization, it becomes really challenging, because you have no idea which vulnerability to address first and what the SLA should be.
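A unified, context-aware risk framework of the kind described above can be sketched as a small scoring function that normalizes findings from different scanners onto one scale and maps the score to an SLA. This is a hedged illustration, not Lyft's actual system; the fields, weights, and thresholds are all assumptions:

```python
from dataclasses import dataclass

# Hypothetical normalized finding: CVE-based container findings and
# custom AppSec findings (e.g. an internal SQL injection) both map
# onto the same fields, which is what makes the scoring unified.
@dataclass
class Finding:
    source: str          # "container-scan", "sast", "bug-bounty", ...
    impact: int          # 1 (low) .. 5 (critical)
    likelihood: int      # 1 (unlikely) .. 5 (actively exploited)
    internet_facing: bool
    handles_pii: bool

def risk_score(f: Finding) -> int:
    """Risk = impact x likelihood, boosted by business context."""
    score = f.impact * f.likelihood          # base score, 1..25
    if f.internet_facing:
        score += 5                           # reachable by attackers
    if f.handles_pii:
        score += 5                           # user/regulatory impact
    return score

def sla_days(score: int) -> int:
    """One SLA ladder for every scanner, applied to the unified score."""
    if score >= 25:
        return 7
    if score >= 15:
        return 30
    if score >= 8:
        return 90
    return 180

# A container CVE on an internal service and a bug-bounty SQLi on an
# internet-facing, PII-handling service end up on the same scale.
cve = Finding("container-scan", 4, 2, internet_facing=False, handles_pii=False)
sqli = Finding("bug-bounty", 5, 4, internet_facing=True, handles_pii=True)
print(sla_days(risk_score(cve)), sla_days(risk_score(sqli)))  # 90 7
```

The point of the sketch is the shared ladder: once both kinds of findings map to one score, prioritization and SLAs stop depending on which scanner produced them.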
Host: Yeah, that's an important distinction you made: if you are using third-party libraries, you have CVEs coming from them, versus your own programs, where you may not have CVEs. How do you prioritize them? Right.
So, yeah. Now, a follow-up or related question: today, we think development, like engineering, is relatively easy. Compared to that, secure development, following security best practices while doing the engineering, is still a challenge. In that case, how do you work with, let's say, engineering teams to prioritize security features or security fixes in the product development cycle? Especially, let's say, if you are at Lyft and you guys are planning to launch a new capability in Q1.
But as a product security team, you think there should be five security capabilities added to the product as well. So you are competing in terms of priorities, right? How do you balance that out?
Anshuman: Yeah, and I think that's pretty much what a skilled security engineer should be doing. In my opinion, the thing is, there are going to be bugs, there are going to be vulnerabilities, software is going to be insecure, right? The ideal world, where you have some kind of security review on each and every thing that gets shipped, whether it is a small feature or a big major release of a platform or product, is just not achievable or practical in any organization, small or big.
So I think at the end of the day, it all boils down to the risk appetite. How much risk are you willing to accept as an organization? If you're aligned in that understanding with all your stakeholders, whether it is the CEO, the CISO, or different engineering peers, once you have that understanding of how much risk is OK, you can start prioritizing: what are some things that are worth looking into, what are some things that require manual intervention, versus what are some things where, even if there's no proper security review process in place, it's OK because it's not important to the organization.
So I think that is one framework is you start with getting alignment on the risk framework from all your stakeholders. And then you sort of apply that framework on all of the different resources, how you discover vulnerabilities and how you try to remediate them. I don't think it's an easy problem, to be honest, because things fall through the cracks all the time. I think culture plays a huge role as well.
So I'll give you the example of what we have at Lyft, right? At Lyft, the AppSec team is only four people. So we have only four AppSec engineers responsible for the security of all the code. How we handle such huge workloads is, obviously, we have to scale ourselves; it's just not possible for us to be involved in each and every process. And how do we scale ourselves? I think it's important for us to take a step back and see: within Lyft as an organization, there are multiple sub-teams and multiple organizations. Which of these organizations tend to bring the most risk to Lyft? We try to focus our efforts more on that particular piece.
After that, we try to consider secure-by-default frameworks. First we have to understand: what programming languages are being used at Lyft? What does the technology stack look like at Lyft? Once we have that information, we ask: are there any secure-by-default libraries or frameworks we can use that will make the security problem go away completely, so the engineers don't have to worry about whether each piece is secure?
As they start writing code, we have built some secure-by-default libraries and frameworks, and they're expected to use those frameworks to get the security built in. So I think the combination of a culture where engineers are generally very proactive in reaching out to the security teams and don't consider security a blocker, combined with understanding the risk and getting alignment from all the stakeholders, seems to do the job reasonably well, right? And it also seems to be a way you can essentially scale yourselves. You can focus on things that actually require your intelligence versus
things that can be automated, things that are mostly workflow related, and so on.
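As a hedged sketch of the secure-by-default idea (illustrative only, not Lyft's actual libraries), a thin database wrapper can simply refuse string-built SQL, so engineers get parameterization whether or not they are thinking about injection:

```python
import sqlite3

class SafeDB:
    """Illustrative secure-by-default wrapper: callers can only run
    parameterized queries, so SQL injection via string concatenation
    is structurally impossible at this layer."""

    def __init__(self, path: str = ":memory:"):
        self._conn = sqlite3.connect(path)

    def query(self, sql: str, params: tuple = ()):
        # Reject queries that inline literal values; user input must
        # travel through `params`, never through the SQL string.
        if "'" in sql or '"' in sql:
            raise ValueError("inline string literals not allowed; "
                             "use ? placeholders with params")
        return self._conn.execute(sql, params).fetchall()

db = SafeDB()
db.query("CREATE TABLE users (name TEXT)")
db.query("INSERT INTO users VALUES (?)", ("alice",))
print(db.query("SELECT name FROM users WHERE name = ?", ("alice",)))
```

The design choice is the key point: the insecure path is removed from the API surface instead of being documented as a caveat, which is what lets a four-person team scale.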
Host: Right. So yeah, it was surprising when you said that you have a very small team at Lyft. I was expecting at least like 20, 30 people who are managing the security.
Anshuman: I mean, I just want to clarify: the security org is big. The AppSec team is only four people. Yeah. Yeah.
Host: Yeah, the AppSec team. I wanted to stress AppSec. Even the app and the services have a lot of impact, so I was hoping you would have a bigger team, but I can understand.
So one of the things I like that you highlighted is injecting some of the security in the SDLC process so that developers or engineers do not have to worry about them, like secure libraries or frameworks. Similarly, do you have any other tools in your tool chest that you use to reduce the burden on the development team and at the same time have a secure SDLC process?
Anshuman: Yeah, outside of building secure-by-default libraries and frameworks, one thing we've tried to do is get as close to the developer lifecycle as we can. In other words, we have built certain wrappers around Git, so that as soon as our engineers try to commit some code, the wrapper scans the code for secrets and whatnot, right? And there's some other proactive scanning we do to make sure of the basic stuff: we don't want our engineers to even accidentally commit secrets, right?
And, you know, for these things, once the code is already committed and pushed to the repository, it's often too late, right? Because the code is already there. The secret is already there. So I think the trick is to get as close as possible to the point where developers are actually committing code, whether that means in the IDE, as a Git wrapper, or something along those lines.
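A minimal sketch of that kind of commit-time check, assuming a pre-commit hook that pipes the staged diff into a scanner. The patterns here are illustrative; real tools such as gitleaks or detect-secrets use many more rules plus entropy analysis:

```python
import re
import sys

# Illustrative secret patterns only (a real scanner has far more).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)(?:api[_-]?key|secret)\s*[:=]\s*\S{8,}"),
]

def find_secrets(text: str) -> list[str]:
    """Return secret-looking strings found in a diff or file."""
    hits: list[str] = []
    for pat in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pat.finditer(text))
    return hits

if __name__ == "__main__":
    # In a git pre-commit hook: `git diff --cached | python scan.py`
    leaks = find_secrets(sys.stdin.read())
    if leaks:
        print(f"Refusing commit: {len(leaks)} potential secret(s) found")
        sys.exit(1)  # non-zero exit blocks the commit
```

Because the hook exits non-zero, Git aborts the commit before the secret ever reaches the repository, which is exactly the "get there before it's too late" property being described.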
So I think, yes, yes, exactly. So I think that has sort of solved like some of our problems. But I think for the most part, right? Like how do we...
Host: Mm-hmm. Okay, like pre-commit hooks there.
Anshuman: advocate for secure code, right? As a security team, we can be proactive by building secure-by-default frameworks, and we can give some training and education, but these things generally don't work if the culture is not good enough, right? So culture has to play a big role. As a security engineer, what I try to do is understand: how does an engineer on a different team go about their day, right?
What does their process look like in terms of what are the tools they use? How do they deploy stuff? So Lyft has built a bunch of in-house tools where we deploy stuff, you know, like we don't pay any vendor to do that. And we have built this very complex pipeline of how code gets committed, right?
So as a security engineer, it's important to understand how those systems work as well. Because if you're trying to integrate security checks and controls into the development workflow, the best way to do that is to be at the place where developers are actually using these systems, instead of security building their own systems; that often creates a lot of friction. Nobody wants to learn a completely new tool. So I think that's been the trick for me, at least.
Host: So I love how you structured that, right? And more importantly, you highlighted that you have guardrails in each place. If a developer is writing code, you have IDE plugins or Git wrappers so that secrets don't get to SVN or Git. You even have a pipeline in place with stricter security rules so that maybe a SQL injection doesn't get through, things like that, right? Or maybe if you are using an external package, you define policies for its vulnerabilities based on your risk appetite, so that you are not breaching that risk appetite.
And you touched on culture, which is super important. Like it always depends on if you have a culture that celebrates security, then even your engineers will be open to collaborating with you when there are security challenges or when they want to learn about security, how to build security practices.
Any tips you have to build a security culture, like top two, top three items that you generally recommend?
Anshuman: You know, I don't think I am probably the right person, because I have not really been in an organization where I saw the culture shift from a place where security engineers were considered people that folks stayed away from, to the point where they actually wanted to work with them. I don't know if I've been in any company with that experience.
But having said that, I think it all boils down to having empathy, right? Empathy, and trying to understand the pain points of our developers. That's something I've tried to do at every place I've worked: when I join a company, I probably spend the first two or three months just trying to understand the process, the people, the technology, everything else. I try to build relationships with other peers, with folks I would be working with. And from those initial meetings and conversations, there is so much to learn about the company in general.
And then, once I have all of that information, I take a look at all the ways things get built here. Now, at what point, and how difficult, would it be to integrate security into these different checkpoints? So that's been my approach. And I think application security is generally a field where, if you don't have good relationships with your peers, it's a very difficult problem to work on, because it requires a security engineer to work with somebody in order to build secure code. It's just not going to happen out of the box. And that basically means you have to have empathy and work on it.
Host: Yeah, I mean, two things that you highlighted are super awesome. One is empathy. And in product design, generally, it's said that you need to have empathy so that you understand your users, end users, and based on that, you design your product. Similarly, when it comes to security, your users or your customers are engineers or other stakeholders. You need to have that empathy to understand what are the pain points that they are going through.
You cannot, like the second item you touched on, just create a hundred JIRA tickets and then say, hey, you guys have to fix these, right? You need to build that relationship and have empathy, because they also have their own business requirements to deliver. Now you have security requirements to deliver, so they have to balance it as well. So that's where empathy plays a major role, right?
So a slightly related question: often in a product, let's say if we take Lyft as an example, if I'm looking at the app, there are two aspects: the product design, which is the user experience, and then there is security. How do you balance those? Because let's say you add MFA: that somewhat impacts the user experience, right? I just want to get a ride right now, but I have to do MFA. So how do you balance that? How do you balance user experience and security?
Anshuman: Yeah, man, I think that's a question that security professionals and industry folks in general have been struggling to get a clear answer on. Because, like you said, there's a balance, security versus usability, right? And it's difficult to achieve that balance if you don't really know the audience you're trying to cater to, the product you're trying to build, and the risk you're trying to address.
So having a holistic understanding of all of these factors will help anybody come up with, OK, this is the balance that we need to have. So in your case, let's say, as the security team of Lyft, if we wanted to enable MFA on the app. Now, I think MFA, again, is a very touchy topic, because it has big repercussions.
So I think for something like this, the security team needs to be opinionated, right? If the security team's opinion is that implementing MFA is going to make our platform secure and our customers secure, then even if that means the customer has to click through a few more options and enable it using SMS or whatever it is, that's still fine. That is what we are recommending, as the Lyft security team, to the team building that feature, right?
Now, whether that team kind of accepts our recommendation or if they decide to accept the risk, it is up to them, right? As a security team, what we can do is we can provide all the data, we can provide all the facts, we can do a threat modeling exercise with them, you know, we can do all of these things. But at the end of the day, it is going to be the decision of the team that is kind of building that feature.
And I think that is a point where the security teams have to take a step back, because at that point, there's only so much we can do, right? We can either have the team accept the risk or actually go and fix it, right?
Now, having said that, teams don't necessarily have to completely fix a vulnerability, right? There are other ways they can work with the security teams to come up with mitigation measures. It's not a long-term fix, but at least it stops the bleeding, right? Because teams are always in the crunch of shipping, shipping, shipping. So how do we as security engineers unblock them? I think it's a culmination of experiences from having worked in different companies. If you had asked me this question five years ago, I would have been clueless about balancing security and usability. So yeah, there is that experience factor as well: the more you work, the more interactions you have, the more people you speak to. At least my view of how security should be balanced with usability has evolved over the years.
Host: Hmm, makes sense. Do you have an example that you can share when you had to balance between user experience and security and you decided one way or other or recommended rather?
Anshuman: Yeah, yeah, absolutely. There are so many instances; I'll take one of the simplest ones. At a company I was working at, we as a security team wanted to build something internal, an internal service that would be used by our engineering peers to leverage some security features and whatnot. This service was supposed to be internal, and it would deal with sensitive information, because it was going to be used by our internal teams.
However, the security team didn't have the necessary expertise or skills to build a fully secure application with authentication, logging, everything, right? At least I can speak for myself: I'm good at scripting, and especially with AI these days, I can build a pretty decent MVP. But my SWE skills kind of stop at that point; I'm not really confident building production-grade systems. So yeah, there was this instance where we wanted to build something, we knew it was valuable, but we just didn't have the expertise.
So we decided to implement very basic authentication on the application, where you just enter a standard username and password, and it allows you to access the application. That worked well enough. Implementing the basic auth was very simple. It wasn't a hard-coded password; it was a static password which we had to share offline with the teams.
But those are cases where we kind of took a step back and asked: do we really want to obsess over the security of this application, or do we actually want to make this security application available to our customers so they can use it? As a security professional, I wasn't happy with that decision, right? I'm implementing basic authentication, which I normally recommend against, and now I'm having to do it. But it was a requirement, because we wanted to ship something and we still didn't want to leave it completely open. So what could we do? Things like that. I think we have to make those decisions a lot.
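A hedged sketch of that stop-gap: a static-credential Basic Auth check, with a constant-time comparison so the shared password at least isn't recoverable via timing. The credential values and names here are made up for illustration; a real deployment would load them from a secrets manager:

```python
import base64
import hmac

# Static credentials shared offline with internal teams (illustrative).
VALID_USER = "internal-team"
VALID_PASS = "shared-offline-password"

def check_basic_auth(header: str) -> bool:
    """Validate an HTTP `Authorization: Basic <base64>` header value."""
    if not header.startswith("Basic "):
        return False
    try:
        decoded = base64.b64decode(header[len("Basic "):]).decode()
        user, _, password = decoded.partition(":")
    except Exception:
        return False  # malformed base64 or encoding
    # compare_digest avoids leaking how many characters matched
    return (hmac.compare_digest(user, VALID_USER)
            and hmac.compare_digest(password, VALID_PASS))

token = base64.b64encode(b"internal-team:shared-offline-password").decode()
print(check_basic_auth("Basic " + token))  # True
print(check_basic_auth("Basic d3Jvbmc="))  # False ("wrong")
```

It is deliberately not production-grade auth, which is the trade-off being described: a few lines that keep an internal MVP from being completely open.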
Host: I guess this is where that balance comes into the picture, right? Whether you want to implement the toughest authentication methods and things like that, or, since it's an internal app, you want to at least have some basic authentication in place. So that is where I guess you took the call and put basic authentication in place. Makes sense.
So you spoke about AI; we'll touch on that in a bit. But earlier you touched on third-party vulnerabilities: in application security, third-party vulnerabilities versus vulnerabilities like SQL injection or cross-site scripting that crop up in your own code.
Nowadays, every organization uses open source, right? You have third-party packages and libraries in every application. I know you mentioned that you guys have a very complex security process in place; does that cater to third-party components as well? How do you manage that?
Anshuman: Yeah, so the third-party supply chain ecosystem and third-party security, and the different teams within Lyft responsible for that, don't necessarily fall under the application security domain. But having said that, I am somewhat familiar with that process. We use certain tools to scan all our code bases for outdated dependencies, third-party libraries, and whatnot.
These tools are often pretty noisy. There is this concept of reachability analysis, which basically shows, for a given dependency, whether you actually invoke that library or not. So there are a few concepts floating around; they're good, they're very helpful. But in general, the risk associated with third parties is very different from the risk associated with your own code. Because, for one, you don't really know. If you're just importing a third-party library, you have no idea how that code is written, whether it's insecure, or whether it has gone through its own security scanning.
So in general, what we try to do at Lyft is, if a team wants to use an open source third-party library in their code, we have a process for that as well. As the security team, we go and take a look at that open source library. We look at all the open issues and whether there are any security concerns. We try to run our own tools on that open source code and do our due diligence: does anything look really concerning or not? Those are some of the things teams look to us for a recommendation on.
Now, there are also licensing issues. There's so much open source code. We can't just use any open source out there. Over the years, we have learned a few things here and there, and we have built a checklist that we just kind of follow through every time somebody has a question about a third party library, if they want to use them.
And again, the third-party ecosystem is constantly changing and evolving. As a security team, you can do your due diligence by having the right security controls in place, the right security scanning in place, and a good understanding of whether, if you're building infrastructure or software, you can do something at the foundation layers.
In other words, if you're building a container, what security controls are we baking into the base image of that container? We try to focus more on those things, as opposed to worrying too much about this team using library X and that team using library Y. I think it becomes very complex at that point.
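The base-image approach can be enforced with a small CI check that every `FROM` line in a Dockerfile points at an approved golden image. This is a sketch; the registry paths below are made-up placeholders, not Lyft's:

```python
import re

# Illustrative allowlist of hardened "golden" base images.
APPROVED_BASES = {
    "registry.internal/golden/python:3.12",
    "registry.internal/golden/go:1.22",
}

FROM_RE = re.compile(r"^FROM\s+(\S+)", re.IGNORECASE | re.MULTILINE)

def unapproved_bases(dockerfile: str) -> list[str]:
    """Return every FROM image that is not on the golden allowlist."""
    return [img for img in FROM_RE.findall(dockerfile)
            if img not in APPROVED_BASES]

good = "FROM registry.internal/golden/python:3.12\nRUN pip install .\n"
bad = "FROM python:3.12-slim\nRUN pip install .\n"
print(unapproved_bases(good))  # []
print(unapproved_bases(bad))   # ['python:3.12-slim']
```

Controlling the foundation layer this way means every service inherits the patched, scanned base, rather than security chasing each team's individual library choices.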
Host: Right, right. So having that golden image gives you that control of what external libraries are being used. And when somebody says that, I want to use an external library, you do that due diligence, looking at the issues, security challenges there. Licensing is a key one that you touched on. So yeah, it makes sense. It looks like you guys have a defined process even for external libraries as well.
So now, we spoke about the application, we spoke about third parties, and you have a product security program in place, right? What are some of the key metrics you look at to not only measure but also show the effectiveness of the program? Maybe you need to present it to the leadership so that you continuously get their buy-in, or if you need tooling, you get budget, and things like that. What are some of the key metrics you look at?
Anshuman: Yeah, and again, at Lyft, the way we have the product security function organized, there are multiple sub-teams under product security. Application security is one sub-team under product security; there's a security foundations team, and I think there's one more team under that, right? I can speak to application security metrics.
Metrics are something that I, at least as a professional, have struggled with for the most part, because it is very difficult to quantify the impact that any application security team has in an organization. I say that because, as an AppSec team, our responsibility is to reduce risk. And how do we do that? We try to build relationships, we try to build programs, we try to proactively find vulnerabilities. But the nature of vulnerabilities is such that you cannot guarantee anything. It's not as if a system has 500 vulnerabilities and our metric is going to be that we resolve, let's say, 80% of them. That's not how it works, because software is evolving and changing.
So if that's the case, how do you come up with metrics in the AppSec world? I've been thinking about this. I think there are a few ways to think about it. The first is to eliminate vulnerability classes. What I mean by that is if as an AppSec team, if you've seen that your code, your organization's infrastructure is prone to certain classes of vulnerabilities. So if you see SQL injection happen a lot, if you see cross-site scripting happen a lot, that generally is an indication that there's something going on and the AppSec team isn't addressing the root cause. Because there's no reason why you should see the same vulnerability class again and again.
That means you're doing something wrong or not addressing the root cause, right? So that is how I look at it. We have a bug bounty program, so every time a new vulnerability comes through that program, we try to assess its root cause. Okay, have we seen this before? If we have, what was the root cause there? How did we fix it? Can we apply that fix in all the other places so that we stop seeing this class completely, right?
So I think that's one way to really think about metrics: once you've seen a vulnerability, there's no reason why you should be seeing it again. Yes, you could see different variations of it, but if you still see the same kind of SQL injection or cross-site scripting being reported again and again, that is an indication that you're doing something wrong, right? So that is one way.
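The recurrence metric described here can be sketched in a few lines. This is an illustrative sketch only: the findings, IDs, and CWE tags below are invented, not real data from any bug bounty program.

```python
from collections import Counter
from datetime import date

# Hypothetical findings feed, e.g. exported from a bug bounty platform.
findings = [
    {"id": "F-101", "cwe": "CWE-89",  "reported": date(2024, 1, 10)},  # SQL injection
    {"id": "F-134", "cwe": "CWE-79",  "reported": date(2024, 2, 2)},   # cross-site scripting
    {"id": "F-180", "cwe": "CWE-89",  "reported": date(2024, 4, 21)},  # SQL injection again
    {"id": "F-203", "cwe": "CWE-918", "reported": date(2024, 5, 5)},   # SSRF
]

def recurring_classes(findings):
    """Return vulnerability classes reported more than once,
    i.e. candidates whose root cause was never fully eliminated."""
    counts = Counter(f["cwe"] for f in findings)
    return {cwe: n for cwe, n in counts.items() if n > 1}

print(recurring_classes(findings))  # {'CWE-89': 2}
```

Any class that shows up in this report twice is a signal to fix the root cause everywhere, not just the single reported instance.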
The other way is, you know, security teams in general have a tendency to accumulate a bunch of tools and platforms. We pay a bunch of vendors to get some kind of security scanning in place. What I've seen is that this works well for the initial phase when we onboard a platform or tool, but then the ROI kind of gets lost somewhere around three months down the line, because the sheer volume of results is just impractical for any team to triage.
As an AppSec team, you can go to your leaders and say, we have tools X, Y, and Z, and they're doing static scanning and dynamic scanning. But I think the real questions the AppSec team should be asking are: are these tools providing any value to us? Are these tools helping us drive issues to closure? Are these tools helping us fix the vulnerabilities?
So those are the questions I think we should be asking, and we just don't ask them enough. And the best way to come up with metrics is mean time to detect (MTTD) and mean time to remediate (MTTR). So yeah, I think honing in on these metrics, with a good understanding of all the things you have, can enable a successful application security program.
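MTTD and MTTR are simple averages over per-issue timestamps. A minimal sketch, assuming each issue record carries introduced/detected/fixed dates (the field names and values here are invented, not any real tool's schema):

```python
from datetime import datetime
from statistics import mean

# Illustrative issue records; field names are assumptions.
issues = [
    {"introduced": datetime(2024, 3, 1),  "detected": datetime(2024, 3, 4),  "fixed": datetime(2024, 3, 9)},
    {"introduced": datetime(2024, 3, 10), "detected": datetime(2024, 3, 11), "fixed": datetime(2024, 3, 20)},
]

def mttd_days(issues):
    """Mean time to detect: average days from introduction to detection."""
    return mean((i["detected"] - i["introduced"]).days for i in issues)

def mttr_days(issues):
    """Mean time to remediate: average days from detection to fix."""
    return mean((i["fixed"] - i["detected"]).days for i in issues)

print(mttd_days(issues), mttr_days(issues))  # 2 7
```

Tracked over time, both numbers should trend downward as the program matures.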
Host: I think one key point that you highlighted early on is repeated vulnerabilities, right? If you are getting repeated vulnerabilities, that means you're not learning from what you implemented in one part of the application and applying it to other parts of the application, right? So yeah, that's a key one.
And MTTD and MTTR, absolutely. I mean, those tell you how quickly you are able to detect and how quickly you are able to respond and remediate. So generally the graph should go downwards, meaning you are able to do both faster and faster. So, yeah.
Anshuman: Yeah. I just want to add one more, especially when we think about building secure-by-default frameworks and libraries: how well are these libraries getting adopted by the teams? Building these libraries is one piece. But if the teams are not using those libraries, and the libraries are not helping avoid vulnerabilities, then what's the use, right? So building metrics around the usage and adoption of these libraries is also huge.
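An adoption metric like this can be as simple as checking which services depend on the paved-road library. The service names and the `secure_http` library below are hypothetical, purely for illustration:

```python
def adoption_rate(services, secure_lib="secure_http"):
    """services maps service name -> list of imported dependencies.
    Returns the fraction of services using the secure-by-default
    library, plus the list of adopters."""
    using = [name for name, deps in services.items() if secure_lib in deps]
    return len(using) / len(services), using

# Invented dependency inventory for three services.
services = {
    "rides":    ["secure_http", "orjson"],
    "payments": ["requests"],        # not on the paved road yet
    "profiles": ["secure_http"],
}

rate, adopters = adoption_rate(services)
print(f"{rate:.0%} adoption: {adopters}")  # 67% adoption: ['rides', 'profiles']
```

In practice the dependency inventory would come from lockfiles or an SBOM pipeline rather than a hand-written dict.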
Host: Yeah, interesting. So you have the libraries in place, or even the golden images that you touched on, right? If you are building golden images but the teams are just using a base image of Ubuntu or Alpine or something like that, then it defeats the purpose of building the golden image. Yeah, that's a good one that you added.
So now shifting gears to AI. Like we touched on, we are living in the age of AI. Today, one of the use cases, speaking as an engineer, is generating code, right? Using either GitHub Copilot or Claude or Gemini and things like that. And when it comes to AI, there are two sides of it, right? One is you are using it to generate code, and the other is using it to secure the code that you have generated.
Now, as an AppSec lead, you must be using some AI tooling. How do you think about it? How do you think about the generated code? And how do you think about using AI to secure your applications?
Anshuman: Yeah, it's a pretty fascinating space. I think about it pretty much on a daily basis. I feel like we are still very early in terms of coming up with good frameworks and a good thought process for how to approach these problems. The easiest way to think about it is: as AI and the technology evolve, the quantity of code that gets generated using AI is going to skyrocket. We've already seen that; people can build and ship companies overnight. It's become so easy.
So obviously as defenders, as security engineers, what we can do is up-level our skills by learning how some of these tools work, and then try to build tools, platforms, features, products, what have you, to secure the code.
How are you keeping up with the pace at which code is getting generated? I think there has to be some level of effort put in by security engineers to compete against the ever-increasing code generation. The other aspect is: if you can't keep up with the pace at which code is getting generated, can you actually bake security controls into the SDLC, right?
So things like, you know, proxies, where all the code that is getting sent to these LLM vendors or IDEs passes through a proxy layer, right? It's not necessarily a layer that blocks all the traffic, but it is a layer that could be used to get more visibility and observability into what is going on, right? So that you can introduce additional controls and whatnot.
So I think that is the lens I have tried to take: look, I don't want to stop my engineers from writing and building and shipping features fast. What I can do is either up-level my skills by learning some of these tools and technologies and trying to build secure code or secure frameworks myself, or work with an infrastructure team or a platform team and put certain controls in place so that we are funneling all the AI-based traffic through one place, and then we can do all the security scanning from there.
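The "visibility first, not blocking" proxy idea can be sketched as a small inspection layer in front of the vendor call. Everything here is a toy: the patterns are crude placeholders, and `forward_to_llm` stands in for a real upstream request. A production DLP layer would be far more sophisticated.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-proxy")

# Rough patterns for secrets you may not want leaving the building.
SENSITIVE = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "aws_access_key"),
    (re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"), "private_key"),
]

def inspect_prompt(prompt: str) -> list:
    """Return labels of sensitive patterns found. The proxy logs rather
    than blocks, matching the visibility-first approach described above."""
    hits = [label for pattern, label in SENSITIVE if pattern.search(prompt)]
    if hits:
        log.warning("outbound LLM request flagged: %s", hits)
    return hits

def forward_to_llm(prompt: str) -> str:
    inspect_prompt(prompt)  # observe, don't block
    return f"(forwarded {len(prompt)} chars upstream)"  # stand-in for the vendor call

print(forward_to_llm("please review key AKIAABCDEFGHIJKLMNOP"))
```

The same choke point can later grow into real controls (redaction, per-team policies) once you understand the traffic.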
Host: Nice, yeah, those two areas, right? Writing secure code and also putting secure pipelines or secure processes in place. You touched on proxies. That's a very important aspect, especially when you integrate with LLMs, right? Not only from an observability perspective, but as you highlighted, based on the patterns, maybe you can add some controls, or you can even work with DLP vendors to make sure your IP doesn't go to LLMs for training and things like that. Yeah, good points that you highlighted.
So one of the questions that we got from Brad Geesaman around GenAI is: when it comes to product security, how has GenAI changed things? Has something gotten easier? Has something gotten harder? If you can shed some light on that.
Anshuman: Yeah, yeah. I do have a blog on this as well. If you go to my website, I have a post called The Future of Application Security, which I wrote a few months ago. And that blog is essentially about how LLMs and AI are set to revolutionize the AppSec and product security space.
But to cut it short, I think AppSec and product security have traditionally been domains that require a lot of manual attention and intervention. When you think about things like threat modeling, code reviews, design reviews, these are activities where AppSec engineers generally have to be physically present, working with their engineering counterparts to go through documents and do the analysis, you know.
And I think this is essentially the area that is ripe for innovation using LLMs, because now you can use LLMs to help the AppSec team analyze these documents. If you think about PRDs or specification documents, these could be anywhere from five to 15 pages, with details about how a team wants to implement something.
As an AppSec engineer, if I were to review these documents, it would take me anywhere from a day to a week. What has greatly improved for me personally with LLMs is that I now start by analyzing these documents using AI first. That gives me the initial analysis: look, this is what this document is about, these are the key areas, now it's up to you. Right.
So even that has saved me hours, right? The things I used to spend maybe an entire day on, going through pages and pages, now take literally seconds for that initial review. And then I get to decide whether it requires my attention or not. Because at the end of the day, it's all about scaling ourselves. I cannot possibly have my attention and focus on each and every thing that is getting shipped. I have to prioritize. So from that perspective, instead of thinking of LLMs and AI as something that is going to replace humans or take away our jobs,
I tend to think of it more like: how can I make myself more efficient? How can I use this technology as my expert advisor, where I don't have to go anywhere else? I just have this person working with me. I say person; it's really more like a sentient being, right? Yeah.
Host: Right. Maybe this is the secret to running an AppSec program with few people at Lyft: you are using the GenAI tools to the fullest, as a sort of partner in crime, right? Or partner in coding, or partner in security practices.
So one last question that I have around this: you mentioned both sides of using AI, right? What are some of the tricks that you recommend for AppSec teams or product teams that are thinking of incorporating AI, or that are just getting started in the GenAI world?
Anshuman: Yeah, man, there's so much I would love to share. I think the simplest thing, and I've shared this with others as well, is for any security engineer, and really any professional, to consider all the things that you do on a daily basis. From the moment you start your day: you sit in front of your computer, you have meetings, you have this, you have that.
And then see if you can come up with, let's say, five things that you do consistently every day that you don't want to do. Things that are very process-related: go to Jira, go to Confluence, look at a dashboard, click here, click there, right? These kinds of things don't use your intelligence; as humans, we're just so used to doing these activities. If we can automate these small things, we can actually start focusing our time and attention on the things that require intelligence.
So I think that's the best way to think about how to get started with automating. Let's say I have a workflow where I go to Jira every day, I look at a particular project, I look at all the vulnerabilities, and I try to triage them, right? That is a fairly involved process that takes me anywhere from 15 minutes to an hour. If I can automate bits and pieces of that and take away those 15 minutes, that is a huge win, right?
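A first bite at that daily triage loop might look like the sketch below: route only the tickets that need human judgment. The ticket keys, severity rules, and fields are invented for illustration; a real version would pull from the Jira API.

```python
def pre_triage(issues):
    """Bucket vulnerability tickets so a human only looks at the ones
    that actually need judgment; the rest are acknowledged automatically."""
    needs_human, auto_ack = [], []
    for issue in issues:
        # Escalate anything severe, or anything without a planned fix.
        if issue["severity"] in ("critical", "high") or not issue["has_fix_version"]:
            needs_human.append(issue["key"])
        else:
            auto_ack.append(issue["key"])
    return needs_human, auto_ack

# Invented ticket data standing in for a Jira project export.
issues = [
    {"key": "SEC-101", "severity": "critical", "has_fix_version": False},
    {"key": "SEC-102", "severity": "low",      "has_fix_version": True},
    {"key": "SEC-103", "severity": "medium",   "has_fix_version": True},
]

needs_human, auto_ack = pre_triage(issues)
print(needs_human)  # ['SEC-101']
print(auto_ack)     # ['SEC-102', 'SEC-103']
```

Even this crude split removes the rote clicking and leaves only the tickets worth 15 minutes of attention.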
While doing so, I get to learn about the technology and different ways of incorporating AI. Who knows, I might even build something that others can use. So that has been my approach, and it has worked really well: I try to apply these different approaches and technologies to my own day-to-day first, before actually asking others to do something. Yeah.
Host: Yeah, that's a great thing you highlighted: automation, right? And that helps a lot. I also apply that quite a bit in my day-to-day life. Somewhere I read, maybe seven or eight years ago, that if you're doing something for the first time, don't think about automation. Second time, maybe still don't automate. But if you are doing it for the third time, that means it's time to automate.
There is a good chance you will do it even more times; you will not stop at the third time. So maybe the first and second time you can just do it manually, but the third time and beyond, you should automate it. So yeah, use automation to the fullest, it sounds like, which helps with your day-to-day repetitive tasks. So yeah, absolutely.
That brings us to the end of the security questions.
But before I let you go, I have one last question, which is about reading recommendations. It could be a book or a blog or a podcast or anything that you want to share.
Anshuman: Yeah, yeah, there's this book called The Lion Tracker's Guide to Life, by an author called Boyd Varty. I think this book has helped me a lot personally in figuring out: how do I approach my life? How do I approach the problems I want to solve? How do I prioritize the things I have in my life, personal and professional? It's a very small book, a quick read; you can finish it in maybe a couple of days at best. But it'll get you thinking about questions you might not otherwise be thinking about.
It's a fascinating book about how tracking wild animals is actually a full-time job for certain people. Just imagine going into a forest and trying to find where a lion might be. It's crazy if you think about it, but people actually do that, because they have that sense of listening to nature, listening to the sounds, you know. Yeah, it's a fascinating book, so I highly recommend it.
Host: Hmm, lion tracking. Yeah, The Lion Tracker's Guide to Life. When we publish the episode, we'll add it to the show notes, along with the blog post you've written on the future of application security, so that our audience can go and learn from it. So that brings us to the end of the episode. Thank you so much, Anshuman, for joining. It was a fun conversation.
Anshuman: Sure thing. Yeah, absolutely, Puru. I really enjoyed it as well. Thank you for having me.
Host: Absolutely. And to our audience, thank you so much for watching. See you next time.