Network Security Fortress: Master Network Segmentation with Tom Adamski
This episode dives deep into network segmentation - your secret weapon for building a secure and scalable network. We'll discuss best practices, tackle implementation challenges, and explore how to integrate segmentation with Zero Trust.
TL;DR
- Before working on network segmentation, perform a risk assessment of the business and infrastructure. Build your network segmentation around the driving need, whether that is compliance, security, manageability, or something else.
- Keep network segmentation broad rather than very granular; very granular segmentation becomes difficult to operate and maintain.
- In AWS, security tools are an AND, not an OR. Depending on the use case, layer tools and appliances such as security groups, NACLs, subnets, transit gateways, and firewalls.
Host: Hi, everyone. This is Purushottam and thanks for tuning into the ScaletoZero podcast. Today's episode is with Tom Adamski. Tom is an AWS principal solutions architect specializing in networking. He has over 15 years of experience building networks and security solutions across various industries, from telcos and ISPs to small startups. He has spent the last six years helping AWS customers build their network environments in the cloud.
And in his spare time, he can be found hunting for waves to surf along the California coast. So Tom, thank you so much for coming to the podcast.
For our audience, do you want to start with like a brief intro about your journey?
Tom Adamski: Cool. Yeah. Thanks for having me, Puru. Hello, everyone. Yeah, funny enough, I started my degree in economics, so it's not in tech. I got a master's in economics and then worked a little bit in economics, but I found it a little bit boring. I think economics gets exciting once you get to advising politicians or advising at the country level. When you're just starting, it's extremely boring. So I switched to doing an internship in tech and IT. I landed in a startup in London, a small internet service provider, and that's how I started my journey. I learned everything on the job, did my Cisco certs, and slowly progressed through. And I found that there are a lot of similarities between learning economics and networking, because there are a lot of patterns, a lot of mechanisms, how things stitch together, how things work together. So that modeling side of economics was kind of useful in the networking journey. But that was probably 16-ish years ago.
And as you said, the last six and a half years now, I've been at AWS and it's been super fun working with customers and helping them figure out the network architectures.
Host: Nice. So it's a very different journey, right? Because most folks start with some hacking, or studied computer science in college, and then they come to the security or networking areas. But yeah, you have a very different journey.
Tom Adamski: Yeah. I think it shows you can switch between fields. IT is quite accessible and you can start at the bottom. I found it more exciting than economics even at that entry level. It was a lot more fun and you could do a lot more interesting things. So yeah.
Host: Yeah, you can do a lot of experimentation hands on work, right? When you are starting in networking or security. So yeah, absolutely.
Tom Adamski: Yeah, and all the models are immediately applicable, right? So you're testing some pattern, some model, and it works. Whereas with economics, you test models, but usually you keep only two variables. Everything else, you kind of have to abstract because otherwise models stop working. So it's very theoretical versus networking, especially being very practical.
Host: Either it works or it doesn't work and you try to figure out why, right? So that's how you get your hands dirty and get started. So yeah, love that. So a question that we ask all of our guests, and we get unique answers, is
What does a day in your life look like? So yeah, what does a day in your life look like nowadays?
Tom Adamski: Yeah, so working in AWS as a solution architect, I don't have dedicated customers. You can think about me more as a kind of overlay. So I'm a specialist in networking. My expertise area is networking.
So whenever customers in any space, startups, medium-sized businesses, really large enterprises, global customers, need help with designing their networks, that spans a lot of different topics. The networking area in AWS covers the traditional connectivity, like setting up fiber from A to B, then figuring out what your virtual private clouds, or VPCs, are going to look like and how DNS is going to be set up. But then it goes beyond into application networking, so load balancing and different communication patterns between your applications.
And then finally, even network security, so deploying your network firewalls and control points in your environments, creating segmentation, all of that. So that's the topic area.
And then I probably would spend a big chunk of my time just working directly with customers. So different customer problems and challenges. Some of it could be new customers coming in and they're trying to figure out, okay, what should I do? What are the best practices? Here are my requirements. Can you help me figure out how to deploy those requirements on AWS? Some other cases are customers that are already on AWS and they're growing and they need to figure out, okay, we're going beyond a certain size and maybe the original solutions we deployed no longer apply and we need to figure out how to scale to a higher level.
And then, so that's one big chunk. But then there's a lot of learning for me when I'm seeing so many customers and I get to have these discussions and see what the pain points are and see what they find challenging.
The other big part is then amplifying what I found out and what I feel like is useful knowledge for our customers. So whether it's writing blog posts, presenting in sessions like this session right now as well. So that's something that is a good opportunity to share some of the learnings and best practices I've seen with customers.
Host: Yeah, so I'm guessing you won't have many boring days, right? Because you are working with many customers with their unique challenges, and you are not focusing on one type of customer either. You work across different types of customers, different domains, different stages of the company. Each brings a unique set of challenges, so there is learning, and you are also using that platform to share your learnings with others as well. So yeah.
Tom Adamski: That's totally right. Yes. There's a big variety of different customer use cases and sizes and even services. What I find very exciting being at AWS is the speed of development. In my past company, sometimes I found that we weren't launching new things fast enough for me to be excited about, oh, this is a new thing that we can now use to make our customers' lives easier.
With AWS that happens a lot. Sometimes for customers it's even scary, because they're going, oh my god, there are so many things coming in. So the good news is we're not forcing customers to use every single service. But just having the option, this constant development of new options and new things that could make your life easier, that is exciting.
Host: Yeah, I can imagine. And today we are going to talk about network segmentation and even the security appliances, some of them you alluded to. So to get started, can you help our audience understand
What is network segmentation? And if you can, maybe simplify it a little bit so that even a non-engineering audience can understand.
Tom Adamski: Yeah. So at a high level, I would say the name is kind of self-explanatory. You're creating different segments within your network, different separated, isolated units. And you can jump in, because I'm trying different analogies to figure out, you know, the best way to explain it.
I like the analogy of a building and water pipes. You have the warm water and the cold water. It's still water, just different types of water, and you don't want to mix it anywhere other than at the place where you as a user actually decide to mix those two types of water, usually at your tap. I guess if you live in the UK, you have two separate taps for cold and hot water, so then it gets a little bit harder. So you only mix it in the sink or in the bathtub.
But you're controlling the place where you're going to be mixing these different flows. So you can think about the hot water pipe and cold water pipe as different network segments that stay completely isolated, even though they're delivering something similar. And then you're very much in control of when you mix them. Or we can expand that analogy even to sewage lines. You also don't want to mix your sewage line with your water pipes unless you get to a treatment facility, which could be considered a security solution that's able to filter the sewage out, process the water, and only then let the water go back into the system.
So it's all about creating the clear-cut segmentation and then having the ability to control the way you're mixing those environments together. So when we talk about network segmentation, we want to say isolated networks. But in reality, I see many customers who have requirements to mix or create connectivity between those different segments in their environments for some specific reasons.
Host: Mm-hmm. I like the analogy; you can easily understand that the hot water is coming from one tap and the cold water from another. So a question that comes to my mind is
Why should I do this? Is it needed, and at what stage of my company should I think about network segmentation?
Tom Adamski: Yeah, it's a good question. I've seen customers be successful without any network segmentation: having just one large, really flat network, connecting all the environments, and applying their security controls in a different place than the network. So I would think about the network as your defense-in-depth layer. It's not the be-all and end-all; the network is not the only thing you use to create your control points. It can help you, but my advice would be to use it in a gentle fashion. So don't use it for very granular controls. Use it for broader separations, like, I have production and non-production environments that I want to separate. And the reasons there could be varied, right?
It could be, primarily, that I want different security requirements for these different environments. But at the same time, I might have other requirements as well that would drive me towards having separate networks.
Host: OK, so you said that we can even use a vanilla or minimal network setup. Then
What benefit am I getting from the network segmentation? Is it around security? Is it operational excellence? Is it compliance? Like, what benefit do I get out of this so that I should think about incorporating network segmentation?
Tom Adamski: Yeah, yeah. So I would say network segmentation is more of a tool. You implement segmentation, but the driver for needing it should come from really doing a risk assessment or figuring out your data classification. Or you may have a compliance requirement that says, for example, your PCI payment network should never mix with other networks.
Those would be the drivers for you to then go, OK, I need to somehow apply these requirements into my environment. And then I need to do it with the help of the network, but also with the help of other tools.
There are also some other benefits. I often see customers create segments for production and non-production environments. One is a security benefit, because usually you do different things in production and non-production, and non-production may be more relaxed.
But the additional benefits are blast radius and also management. That segmentation of the network would come alongside segmentation of your control plane, not just the data plane. Very often when we say network segmentation, we think data plane: we have these different networks and they can't communicate with each other. But if I'm managing those independently, I also have additional flexibility. If I'm making changes to my non-production network, adding more routes or making management adjustments, there could potentially be human error if a human being is involved. So usually you want that separation not just on the data plane, but also on the control plane and management plane.
So those would be the additional benefits of segmentation. It's not just data plane segmentation; management plane segmentation can also help with simple manageability or make it a little bit easier to troubleshoot.
But at the same time, creating additional segments comes with extra complexity. That's why I said I still see some customers who go, I don't want to be in the business of applying my controls in the network environment; I'll apply controls elsewhere. We'll probably chat about the different control options later on. But yeah, it's not the be-all and end-all. Think about it as just a layer of the onion, and make sure that you have those clear-cut requirements from your risk assessment.
Host: Mm-hmm. Yeah, so what I'm getting from your response is there is no standard formula that says, hey, do it because of compliance, or just for security. There are various use cases it can cater to. It always depends on your own risk analysis of your infrastructure, or even your business, and then based on that you work on the network segmentation accordingly.
Tom Adamski: Correct. Although I would say, if we're talking production and non-production, it's a really good practice to separate those. I would even say a best practice to separate your production and non-production into at least separate data planes. If you're really good with automation and you have a very good process for managing your management plane or control plane, so your changes are really validated, there's automation, there are CI/CD pipelines instead of a human going in and making changes, you could consider having the same component on the management plane, but still have that data plane separation.
An example of that in AWS would be a transit gateway. A transit gateway is like a logical router. It's a distributed system; we talk about it as a single object, but it's a distributed system, so it's highly available. And in itself it doesn't have the kind of capacity limits a physical router would, because of the nature of how it's deployed.
But, I lost my train of thought for a second. Yes, so the transit gateway is an example of a single object, a single logical construct.
Host: So you're talking about like Transit Gateway and how it can help.
Tom Adamski: that has multiple logical route tables that you can deploy. So you're still managing a single transit gateway, but underneath you have, say, three data planes and three logical VRF equivalents or route tables on the transit gateway: production, non-production, development, test, however you want to split it.
So there, you definitely want to split those data planes. That's a really good practice.
The control plane is more a question of how comfortable you are with the changes you're making. So if you are really going wild on the non-production environment and doing a lot of changes, I would keep it on a separate transit gateway. So have a transit gateway for your non-production environment and then a transit gateway for production, where the change control process is a lot more strict than it is for non-production.
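To make the idea of one shared control plane with separate data planes concrete, here is a small Python sketch. It is purely illustrative, not the AWS API: the class, attachment names, and CIDRs are all made up; it simply models how a single transit gateway object can hold isolated per-segment route tables.

```python
from dataclasses import dataclass, field

@dataclass
class TransitGateway:
    """One logical object (shared control plane) holding several
    isolated route tables (separate data planes), like VRFs.
    This is a toy model, not the real AWS service."""
    route_tables: dict = field(default_factory=dict)  # table name -> {cidr: target}
    associations: dict = field(default_factory=dict)  # attachment -> table name

    def add_route(self, table, cidr, target):
        self.route_tables.setdefault(table, {})[cidr] = target

    def lookup(self, attachment, cidr):
        # An attachment only ever consults the route table it is
        # associated with, so segments stay isolated on the data plane.
        # (Exact-prefix lookup for simplicity; real route tables do
        # longest-prefix match.)
        table = self.route_tables.get(self.associations.get(attachment), {})
        return table.get(cidr)  # None means no route: traffic is dropped

tgw = TransitGateway()
tgw.associations = {"vpc-prod-a": "prod", "vpc-dev-a": "non-prod"}
tgw.add_route("prod", "10.0.0.0/16", "vpc-prod-b")
tgw.add_route("non-prod", "10.1.0.0/16", "vpc-dev-b")

print(tgw.lookup("vpc-prod-a", "10.0.0.0/16"))  # vpc-prod-b
print(tgw.lookup("vpc-prod-a", "10.1.0.0/16"))  # None: prod cannot reach dev
```

Because an attachment consults only its own associated route table, a prod attachment simply has no route to the dev CIDR: that is the data plane separation under one management object.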
Host: Yeah, yeah. So thank you for highlighting that separating production and non-production is not just a good practice, it's a best practice to follow, right? So one question that comes to my mind is, let's say I'm convinced, I did a risk assessment, and I need to build network segmentation from a security perspective, let's say.
What best practices should I keep in mind when I am implementing segmentation controls?
Tom Adamski: So I would say there are two areas. One, make sure you familiarize yourself with the technical capabilities that your environment has: okay, what are the tools that I can use to implement my network segmentation? I was telling you about Transit Gateway; that's very specific to AWS, a construct that only lives in AWS. If you're going on premises, you'd be using things like VRFs and maybe MPLS VPNs. So knowing what technical capabilities you have is one big part.
And then the second part is… kind of going back to, again, something we said already about making sure that you know what your requirements are, and then map those requirements into the technical capabilities. So that's going back to that risk assessment and then figuring out, how am I splitting data?
So a lot of the customers I'm working with, they're either migrating or already are on AWS. So very often they already have some idea of how they're already creating different logical segments in their existing environments.
So they don't really have to spend time on what those segments are going to be. We know what segments we need. We have a PCI environment that needs to be completely isolated. We have prod, dev, and test, or we even have business units that must have completely separate network segments. So then it becomes more about just mapping those to the technical capabilities in the environment that you're deploying into.
Host: It sounds very similar to how even you will do engineering, right? Understanding the business requirements and understanding what, let's say, the framework or the language that you're using and then mapping them so that you can build the capabilities.
So very similar, you need to understand why you want to do the network segmentation and then marry it with the right technology to implement the capabilities.
Now, let's say I… I did some network segmentation implementation.
How do I ensure that there is proper enforcement of these controls and there are no misconfigurations happening as part of it?
Tom Adamski: Yeah, in AWS there's a set of tools that you can use, like AWS Config or other components, that would do some kind of governance: validation that the environment is configured correctly. These would be tools that look more at the configuration side, whether the configuration is done in a certain way. You could also have tools that look more at the data plane setup.
So we have things like Reachability Analyzer or Network Access Analyzer, which are different tools that ultimately can help you figure out how your connectivity is working. What they do is look at the configuration of data plane components, like route tables, security groups, components on the path, and then they're able to tell you authoritatively, hey, the path is available or the path is not available. So I see some customers integrate those tools into their pipelines where they're making changes to the network.
And then at the end of the change, they use that as a QA validation to make sure that, okay, prod cannot talk to dev. We tested it, Reachability Analyzer checked it, and the path is not available, which is what we want. And prod can still talk to another prod environment or to some sort of shared environment, and again, Reachability Analyzer validated that, so we have proof that this path exists.
So… usually it comes down to figuring out the tooling that can validate your connectivity options.
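That QA gate can be sketched as a small validation step. This is a hypothetical illustration: the segment names and policy are invented, and the `actual` results would in practice come from whatever path analyses your pipeline runs (for example, Reachability Analyzer findings); only the validation pattern matters here.

```python
def validate_segmentation(actual, policy):
    """Compare observed reachability (e.g. collected from path
    analyses) against the intended segmentation policy.
    Returns a list of violations; empty means the change is safe."""
    violations = []
    for (src, dst), should_reach in policy.items():
        observed = actual.get((src, dst))
        if observed is None:
            violations.append(f"{src}->{dst}: no analysis result")
        elif observed != should_reach:
            verdict = "reachable" if observed else "unreachable"
            violations.append(f"{src}->{dst}: {verdict}, expected otherwise")
    return violations

# Hypothetical results gathered after a network change:
actual = {("prod", "dev"): False, ("prod", "shared"): True}
policy = {("prod", "dev"): False,    # prod must NOT talk to dev
          ("prod", "shared"): True}  # prod must reach shared services

print(validate_segmentation(actual, policy))  # [] -> the gate passes
```

Running this at the end of a pipeline turns "we think prod can't reach dev" into an explicit, repeatable check.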
Host: Yeah. When you highlighted Reachability Analyzer: I've had the experience myself where I set up a route table, or at least I thought I had set it up properly, but the traffic doesn't get routed, and you're like, what did I miss? You go to Reachability Analyzer and run it, and you figure out, oh, there is some small setting I missed. I fix it, and then it starts working. So yeah, Reachability Analyzer is definitely a great tool for looking at how network traffic is getting routed.
Tom Adamski: Oh yeah. Same for me. I've had similar experiences where you're setting something up in the evening for a demo and wondering, why is this not working? And it always turns out that you messed up a route somewhere, you're pointing to the wrong destination and end up with asymmetric traffic. So yeah, Reachability Analyzer for the win.
Host: Yeah, absolutely. So now if we increase the scale of this: you also work with large enterprise customers, where they have multiple workloads, different types of workloads like serverless or Kubernetes or virtual machine based, and they're also using different types of PaaS services. In that case, when you get into a network segmentation setup,
Do you see it differently than how you highlighted earlier? Maybe based on some of the use cases you have specific network segmentation policies that you define.
When it comes to all of these complex cloud setups, do you see them differently? Any other factors that you look at?
Tom Adamski: I would say it's still going to be very similar from the point of view of making sure your requirements are clear and making sure you understand the technology. One thing you probably want to consider more is exceptions.
So I see a lot of customers going, I will keep my prod and dev completely separate. And then the next day, there's some requirement that forces them to connect those. And there are ways to do that very securely. You don't have to use the network; you have ways to expose just the applications between environments.
But on a network level, it's just understanding: do we need to plan for exceptions? Or are we going to be able to say no to everyone who comes to us and says they need that communication to happen? And then I would say, the larger you are, the simpler you want to keep things.
Because I always like to say that any network design I'm reviewing needs to pass the 3 AM test: someone gets paged at 3 AM to deal with an outage or some issue, looks at the network diagram, and immediately understands what's going on. Because there's a lot of flexibility in the cloud, and you can be as complex as you want. I've seen customers build really complicated environments. They will work, and they will provide some value that the customer sees, but then they can end up being very complicated to manage.
So in terms of creating network segmentation, I would say keep it as simple as you can. Don't over-index on using the network as a security function. You can do that in different places. The network is a good broad brush, a very wide set of rules that you deploy.
And another example here, going back to that transit gateway: a transit gateway can have 5,000 attachments, VPCs and a bunch of other things, but only 20 route tables. Again, you can imagine the route tables as VRFs. And it's not because it can't have more route tables. We set the limit low deliberately; it can be increased, but you really have to have a good use case to increase the number of route tables, because we want to prevent customers from over-complicating their environment.
Because I've chatted with a lot of customers who, when Transit Gateway launched, were like, oh, perfect, I'm going to have a route table per VPC so I can be very granular, I'm going to control access. And you're like, oh no, this is not what you want to do. It's just going to become overly complicated, and route tables are not the right place to put such granular controls. Keep it simple. Sure, have your prod, dev, or PCI environments, keep them separate, that makes sense. But more granular controls should go elsewhere, not in the network.
Host: Yeah, yeah. Two things I take out of your response. One is simplicity, and I cannot stress that enough. Like the example you gave: if somebody gets paged at 3 AM and they cannot figure out your network setup, and they need to spend hours working out how you have set things up, it's not the best use of their time. And if there is an incident going on, it's even crazier that you're still trying to figure out what's going on. So simplicity, definitely.
And the other part is getting familiar with some of these tools, like Reachability Analyzer, so that you understand how the network is set up, and in that moment you're not panicking and you have a good grasp of it. And a network graph, having that as an image or using a tool to understand the network graph, would definitely help in those scenarios, right?
So when it comes to segmentation, there are many methods. Like you have been talking about transit gateways. There are security groups, VPCs, subnets. There are many such tools available to practitioners.
Which one do you pick in which scenarios? Do you have a formula that you follow?
Tom Adamski: Yeah, so think about the tools as an AND rather than an OR. It's more about defense in depth: I should be using multiple tools; I just need to figure out which tools are good for which use cases.
So I want to start with security groups, because security groups are on by default. You can't deploy a resource inside AWS that doesn't have a security group. And they're pretty cool. They're like stateful firewalls that you can deploy around the network interfaces inside your VPC, or around your virtual machines. And they can span subnets: it doesn't matter what subnet or IP address your resources have, if they're part of the same security group, they get the same treatment. And those security groups are default deny, and you can only add allow rules.
So that's sometimes a little bit challenging. All you're doing is basically saying: I have a security group, by default it's denying everything, and I need to decide what I want to allow to my resource. And you can set up rules for inbound and outbound. I typically see most customers do inbound; outbound is a little bit more rare.
And they scale to around a thousand entries. You do have to request an increase, but a thousand entries per security group, or across multiple security groups on the same network interface, is really the limit. So they're pretty beefy and always a good starting point. That's something you would always have. Then network access control lists, or NACLs, are a second tool, but more of an optional thing. By default, they're allow any. And they have far fewer entries, 40 entries is the maximum, and they're stateless.
So if you're getting complex with your rules and you allow something one way, you always need to remember to allow the reverse of that same flow in the opposite direction. My recommendation for NACLs is typically to figure out broad use cases to apply them to. Maybe you want to deny any Telnet traffic in your VPC; you can do that easily with a NACL. Or you have a set of IP address ranges that you never want to communicate with; also a NACL. But if you have more than, you know, a few hundred IP address ranges, then again a NACL is not going to fit, because it only has 40 entries. So NACLs, I would say, are optional; you really have to have a use case for a NACL.
But if you have that deny use case, like deny any port 80, I don't want port 80, I just want port 443, those are good use cases for a NACL. Both NACLs and security groups operate at layer 4, right? So you can set up rules based on IP addresses or ports. I can say, deny HTTP on port 80, and that's going to be it. I can't really look inside that HTTP request and say, I really don't like the URI this request is going to, or even the host name.
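As a toy illustration of the two layer 4 behaviors just described: security groups are default deny with allow rules only, while NACLs evaluate numbered rules in order, first match wins. This is a simplified model, not how AWS evaluates rules internally; real rules also carry protocols, CIDR ranges, and port ranges.

```python
def sg_allows(allowed_ports, port):
    """Security-group style: default deny, allow rules only,
    stateful (return traffic is implicitly allowed)."""
    return port in allowed_ports

def nacl_allows(rules, port):
    """NACL style: stateless, numbered rules evaluated in order,
    first matching rule wins; unmatched traffic is denied."""
    for _, (rule_port, action) in sorted(rules.items()):
        if rule_port in (port, "*"):
            return action == "allow"
    return False  # implicit deny if nothing matched

sg = {443}                      # only HTTPS allowed in
nacl = {100: (23, "deny"),      # rule 100: block Telnet
        32767: ("*", "allow")}  # catch-all rule: allow everything else

print(sg_allows(sg, 443), sg_allows(sg, 80))          # True False
print(nacl_allows(nacl, 23), nacl_allows(nacl, 443))  # False True
```

Note the difference in posture: the security group only needed the one thing it wanted to allow, while the NACL needed an explicit deny plus a catch-all allow, and being stateless, it would also need matching rules for the return direction of each flow.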
So if you do have those requirements where you need to dig deeper into the packet to make a forwarding decision, you need to look at packet inspection firewalls. We have a native offering, and there are third-party options as well that you can bring in easily. Those usually would be the drivers: you need to look deeper into the packet, or you have a larger scale than the thousand entries.
And this expands further. If I'm doing ingress, maybe I need to look at things like a web application firewall. So that's a different type of firewall to use, and again, it's really specialized for a particular use case.
But again, think about them as an AND rather than an OR. What not to do, and I mentioned this earlier: don't spend too much time making the network a very granular control point. Even when we think about boundaries, it becomes different from what you might be used to on premises.
On premises, a subnet is a very clear boundary, effectively a network segment. To get between two different subnets on premises, you need to have a router, you need to explicitly allow that traffic, maybe you have a firewall. So that's a good control point.
Inside a VPC, subnets are just like containers: they hold a set of IP addresses. Actually, I shouldn't have said containers, because that might be confusing; containers also mean something else. But they're like a storage place for resources, and you pull IP addresses out of that subnet.
But inside a VPC, there's an implicit router, so any subnet can talk to any other subnet. From a routing perspective, inside a VPC, everything can talk to everything. It's more like a layer 2 network than a layer 3, even though layer 2 doesn't exist inside a VPC; you don't really need to deal with that. Anything can talk to anything. So you don't really use subnets as your boundaries, unless you're using NACLs. Instead, think of the VPC as your broader boundary from a segmentation point of view.
Inside the VPC, you might have a subnet that has a route to the internet and another subnet that doesn't have a route to the internet. So you're basically splitting your public-facing and non-public-facing resources.
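That public versus private split comes down to whether the subnet's route table has a default route pointing at an internet gateway. Here is a hedged Python sketch of the longest-prefix-match lookup a route table performs; the route targets like `igw-123` are hypothetical IDs, and the model ignores everything else a real VPC route table does.

```python
import ipaddress

def next_hop(route_table, dest_ip):
    """Pick a route by longest-prefix match, the way VPC route
    tables (and routers generally) choose among matching routes."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [(net, target)
               for cidr, target in route_table.items()
               for net in [ipaddress.ip_network(cidr)]
               if ip in net]
    if not matches:
        return None  # no route at all: the traffic is dropped
    # The most specific (longest) prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

public_rt = {"10.0.0.0/16": "local",     # intra-VPC traffic
             "0.0.0.0/0": "igw-123"}     # default route to the internet
private_rt = {"10.0.0.0/16": "local"}    # no internet route at all

print(next_hop(public_rt, "8.8.8.8"))    # igw-123
print(next_hop(private_rt, "8.8.8.8"))   # None: traffic has nowhere to go
print(next_hop(private_rt, "10.0.5.9"))  # local: intra-VPC still works
```

The private subnet isn't "blocked" from the internet by any rule; it simply has no route that matches internet destinations, while intra-VPC traffic still matches the local route in both cases.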
But usually what I recommend is if customers have, okay, this is gonna be my production or a particular, you know, security level, think about a VPC as the broader boundary rather than individual subnets in that VPC.
Because when you're sending traffic out of that VPC, say to the transit gateway or to Cloud WAN or to another networking construct that supports segmentation, you will not be able to keep that subnet granularity. All the traffic is treated as one VPC. So when you're making those segmentation decisions, think about the VPC as the container for a certain segment or a certain security level.
Host: Yeah, makes sense. One of the things you highlighted: security groups are default deny, which sometimes we don't realize. Especially for someone coming from the on-prem world to the cloud world, it takes a little bit of time to understand the network intricacies. But as you highlighted, with security groups, NACLs, VPCs, subnets, there are so many bells and whistles. At the same time, you should not go overboard and do very granular configuration. I mean, that's the message I'm getting. It should be kept broad.
Tom Adamski: Yeah, sorry, for security groups, you can be granular, but with NACLs and subnets, the granularity decreases. They're like broad-brush settings. But if you're using security groups, or if you're using a next-generation firewall or something that does deep packet inspection, that's where you kick in your granularity. That's where you can have rules like, I'm allowing this IP address to this port. So there's a place and time for granularity. NACLs and subnets and route tables are good broad-brush tools, but not very good for super granular rules, because it's going to get very complicated.
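The broad-brush nature of NACLs that Tom describes comes partly from how they evaluate: numbered rules checked in ascending order, first match wins, implicit deny at the end. A minimal sketch, ignoring ports and protocols, with made-up rule numbers and CIDRs:

```python
# Simplified model of NACL evaluation: rules are numbered, checked in
# ascending order, and the first matching rule decides; if nothing matches,
# the traffic is implicitly denied. Real NACLs also match protocol and port.
import ipaddress

RULES = [
    (100, "10.0.0.0/16", "allow"),   # allow anything from the VPC range
    (200, "0.0.0.0/0", "deny"),      # explicitly deny everything else
]

def evaluate_nacl(src_ip: str, rules=RULES) -> str:
    addr = ipaddress.ip_address(src_ip)
    for _num, cidr, action in sorted(rules):  # ascending rule number
        if addr in ipaddress.ip_network(cidr):
            return action
    return "deny"  # implicit deny when no rule matches
```

Even this toy version shows why very granular NACL rule sets get hard to reason about: ordering matters, and every new rule interacts with everything below it.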
Host: Okay, yeah, thank you for correcting me. So yeah, it makes sense, because the more granularity you add, sometimes it helps you, but it has to be at the right place, right? If you add it at a subnet level or something like that, it goes back to what you said: if somebody gets paged at 3 a.m., will they be able to understand what exactly is set up? Otherwise it will become a nightmare. So...
Tom Adamski: Although, just to add before we jump off, you can't always decide the level of granularity for your security controls. That's sometimes driven by your risk assessment. So it might be: we have a policy that we only allow granular access to specific host names. That might be the policy that my security team is giving me. So then I need to find a technical solution to be able to apply that level of granularity.
And that's not going to be something I'll be doing at the network level. That's going to be something I'll be doing with something like a deep packet inspection firewall.
Host: Yeah, makes sense, absolutely. While responding to the earlier question, you touched on containers. Of course, you didn't mean containers as in Kubernetes containers. Now a lot of workload is moving to Kubernetes, or even to non-Kubernetes container platforms. And within Kubernetes, there's its own traffic routing and network routing and everything; it's at a whole different level. So
When you work with customers who have implemented Kubernetes, or are using, let's say, ECS or something like that, do you look at network segmentation differently? Do you do a different setup in that case?
Tom Adamski: Yeah, it changes things a little bit. The way Kubernetes is deployed inside AWS, there's a container network interface, CNI, that allows the containers to have an IP address from the VPC range. So effectively, a container becomes a first-class citizen inside a VPC. It's just another virtualization layer on top of the existing virtualization layer, because now it's a container on a virtual machine that lives in a virtual private cloud.
So the only thing that the VPC will not see is communication within the cluster that happens on the same worker node. If you have two containers on the same worker node and there's communication happening between them, the VPC networking layer will not see it. So if you're running flow logs, or if you're trying to apply security controls using security groups, these things will not kick in, because the traffic is not leaving the worker node.
If the traffic is going to something else inside the VPC, we now give you options to set up security groups for your pods, if that's what you want to do. And then those security groups can be referenced by another service inside the VPC. So you can set up those capabilities.
But if you want additional controls inside your Kubernetes cluster, you need to use Kubernetes-native constructs like network policies, for example, to control communication between different services, namespaces, and so on.
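As an illustration of the Kubernetes-native constructs Tom mentions, here is a NetworkPolicy expressed as the Python dict you would serialize to YAML. All names, namespaces, and labels are invented for the example; it allows ingress to `app=backend` pods only from `app=frontend` pods in the same namespace:

```python
# Hypothetical NetworkPolicy manifest, as the dict you'd serialize to YAML.
# It restricts ingress to pods labeled app=backend so that only pods labeled
# app=frontend may connect; all other in-cluster traffic to them is dropped.
network_policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "backend-allow-frontend", "namespace": "demo"},
    "spec": {
        # Which pods the policy applies to
        "podSelector": {"matchLabels": {"app": "backend"}},
        "policyTypes": ["Ingress"],
        # Who may reach them
        "ingress": [
            {"from": [{"podSelector": {"matchLabels": {"app": "frontend"}}}]}
        ],
    },
}
```

Note this enforcement happens inside the cluster, which is exactly why it catches the node-local traffic that VPC flow logs and security groups never see.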
Host: Right, yeah, because it has service accounts, and you have roles and policies within Kubernetes, which gives you more granular control. But I like one thing that you highlighted: if there are two containers running on the same node, you do not see that communication outside, outside as in at the VPC flow log level. That's a key thing to remember. Sometimes, since we have done the setup of the VPCs and all of that, we might assume that we will see that traffic at the VPC level as well, but that's not how it works. So thank you for highlighting that.
Tom Adamski: Yeah, usually that's not going to be a huge percentage of your traffic, because if you have a larger cluster, the chances of leaving the worker node to talk to another service on another worker node are pretty high. But there could be cases where you stay on the same worker node. So you're totally right.
Host: Yeah, yeah. So one of the terms that is floating around network segmentation is zero trust. A lot of folks talk about a network as one of the key aspects of zero trust. How do you incorporate network segmentation principles so that you can achieve a zero trust architecture?
Tom Adamski: Yeah. So zero trust in networking is such a broad topic, and I know we probably don't have too much time to really explore all of it. But on a high level, we think about zero trust as the set of tools and capabilities that help you be more granular with your controls. And we think of it as going beyond just the traditional network layer controls, like the IP addresses and ports, but it doesn't get rid of them.
So it builds on top of the existing capabilities rather than replacing them. It's usually about enhancing the network-level information you might use to control traffic with some additional information, like the identity of the remote user if it's user-to-application communication, or some additional identity of the application if it's application-to-application. We have launched a couple of services recently that fall into those categories.
For user-to-app, there's a service called Verified Access that allows you to integrate with an identity provider to decide if you want to allow a remote client to communicate with a particular application. So it's specifically a replacement for, maybe, remote access VPNs. And then we also have VPC Lattice, which is more of an application layer seven proxy, specifically designed to help you with app-to-app communication, regardless of what environment those apps live on. So one could be on containers, another could be a Lambda app, another could be on EC2.
They just talk to Lattice, and Lattice is an L7 proxy aware of all the other applications that were registered with it, and it helps you allow communication from app to app. So you don't need things like load balancers or transit gateways. It takes care of all of the networking, but also allows you to add additional context to that communication. So you can use identity and access management information to decide if you want to allow traffic between two different applications in your environment. It's an interesting shift to abstract a little bit more of the network away from application-to-application communication. And I think that matches the principles of zero trust as well, because we want to slowly move away from just using IP addresses and ports and go to something that gives us more insight into what is actually going on, and then apply controls on that.
Host: VPC Lattice definitely sounds powerful, especially with the context, right? That sort of helps you with authentication, authorization, or you can even apply custom logic on top of it. So yeah, we'll definitely check that out.
Tom Adamski: That's correct. Yeah. And it's interesting, because what customers often use today to make that work, to authenticate up to an app, is usually mTLS, right? Mutual TLS. So they have to manage all the certificates and deploy the certificates on the clients. That's the most common way I've seen to validate the clients and allow them to talk to the apps.
So with Lattice, you don't have to do any of that. You just make sure that you have the right IAM credentials on the client, and they get presented to Lattice. And then you have a policy in Lattice saying, hey, I only allow requests from my organization, or from this particular role, or from this particular user. So all the IAM capabilities, the IAM rules that you had, you can now bring into the data plane to control access between your services.
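A Lattice auth policy of the kind Tom describes is an IAM-style document. This is a sketch with a placeholder account ID and role name, not a policy taken from any real environment:

```python
# Sketch of a VPC Lattice auth policy: an IAM-style policy document that
# allows service invocations only from one specific role. The account ID
# and role name are placeholders for illustration.
import json

auth_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Hypothetical caller: only this role may invoke the service
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/frontend-app"},
            "Action": "vpc-lattice-svcs:Invoke",
            "Resource": "*",
        }
    ],
}

print(json.dumps(auth_policy, indent=2))
```

The point Tom makes is that this is the same IAM vocabulary teams already use for control-plane access, now applied to request-time, data-plane decisions between services.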
Host: Yeah, that sounds very powerful, for sure. So now, shifting to security appliances, so when it comes to setting up the network infrastructure, there are several security appliances which come into play, like firewalls, we have been talking about it, or different transit gateways and stuff like that.
Can you give us some examples of common security appliances that we deal with on a day-to-day basis either knowingly or unknowingly?
Tom Adamski: I think the security appliances that are most common, that I've seen, are firewalls. That is truly the most common thing that pops up. When we talk about network connectivity, very rarely do firewalls not come up. It's pretty much every conversation I have: hey, I have these two regions, I want to connect them, and I want these VPCs to talk to one another. And then usually I have these different segments.
Then we talk about things like Transit Gateway or Cloud WAN to set up connectivity, but then immediately the conversation moves on to, okay, now how do I apply firewalling between these different things? Usually it's about firewalls, and we have options. We have some native firewalls, whether that's AWS Network Firewall or AWS WAF, but we also have options to bring third-party partner firewalls into AWS in a very easy fashion. It used to be pretty hard.
It used to be that if you deployed, say, a Palo Alto firewall inside AWS, the way you would use it is you would send traffic to it using a route in the route table. Unfortunately, routes in VPC route tables are not aware of the health of the destination. So if that firewall went down, and it's a single instance, so chances are it eventually will, the route would just be dropping traffic, because the next hop is not available and there's no native automation to flip that route. So that used to be pretty challenging.
We've launched a service called the Gateway Load Balancer. It's specifically a tool that allows you to be a destination for a route, but behind it, you can deploy multiple firewalls. So now if one of those firewalls fails, Gateway Load Balancer is still the target for your routing, so it receives all the traffic, but then it knows, oh, the firewall that I'm normally sending traffic to is now down, I will use another one. So it helps with that availability challenge and also with scale. Right? Previously… again, with the VPC routing, you can only have one destination. So if you're running out of capacity on the one firewall, it's very hard to add a second firewall. You'd have to figure out some mechanism to split your destinations based on routes.
So with Gateway Load Balancer, you can keep adding additional firewalls and have them horizontally scale to handle your traffic load. And even Network Firewall, the native solution, uses the same thing under the hood. It's also using Gateway Load Balancer; you just don't see it, because it's managed by AWS.
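The failover behavior Tom attributes to Gateway Load Balancer can be sketched as: the route always targets the load balancer, which then picks a healthy appliance behind it for each flow. The hashing and appliance names here are simplified stand-ins for the real flow-stickiness mechanics:

```python
# Simplified model of health-aware target selection behind a Gateway Load
# Balancer: the VPC route points at one stable target, and a healthy
# firewall appliance is chosen per flow. Appliance names are hypothetical.
import zlib
from typing import Optional

def pick_appliance(flow_id: str, appliances: dict) -> Optional[str]:
    """Pick a healthy appliance for a flow; hash keeps a flow on one appliance.

    appliances maps appliance name -> health status (True = healthy).
    Returns None when nothing is healthy (traffic would be dropped).
    """
    healthy = sorted(name for name, ok in appliances.items() if ok)
    if not healthy:
        return None
    # Deterministic hash so packets of the same flow hit the same appliance
    return healthy[zlib.crc32(flow_id.encode()) % len(healthy)]
```

Contrast this with the plain-route setup Tom describes: there, the "next hop" is a fixed instance with no health check, so a failed firewall silently blackholes traffic instead of being skipped.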
But ultimately, that's what powers a lot of the security solutions. So I would say firewalls are the primary ones I'm seeing.
There are also some solutions for network monitoring. Traffic Mirroring especially is a feature we have that allows you to effectively create a network tap to copy raw packets from a particular interface belonging to an EC2 instance to a specific destination. I've seen different types of vendors deploy their solutions behind either a Network Load Balancer or a Gateway Load Balancer, which can be the destination for that mirrored traffic, and then they do some analysis with it to see what's going on.
Host: Interesting. So you highlighted that you can use either a native firewall or an external, third-party firewall.
What parameters do you use to decide whether to use native or third party?
Is it more of a customer's decision, that they have bought a solution they want to integrate, or are there parameters which play into that decision-making?
Tom Adamski: Yeah, that's a good question. I would say it's very much a customer's decision. The way we develop functionality and features is all about giving customers a choice. It usually starts with a partner solution; customers come to us and say, actually, can you guys develop something, because we want it managed? Then we listen to our customers, we build a solution, and then customers have an additional option to decide: OK, I don't want to manage my own firewalls, I want AWS to do all the management for me. I don't want to deal with patching or with setting up a Gateway Load Balancer. I just want to set up rules and have a policy deployed at the control point.
So really, what I often see is customers who are already familiar with a particular solution. Say they run Check Point, Fortinet, Valtix, Palo Alto, whatever solution they have on-premises. They often like to bring that into AWS as well, because they already have teams familiar with it. They have the tooling, they know the monitoring.
So they like to keep that when they deploy it inside AWS, but there's a lot of customers who are like, okay, I just want to have it managed. I'm going to use the native network firewall.
Host: Yeah, makes sense. So it's mostly driven by customers, depending on what type of setup they have today and how they want to connect with AWS. In that case, they are using a third-party solution. Or, if somebody is more of a greenfield, starting fresh with AWS, it makes sense to use the managed offering. And depending on the other infrastructure you have, you might want to look at third-party solutions.
But… Native managed solutions also cover a lot of areas.
Tom Adamski: Correct. There's also the aspect of: is there a specific type of functionality you're using on your current vendor? Like Palo Alto's App-ID, right? That's a piece of functionality specific to Palo Alto, and it doesn't exist in Network Firewall today. So if you have specific requirements, it makes sense to review what you're actually using with your current vendor. If you're deciding to move to the managed solution, are those capabilities also supported? How is management going to be different?
So review those things when you're making that decision. It's pretty standard, similar to what people are used to doing on-premises when deciding on one firewall vendor over another. They would look at capabilities, how it's monitored, what the cost is going to be, how much they have to manage, things like that.
Host: Yeah, yeah. So one final question that I have: with COVID and everything, there's been an explosion of remote work, right? So now let's say you have your network segmentation and everything set up, and some developer working from a remote location wants access. Which, of course, doesn't mean that you are adding exceptions, but...
What remote access solutions do you generally see being used by customers, and how do they work with network segmentation overall?
Tom Adamski: Yeah. So the common pattern, probably still the broadest, is remote access VPNs, or client VPNs, or remote VPNs; there are probably different names. You have a user somewhere on the internet, and they have a client that connects to some endpoint in AWS. We have a managed version of that, a service called Client VPN. And again, it comes down to how much of this thing you want to manage.
With Client VPN, you just set up an endpoint, it's managed, and then your remote users connect to that Client VPN endpoint and get network-level access to resources inside AWS. You land inside a VPC, so whatever connectivity that VPC has will also be available to your users. That's why I was saying it makes sense to think about your segmentation at the VPC level, because that's the container you're going to be thinking about in terms of where that VPC is allowed to go, because you're bringing remote users into that VPC.
They'll inherit whatever access that VPC has on a broader scale. So if that VPC is a production VPC, they have access to production. So yeah, there's the option of the managed solution, and there's also the option of third-party partners who can deploy virtual machines inside AWS to terminate the VPNs on pretty much the same principles. It'll land in a particular VPC and then inherit the behavior of that VPC to get to other resources.
The landscape is evolving a little bit, right? There's a lot of appetite I've seen from customers to try to narrow down the access. Even though those VPN solutions often do rely on some additional user parameters, so maybe membership in a particular group allows you access to a particular IP address, with VPN solutions very often it's still all about network-level access.
So it's still deciding based on an IP address, where in reality there's a bit of a push towards having some additional information to make that decision, and not doing it just based on an IP address.
Host: And I think that's where there are many new tools coming up around zero trust network access, where you can define more granular, policy-driven access rather than just IP-based access.
Tom Adamski: Yeah, that's right. So this is the user-to-application use case. The solutions there, AWS Verified Access is one of them, where you can just use a browser to connect to a web application inside AWS and be authenticated against your identity provider, like Okta, PingOne, whoever you're using, as long as they support OIDC.
Then attributes in your identity, like what group you're part of or, you know, what color you like, whatever attributes you want to select, can be taken into account when the decision is being made about whether the user should be allowed to reach a particular back-end application.
And then this can be expanded to awareness of your device as well. Is your device aligned with the policy? Is the antivirus up to date? Is there anything else suspicious on your device? Correct. Yeah. So then we integrate with...
Host: Encryption is in place or not. Yeah, stuff like that.
Tom Adamski: like CrowdStrike or Jamf to provide us that information, because they're already running on the end user's machine and are able to give us additional details about the security level of that machine.
And if it's below a certain threshold, we might not allow the user, or the administrator can decide not to allow that user to connect to the application. So it becomes very powerful. I really like the zero trust direction, because it makes a lot of difference in how you're making that decision. You have a lot more information to play with than just someone's username and password, group membership, and an IP address.
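The combined decision Tom describes, identity attributes plus device posture, reduces to something like the following sketch. The field names and the posture score threshold are hypothetical:

```python
# Sketch of a zero-trust access decision of the kind Tom describes for
# Verified Access: allow only when the user's identity attributes AND the
# device posture (reported by an endpoint agent such as CrowdStrike or Jamf)
# both pass. Field names and the threshold are invented for illustration.
def allow_access(user: dict, device: dict, required_group: str,
                 min_posture_score: int = 70) -> bool:
    """True only if the user is in the required group AND the device is healthy."""
    in_group = required_group in user.get("groups", [])
    posture_ok = device.get("posture_score", 0) >= min_posture_score
    return in_group and posture_ok
```

Compare with a classic VPN: there, only something like the group check exists, and once the tunnel is up, access is decided purely by IP address. Here, a compliant identity on an unhealthy device is still denied.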
Host: True, true. It gives you that power of granularity, in a way, based on many factors. So yeah, makes sense. So that's a good way to end the security question section.
Let's go to the next section, which focuses on security practices.
Rating Security Practices
The way it works is I'll highlight a security practice. You need to rate between one to five, five being the best. You can add context as to why you have given a particular rating.
So let's start with the first one. Conduct periodic security audits to identify vulnerabilities, threats, and weaknesses in your systems and applications.
Tom Adamski: Yeah, I would say this is high on my list, and I'm curious to see the other practices, because I imagine they're all going to be highly scored. I would even expand it to add testing of your availability. And I know availability falls a little bit under security, right? The whole CIA triad. So don't just think about security, but also availability. How do you deal with disaster? How do you deal with outages? These are things that I see a lot of customers being reluctant to test, because it's hard to test, right? You have to schedule outages, you have to schedule downtime in case something goes wrong.
So it's a lot of work, but it's really worthwhile. If you're in a pickle and something really happens, and you have practiced it and you know your failover scenarios work and the DR works, it makes a lot of difference.
Host: No, I agree, I agree on that. The second one is: use strong passwords that contain a mix of uppercase and lowercase letters, numbers, and symbols, and also change them frequently.
Tom Adamski: Yeah, I would combine that with MFA as well. Again, both are fives in my mind, five and five; I don't know if I would score one higher than the other. Combined with MFA, this is actually one of the base recommendations when you're setting up your AWS account: strong passwords, set up MFA. Those are just the right things to do.
Host: Yeah, I agree. MFA is now like a default recommendation; you cannot have an account set up without MFA. So the last one is: continuous integration is a must for DevOps practices, and a security architecture review should be conducted as part of it.
Tom Adamski: Totally. The sooner in the pipeline you can get the security review in, the easier your life is going to be later on. I've seen some customers where security is an afterthought: the development pipeline goes through, and then the security team goes, oh no, this can't go out.
There's a security flaw in it, and it creates a lot of friction. So if you can get your security team involved very early in your development process, that makes a lot of difference, because they feel like they're part of the process and are supporting the development, rather than being the team everybody sees as just saying no. I've been in that place, when I was on a network security team and we were just doing reviews and saying no to people when they came in with things they'd already developed.
So it would have been nice at the time to be involved earlier in the cycle, so we could be part of the development journey. So to me that would be a five as well. It's really hard to rate these, because I would say you've got to do all of them, really.
Host: No, I understand. And there is always that friction between security and engineering, or even DevOps, right? So I can relate to what you're saying. Before I let you go, one last question: any recommendation that you have, a blog or a book or a podcast, anything?
Tom Adamski: Yeah, I'm a little bit biased, but I would recommend people catch up on the AWS Networking and Content Delivery blog, which I help manage; we have a lot of contributions from folks inside AWS, so it's an interesting resource to stay on top of. Privately, I like the Darknet Diaries podcast, by Jack Rhysider. I always find it super interesting. It's not just about the technical side; he talks a little about hacking, but a lot of it is around social engineering.
So it's very interesting to hear how these social engineering scams are happening. And yeah, it's always really fun to listen to. He does a really good job investigating and getting a lot of data and getting really good guests to talk about what happens. So yeah, I highly recommend it.
Host: Yeah, yeah. Darknet Diaries, I have listened to a few episodes. And the way it's presented is also more story-like, right? Rather than a discussion, it's more story-like, so you can grasp it in a much better manner. So yeah.
Tom Adamski: Yeah, there was a recent one, I think on the cyber cryptocurrency scams. And it was kind of interesting because one of the victims was someone who works in security in tech. And they were actually quite cautious about the... It's super interesting to hear how that social engineering really... It was really hard for that person to really pick up on it because it was done so well.
Yeah, fascinating. The human psychology. You think it's going to be all about technical skills and hacking, but in reality it's all about social engineering.
Host: Yeah, so what we'll do is, when we publish the episode, we'll tag both Darknet Diaries and the Networking and Content Delivery blog, so that our audience can also go there, subscribe, and start following them as well.
Yeah, so thank you so much, Tom, for coming and sharing your knowledge and your insights with us. It was lovely to have you on the podcast.
Tom Adamski: Awesome. Perfect. Awesome, thank you.
Host: Yeah, and to our audience, thank you for watching. See you in the next episode.