Infrastructure management requires attention to many areas - application performance, observability, reliability, and disaster recovery, to name a few. But one of the most overlooked is security. Here at ITSyndicate, a trusted AWS partner, we treat security with great care and understand how crucial it is for a business. Let’s dive into a case study to take a closer look at how infrastructure security is handled.
Foundational principles for VPC and IAM security
It’s always a good idea to start with the most fundamental things you have heard about hundreds of times - subnet management, IAM configuration, and least-privilege implementation. And there is a reason for that: ignoring these well-known truths generates problems that grow exponentially as the project grows. Once your infrastructure is fully operational and access to services has settled, it is almost impossible to make any VPC and IAM security adjustments.
The rule of thumb for IAM is simple
You should implement least privilege from the very beginning, without tricking yourself that one day you will revisit that IAM user and trim its excessive permissions for every resource inside your account. That doesn’t mean you shouldn’t touch policies at all, but we try to minimize the need for it with careful planning.
For instance, in this project the EKS cluster needs to reach only a handful of AWS services, so we already know that the IAM role used by Kubernetes workloads requires access to a limited set of them - Secrets Manager, KMS for encryption keys, the RDS database, and an S3 bucket.
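To make this concrete, here is a minimal sketch of what such a scoped policy could look like when attached to the workload role with boto3. The role name, ARNs, and actions are placeholders for illustration; in practice you would narrow them to the exact resources and operations your workloads use.

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical least-privilege policy for the EKS workload role.
# The ARNs below are placeholders - scope them to your real resources.
workload_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["secretsmanager:GetSecretValue"],
            "Resource": "arn:aws:secretsmanager:eu-west-1:111111111111:secret:app/*",
        },
        {
            "Effect": "Allow",
            "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
            "Resource": "arn:aws:kms:eu-west-1:111111111111:key/<key-id>",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::app-assets-bucket/*",
        },
    ],
}

# Attach the inline policy to an assumed role name for the Kubernetes workloads.
iam.put_role_policy(
    RoleName="eks-workload-role",
    PolicyName="least-privilege-workloads",
    PolicyDocument=json.dumps(workload_policy),
)
```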
Strategic VPC and subnet management
At first glance, VPC and subnet management may look tricky, but in most cases it isn’t. For each resource to be deployed, a few control questions should be asked. Does this resource require access to the Internet? Does this resource need to be accessible from the Internet? Can we limit Internet access without sacrificing anything? You can see this approach in our project - the EKS cluster should not be reachable from the Internet, but it still requires outbound access for specific patches, git, and so on.
These requirements lead to private subnets for the cluster nodes, with a route to the NAT Gateway in the public subnet. Application load balancers also sit in public subnets, serving as the origin for CloudFront distributions. Another resource that sits in a private subnet is the RDS instance. Notice the VPC endpoint that allows S3 access through the AWS internal network - S3 can certainly be reached over the Internet, but that is less secure than keeping traffic on the AWS private network.
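As a sketch of that last point, a gateway VPC endpoint for S3 can be associated with the private route tables so that bucket traffic never leaves the AWS network. The region, VPC ID, and route table IDs below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

# Gateway endpoint for S3, associated with the private route tables,
# so traffic from the EKS nodes and other private resources to S3
# stays on the AWS network instead of going through the NAT Gateway.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",                 # placeholder VPC ID
    ServiceName="com.amazonaws.eu-west-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=[
        "rtb-0aaaaaaaaaaaaaaaa",                   # placeholder private route tables
        "rtb-0bbbbbbbbbbbbbbbb",
    ],
)
```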
Enhanced security with AWS WAF and CloudFront
Now, let’s talk about the CloudFront distributions we mentioned earlier. As you may have noticed, we attached AWS WAF to each distribution, which makes every one of them a double-edged sword - except both edges cut in our favor. The first security benefit is obvious: AWS WAF effectively protects us from layer-seven attacks such as SQL injection.
But there is also a second benefit - CloudFront helps mitigate DDoS attacks by spreading requests across many AWS edge locations. And if you need something more advanced for DDoS protection, you can always enable AWS Shield Advanced, though it is quite expensive.
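To illustrate the WAF side, the sketch below creates a CloudFront-scoped web ACL with AWS’s managed SQL-injection rule set attached. The ACL and metric names are assumptions; in a real setup you would add further managed rule groups and then associate the ACL with each distribution.

```python
import boto3

# CLOUDFRONT-scoped web ACLs must be created in us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="cloudfront-waf",                          # assumed name
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "aws-sqli-rules",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesSQLiRuleSet",
                }
            },
            # Let the managed rule group's own block/allow actions apply.
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "sqli-rules",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "cloudfront-waf",
    },
)
```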
Secure data management in Kubernetes
You might wonder what Secrets Manager is used for in the Kubernetes context. There is no mystery behind it - Secrets Manager lets us store sensitive data that can later be mounted as a file or exposed as an environment variable using the Secrets Store CSI Driver with the AWS provider. This approach removes any need to store sensitive data inside a git repository or anywhere else it could be exposed to the outside world. AWS KMS, on the other hand, is multifunctional, since it is integrated with almost every AWS service you can think of.
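As a quick illustration of the consumption side, once the CSI driver mounts a secret into the pod, the application simply reads it as a local file - no SDK call needed. The mount path and secret name below are assumptions that would match whatever you configure in the SecretProviderClass and pod spec.

```python
import json
from pathlib import Path

# Path where the Secrets Store CSI Driver mounts the secret inside the pod
# (an assumption - it matches the mountPath configured in the pod spec).
SECRET_PATH = Path("/mnt/secrets-store/app-credentials")

def load_db_credentials() -> dict:
    # The secret arrives as a plain file on the pod's filesystem,
    # so the application never handles AWS credentials to read it.
    return json.loads(SECRET_PATH.read_text())

credentials = load_db_credentials()
```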
It’s a good idea to create a separate KMS CMK for every service that supports encryption, but that decision is closely tied to IAM planning. In our scenario, KMS was used for EBS volume encryption (RDS, EKS nodes, and OpenSearch nodes) and for S3 bucket encryption. Also, notice the EventBridge rule that allows more frequent key rotation, which definitely helps to enhance security.
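As a minimal sketch of the per-service CMK idea, the snippet below creates a customer-managed key dedicated to S3 bucket encryption and turns on automatic rotation. The alias and description are placeholders, and the EventBridge-driven rotation mentioned above would be layered on top of this baseline.

```python
import boto3

kms = boto3.client("kms")

# One customer-managed key per service that supports encryption -
# here, a key dedicated to S3 bucket encryption (alias is a placeholder).
key = kms.create_key(
    Description="CMK for S3 bucket encryption",
    KeyUsage="ENCRYPT_DECRYPT",
)
key_id = key["KeyMetadata"]["KeyId"]

kms.create_alias(AliasName="alias/s3-buckets", TargetKeyId=key_id)

# Enable AWS-managed automatic rotation of the key material.
kms.enable_key_rotation(KeyId=key_id)
```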
Enhancing security through observability
One of the most exciting components of this architecture is security observability. It’s common practice to track CPU and memory utilization, disk space, and network IO, but we can also track security incidents, send notifications when something suspicious happens, and automatically mitigate those incidents. For this, we used AWS Config with SNS and Lambda integrations.
Config contains a set of rules that publish to SNS, which in turn triggers a Lambda function that sends notifications to a Slack channel. Although those rules are pretty simple (such as ‘port X should not be open in any security group’), they help us react rapidly and revert unwanted changes, and they simplify investigation, since you always know who made those changes.
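A minimal sketch of that Lambda function is shown below, assuming the Slack webhook URL is passed in via an environment variable. The fields pulled out of the Config notification are the common ones for compliance-change messages; the exact payload depends on which rules you enable.

```python
import json
import os
import urllib.request

SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]  # assumed environment variable

def handler(event, context):
    # SNS delivers the AWS Config notification as a JSON string in the Message field.
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        new_result = message.get("newEvaluationResult", {})
        text = (
            f"AWS Config rule *{message.get('configRuleName', 'unknown')}* is now "
            f"{new_result.get('complianceType', 'UNKNOWN')} for resource "
            f"{message.get('resourceId', 'unknown')}"
        )
        # Post a simple text message to the Slack incoming webhook.
        payload = json.dumps({"text": text}).encode("utf-8")
        request = urllib.request.Request(
            SLACK_WEBHOOK_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)
```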
Scalable security: easy improvements for the future
But the best part of security management in this project is how easy it is to improve over time. Not every security feature was enabled, since each one adds to the infrastructure cost, but if they are ever needed, all the preparations are in place and enabling them is a matter of a few checkboxes. We already mentioned AWS Shield Advanced; GuardDuty, which monitors AWS API calls from CloudTrail, DNS queries, and VPC Flow Logs, can be switched on just as easily. And if you need workload vulnerability scanning, the SSM Agent for Amazon Inspector is already preinstalled on the EKS nodes. Making every aspect of your infrastructure as configurable and flexible as possible is always a good idea.
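Enabling GuardDuty, for example, really is a single API call. The sketch below assumes the account does not already have a detector in the region; the publishing frequency shown is just one of the allowed values.

```python
import boto3

guardduty = boto3.client("guardduty")

# A single detector per region enables GuardDuty's analysis of CloudTrail
# events, DNS logs, and VPC Flow Logs (assumes no detector exists yet).
detector = guardduty.create_detector(
    Enable=True,
    FindingPublishingFrequency="FIFTEEN_MINUTES",
)
print("GuardDuty detector:", detector["DetectorId"])
```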
Now, let's shift our focus to a crucial security practice that applies to projects of all sizes: regular key rotation. This practice, combined with the principle of least privilege, is fundamental for protecting your resources and preventing unauthorized access. In our article “How does GCP service account key rotation enhance security”, we introduce a straightforward method for regularly rotating GCP service account keys using Kubernetes and Python. This proactive approach enhances your infrastructure’s security, keeping your digital assets safe and your operations running smoothly.