By Chris Tozzi
Securing any type of software environment is a big task. But if you use containers, constructing and enforcing a solid security program can be especially difficult.
That’s because a containerized environment involves many more layers than other types of infrastructure. With virtual machines, you have only a host operating system (OS), a guest OS, and a guest application environment to secure. On bare metal, and in most types of cloud-based environments, the security situation is even simpler because there are fewer layers of software.
In contrast, in a production container environment, you have a number of different layers and tools to secure. In addition to the host OS and the container runtime, you have an orchestrator, a container registry, images, and probably several different microservices within your application.
How do you keep each of those pieces secure and hardened against attacks? This guide explains how to secure all layers in your container stack, from top to bottom.
Host Operating System
The OS that hosts your container environment is perhaps the most important layer of the stack to secure, because an attack that compromises the host environment could give intruders access to everything else in your stack.
Fortunately, the host OS is also probably the easiest part of the stack to secure. The types of operating systems that are used to host containers are not fundamentally different from those that admins have been using for years to host other types of workloads. In most cases, your host OS is a Linux distribution and the principles you use to harden any type of Linux environment apply when you’re dealing with containers.
The most important considerations to keep in mind for hardening the host OS include:
- Minimize attack vectors. A Linux-based OS whose sole job is hosting a Docker environment doesn’t need much running. Minimize opportunities for attack by eliminating all but the essentials from your host environment. You can do this manually by stripping down a Linux distribution of your choice, or you can use a minimalist distribution such as Alpine Linux.
- Enforce strict access control. If your host OS’s only role is hosting a container environment, you don’t need many accounts. In fact, in most cases, all you need is a root user (locked down by allowing logins only from specific remote hosts) and dummy user accounts (also locked down so they can access only the specific services or resources required to do their jobs). Tools such as SELinux and AppArmor can help to create and enforce rigid access control policies.
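As a concrete sketch of the access-control point above, an sshd_config fragment along these lines locks down remote logins. The account name and host pattern are hypothetical examples, not recommendations for any specific environment:

```
# /etc/ssh/sshd_config (fragment) -- illustrative values only
PermitRootLogin no            # no direct root logins over SSH
PasswordAuthentication no     # SSH keys only; no password guessing
AllowUsers deploy@192.0.2.*   # hypothetical account, specific hosts only
```

Pair a fragment like this with SELinux or AppArmor profiles to constrain what those accounts can actually do once logged in.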
Container Runtime
The container runtime is one of the most difficult parts of a container stack to secure because traditional security tools were not designed to monitor running containers. They can’t peer inside containers or establish good baselines for what a secure container environment looks like.
To keep your runtime secure, be sure to implement the following:
- Establish a baseline for your container environment in a normal, secure state. Perform real-time scans of running containers and compare the results to the baseline in order to detect anomalies that could signal an attack.
- Focus on securing your application, rather than relying on network-level security tools to keep you safe. Firewalls and other types of perimeter defenses don’t work well in a containerized environment. (That said, you should certainly take basic precautions to secure your networks. For example, be sure that encryption is configured for overlay networks.)
- Ensure that running containers are stopped and replaced with new containers whenever you update your applications or services. In other words, make sure to keep your containerized infrastructure immutable. This is safer than attempting to perform live updates on running containers, which leads to configuration drift and poor enforcement of security policies.
- Remember that your running containers are only as secure as the application code that powers them. The traditional rules for writing secure code and vetting it for security after it’s written still apply. In this regard, the shift-left security concept can come in handy.
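The immutability point above can be sketched with plain Docker commands; the image and container names here are hypothetical:

```shell
# Build a new image for the updated application instead of patching
# the running container in place.
docker build -t myapp:1.0.1 .

# Replace the old container entirely...
docker stop myapp
docker rm myapp

# ...and start a fresh, read-only container from the new image.
docker run -d --name myapp --read-only myapp:1.0.1
```

In an orchestrated environment, a rolling update (for example, `docker service update --image myapp:1.0.1 myapp` in Swarm) performs the same replace-rather-than-patch operation across a cluster.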
Container Registry
Registries set containerized environments apart from most traditional environments. Registries provide a convenient, centralized source for storing and downloading application images. (This is not a totally new idea, of course; most Linux distributions have used software repositories in a similar way for a long time, and “app stores” function in a similar way as well.) But container registries differ because they are essentially the only way to obtain and run a containerized application. In traditional environments, it’s possible to install applications without using a repository or app store.
Because the registry is central to the way your containerized environment operates, it’s essential to secure it. Intrusions or vulnerabilities within the registry offer an easy opening for compromising your running application.
Securing the registry involves several considerations:
- Locking down the server that hosts the registry to mitigate the risk of attack there.
- Using secure access policies.
- Running an image scanner to help detect vulnerabilities within container images.
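To make the first two bullets concrete, here is one sketch of running the open source Docker registry with TLS and basic authentication enabled. The certificate and htpasswd paths are placeholders you would generate yourself:

```shell
# Run the open source registry with TLS and basic auth enabled.
# Certificate and htpasswd paths are examples -- generate your own.
docker run -d --name registry -p 443:443 \
  -v /opt/certs:/certs:ro \
  -v /opt/auth:/auth:ro \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/registry.key \
  -e REGISTRY_AUTH=htpasswd \
  -e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  registry:2
```

An image scanner such as Clair can then be pointed at the registry to flag vulnerable images before they are deployed.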
Container Images
Securing container images overlaps to a large extent with securing registries, since registries are where your images live. But because image security involves some additional considerations, it deserves a section of its own.
To secure images, keep the following pointers in mind:
- Images should contain the bare minimum amount of code necessary to run whichever service or application you are creating the image for. Exclude any non-essential services. For example, in most cases there is no reason to include an SSH server inside a container image because you can log into the container in other, more secure ways.
- Images provide a blueprint for creating an application or service container. Don’t use them for other tasks, such as hosting source code. Storing source code in an image may be convenient, but there are better, safer ways to do it, such as using a code repository. Vine famously made this mistake last year, when it placed source code inside container images that turned out to be publicly available.
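Putting the image guidance above into practice, a minimal Dockerfile might look like the following sketch; the base image, binary name and user name are placeholders:

```dockerfile
# A single application binary on a minimal base: no SSH server,
# no extra services, and a non-root user.
FROM alpine
RUN adduser -D -H appuser           # unprivileged user, no home directory
COPY myapp /usr/local/bin/myapp     # hypothetical application binary
USER appuser
ENTRYPOINT ["/usr/local/bin/myapp"]
```

If a binary needs build tools, compile it in a separate build stage or build environment so that compilers and other tooling never ship in the final image.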
Orchestrator
The orchestrator serves as the brain of your container environment, keeping all your containers running smoothly. It’s another essential component of a container stack.
Orchestrators are not security tools. They handle provisioning and scheduling, but they are not designed to detect intrusions or vulnerabilities. Don’t assume that Kubernetes, Swarm or whichever orchestrator you use will do much on its own to keep your environment secure.
On the contrary, you need to take precautions to ensure that the orchestrator itself remains secure and does not become a liability. Do so using these principles:
- Install the orchestrator from an official, trusted, up-to-date source (just as you should do for any type of mission-critical application).
- Configure the orchestrator to provide high availability and automatic failover to the extent possible. This will help to mitigate the impact of DDoS attacks.
You can take other steps to secure your orchestrator, and they vary depending on exactly which orchestrator you use. The Kubernetes project offers useful security tips in its documentation. For Swarm, Docker’s reference documentation can be helpful (although it is not written solely with Swarm security configuration in mind). Additionally, Twistlock recently published an Ultimate Guide to Container Orchestrators post, so check it out for more details.
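In Kubernetes, for example, part of this hardening is granting each component only the API access it needs via RBAC; the namespace and role name below are hypothetical:

```yaml
# Allow read-only access to pods in a single namespace, instead of
# handing out cluster-wide administrative credentials.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: production
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```

A RoleBinding then attaches this role to the specific service account or user that needs it, and nothing more.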
Persistent Storage
The persistent storage layer of your containerized environment can take different forms. Docker data volumes are one approach. Flocker (which lives on as an open source project even though the company behind it, ClusterHQ, shut down in late 2016) is another. So is Portworx, which provides a commercial container storage solution.
The details of securing your container storage will vary depending on the type of storage architecture you use. But in general, some basic security principles apply:
- If a container does not need write access to a storage directory, limit it to read-only access.
- Use a file system that enables roll-back so that you can undo any changes to your data if necessary.
- As with several other parts of the container stack, access control is key. Make sure that only containers that need access to a shared storage directory have access. In addition, make sure that no users or applications on the storage server have access to the storage system unless they require it. Last but not least, remember that chmod is your friend. Use it to provide fine-grained access control for specific files or directories where appropriate.
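The directory-permission advice above can be sketched in a few shell commands; the directory and file names are made up for illustration:

```shell
#!/bin/sh
set -e

# A shared data directory owned by one service account. The owner may
# read and write, group members may read, and everyone else gets nothing.
mkdir -p shared-data
printf 'example record\n' > shared-data/report.log

chmod 750 shared-data             # rwxr-x--- on the directory
chmod 640 shared-data/report.log  # rw-r----- on the file
```

On the container side, mount storage read-only whenever write access isn’t required, for example `docker run -v /srv/shared-data:/data:ro ...`.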
Chris Tozzi has worked as a journalist and Linux systems administrator. He is especially adept in open source, agile infrastructure, security and networking. He is Senior Editor of content and a DevOps Analyst at Fixate IO and a contributor to Twistlock.
Reprinted with permission of Twistlock.