Securing Docker With Secrets and Dynamic Traffic Authorization
Here at Conjur, we’ve been caught up in the Docker wave like so many of you. The biggest question our users face is: how can mid-size and large enterprises effectively deploy and secure Docker in production?
As we’ve worked with our customers on this problem over the last year or so, we’ve been deeply interested in finding effective security orchestration patterns for Docker which work for many different types of organizations and application architectures. In this post we will summarize several of these patterns, which we hope you’ll find useful.
In 2012, Adam Wiggins and his collaborators formulated an excellent set of guidelines for architecting PaaS and container-style applications. These guidelines were formalized as the “12 Factor App”; you can read about them at http://12factor.net/. These guidelines apply very well to Docker containers and images, and studying each one is a valuable exercise for any developer or architect who’s using Docker.
Factor number three states: “Store config in the environment”, and it mandates “strict separation of config from code”. From a security standpoint, the implication is that secrets (SSL certs, database passwords, etc.) must be provided to the container through environment variables. Baking secrets into images is an insecure practice which must be avoided.
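As a minimal illustration of the principle (using a plain child shell as a stand-in for a container), a secret passed through the environment exists only for the lifetime of the process it’s given to, and never touches the parent shell or the disk:

```shell
# A secret set only for the child process: the child can read it,
# but it is never written to disk and the parent never holds it.
DB_PASSWORD="s3cr3t" sh -c 'echo "child sees: $DB_PASSWORD"'

# Back in the parent, the variable was never set at all.
echo "parent sees: ${DB_PASSWORD:-<unset>}"
```

`docker run -e` applies the same idea: the value lives in the container’s environment, not in the image layers.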
In response, we’ve developed a fully open-source tool called “Summon” (https://github.com/cyberark/summon/). To quote the README:
“Summon is a command-line tool that reads a file in secrets.yml format and injects secrets as environment variables into any process. Once the process exits, the secrets are gone.”
Summon provides several clear benefits for orchestrating secrets into Docker:
Secrets are referenced in secrets.yml, a file which is safe to check into source control.
Secrets are provided at runtime by a secrets “provider”; thus, the security orchestration is decoupled from the actual provider of the secrets.
Summon ensures that secrets are handled according to security best practices; they are never on disk (temp files go to /dev/shm) and they are removed when the managed process (“docker run”) exits.
For example, to provide a database password to a Docker container, you’d create a secrets.yml file which looks like this:
DB_PASSWORD: !var prod/db/password
Then launch the Docker container using the following command:
$ summon docker run -e DB_PASSWORD myapp
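To make the mechanics concrete, here is a rough sketch of what the Summon pattern does under the hood. This is a simplified stand-in, not Summon itself: a real run resolves each `!var` reference through the configured secrets provider, whereas this sketch hard-codes the value. The key moves are the same: render the resolved secrets into a tmpfs-backed file, hand them to the managed process as environment variables, and delete the file on exit.

```shell
# Simplified sketch of the Summon pattern. The hard-coded value below
# stands in for a provider lookup of "prod/db/password" from secrets.yml.
SECRETS_FILE=$(mktemp /dev/shm/secrets.XXXXXX 2>/dev/null || mktemp)
trap 'rm -f "$SECRETS_FILE"' EXIT     # the secrets vanish when we exit

# Resolve each !var reference via the provider (stubbed out here).
printf 'DB_PASSWORD=%s\n' "s3cr3t-from-provider" > "$SECRETS_FILE"

# Launch the managed process with the secrets in its environment only.
env $(cat "$SECRETS_FILE") sh -c 'echo "app got: $DB_PASSWORD"'
```

In the real tool, the managed process would be `docker run -e DB_PASSWORD myapp`, and cleanup happens when that process exits.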
In any service-oriented architecture, it’s important to securely govern the allowable communication between the various applications and services. And it’s also important to provide a secure way for key personnel (developers and operations) to interact directly with applications and services when necessary.
This traffic management problem is known by quite a few different names; we generally call it “software-defined firewall” or “identity-defined firewall”.
Traditionally, a few different approaches have been used:
All the services are deployed within an enterprise perimeter. Very little additional security is applied to the traffic. This is a hazardous way to operate, and it’s useless in a cloud or hybrid architecture.
Software-defined networking (e.g. AWS Security Groups) is used to govern allowable inbound and outbound traffic. This method is effective; however, it’s difficult to manage, and management tools like the AWS Console present a constant threat that security will be accidentally loosened when people (or code) relax the traffic rules for their own purposes. It’s also hard to interact with these systems from outside the cloud environment, and the security of the application becomes tied to the cloud vendor.
Methods like PKI (SSL mutual auth) or Kerberos govern the traffic. These techniques are hard to manage, and have limitations of their own. For example, fine-grained authorization is not a strength of PKI, and it’s hard to make Kerberos reliable across multi-site (e.g. hybrid cloud and multi-region) deployments. They are also hard to interact with manually (e.g. via cURL, for maintenance and one-off tasks).
A Docker container cluster like Mesos or Kubernetes presents another particular challenge: software-defined networking and security groups cannot be used to gate the traffic, because there are no fixed boundaries between the containers and container groups (“pods”, in Kubernetes parlance).
For regulating HTTP(S) traffic, we advocate for a particular technique involving a Forwarder, Gatekeeper, and Token Broker. It works like this:
Each container “pod” (aka “multi-container application”) runs a Gatekeeper container, which intercepts all inbound HTTP(S) traffic and verifies the authenticity of a token which it finds in the Authorization header.
The Forwarder is a container which intercepts all outbound traffic and places the token on the Authorization header.
The tokens are issued and verified by the Token Broker.
Token verification can be cached, so that the latency and throughput of the cluster are minimally impacted.
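The flow above can be sketched end to end with toy stand-ins. All of the function names here are hypothetical, and a checksum substitutes for real cryptographic signing; the actual components are networked services:

```shell
# Toy model of the three roles. issue_token/verify_token stand in for the
# Token Broker; the header handling stands in for Forwarder and Gatekeeper.
issue_token() {
  # Broker: derive a token for a named service (a real broker would sign it).
  printf 'tok-%s\n' "$(printf '%s' "$1" | cksum | cut -d' ' -f1)"
}

verify_token() {
  # Gatekeeper asks the broker whether the presented token matches.
  [ "$2" = "$(issue_token "$1")" ] && echo allow || echo deny
}

TOKEN=$(issue_token myapp)                 # Broker issues the token
HEADER="Authorization: Token $TOKEN"       # Forwarder sets the header
PRESENTED=${HEADER#Authorization: Token }  # Gatekeeper extracts it
verify_token myapp "$PRESENTED"            # prints "allow"
```

The point of the structure is that the application containers themselves never handle tokens; the Forwarder and Gatekeeper sidecars do it for them.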
The Forwarder and Gatekeeper are each implemented using Nginx with standard configuration directives plus a bit of Lua scripting. The Forwarder authenticates itself to the Token Broker using a shared secret, which can be provided by Summon. The result is a system with very clear security properties. It’s also easy to manage, it works with any deployment architecture, and it works equally well inside a defined perimeter, or across wide geographic boundaries. In addition, it’s easy for humans to interact with the system; they can run a local Forwarder on their laptop which operates just like the Forwarders in the container pods.
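The caching of token verification mentioned above can be illustrated with a small sketch (again with hypothetical names, and with a stub standing in for the broker round-trip): each token’s verdict is stored once, so repeated requests with the same token skip the broker entirely.

```shell
# Hypothetical sketch of caching broker verdicts on the Gatekeeper side.
CACHE_DIR=$(mktemp -d)
trap 'rm -rf "$CACHE_DIR"' EXIT

broker_verify() {
  # Stand-in for a call to the Token Broker; logs each real lookup.
  echo "broker consulted for $1" >&2
  [ "$1" = "tok-good" ] && echo allow || echo deny
}

cached_verify() {
  cache="$CACHE_DIR/$1"
  [ -f "$cache" ] || broker_verify "$1" > "$cache"   # miss: ask the broker
  cat "$cache"                                       # hit: reuse the verdict
}

cached_verify tok-good   # first call consults the broker
cached_verify tok-good   # second call is answered from the cache
```

A production cache would also need an expiry policy, so that revoked tokens stop being accepted; that detail is omitted here.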
You can see examples of Forwarder and Gatekeeper Nginx configuration in our “sdf-gen” project.
While the Docker engine has been officially released and stable for about a year, the practices and tooling surrounding Docker deployment and orchestration are still very much in flux. We believe that solid patterns and practices for deploying secrets and managing HTTP(S) traffic between containers can help to accelerate Docker’s enterprise adoption. These patterns need to be tool-independent and “future-proof”, so that Docker orchestration tooling and application architectures can continue to evolve without breaking security. In addition, they should be written as open standards or open source, so that any team can feel comfortable adopting and using them. We are looking forward to working with you to design and develop this new generation of security APIs and tools!