

Securing Docker’s Remote API

10.31.2013

One piece of Docker that is amazing is the Remote API, which can be used to interact with Docker programmatically. I recently had a situation where I wanted to run many containers on a host, with a single container managing the others through the API. The problem I soon discovered is that, at the moment, networking is an all-or-nothing affair: you can't turn networking off selectively on a container-by-container basis. You can disable IPv4 forwarding, but you can still reach the Docker Remote API on the host if you can guess its IP address.

One solution I came up with is to use nginx to expose Docker's Unix socket over HTTPS and use client-side SSL certificates so that only trusted containers have access. I liked this setup a lot, so I thought I would share how it's done. Disclaimer: this assumes some knowledge of Docker!

Generate The SSL Certificates

We'll use openssl to generate the certs and, since this is for an internal service, sign them ourselves. We also remove the passwords from the keys so that we aren't prompted for them each time nginx starts.

# Create the CA Key and Certificate for signing Client Certs
openssl genrsa -des3 -out ca.key 4096
openssl rsa -in ca.key -out ca.key # remove password!
openssl req -new -x509 -days 365 -key ca.key -out ca.crt

# Create the Server Key, CSR, and Certificate
openssl genrsa -des3 -out server.key 1024
openssl rsa -in server.key -out server.key # remove password!
openssl req -new -key server.key -out server.csr

# We're self signing our own server cert here.  This is a no-no in production.
openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt

# Create the Client Key and CSR
openssl genrsa -des3 -out client.key 1024
openssl rsa -in client.key -out client.key # no password!

openssl req -new -key client.key -out client.csr

# Sign the client certificate with our CA cert.  Unlike signing our own server cert, this is what we want to do.
openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt

Another option is to leave the passphrase in place and provide it as an environment variable when running a Docker container, or through some other means, as an extra layer of security.
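If you want to script the steps above, the interactive prompts can be skipped entirely. Here's a minimal non-interactive sketch of the CA and client-cert portion (the -subj values are placeholders of my own choosing, and I've bumped the client key to 2048 bits), including a sanity check that the client cert actually chains back to our CA:

```shell
# -subj supplies the subject fields so openssl req never prompts, and
# skipping -des3 means the keys are never password-protected to begin with.
openssl genrsa -out ca.key 4096
openssl req -new -x509 -days 365 -key ca.key -out ca.crt -subj "/CN=docker-ca"

openssl genrsa -out client.key 2048
openssl req -new -key client.key -out client.csr -subj "/CN=docker-client"
openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key \
  -set_serial 01 -out client.crt

# Sanity check: the client cert should validate against our CA.
openssl verify -CAfile ca.crt client.crt  # prints: client.crt: OK
```

The same -subj trick works for the server cert, which is handy when baking this into a provisioning script.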

We’ll move ca.crt, server.key and server.crt to /etc/nginx/certs.

Setup Nginx

The nginx setup for this is pretty straightforward. We listen for traffic on localhost on port 4242, require client-side SSL certificate validation, reference the certificates we generated in the previous step, and, most important of all, set up an upstream proxy to the Docker Unix socket. I simply overwrote what was already in /etc/nginx/sites-enabled/default.

upstream docker {
  server unix:/var/run/docker.sock fail_timeout=0;
}
server {
  listen                         4242;
  server_name                    localhost;
  ssl                            on;

  ssl_certificate                /etc/nginx/certs/server.crt;
  ssl_certificate_key            /etc/nginx/certs/server.key;
  ssl_client_certificate         /etc/nginx/certs/ca.crt;
  ssl_verify_client              on;
  
  access_log /var/log/nginx/access.log;
  error_log /dev/null;

   location / {
      proxy_pass                 http://docker;
      proxy_redirect             off;

      proxy_set_header           Host             $host;
      proxy_set_header           X-Real-IP        $remote_addr;
      proxy_set_header           X-Forwarded-For  $proxy_add_x_forwarded_for;

      client_max_body_size       10m;
      client_body_buffer_size    128k;

      proxy_connect_timeout      90;
      proxy_send_timeout         120;
      proxy_read_timeout         120;

      proxy_buffer_size          4k;
      proxy_buffers              4 32k;
      proxy_busy_buffers_size    64k;
      proxy_temp_file_write_size 64k;
   }

}

One important piece to make this work: add the user nginx runs as to the docker group so that it can read from the socket. Depending on your distribution, this user could be www-data, nginx, or something else.
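For example, on a Debian/Ubuntu box where nginx workers run as www-data (check your own nginx.conf; the user name here is an assumption), that would look something like:

```shell
# Add the nginx worker user to the docker group so it can use the socket.
sudo usermod -aG docker www-data

# nginx must be restarted to pick up the new group membership.
sudo service nginx restart
```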

Hack It Up!

With this in place and nginx restarted, let's first run a few curl commands to make sure everything is set up correctly. We'll make two calls without the client cert to double-check that we are denied access, then a proper one.

# Is normal http traffic denied?
curl -v http://localhost:4242/info

# How about https, sans client cert and key?
curl -v -s -k https://localhost:4242/info

# And the final good request!
curl -v -s -k --key client.key --cert client.crt https://localhost:4242/info

For the first two we should get run-of-the-mill 400 HTTP response codes, before getting a proper JSON response from the final command! Woot!

But wait, there's more… let's build a container that can call the service to launch other containers!

For this example we'll simply build two containers: one that has the client certificate and key and one that doesn't. The code for these examples is pretty straightforward, and to save space I'll leave the untrusted container out. You can view the untrusted container on GitHub (although it is nothing exciting).

First, the Node.js application (written in CoffeeScript) that will connect and display information:

https = require 'https'
fs    = require 'fs'

options =
  host: '172.42.1.62' # the docker host's address
  port: 4242
  method: 'GET'
  path: '/containers/json'
  key: fs.readFileSync('ssl/client.key')
  cert: fs.readFileSync('ssl/client.crt')
  ca: fs.readFileSync('ssl/ca.crt') # trust our self-signed server cert
  headers: { 'Accept': 'application/json' } # not required, but being semantic here!

req = https.request options, (res) ->
  res.on 'data', (chunk) -> process.stdout.write chunk

req.end()

And the Dockerfile used to build the container. Notice we add client.crt and client.key as part of building it!

FROM shykes/nodejs
MAINTAINER James R. Carr <james.r.carr@gmail.com>

ADD ssl/client* /srv/app/ssl/
ADD package.json /srv/app/package.json
ADD app.coffee /srv/app/app.coffee

RUN cd /srv/app && npm install .

CMD cd /srv/app && npm start

That's about it. Run docker build . and docker run -n <IMAGE ID>, and we should see a JSON dump to the console of the actively running containers. Doing the same in the untrusted directory should present us with a 400 error about not providing a client SSL certificate. :)

I've shared a project with all this code plus a Vagrantfile on GitHub for your own perusal. Enjoy!



Published at DZone with permission of James Carr, author and DZone MVB. (source)
