Microcontainers and Logging in Docker: Iron.io CTO speaks at Docker NYC


Travis Reeder, the co-founder and CTO of Iron.io, spoke at last night’s Docker NYC meetup about Microcontainers. In addition, Hermann Hesse of Sumo Logic spoke about Logging in Docker.


Iron.io is a big proponent of microcontainers: minimal Docker containers that can still run full-fledged jobs. We've seen microcontainers gaining traction amongst software architects and developers because their small size makes them fast to download and distribute via a Docker registry. Microcontainers are also easier to secure: less code and fewer libraries and dependencies mean a smaller attack surface and a leaner OS base to harden.

Microcontainers require us, as developers, to adjust the way we think about creating containers. Instead of starting with everything, we start with nothing and add only what we need.

For our tiny base image, we could start with the empty scratch image, but that's a bit too esoteric for this introductory talk, so we'll use Alpine Linux, which comes with a shell and a package manager.

Now, let's build a node base image that we can use for our node apps, based on the Alpine image. All we'll do is add node, nothing else. To make it a little bit smaller, we're installing node and then deleting the package manager's cache.
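As a rough sketch, the Dockerfile for such a base image might look like this (the Alpine tag and apk package name here are assumptions, not the exact iron/node source):

FROM alpine:3.3
# install node from Alpine's package repository, then delete the apk cache
RUN apk update && apk add nodejs && rm -rf /var/cache/apk/*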

We have tiny base images like this for each language, published under the iron/ namespace on Docker Hub.

The workflow is the same 4 steps for most languages:

1. Vendor dependencies: The npm install step is typical, except we're running it inside a Docker container.

docker run --rm -v "$PWD":/app -w /app iron/node:dev npm install

2. Dev/Test: To test, we'll run node app.js with the image we just created. Notice that we don't even need node installed locally.

docker run --rm -v "$PWD":/app -w /app iron/node node app.js

3. Build Image: Now, we'll create a simple Dockerfile:

FROM iron/node
WORKDIR /app
ADD . /app
ENTRYPOINT [ "node", "app.js" ]

Then we’ll build it:

docker build -t USERNAME/myapp .

4. Push Image: And finally, docker push:

docker push USERNAME/myapp

Now you have a 29 megabyte node container on Docker Hub, instead of 644 megabytes.

This gets even smaller with Go. If you use the command:

docker run treeder/hello

…you'll see how small the image is as Docker pulls it: about 10 megabytes.
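As a sketch of how an image like that can be built, first compile a statically linked Linux binary (the file names are arbitrary):

CGO_ENABLED=0 GOOS=linux go build -o hello

Then a tiny Dockerfile packages nothing but that binary on top of the empty scratch base:

FROM scratch
ADD hello /
ENTRYPOINT ["/hello"]

Since a static Go binary carries its own dependencies, the image is essentially just the size of the executable.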

Further reading on microcontainers:


The next talk at Docker NYC was given by Hermann Hesse, Manager of Sales Engineering at Sumo Logic. Before joining Sumo Logic in 2013, Hermann led the deployment of large-scale automation and monitoring solutions at BMC.

A History of Docker Logging

In Docker 1.7, we saw the introduction of --log-opt, which lets us pass parameters to the logging drivers. You can forward directly to your local Syslog aggregator, or to a cloud logging service.
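For example, a minimal sketch of forwarding a container's logs to a syslog aggregator (the address is a placeholder):

docker run --log-driver=syslog --log-opt syslog-address=udp://logs.example.com:514 myimage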

In 1.8, we saw the addition of options for the json-file driver. json-file is still the default, with a longstanding problem: it will fill up your disk. Now, json-file can be configured.
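A sketch of the rotation options that keep json-file from eating the disk (the sizes and counts here are arbitrary):

docker run --log-driver=json-file --log-opt max-size=10m --log-opt max-file=3 myimage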

In 1.9, many containers could share a single aggregator downstream from the log driver. But when this happens, which log comes from which container? There's a loss of metadata, but log tags solve this by letting you use container metadata as part of each entry.
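For example, a sketch that stamps each entry with image and container identity using the standard tag template fields:

docker run --log-driver=syslog --log-opt syslog-address=udp://logs.example.com:514 --log-opt tag="{{.ImageName}}/{{.Name}}/{{.ID}}" myimage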

In Docker 1.10, we saw TCP+TLS support for Syslog, but unfortunately there's a bug that prevents it from working (and that bug is currently unresolved).
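For reference, the intended usage looks like this sketch (per the talk, the bug meant this didn't actually work at the time; the address is a placeholder):

docker run --log-driver=syslog --log-opt syslog-address=tcp+tls://logs.example.com:6514 myimage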

Shameless plug: Sumo Logic is going to release a Cloud Syslog endpoint, which means you don’t need an on-prem collector for Syslog.

Where will it end?

Logging drivers have been a big step forward over the last year. Having to commit drivers into the Docker engine itself is sub-optimal, but it means more review and more stability for the drivers. There is a GitHub issue open that aims to reduce the dependency on third-party libraries.

How should we log?

Events: You’ll want to enumerate all your running containers, start listening to the event stream, and then for each running container and start event, start collecting entries.
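A minimal sketch of that pattern against the Docker Engine API on the default Unix socket (requires curl 7.40+ for --unix-socket):

# enumerate the currently running containers
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
# then follow the event stream to catch new start events
curl --unix-socket /var/run/docker.sock http://localhost/events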

Configurations: For each running container and start event, we make a call to the inspect API and ship the resulting JSON as a log entry. Now we have all the configurations in the logs!
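A sketch of the inspect call (the container ID is a placeholder):

curl --unix-socket /var/run/docker.sock http://localhost/containers/<container-id>/json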

Logs: For each running container and start event, call the logs API and get the log.
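A sketch of the logs call, following stdout and stderr (the container ID is a placeholder):

curl --unix-socket /var/run/docker.sock "http://localhost/containers/<container-id>/logs?stdout=1&stderr=1&follow=1"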

Statistics: For each running container and start event, call the stats API to open a stream and send each received JSON document as a log. Now we have monitoring of memory, CPU and disk, too!
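A sketch of the stats call, which streams one JSON document per interval (the container ID is a placeholder):

curl --unix-socket /var/run/docker.sock http://localhost/containers/<container-id>/stats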

Host and Daemon Logs: You can include a collector as part of your host images, or run the collector as a container.
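A sketch of the container approach, with the Docker socket mounted so the collector can reach the APIs above (the image name is a placeholder, not a specific product):

docker run -d -v /var/run/docker.sock:/var/run/docker.sock:ro some-log-collector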

We love the API, but there are limitations: the logs API touches the disk, there are race conditions, and scaling connections is difficult.

The dream is one combined stream for events, logs and stats, either as an API call to pull, or as a registration API. Can we expand #18604 to allow for this?

An even more shameless plug: Sumo Logic announced a unified logs and metrics platform, currently in early access and not yet generally available.
