Go Applications in Containers

So you’ve built an app that runs on your machine. Great. Now what?

Obviously software is built to be used, and today most of it runs on servers in the cloud. But sometimes it doesn't. Sometimes it runs on old hardware in the basement of a company, or another developer needs to run your code as part of an integration.

Whatever the case may be, it would be nice to package your application in such a way that it is guaranteed to work no matter what environment it runs in.

Containers

Containers allow you to package your application and run it on a shared runtime, much like a lightweight virtual machine. They let you configure the environment in which your code will run, which adds an abstraction layer between your application and the operating system, architecture, and frameworks underneath it.

Containers have become very popular over the past couple of years, mainly because of how they make code portable. Since you no longer have to install a bunch of dependencies on a server to make an application run, servers can stay generic. This in turn makes deployments easier to manage and operations a little simpler.

Containers should be built so they are reproducible and behave the same in any environment they run in. A container that has been tested locally should work the same way in production. Managing container deployments, both locally and in production, is much easier than deploying individual applications by hand.

Now, this was a high-level, oversimplified view of containers. If this interests you, please take a look at the references section below, where I've included some additional material.

There are several container frameworks out there, but for the following example we will be using Docker, which has become an industry standard. Every container in Docker is built from a Dockerfile, which defines how to build the image. This recipe allows systems to build the container the same way each time, kind of like a build script. It is usually checked in along with the project because it is part of the build process.

Dockerfile

Not only does a Dockerfile allow us to define how the container will run, it also allows us to use a container as part of its build. In doing this you can ensure that the build is the same each time. This is especially useful when you are building a static binary for languages like Go or Rust, because the binary gets built in the same kind of environment it will run in.

Build Environment Step

So let's take a look at the Dockerfile we will be creating for our application. The first step is to build the code. Every Dockerfile starts with a base image declared with the keyword FROM. We will use golang:alpine, a very small Linux image with minimal configuration, and we will name this stage build-env.

Next we add the project to the src directory of the container. Finally we RUN several steps at once: update, upgrade, change directories, fetch dependencies, then build. Why do we do this all at once? Each RUN instruction creates a new image layer, so combining steps into a single RUN is best practice because it keeps the image smaller.

# Build stage: compile the binary inside a minimal Go image.
FROM golang:alpine AS build-env
ADD . /go/src/hex-example
# Combine setup and build into one RUN to keep the layer count down.
RUN apk update \
   && apk upgrade \
   && apk add git \
   && cd /go/src/hex-example \
   && go get ./... \
   && CGO_ENABLED=0 GOOS=linux go build -ldflags "-w -X main.docker=true" main.go
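
A quick note on the build flags: -w strips debug information to shrink the binary, and -X sets the value of a string variable at link time. For that flag to do anything, main.go has to declare a matching package-level variable. Here is a minimal sketch of what that might look like; how the real main.go actually uses the variable is an assumption:

package main

import "fmt"

// docker is set at link time by -ldflags "-X main.docker=true".
// Without the flag it defaults to the empty string.
var docker string

func main() {
	if docker == "true" {
		fmt.Println("running in a container")
	}
}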

Container for Application

The container for the application is very simple. First you once again state your base image, but this time we will be using scratch. This is a special image because it is the smallest one possible: it is completely empty. Running your binary in it is almost exactly like running it directly on a server, with some gains in memory use and resource management, plus you end up with a deployable artifact.

This was the core reason for me to learn Go. If you are building small pet projects you don't necessarily want a lot of cost overhead. By building containers from scratch you end up with small images, which means you can use fewer resources to run your applications.

After defining the base image we expose a port on the container, then copy the binary we created in the build-env stage. Finally we add the command that runs the application when the container starts.

# Runtime stage: an empty image containing only our static binary.
FROM scratch
LABEL maintainer="Joel Holmes <holmes89@gmail.com>"
ENV PORT 3000
EXPOSE 3000
ENV DATABASE_URL=""
ENV REDIS_PASSWORD=""
# Copy only the compiled binary out of the build stage.
COPY --from=build-env /go/src/hex-example/main /main
CMD ["/main"]

You will notice we have some ENV params in the container; these let us pass values into the container to make it reusable. There were some slight code modifications to accommodate this, so please make sure to take a look at the updated source code in the main.go file.
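
The actual modifications live in the repository's main.go, but the idea is simply to read these values from the environment. Here is a minimal sketch, assuming the app falls back to sensible local defaults; the exact variable handling in the real main.go may differ, and envOr is just an illustrative helper:

package main

import (
	"log"
	"net/http"
	"os"
)

// envOr returns the environment variable named key, or fallback when unset.
func envOr(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

func main() {
	port := envOr("PORT", "3000")
	redisAddr := envOr("DATABASE_URL", "localhost:6379")
	redisPassword := os.Getenv("REDIS_PASSWORD") // empty means no auth

	_ = redisAddr // wire these into the Redis client setup
	_ = redisPassword

	log.Printf("listening on :%s", port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}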

Building and Testing

Before we go any further, please make sure you have Docker installed on your machine. Now we will build the container and see how it runs.

docker build -t gira:latest .

Everything should build fine. The -t flag tells Docker what to name (tag) your image, and the dot at the end specifies the build context, the directory to build from.
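
If you are curious how small the result is, you can list the image and check its size (the exact number will depend on your binary):

docker images gira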

Start up Redis and then run the container. If you are using Linux, use this:

docker run --net=host gira:latest

Or if you are using Windows or Mac, use:

docker run -p 3000:3000 -e DATABASE_URL=host.docker.internal:6379 gira:latest

Everything should be running.

Just a brief explanation of the difference between the two commands above. Docker requires a Linux kernel, so on non-Linux machines Docker runs inside a virtual machine. In order to reach resources on the local system, Docker publishes a special hostname (host.docker.internal) that maps to the host on Mac and Windows. This is one of the reasons to have the environment variable.

When running on Linux, the container needs to be told to connect to the host machine's network; otherwise it will try to connect to the virtual network that Docker creates.

More can be read here.

Contained System

That's great, now you have a container running. But as you can imagine, it could get a little out of hand if you had to start a bunch of these by hand, especially in our case where we depend on a database. Docker lets us write something called a compose file, which defines a set of containers that are expected to run together. What's neat about this is that the whole system runs on its own self-contained network. This also lets you run multiple instances of a service or database, provided they don't publish the same host port.

So let's define our system using a compose file. docker-compose looks for a docker-compose.yml file in the current directory by default; you can always use the -f flag to specify a different file. Open an editor, create docker-compose.yml, and add the following:

version: '2'
services:
  db:
    image: redis
  gira:
    image: gira:latest
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=db:6379
    depends_on:
      - db

You might stop and say, "wait, this isn't the newest version, why?" I have a good reason. In this case our container depends on the database being active, and unfortunately you can get into a situation where the application starts faster than the database and fails to connect. The Docker Compose 3.x file format dropped part of the depends_on behavior (and it is ignored entirely when deploying to a Swarm), so it doesn't help us here. However, the documentation says the 3.x format is specifically aimed at running Swarm (not something we are using) and that the 2.x format is still valid and not deprecated. Read more about this here.
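
Even with depends_on, compose only waits for the db container to start, not for Redis to actually accept connections, so it is worth making the application itself resilient. Here is a minimal retry sketch using only the standard library; waitFor is a hypothetical helper, and the real main.go may handle this differently:

package main

import (
	"log"
	"net"
	"time"
)

// waitFor retries a TCP dial to addr until it succeeds or attempts run out,
// so the app survives starting before the database is ready.
func waitFor(addr string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		var conn net.Conn
		conn, err = net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		log.Printf("waiting for %s (%d/%d): %v", addr, i+1, attempts, err)
		time.Sleep(2 * time.Second)
	}
	return err
}

func main() {
	if err := waitFor("db:6379", 15); err != nil {
		log.Fatalf("database never became reachable: %v", err)
	}
	// ...connect the Redis client and start the server as usual
}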

What we will do is spin up a Redis instance and make our API dependent on that service. The API also has to reference the name of the internal Redis instance, so we set that as an environment variable. Notice that we only expose the port for the API, not for Redis; this means only applications on the Docker network that compose builds for us can reach it. Compose also provides a sort of DNS lookup on that network, so services only need to reference each other by name.

To run it, all you need to do is run docker-compose up and you should see the application start up. Then test it out.

The exact routes are defined in main.go, but assuming a simple tickets endpoint (the /tickets path below is only an illustration, so substitute the real routes), a quick check looks something like this:
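
curl http://localhost:3000/tickets
curl -X POST http://localhost:3000/tickets \
    -H "Content-Type: application/json" \
    -d '{"name": "my first ticket"}'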

You should see valid responses! To stop everything, press CTRL+C and then run docker-compose down. The first step stops the containers but does not remove their state; the second removes the containers and tears down the network and any other internal resources.

This was a high-level overview of containers and how to run them. In my experience, maintaining a local docker-compose file can really accelerate your team's local development and testing. I've even used it to spin up small web applications on AWS using docker-machine.

References
