Containerization for Deployability

Goal

  • To package our helloworld microservice into an independently deployable Docker container image and deploy it on a single instance.

Discussion

As the previous post highlighted, the deployability of our service may not be up to snuff.

Well, technically, we haven’t discussed a deployment strategy yet, but it is easy to visualize an approach driven by version control and some deployment automation (a.k.a. the good old days). Even with CI/CD, deployment is narrowly focused on the code artifacts, without worrying about the consistency of environments across multiple production machines. In a planet-scale application, this is insufficient.

Virtualization could solve this, but it is too resource-intensive and cumbersome, since consistent packaging in virtualization means creating an image with an entire OS plus the whole application footprint, with all the bells and whistles.

This is the context in which we speak about containerization. A container is much more lightweight and nimble than a virtualized package: it carries not the OS and the entire application footprint, but just a service and its dependencies. So it is easier to deploy different containers for various services and have them make effective use of resources, while still being consistent across the whole footprint at the container level.

Containerization alone doesn’t solve the problem of deployability. There is more to it, which we will get to shortly, but it is useful at this point to first build a container based on our helloworld service and deploy it to the same single instance.

Infra

The same VM instance from the helloworld post.

Stack

The same stack as the helloworld post.

Containerization

  • Docker

Note: As I will be using Kubernetes for orchestration (spoilers!), it is clear that I will not be using Docker the way it was used, say, a couple of years back, since Kubernetes has deprecated the dockershim, i.e. its support for Docker as a container runtime. This doesn’t break my ability to use Docker to create images, though: images built by Docker are standard OCI images, so those millions of Dockerfiles out in the wild aren’t going to be useless in a Kube-nvironment. But we will not be using the Docker container runtime. Instead, we will be going the Kubernetes way, embracing containerd (more on this later!). Here is a good read that demystifies some of this confusion.

Architecture

Containerized Helloworld Microservice Architecture Diagram

Setup

We will just be containerizing the setup from the helloworld post. As depicted in the architecture diagram above, we will create the docker image using Docker Desktop and push it to a repository on Docker Hub.

So, the only setup we need in the VM is to install Docker, so that we can pull the image from the hub and run it.

sudo apt update
sudo apt install -y docker.io

Code

The project structure will be slightly modified to add a Dockerfile and a requirements.txt, both standard components of containerizing through docker.

─── helloworld/
    ├── app/
    │   ├── __init__.py
    │   └── helloworld.py
    ├── tests/
    │   ├── __init__.py
    │   └── test_helloworld.py
    ├── .gitignore
    ├── Dockerfile
    ├── LICENSE
    ├── README.md
    └── requirements.txt

The key point to realize is that the code doesn’t change. requirements.txt simply captures all the dependencies and Dockerfile encapsulates the steps we performed on a command line earlier to set up our service.

In our example, the requirements.txt looks like this (I am being lazy here; ideally, I would specify version ranges for each of the dependent packages):

gunicorn
uvicorn[standard]
fastapi
requests
pytest
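
For completeness, a version-ranged variant of requirements.txt might look like the following. The version numbers here are purely illustrative, not tested recommendations; pick ranges that match what you have actually verified:

```
gunicorn>=20,<21
uvicorn[standard]>=0.17,<1
fastapi>=0.75,<1
requests>=2.27,<3
pytest>=7,<8
```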

The Dockerfile looks like this:

FROM python:3-slim

WORKDIR /helloworld

COPY requirements.txt /helloworld/requirements.txt

RUN pip3 install --no-cache-dir -r /helloworld/requirements.txt

COPY ./app /helloworld/app

CMD ["uvicorn", "app.helloworld:app", "--host", "0.0.0.0"]

Here is a quick description of what’s going on here:

FROM: We start from a base docker image called python. This installs Python before doing anything else, just like we did manually in our helloworld setup. We use the 3-slim tag, which has a small footprint.
WORKDIR: We declare our working directory to be /helloworld. This is where the subsequent instructions in the file will be executed.
COPY requirements.txt: Copies the requirements.txt file from our local directory to the working directory.
RUN: Installs all dependencies, just like we did manually in our helloworld setup.
COPY ./app: Copies code from the local directory to the working directory. Since Python is interpreted, just dropping the files in is good enough. Depending on the use case, we may have build steps defined in the Dockerfile as well.
CMD: Executes the command to run the server with the right entry point and host info, just like we ran it in the helloworld post.
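
One optional addition worth considering (not part of the original setup): a .dockerignore file keeps tests, git metadata, and caches out of the build context, which speeds up docker build and keeps the image lean. A minimal sketch:

```
.git
.gitignore
tests/
__pycache__/
*.pyc
README.md
```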

So, as we can see, the Dockerfile does the things we would have done manually. In that sense, it performs the role of build automation. Where it differentiates itself is that these steps aren’t performed on the machines (like an automation would do), but they are performed in the ‘docker build’ stage, to create a prebuilt image. This image is lightweight, has all dependencies wired up and ready to go, and can just be deployed and run as a container on our VM.

Now, time to get things going. We first build the image from the current directory. Optionally, we can try running it locally on the desktop (notice the port mapping: we can reach the service just by going to http://localhost on the desktop). Then we push the image to Docker Hub.

docker build -t <my_username>/helloworld .
docker run -d -p 80:8000 --name helloworld <my_username>/helloworld
docker push <my_username>/helloworld

In my case, the image’s compressed size was just 55MB, a far cry from the behemoth of a virtualized image.

Now that the image is up on the hub, we can deploy it on our VM instance and access it via http://vm-ip-addr/.

sudo docker login
sudo docker run -d -p 80:8000 --name helloworld <my_username>/helloworld
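
Once the container is up, it is worth verifying that the service actually responds. Here is a tiny stdlib-only smoke check, a sketch under the assumption that the 80:8000 port mapping above is in place; swap in http://vm-ip-addr/ for the remote case:

```python
from urllib.request import urlopen
from urllib.error import URLError


def smoke_check(url: str, timeout: float = 5.0) -> bool:
    """Return True if the service at `url` answers with HTTP 200."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except URLError:
        # Covers connection refused, DNS failures, and HTTP errors alike.
        return False


if __name__ == "__main__":
    # Hypothetical local check; replace with your VM's address as needed.
    print("healthy" if smoke_check("http://localhost/") else "unreachable")
```

Nothing fancy, but it is the kind of check that slots neatly into deployment automation later on.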

Summary

Tenet: Deployable
State: Better
Observation: By using containerization, we have achieved better deployability via docker images, paving the way for consistent environments across the production footprint, while keeping the service footprint small and nimble.

Combining microservices with containerization, we can achieve independent deployability, since each service is fully self-contained and by re-deploying a service, we will not be creating environmental / dependency-related issues for other services.

But what if the service itself is a dependency for another service? What if we break another service by changing the interface of our service? That’s a story for another day.


Creative Commons License

Unless otherwise specified, this work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.