Let’s talk about some of the best practices you should follow while using containers.
Containerization is widely used across organizations to deploy applications inside containers. Containers are popular because they are very lightweight. To get the most out of containers, you should follow some best practices while working with them.
Use Stable Base Image
Thanks to Docker, creating container images has never been simpler.
Specify your base image, add your changes, and build your container. While this is great for getting started, using the default base images can lead to large images full of security vulnerabilities. Also, avoid the `latest` tag: it is a moving target, so the image you pull tomorrow may differ from the one you tested today and can introduce unexpected bugs.
Most Docker images use Debian or Ubuntu as their base image. These bases are very helpful for compatibility and easy onboarding, but they can add hundreds of megabytes of additional overhead to your container.
For example, simple Node.js and Go “hello world” images built on these defaults can be around 700 megabytes, while the application itself is probably only a few megabytes in size. All this additional overhead is wasted space and a great hiding place for security vulnerabilities and bugs.
If your programming language or stack doesn’t have an option for a small base image, you can build your container using raw Alpine Linux as a starting point. This also gives you complete control over what goes inside your containers.
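As a sketch, a minimal image built on raw Alpine Linux might look like this (the binary name `hello` and the Alpine version are assumptions for illustration):

```dockerfile
# Start from raw Alpine Linux for full control over what goes in the image
FROM alpine:3.18

# Add only what the application actually needs
RUN apk add --no-cache ca-certificates

# Copy in a statically compiled binary (hypothetical name)
COPY hello /usr/local/bin/hello

ENTRYPOINT ["/usr/local/bin/hello"]
```

Because nothing else ships in the image, there is very little surface area left for vulnerabilities to hide in.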
Keep Container Images Smaller
Using smaller base images is probably the easiest way to reduce your container size.
Chances are the language or stack you are using provides an official image that’s much smaller than the default one. For example, let’s take a look at the Node.js container: going from the default node:latest to node:14-alpine reduces the base image size by almost ten times.
In the new Dockerfile, the container starts from the node:14-alpine image, creates a directory for the code, installs dependencies with npm, and finally starts the Node.js server. With this update, the resulting container is almost ten times smaller.
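A sketch of that Dockerfile, assuming the application’s entry point is a file named `server.js`:

```dockerfile
FROM node:14-alpine

# Create a directory for the code
WORKDIR /usr/src/app

# Install dependencies with npm
COPY package*.json ./
RUN npm ci --only=production

# Copy the application code and start the Node.js server
COPY . .
CMD ["node", "server.js"]
```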
You can make the container even lighter by using the builder pattern. With interpreted languages, the source code is sent to an interpreter and executed directly. But with a compiled language, the source code is turned into compiled code beforehand.
Now, with compiled languages, the compilation step often requires tools that are not needed to run the code. This means you can remove those tools from the final container completely. To do this, you can use the builder pattern: a first container builds the code, and the compiled code is then packaged into the final container without all the compilers and tools required to build it.
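The builder pattern maps directly onto a multi-stage Dockerfile. This hypothetical Go example keeps the toolchain in the first stage only:

```dockerfile
# Stage 1: build the code with the full compiler toolchain
FROM golang:1.21-alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /bin/app .

# Stage 2: package only the compiled binary; no compilers or build tools ship
FROM alpine:3.18
COPY --from=builder /bin/app /bin/app
ENTRYPOINT ["/bin/app"]
```

Only the final stage becomes the shipped image, so everything installed in the builder stage is discarded automatically.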
Using small base images and the builder pattern are great ways to create much smaller containers without a lot of work.
Tag your Container Images
Docker tagging is an exceptionally powerful tool when it comes to managing your images. It helps you manage different versions of a Docker image. Below is an example of building a Docker image with the tag v1.0.1:
docker build -t geekflare/ubuntu:v1.0.1 .
Now, there are two types of tags commonly used: stable tags and unique tags.
Use stable tags for maintaining the base image of the container. Avoid using these tags for deployment containers because these tags will receive updates frequently, and it can lead to inconsistencies in the production environment.
Use unique tags for deployments. Using unique tags, you can scale your production cluster to many nodes with ease. It avoids inconsistencies, and hosts will not pull any other docker image version.
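For example, the two kinds of tags might be used like this (the image name and the build-ID suffix are illustrative conventions, not fixed rules):

```shell
# Stable tag: reused across builds of the same release line, fine for base images
docker build -t geekflare/ubuntu:v1 .

# Unique tag: identifies one exact build, suitable for deployments
docker build -t geekflare/ubuntu:v1.0.1-build42 .
docker push geekflare/ubuntu:v1.0.1-build42
```

Because the unique tag never changes what it points to, every node that pulls it gets the same image.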
Also, as a good practice, you should lock deployed image tags by setting write-enable to false (many registries support locking a tag or marking it read-only). This prevents the deployed image from being removed or overwritten in the registry by mistake.
Secure Your Containers
Below are the fundamental points for making sure a container is secure.
- Verify the authenticity of any software you install in your container.
- Use signed Docker images, or images with a valid checksum.
- Make sure the URL uses HTTPS if you’re pulling from a third-party repository.
- Import the right GPG keys before using your package manager to update packages.
- Never run your applications as root. Always use the USER directive inside the Dockerfile to drop your user’s privileges.
- Do not run SSH inside your container.
- Make the filesystem read-only.
- Use namespaces to split up your cluster.
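For instance, dropping root privileges with the USER directive might look like this (the user and group names are assumptions):

```dockerfile
FROM alpine:3.18

# Create an unprivileged user and group
RUN addgroup -S app && adduser -S app -G app

# All subsequent instructions and the running container use this user
USER app
```

A read-only filesystem can then be enforced at run time, for example with `docker run --read-only --tmpfs /tmp <image>`.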
The Center for Internet Security (CIS) provides a Docker Benchmark to evaluate the security of a Docker container. CIS also maintains an open-source script called Docker Bench for Security, which you can run to check how secure your Docker host and containers are.
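One way to run Docker Bench for Security, following the project’s README, is to clone the repository and execute the script on the Docker host:

```shell
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh
```

The script prints a pass/warn result for each CIS Benchmark check so you can see which recommendations your setup violates.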
One Application Per Container
Virtual machines are pretty good at running multiple things in parallel, but when it comes to containers, you should run a single application inside one container. For example, if you are running a MEAN application in a containerized environment, then it should have one container for MongoDB, one container for Express.js, one container for Angular, and one container for Node.js.
Although a container can run multiple applications in parallel, doing so means you cannot take full advantage of the container model. Below is a correct and wrong representation of running applications in a container.
Containers are designed to have the same lifecycle as the application they run: when the container starts, the application starts, and when the container stops, the application stops.
Run Stateless Containers
Containers are fundamentally designed to be stateless: the persistent data that captures the application’s state is stored outside the container. Files can be stored in an object store such as cloud storage, user-session information can be kept in a low-latency database such as Redis, and external disks can be attached for block-level storage.
By keeping the storage outside of the container, you can easily shut down or destroy a container without the fear of losing any data.
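As a sketch, keeping state outside the container might look like this (the image and volume names are hypothetical):

```shell
# Attach a named volume so files survive container restarts
docker run -d -v appdata:/var/lib/app/data myapp:v1.0.1

# Keep user-session state in Redis, running as its own container
docker run -d --name session-store redis:7-alpine
```

Destroying either application container loses nothing: the files live in the volume, and the sessions live in Redis.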
If you use stateless containers, it’s very easy to migrate or scale them as business needs change.
The above are some of the most important practices to follow while working with containers. If you are building a Docker production environment, check out how to secure it.