Container technology is not exactly new, but it is a big topic in IT. Many enterprise Linux distributions do their best to let you know that they have all the tools you need to be successful with container technology. If you want the official description and documentation, please see the Reference Articles at the end of this tutorial. I will use my own words to give you a brief description of Docker containers. I will also focus on the basics of Docker and docker-compose here, without going into more enterprise-oriented tools such as Kubernetes.
This tutorial is for all Linux users who have a basic understanding of terminal usage and virtualization, and would like to dip their toes into container technology. No Docker mastery is required; we are going to start from the very basics here.
PROs and CONs
I am going to start with some basic PROs and CONs of Docker and container technology.
Docker PROs
- Extremely light on system resources
- All needed libraries and other dependencies are in the Docker container
- Docker containers can easily access your host’s storage
- Very easy to back up, restore, move, destroy and recreate, while keeping all configurations and data intact
- Docker is very popular. There are many high quality pre-made images for a lot of software
Docker CONs
- Basic Docker concepts differ from traditional Virtual Machine technology, so they require some getting used to
- Added complexity, especially in the beginning, might drive users away from Docker and containers
- Not all pre-made Docker images are created equal. Stick to the official/popular ones that are well documented
Common Questions about Docker
Now that we have had a quick look at the PROs and CONs, it is time to move on and see what the common questions about Docker might be. I had a few when I started, so I took some notes, and I am going to go through those very same questions, hoping that by now I have found a decent answer to each of them.
- What is Docker?
- Why is everyone in IT making a big fuss about Docker and containers?
- Does this all matter to me, a regular Linux user? And if so, why should I use it?
- What can I do with Docker? Can I have a practical example?
What is Docker?
Instead of a dry description, let me use a practical example. Let’s start with KVM and Virtual Machines in general. Why do we use a lot of VMs? They allow our host machine to remain untouched and healthy, while at the same time allowing us to test, use, delete and play around with operating systems and applications as needed. No risk involved. Companies like VMs for similar reasons: they are easy to migrate to different hardware, to scale up and down, and to snapshot before changes and revert as needed.
So, in the real world, let’s say I need to install a specific application: I want Deluge to download and manage my BitTorrent downloads. But maybe Deluge is not available on my distribution, or requires some strange repository I really do not want to add. Now what? I create a VM with a distribution that officially supports Deluge, install it, assign a bridged network, configure IP addresses, harden the OS, make sure it stays up to date, and so on. That works, and in some cases it is still the best approach. However, if I want to do this with containers, what is the difference?
A container is like that VM that officially supports Deluge, but without the VM. Wait… what? Let’s take that VM we created for Deluge and remove all the parts we had to install just to provide the environment that makes Deluge run, keeping only the bare minimum Deluge needs.
In the specific case of Deluge, we keep the libraries, the web server and front end, and the Deluge app itself, and we create an image with those parts. This image is uploaded to Docker Hub, where anyone can easily pull it down as it is, pre-made. Wow, that’s pretty convenient. Someone put in a lot of effort to provide us with a pre-made container image. So now what do I have to do to use it?
First, we download the image. Then we build a container from the image we downloaded, plus our configurations. And if we did it right, there we are: we have our Deluge up and running on our host machine, but isolated from our OS, independent of our libraries, dependencies and so on. If we run a rolling release distribution that, let’s say, updates libraries too fast and deprecates what Deluge needs to run, we don’t really care. Everything Deluge needs is in the Docker container anyway. The same works the other way around: updating the Docker images and containers has zero impact on the host OS we are running.
Are you starting to like it? I bet so! And this is just the beginning. Our Docker configuration, in this case docker-compose, is just a .yml file written in a very readable format. Once you figure out the configuration you want to run on your host, you can back up that tiny file, and that allows you to redeploy the same Docker container on any machine, local or remote.
And what about the data and configurations inside the container? You no longer need to back up a 30 GB VM just to save Deluge’s tiny configuration and logs. With Docker containers, all you have to do is map a directory of your host to a directory of the container, and the container will write all its data there. So, backing up Docker containers is as easy as backing up one .yml file for the container “recipe”, and a few folders for all the configurations, logs, users, and anything else that is done inside the Deluge Docker container.
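As a small preview of what that mapping looks like (the full file is shown later in this tutorial), it is a single line in the volumes section of a docker-compose file, with the host path on the left and the container path on the right:
volumes:
  - /datastore/docker/containers/deluge/config:/config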
Why is everyone in IT making a big fuss about Docker and containers?
You can easily grasp now how cool Docker containers are. Scale that up to organizations that need many applications: different applications based on different distributions, with different needs in terms of libraries, versions of web servers, databases, and so on. Suddenly, instead of having to deal with dozens of Virtual Machines, with all the administrative tasks linked to those, I can just deploy a bunch of Docker images? That is very nice! Less worrying about OS hardening, or OS upgrades breaking my applications, and, last but not least, way less need for horsepower. Yes, running containers is far less resource intensive: I do not need a full operating system running just for a single app, only the few bits and pieces that make the app run. All the points mentioned make a big difference in any company’s IT budget.
Does this all matter to me, a regular Linux user? And if so, why should I use it?
If you only use Linux on the desktop, and have zero interest in servers and services, you probably don’t care too much about Docker and containers in general. Your best friends in this case are Flatpaks, Snaps, and AppImages.
But if you deal with some sort of infrastructure, servers, or services you want to keep running at home, then Docker will serve you well for the reasons mentioned above. You will enjoy a cheaper electricity bill, less heat, less hardware to run the services you need, and smaller backups.
What can I do with Docker? Can I have a practical example?
This tutorial is part of a few basic tutorials on virtualization and container technologies. I am writing these as a starting point for a series of articles that will teach you how you can build your own “Infrastructure In a Box”. As for a practical example, sure, let’s start with something simple and see how we can get a Docker container up and running.
Docker Example: Installation
First of all, we need to install our components. For this series of articles, I am running openSUSE Tumbleweed on my desktop. This is what I need:
sudo zypper in yast2-docker docker-compose
Then, we need to make sure the docker service starts automatically at boot. We can do this by starting the “YaST Services Manager”. Once started, look for the “docker” service, click on Start, and change “Start Mode” to “On Boot”. Click OK and confirm to apply the changes. Or, we can do it the terminal way:
sudo systemctl start docker
sudo systemctl enable docker
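As a side note, on most systemd-based distributions the two commands above can be combined into one:
sudo systemctl enable --now docker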
If you are running a different distribution, please check Reference Articles at the end of this tutorial, or your distribution documentation.
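For example, on Debian or Ubuntu based systems the installation would typically look something like this; treat it as a sketch, since package names vary between distributions and releases:
sudo apt install docker.io docker-compose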
Docker Example: Deluge Docker Image
Now that we have Docker installed and running, we should pick a small and easy image to start with. I like Deluge quite a bit for torrents: it is powerful, has a nice web interface, and a lot of very useful options. We will actually use it later in the “Infrastructure In A Box” project, so let’s go ahead and give it a try.
First, we need to go to Docker Hub and see if there is a good image for Deluge. Lucky for us, linuxserver has one, and their Docker images are usually very good: Deluge Image on Docker Hub. They were even nice enough to include detailed instructions on how to use it with docker-compose. Nice!
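If you prefer the terminal, you can also search Docker Hub directly with the docker CLI, although the detailed usage instructions still live on the Hub page:
sudo docker search deluge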
Now, we need to prepare our folder structure so that it all stays nice and tidy. I like to keep my configuration files separate from the container mapped volumes: I usually keep all the docker-compose and other image-related configurations in the /datastore/docker/dockercompose/ folder, and store all the container volumes under /datastore/docker/containers/. So let’s go ahead and create our folders.
sudo mkdir -p /datastore/docker/containers/deluge
sudo mkdir -p /datastore/docker/dockercompose/deluge
Time to create our docker-compose file. Move to the /datastore/docker/dockercompose/deluge/ directory and create a file called “docker-compose.yml”. In the file, paste the configuration found on the Docker Hub page:
version: "2.1"
services:
  deluge:
    image: linuxserver/deluge
    container_name: deluge
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - UMASK_SET=022 #optional
      - DELUGE_LOGLEVEL=error #optional
    volumes:
      - /path/to/deluge/config:/config
      - /path/to/your/downloads:/downloads
    restart: unless-stopped
Since we are building our first container based on docker-compose, let me stop for a minute and explain some of the basic configurations:
- image: linuxserver/deluge -> This tells Docker which image to download and update.
- container_name: deluge -> This is the container name, and you can change it as needed.
- network_mode: host -> This is the simplest network type: the container gets direct access to your host’s network, with no NAT or other complex networking. All we have to make sure is that the ports Deluge wants do not conflict with ports already in use on our host machine.
- TZ=Europe/London -> We want to change this to match our local time zone, in my case: Asia/Taipei
- volumes: -> Everything in the “volumes” section is important. This tells Docker where to store data in order to keep it persistent. Consider it a link between a folder on the host and a folder inside the container, so that whenever you delete the container and re-create it, the data stays safe, and the new container picks up all the configurations and data automatically. This is why we created the folder structure above.
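A quick note on PUID and PGID: the linuxserver images use these values to map the user inside the container to a user on your host, so that files written to the mapped volumes end up with the right ownership. A minimal way to find your own values is the id command; use the uid number for PUID and the gid number for PGID:
id $USER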
This is the end result of the docker-compose.yml file:
version: "2.1"
services:
  deluge:
    image: linuxserver/deluge
    container_name: deluge
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Taipei
      - UMASK_SET=022 #optional
      - DELUGE_LOGLEVEL=error #optional
    volumes:
      - /datastore/docker/containers/deluge/config:/config
      - /datastore/docker/containers/deluge/downloads:/downloads
    restart: unless-stopped
As we can see, there are two directories being mapped for this container. In order for the container to start correctly, we need to create those specific directories first:
sudo mkdir -p /datastore/docker/containers/deluge/{config,downloads}
Now we are ready to download the image, build our container, and start it. Let’s position ourselves in the correct folder:
cd /datastore/docker/dockercompose/deluge
Now, we download the images of all the containers included in the docker-compose.yml file. In this case it is only one, but there could be many images in a single docker-compose.yml file.
sudo docker-compose pull
You will see the images being downloaded and, at the end, a final report confirming the images were downloaded correctly: Pulling deluge … done
Next, we need to create and start the container. The command below will start the docker-compose project in detached mode, keeping our terminal free to run more commands.
sudo docker-compose up -d
Again, you should see a nice confirmation that the container has started: Starting deluge … done. Time to confirm it all works fine. We do this by running a simple docker command:
sudo docker ps
It seems like everything is working, so why don’t we go ahead and connect to Deluge’s web interface? The default port for the Deluge web UI is 8112, so we open our browser and point it to: http://localhost:8112
Victory! We can see the login page. The default password is “deluge”, and it is recommended to change it on first login. Once that is done, select your Deluge host and click Connect.
Go ahead and configure it as you wish, and you will be able to see that all the configurations are being saved to the /datastore/docker/containers/deluge/config directory. Your torrents will later be saved in the /datastore/docker/containers/deluge/downloads/ directory.
In order to have a full backup of your container, all you have to do is stop the container and back up those folders. If you break your container during an update, or if you need to move it to a different host, you can simply delete it, restore the docker-compose.yml file, restore the /datastore/docker/containers/deluge/ folder and its subfolders, re-download the images, and build the containers again; everything will come back, including all downloads and settings. Actually, why don’t we give it a try? Let’s break it!
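Before we do, here is a minimal sketch of what such a backup could look like, assuming the folder layout used in this tutorial and a simple tarball as the backup format:
sudo docker-compose -f /datastore/docker/dockercompose/deluge/docker-compose.yml stop
sudo tar -czf deluge-backup.tar.gz /datastore/docker/dockercompose/deluge /datastore/docker/containers/deluge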
Docker Break Test
How about we demonstrate Docker’s power with a simple test? I am going to delete my container, then re-create it, and show that it comes back with all configurations intact, as if nothing ever happened. First, we check if the container is running:
sudo docker ps
It seems our container is still alive. Now, we proceed to stop it. Note: with the command below, we can control our docker-compose projects from any directory we are currently in, thanks to the -f flag pointing at the docker-compose.yml file.
sudo docker-compose -f /datastore/docker/dockercompose/deluge/docker-compose.yml stop
Now that the container is stopped, we can remove it by running docker system prune.
sudo docker system prune
We see this big warning, but in this case it is exactly what I want to demonstrate:
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all dangling build cache
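As a side note, if you only want to remove this one stopped container instead of pruning everything, the plain docker rm command works too; prune is simply convenient for our demonstration:
sudo docker rm deluge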
So, now that we have blasted our Docker container out of the system, what do we do? Simple: we go ahead and start our docker-compose project again. I am not kidding, that is all we have to do.
sudo docker-compose -f /datastore/docker/dockercompose/deluge/docker-compose.yml up -d
Let’s give it a few seconds, then we open our web console: http://localhost:8112 How do we know it retained our data? The easiest way is to try to log in. I changed the password right after the first installation of Deluge, so since Deluge accepts my custom password, this is proof that all configurations were restored. That was really easy, wasn’t it?
Docker tags, and a quick note on the default “latest” tag
By default, most Docker images will use the latest tag. This might sound very useful, because it makes updating really easy: any time you want to run an update, all you have to do is stop the docker-compose project, pull, and start it again. However, thinking about this a bit more, it poses a problem. Let me give you a scenario:
Let’s say you have your Deluge up and running, configured with all your preferences, and you host many Linux distribution images to help various projects ease up on their bandwidth. It took you quite a while to set it all up with your preferences and security, and to download all those ISO files. Then, you want to keep this all backed up in case your server breaks, or you need to migrate to new hardware. Easy, right? As I previously wrote, just back up the /datastore/docker folder recursively, and you are good to go.
Let’s think about this for a moment. In order to do this, you move your /datastore/docker folder to a new machine. Then you run docker-compose pull to get the image and… ah, yes, here comes the issue. With the default latest tag, your pull will almost certainly get a newer version of the image. In the case of Deluge, it is very unlikely to cause any issue. However, for larger and more complex docker-compose projects it will be a problem: you won’t have an easy way to know which versions of all the components were pulled back when everything was working. You will end up with different versions of databases, applications, proxies, and so on, and it will cause you a headache. Eventually, you will be able to fix it all, but it will be far from the painless backup and restore we want it to be.
So, how do we avoid this? Actually, it is quite easy. We go to Docker Hub, search for our image, go to the Tags page, and instead of leaving the default, we specify the version we want. This way we have a specific version in our docker-compose file, and when we restore our containers, there will be no surprises. Also, by doing so, we can control exactly if and when we want to run updates on our applications.
Let me show you how to do that. First, let’s have a look at our image on Docker Hub, by opening this link. Then, we change our docker-compose to match the specific version we want to use. That’s it! You can refer to the example below for our final docker-compose file.
version: "2.1"
services:
  deluge:
    image: linuxserver/deluge:version-2.0.3-2201906121747ubuntu18.04.1
    container_name: deluge
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Taipei
      - UMASK_SET=022 #optional
      - DELUGE_LOGLEVEL=error #optional
    volumes:
      - /datastore/docker/containers/deluge/config:/config
      - /datastore/docker/containers/deluge/downloads:/downloads
    restart: unless-stopped
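With the version pinned, updating becomes an explicit decision: edit the tag in docker-compose.yml to the new version you want, then pull and restart with the same commands we have already used, from the project directory:
sudo docker-compose pull
sudo docker-compose up -d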
Docker commands
There are a lot of very useful docker and docker-compose commands we can use. Let’s start with a few basic ones. You can try them out and see what they do.
docker-compose pull will download all images defined inside a docker-compose.yml file. This could include a lot of images for complex projects.
sudo docker-compose pull
docker-compose build is used in more complex projects that have additional Dockerfiles and other configurations we need to pass to our containers.
sudo docker-compose build
docker-compose up is used to bring up our containers. The -d flag means detached, freeing our console for more commands.
sudo docker-compose up -d
docker-compose stop is used to stop all containers included in a docker-compose file.
sudo docker-compose stop
docker ps is quite nice: it prints out a list of running Docker containers and shows us a few useful things, such as IDs, image names, status, and mapped ports.
sudo docker ps
docker stats is more like the “top” program, but for Docker containers:
sudo docker stats
docker logs, followed by the container name, is very useful to check what is happening inside a Docker container. Note that you can always use tab to auto-complete.
sudo docker logs deluge
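If you want to follow the log output live, docker logs also accepts the -f (follow) flag, much like tail -f:
sudo docker logs -f deluge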
docker exec is used to execute commands inside Docker containers. One of the most basic uses is getting a bash shell inside a container:
sudo docker exec -u root -it deluge /bin/bash
Conclusion
This concludes our First Steps with Docker tutorial. I hope it was informative and easy to follow. This article is also part of a larger series that I’m writing, so be sure to check out my other articles for more information.
Also, please note that we are just dipping our toes into the world of containers. There are many amazing tools and technologies based on containers, Kubernetes being one example, and I encourage everyone to read more about them and to experiment further.
Last but not least, please remember that open source projects are made by passionate people and volunteers, and they most likely need help. And yes, you can help even if you are not a developer. If you are asking yourself how you can help, here are a few things you can do, and remember: even a little kindness goes a long way:
- Let the project team know how much you love their work
- Reach out and ask if the project needs any help
- Spread some love, become an advocate
- Buy some merchandise, join as patron, or make a donation