Container Fundamentals And Container Management
Index
- Docker Fundamentals and Container Management
- Docker Volume
- Docker Network
- Publish Wordpress with MySQL using Docker Network
- Dockerizing a Node Project
- Docker Tag and Push
- Dockerizing Maven Project With Plugin
- Docker Multi-Stage Build Example
- Up And Running Simple WordPress With Docker-Compose
- Docker Swarm
- Docker Service Who-Am-I Example
- Docker Stack
- Docker Secret
- Docker Swarm Limit Resources
Docker Fundamentals and Container Management
- First, install Docker. You can check your version with the "docker --version" command.
We will use the docker command without sudo. If you need to use sudo, please check https://docs.docker.com/engine/install/linux-postinstall/.
docker --version
- Let's run a simple app using Docker. Run the "Hello World" Docker app with the "docker run hello-world" command.
docker run hello-world
- If everything is done correctly, you should see output in your terminal similar to what is displayed below.
- You can list your downloaded images with the following command.
docker image ls
- You can list your running containers with the "docker container ls" command. If you want to display all of your containers (including stopped ones), add the --all or -a option.
docker container ls -a
- In order to remove an image, you must first remove the containers using that image. You can remove a container with the "docker container rm <container-id>" or "docker container rm <container-name>" command.
docker container rm 2aa5
- You can also type just a unique prefix of the ID instead of the full ID.
- Now you can remove your image with the "docker image rm <image-id>" command.
docker image rm fce2
- We will run Alpine in one of our containers. Let's search for the Alpine image in the registry with the following command. Use "docker search <image-name>" to search for the image you want.
docker search alpine
- When you run a container with a specific image, Docker will automatically download that image from the registry if it cannot find the image locally. Alternatively, you can download it explicitly with "docker pull <image-name>".
docker run alpine
- As you can see, your running container list is empty. Alpine exited automatically when it finished its job, because the alpine image's default command is sh. That means that, when run in the background with no terminal attached, the shell exits immediately.
- To keep your container up and running, you can pass a long-running command when starting the container.
docker run -d alpine /bin/sh -c "while true; do echo Hello World; sleep 10; done"
- Check that the container is up. If everything is done correctly, you should see output similar to what is displayed below.
- In order to run your container with an attached interactive terminal, run the container with the "-i -t" options.
docker run -i -t alpine
- If you want to stop the process, just type exit.
- Check a container's logs with the "docker container logs <container-id>" command. You can use the "-f" option if you want to follow the logs live.
docker container logs e4f
docker container logs e4f -f
This is the same command, but with -f it follows the log output live.
- You can gracefully stop a container with the "docker container stop <container-id>" command, or you can force-stop a container with the "docker container kill <container-id>" command.
docker container kill dd2e
Docker Volume
- Create a volume using the "docker volume create <volume-name>" command.
- List your volumes with the following command.
docker volume ls
- You can inspect your volume information with "docker volume inspect <volume-name>".
docker volume inspect eteration
- In order to run a container with a specific volume, you can use the -v option with your docker run command.
docker run -dit --name <container-name> -v <volume-name>:<destination> <image-name>
docker run -dit --name eterationtest -v eteration:/app alpine:latest
- Let's check the container
docker ps
- Inspect your container and look for the "Mounts" section to verify that your volume was successfully created and mounted correctly.
docker container inspect <container-name>
docker container inspect eterationtest
- Attach your running container and inspect its files.
docker attach <container-name>
docker attach eterationtest
- Create an eteration.txt file in the app folder.
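One way to do this from the attached shell (the content matches the transcript shown further below):
/ # echo "Eteration Test" > /app/eteration.txt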
- Exit your container and delete it.
If you get an error like this: docker: Error response from daemon: Conflict. The container name "/eterationtest" is already in use by container "c2bfa17197d763294830d6f807b187617a6c3c0c7e560385d71a0dca054bb440". You have to remove (or rename) that container to be able to reuse that name.
docker container rm <container-name> -f
-f = force remove (also removes a running container).
- Start it again to see that your volume preserved the app folder.
alihan@alihan:~$ docker start eterationtest
eterationtest
alihan@alihan:~$ docker attach eterationtest
/ # ls
app bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
/ # cd app/
/app # ls
eteration.txt
/app # cat eteration.txt
Eteration Test
/app # exit
Clean Up
docker container prune
docker image prune
docker volume prune
Docker Network
- Run your container in detached mode, mapping container port 80 to a random port on the host.
-d = detached mode, -p = publish a port (port-forward)
- docker run -d -p <port-number> <image-name>
docker run -d -p 80 nginx
- Assign a specific port (8080) on the host to a container port and run your container.
docker run -d -p 8080:80 nginx
- Let's check localhost:8080.
- If everything is done correctly, you should see a page similar to the one below.
- List your running networks.
docker network ls
- So let's create a new bridge network and list your networks to see it.
docker network create --driver bridge eteration-network
- Check the network list again
docker network ls
- Inspect your new network. You’ll see the “Containers” object is empty because it’s not a default bridge and no containers are connected to it.
docker inspect eteration-network
- Create two new containers in detach mode and connect them to your user-defined network.
docker run -dit --name eteration1 --network eteration-network alpine
docker run -dit --name eteration2 --network eteration-network alpine
- Check the container.
alihan@alihan:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
40eadc667f7e alpine "/bin/sh" 58 seconds ago Up 56 seconds eteration2
8ebc6a4f9df9 alpine "/bin/sh" About a minute ago Up About a minute eteration1
- Inspect your network again. You’ll see that two containers you just created are now connected to this network.
docker network inspect eteration-network
- Attach your container named eteration1 and try to ping your container named eteration2 by its name.
docker container attach eteration1
/ # ping -c 5 eteration2
- Now create a container outside this network, which will connect to the default bridge network.
alihan@alihan:~$ docker run -dit --name eteration3 alpine
7baf474a4a883e364f4d93acf69f89239d9d18c93039cc1d6501401cb8892ab4
- Attach the container you just created. Try to ping a container which is connected to your user-defined network. You won’t be able to get a response for this request.
docker container attach eteration3
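For example, trying to reach eteration1 by name from the default bridge network will fail:
/ # ping -c 5 eteration1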
Publish Wordpress with MySQL using Docker Network
- First, we need to log in to Docker with 'docker login'. After that, install the MySQL database using the official mysql Docker image.
docker run --name mysql -e MYSQL_ROOT_PASSWORD=12345 -d mysql
To explain this command: 'run' starts a container from the given image; if the image has already been pulled to the system, Docker uses the local copy, otherwise the image is pulled from the hub. We name the container with '--name', pass the MySQL root password to the container as an environment variable with the -e option, and let the container run in detached mode with -d. Finally, we give the name of the image to be downloaded as a parameter.
- Run docker container ls to check that the container was successfully created and is running. If it is, let's add a new database for our WordPress by executing mysql in the container. When the system prompts "Enter password:", give the MYSQL_ROOT_PASSWORD configured previously.
docker exec -it mysql mysql -u root -p
At the mysql> prompt:
create database wordpress;
- Just like setting up mysql, we can use the official wordpress image to get WordPress up and running.
docker run --name wordpress -p 8080:80 -d wordpress
It is pretty much the same as when we ran mysql; the difference is that here we have -p 8080:80, which tells Docker to set up a port mapping. Our machine's port 8080 will be forwarded to the container's port 80. Now we can access our WordPress by opening http://localhost:8080 in the browser and completing the setup wizard there.
However, we will encounter an "error establishing a database connection" if we use localhost as our Database Host. That is because our MySQL database is inside a separate Docker container that is not reachable from the wordpress container.
To fix it, we need to create a new Docker network and attach it to both of our containers.
docker network create --attachable wordpress-network
The above command tells docker to create a new network named wordpress-network that can be manually attached to containers. After the network is created, we can connect it to the containers.
docker network connect wordpress-network mysql
docker network connect wordpress-network wordpress
Now, let’s go back to the wordpress wizard in our browser and set ' mysql ' as the Database Host. We should be able to continue the wizard and have Wordpress up and running.
Wordpress is UP now !
Dockerizing a Node Project
- Create a directory named simple-node-app and change into it.
mkdir simple-node-app
cd simple-node-app
- So let's create a main.js file. This file is the main entrypoint of the Node application.
nano main.js
- Add these lines to the main.js file and save it (Ctrl+X -> Y -> Enter).
var http = require("http");
http.createServer(function (request, response) {
response.writeHead(200, {'Content-Type': 'text/plain'});
response.end('Hello World\n');
}).listen(8081);
console.log('Server running at http://127.0.0.1:8081/');
- Create a file named Dockerfile
touch Dockerfile
- So let's find the Node image on Docker Hub.
You will need to be signed in to Docker Hub for this step.
- Type "node" in the search bar and click the official image.
If you click Tags, you will see the available versions of Node (we will use the lts-alpine3.9 version).
- So let's continue: copy these lines into the Dockerfile and save it (Ctrl+X -> Y -> Enter).
nano Dockerfile
FROM node:lts-alpine3.9
COPY main.js main.js
CMD ["node","main.js"]
- So let's build the image from the Dockerfile with the following command.
docker build -t my-node-app .
- Show the newly built image with the following command.
docker image ls
- Let's run my-node-app with the following command.
docker run -it -p 8081:8081 my-node-app
- Check the result at http://127.0.0.1:8081/
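You can also check from another terminal; given the main.js above, the response should be "Hello World":
curl http://127.0.0.1:8081/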
Docker Tag and Push
- After we successfully build the project, we may want to reuse it for future projects, so we can push the image to Docker Hub.
- First, we need a Docker Hub account from https://hub.docker.com/
- After completing registration, use 'docker login' in the terminal with your Docker Hub credentials.
Note that your username can be seen at the top right on hub.docker.com.
- Before pushing the image to Docker Hub, we need to tag it with the proper username/repository combination so that the hub can match our repository with the tag.
docker tag my-node-app ysnyldrm0/my-node-app
Note that if we don't specify a version, like ysnyldrm0/my-node-app:version_number, Docker will tag it as latest.
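For example, to tag an explicit version (the version number 1.0 here is only an example):
docker tag my-node-app ysnyldrm0/my-node-app:1.0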
- Now we can push this package to DockerHub.
docker push ysnyldrm0/my-node-app
- As we can see, Docker used the latest tag by default and the image was pushed to Docker Hub successfully.
Clean Up
docker kill $(docker ps -q)
Dockerizing Maven Project With Plugin
- We will use the product-service from the previous hands-on (it can be found in the setup folder).
- Launch Eclipse EE and click Import Project.
- Import the generated project as an Existing Maven Project, as shown below.
- Click Browse and select the extracted project (product-service in the setup folder).
- If everything is done correctly, you should have a folder structure similar to the one displayed below.
- Let's add the following code block to the pom.xml file
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <configuration>
    <images>
      <image>
        <name>mabilgin/product-service:latest</name>
        <build>
          <!-- filter>@</filter -->
          <contextDir>${project.basedir}/src/main/docker</contextDir>
          <assembly>
            <descriptorRef>artifact</descriptorRef>
          </assembly>
        </build>
      </image>
    </images>
  </configuration>
</plugin>
- Create a file named Dockerfile
Right-click product-service -> New -> File
- So let's continue: copy these lines into the Dockerfile.
FROM openjdk:11-jre-slim
RUN apt-get update
RUN apt-get install wget -y
ADD target/product-service-*.jar app.jar
ENTRYPOINT ["java","-jar","app.jar"]
EXPOSE 8080
- Open the terminal and go to the product-service directory. Then run the following command.
mvn package
After that
docker build -t product-service .
- The expected output looks like this.
alihan@alihan:~/git/academy/microservices-docker/docker-fundamentals/setup/product-service$ docker build -t product-service .
Sending build context to Docker daemon 23.91MB
Step 1/6 : FROM openjdk:11-jre-slim
11-jre-slim: Pulling from library/openjdk
54fec2fa59d0: Pull complete
b7dd01647a92: Pull complete
793cbc6f8a59: Pull complete
50a0e9985dcd: Pull complete
Digest: sha256:678022f7c59cae7d1afddfc800356611bdedf1030d94cfc503a60a0757d97b79
Status: Downloaded newer image for openjdk:11-jre-slim
---> 62358ed2899c
Step 2/6 : RUN apt-get update
---> Running in 96ddcaf0b39a
Get:1 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
Get:2 http://deb.debian.org/debian buster InRelease [122 kB]
Get:3 http://deb.debian.org/debian buster-updates InRelease [49.3 kB]
Get:4 http://security.debian.org/debian-security buster/updates/main amd64 Packages [189 kB]
Get:5 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
Get:6 http://deb.debian.org/debian buster-updates/main amd64 Packages [7380 B]
Fetched 8339 kB in 3s (2495 kB/s)
Reading package lists...
Removing intermediate container 96ddcaf0b39a
---> f67afbf8e12b
Step 3/6 : RUN apt-get install wget -y
---> Running in 51d63982f086
Reading package lists...
Building dependency tree...
Reading state information...
The following package was automatically installed and is no longer required:
lsb-base
Use 'apt autoremove' to remove it.
The following additional packages will be installed:
libpcre2-8-0 libpsl5 publicsuffix
The following NEW packages will be installed:
libpcre2-8-0 libpsl5 publicsuffix wget
0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
Need to get 1285 kB of archives.
After this operation, 4328 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian buster/main amd64 libpcre2-8-0 amd64 10.32-5 [213 kB]
Get:2 http://deb.debian.org/debian buster/main amd64 libpsl5 amd64 0.20.2-2 [53.7 kB]
Get:3 http://deb.debian.org/debian buster/main amd64 wget amd64 1.20.1-1.1 [902 kB]
Get:4 http://deb.debian.org/debian buster/main amd64 publicsuffix all 20190415.1030-1 [116 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 1285 kB in 1s (2511 kB/s)
Selecting previously unselected package libpcre2-8-0:amd64.
(Reading database ... 6888 files and directories currently installed.)
Preparing to unpack .../libpcre2-8-0_10.32-5_amd64.deb ...
Unpacking libpcre2-8-0:amd64 (10.32-5) ...
Selecting previously unselected package libpsl5:amd64.
Preparing to unpack .../libpsl5_0.20.2-2_amd64.deb ...
Unpacking libpsl5:amd64 (0.20.2-2) ...
Selecting previously unselected package wget.
Preparing to unpack .../wget_1.20.1-1.1_amd64.deb ...
Unpacking wget (1.20.1-1.1) ...
Selecting previously unselected package publicsuffix.
Preparing to unpack .../publicsuffix_20190415.1030-1_all.deb ...
Unpacking publicsuffix (20190415.1030-1) ...
Setting up libpsl5:amd64 (0.20.2-2) ...
Setting up libpcre2-8-0:amd64 (10.32-5) ...
Setting up publicsuffix (20190415.1030-1) ...
Setting up wget (1.20.1-1.1) ...
Processing triggers for libc-bin (2.28-10) ...
Removing intermediate container 51d63982f086
---> bb2ac7eac48a
Step 4/6 : ADD target/product-service-0.0.1-SNAPSHOT.jar app.jar
---> 174f7d394257
Step 5/6 : ENTRYPOINT ["java","-jar","app.jar"]
---> Running in bdaa21440d26
Removing intermediate container bdaa21440d26
---> 242cd0ed1567
Step 6/6 : EXPOSE 8080
---> Running in 08b7b916fe1b
Removing intermediate container 08b7b916fe1b
---> c0fe149a8dbf
Successfully built c0fe149a8dbf
Successfully tagged product-service:latest
This step creates a product-service image using the Dockerfile. You can check the image by running the "docker images" command:
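docker images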
- So let's run the image with the following command.
docker run -p 8080:8080 -t product-service
- Check the result. If everything is done correctly, you should see output similar to this.
Docker Multi-Stage Build Example
First of all, I want to give information about the multi-stage build. Multi-stage builds are a method of organizing a Dockerfile to minimize the size of the final container, improve run time performance, allow for better organization of Docker commands and files, and provide a standardized method of running build actions.
A multi-stage build is done by creating different sections of a Dockerfile, each referencing a different base image. This allows a multi-stage build to fulfill a function previously filled by using multiple docker files, copying files between containers, or running different pipelines.
- Let's do an example; we will use hello-c-docker (it can be found in the setup folder).
- Open a terminal and go to the hello-c-docker directory. Create a Dockerfile and copy these lines into it.
touch Dockerfile
# Full SDK version (built and discarded)
FROM alpine:3.5 AS build
RUN apk update && \
apk add --update alpine-sdk
RUN mkdir /app
WORKDIR /app
ADD hello.c /app
RUN mkdir bin
RUN gcc -Wall hello.c -o bin/hello
# Lightweight image returned as final product
FROM alpine:3.5
COPY --from=build /app/bin/hello /app/hello
CMD /app/hello
COPY and ADD are both Dockerfile instructions that serve similar purposes. They let you copy files from a particular place into a Docker image.
- COPY takes a src and a destination. It only lets you copy a local file or directory from your host into the Docker image itself.
- ADD lets you do that too, but it also supports two other sources. First, you can use a URL instead of a local file/directory. Second, you can extract a tar file from the source directly into the destination.
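A minimal Dockerfile sketch illustrating the difference (the file names and URL are hypothetical):
# COPY: only copies a local file or directory into the image
COPY config.json /app/config.json
# ADD: can also fetch from a URL...
ADD https://example.com/somefile.txt /app/somefile.txt
# ADD: ...or auto-extract a local tar archive into the destination
ADD archive.tar.gz /app/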
- So let's build the image using the Dockerfile.
docker build -t hello-c-docker .
- The expected output looks like this.
alihan@alihan:~/git/academy/microservices-docker/03-container-technology-and-docker/setup/hello-c-docker$ docker build -t hello-c-docker .
Sending build context to Docker daemon 3.072kB
Step 1/10 : FROM alpine:3.5 AS build
3.5: Pulling from library/alpine
8cae0e1ac61c: Pull complete
Digest: sha256:66952b313e51c3bd1987d7c4ddf5dba9bc0fb6e524eed2448fa660246b3e76ec
Status: Downloaded newer image for alpine:3.5
---> f80194ae2e0c
Step 2/10 : RUN apk update && apk add --update alpine-sdk
---> Running in c36e795ec9aa
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz
v3.5.3-40-g389d0b359a [http://dl-cdn.alpinelinux.org/alpine/v3.5/main]
v3.5.3-40-g389d0b359a [http://dl-cdn.alpinelinux.org/alpine/v3.5/community]
OK: 7963 distinct packages available
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz
(1/57) Installing fakeroot (1.21-r1)
(2/57) Installing sudo (1.8.19_p1-r0)
(3/57) Installing libcap (2.25-r1)
(4/57) Installing pax-utils (1.1.6-r0)
(5/57) Installing libressl (2.4.4-r0)
(6/57) Installing libattr (2.4.47-r4)
(7/57) Installing attr (2.4.47-r4)
(8/57) Installing tar (1.29-r1)
(9/57) Installing pkgconf (1.0.2-r0)
(10/57) Installing patch (2.7.5-r3)
(11/57) Installing ca-certificates (20161130-r1)
(12/57) Installing libssh2 (1.7.0-r2)
(13/57) Installing libcurl (7.61.1-r1)
(14/57) Installing curl (7.61.1-r1)
(15/57) Installing abuild (2.29.0-r2)
Executing abuild-2.29.0-r2.pre-install
(16/57) Installing binutils-libs (2.27-r1)
(17/57) Installing binutils (2.27-r1)
(18/57) Installing gmp (6.1.1-r0)
(19/57) Installing isl (0.17.1-r0)
(20/57) Installing libgomp (6.2.1-r1)
(21/57) Installing libatomic (6.2.1-r1)
(22/57) Installing libgcc (6.2.1-r1)
(23/57) Installing mpfr3 (3.1.5-r0)
(24/57) Installing mpc1 (1.0.3-r0)
(25/57) Installing libstdc++ (6.2.1-r1)
(26/57) Installing gcc (6.2.1-r1)
(27/57) Installing make (4.2.1-r0)
(28/57) Installing musl-dev (1.1.15-r8)
(29/57) Installing libc-dev (0.7-r1)
(30/57) Installing fortify-headers (0.8-r0)
(31/57) Installing g++ (6.2.1-r1)
(32/57) Installing build-base (0.4-r1)
(33/57) Installing expat (2.2.0-r1)
(34/57) Installing pcre (8.39-r0)
(35/57) Installing git (2.11.3-r2)
(36/57) Installing xz-libs (5.2.2-r1)
(37/57) Installing lzo (2.09-r1)
(38/57) Installing squashfs-tools (4.3-r3)
(39/57) Installing libburn (1.4.6-r0)
(40/57) Installing ncurses-terminfo-base (6.0_p20171125-r1)
(41/57) Installing ncurses-terminfo (6.0_p20171125-r1)
(42/57) Installing ncurses-libs (6.0_p20171125-r1)
(43/57) Installing libedit (20150325.3.1-r3)
(44/57) Installing libacl (2.2.52-r2)
(45/57) Installing libisofs (1.4.6-r0)
(46/57) Installing libisoburn (1.4.6-r0)
(47/57) Installing xorriso (1.4.6-r0)
(48/57) Installing acct (6.6.2-r0)
(49/57) Installing lddtree (1.25-r2)
(50/57) Installing libuuid (2.28.2-r1)
(51/57) Installing libblkid (2.28.2-r1)
(52/57) Installing device-mapper-libs (2.02.168-r3)
(53/57) Installing cryptsetup-libs (1.7.2-r1)
(54/57) Installing kmod (23-r1)
(55/57) Installing mkinitfs (3.0.9-r1)
Executing mkinitfs-3.0.9-r1.post-install
(56/57) Installing mtools (4.0.18-r1)
(57/57) Installing alpine-sdk (0.5-r0)
Executing busybox-1.25.1-r2.trigger
Executing ca-certificates-20161130-r1.trigger
OK: 192 MiB in 68 packages
Removing intermediate container c36e795ec9aa
---> 3d4b0839a0d3
Step 3/10 : RUN mkdir /app
---> Running in 9f84b7af06d1
Removing intermediate container 9f84b7af06d1
---> bc66767b29d2
Step 4/10 : WORKDIR /app
---> Running in b612d5a615f1
Removing intermediate container b612d5a615f1
---> 38e594374fab
Step 5/10 : ADD hello.c /app
---> f427d554b834
Step 6/10 : RUN mkdir bin
---> Running in bd90c33732d0
Removing intermediate container bd90c33732d0
---> 5b55a3a14b01
Step 7/10 : RUN gcc -Wall hello.c -o bin/hello
---> Running in b7e2d94e76b6
Removing intermediate container b7e2d94e76b6
---> 491aa4598677
Step 8/10 : FROM alpine:3.5
---> f80194ae2e0c
Step 9/10 : COPY --from=build /app/bin/hello /app/hello
---> 95e2b66732c2
Step 10/10 : CMD /app/hello
---> Running in 4b4702916d29
Removing intermediate container 4b4702916d29
---> d5cf4642685f
Successfully built d5cf4642685f
Successfully tagged hello-c-docker:latest
As you can see, the first (intermediate) docker image is 184MB, but the resulting image (hello-c-docker) is only 4.01MB.
Up And Running Simple WordPress With Docker-Compose
A Compose file is a YAML file defining services, networks, and volumes. It can define dependencies, like DNS and DBs, and may reference multiple Dockerfiles.
- So let's create wordpress.yaml
touch wordpress.yaml
- Write the following code in wordpress.yaml
version: '3'
networks:
  frontend:
  backend:
volumes:
  db_data: {}
  wordpress_data: {}
- Then add the services section and its configuration to wordpress.yaml.
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpressPASS
    networks:
      - backend
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      - wordpress_data:/var/www/html/wp-content
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpressPASS
      WORDPRESS_DB_NAME: wordpress
    ports:
      - 8090:80
    networks:
      - frontend
      - backend
- The resulting wordpress.yaml is shown here:
version: '3'
networks:
  frontend:
  backend:
volumes:
  db_data: {}
  wordpress_data: {}
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpressPASS
    networks:
      - backend
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      - wordpress_data:/var/www/html/wp-content
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpressPASS
      WORDPRESS_DB_NAME: wordpress
    ports:
      - 8090:80
    networks:
      - frontend
      - backend
- After that, install docker-compose. (This part is only for Linux users.)
sudo apt install docker-compose -y
- Finally, run the docker-compose command in the directory containing wordpress.yaml.
docker-compose -f wordpress.yaml up
- Check the result http://localhost:8090/
- CTRL+C stops the process.
- The following command brings the compose stack down.
docker-compose -f wordpress.yaml down
Docker Swarm
- First, check whether Docker Swarm is active.
docker info
- So let's start Docker Swarm with the following command.
docker swarm init
- So let's check the nodes.
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
k60jqez17frw96211h43p5kbv * docker-desktop Ready Active Leader 20.10.2
- Now check the Docker networks. We will see that a special overlay network has been set up for the swarm.
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
6e58b6f4b797 bridge bridge local
8913a9267597 docker_gwbridge bridge local
e2379cb846a6 host host local
e528bc3560b7 none null local
q69d7zvu0qbn ingress overlay swarm
- Here, we create a service named etr from the Alpine image that constantly pings 8.8.8.8 (Google's public DNS).
$ docker service create -d --name etr alpine ping 8.8.8.8
v59zkssb2wj8g82bljcu8gbkq
If you’ve already got a multi-node swarm running, keep in mind that all docker stack and docker service commands must be run from a manager node.
- Check the services
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
v59zkssb2wj8 etr replicated 1/1 alpine:latest
- Now let's have a look at our container in the docker service.
$ docker service ps etr
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
wrzbh9v1ober etr.1 alpine:latest docker-desktop Running Running 3 minutes ago
- We can examine our container in more detail by using the docker container ls command, or view the container's logs by using docker container logs a561.
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a561f2c94851 alpine:latest "ping 8.8.8.8" 8 minutes ago Up 8 minutes etr.1.wrzbh9v1ober6du9yiua7aj7x
- If the alpine container in this service stops working, the work we are actually doing will be interrupted. Using the scaling and load-distribution features of Swarm, we can run this task on more than one container in order to achieve a more manageable cluster structure.
- We create 2 more replicas of this container by using docker service update.
$ docker service update etr --replicas 3
etr
overall progress: 3 out of 3 tasks
1/3: running [==================================================>]
2/3: running [==================================================>]
3/3: running [==================================================>]
verify: Service converged
- Let's check the container
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
59e03639bf8c alpine:latest "ping 8.8.8.8" 3 minutes ago Up 3 minutes etr.3.l01yxr6in20wtlnz869a05xxa
c1c88203a61f alpine:latest "ping 8.8.8.8" 3 minutes ago Up 3 minutes etr.2.kig3bk9g1guda5rab7rtl9qe7
a561f2c94851 alpine:latest "ping 8.8.8.8" 10 minutes ago Up 10 minutes etr.1.wrzbh9v1ober6du9yiua7aj7x
- Let's delete one of the containers that we created and observe that the swarm re-creates it.
$ docker container rm -f 59e
59e
- We can see the difference by looking at the container status.
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
91a87fc46c1d alpine:latest "ping 8.8.8.8" 26 seconds ago Up 20 seconds etr.3.q2qt9zz9ls0y1l7z6ft177sgk
c1c88203a61f alpine:latest "ping 8.8.8.8" 9 minutes ago Up 9 minutes etr.2.kig3bk9g1guda5rab7rtl9qe7
a561f2c94851 alpine:latest "ping 8.8.8.8" 16 minutes ago Up 16 minutes etr.1.wrzbh9v1ober6du9yiua7aj7x
Clean Up
$ docker service rm etr
Docker Service Who-Am-I Example
- The whoami image is a simple HTTP Docker service that prints its container ID. Let's create three replicas of this image.
$ docker service create --name who-am-I --publish 8000:8000 --replicas 3 training/whoami:latest
- Check the containers created for the service.
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c8fe5b1e8068 training/whoami:latest "/app/http" 30 seconds ago Up 28 seconds 8000/tcp who-am-I.1.e0mrtvmoq5s1nc8jubljxpqr5
84699ce0d327 training/whoami:latest "/app/http" 30 seconds ago Up 28 seconds 8000/tcp who-am-I.2.2fk6d9ksm5ymt2u9xomsx6qma
9dbf996c78ed training/whoami:latest "/app/http" 30 seconds ago Up 28 seconds 8000/tcp who-am-I.3.cq3j5v9s5ukvsk3o9330fbwkq
- Now let's make a few curl requests and see the results. As you can see, the Docker Swarm service routes each request to a different container.
$ curl http://127.0.0.1:8000/
I'm c8fe5b1e8068
$ curl http://127.0.0.1:8000/
I'm 84699ce0d327
$ curl http://127.0.0.1:8000/
I'm 9dbf996c78ed
$ curl http://127.0.0.1:8000/
I'm c8fe5b1e8068
$ curl http://127.0.0.1:8000/
I'm 84699ce0d327
$ curl http://127.0.0.1:8000/
I'm 9dbf996c78ed
- Remove the docker service
docker service rm who-am-I
Docker Stack
- Let's create the stack by using docker stack deploy.
$ docker stack deploy --compose-file wordpress.yaml wp
Creating network wp_backend
Creating network wp_frontend
Creating service wp_db
Creating service wp_wordpress
- The last argument (wp) is a name for the stack. Each network, volume, and service name is prefixed with the stack name.
- Check that it’s running with docker stack services wp
$ docker stack services wp
ID NAME MODE REPLICAS IMAGE PORTS
j0eijlj05bmi wp_db replicated 1/1 mysql:5.7
q4794gqa23wd wp_wordpress replicated 1/1 wordpress:latest *:8090->80/tcp
- Once it’s running, you should see 1/1 under REPLICAS for both services. This might take some time if you have a multi-node swarm, as images need to be pulled.
- And also check the stack list
Alihans-MacBook-Pro:solution alihanbilgin$ docker stack ls
NAME SERVICES ORCHESTRATOR
wp 2 Swarm
- Check the network list; we will see the stack's overlay networks.
Alihans-MacBook-Pro:solution alihanbilgin$ docker network ls
NETWORK ID NAME DRIVER SCOPE
6e58b6f4b797 bridge bridge local
8913a9267597 docker_gwbridge bridge local
e2379cb846a6 host host local
e528bc3560b7 none null local
yd08lthx69ss ingress overlay swarm
apjuqr1v755a wp_backend overlay swarm
pzo9x45gjivd wp_frontend overlay swarm
- Check the result http://localhost:8090/
- Bring the stack down by using docker stack rm.
docker stack rm wp
Docker Secret
- I will talk about two different ways to create a docker secret. First, go to the solution directory and run the following command. The first example creates a docker secret from a file, and we will continue with this secret.
$ docker secret create etr-secret ./my-secret.txt
z7ovs0j5f8wxrk22pysbk7klm
Note: Another way to create a docker secret is to pipe the value from printf, as in the following command.
$ printf "Etr Top Secret" | docker secret create etr_secret -
- Be careful: Docker secrets only work with Docker Swarm.
- Create an nginx service and grant it access to the secret. By default, the container can access the secret at /run/secrets/<secret_name>, but you can customize the file name in the container using the target option (see the sketch after the following output).
$ docker service create --name etr --secret etr-secret nginx:latest
158wq76hyuio5a0fhr8nodake
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
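For example, to mount the same secret under a custom file name (the service name etr-custom and target name my-secret.conf are only examples):
docker service create --name etr-custom --secret source=etr-secret,target=my-secret.conf nginx:latest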
- You can then use docker container exec to connect to the container and read the contents of the secret file, which by default is readable by all and has the same name as the secret.
$ docker container exec $(docker ps --filter name=etr -q) ls -l /run/secrets
total 4
-r--r--r-- 1 root root 20 Feb 16 12:05 etr-secret
$ docker container exec $(docker ps --filter name=etr -q) cat /run/secrets/etr-secret
Eteration top secret
Clean Up
$ docker secret rm etr-secret
$ docker service rm etr
Docker Swarm Limit Resources
CPU and memory are each a resource type. A resource type has a base unit. CPU is specified in units of cores, and memory is specified in units of bytes.
- Let's create a simple nginx service with limited resources.
$ docker service create --limit-cpu=2 --limit-memory=4G --name=limit-example nginx
- Check the container
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8789ac2823ef nginx:latest "/docker-entrypoint.…" 12 seconds ago Up 12 seconds 80/tcp limit-example.1.tc16jumdmu8twl0f6hmpanmhp
- So let's inspect the nginx service we have created. As you can see, the nginx service we created has taken the limits we have set for it.
$ docker service inspect limit-example
.
.
.
"Resources": {
"Limits": {
"NanoCPUs": 2000000000,
"MemoryBytes": 4294967296
}
.
.
.
Clean Up
docker service rm limit-example
docker swarm leave --force