docker
Updated: September 28, 2024
Docker creates lightweight containers that run processes. Some may call them VMs, but containers share the host kernel and are much lighter. Best Practices is a must read!!
Table of Contents
- Install
- Setup
- Arguments
- Images
- Containers
- Tags
- Networking
- Port Mapping
- Volume Mapping
- Environment Variables
- Namespaces
- Dockerfile
- Docker Compose
- Docker Swarm
- SSH
Install
docs.docker.com covers many installation methods, so just follow the instructions there. Arch/Manjaro have it in pacman; I like to use yay. Enterprise Editions have their own installs!
yay docker # favorite aur manager on arch/manjaro :p
Once docker is installed, you will likely need to set up the daemon so its services run.
sudo systemctl start docker # This will start the docker service
sudo systemctl enable docker # This adds service to launch on bootup
If systemctl is not used in the distro, try service instead (note it has no enable subcommand; use your init system's own mechanism to start docker on boot):
sudo service docker start
Setup
Just like after you install git, you should likewise set up user and group information. Docker is owned by user root and will normally require sudo for all of its commands. We can create a group called docker so we do not have to sudo every time.
sudo groupadd docker # creates the docker group (may already exist)
sudo usermod -aG docker $USER # adds your user to the docker group
# -a = append; -G = supplementary group(s) to add the user to
newgrp docker # activate the change in the current shell (or log out and back in)
docker network ls # test that it works without sudo
Ubuntu
# Remove older versions of docker
sudo apt remove docker docker-engine docker.io containerd runc
# Update Sync Libraries
sudo apt update
# Packages to allow apt to use repos over HTTPS
sudo apt install apt-transport-https ca-certificates curl \
gnupg-agent \
software-properties-common
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Setup Stable Repo
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
# Update package index again
sudo apt update
# Install Docker and Containerd
sudo apt install docker-ce docker-ce-cli containerd.io
# Check install
sudo docker info
# Add docker to sudo for user
sudo usermod -aG docker <username>
Arguments
run # start a container
ps # list containers; use -a to see all; -s to see size
stop # stop container; use container ID or name
attach # attach to running container; use container ID or name
inspect # get specific container info; use container ID or name (json)
rm # remove a container
images # lists all images available
rmi # remove images; no containers can be running off this image (stop && rm them first)
pull # download an image
logs # view container logs, even if in detached mode
exec # execute a command; docker exec cont_name command
network # manage docker networks (e.g. network ls to view them)
swarm # used for node/container orchestration
stack # deploy and manage a stack of services on a swarm
-d # detached mode, runs docker in background
-i # get input for stdin (without t, not attached to container terminal)
-t # adds pseudo-terminal; commonly used -it for interactive terminal
-v # attach a volume
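The lifecycle commands above chain together like this; a quick sketch (the container name web is made up, and a running docker daemon is assumed):

```shell
docker run -itd --name web nginx   # start nginx detached with a pseudo-terminal
docker ps -a                       # list all containers, running or not
docker exec web ls /               # run a command inside the container
docker logs web                    # view its logs even though it is detached
docker stop web && docker rm web   # stop, then remove
```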
Images
An image is a package template used to create one or more containers.
Containers
Containers are running instances of images that have their own isolated set of processes, network, and filesystem.
Tags
Tags are often used to select a different version of a program. Supported tags for an image are listed on Docker Hub.
docker run redis:3.0 # runs redis version 3.0 instead of the latest tag
Networking
Bridge Network (Default)
A bridge network is a link-layer (L2) device which forwards traffic between network segments, in hardware or software, typically using MAC addresses and ARP. Being L2, it switches traffic rather than routing it.
Docker uses a software bridge. Containers are connected to the default bridge. It is recommended to use User-Defined Bridge instead of the default.
- Docker0 is the virtual bridge interface and acts as the network, usually named bridge, which is also its network type (driver).
- Containers can reach other containers only by IP address on the default bridge or host networks.
- Containers get their own MAC address, IP address, and veth pair on the docker0 network.
- Containers have access to the internet but by default cannot serve to the internet.
- The default bridge can't isolate containers from each other.
- It can't do name resolution.
- It can't run services without opening up ports.
| Default Bridge | User-Defined Bridge |
|---|---|
| Access by IP only | Also by Name or Alias |
| Isolated to Docker | Isolated to Specific Network |
| All containers have same config | Each bridge given own set of rules |
| Linked containers can share env variables | No link but uses other ways to share env variables |
Use the --network= flag to assign a container to a network. If it is not used, the container automatically joins the default bridge network. Containers can also have no network using --network=none. Containers may also have no isolation and be placed on the host network using --network=host; this does not need the -p flag to publish a port from docker. You cannot spawn multiple copies of the same container on the host network since the port is common to all containers.
# To see the list of networks run
docker network ls
User-Defined Bridge (Connect containers to a vhost network)
- Isolates from docker0 and host networks
- Has DNS resolution (resolves containers by name)
- Used 90% of the time
- Good for services that are not needed across all devices
docker network create <custom-name>
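A sketch of the user-defined bridge workflow (network and container names here are made up, and a running docker daemon is assumed); DNS resolution lets one container reach the other by name:

```shell
docker network create appnet                       # user-defined bridge network
docker run -itd --rm --network appnet --name web nginx
docker run -itd --rm --network appnet --name client busybox
docker exec client ping -c 1 web                   # resolves "web" via docker's built-in DNS
docker network rm appnet                           # cleanup (stop the containers first)
```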
Host Network (Connect containers directly to the host)
- shares the host's network stack, so containers keep the host's ip address
- perfect for pi-hole or wireguard
- no dns or dhcp; if containers need their own ip addresses, set up either a macVLAN or ipVLAN instead
- don't need to expose any ports
- has no isolation from the host
MacVLAN Bridge (Connect containers directly to the network)
- essentially a bridge network attached to a switch.
- as if eth interfaces were connected directly to the switch port.
- creates different MAC and IP addresses for every container on the network.
- some switch ports will not handle multiple MAC addresses on one port.
- requires promiscuous mode.
- no DHCP.
Some applications expect to be directly connected to the physical network. The macvlan network driver can assign a MAC address to each container's virtual network interface, giving the appearance of being connected to the physical network. So what is needed?
- Designate a physical interface on the docker host
- Create a subnet and gateway of the macvlan
- Network interface needs to be in promiscuous mode on the host: `sudo ip link set enp0s3 promisc on`
Be careful not to assign too many unique MAC addresses in the network (VLAN spread).
Generally it is better to use a bridge or overlay network in the long run. MacVLAN is really only needed for legacy apps.
docker network create -d macvlan --subnet 192.168.0.0/24 --gateway 192.168.0.1 --ip-range 192.168.0.253/32 -o parent=enp0s3 <customname>
docker run -itd --rm --network <customname> --ip 192.168.0.92 --name styges busybox # creating a container needs --ip and --name
MacVLAN 802.1q
- can specify a sub-interface like eth0.20, eth0.30, etc.
- docker then creates sub-interface networks eth0.20, eth0.30, forming a trunk
docker network create -d macvlan --subnet 192.168.20.0/24 --gateway 192.168.20.1 -o parent=enp0s3.20 macvlan20
Have to have trunking setup for this to work.
ipVLAN L2 (Default)
- like a MacVLAN but doesn't need promiscuous mode.
- the host shares its MAC address with all the containers.
docker network create -d ipvlan --subnet 192.168.20.0/24 --gateway 192.168.20.1 -o parent=enp0s3.20 ipvlan20
docker run -itd --rm --network ipvlan20 --ip 192.168.20.92 --name styges busybox # creating a container needs --ip and --name
ipVLAN L3 (No more switching or arp)
- connects containers to the host as if the host were a router.
- eliminates broadcast traffic (best practice).
- do not specify a gateway; the host's gateway is assumed.
- all subnets must be created at the time of L3 network creation.
- can only be isolated by separate network interfaces (all subnets communicate with each other).
docker network create -d ipvlan --subnet 192.168.44.0/24 -o parent=enp0s3 -o ipvlan_mode=l3 --subnet 192.168.54.0/24 allspark
docker run -itd --rm --network allspark --ip 192.168.44.4 --name styges busybox # creating container needs --ip and --name
Add a static route for each subnet on your router (UniFi or whatever you use): Name: Allspark | Destination Network: 192.168.44.0/24 | Next Hop: IP of the host
Overlay Network (Connects multiple containers on multiple hosts - k8s | Docker Swarm)
- pretty complicated
None Network
- already created
- container then has no networking
Port Mapping
When a container is run, by default it is placed on the docker bridge network. It typically gets an internal IP like 172.17.x.x and a port it listens on (let's say 5000), so the internal address is 172.17.x.x:5000. To connect to myApp through a browser, a host network port needs to be connected to the container's port on the bridge network: container in bridge - 172.17.x.x:5000, host network device - 10.0.0.13:80.
docker run -p 80:5000 username/myApp
This connects a port on the docker host to a port on a container inside it. You can even spawn multiple instances of the same container and give them different host ports.
docker run -p 3306:3306 mysql
docker run -p 9306:3306 mysql
Each instance of mysql will use the same container port but have a different IP. You cannot map the same host port more than once, so each new instance needs a new docker host port.
Volume Mapping
For running a database, the files are stored in /var/lib/mysql inside the file system of the container. That means if you stop and rm the container, all the data in that container is gone as well. Enter volume mapping, which allows data to persist! You mount a directory on the docker host to a folder inside the container with the -v flag.
docker run -v /opt/datadir:/var/lib/mysql mysql
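Docker-managed named volumes are an alternative to host-path mounts; a sketch (the volume name mysqldata and container name db are made up, and a running docker daemon is assumed):

```shell
docker volume create mysqldata       # docker picks and manages the storage location
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v mysqldata:/var/lib/mysql mysql  # same -v flag, volume name instead of a path
docker volume inspect mysqldata      # shows the mountpoint on the host
```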
Environment Variables
Converting static variables into environment variables (according to the language used) allows different values to be set at run time using docker run -e <envvar>=<value> <image>. Environment variables on containers that are already running can be found using docker inspect <container> and are listed in the JSON output under Env.
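A sketch of passing and then reading back an environment variable (APP_COLOR is a made-up variable name; a running docker daemon is assumed):

```shell
docker run -d --name web -e APP_COLOR=blue nginx
docker inspect web | grep -A 3 '"Env"'   # APP_COLOR=blue appears under Env
docker stop web && docker rm web
```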
Namespaces
Dockerfile
Create an image
Inside a dockerfile, the words that are all caps are instructions while the stuff after them are arguments. Only the instructions RUN, COPY, and ADD create layers. Other instructions create temporary intermediate images and do not increase the size of the build.
# INSTRUCTIONS
FROM # The base image for building a new image. This command must be on top of the dockerfile.
MAINTAINER # Optional and deprecated (use LABEL maintainer=... instead); contains the name of the maintainer of the image.
LABEL # Adds metadata to an image.
RUN # Used to execute a command during the build process of the docker image.
ADD # Copy a file from the host machine to the new docker image. There is an option to use a URL for the file, docker will then download that file to the destination directory.
COPY # Copies new files or directories from <src> to fs of the image at <dest>.
ARG # Defines a variable that users can use with `docker build` using `--build-arg <var>=<value>` flag.
ENV # Define an environment variable.
CMD # Provides the default command and/or arguments for containers built from the image; overridden by arguments to docker run.
ONBUILD # Add image instruction to trigger at later time as a base for another build.
ENTRYPOINT # Define the default command that will be executed when the container is running.
USER # Set the user or UID for the container created with the image.
VOLUME # Enable access/linked directory between the container and the host machine.
EXPOSE # Informs the container to listen on specific ports (does not make it accessible to the host - see port mapping)
WORKDIR # Sets the working directory for any `RUN, CMD, ENTRYPOINT, COPY, and ADD` instructions that follow it.
STOPSIGNAL # Set the system call signal sent to the container to exit.
HEALTHCHECK # Check container health by running a command inside the container.
SHELL # Allows other shells to be used (zsh, tcsh, ksh) instead of the default (/bin/sh -c on Linux)
FROM ubuntu # base os image of the container - every dockerfile must start with a FROM instruction
RUN apt update
RUN apt install -y golang # install the programs and dependencies wanted
COPY . /opt/source-code
ENTRYPOINT ["go", "run", "/opt/source-code/app.go"]
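CMD and ENTRYPOINT interact: ENTRYPOINT fixes the executable while CMD supplies default arguments that docker run arguments override. A minimal sketch:

```dockerfile
FROM alpine
# ENTRYPOINT is the fixed executable; CMD is its default argument
ENTRYPOINT ["sleep"]
CMD ["5"]
# docker run <image>      -> runs: sleep 5
# docker run <image> 10   -> runs: sleep 10
```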
Docker Compose {#compose} (Deprecated - the standalone docker-compose binary has since been superseded by the docker compose plugin)
# Linux
sudo curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose # apply executable permissions
docker-compose --version # test the installation
Docker-Compose on a single host is useful only for testing and development. Both Docker Swarm and Docker-Compose have the following similarities:
- They both take YAML formatted definitions of your application stack.
- They are both meant to deal with multi-container applications (microservices)
- They both have a scale parameter that allows you to run multiple containers of the same image allowing your microservice to scale horizontally.
It is possible with Docker 1.13+ to deploy a compose file to a swarm (no compose install needed):
docker stack deploy --compose-file docker-compose.yml <stack-name>
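A minimal docker-compose.yml sketch that both docker-compose and a swarm stack deploy can consume (service names and images here are arbitrary):

```yaml
version: "3"
services:
  web:
    image: nginx
    ports:
      - "80:80"      # host:container, same as docker run -p 80:80
    depends_on:
      - cache
  cache:
    image: redis
```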
Docker Swarm
docker swarm init # creates and makes current node a manager of the swarm cluster
docker node ls # list all the nodes in a swarm
docker service ps <service> # see how many instances (replicas) of a particular service are running (older versions used docker service tasks)
SSH
Use SSH to access private data in builds. This is a Dockerfile for using SSH in the container.
FROM alpine
# Install ssh and git
RUN apk add --no-cache openssh-client git
# Download public key for gitlab
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
# RUN --mount=type=ssh git clone git@gitlab.com:myorg/myproject.git myproject
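Building with the ssh mount needs BuildKit and an ssh agent on the host to forward; a sketch:

```shell
# BuildKit is required for --mount=type=ssh (the default builder in recent docker)
# "default" forwards the host's running ssh-agent socket into the build
DOCKER_BUILDKIT=1 docker build --ssh default .
```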