Docker Installation & Management Guide#
1. Install Docker on Ubuntu#
apt-get install ca-certificates curl -y
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
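The echo above builds the apt source entry from the machine's architecture and Ubuntu codename. A sketch with mocked values showing the line that ends up in docker.list (amd64 and noble are assumptions for illustration; on a real host they come from dpkg and /etc/os-release):

```shell
# Mocked inputs
arch=amd64        # normally: $(dpkg --print-architecture)
codename=noble    # normally: $(. /etc/os-release && echo "$VERSION_CODENAME")

# The assembled source line
line="deb [arch=${arch} signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu ${codename} stable"
echo "$line"
```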
apt-get update && apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
usermod -aG docker marc
newgrp docker
systemctl enable docker.service && systemctl enable containerd.service
systemctl status docker.service
systemctl restart docker
systemctl stop docker
systemctl status docker

If on Ubuntu, edit the service file so the internal firewall is not overridden and so Docker does not create ghost files when SMB shares are not mounted (to deactivate, just remove the added text):
systemctl edit docker.service

Add the following under [Unit]:
[Unit]
After=remote-fs.target

Check Docker stats:
docker stats
docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}"
docker stats --no-stream
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}"

2. Images and Containers#
Running Containers
# Run first container by running an image
docker run hello-world
# Run container from an image with port mapping
docker run -p 80:80 nginx
# Run container from an image with a different port
docker run -p 5001:80 nginx
# Run container in the background
docker run -p 5001:80 -d nginx
# Run container with a specific name
docker run -p 5001:80 -d --name my-container-name nginx
# Run a container with the shell open
docker exec -it my-container-name /bin/bash
# Delete container automatically when stopped
docker run -p 5001:80 -d --rm --name my-container-name nginx

Managing Images
# Pull image without running it
docker pull hello-world
# Pull image with a tag
sudo docker pull hello-world:latest
# List images
docker image ls
docker image ls --digests
# Delete specific image
docker rmi hello-world
# Delete all images
docker image prune --all
docker rmi -f $(docker images -aq)
# Publish an image to Docker Hub
docker login
docker tag local-image-name username/repository
docker tag project1 marcoue/private:beta
docker push marcoue/private:beta

Managing Operations
# Check the logs of a running container
docker logs my-nginx
# Stop a running container
docker stop my-nginx
# Stop all running containers
docker stop $(docker ps -a -q)
# List running containers
docker ps
docker container ls --size
# List all containers
docker ps -a
# Delete a container
docker rm my-container-name
# Delete a running container
docker rm my-container-name --force
# Delete all stopped containers (you must also delete volumes to reset)
docker container prune
# Ping from docker
docker exec homepage ping 10.1.1.11
# Inspect container
docker inspect dockge
# Size of docker
docker system df -v

3. Docker Compose#
From the directory where the docker-compose.yml file is located:
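For reference, a minimal docker-compose.yml the commands below can operate on (a sketch only; image, container name, and ports are placeholders):

```yaml
services:
  web:
    image: nginx
    container_name: my-container-name
    ports:
      - "5001:80"
    restart: unless-stopped
```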
# Run a docker-compose file
docker compose up
# Run in the background
docker compose up -d
# View logs
docker compose logs -f
# Stop the container
docker compose stop
# Start a container that was already started but stopped
docker compose start
# Stop and delete the container
docker compose down
# Upgrade and restart
docker compose pull && docker compose up -d
docker compose up -d --force-recreate
docker compose up -d --build --force-recreate
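When the compose file reads variables from a .env file next to it, docker compose config prints the fully rendered file, which makes substitution mistakes easy to spot. A sketch of such a pair (names and values are placeholders):

```yaml
# docker-compose.yml — ${NGINX_PORT} is substituted from .env
# .env (hypothetical):
#   NGINX_PORT=5001
services:
  web:
    image: nginx
    ports:
      - "${NGINX_PORT}:80"
```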
# Check if .env file is OK
docker compose config

4. Volumes and Disk Usage#
# List volumes
docker volume ls
# Delete a volume
docker volume rm NAME
# Delete all unused volumes
docker volume rm $(docker volume ls -q --filter dangling=true)
# See space that can be reclaimed
docker system df
# Reclaim space (prune)
docker system prune -a --volumes
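Container logs can also be capped at the source so manual log cleanup is rarely needed. A sketch of /etc/docker/daemon.json for the default json-file driver (sizes are examples; restart Docker to apply, and it only affects newly created containers):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```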
docker volume prune

# Clean logs (restart Docker afterwards so handles on deleted files are released)
find /var/lib/docker/containers/ -type f -name "*.log" -delete

5. Networks#
docker network create proxy
docker network list
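Containers can join the proxy network from a compose file by declaring it external; a sketch (the service name and image are placeholders):

```yaml
services:
  app:
    image: nginx
    networks:
      - proxy
networks:
  proxy:
    external: true
```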
docker network rm proxy

6. Crowdsec#
# Unban IP
docker exec crowdsec cscli decisions delete -i 72.11.191.80

7. How to SSH into a Docker container#
docker exec -it cloudflared sh

8. Migrating Container to New Host#
# Determine the volumes to copy (docker commit below does not capture volume data; copy it separately)
docker inspect container-name
# Stop the container
docker stop container-name
docker ps
# Export container to an image that can be deployed to another host
docker ps -a
docker commit container-id new-container-name
# Verify that the image was created
docker images
# Export the image
docker save new-container-name > new-container.tar
# Locate the file and the container folder
pwd
ls -l
# Import the image in Docker on the new host
docker load -i new-container.tar
# Verify that the image is present
docker images

9. Create a cronjob to delete 1-day-old images#
nano /etc/cron.daily/docker-daily-image-prune

Add the following script:
#!/bin/bash
# $((1*24)) expands to 24, so this removes unused data older than 24 hours
docker system prune -af --filter "until=$((1*24))h"

Make the script executable, then test it:
chmod +x /etc/cron.daily/docker-daily-image-prune
run-parts /etc/cron.daily

10. Install and Scan Docker Images (Trivy)#
# Install Trivy
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin v0.48.3
# Save the specific image version to a file
docker save docker.io/fosrl/pangolin:1.15.1 -o pangolin_image.tar
# Scan the file for Critical/High vulnerabilities
trivy image --severity HIGH,CRITICAL --input pangolin_image.tar
# Clean up
rm pangolin_image.tar

11. Install iPerf3 in Docker#
# Install (pull the image from the host CLI)
docker pull mlabbe/iperf3
# Start Service
docker run --name=iperf3 -d --restart=unless-stopped -p 5201:5201/tcp -p 5201:5201/udp mlabbe/iperf3

Run the client on a remote machine against the Docker host's IP. On macOS, the app must be downloaded and dragged once into a Terminal window to set up the path:
/Applications/iperf3 -c 10.1.1.100

12. Teslamate Setup and Maintenance#
Install Teslamate with Bind Mounts (No Volumes)#
Create new folders:
mkdir -p /home/marc/docker-compose/teslamate/data/{import,teslamate-db,teslamate-grafana-data,mosquitto-conf,mosquitto-data}
# (Mac OS, from Teslamate folder)
mkdir -p data/{import,teslamate-db,teslamate-grafana-data,mosquitto-conf,mosquitto-data}

Change permissions (not needed on macOS):
sudo chown -R 1000:1000 /home/marc/docker-compose/teslamate/data/import
sudo chown -R 999:999 /home/marc/docker-compose/teslamate/data/teslamate-db
sudo chown -R 472:472 /home/marc/docker-compose/teslamate/data/teslamate-grafana-data
sudo chown -R 1883:1883 /home/marc/docker-compose/teslamate/data/mosquitto-data
sudo chown -R 1883:1883 /home/marc/docker-compose/teslamate/data/mosquitto-conf

Start and initialize:
cd /home/marc/docker-compose/teslamate/
docker compose up
# wait until everything is created
docker compose down
# Change permission once the DB is created
sudo chown -R 472:472 ./data/teslamate-grafana-data
docker compose up

Backup Teslamate#
cd /home/marc/docker-compose/teslamate/
docker compose exec -T database pg_dump -U teslamate teslamate > ./teslamate.bck
scp teslamate.bck marc@10.1.2.230:/home/marc/docker-compose/teslamate/

Restore Teslamate#
Stop the teslamate container to avoid write conflicts:
cd /home/marc/docker-compose/teslamate/
docker compose stop teslamate

Drop the existing data and reinitialize (replace the first teslamate in the command if you use a different TM_DB_USER):
docker compose exec -T database psql -U teslamate teslamate << .
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;
CREATE EXTENSION cube WITH SCHEMA public;
CREATE EXTENSION earthdistance WITH SCHEMA public;
.

Restore and restart:
docker compose exec -T database psql -U teslamate -d teslamate < teslamate.bck
docker compose start teslamate
rm teslamate.bck

Export DB for Teslamate#
Reference: Backup and Restore PostgreSQL Database using Docker
docker ps
# Note postgres:17 ID number
docker exec b1fab6a81f3a pg_dump -U teslamate -F t teslamate > mydb.tar
# Send DB to a new server
scp mydb.tar marc@10.1.2.230:/home/marc/
scp mydb.tar marc@10.1.2.4:/Users/marc/
rm mydb.tar

From the new server:
cd /home/marc/
docker ps
# Note postgres:17 ID number
docker cp mydb.tar bdc6d8f085e2:/
docker exec -it bdc6d8f085e2 /bin/bash
pg_restore --clean --verbose -U teslamate -d teslamate ./mydb.tar
# Exit container
ctrl-d
docker exec -it bdc6d8f085e2 psql -U teslamate
# exit
docker exec bdc6d8f085e2 rm /mydb.tar
cd docker-compose/teslamate/
docker compose down && docker compose up

Restart the container if it does not restart automatically.
13. Configure Nvidia for Docker#
Official NVIDIA Container Toolkit Guide
Install the prerequisites:
sudo apt-get update && sudo apt-get install -y --no-install-recommends \
curl \
gnupg2

Configure the production repository:
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
apt-get update

Install the NVIDIA Container Toolkit packages:
export NVIDIA_CONTAINER_TOOLKIT_VERSION=1.18.0-1
sudo apt-get install -y \
nvidia-container-toolkit=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
nvidia-container-toolkit-base=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
libnvidia-container-tools=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
libnvidia-container1=${NVIDIA_CONTAINER_TOOLKIT_VERSION}
sudo nvidia-ctk runtime configure --runtime=docker
systemctl restart docker
# Verify with a sample workload (from the official guide)
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

14. Miscellaneous Configurations#
Default vm.max_map_count
sysctl vm.max_map_count
# Default should read: vm.max_map_count = 1048576
# To change
sysctl -w vm.max_map_count=262144
# sysctl -w does not survive reboots; set vm.max_map_count in /etc/sysctl.conf to persist

Miscellaneous
docker logs homepage --tail=50

If the GPU is not seen in the container:
# WARNING: run from /var/lib/docker with Docker stopped; this deletes all image and container layers
rm -r overlay2/*
docker system prune -af
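Once the toolkit is configured (section 13), a Compose-managed service can also request the GPU explicitly instead of relying on docker run flags; a sketch (service name and image are placeholders):

```yaml
services:
  app:
    image: ubuntu
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```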