Installing Docker, Docker Compose, and the Portainer container management tool

Docker official website

Overview

Docker is an open platform for developing, publishing and running applications. Docker decouples applications from infrastructure so that software can be delivered quickly. With Docker, infrastructure can be managed like applications. By leveraging Docker's method of quickly delivering, testing, and deploying code, you can significantly reduce the delay between writing your code and running it in production.

Docker platform

Docker provides the ability to package and run applications in loosely isolated environments called containers. Isolation and security allow you to run multiple containers concurrently on a given host. Containers are lightweight and contain everything needed to run an application, so you don't depend on what's currently installed on the host. You can easily share containers as you work and ensure that everyone you share with gets the same container that works the same way.

Docker provides tools and a platform to manage the lifecycle of containers:

  • Use containers to develop your application and its supporting components.
  • Containers become the unit for distributing and testing applications.
  • When ready, deploy the application into production, either as a container or as an orchestrated service. This is the same whether your production environment is an on-premises data center, a cloud provider, or a mix of both.
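The lifecycle above can be sketched with the core CLI commands. This is a hedged sketch: the image name example/app and the local Dockerfile are hypothetical placeholders, and the block only runs when a Docker CLI and a Dockerfile are actually present.

```shell
# Sketch of the develop -> test -> distribute cycle described above.
# "example/app" and the local Dockerfile are hypothetical placeholders.
if command -v docker >/dev/null 2>&1 && [ -f Dockerfile ]; then
  docker build -t example/app:1.0 .   # develop: package the app into an image
  docker run --rm example/app:1.0     # test: run the image as a container
  docker push example/app:1.0         # distribute: publish to a registry
fi
```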

1. Docker installation

1. Install Docker Desktop on Ubuntu

DEB package

1) Prerequisites

To successfully install Docker Desktop, you must:

  • meet system requirements
  • Have a 64-bit version of Ubuntu Jammy Jellyfish 22.04 (LTS) or Ubuntu Impish Indri 21.10. Docker Desktop supports the x86_64 (or amd64) architecture.
  • For non-Gnome desktop environments, you must install  gnome-terminal:
 sudo apt install gnome-terminal
  • Uninstall a technical preview or beta version of Docker Desktop for Linux. Run the following command:
 sudo apt remove docker-desktop

To clean up completely, delete the $HOME/.docker/desktop configuration and data files, remove the /usr/local/bin/com.docker.cli symlink, and purge the remaining systemd service files by running the following commands:

 rm -r $HOME/.docker/desktop
 sudo rm /usr/local/bin/com.docker.cli
 sudo apt purge docker-desktop

Note: If you have installed a Docker Desktop for Linux technical preview or beta, you will need to remove any files generated by these packages, such as

~/.config/systemd/user/docker-desktop.service
~/.local/share/systemd/user/docker-desktop.service

2) Install Docker Desktop

Recommended way to install Docker Desktop on Ubuntu:

  1. Set up Docker's package repository.

  2. Download the latest DEB package.

  3. Install the package using apt as follows:

sudo apt-get update
sudo apt-get install ./docker-desktop-<version>-<arch>.deb

NOTE: At the end of the installation process, apt displays an error because it installs a locally downloaded package. You can ignore this error message.

N: Download is performed unsandboxed as root, as file '/home/user/Downloads/docker-desktop.deb' couldn't be accessed by user '_apt'. - pkgAcquire::Run (13: Permission denied)

Several configuration steps are completed after installation by the install script included in the DEB package. The install script:

  • Sets the capabilities on the Docker Desktop binary to map privileged ports and set resource limits.
  • Adds a DNS name for Kubernetes to /etc/hosts.
  • Creates a link from /usr/bin/docker to /usr/local/bin/com.docker.cli.
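A quick way to confirm these effects after installing (read-only checks; no output simply means Docker Desktop is not installed):

```shell
# Informational checks for the install-script effects listed above.
ls -l /usr/bin/docker /usr/local/bin/com.docker.cli 2>/dev/null
grep -i kubernetes /etc/hosts 2>/dev/null
true  # absence of output just means Docker Desktop is not installed
```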

Install Docker Engine on Ubuntu (set up Docker's package repository)

To get started with Docker Engine on Ubuntu, make sure  you meet the prerequisites , then  install Docker .

Prerequisites:

Operating system requirements

To install Docker Engine, you need a 64-bit version of one of the following Ubuntu distributions:

  • Ubuntu Jammy 22.04 (LTS)
  • Ubuntu Impish 21.10
  • Ubuntu Focal 20.04 (LTS)
  • Ubuntu Bionic 18.04 (LTS)

Docker Engine supports the x86_64 (or amd64), armhf, arm64, and s390x architectures.

Uninstall old versions

Older versions of Docker were shipped as docker, docker.io, or docker-engine. If these are installed, uninstall them:

$ sudo apt-get remove docker docker-engine docker.io containerd runc

If apt-get reports that none of these packages are installed, that's fine.

The contents of /var/lib/docker/, including images, containers, volumes, and networks, are preserved. If you don't need to keep existing data and want to start with a fresh install, see the section on uninstalling Docker Engine at the bottom of this page.

Installation methods:

You can install Docker Engine in different ways according to your needs:

  • Most users  set up Docker's repository and install from it to facilitate installation and upgrade tasks. This is the recommended method.

  • Some users download the DEB package and  install it manually, managing upgrades entirely manually. This is useful in the case of installing Docker on an air-gapped system without internet access.

  • In test and development environments, some users choose  to install Docker using automated convenience scripts .

1) Install using the repository

Before installing Docker Engine on a new host for the first time, you need to set up a Docker repository. After that, you can install and update Docker from the repository.

Set up the repository

1. Update the apt package index and install packages to allow apt to use a repository over HTTPS:

$ sudo apt-get update

$ sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

2. Add Docker's official GPG key:

$ sudo mkdir -p /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

3. Use the following command to set up the repository:

$ echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
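Optionally, after running sudo apt-get update, you can confirm that apt now sees docker-ce candidates from the new repository (a read-only check; it prints nothing if the repository has not been added yet):

```shell
# Read-only check: list the docker-ce candidates apt knows about.
if command -v apt-cache >/dev/null 2>&1; then
  apt-cache policy docker-ce
fi
```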

Install Docker Engine

  1. Update the apt package index, and install the latest version of Docker Engine, containerd, and Docker Compose, or go to the next step to install a specific version:

    $ sudo apt-get update
    $ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin

    Receiving GPG errors when running apt-get update?

    Your default umask may be incorrectly configured, preventing detection of the repository's public key file. Run the following command, then try updating your repository again:

    sudo chmod a+r /etc/apt/keyrings/docker.gpg
  2. To install a specific version of Docker Engine, list the available versions in the repo, then select and install:

    a. List the versions available in your repository:

    $ apt-cache madison docker-ce
    
    docker-ce | 5:20.10.16~3-0~ubuntu-jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages
    docker-ce | 5:20.10.15~3-0~ubuntu-jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages
    docker-ce | 5:20.10.14~3-0~ubuntu-jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages
    docker-ce | 5:20.10.13~3-0~ubuntu-jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages

    b. Use the version string in the second column to install a specific version, for example 5:20.10.16~3-0~ubuntu-jammy.

    $ sudo apt-get install docker-ce=<VERSION_STRING> docker-ce-cli=<VERSION_STRING> containerd.io docker-compose-plugin
  3. Verify that Docker Engine is installed correctly by running the hello-world image.

    $ sudo service docker start
    $ sudo docker run hello-world

    This command downloads a test image and runs it in a container. When the container runs, it prints a message and exits.

Docker Engine is installed and running. The docker group is created, but no users are added to it. You need to use sudo to run Docker commands. Continue to the Linux post-install steps to allow non-privileged users to run Docker commands and for other optional configuration steps.

Upgrade Docker Engine

To upgrade Docker Engine, first run  sudo apt-get update, then follow  the installation instructions , selecting the new version you want to install.

2) Install from a package

If you cannot use Docker's repository to install Docker Engine, you can download the .deb file for your release and install it manually. You need to download a new file each time you want to upgrade Docker.

  1. Go to the Index of linux/ubuntu/dists/, choose your Ubuntu version, then browse to pool/stable/, choose amd64, armhf, arm64, or s390x, and download the .deb file for the Docker Engine version you want to install.

  2. Install Docker Engine, change the path below to the path where you downloaded the Docker package.

    $ sudo dpkg -i /path/to/package.deb

    The Docker daemon starts automatically.

  3. Verify that Docker Engine is installed correctly by running the hello-world image.

    $ sudo docker run hello-world

    This command downloads a test image and runs it in a container. When the container runs, it prints a message and exits.

Docker Engine is installed and running. The docker group is created, but no users are added to it. You need to use sudo to run Docker commands. Continue to the Linux post-install steps to allow non-privileged users to run Docker commands and for other optional configuration steps.

Upgrade Docker Engine

To upgrade Docker Engine, download the updated package file and repeat  the installation process , pointing to the new file.

3) Install using the convenience script

Docker provides a convenience script at get.docker.com to install Docker into development environments quickly and non-interactively. The convenience script is not recommended for production environments, but it can be used as an example to create a provisioning script that is tailored to your needs. Also refer to the install-using-the-repository steps above for installation using a package repository. The source code for the script is open source, and can be found in the docker-install repository on GitHub.

Always check scripts downloaded from the Internet before running them locally. Before installing, please familiarize yourself with the potential risks and limitations of convenience scripts:

  • The script requires root or sudo privileges to run.
  • The script attempts to detect your Linux distribution and version and configure the package management system for you, and does not allow you to customize most installation parameters.
  • The script installs dependencies and recommendations without asking for confirmation. This may install a large number of packages, depending on the host's current configuration.
  • By default, the script installs the latest stable versions of Docker, containerd and runc. When using this script to provision a machine, it may cause unexpected major version upgrades of Docker. Always test (major) upgrades in a test environment before deploying to a production system.
  • This script is not designed to upgrade existing Docker installations. When using scripts to update an existing installation, dependencies may not be updated to the expected versions, resulting in outdated versions being used.

Tip: Preview script steps before running

You can run the script with the DRY_RUN=1 option to learn what steps the script will execute during installation:

$ curl -fsSL https://get.docker.com -o get-docker.sh
$ DRY_RUN=1 sh ./get-docker.sh

This example downloads the script from get.docker.com and runs it to install the latest stable version of Docker on Linux:

$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
Executing docker install script, commit: 7cae5f8b0decc17d6571f9f52eb840fbc13b2737
<...>

Install pre-release

Docker also  provides a handy script at test.docker.com for installing pre-releases of Docker on Linux. This script is equivalent to the one in get.docker.com, but configures your package manager to enable the "test" channel from our package repository, which includes both stable and pre-release versions (beta, release candidates) of Docker. Use this script to gain early access to new releases and evaluate them in a test environment before releasing them to stable.

To install the latest version of Docker on Linux from the "test" channel, run:

$ curl -fsSL https://test.docker.com -o test-docker.sh
$ sudo sh test-docker.sh
<...>

Upgrade Docker after using the convenience script

If you installed Docker using the convenience script, you should upgrade Docker using your package manager directly. There is no advantage to re-running the convenience script, and it can cause issues if it attempts to re-add repositories that have already been added to the host machine.
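As a sketch of that advice (assumptions: an Ubuntu host with Docker's apt repository already configured; the APPLY_UPGRADE flag is a made-up guard so the commands only run when explicitly requested):

```shell
# Hypothetical guard: set APPLY_UPGRADE=1 to actually perform the upgrade.
if [ "${APPLY_UPGRADE:-0}" = "1" ]; then
  sudo apt-get update
  sudo apt-get install --only-upgrade \
    docker-ce docker-ce-cli containerd.io docker-compose-plugin
fi
```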

Uninstall Docker Engine

  1. Uninstall the Docker Engine, CLI, Containerd and Docker Compose packages:

    $ sudo apt-get purge docker-ce docker-ce-cli containerd.io docker-compose-plugin
  2. Images, containers, volumes, and customized configuration files on your host are not automatically removed. To delete all images, containers, and volumes:

    $ sudo rm -rf /var/lib/docker
    $ sudo rm -rf /var/lib/containerd

You must manually delete any edited configuration files.

Post-installation steps for Linux

This section contains optional procedures for configuring a Linux host to work better with Docker.

Manage Docker as a non-root user

The Docker daemon binds to a Unix socket instead of a TCP port. By default, the Unix socket is owned by the root user, and other users can only access it using sudo. The Docker daemon always runs as the root user.

If you don't want to preface the docker command with sudo, create a Unix group called docker and add users to it. When the Docker daemon starts, it creates a Unix socket accessible by members of the docker group.

Warning

The docker group grants privileges equivalent to the root user. For details on how this impacts security in your system, see Docker Daemon Attack Surface.

Note:

To run Docker without root privileges, see Run the Docker daemon as a non-root user (rootless mode).

To create the docker group and add your user:

  1. Create the docker group.

    $ sudo groupadd docker
  2. Add your user to the docker group.

    $ sudo usermod -aG docker $USER
  3. Log out and log back in so that your group memberships are re-evaluated.

    If testing on a virtual machine, it may be necessary to restart the virtual machine for the changes to take effect.

    On a desktop Linux environment (such as X Windows), log out of the session completely, then log back in.

    On Linux, you can also run the following command to activate changes to groups:

    $ newgrp docker
  4. Verify that you can run docker commands without sudo.

    $ docker run hello-world

    This command downloads a test image and runs it in a container. When the container runs, it prints a message and exits.

    If you initially ran Docker CLI commands using sudo before adding your user to the docker group, you may see the following error, which indicates that your ~/.docker/ directory was created with incorrect permissions due to the sudo commands.

    WARNING: Error loading config file: /home/user/.docker/config.json -
    stat /home/user/.docker/config.json: permission denied
    

    To fix this, delete the ~/.docker/ directory (it will be recreated automatically, but any customizations will be lost), or change its ownership and permissions with the following commands:

    $ sudo chown "$USER":"$USER" /home/"$USER"/.docker -R
    $ sudo chmod g+rwx "$HOME/.docker" -R

Configure Docker to start on boot

Most current Linux distributions (RHEL, CentOS, Fedora, Debian, Ubuntu 16.04 and later) use systemd to manage which services start when the system boots. On Debian and Ubuntu, the Docker service starts on boot by default. To automatically start Docker and containerd on boot for other distributions, use the commands below:

$ sudo systemctl enable docker.service
$ sudo systemctl enable containerd.service

To disable this behavior, use disable instead.

$ sudo systemctl disable docker.service
$ sudo systemctl disable containerd.service

If you need to add an HTTP proxy, set a different directory or partition for the Docker runtime files, or make other customizations, see  Customizing the systemd Docker daemon options .
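To check the current boot-time status of both services (read-only; the || true keeps the check harmless on systems without these units):

```shell
# Report whether the services are enabled to start on boot.
if command -v systemctl >/dev/null 2>&1; then
  systemctl is-enabled docker.service containerd.service || true
fi
```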

Use a different storage driver

For information about the different storage drivers, see Storage drivers. The default storage driver and the list of supported storage drivers depend on your host's Linux distribution and available kernel drivers.

Configure the default log driver

Docker provides the capability to collect and view log data from all containers running on a host via a series of logging drivers. The default logging driver, json-file, writes log data to JSON-formatted files on the host filesystem. Over time, these log files expand in size, potentially leading to the exhaustion of disk resources.

To alleviate such issues, either configure the json-file logging driver to enable log rotation, use an alternative logging driver such as the "local" logging driver that performs log rotation by default, or use a logging driver that sends logs to a remote logging aggregator.
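For example, log rotation for the json-file driver can be enabled in /etc/docker/daemon.json. The max-size and max-file values below are illustrative; tune them for your host:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

Restart the Docker daemon after editing the file; the settings apply to newly created containers.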

Configure the Docker daemon to listen for connections

By default, the Docker daemon listens for connections on a Unix socket to accept requests from local clients. You can allow Docker to accept requests from remote hosts by configuring it to listen on an IP address and port as well as the Unix socket. For more details on this configuration option, see the "Bind Docker to another host/port or a Unix socket" section of the Docker CLI reference article.

secure your connection

Before configuring Docker to accept connections from remote hosts, it is critically important to understand the security implications of opening Docker to the network. If steps are not taken to secure the connection, it is possible for a remote non-root user to gain root access on the host. For more information on how to secure this connection with TLS certificates, check this article on how to protect the Docker daemon socket.
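As a sketch of what a TLS-protected daemon invocation looks like (the certificate file names are placeholders, and the START_TLS_DAEMON flag is a made-up guard so the command is not executed by accident):

```shell
# Sketch only: a TLS-verified daemon listening on the conventional port 2376.
# ca.pem / server-cert.pem / server-key.pem are placeholder file names.
if [ "${START_TLS_DAEMON:-0}" = "1" ]; then
  sudo dockerd \
    --tlsverify \
    --tlscacert=ca.pem \
    --tlscert=server-cert.pem \
    --tlskey=server-key.pem \
    -H tcp://0.0.0.0:2376
fi
```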

Configuring Docker to accept remote connections can be done with the docker.service systemd unit file for Linux distributions using systemd, such as recent versions of RedHat, CentOS, Ubuntu, and SLES, or with the daemon.json file, which is recommended for Linux distributions that do not use systemd.

systemd vs daemon.json

Configuring Docker to listen for connections using both the systemd unit file and the daemon.json file causes a conflict that prevents Docker from starting.

Configure remote access with systemd unit file

  1. Use the command sudo systemctl edit docker.service to open an override file for docker.service in a text editor.

  2. Add or modify the following lines, replacing with your own values.

    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375
  3. Save the file.

  4. Reload the systemctl configuration.

    $ sudo systemctl daemon-reload
  5. Restart Docker.

    $ sudo systemctl restart docker.service
  6. Verify that the change has taken effect by reviewing the output of netstat to confirm that dockerd is listening on the configured port.

    $ sudo netstat -lntp | grep dockerd
    tcp        0      0 127.0.0.1:2375          0.0.0.0:*               LISTEN      3758/dockerd

Configure remote access with daemon.json

  1. Set the hosts array in /etc/docker/daemon.json to connect to the Unix socket and an IP address, as follows:

    {
      "hosts": ["unix:///var/run/docker.sock", "tcp://127.0.0.1:2375"]
    }
  2. Restart Docker.

  3. Verify that the change has taken effect by reviewing the output of netstat to confirm that dockerd is listening on the configured port.

    $ sudo netstat -lntp | grep dockerd
    tcp        0      0 127.0.0.1:2375          0.0.0.0:*               LISTEN      3758/dockerd

Enable IPv6 on the Docker daemon 

To enable IPv6 on the Docker daemon, see  Enabling IPv6 support .

Troubleshooting

Kernel compatibility

Docker cannot run correctly if your kernel is older than version 3.10 or if it is missing some modules. To check kernel compatibility, you can download and run the check-config.sh script.

$ curl https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh > check-config.sh
$ bash ./check-config.sh

This script only works on Linux, not on macOS.

Cannot connect to the Docker daemon

If you see an error like the one below, your Docker client may be configured to connect to the Docker daemon on a different host, and that host may not be reachable.

Cannot connect to the Docker daemon. Is 'docker daemon' running on this host?

To see which host your client is configured to connect to, check the value of the DOCKER_HOST variable in your environment.

$ env | grep DOCKER_HOST

If this command returns a value, the Docker client is set to connect to the Docker daemon running on that host. If it is unset, the Docker client is set to connect to the Docker daemon running on the local host. If it is set in error, use the following command to unset it:

$ unset DOCKER_HOST

You may need to edit your environment in files such as ~/.bashrc or ~/.profile to prevent the DOCKER_HOST variable from being set erroneously.

If DOCKER_HOST is set as intended, verify that the Docker daemon is running on the remote host and that a firewall or network outage is not preventing you from connecting.
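One quick reachability probe (assumption: an unencrypted tcp:// endpoint; the Docker Engine API's /_ping endpoint answers OK when the daemon is up, with the host and port taken from your own DOCKER_HOST):

```shell
# Probe the daemon named by DOCKER_HOST (tcp:// form) via the /_ping endpoint.
# Skipped entirely when DOCKER_HOST is unset or curl is unavailable.
if command -v curl >/dev/null 2>&1 && [ -n "${DOCKER_HOST:-}" ]; then
  host_port="${DOCKER_HOST#tcp://}"
  curl -fsS "http://${host_port}/_ping" || echo "daemon not reachable"
fi
```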

IP forwarding problems

If you manually configure your network using systemd-network with systemd version 219 or later, Docker containers may be unable to access your network. Beginning with systemd version 220, the forwarding setting for a given network (net.ipv4.conf.<interface>.forwarding) defaults to off. This setting prevents IP forwarding. It also conflicts with Docker's behavior of enabling the net.ipv4.conf.all.forwarding setting within containers.

To work around this on RHEL, CentOS, or Fedora, edit the <interface>.network file in /usr/lib/systemd/network/ on your Docker host (for example: /usr/lib/systemd/network/80-container-host0.network) and add the following block within the [Network] section.

[Network]
...
IPForward=kernel
# OR
IPForward=true

This configuration allows IP forwarding from the container as expected.

DNS resolver found in resolv.conf and containers can't use it

Linux systems which use a GUI often run a network manager that uses a dnsmasq instance on a loopback address, such as 127.0.0.1 or 127.0.1.1, to cache DNS requests, and adds this entry to /etc/resolv.conf. The dnsmasq service speeds up DNS look-ups and also provides DHCP services. This configuration does not work within a Docker container, which has its own network namespace, because the container resolves loopback addresses such as 127.0.0.1 to itself, and it is very unlikely to be running a DNS server on its own loopback address.

If Docker detects that no DNS server referenced in /etc/resolv.conf is a fully functional DNS server, the following warning appears and Docker uses the public DNS servers provided by Google at 8.8.8.8 and 8.8.4.4 for DNS resolution.

WARNING: Local (127.0.0.1) DNS resolver found in resolv.conf and containers
can't use it. Using default external servers : [8.8.8.8 8.8.4.4]

If you see this warning, first check that you are using dnsmasq:

$ ps aux |grep dnsmasq

If your containers need to resolve hosts inside the network, public servers are not enough. You have two options:

  • You can specify the DNS servers used by Docker, or
  • You can disable dnsmasq in NetworkManager. If you do so, NetworkManager adds your true DNS nameservers to /etc/resolv.conf, but you lose the possible benefits of dnsmasq.

You only need to use one of these methods.

Specify DNS servers for Docker

The default location of the configuration file is /etc/docker/daemon.json. You can change the location of the configuration file using the --config-file daemon flag. The documentation below assumes the configuration file is located at /etc/docker/daemon.json.

  1. Create or edit the Docker daemon configuration file, which defaults to /etc/docker/daemon.json and controls the Docker daemon configuration.

    $ sudo nano /etc/docker/daemon.json
  2. Add a dns key with one or more DNS server IP addresses as values. If the file has existing contents, you only need to add or edit the dns line.

    {
      "dns": ["8.8.8.8", "8.8.4.4"]
    }

    If your internal DNS server cannot resolve public IP addresses, include at least one DNS server which can, so that you can connect to Docker Hub and so that your containers can resolve internet domain names.

    Save and close the file.

  3. Restart the Docker daemon.

    $ sudo service docker restart
  4. Verify that Docker can resolve the external IP address by attempting to pull the image:

    $ docker pull hello-world
  5. Verify that the Docker container can resolve internal hostnames via ping, if necessary.

    $ docker run --rm -it alpine ping -c4 <my_internal_host>
    
    PING google.com (192.168.1.2): 56 data bytes
    64 bytes from 192.168.1.2: seq=0 ttl=41 time=7.597 ms
    64 bytes from 192.168.1.2: seq=1 ttl=41 time=7.635 ms
    64 bytes from 192.168.1.2: seq=2 ttl=41 time=7.660 ms
    64 bytes from 192.168.1.2: seq=3 ttl=41 time=7.677 ms

Disable dnsmasq

Ubuntu

If you prefer not to change the Docker daemon's configuration to use a specific IP address, follow these instructions to disable dnsmasq in NetworkManager.

  1. Edit the /etc/NetworkManager/NetworkManager.conf file.

  2. Comment out the dns=dnsmasq line by adding a # character at the beginning of the line.

    # dns=dnsmasq
    

    Save and close the file.

  3. Restart NetworkManager and Docker. Alternatively, you can reboot the system.

    $ sudo systemctl restart network-manager
    $ sudo systemctl restart docker

RHEL, CentOS, or Fedora

To disable dnsmasq on RHEL, CentOS, or Fedora:

  1. Disable the dnsmasq service:

    $ sudo systemctl stop dnsmasq
    $ sudo systemctl disable dnsmasq
  2. Configure DNS servers manually using the Red Hat documentation .

Allow access to remote API through firewall 

If you run a firewall on the same host as Docker, and you want to access the Docker Remote API from another host, and remote access is enabled, you need to configure your firewall to allow incoming connections on the Docker port: 2376 if TLS-encrypted transport is enabled, 2375 otherwise.

Two common firewall daemons are  UFW (Uncomplicated Firewall) (commonly used on Ubuntu systems) and firewalld (commonly used on RPM-based systems). Consult the documentation for your operating system and firewall, but the following information may help you get started. The options are fairly loose, and you may wish to lock down the system with a different configuration.

  • UFW : Set DEFAULT_FORWARD_POLICY="ACCEPT" in your config.

  • firewalld : Add rules similar to the following in the policy (one for incoming requests and one for outgoing requests). Make sure the interface name and chain name are correct.

    <direct>
      [ <rule ipv="ipv6" table="filter" chain="FORWARD_direct" priority="0"> -i zt0 -j ACCEPT </rule> ]
      [ <rule ipv="ipv6" table="filter" chain="FORWARD_direct" priority="0"> -o zt0 -j ACCEPT </rule> ]
    </direct>

Your kernel does not support cgroup swap limit capabilities

On Ubuntu or Debian hosts, you may see messages like

WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.

RPM-based systems do not have this warning, and these features are enabled by default.

If these capabilities are not needed, you can ignore the warning. You can enable these capabilities on Ubuntu or Debian by following the instructions below. Memory and swap accounting incur an overhead of about 1% of available memory and a 10% overall performance degradation, even when Docker is not running.

  1. Log in to the Ubuntu or Debian host as a user with sudo privileges.

  2. Edit the /etc/default/grub file. Add or edit the GRUB_CMDLINE_LINUX line to add the following two key-value pairs:

    GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
    

    Save and close the file.

  3. Update GRUB.

    $ sudo update-grub

    Errors can occur if the syntax of your GRUB configuration file is incorrect. In this case, repeat steps 2 and 3.

    Changes take effect when the system is restarted.

3) Start Docker Desktop

To start Docker Desktop for Linux, search for Docker Desktop on  the Applications menu and open it. This will launch the whale menu icon and open the Docker Dashboard, reporting the status of Docker Desktop.

Alternatively, open a terminal and run:

systemctl --user start docker-desktop

When Docker Desktop starts, it creates a dedicated context that the Docker CLI can use as a target, and sets it as the currently used context. This is to avoid conflicts with native Docker engines that might run on a Linux host and use the default context. When shutting down, Docker Desktop resets the current context to the previous context.
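You can observe this context switching with the docker context commands. The final line switches the CLI back to the host engine; the context name desktop-linux is what current Docker Desktop for Linux releases appear to use, so treat it as an assumption:

```shell
# List contexts; Docker Desktop's context is typically "desktop-linux",
# while "default" targets the host's native engine.
if command -v docker >/dev/null 2>&1; then
  docker context ls
  docker context use default  # point the CLI back at the host engine
fi
```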

The Docker Desktop installer updates the Docker Compose and Docker CLI binaries on the host. It installs Docker Compose V2 and gives users the choice to link it as docker-compose from the Settings panel. Docker Desktop installs the new Docker CLI binary that includes cloud integration capabilities in /usr/local/bin, and creates a symlink to the old Docker CLI at /usr/local/bin/com.docker.cli.

After successfully installing Docker Desktop, you can check the versions of these binaries by running:

 docker compose version
 docker --version
 docker version

To enable Docker Desktop to start at login, select  Settings > General > Start Docker Desktop at login from the Docker menu .

Alternatively, open a terminal and run:

systemctl --user enable docker-desktop

To stop Docker Desktop, click the whale menu tray icon to open the Docker menu and select Quit Docker Desktop .

Alternatively, open a terminal and run:

systemctl --user stop docker-desktop

4) Upgrade Docker Desktop

The Docker UI shows a notification when a new version of Docker Desktop is released. Each time you want to upgrade Docker Desktop, you need to download the new package and run:

sudo apt-get install ./docker-desktop-<version>-<arch>.deb

2. Install Docker Desktop on Windows

  • Download Docker Desktop for Windows

1) System requirements

Your Windows machine must meet the following requirements to successfully install Docker Desktop.

WSL 2 backend, Hyper-V backend, and Windows containers:

  1. On Windows, WSL 2 must be enabled, or Hyper-V and Windows Containers must be enabled. See the Microsoft documentation for detailed instructions.
  2. The following hardware prerequisites are required to successfully run WSL 2 or Client Hyper-V on Windows.

Note: Containers and images created with Docker Desktop are shared among all user accounts on the machine where it is installed, because all Windows accounts use the same VM to build and run containers. Containers and images cannot be shared between user accounts when using the Docker Desktop WSL 2 backend.

2) Install Docker Desktop on Windows

Interactive installation

  1. Double-click Docker Desktop Installer.exe to run the installer.

    If you haven't downloaded the installer (Docker Desktop Installer.exe), you can get it from Docker Hub. It typically downloads to your Downloads folder, or you can run it from the recent downloads bar at the bottom of your web browser.

  2. When prompted, make sure to select or deselect the Use WSL 2 instead of Hyper-V option on the configuration page depending on your chosen backend.

    If your system supports only one of these two options, you will not be able to choose which backend to use.

  3. Follow the instructions on the installation wizard to authorize the installer and continue with the installation.

  4. After the installation is successful, click Close to complete the installation process.

  5. If your admin account is different from your user account, you must add the user to the docker-users group. Run Computer Management as an administrator and navigate to Local Users and Groups > Groups > docker-users. Right-click to add the user to the group. Log out and log back in for the changes to take effect.

  • Install from the command line

    After downloading Docker Desktop Installer.exe , run the following command in Terminal to install Docker Desktop:

  • "Docker Desktop Installer.exe" install

    If you're using PowerShell, you should run:

  • Start-Process 'Docker Desktop Installer.exe' -Wait install

    If using Windows Command Prompt:

  • start /w "" "Docker Desktop Installer.exe" install
  • The install command accepts the following flags:

  • --quiet: Disable information output when running the installer
  • --accept-license: accepts the Docker Subscription Service Agreement now, rather than requiring it to be accepted when the application first runs
  • --no-windows-containers: Disable Windows Containers Integration
  • --allowed-org=<org name>: Requires the user to be logged in and be part of the specified Docker Hub organization when running the application
  • --backend=<backend name>: select the default backend for Docker Desktop: hyper-v, windows, or wsl-2 (default)

If your administrator account is different from your user account, you must add the user to the docker-users group:

net localgroup docker-users <user> /add
  • 3) Start Docker Desktop

    Docker Desktop does not start automatically after installation. Start Docker Desktop:

    • 1. Search for Docker, and select Docker Desktop in the search results.

  • 2. Select Accept to continue. After accepting the terms, Docker Desktop will start.

  • 3. Login and start

    Quick Start Guide

    After installing Docker Desktop, the quick start guide will launch. It includes a simple exercise of building a sample Docker image, running it as a container, pushing the image to Docker Hub and saving it.

    To run the Quick Start Guide on demand, select Quick Start Guide from the menu.

  • For a more detailed guide, see  Get started .

    Login to Docker Desktop

    It is recommended to authenticate using the Login/Create ID option in the upper right corner of Docker Desktop .

    Once logged in, you can access the Docker Hub repository directly from Docker Desktop.

    Authenticated users get a higher pull rate limit than anonymous users. For example, if you are authenticated, you get 200 pulls every 6 hours, while anonymous users get 100 pulls every 6 hours per IP address. For more information, see Download Rate Limiting .

    In large enterprises with limited administrator access, administrators can create a registry.json file and use device management software to deploy it to developers' machines as part of the Docker Desktop installation process. Forcing developers to authenticate through Docker Desktop also allows administrators to set up guardrails with features like image access management, which allows team members to only access trusted content on Docker Hub and only pull from images of specified categories. See Configuring registry.json to force login for details .
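    As a sketch of what such a file can look like, a minimal registry.json lists the Docker Hub organizations users must belong to (the organization name below is a hypothetical placeholder):

    ```json
    {
      "allowedOrgs": ["my-hypothetical-org"]
    }
    ```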

  • two-factor authentication

    Docker Desktop enables you to log in to Docker Hub using two-factor authentication. Two-factor authentication provides an extra layer of security when accessing your Docker Hub account.

    Before you can log into your Docker Hub account through Docker Desktop, you must enable two-factor authentication in Docker Hub. For instructions, see Enable two-factor authentication for Docker Hub .

    After enabling two-factor authentication:

  • Go to the Docker Desktop menu and select Login/Create Docker ID .

  • Enter your Docker ID and password, and click Login .

  • After a successful login, Docker Desktop will prompt you for a verification code. Enter the six-digit code from your phone and click Verify .

  • Credential Management for Linux Users

    Docker Desktop relies on pass to store credentials in gpg2-encrypted files. Before you can log in to Docker Hub from the Docker Dashboard or the Docker menu, you must initialize pass. A warning is displayed if you have not initialized pass.

  • You can initialize pass with a gpg key. To generate a gpg key, run:

$ gpg --generate-key
...
GnuPG needs to construct a user ID to identify your key.

Real name: Molly
Email address: [email protected]
You selected this USER-ID:
    "Molly <[email protected]>"

Change (N)ame, (E)mail, or (O)kay/(Q)uit? O
...
pub   rsa3072 2022-03-31 [SC] [expires: 2024-03-30]
      7865BA9185AFA2C26C5B505669FC4F36530097C2
uid                      Molly <[email protected]>
sub   rsa3072 2022-03-31 [E] [expires: 2024-03-30]

To initialize pass, run:

fbbqt@ubuntu:~$ pass init 7865BA9185AFA2C26C5B505669FC4F36530097C2
mkdir: created directory '/home/fbbqt/.password-store/'
Password store initialized for 7865BA9185AFA2C26C5B505669FC4F36530097C2

After initializing pass, you can log in to the Docker Dashboard and pull your private images. When credentials are used with the Docker CLI or Docker Desktop, you may be prompted for the passphrase you set during gpg key generation.

$ docker pull fbbqt/privateimage
Using default tag: latest
latest: Pulling from fbbqt/privateimage
3b9cc81c3203: Pull complete 
Digest: sha256:3c6b73ce467f04d4897d7a7439782721fd28ec9bf62ea2ad9e81a5fb7fb3ff96
Status: Downloaded newer image for fbbqt/privateimage:latest
docker.io/fbbqt/privateimage:latest

overview

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use YAML files to configure your application's services. Then, with one command, all services can be created and started from the configuration. To learn more about all of Compose's features, see the features list .

Compose works in all environments: production, staging, development, testing, and CI workflows. You can learn more about each case in Common Use Cases.

Using Compose is basically a three-step process:

  1. Use a Dockerfile to define your application's environment so it can be replicated anywhere.

  2. Define the services that make up your application in docker-compose.yml so they can run together in an isolated environment.

  3. Run docker compose up and the Docker compose command starts and runs your entire application. You can also run docker-compose up using Compose standalone (the docker-compose binary).

A docker-compose.yml file looks like this:

version: "3.9"  # optional since v1.27.0
services:
  web:
    build: .
    ports:
      - "8000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    depends_on:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}

For more information on Compose files, see  Compose File Reference .

Compose has commands for managing the entire lifecycle of an application:

  • Start, stop and rebuild services
  • View the status of running services
  • Stream the log output of a running service
  • Run a one-off command on a service

Install:

If you have Docker Desktop, you already have a full Docker installation, including Compose.

You can verify this by clicking About Docker Desktop in the Docker menu.

1. Install on Linux

1) Install Compose 

To install Compose:

Install using repository

Note: These instructions assume that you already have Docker Engine and Docker CLI installed, and now want to install the Compose plugin.
For Compose Standalone, see Installing Compose Standalone .

If you already have a Docker repository set up, skip to step 2.

  1. Set up the repository. Find release-specific instructions at:

    Ubuntu | CentOS | Debian | Fedora | RHEL | SLES.

  2. Update the package index, and install the latest version of Docker Compose:

    • Ubuntu, Debian:

        $ sudo apt-get update
        $ sudo apt-get install docker-compose-plugin
      
    • RPM-based distributions:

        $ sudo yum update
        $ sudo yum install docker-compose-plugin
      
  3. Verify that Docker Compose was installed correctly by checking the version.

    $ docker compose version
    Docker Compose version vN.N.N
    

where vN.N.N is placeholder text representing the latest version.

Update Compose

To update Compose, run the following command:

  • Ubuntu, Debian:

      $ sudo apt-get update
      $ sudo apt-get install docker-compose-plugin
    
  • RPM-based distributions:

      $ sudo yum update
      $ sudo yum install docker-compose-plugin

 2) Manually install the plugin

Note: This option requires you to manage upgrades manually. We recommend setting up Docker's repository for easy maintenance.

  1. To download and install the Compose CLI plugin, run:

    $ DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
    $ mkdir -p $DOCKER_CONFIG/cli-plugins
    $ curl -SL https://github.com/docker/compose/releases/download/v2.12.0/docker-compose-linux-x86_64 -o $DOCKER_CONFIG/cli-plugins/docker-compose
    

    This command downloads the specified release of Docker Compose (from the Compose releases page) and installs it for the active user under $HOME/.docker/cli-plugins.

    To install:

    • Docker Compose for all users on the system, replace ~/.docker/cli-plugins with /usr/local/lib/docker/cli-plugins.
    • a different version of Compose, replace v2.12.0 with the version of Compose you want to use.
  2. Apply executable permissions to the binary:

     $ chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose
    

    Or, if you choose to install Compose for all users:

     $ sudo chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
    
  3. Test the installation:

     $ docker compose version
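The steps above can be wrapped in a small sanity check. This is a minimal sketch assuming the default plugin location from the text; it only verifies that the file exists and is executable, and does not invoke Docker itself:

```shell
# Sketch: sanity-check that the Compose CLI plugin is in place.
# The DOCKER_CONFIG default matches the install steps above.
check_compose_plugin() {
    plugin="${DOCKER_CONFIG:-$HOME/.docker}/cli-plugins/docker-compose"
    if [ -x "$plugin" ]; then
        echo "compose plugin installed at $plugin"
    else
        echo "compose plugin missing or not executable at $plugin" >&2
        return 1
    fi
}

check_compose_plugin || true   # report without aborting the shell
```

If the check passes, `docker compose version` should then resolve the plugin.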

2. Other installation scenarios

  • Install Compose Standalone

  • on Linux

  • Compose standalone 

    Note that Compose standalone uses the dash-separated syntax (docker-compose), not the current standard syntax (docker compose).

    Example: when using Compose standalone, type docker-compose up instead of docker compose up

  1. To download and install Compose standalone, run:
      $ curl -SL https://github.com/docker/compose/releases/download/v2.12.0/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose
    
  2. Apply executable permissions to the standalone binary in the installation target path.
  3. Use docker-compose to test and execute compose commands.

notes

If the command docker-compose fails after installation, check the path. You can also create symbolic links to /usr/bin or any other directory in your path. For example:

$ sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
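To see why the symlink works, here is a self-contained sketch of the same PATH mechanism using a stand-in binary in a temporary directory (no root required; docker-compose-real is a hypothetical placeholder, not the real binary):

```shell
# Any directory on PATH containing the link (or the binary) makes the
# command resolvable; this demo uses a temp dir instead of /usr/bin.
tmp=$(mktemp -d)
printf '#!/bin/sh\necho compose-ok\n' > "$tmp/docker-compose-real"
chmod +x "$tmp/docker-compose-real"
ln -s "$tmp/docker-compose-real" "$tmp/docker-compose"   # same idea as sudo ln -s above
PATH="$tmp:$PATH" docker-compose   # resolves via the symlink
```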

on windows server

If you are running the Docker daemon and client directly on Microsoft Windows Server and want to install Docker Compose, follow these instructions.

  1. Run PowerShell as administrator. When asked if you want to allow this application to make changes to your device, click Yes to continue with the installation.

  2. GitHub now requires TLS 1.2. In PowerShell, run the following command:

    [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
    
  3. Run the following command to download the latest version of Compose (v2.12.0):

      Start-BitsTransfer -Source "https://github.com/docker/compose/releases/download/v2.12.0/docker-compose-Windows-x86_64.exe" -Destination $Env:ProgramFiles\Docker\docker-compose.exe
    

    notes

    On Windows Server 2019, you can add the Compose executable to $Env:ProgramFiles\Docker. Since that directory is already registered in the system PATH, you can run the docker-compose --version command in a later step without additional configuration.

    To install a different version of Compose, replace v2.12.0 with the Compose you want to use.

  4. Test the installation:

    $ docker compose version
  • 3. Uninstall Docker Compose

  • Uninstalling Docker Compose depends on the method you used to install Docker Compose. On this page you can find specific instructions for uninstalling Docker Compose.

Uninstall Docker Desktop

If you want to uninstall Compose and you have Docker Desktop installed, see Uninstalling Docker Desktop and follow the appropriate link for instructions on how to remove Docker Desktop.

notes

Unless you have other instances of Docker installed in that particular environment, uninstalling Docker Desktop removes Docker completely.

Uninstall the Docker Compose CLI plugin

To remove the Compose CLI plugin, run:

Ubuntu, Debian:

sudo apt-get remove docker-compose-plugin

RPM-based distributions:

sudo yum remove docker-compose-plugin

Uninstallation of manual installation

If you installed the Compose CLI plugin using curl, uninstall it by running:

rm $DOCKER_CONFIG/cli-plugins/docker-compose

delete for all users

Or, if you installed Compose for all users, run:

rm /usr/local/lib/docker/cli-plugins/docker-compose

Getting a permission denied error?

If you get a "permission denied" error using any of the above methods, then you do not have permission to remove docker-compose. To force the removal, add sudo before either of the above commands and run it again.

Check the location of the Compose CLI plugin

To check where Compose is installed, use:

docker info --format ''

4. Use profiles in Compose

Profiles allow the Compose application model to be adjusted for various uses and environments by selectively enabling services. This is achieved by assigning each service to zero or more profiles. An unassigned service is always started, while an assigned service is only started when its profile is activated.

This makes it possible to define, in a single docker-compose.yml file, additional services that are only started in specific scenarios, such as debugging or development tasks.

Assign the profile to the service

Services are associated with profiles via the profiles attribute, which takes an array of profile names:

version: "3.9"
services:
  frontend:
    image: frontend
    profiles: ["frontend"]

  phpmyadmin:
    image: phpmyadmin
    depends_on:
      - db
    profiles:
      - debug

  backend:
    image: backend

  db:
    image: mysql

Here, the services frontend and phpmyadmin are assigned to the profiles frontend and debug, and are only started when their respective profiles are enabled.

A service with no profiles attribute is always enabled; in this case, running docker compose up would start only backend and db.

Valid profile names follow the regex format [a-zA-Z0-9][a-zA-Z0-9_.-]+.
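For illustration, candidate profile names can be checked against that pattern with grep -E, anchoring it at both ends (the sample names are hypothetical):

```shell
# Validate candidate profile names against the pattern from the text.
pattern='^[a-zA-Z0-9][a-zA-Z0-9_.-]+$'
for name in debug front-end 1tools -bad 'has space'; do
    if printf '%s\n' "$name" | grep -Eq "$pattern"; then
        echo "$name: valid"
    else
        echo "$name: invalid"
    fi
done
```

Note that the pattern requires at least two characters and that the first character must be alphanumeric, so -bad and 'has space' are rejected.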

notes

The application's core services should not be assigned profiles, so that they are always enabled and started automatically.

enable profile

To enable profiles, use the --profile command-line option or the COMPOSE_PROFILES environment variable:

$ docker compose --profile debug up
$ COMPOSE_PROFILES=debug docker compose up

Both of the above commands start your application with the debug profile enabled. With the docker-compose.yml file above, this starts the services backend, db, and phpmyadmin.

Multiple profiles can be specified by passing multiple --profile flags, or as a comma-separated list in the COMPOSE_PROFILES environment variable:

$ docker compose --profile frontend --profile debug up
$ COMPOSE_PROFILES=frontend,debug docker compose up

Automatically enable configuration files and dependency resolution

When a service with assigned profiles is explicitly targeted on the command line, its profiles are automatically enabled, so you don't need to enable them manually. This can be used for one-off services and debugging tools. For example, consider the following configuration:

version: "3.9"
services:
  backend:
    image: backend

  db:
    image: mysql

  db-migrations:
    image: backend
    command: myapp migrate
    depends_on:
      - db
    profiles:
      - tools
# will only start backend and db
$ docker compose up -d

# this will run db-migrations (and - if necessary - start db)
# by implicitly enabling profile `tools`
$ docker compose run db-migrations

But keep in mind that docker compose only automatically enables the profiles of the services targeted on the command line, not those of their dependencies. This means that every service listed in the target's depends_on must either share a common profile with it, always be enabled (by omitting profiles), or have a matching profile explicitly enabled:

version: "3.9"
services:
  web:
    image: web

  mock-backend:
    image: backend
    profiles: ["dev"]
    depends_on:
      - db

  db:
    image: mysql
    profiles: ["dev"]

  phpmyadmin:
    image: phpmyadmin
    profiles: ["debug"]
    depends_on:
      - db
# will only start "web"
$ docker compose up -d

# this will start mock-backend (and - if necessary - db)
# by implicitly enabling profile `dev`
$ docker compose up -d mock-backend

# this will fail because profile "dev" is disabled
$ docker compose up phpmyadmin

Although targeting phpmyadmin automatically enables its own profile (debug), it does not automatically enable the profile required by db (dev). To fix this, you can either add the debug profile to the db service:

db:
  image: mysql
  profiles: ["debug", "dev"]

or explicitly enable db's profile:

# profile "debug" is enabled automatically by targeting phpmyadmin
$ docker compose --profile dev up phpmyadmin
$ COMPOSE_PROFILES=dev docker compose up phpmyadmin

Using Compose in production

When you use Compose to define your application in development, you can use the same definition to run your application in different environments, such as CI, staging, and production.

The easiest way to deploy an application is to run it on a single server, similar to how you run a development environment. If you want to scale your application, you can run your Compose application on a Swarm cluster.

Modify Compose file for production

Changes may be required to the application configuration to be ready for production. These changes may include:

  • Remove any volume bindings for the application code so that the code remains inside the container and cannot be changed from outside
  • Bind to a different port on the host
  • Set environment variables differently, such as reducing logging verbosity, or specifying settings for external services such as email servers
  • Specify a restart policy, such as restart: always, to avoid downtime
  • Add additional services, such as log aggregators

For this reason, consider defining an additional Compose file, e.g. production.yml, which specifies production-appropriate configuration. This file only needs to contain the changes you want to make to the original Compose file; it is applied on top of the original docker-compose.yml to create a new configuration.
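As a sketch (the service name and values are hypothetical, matching the kinds of changes listed above), such an override file might look like:

```yaml
# production.yml — overrides applied on top of docker-compose.yml
version: "3.9"
services:
  web:
    ports:
      - "80:5000"          # bind to a different host port
    environment:
      - LOG_LEVEL=warning  # reduce logging verbosity
    restart: always        # restart policy to avoid downtime
```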

Once you have a second configuration file, tell Compose to use it with the -f option:

docker compose -f docker-compose.yml -f production.yml up -d

deploy changes

When changing the application code, remember to rebuild the image and recreate the application's container. To redeploy a service called web, use:

docker compose build web
docker compose up --no-deps -d web

This first rebuilds the image for the web, then stops, destroys and recreates the web service. The --no-deps flag prevents Compose from recreating any services the web depends on.

Run Compose on a single server 

Compose can be used to deploy applications to remote Docker hosts by setting the DOCKER_HOST, DOCKER_TLS_VERIFY, and DOCKER_CERT_PATH environment variables appropriately. See also Compose CLI environment variables.

After setting the environment variable, all normal docker compose commands work without further configuration.

5. Control startup and shutdown order in Compose

You can control the order in which services are started and shut down using  the depends_on option. Compose always starts and stops containers in dependency order, where dependencies are determined by depends_on, links, volumes_from and network_mode: "service:...".

However, for startup, Compose doesn't wait until the container is "ready" (whatever that means for your particular application), only until it's running. And for good reason.

The problem of waiting for a database (for example) to be ready is really just a subset of a larger problem in distributed systems. In production, your database may be unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.

To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.

The best solution is to perform this check in the application code, either at startup or when the connection is lost for any reason. However, if you don't need this level of resilience, a wrapper script can solve the problem:

  • Use tools such as wait-for-it, dockerize, Wait4X, the sh-compatible wait-for, or the RelayAndContainers template. These are small wrapper scripts that you can include in your application's image to poll a given host and port until it accepts TCP connections.

    For example, use wait-for-it.sh or wait-for instead of the command for your service:

    version: "2"
    services:
      web:
        build: .
        ports:
          - "80:8000"
        depends_on:
          - "db"
        command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
      db:
        image: postgres
    

    hint

    The first solution has limitations. For example, it doesn't verify whether a particular service is actually ready. If you add more arguments to the command, use the bash shift command, as shown in the following example.

  • Alternatively, write your own wrapper scripts to perform more specific application health checks. For example, you may wish to wait for Postgres to be ready to accept commands:

    #!/bin/sh
    # wait-for-postgres.sh
    
    set -e
      
    host="$1"
    shift
      
    until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
      >&2 echo "Postgres is unavailable - sleeping"
      sleep 1
    done
      
    >&2 echo "Postgres is up - executing command"
    exec "$@"
    

    The previous example can also be used as a wrapper script, setting:

    command: ["./wait-for-postgres.sh", "db", "python", "app.py"]
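Recent Compose versions can also express this wait in the Compose file itself, using a healthcheck together with the long form of depends_on and condition: service_healthy. A sketch, assuming the standard postgres image (which provides pg_isready):

```yaml
services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy   # wait until db's healthcheck passes
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
```

This keeps the readiness logic in the Compose file, though application-level retries remain the more resilient option for production.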

3. Portainer

Portainer consists of a container that can run on any cluster. It can be deployed as a Linux container or a Windows native container. Deploy, configure, troubleshoot, and secure containers in minutes on Kubernetes, Docker, Swarm, and Nomad in any cloud, data center, or device.

Portainer Community Edition (Community Edition) is a lightweight service delivery platform for containerized applications that can be used to manage Docker, Swarm, Kubernetes and ACI environments. It is designed to be easy to deploy and use. The application allows management of all orchestrator resources (containers, images, volumes, networks, etc.) through a "smart" GUI and/or an extensive API.

Portainer Tutorials (video tutorials on using Portainer)

 How to install Docker Standalone

How to install Docker Swarm

Note: the following covers installation using Docker Standalone; the Docker Swarm installation method is not covered here.

If the administrator account has not been created within five minutes of a new Portainer container being started by Docker, the portainer container will automatically exit and shut down.

1. Install Portainer using Docker on Linux

Introduction

Portainer consists of two objects, Portainer Server and Portainer Agent . These two objects run as lightweight Docker containers on the Docker engine. This document will help you install the Portainer Server container in a Linux environment. To add a new Linux environment to an existing Portainer server installation, see the Portainer agent installation instructions .
Before you start you need:
  • The latest version of Docker is installed and running
  • Have sudo access on the machine that will host the Portainer server instance
  • By default, Portainer Server will expose the UI via port 9443 and the TCP Tunnel Server via port 8000 . The latter is optional and only required if you plan to use edge computing capabilities with edge proxies.
  • License key for Portainer.
The installation instructions also make the following assumptions about your environment:
  • The environment meets our requirements . While Portainer can be used with other configurations, it may require configuration changes or have limited functionality.
  • Docker is being accessed through a Unix socket. Alternatively, it is also possible to connect via TCP.
  • SELinux is disabled on the machine running Docker. If you need SELinux, you must pass the --privileged flag to Docker when deploying Portainer.
  • Docker runs as root. Portainer using non-root Docker has some limitations and requires additional configuration.

deploy

First, create the volume that Portainer Server will use to store its database:

docker volume create portainer_data

Then, download and install the Portainer Server container:

docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
By default, Portainer generates and uses a self-signed SSL certificate to secure port 9443. Alternatively, you can provide your own SSL certificate during installation, or through the Portainer UI after installation is complete.
If you need to open HTTP port 9000 for legacy reasons, add the following to the docker run command:
-p 9000:9000

The Portainer Server is now installed. You can check that the Portainer Server container has started by running docker ps:

root@server:~# docker ps
CONTAINER ID   IMAGE                          COMMAND                  CREATED       STATUS      PORTS                                                                                  NAMES             
de5b28eb2fa9   portainer/portainer-ce:latest  "/portainer"             2 hours ago   Up 2 hours   0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 0.0.0.0:9443->9443/tcp, :::9443->9443/tcp   portainer

Similarly, we can also add this service to docker-compose and start it from there:

When using the portainer_data volume, it must be created before it can be used

# Create the portainer_data volume (choose your own volume name)
$ docker volume create portainer_data
portainer_data
# Inspect the details of the portainer_data volume
$ docker volume inspect portainer_data
[
    {
        "CreatedAt": "2022-11-09T15:39:45+08:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/portainer_data/_data",  # default location of the portainer_data volume
        "Name": "portainer_data",
        "Options": {},
        "Scope": "local"
    }
]
version: "3.9"    # Compose version
services:  # service definitions
  portainer:     # service name
    container_name: portainer      # container name after startup, equivalent to --name
    image: portainer/portainer-ce:latest   # Community Edition image
    ports:
      - 8000:8000    # port mapping: host port on the left, container port on the right
      - 9443:9443
    volumes:         # volume mappings: host path on the left, container path on the right
      - /var/run/docker.sock:/var/run/docker.sock    # Docker socket mount
      - /var/lib/docker/volumes/portainer_data:/data     # volume Portainer Server uses to store its database
    restart: always
      

Log in

Now that the installation is complete, you can log in to your Portainer Server instance by opening a web browser and going to the following location (access port 9443 of the remote virtual machine directly, register an account, and connect to the local virtual machine's Docker service to see the web visualization page it provides):
https://localhost:9443
Replace localhost with the relevant IP address or FQDN if necessary, and adjust the port if it was changed earlier.
You will see the initial setup page for Portainer Server.

Create the first user:

The first user will be an administrator. The username defaults to admin, but it can be changed if desired. Passwords must be at least 12 characters long and meet the listed password requirements.

2. Install Portainer service using Docker container on Windows

Introduction

Portainer consists of two objects, Portainer Server and Portainer Agent. These two objects run as lightweight Docker containers on the Docker engine. This document will help you install the Portainer Server container on your Windows server using Windows Containers. To add a new Windows Container environment to an existing Portainer Server installation, see the Portainer agent installation instructions.
Before you start you need:
  • Administrator access on the computer that will host the Portainer server instance
  • By default, Portainer Server will expose the UI via port 9443 and the TCP Tunnel Server via port 8000 . The latter is optional and only required if you plan to use edge computing capabilities with edge proxies.
The installation instructions also make the following assumptions about your environment:
  • The environment meets our requirements . While Portainer can be used with other configurations, it may require configuration changes or have limited functionality.

Prepare

To run the Portainer server in a Windows server/desktop environment, it is necessary to create exceptions in the firewall for the required environment. These can easily be added via PowerShell by running:
netsh advfirewall firewall add rule name="cluster_management" dir=in action=allow protocol=TCP localport=2377
netsh advfirewall firewall add rule name="node_communication_tcp" dir=in action=allow protocol=TCP localport=7946
netsh advfirewall firewall add rule name="node_communication_udp" dir=in action=allow protocol=UDP localport=7946
netsh advfirewall firewall add rule name="overlay_network" dir=in action=allow protocol=UDP localport=4789
netsh advfirewall firewall add rule name="swarm_dns_tcp" dir=in action=allow protocol=TCP localport=53
netsh advfirewall firewall add rule name="swarm_dns_udp" dir=in action=allow protocol=UDP localport=53

You also need to install Windows Container Host Service and install Docker:

Enable-WindowsOptionalFeature -Online -FeatureName containers -All
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider

Once complete, the Windows server will need to be restarted. After the reboot is complete, Portainer is ready to be installed.

deploy

First, create the volume that Portainer Server will use to store its database. Using PowerShell:
docker volume create portainer_data

Then, download and install the Portainer Server container:

docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart always -v \\.\pipe\docker_engine:\\.\pipe\docker_engine -v portainer_data:C:\data portainer/portainer-ce:latest
By default, Portainer generates and uses a self-signed SSL certificate to secure port 9443. Alternatively, you can provide your own SSL certificate during installation, or through the Portainer UI after installation is complete.
If you need to open HTTP port 9000 for legacy reasons, add the following to the docker run command:
-p 9000:9000

Log in

Now that the installation is complete, you can log into the Portainer server instance by opening a web browser and going to:

https://localhost:9443

Replace  localhost  with the relevant IP address or FQDN if necessary, and adjust the port if it was previously changed.
You will see the initial setup page for Portainer Server.

Create the first user:

The first user will be an administrator. The username defaults to admin, but it can be changed if desired. Passwords must be at least 12 characters long and meet the listed password requirements.

3. Install Portainer with Docker on WSL/Docker Desktop

Introduction

Portainer consists of two objects, Portainer Server and Portainer Agent   . These two elements run as lightweight Docker containers on the Docker engine. This document will help you install a Portainer Server container in a Windows environment using WSL and Docker Desktop. To add a new WSL/Docker desktop environment to an existing Portainer server installation, see the Portainer agent installation instructions .
Before you start you need:
  • The latest version of Docker Desktop is installed and running.
  • Administrator access on the computer that will host the Portainer server instance.
  • Installed Windows Subsystem for Linux (WSL) and selected a Linux distribution. For new installations, we recommend WSL2.
  • By default, Portainer Server will expose the UI via port 9443 and the TCP Tunnel Server via port 8000 . The latter is optional and only required if you plan to use edge computing capabilities with edge proxies.
The installation instructions also make the following assumptions about your environment:
  • The environment meets the requirements . While Portainer can be used with other configurations, it may require configuration changes or have limited functionality.
  • Docker is being accessed through a Unix socket. Alternatively, it is also possible to connect via TCP.
  • SELinux is disabled in the Linux distribution used by WSL. If you require SELinux, you will need to pass the --privileged flag to Docker when deploying Portainer.
  • Docker runs as root. Portainer using non-root Docker has some limitations and requires additional configuration.

Deployment

First, create the volume that Portainer Server will use to store its database:
docker volume create portainer_data
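You can verify that the volume was created, and see where Docker stores it on the host, with docker volume inspect (the Mountpoint field in its output also comes in handy later if you ever need to reset the admin password):

```shell
docker volume inspect portainer_data
```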

Then, download and install the Portainer Server container:

docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
By default, Portainer generates and uses a self-signed SSL certificate to secure port 9443. Alternatively, you can provide your own SSL certificate during installation, or through the Portainer UI after installation is complete.
If you need to open HTTP port 9000 for legacy reasons, add the following to the docker run command:
-p 9000:9000
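Combined, a deployment that also exposes the legacy HTTP port would be (same image and volume names as the command above):

```shell
docker run -d -p 8000:8000 -p 9000:9000 -p 9443:9443 \
  --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```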

The Portainer Server container is now installed. You can check that it has started by running docker ps:

root@server:~# docker ps
CONTAINER ID   IMAGE                                              COMMAND                  CREATED        STATUS        PORTS                                                                                  NAMES
f4ab79732007   portainer/portainer-ce:latest                      "/portainer"             2 hours ago    Up 2 hours   0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 0.0.0.0:9443->9443/tcp, :::9443->9443/tcp   portainer

Log in

Now that the installation is complete, you can log into the Portainer server instance by opening a web browser and going to:

https://localhost:9443

Replace  localhost  with the relevant IP address or FQDN if needed, and adjust the port if you changed it before.
You will see the Portainer Server's initial setup page.

Create the first user:

The first user will be an administrator. The username defaults to admin, but it can be changed if desired. The password must be at least 12 characters long and meet the listed password requirements.

4. Resetting a forgotten Portainer password

1. Use the following command to list all containers and find the Portainer container:

docker ps -a

2. Find the Portainer container in the output and stop it:

docker stop 355a8c4d2cce (container ID)

3. Use docker inspect to find the Portainer container's mount information; the Mounts section of the output shows the host path backing the /data volume:

docker inspect 355a8c4d2cce (container ID)
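If you prefer not to scan the full inspect output, a Go-template filter prints just the mounts (a sketch; the container is referenced by name here, but the container ID works equally well):

```shell
docker inspect --format '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' portainer
```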

4. Run the password reset helper:

docker run --rm -v /path:/data portainer/helper-reset-password 

In this case, the command is:

docker run --rm -v /var/lib/docker/volumes/portainer_data/_data:/data portainer/helper-reset-password
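Equivalently, you can mount the named volume itself rather than its host path, which avoids having to look up the mount point at all (assuming the volume is named portainer_data, as created during installation):

```shell
docker run --rm -v portainer_data:/data portainer/helper-reset-password
```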

The helper prints a new random password for the admin user, for example: 5,4T/6Ys29WS@C1J8o-[{r#l*pIBhKx7

5. Start the container again and log in with the new password:

docker start 355a8c4d2cce (container ID)

6. Change the Portainer login password

After logging in, go to Users, select admin, and set a new password.

5. Using your own SSL certificate with Portainer

By default, Portainer's web interface and API are served over HTTPS using the self-signed certificate generated at installation. This can be replaced with your own SSL certificate, either during installation or afterwards via the Portainer UI.

When using your own externally issued certificate, make sure to include the full certificate chain (including any intermediate certificates) in the file provided via --sslcert; otherwise you may run into certificate verification issues. The chain can be obtained from your certificate issuer or from the What's My Chain Cert website.

Use your own SSL certificate on Docker Standalone

  • Portainer requires certificates in PEM format.
  • Use the --sslcert and --sslkey flags during installation.
  • If you are using a certificate signed by your own CA, you also need to provide the CA certificate with the --sslcacert flag.

Upload the certificate (including the chain) and key to the server running Portainer, then start Portainer referencing them. The following commands assume your certificates are stored in /path/to/your/certs with the filenames portainer.crt and portainer.key, and bind-mount that directory to /certs inside the Portainer container:

Community Edition:

docker run -d -p 9443:9443 -p 8000:8000 \
    --name portainer --restart always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v portainer_data:/data \
    -v /path/to/your/certs:/certs \
    portainer/portainer-ce:latest \
    --sslcert /certs/portainer.crt \
    --sslkey /certs/portainer.key
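After starting the container, you can verify which certificate is actually being served (a quick sanity check using openssl; localhost and port 9443 are assumed from the command above):

```shell
openssl s_client -connect localhost:9443 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```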

Alternatively, Certbot can be used to generate the certificate and key. Because Docker has problems following symlinks, if you use Certbot you need to mount both the live and archive directories as volumes and reference the full-chain certificate. For example:

Community Edition:

docker run -d -p 9443:9443 -p 8000:8000 \
    --name portainer --restart always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v portainer_data:/data \
    -v /etc/letsencrypt/live/yourdomain:/certs/live/yourdomain:ro \
    -v /etc/letsencrypt/archive/yourdomain:/certs/archive/yourdomain:ro \
    portainer/portainer-ce:latest \
    --sslcert /certs/live/yourdomain/fullchain.pem \
    --sslkey /certs/live/yourdomain/privkey.pem
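Certbot certificates expire after 90 days, and a container restart is a simple way to make sure Portainer picks up the renewed files. One option (a sketch; the hook directory is standard Certbot behavior, and the container name portainer is assumed from the command above) is a deploy hook that Certbot runs after each successful renewal:

```shell
#!/bin/sh
# Save as /etc/letsencrypt/renewal-hooks/deploy/restart-portainer.sh
# and make it executable; Certbot runs it after each successful renewal.
docker restart portainer
```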

Once complete, you can access Portainer at https://$ip-docker-host:9443.


Origin blog.csdn.net/fbbqt/article/details/127408356