Docker (repost)

Source: https://segmentfault.com/a/1190000002734062

Docker is a container management tool

Docker is a lightweight, portable container isolated from the outside world, and an engine for easily building, transferring, and running applications inside containers. Unlike traditional virtualization, the Docker engine does not create a virtual machine: it uses the host's kernel and hardware directly, so applications in containers run directly on the host. As a result, the performance gap between an application running in a Docker container and the same application running on the host is almost negligible. 
Docker itself, however, is not a container system; it is a tool for creating virtual environments, originally built on the containerization tool LXC. Tools like LXC have been used in production environments for many years; Docker adds friendlier image management and deployment tooling on top.

Docker is not a virtualization engine

When Docker was first released, many people compared it to virtual machines such as VMware, KVM, and VirtualBox. Although Docker and virtualization address similar problems, Docker takes a very different approach. A virtual machine emulates a full set of hardware: disk operations performed by the guest system actually operate on a virtualized disk, and CPU-intensive tasks require "translating" the guest's CPU instructions into the host's CPU instructions before executing them. Two disk layers, two process schedulers, and the memory consumed by two operating systems all add up to considerable performance loss, and the hardware resources a virtual machine consumes are equivalent to real hardware, so a host running too many virtual machines becomes overloaded. Docker has no such concerns. Docker runs applications as "containers": namespaces and cgroups limit resources, the kernel is shared with the host, and there are no virtual disks; all container disk operations actually happen under /var/lib/docker/. In short, Docker really just runs a restricted application on the host.
It is not hard to see from the above that containers and virtual machines are different concepts, and containers cannot replace virtual machines; where containers fall short, virtual machines remain very useful. For example, if the host runs Linux, Docker can run Windows only through a virtual machine. Conversely, if the host is Windows, it cannot run Docker directly: Docker on Windows actually runs inside a VirtualBox virtual machine.

Docker Toolbox

http://dwz.cn/34cH1t 
Docker recently released Toolbox, an installer that currently supports the Mac and Windows platforms. Use it to quickly install the Docker toolset. The following is translated from the official Docker blog. 
We have often heard that getting started with Docker in development is hard, especially since, if you have defined your application with Compose, you then have to install Compose separately. With the popularity of Compose, Kitematic, and Boot2Docker, we realized we needed to make these pieces work better together. 
Toolbox installs everything you need to run Docker in development: Docker Client, Compose (Mac only), Kitematic, Machine, and VirtualBox. Toolbox uses Machine and VirtualBox to create an engine in a virtual machine to run containers. On this virtual machine, you can use the Docker client, Compose, and Kitematic to run containers.

Does it replace Boot2Docker? 
Yes, to play around with Docker, we recommend Toolbox. 
Although the Boot2Docker installer has become quite popular, Docker Toolbox is designed to install the growing collection of Docker developer tools, such as Kitematic, Machine, Swarm, and Compose. The Boot2Docker installer also shipped a command-line tool, also called boot2docker, for managing the Docker virtual machine; in Toolbox it has been replaced by Machine. 
However, under the hood, Machine still uses the Boot2Docker Linux distribution to run containers. The difference is that these containers are now managed by Machine instead of the Boot2Docker command line tool. 
If you are currently using the official Boot2Docker (boot2docker-VM), Docker Toolbox will prompt you to automatically migrate to a virtual machine using Docker Machine.

Docker Machine

https://docs.docker.com/machine/overview/

Docker Swarm

Swarm is Docker's official native clustering solution. It virtualizes a pool of Docker hosts into a single virtual host and is compatible with the standard Docker API, so any software that can talk to a Docker daemon can be seamlessly ported to, and interact with, a Docker Swarm cluster.
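Because standalone Swarm speaks the standard Docker API, pointing the ordinary client at a Swarm manager is enough to address the whole pool. A minimal sketch (the manager address and port below are hypothetical):

```shell
# Standalone Swarm exposes the standard Docker Remote API, so the ordinary
# client can target the whole cluster just by changing its endpoint.
# The manager address below is hypothetical.
export DOCKER_HOST=tcp://swarm-manager.example.com:3375
echo "Docker client now targets: $DOCKER_HOST"
# docker info          # would list every node in the pool
# docker run -d nginx  # would be scheduled onto one of the pooled hosts
```

After setting DOCKER_HOST, every plain `docker` command is transparently routed through the Swarm manager instead of the local daemon.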


Installation, configuration and use of Docker under CentOS: http://www.server110.com/docker/201411/11105.html 
CentOS 6.5 installs Docker: http://www.linuxidc.com/Linux/2015-01/111091.htm


The relationship between Dockerfile, Docker image and Docker container

http://www.csdn.net/article/2015-08-21/2825511 
A Dockerfile is the raw material of the software, a Docker image is its deliverable, and a Docker container can be seen as its running state. From the perspective of application software, the Dockerfile, the Docker image, and the Docker container represent three different stages: the Dockerfile is development-oriented, the Docker image is the delivery standard, and the Docker container covers deployment and operations. The three are indispensable and together act as the cornerstone of the Docker system.

Simply put, a Dockerfile builds a Docker image, and a Docker container is run from that image.
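The three stages can be sketched end to end. A minimal, hypothetical example (the image name and base image are purely illustrative):

```shell
# Stage 1: the Dockerfile — the "raw material" — written out here for illustration.
cat > Dockerfile.example <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y curl
CMD ["bash"]
EOF
cat Dockerfile.example

# Stage 2: build the image (the deliverable):
#   docker build -t my-image .
# Stage 3: run a container from it (the running state):
#   docker run -tid my-image
```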

  • Docker images are the basis for running Docker containers; without Docker images there can be no Docker containers. This is one of Docker's design principles. 
    A Docker image is, after all, an image, and therefore static content; a Docker container is different, it is dynamic content: processes, memory, CPU, and so on.
  • A Docker container is essentially one or more processes, and the parent process of the container is the Docker daemon. Seen this way, the conversion is not hard to understand: the Docker daemon reads the Docker image's JSON file, configures the corresponding environment for the container, and actually runs the process specified by the image, completing the real creation of the Docker container.
  • After the container is running, the image's JSON file is no longer needed. At that point the image's main role is to provide the container with a file-system view, so that processes inside the container can access file resources. 
    All Docker image layers are read-only to the container, and the container's writes to files never affect the image. The Docker daemon adds a readable-writable layer on top of the image, and all of the container's write operations land in that layer.

  • The content of a Docker image has two main parts: first, the image layer files; second, the image's JSON file. 

  • View image layer composition: docker history ubuntu:14.04
  • Image layer file content storage 
    cd /var/lib/docker/aufs/diff/ 
    ls |xargs ls
  • Image json file storage 
    /var/lib/docker/graph 
    ls |xargs ls 
    In addition to the json file, each image layer directory also contains a layersize file, which records the total size of the files inside that layer.

Difference between docker and virtual machine

http://www.csdn.net/article/2014-07-02/2820497-what's-docker

Docker containers have the following advantages over VMs: 
  • Fast startup: a container can usually start within a second, while a VM usually takes longer. 
  • High resource utilization: an ordinary PC can run thousands of containers; try running thousands of VMs. 
  • Low performance overhead: a VM needs extra CPU and memory to run a full guest OS, and that part consumes extra resources. 
Why such a huge performance gap for similar functionality? It comes down to their very different design philosophies.


Docker underlying foundation

docker lxc cgroup namespace

http://blog.csdn.net/cnsword/article/details/17053865 
Docker manages lxc; lxc is a management tool for cgroups; and cgroups are the user-space management interface for namespaces. Namespaces are the basic mechanism by which the Linux kernel manages process groups in task_struct. 
Docker is implemented in Go. It automates the lxc management process and can automatically download the rootfs of the corresponding distribution release online. 
lxc can chroot directly into the rootfs of any system and, through the cgroup restriction mechanism, control the resource usage of the system inside the container. 
Once a cgroup is configured through a configuration file or command, it restricts the system resources used by the corresponding process or group of processes. 
Clearly, at the lxc level and above, a complete system is run inside a restricted container with the help of the chroot mechanism, so that multiple new systems run under specific physical resource constraints without virtualization technology.
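The namespace membership mentioned above can be observed directly from user space: every process's namespaces appear as symbolic links under /proc/<pid>/ns/, no root required. A small read-only sketch:

```shell
# Each process belongs to a set of namespaces (pid, net, ipc, mnt, uts, ...).
# Two processes in the same namespace show the same inode in these links;
# a containerized process would show different ones.
ls -l /proc/self/ns/
readlink /proc/self/ns/pid
```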

Libcontainer

http://www.infoq.com/cn/articles/docker-container-management-libcontainer-depth-analysis/ 
A container is a managed execution environment that shares a kernel with the host system and can be isolated from other containers on the system. 
When Docker was first released in 2013, it was an open-source container management engine based on LXC, simplifying LXC's complex container creation and use into Docker's own command set. As Docker developed, it took on a more ambitious goal: to define the implementation standard for containers in reverse, abstracting the underlying implementation behind the Libcontainer interface. This made the underlying container implementation pluggable: whether it uses namespaces and cgroups or another mechanism such as systemd, anything that implements the set of interfaces defined by Libcontainer can run Docker. This also opens the door to Docker becoming fully cross-platform.

LXC

http://www.cnblogs.com/lisperl/archive/2012/04/15/2450183.html 
LXC is the abbreviation of Linux containers, which is a virtualization technology at the operating system level based on containers. 
LXC can provide a virtual execution environment for processes at the operating-system level; such a virtual execution environment is a container. It can bind specific CPU and memory nodes to a container, allocate specific proportions of CPU time and IO time, limit the amount of memory that can be used (including memory and swap space), provide device access control, and provide independent namespaces (network, pid, ipc, mnt, uts). 
The real implementation of LXC relies on features of the Linux kernel; the LXC project just integrates them. Container-based virtualization technology originated from so-called resource containers and security containers. 
For resource management, LXC relies on the cgroups subsystem of the Linux kernel; cgroups is a process-group-based resource management framework provided by the kernel, which can limit the resources available to a specific process group. For isolation, LXC relies on the kernel's namespace feature, specifically by passing the corresponding flags (CLONE_NEWNS, CLONE_NEWPID, etc.) to clone. 
LXC is so-called operating-system-level virtualization. Compared with traditional HAL (hardware abstraction layer) level virtualization, it has the following advantages: 
1. Smaller virtualization overhead: many LXC features are provided by the kernel itself, which implements them at very low cost. 
2. Rapid deployment: to isolate a specific application with LXC, you only need to install LXC and then use LXC commands to create and start a container that provides a virtual execution environment for the application. Traditional virtualization requires creating a virtual machine first, then installing a system, then deploying the application. 
Compared with other operating-system-level virtualization technologies, LXC's biggest advantage is that it is integrated into the kernel and does not require separate kernel patches.

CGroup

http://www.ibm.com/developerworks/cn/linux/1506_cgroup/index.html 
CGroup is short for Control Groups, a Linux kernel mechanism for limiting, accounting for, and isolating the physical resources (CPU, memory, I/O, etc.) used by process groups. 
CGroup is a Linux kernel feature that manages arbitrary processes in groups. CGroup itself is infrastructure providing the functions and interfaces for group management; concrete resource management features, such as I/O or memory allocation control, are implemented on top of it. These concrete features are called CGroup subsystems or controllers: for example, the Memory controller limits memory and the CPU controller controls process scheduling. The CGroup subsystems available to a running kernel are listed in /proc/cgroups. 
CGroup exposes a virtual file system as the user interface for group management and subsystem settings. To use CGroup, this file system must be mounted, with mount options specifying which subsystems to use.
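The kernel state described above can be inspected without mounting anything new: which hierarchies the current process belongs to, and which subsystems the running kernel provides, are both exported under /proc. A read-only sketch, no root needed:

```shell
# Which cgroup hierarchies does the current process belong to?
cat /proc/self/cgroup

# Which subsystems (controllers) does the running kernel provide?
# (/proc/cgroups on cgroup v1; fall back to the v2 controllers file.)
cat /proc/cgroups 2>/dev/null || cat /sys/fs/cgroup/cgroup.controllers
```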

cgroup introduction, installation and control cpu, memory, io example

http://my.oschina.net/cloudcoder/blog/424418?p=1

rootfs

http://www.crifan.com/what_is_root_filesystem/ 
The root file system in a Linux system, Root FileSystem, is abbreviated rootfs. 
The so-called rootfs, the root file system, is the collection of folders and files that allow the operating system to run normally.
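Concretely, a rootfs is just the familiar top-level directory tree; listing / on any Linux host shows the same skeleton that a container image's rootfs must provide:

```shell
# The directories an OS needs to run normally — bin, etc, lib, usr, var, ... —
# are exactly what a container image's rootfs packs together.
ls /
```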

AUFS

Docker basic technology: AUFS: http://coolshell.cn/articles/17061.html


docker common commands

  1. docker images — list all local images
  2. docker rmi — remove images (here the maryatdocker/docker-whale and docker-whale images). 
    You can use an ID or the name to remove an image. 
    $ docker rmi -f 7d9495d03763 
    $ docker rmi -f docker-whale
  3. docker run -tid ubuntu:14.04 

    On success this prints a 64-character id; its first 12 characters are used as the container ID. Below, the full 64-character id is used when explaining the aufs folder (for the other parameters, run docker run --help). 
    t: allocate a pseudo-tty or terminal inside the new container. 
    i: keep STDIN open, so the terminal stays usable even if the connection drops; without it, the terminal exits on disconnect. 
    d: run in the background; without it, you are attached to the container. 
    p: port mapping, in host port:container port form.
  4. docker attach 80ce056e622d 
    To detach the tty without exiting the shell, use the escape sequence Ctrl-p + Ctrl-q.
  5. docker run -tid -v /home/kingson/:/apps/svr/kingson/ ubuntu:14.04 — specify the volume mapping at startup (it cannot be changed after startup; you can only run a new container)
  6. docker commit <container id or name> <image name>
  7. docker pull ubuntu:14.04 
    Among them, ubuntu is the repository name, which can be understood as the image name, and 14.04 is the tag, which can be understood as the version number. 
    If the dao tool is installed, it can also be used here: dao pull ubuntu:14.04
  8. docker run -p 9999:22 -tid -v /home/kingson/:/apps/svr/kingson/ ubuntu-ssh:14.04 
    Install ssh inside the container and map container port 22 to host port 9999. 
    Log in over ssh using the host IP and port 9999; note that a user must be created inside the container first, otherwise no user can log in. 
    apt-get update 
    apt-get install openssh-server 
    Check the ssh service status: ps -s | grep ssh 
    service ssh start
  9. docker ps — list running containers; docker ps -a also lists previously run (stopped) containers
  10. docker logs - Shows us the standard output of a container.
  11. docker inspect <container id> — view detailed information about a Docker container
  12. docker stop <container id> 
    docker kill $(docker ps -q) — kill every running container
  13. docker rm <container id> 
    docker rm $(docker ps -aq) — remove all containers
  14. Advanced Docker usage (CPU and memory resource limits) 
    http://www.open-open.com/lib/view/1458559644603
  15. Docker Compose — a tool that simplifies complex multi-container applications 
    http://www.tuicool.com/articles/AnIVJn
  16. Dockerfile: https://docs.docker.com/engine/reference/builder/
  17. Docker cleanup commands 
    http://www.jb51.net/article/56051.htm 
    Kill all running containers: 
    docker kill $(docker ps -a -q) 
    Remove all stopped containers: 
    docker rm $(docker ps -a -q) 
    Remove all dangling (untagged) images: 
    docker rmi $(docker images -q -f dangling=true) 
    Remove all images: 
    docker rmi $(docker images -q)
  18. docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web 
    172.17.0.2
  19. Inspect containers
$ docker ps      # list running containers
$ docker inspect # show container information (including IP address)
$ docker logs    # fetch logs from a container
$ docker events  # get container events
$ docker port    # show a container's public ports
$ docker top     # show processes running in a container
$ docker diff    # show files changed in the container's file system
$ docker stats   # view metrics such as memory, CPU, and file system usage

Docker learning

https://docs.docker.com/engine/understanding-docker/

  • Docker components 
    Docker has two major components: 
    Docker Engine: the open source containerization platform. 
    Docker Hub: our Software-as-a-Service platform for sharing and managing Docker containers.
  • Docker’s architecture 
    Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. Both the Docker client and the daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate via sockets or through a RESTful API.
  • The underlying technology 
    Docker is written in Go and makes use of several kernel features to deliver the functionality we’ve seen. 
    • Namespaces 
      Docker takes advantage of a technology called namespaces to provide the isolated workspace we call the container. When you run a container, Docker creates a set of namespaces for that container. 
      This provides a layer of isolation: each aspect of a container runs in its own namespace and does not have access outside it.
    • Control groups 
      Docker Engine on Linux also makes use of another technology called cgroups or control groups. A key to running applications in isolation is to have them only use the resources you want. This ensures containers are good multi-tenant citizens on a host. Control groups allow Docker Engine to share available hardware resources to containers and, if required, set up limits and constraints. For example, limiting the memory available to a specific container.
    • Union file systems 
      Union file systems, or UnionFS, are file systems that operate by creating layers, making them very lightweight and fast. Docker Engine uses union file systems to provide the building blocks for containers. Docker Engine can make use of several union file system variants including: AUFS, btrfs, vfs, and DeviceMapper.
    • Container format 
      Docker Engine combines these components into a wrapper we call a container format. The default container format is called libcontainer. In the future, Docker may support other container formats, for example, by integrating with BSD Jails or Solaris Zones.

Docker compose

https://docs.docker.com/compose/install/ 
https://github.com/docker/compose/releases

curl -L https://github.com/docker/compose/releases/download/1.7.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

https://docs.docker.com/compose/gettingstarted/ 
docker-compose up 
docker-compose up -d 
docker-compose run web env 
docker-compose stop 
https://docs.docker.com/compose/compose-file/
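The docker-compose commands above assume a docker-compose.yml in the current directory. A minimal file in the format of that era (the service names and images are purely illustrative):

```yaml
# docker-compose.yml — hypothetical two-service application
web:
  build: .            # built from the Dockerfile in this directory
  ports:
    - "5000:5000"     # host:container
  links:
    - db
db:
  image: postgres
```

With this file in place, docker-compose up builds the web image, starts both containers, and wires them together.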

Build an image with a Dockerfile and push it to a repository

https://docs.docker.com/linux/step_four/

Dockerfile
FROM docker/whalesay:latest
RUN apt-get -y update && apt-get install -y fortunes
CMD /usr/games/fortune -a | cowsay

docker build -t docker-whale . 
docker tag 4f476950722f kingson4wu/docker-whale:latest 
root@ubuntu:/home/kingson# docker login --username=kingson4wu --[email protected] 
Warning: '--email' is deprecated, it will be removed soon. See usage. 
Password: 
Login Succeeded 
docker push kingson4wu/docker-whale

Self-hosted image registry

https://github.com/docker/distribution/blob/master/docs/deploying.md 
https://github.com/docker/distribution

docker run -d -p 5000:5000 --restart=always --name registry registry:2 
root@ubuntu:/home/kingson# netstat -nlp |grep 5000 
tcp6 0 0 :::5000 :::* LISTEN 6320/docker-proxy 
root@ubuntu:/home/kingson# lsof -i:5000 
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME 
exe 6320 root 4u IPv6 28188 0t0 TCP *:5000 (LISTEN)

Management console

curl -sSL https://shipyard-project.com/deploy | bash -s 
Shipyard available at http://192.168.121.128:8080 
Username: admin Password: shipyard


Documentation

https://docs.docker.com/engine/userguide/intro/
