Docker container management and image production
One: Create your own image
1. Pack the container's filesystem into a tar archive
Pack the container's filesystem into a tar file; that is, export a running container directly as a tar-format image file.
Export:

export    
    Export a container's filesystem as a tar archive
There are two ways (elated_lovelace is the container name):
First:
docker export -o elated_lovelace.tar elated_lovelace
Second:
docker export <container-name> > <image>.tar

Import:

Import the image archive file on another host:
import    
    Import the contents from a tarball to create a filesystem image
docker import elated_lovelace.tar  elated_lovelace:v1 

Note:
If the imported image has no name or tag, you can name it yourself and add a tag manually:

docker tag <image-ID> mycentos:7 

2. Image migration
Save the image on one host as a tar file, and then import it to other hosts.

save      
    Save an image(s) to a tar archive
    Packs an image; pairs with the load command below
[root@xingdian ~]# docker save -o nginx.tar nginx
load      
    Load an image from a tar archive or STDIN
    Pairs with the save command above; imports an image packed by save
docker load < nginx.tar

Note:
1. The name of the tar file has nothing to do with the name of the saved image.
2. If the imported image has no name, tag it yourself.
Extension: the difference between export and save:
export: equivalent to a container snapshot; a container snapshot discards all history and metadata, keeping only the filesystem state at export time.
save: nothing is discarded; the image is kept complete, with all its layers and metadata.
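The structural difference can be seen without Docker at all: a `docker save` archive carries per-layer directories plus manifest/metadata files, while a `docker export` archive is just a flat root filesystem. The sketch below fakes both layouts with plain tar; the file names inside the fake archives are illustrative:

```shell
# Fake a "docker save"-style layout: layer dirs + manifest metadata
mkdir -p demo/saved/1a2b demo/exported/etc
touch demo/saved/manifest.json demo/saved/repositories
touch demo/saved/1a2b/layer.tar demo/saved/1a2b/json
# Fake a "docker export"-style layout: just a flat rootfs
touch demo/exported/etc/hostname
tar -C demo/saved -cf saved.tar .
tar -C demo/exported -cf exported.tar .
tar -tf saved.tar | grep manifest.json   # metadata/history is preserved
tar -tf exported.tar                     # flat filesystem only, no metadata
```

This is why an exported tar loses history: the flat layout simply has nowhere to store it.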
3. Create a local image from a container
Background:
After a container has been running, some operations are done inside it, and the results of those operations need to be saved into an image.
Solution:
Use the docker commit command to commit a running container directly as an image. "Commit" here is similar to telling an svn server that you want to create a new version.
Example:
Create a new file inside the container

[root@xingdian ~]# docker exec -it 4ddf4638572d /bin/sh  
root@4ddf4638572d:/app# touch test.txt
root@4ddf4638572d:/app# exit
#  commit this newly created file into the saved image
[root@xingdian ~]# docker commit 4ddf4638572d xingdian/helloworld:v2

Example:

[root@xingdian ~]# docker commit -m "my images version1" -a "xingdian" 108a85b1ed99 daocloud.io/ubuntu:v2
sha256:ffa8a185ee526a9b0d8772740231448a25855031f25c61c1b63077220469b057
    -m                        commit message
    -a                        author
    108a85b1ed99              container ID
    daocloud.io/ubuntu:v2     image name, in the form hub-name/image-name:tag
    -p, --pause=true          pause the container while committing

Two: Use a Dockerfile to create an image
Although you can build a rootfs (root file system) by hand, Docker provides a more convenient way, called the Dockerfile.
The docker build command is used to build a Docker image based on the given Dockerfile and context.
docker build syntax

[root@xingdian ~]# docker build [OPTIONS] <PATH | URL | ->
  1. Common options
    --build-arg, set build-time variables
    --no-cache, default false. Build the image without using the build cache
    --pull, default false. Always attempt to pull a newer version of the base image
    --compress, default false. Compress the build context using gzip
    --disable-content-trust, default true. Skip image verification
    --file, -f, full path of the Dockerfile, default 'PATH/Dockerfile'
    --isolation, default --isolation="default", i.e. Linux namespaces; other options are process and hyperv
    --label, set metadata for the resulting image
    --squash, default false. Squash the newly built layers into one new layer; the new layer cannot be shared between images. This actually creates a new image while retaining the original one
    --tag, -t, name and tag of the image, usually name:tag or just name; you can set multiple tags for one image in a single build
    --network, default "default". Set the networking mode for RUN instructions during the build
    --quiet, -q, default false. Suppress the build output and print the image ID on success
    --force-rm, default false. Always remove intermediate containers
    --rm, default --rm=true, i.e. remove intermediate containers after a successful build
  2. PATH | URL | - description
    Gives the context in which the command executes.
    The context can be a local path where the build runs, or a remote URL such as a Git repository, tarball, or text file. If it is a Git repository, e.g. https://github.com/docker/rootfs.git#container:docker, Docker first implicitly runs git clone --depth 1 --recursive into a local temporary directory, then sends that temporary directory to the build process.
    During the build, any file in the context (note: the file must be inside the context) can be added to the image with the ADD instruction.
    - means the Dockerfile or context is given via STDIN.
    Example:
[root@xingdian ~]#  docker build - < Dockerfile

Note: this build process has only a Dockerfile, with no context

Description: when building from a compressed context on STDIN (docker build - < context.tar.gz), the Dockerfile must be at the root of context.tar.gz

[root@xingdian ~]# docker build -t champagne/bbauto:ltest -t champagne/bbauto:v2.1 .
[root@xingdian ~]# docker build -f dockerfiles/Dockerfile.debug -t myapp_debug .

  3. Create the folder for the image, and the Dockerfile
    Commands:
mkdir sinatra 
cd sinatra 
touch Dockerfile 
  4. Write instructions in the Dockerfile; each instruction updates the image information
# This is a comment 
FROM daocloud.io/library/ubuntu 
MAINTAINER xingdian [email protected]
RUN apt-get update && apt-get install -y ruby ruby-dev 

Format notes:
Each line is an INSTRUCTION statement: an instruction followed by its arguments. Instructions should be uppercase; "#" starts a comment.
The FROM instruction tells Docker which base image ours builds on.
MAINTAINER describes the image's creator.
RUN commands are executed inside the image; that is, the commands after RUN must be ones that can run in the image's environment.
5. Build the image
Command:

docker build -t xingdian/sinatra:v2 .

docker build is the command Docker uses to build an image
-t labels the newly built image
xingdian/sinatra is the repository name
:v2 is the tag
"." indicates the current directory, which holds the Dockerfile we are using

Detailed build output:

 [root@master sinatra]# docker build -t xingdian/sinatra:v2 . 
        Sending build context to Docker daemon 2.048 kB
        Step 1 : FROM daocloud.io/ubuntu:14.04
        Trying to pull repository daocloud.io/ubuntu ... 
        14.04: Pulling from daocloud.io/ubuntu
        f3ead5e8856b: Pull complete 
        Digest: sha256:ea2b82924b078d9c8b5d3f0db585297a5cd5b9c2f7b60258cdbf9d3b9181d828
         ---> 2ff3b426bbaa
        Step 2 : MAINTAINER xingdian [email protected]
         ---> Running in 948396c9edaa
         ---> 227da301bad8
        Removing intermediate container 948396c9edaa
        Step 3 : RUN apt-get update && apt-get install -y ruby ruby-dev
         ...
        Step 4 : RUN gem install sinatra
        ---> Running in 89234cb493d9

6. After the build completes, create a container from the image

[root@xingdian ~]# docker run -t -i xingdian/sinatra:v2 /bin/bash

Three: Understanding the container file system (extra reading)
Question:
Namespaces provide "isolation": they let the application process see only the "world" inside its Namespace. Cgroups provide "limits": they build an invisible wall around that world. After all this, the process is truly "installed" in an isolated room, and these rooms are the application "sandboxes" that the PaaS project depends on.
But one question remains: the room has walls, but what if the container process looks down at the floor?
In other words, what does the file system look like to a process inside the container?
You might immediately think this must be a Mount Namespace matter: the application process in the container should see a completely independent file system, so it can operate in its own container directories (such as /tmp) without being affected by the host or other containers.
So, is that actually true?
Here is a small program: it enables the specified Namespace when creating a child process. We can use it to verify the question above.

#define _GNU_SOURCE
#include <sys/mount.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <sched.h>
#include <signal.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)

static char container_stack[STACK_SIZE];

char* const container_args[] = {
  "/bin/bash",
  NULL
};

int container_main(void* arg)
{
  printf("Container - inside the container!\n");
  /* Replace this process with a shell */
  execv(container_args[0], container_args);
  printf("Something's wrong!\n");
  return 1;
}

int main()
{
  printf("Parent - start a container!\n");
  /* Create a child in a new Mount Namespace (CLONE_NEWNS) */
  int container_pid = clone(container_main, container_stack + STACK_SIZE,
                            CLONE_NEWNS | SIGCHLD, NULL);
  waitpid(container_pid, NULL, 0);
  printf("Parent - container stopped!\n");
  return 0;
}

The code is simple: in main, the clone() system call creates a new child process running container_main, with Mount Namespace enabled (the CLONE_NEWNS flag).
The child process executes "/bin/bash", a shell. So this shell runs inside the Mount Namespace's isolated environment.
Compile this program:

[root@xingdian ~]# gcc -o ns ns.c
[root@xingdian ~]# ./ns
Parent - start a container!
Container - inside the container!

This drops you into the "container" (which on the surface doesn't look like much — xingdian's note). However, if you run the ls command in the "container", you will find something interesting: the contents of the /tmp directory are the same as the host's.

[root@xingdian ~]# ls /tmp
# you will see many of the host's files

In other words:
Even with Mount Namespace enabled, the file system the container process sees is exactly the same as the host's.
What is going on here?
If you think about it carefully, this is actually not hard to understand: Mount Namespace changes the container process's view of file system mount points. That means the process's view only changes after a mount operation happens; before that, the newly created container simply inherits all of the host's mount points.
At this time, you may have thought of a solution: when creating a new process, in addition to declaring that you want to enable Mount Namespace, you can also tell the container process which directories need to be remounted, such as the /tmp directory. Therefore, you can add a step to remount the /tmp directory before the container process is executed:

int container_main(void* arg)
{
  printf("Container - inside the container!\n");
  /* If the root mount on your machine is shared, remount / first: */
  /* mount("", "/", NULL, MS_PRIVATE, ""); */
  mount("none", "/tmp", "tmpfs", 0, "");
  execv(container_args[0], container_args);
  printf("Something's wrong!\n");
  return 1;
}

In the modified code, before the container process starts, a mount("none", "/tmp", "tmpfs", 0, "") statement is added. This tells the container to remount the /tmp directory in tmpfs (in-memory filesystem) format.

What about the result of compiling and executing the modified code? Try it out:

[root@xingdian ~]# gcc -o ns ns.c
[root@xingdian ~]# ./ns
Parent - start a container!
Container - inside the container!
[root@xingdian ~]# ls /tmp
This time /tmp is an empty directory, which means the remount took effect. Check with mount -l:
[root@xingdian ~]# mount -l | grep tmpfs
none on /tmp type tmpfs (rw,relatime)

The /tmp directory in the container is now independently mounted as tmpfs. You can umount /tmp to see the effect.
More importantly, because the newly created process has Mount Namespace enabled, the remount is only effective inside the container process's Mount Namespace. If you check with mount -l on the host, you will find that this mount does not exist there:

[root@xingdian ~]# mount -l | grep tmpfs

This is where Mount Namespace is slightly different from other Namespaces: its changes to the container process view must be accompanied by a mount operation (mount) to take effect.
What we want is: Whenever a new container is created, I want the file system seen by the container process to be an independent isolation environment, not a file system inherited from the host. How can this be done?
The entire root directory "/" can be remounted before the container process starts. And because of Mount Namespace, this mount is invisible to the host, so the container process can work inside it freely.
In the Linux operating system, there is a command called chroot that can help you accomplish this task conveniently in the shell. As the name suggests, its role is to help you "change root file system", that is, change the root directory of the process to the location you specify.
Suppose, there is a /home/xingdian/test directory, and you want to use it as the root directory of a /bin/bash process.
First, create a test directory and several lib folders:

[root@xingdian ~]# mkdir -p /home/xingdian/test
[root@xingdian ~]# mkdir -p /home/xingdian/test/{bin,lib64,lib}
Then copy the bash and ls commands into the bin path under the test directory:
# cp -v /bin/{bash,ls}  /home/xingdian/test/bin

Next, copy all the so files required by the ls and bash commands to the lib path corresponding to the test directory. You can use the ldd command to find the so file: (ldd lists dynamic dependent libraries)


[root@xingdian ~]# list="$(ldd /bin/ls | egrep -o '/lib.*\.[0-9]')"
[root@xingdian ~]# for i in $list; do cp -v "$i" "/home/xingdian/test/${i}"; done
[root@xingdian ~]# list="$(ldd /bin/bash | egrep -o '/lib.*\.[0-9]')"
[root@xingdian ~]# for i in $list; do cp -v "$i" "/home/xingdian/test/${i}"; done

Finally, run the chroot command to tell the operating system to use the /home/xingdian/test directory as the root directory of a /bin/bash process:

[root@xingdian ~]# chroot /home/xingdian/test /bin/bash

At this time, execute "ls /" and you will see that it returns the contents under the /home/xingdian/test directory, not the contents of the host.
More importantly, for the chrooted process, it will not feel that its root directory has been "modified" to /home/xingdian/test.
Is the principle of modification of this view similar to the Linux Namespace I introduced earlier?
In fact, Mount Namespace was invented based on the continuous improvement of chroot. It is also the first Namespace in the Linux operating system.
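The manual preparation above can be collected into one script. This is a sketch: the temporary directory is generated, /bin/ls stands in for any binary, and the final chroot step is left commented out because it needs root:

```shell
# Build a minimal rootfs containing /bin/ls and its shared libraries
ROOT=$(mktemp -d)
mkdir -p "$ROOT/bin"
cp /bin/ls "$ROOT/bin/"
# ldd lists the dynamic libraries; copy each one, preserving its path
for so in $(ldd /bin/ls | grep -o '/[^ )]*\.so[^ )]*'); do
  mkdir -p "$ROOT$(dirname "$so")"
  cp "$so" "$ROOT$so"
done
ls "$ROOT/bin"
# sudo chroot "$ROOT" /bin/ls /   # needs root; would list only the mini-rootfs
```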
In order to make the root directory of the container look more "real", generally a file system of a complete operating system, such as the ISO of Ubuntu 16.04, is mounted under the root directory of the container. In this way, after the container is started, execute "ls /" in the container to view the contents of the root directory, which are all the directories and files of Ubuntu 16.04.
And this file system, which is mounted on the root directory of the container and used to provide an isolated execution environment for the container process, is the so-called "container image." It also has a more professional name, called: rootfs (root file system).
Therefore, a most common rootfs, or container image, will include some directories and files as shown below, such as /bin, /etc, /proc, etc.:

[root@xingdian ~]# ls /
bin dev etc home lib lib64 mnt opt proc root run sbin sys tmp usr var

The /bin/bash executed after entering the container is the executable file in the /bin directory, which is completely different from the host's /bin/bash.
Therefore, for the Docker project, its core principle is actually for the user process to be created:
1. Enable Linux Namespace configuration;
2. Set the specified Cgroups parameters;
3. Switch the root directory of the process (Change Root).
In this way, a complete container is born. However, in the last step of the switch, the Docker project uses the pivot_root system call first; if the system does not support it, chroot is used instead. The two system calls are similar in function.
rootfs and the kernel:
rootfs contains only the files, configuration, and directories of an operating system; it does not include the operating system kernel. All containers on the same machine share the host's kernel. If your application needs to set kernel parameters, load extra kernel modules, or interact with the kernel directly, be aware that these operations act on the host's kernel, which for every container on that machine is a shared "global variable": changing it affects them all.
In the Linux operating system, these two parts are stored separately, and the operating system will load the specified version of the kernel image only when it is booted.
This is one of the shortcomings of the container compared to the virtual machine: the virtual machine not only has a simulated hardware machine acting as a sandbox, but each sandbox also runs a complete Guest OS for applications to use at will.
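A quick check of this kernel sharing (assuming a Linux host; the docker command in the comment is only illustrative, for when Docker is installed):

```shell
# The host's kernel release; every container on this machine shares it
uname -r
# With Docker installed, the following would print the same string:
#   docker run --rm centos:7 uname -r
```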
Container consistency:
Because the cloud and local server environments are different, the application packaging process has always been the most "painful" step when using PaaS.
But with the container image (ie rootfs), this problem was solved.
Since what is packaged in rootfs is not just the application, but the files and directories of the entire operating system, it means that the application and all the dependencies it needs to run are packaged together.
For most developers, their understanding of application dependencies has always been limited to the programming language level. For example, Golang's Godeps.json. But in fact, a fact that has always been easily overlooked is that for an application, the operating system itself is the most complete "dependency library" it needs to run.
With the container image's ability to "package the operating system", this most fundamental dependency finally becomes part of the application sandbox. This gives containers their much-touted consistency: whether locally, in the cloud, or on any machine anywhere, the user only needs to unpack the packaged container image to rebuild the complete execution environment the application needs.
This consistency of operating environment down to the operating system level bridges the insurmountable gap between application development and remote execution environments.
Union File System: Union File System is also called UnionFS.
When implementing images, Docker did not follow the previous standard process for making a rootfs; it innovated:
Docker introduced the concept of layers in its image design. Each step of a user's image-building operation generates a layer, i.e. an incremental rootfs. This relies on a capability called the Union File System.
Its main function is to union-mount multiple directories from different locations onto the same directory.
For example, now there are two directories A and B, which have two files respectively:

[root@xingdian ~]# tree
        ├── A
        │  ├── a
        │  └── x
        └── B
          ├── b
          └── x

Then, use the joint mounting method to mount these two directories to a common directory C:

[root@xingdian ~]# mkdir C
[root@xingdian ~]# yum install funionfs -y   // using a union filesystem available on CentOS 7; the effect is the same
[root@xingdian ~]# funionfs  -o dirs=./A:./B none ./C

Check the contents of directory C again, and you can see that the files under directories A and B are merged together:

[root@xingdian ~]# tree ./C
        ./C
        ├── a
        ├── b
        └── x

As you can see, the merged directory C contains three entries: a, b, and x, and x appears only once. This is what "merging" means.
In addition, if you make changes to files a, b, and x in directory C, these changes will also take effect in the corresponding directories A and B.

[root@xingdian ~]# echo hello >> C/a
[root@xingdian ~]# cat C/a
hello
[root@xingdian ~]# cat A/a
hello
[root@xingdian ~]# echo hello1 >> A/a
[root@xingdian ~]# cat A/a
hello
hello1
[root@xingdian ~]# cat C/a
hello
hello1
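Docker's modern storage drivers (such as overlay2) apply the same union idea. As a rough sketch of the merged read view only — a real union mount, as shown above, also propagates writes back to A and B — the upper-layer-wins rule can be simulated with plain copies (directory names match the example in the text):

```shell
# Simulate the merged view of upper layer A over lower layer B
mkdir -p A B C
echo from-A > A/x; touch A/a
echo from-B > B/x; touch B/b
cp -r B/. C/      # lower layer first
cp -r A/. C/      # upper layer wins on name collisions
ls C              # a  b  x
cat C/x           # from-A
# A real kernel union mount would look like this (needs root):
#   mkdir -p work && mount -t overlay overlay -o lowerdir=B,upperdir=A,workdir=work C
```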

Enterprise-level Dockerfiles for building containers

1. Dockerfile to build nginx

[root@xingdian ~]# cat Dockerfile 
FROM centos:7.2.1511
ENV TZ=Asia/Shanghai
ENV LANG=en_US.UTF-8
ENV LANGUAGE=en_US:en
ENV LC_ALL=en_US.UTF-8
RUN yum -y install gcc openssl openssl-devel  pcre-devel zlib-devel make
ADD nginx-1.14.0.tar.gz /opt/
WORKDIR /opt/nginx-1.14.0
RUN ./configure --prefix=/opt/nginx 
RUN make && make install
WORKDIR /opt/nginx
RUN rm -rf /opt/nginx-1.14.0
ENV NGINX_HOME=/opt/nginx
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/nginx/sbin
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
Note:
The nginx Docker repository's documentation states:
If you add a custom CMD in the Dockerfile, be sure to include -g daemon off; in the CMD in order for nginx to stay in the foreground, so that Docker can track the process properly (otherwise your container will stop immediately after starting)!
Running nginx in debug mode
Images since version 1.9.8 come with nginx-debug binary that produces verbose output when using higher log levels. It can be used with simple CMD substitution: 
$ docker run --name my-nginx -v /host/path/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx nginx -g 'daemon off;'
Similar configuration in docker-compose.yml may look like this:

web:
  image: nginx
  volumes:
    - ./nginx.conf:/etc/nginx/nginx.conf:ro
  command: [nginx, '-g', 'daemon off;']

2. Dockerfile file to build redis

FROM centos:7.2.1511
MAINTAINER zhaokun redis4 jichujingxiang
ENV TZ=Asia/Shanghai
ENV LANG=en_US.UTF-8
ENV LANGUAGE=en_US:en
ENV LC_ALL=en_US.UTF-8
RUN yum -y install gcc make
ADD redis-4.0.9.tar.gz /opt/
RUN cd /opt/ && mv redis-4.0.9  redis  && cd /opt/redis && make && make install
RUN mkdir -p /opt/redis/logs && mkdir -p /opt/redis/data && mkdir -p /opt/redis/conf && cp /opt/redis/redis.conf /opt/redis/conf/ && cp /opt/redis/src/redis-trib.rb /usr/local/bin/
EXPOSE 6379
CMD ["redis-server","/opt/redis/conf/redis.conf"]
A redis image based on sentinel mode:
FROM centos:7.2.1511
MAINTAINER  redis4 jichujingxiang
ENV TZ=Asia/Shanghai
ENV LANG=en_US.UTF-8
ENV LANGUAGE=en_US:en
ENV LC_ALL=en_US.UTF-8
RUN yum -y install gcc make
ADD redis-4.0.9.tar.gz /opt/
ADD run.sh /
RUN cd /opt/ && mv redis-4.0.9  redis  && cd /opt/redis && make && make install
RUN mkdir -p /opt/redis/logs && mkdir -p /opt/redis/data && mkdir -p /opt/redis/conf && cp /opt/redis/redis.conf /opt/redis/conf/ && cp /opt/redis/src/redis-trib.rb /usr/local/bin/ && cp /opt/redis/sentinel.conf /opt/redis/conf/ && chmod 777 /run.sh
EXPOSE 6379
CMD ["./run.sh"]
#cat /run.sh
#!/usr/bin/bash
#2018/10/24
#xingdian
cd /opt/redis/src/
./redis-server /opt/redis/conf/redis.conf &        # must run in the background, otherwise the sentinel below cannot start
./redis-sentinel /opt/redis/conf/sentinel.conf

3. Dockerfile file to build jenkins

FROM local/c7-systemd
ADD jdk-9.0.1_linux-x64_bin.tar.gz /usr/local/
ADD apache-tomcat-9.0.14.tar.gz /usr/local/
WORKDIR /usr/local/
ENV JAVA_HOME=/usr/local/java
ENV PATH=$JAVA_HOME/bin:$PATH
ENV CATALINA_HOME=/usr/local/tomcat
# JAVA_HOME, CATALINA_HOME and PATH are already persisted by the ENV lines above
RUN mv jdk-9.0.1 java && mv apache-tomcat-9.0.14 tomcat
COPY jenkins.war /usr/local/tomcat/webapps/
EXPOSE 8080
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]


Origin blog.csdn.net/kakaops_qing/article/details/109136914