Linux Virtualization: Customizing a Java Docker Base Environment and Publishing It

These days the old Mac Pro needs replacing as soon as possible, so yours truly has started building a cluster environment with Docker. Online resources for big-data cluster environments are few, and the odd small write-up out there gave me a headache just looking at it, so there was no choice but to build my own. Let's begin by building a good base environment: it includes Java, Scala, and an SSH daemon.

Preparation

First, create a Dockerfile in your own working directory. Beyond that, we need to prepare the Scala and Java packages for our environment; you can have the build fetch them online, but I prepared offline packages. Here's my directory:

[Figure: docker-base-1]

That's the whole of it. Naturally, I'm creating everything here from scratch, starting with this Dockerfile.

Dockerfile

Before writing ours, it's worth going over the instructions that go inside this file:

FROM: specify the base image

Docker Hub holds many high-quality images, for example service images such as nginx, redis, mongo, mysql, httpd, php, tomcat and so on, as well as images convenient for developing, building, and running applications in various languages, such as node, openjdk, python, ruby, golang and so on. Among them you can find the image that best matches your end goal and customize on top of it as the base image.
FROM here means exactly that: construct anew on the basis of an existing image.
For example, mine reads FROM centos:centos7.7.1908, meaning my base image is centos:centos7.7.1908.

RUN: execute commands

RUN, as the name implies, means execute. It runs an instruction for us just as on a Linux command line. For example, to create a folder: RUN mkdir -p /opt/module.
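Like CMD later on, RUN comes in a shell format and an exec format, and a common idiom is chaining related commands with && so they land in a single image layer. A small sketch (the package chosen is illustrative):

# shell format: the command is run through a shell
RUN yum -y install wget && yum clean all

# exec format: a JSON array, executed without a shell
RUN ["/bin/mkdir", "-p", "/opt/module"]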

COPY: copy files

Format:

  • COPY [--chown=<user>:<group>] <source path>... <destination path>
  • COPY [--chown=<user>:<group>] ["<source path1>",... "<destination path>"]

COPY is a pure copy: beyond copying the files, it does no processing of any kind.
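A minimal sketch (the file names are illustrative):

# copy one file; the destination directory is created if it doesn't exist
COPY app.conf /opt/conf/
# wildcards are allowed, following Go's filepath.Match rules
COPY config/*.yaml /opt/conf/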

ADD: a more advanced copy

ADD has essentially the same format and nature as COPY, but adds a few features on top of it.

For example:

  • The source path can be a URL, which is quite handy.

  • If the source path is a local tar archive in a recognized compression format (gzip, bzip2, xz), it is automatically extracted into the destination directory, which is even slicker (see the example just below).
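The base-environment Dockerfile at the end of this post relies on exactly this auto-extraction; for instance this line from it:

ADD jdk-8u201-linux-x64.tar.gz /opt/module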

The official Dockerfile best-practices document asks that COPY be used wherever possible, because COPY's semantics are clear: it copies files. ADD, by contrast, bundles in more complex behavior whose effects are not always obvious. The occasion that best suits ADD is the one just mentioned: when automatic extraction is wanted.

Note also that the ADD instruction invalidates the image build cache, which can make image builds slower.

So when choosing between COPY and ADD, you can follow this principle: use COPY for all plain file copying, and use ADD only where automatic extraction is needed.
When using either instruction you can also add the --chown=<user>:<group> option to change the owner and group of the copied files:

ADD --chown=55:mygroup files* /mydir/
ADD --chown=bin files* /mydir/
ADD --chown=1 files* /mydir/
ADD --chown=10:11 files* /mydir/

CMD: the container start command

CMD's format is similar to RUN's, likewise in two formats, plus a parameter-list variant:

  • shell format: CMD <command>
  • exec format: CMD ["executable", "param1", "param2", ...]
  • parameter-list format: CMD ["param1", "param2", ...], used to supply the default parameters once an ENTRYPOINT instruction has been specified.

As was said back when containers were introduced: Docker is not a virtual machine; a container is a process. Since it is a process, the program and its arguments must be specified when the container starts. The CMD instruction specifies the default start command for the container's main process.

You can replace the image's default command with a new one at run time. For example, the ubuntu image's default CMD is /bin/bash: run docker run -it ubuntu and you drop straight into bash. You can also give another command at run time, such as docker run -it ubuntu cat /etc/os-release. This replaces the default /bin/bash with cat /etc/os-release and prints the system's version information.

As for instruction format, the exec format is generally recommended: it is parsed as a JSON array, so be sure to use double quotes " and never single quotes.
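A quick illustration of the quoting rule (a sketch; the broken variant is shown as a comment):

# correct: double quotes, valid JSON, parsed as exec format
CMD ["nginx", "-g", "daemon off;"]
# wrong: CMD ['nginx'] is not valid JSON, so Docker silently falls back to
# shell format and runs /bin/sh -c with the literal string ['nginx']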
While we're on CMD, we have to mention the issue of foreground versus background execution of applications in containers. This is a point of confusion beginners often run into.

Docker is not a virtual machine. Applications in a container should run in the foreground, not be started as background services with systemd the way they would be inside a virtual or physical machine; there is no concept of background services in a container.

Some beginners write the CMD as:

CMD service nginx start
and then find the container exits immediately after starting. Some even try the systemctl command inside the container, only to find it simply cannot be executed. This comes from not having thoroughly grasped the foreground/background distinction, not having separated the ideas of container and virtual machine, and still trying to understand containers from a traditional VM angle.

For a container, the application being launched is the container's main process: the container exists for the sake of its main process, and once the main process exits, the container has no reason to exist and exits with it. Other auxiliary processes are not something it needs to concern itself with.

Using the command service nginx start asks the init system (upstart) to start nginx in the background as a daemon. And as just noted, CMD service nginx start is understood as CMD ["sh", "-c", "service nginx start"], so the main process is actually sh. The moment the service nginx start command finishes, sh finishes too, and since sh is the main process, its exit naturally makes the container exit.

The correct approach is to execute the nginx binary directly and require it to run in the foreground, for example:

CMD ["nginx", "-g", "daemon off;"]

ENTRYPOINT: the entry point

ENTRYPOINT has the same formats as RUN: an exec format and a shell format.

ENTRYPOINT serves the same purpose as CMD: specifying the container's start program and its arguments. Like CMD, ENTRYPOINT can be replaced at run time, though it is slightly more cumbersome: it must be given via docker run's --entrypoint parameter.

Once an ENTRYPOINT is specified, the meaning of CMD changes. CMD is no longer run directly; instead, its contents are passed as arguments to the ENTRYPOINT. In other words, what actually executes becomes:

<ENTRYPOINT> "<CMD>"
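A minimal sketch of this behavior (the image name myip and the lookup URL are just for illustration):

FROM ubuntu:18.04
RUN apt-get update \
    && apt-get install -y curl \
    && rm -rf /var/lib/apt/lists/*
ENTRYPOINT [ "curl", "-s", "http://myip.ipip.net" ]

Built as myip, docker run myip executes curl -s http://myip.ipip.net, while docker run myip -i passes -i along as a CMD argument, so what actually runs is curl -s http://myip.ipip.net -i.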

ENV: set environment variables

There are two formats:

ENV <key> <value>
ENV <key1>=<value1> <key2>=<value2> ...
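For example (the values are illustrative; the base-environment Dockerfile at the end of this post uses the first, space-separated format for JAVA_HOME):

ENV VERSION=1.0 DEBUG=on NAME="Happy Feet"

Subsequent instructions, and the running container, can then reference these variables as $VERSION, $DEBUG, and so on.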

ARG: build arguments

Format: ARG <name>[=<default value>]

ARG has the same effect as ENV in that both set environment variables. The difference is that environment variables set by ARG exist only in the build environment; they will not be present when the container runs later. Even so, don't use ARG to hold passwords or similar secrets, because docker history can still show every value.

The ARG instruction in a Dockerfile defines a parameter name and, optionally, a default value. That default can be overridden at build time with docker build --build-arg <name>=<value>.

Before version 1.13, the parameter names given to --build-arg had to be defined by ARG in the Dockerfile; in other words, every --build-arg parameter had to be used in the Dockerfile, and an unused one made the build exit with an error. Starting with 1.13 this strict restriction was relaxed: instead of exiting with an error, the build prints a warning and carries on. This is helpful when a CI system builds different Dockerfiles with one shared build flow, avoiding having to tailor the build command to each Dockerfile's contents.
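A small sketch (the parameter name JDK_VERSION is hypothetical):

ARG JDK_VERSION=8u201
RUN echo "building with JDK ${JDK_VERSION}"

Building with docker build --build-arg JDK_VERSION=8u211 . overrides the default, but a container run from the finished image no longer sees JDK_VERSION.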

VOLUME: define anonymous volumes

The formats are:

  • VOLUME ["<path1>", "<path2>", ...]
  • VOLUME <path>

As we've said before, a container should keep its storage layer free of writes at run time. For applications such as databases that need to keep dynamic data, the data files should be stored in a volume; later chapters introduce the concept of Docker volumes further. To keep users from forgetting to mount the directories holding dynamic files as volumes at run time, we can pre-declare certain directories as anonymous volumes in the Dockerfile. That way, even if the user runs the container without specifying a mount, the application still works normally and no large amount of data gets written into the container's storage layer.

VOLUME /data

Here the /data directory is automatically mounted as an anonymous volume at run time, and anything written to /data is kept out of the container's storage layer, keeping that layer stateless. Of course, the mount can be overridden with runtime settings, for example:

docker run -d -v mydata:/data xxxx
In this command line, a named volume mydata is mounted at the /data position, replacing the anonymous-volume mount configured in the Dockerfile.

EXPOSE: declare ports

Format: EXPOSE <port1> [<port2> ...].

The EXPOSE instruction declares the ports on which the running container provides services. It is only a declaration: the application does not open these ports at run time because of it. Writing the declaration into the Dockerfile has two benefits: it helps image users understand which port the imaged service listens on, making port mapping easier to configure; and with random port mapping at run time, that is docker run -P, the EXPOSE'd ports are mapped to random host ports automatically.

Distinguish EXPOSE from using -p <host port>:<container port> at run time. -p actually maps a host port to a container port, exposing the container's service to the outside world, whereas EXPOSE merely declares which port the container intends to use and does not perform any port mapping on the host.
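For instance, for an image that serves on port 80:

EXPOSE 80

Then docker run -P myweb would map the declared port 80 to a random host port, while docker run -p 8080:80 myweb maps it to host port 8080 explicitly (myweb is just an illustrative image name here).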

WORKDIR: specify the working directory

Format: WORKDIR <path>.

The WORKDIR instruction specifies the working directory (also called the current directory): from then on, the current directory for each layer is changed to the specified one. If the directory does not exist, WORKDIR creates it for you.

A common beginner mistake mentioned earlier is writing a Dockerfile as if it were a shell script. That misunderstanding can also cause the following error:

RUN cd /app

RUN echo "hello" > world.txt

If you build an image from this Dockerfile and run it, you'll find there is no /app/world.txt file, or that its contents are not hello. The reason is simple: in a shell, two consecutive lines execute in the same process environment, so a memory-state change made by one command directly affects the next; in a Dockerfile, these two RUN commands have fundamentally different execution environments: they are two entirely different containers. This error comes from not understanding the layered-storage model behind Dockerfile builds.

As said before, each RUN starts a container, executes the command, and then commits the file changes as a storage layer. The first layer's RUN cd /app merely changes the working directory of the current process, a change in memory only, producing no file change at all. By the second layer, a brand-new container is started; it has nothing whatsoever to do with the first layer's container, so it naturally cannot inherit the in-memory state from the earlier build stage.

So if you need to change the working directory for subsequent layers, use the WORKDIR instruction, as shown below.
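The corrected version of the example above:

WORKDIR /app
RUN echo "hello" > world.txt

This time the file really does end up at /app/world.txt with hello inside.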

USER: specify the current user

Format: USER <user>[:<group>]

The USER instruction is similar to WORKDIR: both change environment state that affects later layers. WORKDIR changes the working directory, while USER changes the identity under which subsequent instructions such as RUN, CMD, and ENTRYPOINT execute.

Of course, as with WORKDIR, USER only switches you to the specified user; the user must have been created beforehand, otherwise the switch fails.

RUN groupadd -r redis && useradd -r -g redis redis

USER redis
RUN [ "redis-server" ]

If a script running as root needs to change identity partway through its execution, for example to run a service process as an already-created user, don't use su or sudo: both need troublesome configuration and often fail in environments without a TTY. gosu is the recommended tool.

# create the redis user, and use gosu to execute commands as that user
RUN groupadd -r redis && useradd -r -g redis redis
# download gosu
RUN wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/1.7/gosu-amd64" \
    && chmod +x /usr/local/bin/gosu \
    && gosu nobody true
# set the CMD to execute as the other user (exec format, so no leading "exec";
# without a shell, a literal "exec" would be looked up as a binary)
CMD [ "gosu", "redis", "redis-server" ]

HEALTHCHECK: health checks

Format:

  • HEALTHCHECK [options] CMD <command>: set the command that checks the container's health
  • HEALTHCHECK NONE: if the base image has a health check instruction, this line masks it

The HEALTHCHECK instruction tells Docker how to judge whether the container's state is normal; it is a new instruction introduced in Docker 1.12.

Without a HEALTHCHECK instruction, the Docker engine can only judge whether a container is in an abnormal state by whether its main process has exited. In many cases that's fine, but if the program deadlocks or spins in an infinite loop, the application process doesn't exit even though the container can no longer provide its service. Before 1.12, Docker could not detect this state, so it wouldn't reschedule the container, and some containers that could no longer provide service would keep receiving user requests.

Since 1.12, Docker has provided the HEALTHCHECK instruction. It specifies a command whose result determines whether the container's main process is still providing service normally, reflecting the container's actual state more faithfully.

When an image specifies a HEALTHCHECK instruction, a container started from it has the initial status starting, becomes healthy once a HEALTHCHECK passes, and becomes unhealthy after a certain number of consecutive failures.

HEALTHCHECK supports the following options:

  • --interval=<interval>: the interval between two health checks; default 30 seconds;
  • --timeout=<duration>: the timeout for running the health check command; if it exceeds this, the check counts as a failure; default 30 seconds;
  • --retries=<number>: after this many consecutive failures, the container's state is considered unhealthy; default 3.

Like CMD and ENTRYPOINT, HEALTHCHECK can appear only once; if several are written, only the last one takes effect.

The command after HEALTHCHECK [options] CMD follows the same formats as ENTRYPOINT: a shell format and an exec format. The command's return value decides whether this health check succeeded: 0 means success, 1 means failure, and 2 is reserved (don't use that value).

Suppose we have an image containing the simplest possible web service, and we want a health check that determines whether its web service is working. We can use curl to help make the judgment; the Dockerfile's HEALTHCHECK could be written like this:

FROM nginx
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
HEALTHCHECK --interval=5s --timeout=3s \
  CMD curl -fs http://localhost/ || exit 1

Here we set the check to run every 5 seconds (such a short interval is only for this test; in practice it should be considerably longer) and count the check as failed if the command takes more than 3 seconds to respond, using curl -fs http://localhost/ || exit 1 as the health check command.

Use docker build to build this image:

docker build -t myweb:v1 .

Once it's built, we start a container:

docker run -d --name web -p 80:80 myweb:v1

Once it's running, docker container ls shows the initial state (health: starting):

docker container ls

Wait a few seconds and run docker container ls again, and you'll see the health state change to (healthy):

If the health check fails more times in a row than the retry limit, the state changes to (unhealthy).

To help with troubleshooting, the output of the health check command (both stdout and stderr) is stored in the health state and can be viewed with docker inspect:

docker inspect --format '{{json .State.Health}}' web | python -m json.tool
{
    "FailingStreak": 0,
    "Log": [
        {
            "End": "2016-11-25T14:35:37.940957051Z",
            "ExitCode": 0,
            "Output": "<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n    body {\n        width: 35em;\n        margin: 0 auto;\n        font-family: Tahoma, Verdana, Arial, sans-serif;\n    }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n",
            "Start": "2016-11-25T14:35:37.780192565Z"
        }
    ],
    "Status": "healthy"
}

ONBUILD: making wedding clothes for others

Format: ONBUILD <other instructions>.

ONBUILD is a special instruction. It is followed by another instruction, such as RUN or COPY, which will not be executed while the current image is built. It executes only when the current image is used as the base image for building the next image.

Every other instruction in a Dockerfile exists to customize the current image; ONBUILD alone exists to help other people customize what they build on top of it.

Suppose we want to build images for Node.js applications. We all know Node.js uses npm for package management, and all the dependencies, configuration, and startup information go into the package.json file. After getting the program code, you need to run npm install to fetch all the required dependencies, and only then can you start the application with npm start. So in general a Dockerfile like this gets written:

FROM node:slim
RUN mkdir /app
WORKDIR /app
COPY ./package.json /app
RUN [ "npm", "install" ]
COPY . /app/
CMD [ "npm", "start" ]

Put this Dockerfile in a Node.js project's root directory and build it, and the resulting image can be used to start containers directly. But what if we have a second, nearly identical Node.js project? Fine, copy the Dockerfile into the second project. And if there's a third? Copy it again? The more copies of a file, the harder version control becomes, so let's keep following this scenario and look at the maintenance problem.

Suppose that during development of the first Node.js project we find a problem in the Dockerfile, say a typo, or an extra package that needs installing. The developer fixes the Dockerfile, builds again, problem solved. The first project is fine, but what about the second? Although its Dockerfile was originally copy-pasted from the first, the second project's Dockerfile will not be fixed automatically just because the first project fixed its own.

Could we make a base image, and have every project use it? Then when the base image is updated, each project would pick up the update by rebuilding, without changing its own Dockerfile. Sure we can. Let's look at the result. The Dockerfile above would become:

FROM node:slim
RUN mkdir /app
WORKDIR /app
CMD [ "npm", "start" ]

Here we've pulled the project-related build instructions out to be moved into the subprojects. Assuming the base image is named my-node, each project's own Dockerfile then becomes:

FROM my-node
COPY ./package.json /app
RUN [ "npm", "install" ]
COPY . /app/

After the base image changes, each project rebuilds its image with this Dockerfile and inherits the base image's update.

So, problem solved? No. Or more precisely, only half solved. What if some things in this Dockerfile also need adjusting? For example, what if npm install needs extra parameters? That RUN line cannot move into the base image, because it involves the current project's ./package.json. Do we modify every project one by one? So this base image only solves the problem of changes in the original Dockerfile's first four instructions, while changes to the last three instructions cannot be handled at all.

ONBUILD solves exactly this problem. Let's rewrite the base image's Dockerfile using ONBUILD:

FROM node:slim
RUN mkdir /app
WORKDIR /app
ONBUILD COPY ./package.json /app
ONBUILD RUN [ "npm", "install" ]
ONBUILD COPY . /app/
CMD [ "npm", "start" ]

This time we're back to the original Dockerfile, but with the project-related instructions prefixed by ONBUILD, so those three lines are not executed while the base image is built. Each project's Dockerfile then simply becomes:

FROM my-node

Yes, that single line. Each time you build an image with this one-line Dockerfile in a project directory, the base image's three ONBUILD lines start executing: they copy the current project's code into the image and run npm install for this project, producing the application image.

Building our base environment

# specify the base image: centos
FROM centos:centos7.7.1908

MAINTAINER      Fisher "[email protected]"
# run updates first

# copy the JDK into the target directory
RUN /bin/mkdir -p  /opt/software
RUN /bin/mkdir -p /opt/module
ADD jdk-8u201-linux-x64.tar.gz /opt/module

# configure the JDK environment
ENV JAVA_HOME /opt/module/jdk1.8.0_201
ENV PATH $PATH:$JAVA_HOME/bin

# check the Java version to confirm the installation succeeded
RUN java -version

ADD scala-2.11.11.tgz /opt/module

# configure the Scala environment
ENV SCALA_HOME /opt/module/scala-2.11.11
ENV PATH $PATH:$SCALA_HOME/bin

RUN yum -y install openssh-clients
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -P '' && \
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

CMD [ "sh", "-c", "systemctl start ssh;bash"]

Back in our own build: after executing it, we see this:

[Figure: docker-base-2]

Seeing Successfully built means our build succeeded. Now we can take a look at the image:

[Figure: docker-base-3]

Pushing to Docker Hub

Here we first need one small thing ready:

  • a Docker Hub account and its password

Then we push the image up. Note one thing here:

  • the repository we push to must match our local image's REPOSITORY name (see the retagging sketch below).
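If the local image isn't named that way yet, log in and retag it first. A sketch (the local name docker-base:7.8.11 is an assumption; substitute whatever docker images shows):

docker login
docker tag docker-base:7.8.11 sun1534/docker-base:7.8.11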

[Figure: docker-base-4]

Start the push:

docker push sun1534/docker-base:7.8.11

[Figure: docker-base-5]

Now let's look at our Docker Hub page:

[Figure: docker-base-6]


Original post: www.cnblogs.com/sun-iot/p/12144886.html