
1.  Introduction to container technology and docker

1.  Deployment Evolution

l Deploy the Application on a physical machine

l Virtualization technology

2.  The necessity of the container

l Developers need to set up various environments and dependencies to develop an Application

l Operations staff must set up those same environments again when deploying the Application

3.  Problems Solved by Containers

l Resolve the conflict between development and operations environments

4.  What is the container

l Standardized packaging of software and its dependencies

l Isolation between applications

l Share the same OS Kernel

l Can run on many mainstream operating systems

5.  The difference between a virtual machine and a container

l The virtual machine is isolated at the physical level, and the container is isolated at the application level

6.  What is docker

l docker is currently the most popular implementation of container technology

l LXC appeared in Linux between 2004 and 2008. Docker initially built on top of LXC and was open sourced in March 2013. In 2016, docker was split into an enterprise edition and a community edition.

7.  What docker can do

l Simplified configuration

l Improve efficiency

8.  docker and kubernetes

l docker can be managed by k8s

l kubernetes, referred to as k8s

9. DevOps

l DevOps addresses cooperation and communication between development and operations

l It relies not only on docker, but also on version control, continuous integration, and so on

10.  Application of docker

l During the 6.18 promotion in 2015, JD.com boldly used Docker-based container technology to carry key promotion traffic (image display, product detail pages, group-purchase pages). Nearly ten thousand Docker containers of the elastic cloud project ran in the production environment at the time and withstood the heavy-traffic test

l During the June 18, 2016 promotion, the elastic cloud project took on an even more important role: all application systems and most DB services ran on Docker, including product pages, user orders, search, caches, and databases. JD ran nearly 150,000 Docker containers online

l JD Elastic Compute Cloud achieves unified management of massive compute resources through a software-defined data center and large-scale container cluster scheduling. It meets performance and efficiency requirements, improves self-service delivery, greatly increases application deployment density and resource utilization, and saves a large amount of hardware

2.  Various construction methods of docker environment

1.  Introduction to docker installation

l Official website: https://docs.docker.com/

l Docker provides two editions: Community Edition (CE) and Enterprise Edition (EE)

2.  docker is installed on the mac system

3.  docker installed on windows system

4.  docker installed on CentOS system

l Official Documentation: Install Docker Engine on CentOS | Docker Documentation

Remove any residue from old installations

yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-selinux \
           docker-engine-selinux \
           docker-engine

 

Install dependencies that may be used

yum install -y yum-utils \
               device-mapper-persistent-data \
               lvm2

 

Add the docker repository

yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

 

List the docker versions available to install

yum list docker-ce --showduplicates | sort -r

 

Install the specified version

yum -y install docker-ce-18.06.1.ce-3.el7

 

Start docker

systemctl start docker

 

Set up autostart

systemctl enable docker

 

View the version

docker version

After installing docker, edit the following file, add the content below, and restart docker to configure domestic registry mirrors

vi /etc/docker/daemon.json

Add the following:

{
  "registry-mirrors": [
    "https://kfwkfulq.mirror.aliyuncs.com",
    "https://2lqq34jg.mirror.aliyuncs.com",
    "https://pee6w651.mirror.aliyuncs.com",
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com"
  ],
  "dns": ["8.8.8.8","8.8.4.4"]
}
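dockerd will refuse to start if daemon.json is not strictly valid JSON (double quotes only, no trailing commas). A quick sanity check on a trimmed copy of the config above:

```python
import json

# A trimmed copy of the daemon.json content above; json.loads raises
# a ValueError on syntax errors such as single quotes or trailing commas.
conf = '{"registry-mirrors": ["https://registry.docker-cn.com"], "dns": ["8.8.8.8", "8.8.4.4"]}'
parsed = json.loads(conf)
print(parsed["dns"])  # ['8.8.8.8', '8.8.4.4']
```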

restart docker 

systemctl restart docker

5.  docker installed on Ubuntu system

6.  The use of docker-machine

l Docker Machine is a tool officially provided by Docker, which can help us install Docker on a remote machine, or install a virtual machine directly on the virtual machine host and install Docker in the virtual machine

l You can manage these virtual machines and Docker through the docker-machine command

7. docker playground

l Address: https://labs.play-with-docker.com/

l Directly use the docker in the cloud

Public and free; each session lasts 4 hours

Click on the left to add an instance

 

3.  docker image and container

1.  Docker 's underlying technology implementation architecture

l docker provides a platform for packaging and running apps

l Isolate the app from the underlying infrastructure

2. docker engine

l The docker engine is the core: it consists of the background daemon dockerd, which exposes a REST API, plus a CLI client. docker follows a client/server (C/S) architecture

3.  Overall structure

 

4.  Low-level technical support

l Namespaces: isolation (pid, net, ipc, mnt, uts)

l Control groups: resource limits

l Union file systems: image and container layering
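Union file systems stack read-only image layers beneath a writable container layer; a file lookup searches from the top layer down. A toy sketch of that idea (illustrative paths, not a real driver):

```python
def lookup(path, layers):
    # Search from the topmost (writable) layer down, as a union mount does.
    for layer in reversed(layers):
        if path in layer:
            return layer[path]
    raise FileNotFoundError(path)

# Two stacked layers: a base image layer and a container layer on top.
base = {"/bin/sh": "shell-from-base-image"}
top  = {"/bin/sh": "shell-overwritten-in-container", "/app/run.py": "code"}

print(lookup("/app/run.py", [base, top]))  # found in the top layer
print(lookup("/bin/sh", [base, top]))      # the top layer shadows the base layer
```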

5.  Overview of docker image

l An image is a collection of files and metadata, organized in layers

6.  Make baseImage

l baseImage: system-based base image

 

 

Compile the image based on the current location

docker build -t gochaochao/hello-world .

 

You can view the compiled image

docker image ls

 

Run the image and become a container

docker run gochaochao/hello-world

 

You can also pull the image directly from the official

docker pull redis

 

View the images

 

7.  Container concept and use

l A container can be understood as a runtime instance of an image, distinct from the image itself

View all running containers

 

If the image is not present locally, it is pulled from the official registry

 

Can run containers interactively

 

At this point, you can check the current runtime container

 

Delete an image

 

You can also delete all containers

 

Find all instances that are not running

 

The following command can delete the container that is not running

 

 

8.  Two ways to create an Image

l After creating a container from an image, if you make changes inside it (installing some software, for example), you can commit those changes as a new image. This is docker commit

Run a centos instance interactively and install lrzsz inside it

 

Exit after installation

 

You can view the modified container

 

Package this new container into a new image

Generate a new image

docker commit determined_hermann gochaochao/centos-lrzsz

 

View the images

 

You can also view the image layer information according to the image id

 

l Use Dockerfile to create an image through build, which can be abbreviated as docker build

Through the definition file, the same effect

 

 

compile image

docker build -t gochaochao/centos-lrzsz2 .

 

 

Both images can now be seen

Start again: docker container start 470671670cac

 

9.  Detailed explanation of Dockerfile

l FROM: where to start, starting from a system

FROM scratch # Minimal system

FROM centos         

FROM ubuntu:14.04

l LABEL: metadata / comments

LABEL version="1.0"
LABEL author="sjc"

l RUN: execute a command; every RUN adds another image layer, so use as few layers as possible

RUN yum -y update && yum -y install lrzsz \
    net-tools

l WORKDIR: enter or create a directory, try not to use relative paths

WORKDIR /root # enter the /root directory
WORKDIR /test # creates /test (under the root) if missing, then enters it
WORKDIR demo  # creates demo and enters it
RUN pwd       # /test/demo

l ADD and COPY: copy local files into the image; COPY differs from ADD in that it does not auto-extract archives

ADD hello /     # add hello from the build context to the container root
ADD tt.tar.gz / # the archive is copied in and extracted

l ENV: define variables, which improves the readability and maintainability of a Dockerfile, for example:

ENV MYSQL_VERSION 5.6 # define a constant
RUN yum install -y mysql-server="${MYSQL_VERSION}" # reference the constant

l CMD and ENTRYPOINT: Execute a command or run a script

10. Dockerfile——CMD vs ENTRYPOINT

l Shell and Exec formats

 

l ENTRYPOINT and CMD: what command to run when the container starts

 

ENTRYPOINT is used more often than CMD: only the last CMD in a Dockerfile takes effect, and any arguments given to docker run replace CMD entirely.
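A minimal Dockerfile sketch of the two formats and how they combine (illustrative values, not from the course):

```dockerfile
FROM busybox
ENV name=Docker

# Shell format: the command is wrapped in /bin/sh -c, so $name expands:
# ENTRYPOINT echo "hello $name"

# Exec format: the binary runs directly, with no shell expansion.
# Pairing an exec-format ENTRYPOINT with CMD gives default arguments
# that "docker run <image> <args>" can override:
ENTRYPOINT ["ping", "-c", "3"]
CMD ["localhost"]
```

With this image, docker run <image> pings localhost, while docker run <image> 8.8.8.8 replaces only the CMD part and pings 8.8.8.8.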

11.  Share the docker  image

l The image name must start with your docker hub user name

Log in

 

Upload image to docker hub

docker image push gochaochao/centos-lrzsz2:latest

 

Pull the image back down locally

 

If you want to delete an image from docker hub

 

 

 

12.  Share Dockerfile

l Sharing a docker image is less transparent, and therefore less safe, than sharing its Dockerfile

13.  Build a private docker registry

l Docker Hub is public; you can also build your own private registry

Docker officially provides a registry image for running a private repository

 

Prepare a second machine as the private registry, and run the following on it

docker run -d -p 5000:5000 --restart always --name registry registry:2

 

Check the process, it works fine

 

In a browser, you can see that the second machine's registry is still empty

 

Test the port: the first machine can reach it with telnet

yum -y install telnet

 

Enter q to quit

 

Write a Dockerfile and build an image with the required name

 

Press dd to delete the entire line

 

Build the image; the IP is the private registry's IP

docker build -t 192.168.190.131:5000/centos .

 

Edit /etc/docker/daemon.json and add this line:

"insecure-registries":["192.168.190.131:5000"],
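The new line goes inside the existing JSON object in /etc/docker/daemon.json, alongside the mirror configuration added earlier; the file must remain one valid JSON object, roughly like this (trimmed sketch):

```json
{
  "insecure-registries": ["192.168.190.131:5000"],
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "dns": ["8.8.8.8", "8.8.4.4"]
}
```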

 

 

restart docker

 

Upload to private repository

 

Refresh the page: the private registry now has content

 

To pull from the private registry:

docker pull 192.168.190.131:5000/centos

14.  Dockerfile example

l Create a python web application and package it into a docker image to run

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return "hello docker"

if __name__ == '__main__':
    app.run()

Create a directory, create an app.py, and copy the above code

 

 

Create Dockerfile

 

FROM python:3.6
LABEL auth="sun"
RUN pip install flask
COPY app.py /app/
WORKDIR /app
EXPOSE 5000
CMD ["python","app.py"]

build image

docker build -t gochaochao/flask-hello-world .

 

Run the image

docker run gochaochao/flask-hello-world

 

At this point you can see the process

 

15.  Operating a running container

l The exec command runs a command inside a running container

Background process

 

You can run commands interactively inside the running container, addressed by its container ID

 

Interactively run the python shell of the machine inside the container

docker exec -it e8517231ecd8 python

 

Run a container in the background and specify the container name

docker run -d --name=demo gochaochao/flask-hello-world

 

Can be stopped by name

 

You can view the running log of the container at runtime

 

View runtime container details

 

Ping the address inside

 

16.  Restrictions on container resources

l Limitation on memory

l Limitation on CPU

You can cap the memory and CPU of a container at startup, for example with --memory (a hard RAM limit) and --cpu-shares (a relative CPU weight)

 

4.  docker network

1.  Network classification

l stand-alone

n Bridge network: the default; containers attach to the docker0 bridge and reach the outside via NAT

n Host network: the container shares the host's network stack directly

n None network: no networking

l Multi-machine

n Overlay Network: cluster network

Can view network card information

 

2.  Linux network namespace

l Namespace is an important concept at the bottom of docker

3.  Detailed explanation of Bridge

l Multi-container communication

4.  Container linking

l Sometimes, when writing code, the IP address to be requested is not known in advance

run a minimal system

docker run -d --name test1 busybox /bin/sh -c "while true;do sleep 3600;done"

 

Run a second minimal system, linked to the first

Linking is similar to configuring a hostname-to-IP mapping

docker run -d --name test2 --link test1 busybox /bin/sh -c "while true;do sleep 3600;done"

 

Suppose test2 hosts the project and test1 hosts the MySQL it uses. The project can then address MySQL simply as test1, without specifying an IP; the link makes the name resolve
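--link effectively writes a hosts entry into test2 mapping the name test1 to the other container's IP. A toy sketch of that name resolution (the IP here is illustrative, not from the text):

```python
# Simulated hosts entries as docker --link would write them into
# test2's /etc/hosts (the IP is an illustrative placeholder).
hosts = {"test1": "172.17.0.2"}

def resolve(name):
    # The project on test2 connects to "test1" without knowing the IP.
    return hosts.get(name)

print(resolve("test1"))   # 172.17.0.2
print(resolve("nosuch"))  # None
```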

5.  Port mapping

l Realize external access

Run a nginx, you can find the process

 

 

View the container's IP on the bridge network

 

 

From this machine, you can access the home page where nginx is started

 

At this point the outside world cannot reach it; port mapping solves this

Stop and delete the previous container

 

Start the container with a port mapping

docker run --name web -d -p 80:80 nginx

 

At this point, the external browser can access the content in the container through the virtual machine IP

 

6.  The none and host networks

l none use cases: extremely high security requirements, storing top-secret data, etc.

l The host network shares the host's network stack directly

7.  Multi-container deployment and application

Clean up the previous containers

 

 

There are 2 containers: one runs flask as the web service, the other runs redis to hold an auto-incrementing counter

First, run a redis

 

 

Create directory, write code

 

l app.py

from flask import Flask
from redis import Redis
import os
import socket

app = Flask(__name__)
redis = Redis(host=os.environ.get('REDIS_HOST', '127.0.0.1'), port=6379)

@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello Container World! I have been seen %s times and my hostname is %s.\n' % (redis.get('hits'), socket.gethostname())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)
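The app resolves the redis host from the REDIS_HOST environment variable, falling back to 127.0.0.1. A minimal sketch of that pattern in isolation:

```python
import os

def resolve_redis_host(env):
    # REDIS_HOST is set with "docker run -e REDIS_HOST=redis";
    # otherwise the app falls back to localhost for local development.
    return env.get('REDIS_HOST', '127.0.0.1')

print(resolve_redis_host({'REDIS_HOST': 'redis'}))  # redis
print(resolve_redis_host({}))                       # 127.0.0.1
print(resolve_redis_host(os.environ))               # whatever the real environment holds
```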

Write another dockerfile

 

l Dockerfile

FROM python:2.7
LABEL maintainer="[email protected]"
COPY . /app
WORKDIR /app
RUN pip install flask redis
EXPOSE 5000
CMD [ "python", "app.py" ]

Build the image

docker build -t gochaochao/flask-redis .

 

To run the built image, external access needs to add -p 5000:5000

 

Successfully ran 2 containers

 

Enter the flask container, access the running code, and you can see that redis auto-increment is called

 

8.  Multi-machine multi-container communication

Containers on different hosts may end up with the same IP, which causes problems; adding etcd to manage allocation prevents IP conflicts

 

Start 2 machines and copy the etcd installation package to both

 

Unpack it on both

 

On the first machine: enter the unpacked etcd directory and run the following, adjusting the IPs

Find the command in Run.txt and paste it in

etcd startup command (node01), adjusting the IPs

nohup ./etcd --name docker-node1 --initial-advertise-peer-urls http://192.168.83.128:2380 \
--listen-peer-urls http://192.168.83.128:2380 \
--listen-client-urls http://192.168.83.128:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://192.168.83.128:2379 \
--initial-cluster-token etcd-cluster \
--initial-cluster docker-node1=http://192.168.83.128:2380,docker-node2=http://192.168.83.130:2380 \
--initial-cluster-state new&

etcd startup command (node02), pay attention to modify the IP

nohup ./etcd --name docker-node2 --initial-advertise-peer-urls http://192.168.83.130:2380 \
--listen-peer-urls http://192.168.83.130:2380 \
--listen-client-urls http://192.168.83.130:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://192.168.83.130:2379 \
--initial-cluster-token etcd-cluster \
--initial-cluster docker-node1=http://192.168.83.128:2380,docker-node2=http://192.168.83.130:2380 \
--initial-cluster-state new&

 

Both machines now show etcd in a healthy state

 

At this point, stop docker

 

Run docker managed by etcd

/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://192.168.83.128:2379 --cluster-advertise=192.168.83.128:2375&

 

Do the same on the second machine: stop docker, then find the start command in the .txt and run it

/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://192.168.83.130:2379 --cluster-advertise=192.168.83.130:2375&

List the networks: docker network ls

View a network's details: docker network inspect <name>

 

With the above done, create an overlay network on the first machine

docker network create -d overlay demo

 

At this point, the second machine can also see the network

 

Now create a container on this network

docker run -d --name test3 --net demo busybox sh -c "while true;do sleep 3600;done"

 

Running the same command on the second machine reports an error: the name already exists. This shows the docker hosts in the cluster now know about each other

 

l For example, one for redis and one for web processing

l app.py

from flask import Flask
from redis import Redis
import os
import socket

app = Flask(__name__)
redis = Redis(host=os.environ.get('REDIS_HOST', '127.0.0.1'), port=6379)

@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello Container World! I have been seen %s times and my hostname is %s.\n' % (redis.get('hits'), socket.gethostname())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)

l Dockerfile

FROM python:2.7
LABEL maintainer="[email protected]"
COPY . /app
WORKDIR /app
RUN pip install flask redis
EXPOSE 5000
CMD [ "python", "app.py" ]

9.  overlay network and etcd communication

l When deploying multiple machines and multiple containers, it is necessary to ensure that the addresses do not conflict

5.  docker's persistent storage and data sharing

1.  Introduction of data persistence

l There is a risk of data loss in the container

2.  Data persistence scheme

l Volume based on the local file system

l Plugin-based Volume

3.  Volume types

l Managed data volume: automatically created by the docker background

l Bind mounted volume: the specific mount location can be specified by the user

4.  Data persistence - Data Volume

l https://hub.docker.com/ search mysql, you can see that VOLUME is also defined in the official Dockerfile

Defined in the official mysql

 

Clean the front container

 

Start a mysql in the background

docker run -d --name mysql2 -e MYSQL_ALLOW_EMPTY_PASSWORD=true mysql

 

View all the volumes docker has persisted locally

 

You can see where the details are stored locally

 

Persistent redis data

 

Persistent mysql data

 

Stored mysql library files

 

Unreferenced data files can be cleared

docker volume prune

 

Start the container and specify the persistent data directory file

docker run -d -v mysql:/var/lib/mysql --name mysql1 -e MYSQL_ALLOW_EMPTY_PASSWORD=true mysql

 

Enter the container and run mysql

 

Write some data, then exit

 

 

shut down and delete the container

 

Specify the data synchronization location and open the mysql container

docker run -d -v mysql:/var/lib/mysql --name mysql1 -e MYSQL_ALLOW_EMPTY_PASSWORD=true mysql

 

Run interactively, you can see that the data has been imported

 

 

5.  Data persistence - bind mounting

l You can designate a local folder to be synchronized with the container; when either side changes, the other follows

Find a directory, create a file, write something

 

 

write dockerfile

FROM nginx:latest
WORKDIR /usr/share/nginx/html
COPY index.html index.html

 

build image

 

Start the container, mount the directory

 

At this point, modify the local index.html file, and the home page accessed in it will be directly changed

 

6.  Docker Compose multi-container deployment

1.  docker deployment wordpress

l WordPress is blogging software; deploying it needs 2 images: wordpress and mysql

view all images

 

clean up image

 

download wordpress

 

Download mysql:5.5

 

Change 5.5 to latest

docker tag mysql:5.5 mysql:latest

 

Start the mysql container

 

Start the wordpress container

 

browser view

 

The wordpress deployment above required downloading and starting images one by one, which is troublesome; docker compose addresses this

2.  Introduction to docker compose

l Multi-container APPs are hard to deploy and manage by hand; docker compose works like a batch script for them

3.  docker compose  installation and use

First upload the file to the directory

 

l After installation the binary needs execute permission (chmod +x)

 

View version number

 

create folder

 

Create a file with the following content

 

l Deploy wordpress with docker-compose

version: '3'
services:
  wordpress:
    image: wordpress
    ports:
      - 80:80
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_PASSWORD: admin
    networks:
      - my-bridge
  mysql:
    image: mysql:5.5
    environment:
      MYSQL_ROOT_PASSWORD: admin
      MYSQL_DATABASE: wordpress
    volumes:
      - mysql-data:/var/lib/mysql
    networks:
      - my-bridge
volumes:
  mysql-data:
networks:
  my-bridge:
    driver: bridge

start compose

 

browser view

 

Can view process

 

For very complex projects, as long as docker-compose is written, it can be easily run

 

4.  Container scaling and load balancing

l Container expansion

docker-compose stop stops the containers; docker-compose down stops and removes them

 

 

docker-compose combined with Dockerfile

In the previous example, write a docker-compose.yml and add the following content

 

 

version: "3"
services:
  redis:
    image: redis
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8080:5000
    environment:
      REDIS_HOST: redis

run

 

can be accessed through a browser

 

stop docker-compose

 

Delete the 2 ports lines under web in docker-compose.yml, since a fixed host port would conflict when scaling

 

Scale the containers

 

Add load balancing:

First modify app.py

 

 

Also change it in the Dockerfile

 

 

In docker-compose.yml, add the following lb service for load balancing:

  lb:
    image: dockercloud/haproxy
    links:
      - web
    ports:
      - 8080:80
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

version: "3"
services:
  redis:
    image: redis
  web:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      REDIS_HOST: redis
  lb:
    image: dockercloud/haproxy
    links:
      - web
    ports:
      - 8080:80
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

start the container again

docker-compose up --scale web=3 -d

 

After starting, access the code, you can see that it is the information returned by different containers
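haproxy distributes the incoming requests across the three scaled web containers, by default in round-robin order. A toy sketch of that idea (the container names here are hypothetical):

```python
from itertools import cycle

# Hypothetical names for the three scaled web containers.
backends = cycle(["web_1", "web_2", "web_3"])

def pick_backend():
    # Round-robin: each request goes to the next backend in turn,
    # which is why successive responses report different hostnames.
    return next(backends)

print([pick_backend() for _ in range(4)])  # ['web_1', 'web_2', 'web_3', 'web_1']
```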

 

5.  Complex application deployment

l 6-5 project deployment

7.  Container orchestration: docker Swarm

k8s manages fleets of docker hosts

docker swarm offers similar functionality to k8s

1.  Introduction to Swarm

 

l Service creation and scheduling

l Managers make scheduling decisions and decide on which workers services are deployed

 

2.  Construction of three-node swarm cluster

3.  Create and maintain Service and expand

4.  Use DockerStack to deploy VotingApp

5.  Use DockerStack to deploy visual applications

6.  Use and manage DockerSecret

7.  Update the service version

8.  docker Cloud and docker enterprise edition

1.  Docker company business introduction

l At the beginning of 2017, docker was divided into community edition and enterprise edition

l docker company also provides training, URL: https://training.docker.com/

 

l The docker company also provides certification, URL: http://success.docker.com/

 

l Similar to Apple's AppStore, URL: https://store.docker.com/

 

l You can create a service through the web interface, URL: https://cloud.docker.com/, which is charged

 

2.  Docker cloud automated construction

l Docker Cloud was Docker's first CaaS (container-as-a-service) product, providing container services on top of cloud infrastructure (similar to Alibaba Cloud); in effect, a docker service built in the cloud

l It is a managed service providing container management, orchestration, and deployment

l Docker acquired Tutum in 2015 and turned it into Docker Cloud

l The main modules provided by docker cloud are as follows:

 

l When using docker cloud, it needs to be associated with a github account

3.  Online use of docker enterprise edition

l Website: https://www.docker.com/products/docker-enterprise

l Use the single-node enterprise version trial provided by docker company, and the trial time is 12 hours

l Click to apply

 

l The online trial time is too short, the following uses the local trial method

4.  Alibaba Cloud Deployment Container

l Website: Alibaba Cloud (https://cn.aliyun.com/)

 

l Scroll down and click Container Service on the right

 

l Click to activate immediately

 

l You can log in directly with Alipay

 

l Authorization

 

l Register

 

l Immediate activation, real-name authentication may be required

 

l successfully activated

 

l Agree to authorize

 

5.  Alibaba Cloud deploys dockerEE

l Website: https://cn.aliyun.com/

9.  Container orchestration: Kubernetes

l Kubernetes is Google's open-source container cluster management system; access may be blocked in some regions, so it is necessary. . .

l Abbreviated k8s. At the end of 2017, docker announced support for k8s, effectively conceding that k8s had won a phased victory

l In July 2014, Docker acquired Orchard Labs and thereby entered the container orchestration field. Orchard Labs, founded in 2013 by two talented young engineers, had a well-known orchestration tool at the time called fig, the predecessor of docker-compose

l At the beginning of 2015, Docker released Swarm, began to catch up with Kubernetes, officially entered the field of container orchestration, and competed with k8s

l In March 2017, Docker announced the Docker Enterprise Edition, and from then on distinguished a community edition and an enterprise edition. Docker's moves from 2016 to early 2017 showed the profitability pressure on a startup, but those efforts did not carry Docker Swarm to the top of container orchestration. Kubernetes, by contrast, developed rapidly thanks to its excellent architecture and healthy community, saw wide use in production environments, and gathered feedback and community response in a virtuous circle. In 2017 the major vendors embraced Kubernetes: Amazon AWS, Microsoft Azure, VMware, and some even abandoned their own products

l At the end of 2017, Docker announced that it supports Kubernetes in its own enterprise version, and together with Swarm, it serves as a container orchestration solution for users to choose

l In the container runtime layer, Docker remains the leader, and Kubernetes still uses containerd underneath

 

l swarm architecture

l k8s architecture, the master is called master, and the slave is called node

 

l The k8s master node: the API Server faces outward and can be accessed through a UI; the Scheduler is the scheduling module; the Controller is the control module; etcd is the distributed store holding all of k8s's state and configuration

 

l The k8s worker node: a pod is the smallest scheduling unit, a group of containers sharing the same namespaces; kubelet acts like an agent, managing container creation; kube-proxy handles networking, doing port proxying and forwarding; a further component collects, stores, and queries logs

 

l k8s overall architecture

 
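The pod concept above can be sketched as a minimal manifest (the names here are illustrative, not from the text):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:        # these containers share one network namespace
    - name: web
      image: nginx
    - name: sidecar
      image: busybox
```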

Reposted from https://www.cnblogs.com/zhang-da/p/13062671.html


Origin blog.csdn.net/lingshengxiyou/article/details/130369742