Java development and operations (how to deploy microservice jar packages through Docker)

Goal: Deploy microservices through Docker.

1. Background:

The microservices we develop in Java can be packaged as jar files and deployed either directly on bare metal or through Docker. This article introduces deploying microservices through Docker.

2. First, a brief introduction to how Docker came about:

Docker is an open platform for developing, delivering, and running applications. Docker separates applications from infrastructure so that software can be delivered quickly. By leveraging Docker for rapid delivery, testing, and deployment of code, the delay between writing code and running it in production can be greatly reduced.

The transition from a development and operations environment built around physical and virtual machines to container-based infrastructure was not a gentle reform; it required rethinking and reworking networking, storage, scheduling, operating systems, distributed-systems principles, and more through the lens of containers.
In 2013, the back-end field had not seen anything exciting for a long time. Cloud computing, once the object of high expectations, had turned from a vague concept into concrete virtual machines and bills. Compared with the thriving AWS and the flourishing OpenStack, the open-source PaaS projects represented by Cloud Foundry were a breath of fresh air in the cloud computing of that time.
At that time, the Cloud Foundry project had largely passed the hardest stage of concept popularization and user education, attracting a large number of technology vendors at home and abroad, including Baidu, JD.com, Huawei, and IBM, and kicking off a transformation of platform-layer services centered on open-source PaaS. If you had asked cloud computing practitioners at the time, most of them would have told you: the era of PaaS is coming!
In fact, the Docker company, then still called dotCloud, was also part of this PaaS boom. Compared with PaaS trendsetters such as Heroku, Pivotal, and Red Hat, however, dotCloud was insignificant, and its main product had long been out of step with the mainstream Cloud Foundry community. On the verge of being swept aside by the surging PaaS wave, dotCloud made a decision: to open-source its internal container project, Docker.
The concept of a "container" was not new, nor was it invented by the Docker company. Even in Cloud Foundry, the most popular PaaS project of the day, containers were only the lowest-level and most neglected part. Since Cloud Foundry was the de facto standard at the time, let's use it to explain PaaS technology.
One of the main reasons PaaS projects were widely accepted is that they provided a capability called "application hosting". Virtual machines and cloud computing were already common technologies and services, and the typical usage of mainstream users was to rent a batch of AWS or OpenStack virtual machines and then deploy applications on those machines with scripts or by hand.
Of course, this deployment process inevitably ran into inconsistencies between the cloud virtual machine and the local environment, so cloud computing services at the time competed on who could better emulate the local server environment and deliver a better "cloud" experience. The emergence of open-source PaaS projects was the best solution to this problem at the time.
In fact, the core component of a PaaS project like Cloud Foundry is an application packaging and distribution mechanism. Cloud Foundry defines a packaging format for each mainstream programming language, and "cf push" is essentially equivalent to the user putting the application's executable files and startup scripts into an archive and uploading it to Cloud Foundry's storage. Cloud Foundry then uses its scheduler to select a virtual machine that can run the application and notifies the Agent on that machine to download the application archive and start it.
Here comes the key point. Since many applications from different users must be started on the same virtual machine, Cloud Foundry calls the operating system's Cgroups and Namespace mechanisms to create an isolated environment, a "sandbox", for each application, and then launches each application process inside its sandbox. In this way, the applications of multiple users can run on the same virtual machine in batches, automatically, and without interfering with each other.
This is the core capability of a PaaS project. And these isolated environments, the "sandboxes" in which Cloud Foundry runs applications, are the so-called "containers".
The Docker project was actually not much different from Cloud Foundry's containers, so shortly after its release, James Bayer, Cloud Foundry's chief product manager, published a detailed comparison in the community, telling users that Docker was just another "sandbox" implemented with Cgroups and Namespaces, with no special black technology and nothing that required special attention.
In fact, in most of its functionality and implementation principles, the Docker project really was the same as Cloud Foundry's containers. But a small remaining difference became the magic weapon that later let the Docker project dominate the field.
That feature is the Docker image.

3. The architecture of docker:

The three most basic concepts of Docker:

  • Image: A Docker image is equivalent to a root file system. For example, the official image ubuntu:16.04 contains a complete root file system of a minimal Ubuntu 16.04 system.
  • Container: The relationship between an image and a container is like that between classes and instances in object-oriented programming. An image is a static definition, while a container is a running instance of an image. Containers can be created, started, stopped, deleted, paused, and so on.
  • Repository: A repository can be regarded as a control center for storing images, similar to a code repository.
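The image-versus-container relationship can be seen directly on the command line. The following is a sketch assuming a working Docker installation; the container names web1 and web2 are arbitrary:

```shell
# One image, many containers: the image is the static template,
# each container is a running (or stopped) instance of it.
docker pull ubuntu:16.04                            # fetch the image (the "class")
docker run -d --name web1 ubuntu:16.04 sleep 1000   # instance #1
docker run -d --name web2 ubuntu:16.04 sleep 1000   # instance #2
docker ps                                           # both containers come from the same image
docker stop web1 web2                               # lifecycle: stop...
docker rm web1 web2                                 # ...then remove the instances; the image remains
```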

 

Docker uses a client-server (C/S) architecture pattern and uses remote APIs to manage and create Docker containers.

Docker containers are created from Docker images.

Docker images

A Docker image is a template for creating Docker containers, such as an Ubuntu system.

Docker container (Container)

A container is an application or a group of applications that run independently, and is the entity of the image runtime.

Docker client (Client)

The Docker client communicates with the Docker daemon through the command line or through other tools using the Docker Engine SDKs.
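The client-server split is easy to observe: the docker CLI is just one client of the daemon's remote API. As a sketch, assuming the daemon is listening on its default Unix socket, the same information the CLI shows can be fetched with any HTTP client:

```shell
# "docker version" prints both a Client section and a Server (daemon) section,
# showing the two sides of the C/S architecture.
docker version

# Any HTTP client can talk to the same remote API over the daemon's Unix socket:
curl --unix-socket /var/run/docker.sock http://localhost/version
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```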

Docker host (Host)

A physical or virtual machine used to execute the Docker daemon and containers.

Docker Registry

A Docker registry is used to store images and can be understood as the image-world equivalent of a source code repository.

Docker Hub ( https://hub.docker.com ) provides a huge collection of images for use.

A Docker Registry can contain multiple repositories (Repository); each repository can contain multiple tags (Tag); each tag corresponds to one image.

Usually, a repository contains images of different versions of the same software, and tags are used to identify each version. We can use the format <repository>:<tag> to specify which version of the software we mean. If no tag is given, latest is used as the default tag.
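The <repository>:<tag> convention can be sketched with a small shell function (a hypothetical helper for illustration, not part of Docker) that resolves a reference the same way the CLI does, defaulting to latest when no tag is given:

```shell
#!/bin/sh
# Resolve an image reference into repository and tag, defaulting the tag
# to "latest" when none is given -- the same convention "docker pull" uses.
parse_ref() {
  ref="$1"
  case "$ref" in
    *:*) repo="${ref%%:*}"; tag="${ref#*:}" ;;   # explicit tag after the colon
    *)   repo="$ref";       tag="latest"    ;;   # no colon: default to latest
  esac
  printf '%s %s\n' "$repo" "$tag"
}

parse_ref ubuntu:16.04   # → ubuntu 16.04
parse_ref ubuntu         # → ubuntu latest
```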

Docker Machine

Docker Machine is a command-line tool that simplifies provisioning Docker hosts. With a simple command it can install Docker on a target platform such as VirtualBox, DigitalOcean, or Microsoft Azure. (Docker Machine has since been deprecated, but it is still worth knowing when reading older material.)
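A sketch of a typical Docker Machine workflow, assuming Docker Machine and VirtualBox are installed; the machine name dev is arbitrary:

```shell
# Provision a VirtualBox VM with Docker pre-installed, then point
# the local docker CLI at the daemon running inside that VM.
docker-machine create --driver virtualbox dev
docker-machine ls                    # list the Docker hosts Machine manages
eval "$(docker-machine env dev)"     # export DOCKER_HOST etc. into this shell
docker ps                            # now talks to the daemon on "dev"
```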

  

4. Online installation of Docker (CentOS):

1. Install the JDK

yum install java-1.8.0-openjdk.x86_64

2. Add the Docker yum repository

Add the Alibaba Cloud mirror of the docker-ce yum repository (yum-config-manager is provided by the yum-utils package, which may need to be installed first):

yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3. Install the Docker engine

yum install docker-ce

4. Start Docker

systemctl start docker

5. Enable Docker to start on boot

systemctl enable docker
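After steps 4 and 5, the installation can be verified with a few commands (a sketch assuming the steps above completed without errors):

```shell
systemctl status docker --no-pager   # daemon should be "active (running)"
docker version                       # shows both client and server versions
docker run --rm hello-world          # pulls a tiny test image and runs it once
```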

6. Common Docker commands:

1. List local images: docker images
2. List running containers: docker ps
3. List all containers (including stopped ones): docker ps -a
4. Stop a running container: docker stop <container id>
5. Remove a container: docker rm <container id>
6. Remove an image: docker rmi <image id>
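A typical cleanup flow tying these commands together (the container and image IDs below are placeholders; substitute the real IDs reported by docker ps -a and docker images):

```shell
docker ps -a                 # find the container to clean up
docker stop 3f2a9c1d0b4e     # stop it (placeholder ID)
docker rm   3f2a9c1d0b4e     # remove the stopped container
docker images                # find the image it came from
docker rmi  a1b2c3d4e5f6     # remove the image (placeholder ID)
```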

5. Hands-on: deploying a jar package through Docker

 

1. Write a Dockerfile for the jar package. Take the deployment of ctg-eureka.jar as an example, where 8761 is the service port:

FROM openjdk:8
# Mount point for temporary files created by the JVM
VOLUME /tmp
# Copy the jar into the image root as /ctg-eureka.jar
ADD ctg-eureka.jar ctg-eureka.jar
# Document the port the service listens on
EXPOSE 8761
# /dev/urandom avoids slow SecureRandom seeding on JVM startup
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/ctg-eureka.jar"]

2. Package the microservice as a jar and upload it to the server, into the same directory as the Dockerfile

 3. Build the ctg-eureka image (run in the directory containing the Dockerfile and the jar)

docker build -t ctg-eureka .

Check whether the image was built successfully with docker images

 4. Start the container:

docker run -p 8761:8761 --name ctg-eureka -d ctg-eureka

Check whether the startup succeeded with docker ps
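Beyond docker ps, the container's logs and the published port can be checked directly. A sketch; the URL assumes a standard Spring Cloud Eureka server on port 8761, so adjust the path for your own service:

```shell
docker ps --filter name=ctg-eureka   # STATUS column should read "Up ..."
docker logs ctg-eureka               # inspect the Spring Boot startup log
curl -s http://localhost:8761/       # the Eureka dashboard should respond
```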

 5. Common container operation commands:

	1. Start: docker start ctg-eureka
	2. Restart: docker restart ctg-eureka
	3. Stop: docker stop ctg-eureka

The above is an example of deploying a microservice through Docker. The key points are writing the Dockerfile and building the image.

6. The advantages of using Docker:

1. Simplified environment configuration

Virtual machine technology:
A virtual machine runs one operating system inside another, for example a Linux system inside Windows. The application is unaware of this, because the virtual machine looks exactly like a real system; to the host, however, the virtual machine is just an ordinary file that can be deleted when no longer needed without affecting anything else. A virtual machine of this kind runs a complete second system, keeping the logic between application, operating system, and hardware unchanged.

Container technology:

Instead of simulating a complete operating system, containers isolate processes. With containers, all the resources a piece of software needs to run can be packaged into an isolated container. Unlike a virtual machine, a container does not need to bundle a complete operating system, only the libraries and settings the software requires. The result is efficient and lightweight, and it ensures that software deployed in any environment runs consistently.
 


Origin blog.csdn.net/dongjing991/article/details/131209444