Systems Practice: First Assignment

❄ 1. Course survey

At first, I thought this was a hands-on hardware course, because there is also a computer systems architecture course on the timetable with a similar name and the same teacher. After looking into it, I found that it is actually a software lab course. I hope that through this course my overall practical ability will improve.

❄ 2. Understanding microservices

(1) What is microservice

Microservice Architecture is an architectural concept that aims to decouple a solution by decomposing its functionality into discrete services. You can think of it as applying many of the SOLID principles at the architecture level rather than at the class level. Microservice architecture is a very interesting concept: its main purpose is to decompose functionality into discrete services, thereby reducing the coupling of the system and providing more flexible service support.
Concept: Split a large monolithic application into several, or even dozens of, supporting microservices, so that individual components can be scaled instead of the entire application stack in order to meet service-level agreements.
Definition: Build applications around business-domain components that can be developed, managed, and iterated independently. Using cloud architecture and platform-based deployment, management, and service capabilities in distributed components makes product delivery easier.
Essence: Use services with clearer functions and more refined business boundaries to solve larger, more practical problems.

(2) Features

  • Service componentization:
    In the microservice architecture, services are componentized and decomposed. A service is an out-of-process component that cooperates through communication protocols such as HTTP, rather than working in-process like traditional components. Each service is developed and deployed independently, which effectively avoids redeploying the entire system when a single service is modified.
  • Organize teams by business:
    When implementing a microservice architecture, a different way of dividing teams is required. Since each service is implemented full-stack for a specific business, the same team is responsible for everything from the persistent storage of data to cross-cutting functions such as defining the user interface. Therefore, for large projects, it is recommended to split microservice teams along business lines: on the one hand, this effectively reduces the friction caused by internal changes to a service; on the other, it keeps responsibilities clear.
  • Product attitude:
    In a microservice architecture, each small team should be responsible for its service over the entire life cycle, treating it as a product, rather than treating handover to testing and operations as the goal, as in traditional project development.
  • Intelligent endpoints and dumb pipes:
    Since each service runs in its own process, the communication mode between components changes: what used to be in-process method calls become RPC calls, which makes fine-grained communication between microservices cumbersome and hurts performance. We therefore need a more coarse-grained communication protocol.
  • In the microservices architecture, the following two service invocation methods are usually used:
    (1) Use an HTTP RESTful API or a lightweight messaging protocol to deliver messages and trigger service invocations.
    (2) Pass messages over a lightweight message bus; middleware such as RabbitMQ provides reliable asynchronous exchange.
    Where performance is critical, a binary messaging protocol such as Protobuf can also be used.
  • Decentralized governance:
    By using lightweight contracts to define interfaces across the microservice architecture, the specific technology platform of each service becomes much less of a concern, so each component in the system can choose a different technology platform according to its own business characteristics.
  • Decentralized management of data:
    When implementing a microservice architecture, each service is expected to manage its own database; this is the decentralization of data management. Although decentralized data management makes data management more fine-grained and allows data storage and performance to be optimized per service, with separate database instances data consistency becomes one of the problems the architecture must solve. Implementing distributed transactions is itself very difficult, so in a microservice architecture we emphasize "transactionless" calls between services, and data consistency only requires that the data reach a consistent final state (eventual consistency).
  • Infrastructure automation:
    In the microservices architecture, it is important to build a "continuous delivery" platform from the beginning to support the entire implementation process;
  • Fault-tolerant design:
    In a microservice architecture, it is necessary to detect the source of a failure quickly and restore the service automatically as far as possible. We usually implement monitoring and logging components in each service, for example dashboards of key data such as service status, circuit-breaker status, throughput, and network latency.
  • Evolutionary design:
    In the initial stage, design and implementation are carried out as a monolithic system. On the one hand, the system is not very large at first, so construction and maintenance costs are low; on the other hand, the core business established early on usually does not change significantly later.
    As the system develops or the business requires, the architect gradually splits out, as microservices, the parts that change frequently or are time-sensitive, while the stable modules that change little form the core microservices of the overall architecture.
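
The "intelligent endpoints and dumb pipes" point above can be made concrete with a tiny sketch of the first invocation style: one service exposing an HTTP RESTful endpoint and another service calling it over plain HTTP. This is an illustrative sketch using only the Python standard library; the service name, route, port handling, and payload are invented for the example.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical "order" microservice exposing one RESTful endpoint.
class OrderHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/orders/1":
            body = json.dumps({"id": 1, "status": "shipped"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence per-request logging for the demo

# Bind to port 0 so the OS picks a free port.
server = HTTPServer(("127.0.0.1", 0), OrderHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second service invokes it over plain HTTP -- the "dumb pipe".
with urllib.request.urlopen(f"http://127.0.0.1:{port}/orders/1") as resp:
    order = json.loads(resp.read().decode())

server.shutdown()
print(order["status"])  # prints "shipped"
```

The endpoint owns all the logic (the "smart endpoint"), while the transport is nothing more than HTTP carrying JSON, which is exactly the coarse-grained communication the text describes.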

(3) Comparison with traditional software architecture

  • Traditional software architecture
    • All functions are packaged into a single WAR file with essentially no external dependencies (other than the container), deployed in a JEE container (Tomcat, JBoss, WebLogic), and containing all the logic such as DO/DAO, Service, and UI.

    • Advantages

      • Simple development and centralized management
      • Basically no duplicated development
      • All functionality is local, with no distributed-management or remote-call overhead
    • Disadvantages

      • Low efficiency: developers change code in the same project, wait for each other, and constantly run into conflicts
      • Hard to maintain: code and functionality are coupled together, and newcomers don't know where to start
      • Inflexible: build times are long, and any minor change requires rebuilding the entire project, which is time-consuming
      • Poor stability: a small problem can bring down the entire application
      • Insufficient scalability: unable to meet business requirements under high concurrency
  • Advantages and disadvantages of microservice architecture
    • Advantages
      • The finer granularity of services facilitates resource reuse and improves development efficiency.
      • Optimization plans can be formulated more precisely for each service, improving performance and maintainability.
      • Suited to the Internet era, with shorter product iteration cycles
    • Disadvantages
      • With too many microservices, the cost of service governance is high, which is not conducive to system maintenance.
      • The technical cost of developing a distributed system is high (fault tolerance, distributed transactions, etc.), which poses great challenges to the team.
  • Microservice architecture deployment
    In terms of microservice architecture, the deployment of microservices plays a vital role and has the following key requirements:
    • Ability to deploy / undeploy independently of other microservices.
    • It must be able to scale at each microservice level (a given service may get more traffic than other services).
    • Quickly build and deploy microservices.
    • A failure in one microservice must not affect any other services.
      Docker (an open-source engine that lets developers and system administrators deploy self-sufficient application containers in a Linux environment) provides a great way to deploy microservices that meets the above requirements. The key steps are as follows:
    • Package microservices as (Docker) container images.
    • Deploy each service instance as a container.
    • Scaling is done based on changing the number of container instances.
    • Because we use Docker containers, building, deploying, and starting microservices becomes much faster (much faster than with regular VMs). Kubernetes extends Docker by allowing a cluster of Linux containers to be managed as a single system: it manages and runs Docker containers across multiple hosts, providing container co-location, service discovery, and replication control. Most of these features are also essential in a microservices environment, so using Kubernetes (on top of Docker) has become an extremely powerful method for microservice deployment, especially at large scale.
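
The first step listed above, packaging a microservice as a container image, can be sketched with a minimal, hypothetical Dockerfile. The base image, file names, and port here are assumptions for illustration, not taken from the text:

```dockerfile
# Hypothetical image for a small Python microservice.
FROM python:3.9-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the service code and declare its port.
COPY . .
EXPOSE 8001
CMD ["python", "service.py"]
```

Each service instance is then deployed as a container (e.g. `docker build -t order-service .` followed by `docker run -d -p 8001:8001 order-service`), and scaling amounts to changing the number of such containers.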

❄ 3. Learning Docker technology

(1) Learn to understand the relevant concepts of docker

  • docker: Docker is an open-source application container engine that lets developers package an application and its dependencies into a portable container and then publish it to any popular Linux or Windows machine; it can also be used for virtualization. Containers are fully sandboxed and have no interfaces to one another.
  • docker compose: Compose is a tool for defining and running multi-container Docker applications. With Compose, you can use YML files to configure all the services your application needs. Then, with a single command, you can create and start all services from the YML file configuration.
  • Dockerfile: A Dockerfile is a text file used to build an image; its content is the series of instructions and explanations needed to build the image.
  • docker machine: Docker Machine is a tool that allows you to install Docker on a virtual host, and you can use the docker-machine command to manage the host.
    Docker Machine can also manage all Docker hosts centrally, for example installing Docker quickly on 100 servers. The virtual hosts managed by Docker Machine can be local machines or hosts at a cloud provider such as Alibaba Cloud, Tencent Cloud, AWS, or DigitalOcean.
    Using the docker-machine command, you can start, check, stop, and restart the managed host, you can also upgrade the Docker client and daemon, and configure the Docker client to communicate with your host.
  • Swarm: Swarm is the cluster management tool officially provided by Docker. Its main function is to abstract several Docker hosts into a whole and manage the Docker resources on those hosts through a single entry point. Swarm is similar to Kubernetes but lighter, with fewer features.
  • k8s: Kubernetes (k8s) is a tool for orchestrating containers; in effect it manages an application's entire life cycle. Creating, deploying, and serving applications, scaling them up and down, and updating them are all very convenient, and it can recover from failures.
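
To make the Compose concept above concrete, a minimal docker-compose.yml might look like the following; the service names and images are made-up examples, not taken from the text:

```yaml
version: "3"
services:
  web:
    build: .             # build the application image from a local Dockerfile
    ports:
      - "8000:8000"      # host:container port mapping
  redis:
    image: redis:alpine  # off-the-shelf dependency pulled from a registry
```

A single `docker-compose up` then creates and starts both services from this one file.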

(2) Build a docker environment

1. Ubuntu Docker installation

1.1 Install using the Docker repository

1.1.1 Set up the repository

Update the apt package index

    sudo apt-get update

Install apt dependency packages so that apt can fetch the repository over HTTPS

    sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common  

Add Docker's official GPG key

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -  

Set up a stable repository

    sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable" 

1.1.2 Install Docker Engine-Community

Update apt package index

    sudo apt-get update  

Install the latest version of Docker Engine-Community and containerd

    sudo apt-get install docker-ce docker-ce-cli containerd.io

Test whether Docker is installed successfully

    sudo docker run hello-world

2. Use of Docker containers

2.1 Pull an image

Use the docker pull command to pull the ubuntu image

    sudo docker pull ubuntu

2.2 Start the container

    sudo docker run -t -i ubuntu /bin/bash

Parameter Description:

  • -i: interactive operation.
  • -t: allocate a terminal.
  • ubuntu: the ubuntu image.
  • /bin/bash: the command placed after the image name; here we want an interactive shell, so /bin/bash is used.

2.3 Start / stop

Command to list all containers

    sudo docker ps -a

Start a stopped container

    sudo docker start <container ID>

2.4 Running in the background

Create a container that runs in the background

    sudo docker run -d ubuntu:15.10 /bin/sh -c "while true; do echo hi; sleep 1; done"

View the container's standard output

    sudo docker logs <container ID>

2.5 Stopping and restarting the container

Stop a running container

    sudo docker stop <container ID>

Restart a stopped container

    sudo docker restart <container ID>

2.6 Entering the container

When the -d parameter is used, the container runs in the background after it starts. To enter the container, use one of the following commands:

  • docker attach
  • docker exec: recommended, because exiting the terminal opened by exec does not stop the container.

2.6.1 The attach command

    sudo docker attach <container ID>

2.6.2 The exec command

    sudo docker exec -it <container ID> /bin/bash

2.7 Update container

View the available update options

    docker container update --help

2.8 Delete container

    sudo docker rm -f <container ID>

(3) Use of Docker images

3.1 Configure the Alibaba Cloud image accelerator

    sudo mkdir -p /etc/docker
    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["<accelerator address>"]
    }
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker

3.2 List images

    sudo docker images

Description of each column:
REPOSITORY: the repository the image came from
TAG: the image tag
IMAGE ID: the image ID
CREATED: when the image was created
SIZE: the image size

3.3 Search for images

Using Docker Hub

    sudo docker search httpd

3.4 Pull images

Download the image

    sudo docker pull httpd

(4) Docker repository management

4.1 The Docker Hub repository

4.1.1 Create an Alibaba Cloud repository

4.1.2 Log in to the Alibaba Cloud Docker Registry

    sudo docker login --username=维他柠檬茶br registry.cn-hangzhou.aliyuncs.com

4.1.3 Push the image to the Registry

    sudo docker tag [ImageId] registry.cn-hangzhou.aliyuncs.com/bybbr/bbrdocker:v1
    sudo docker push registry.cn-hangzhou.aliyuncs.com/bybbr/bbrdocker:v1

4.1.4 Delete the local image

    sudo docker rmi registry.cn-hangzhou.aliyuncs.com/bybbr/bbrdocker:v1

4.1.5 Pull image from Registry

    sudo docker pull registry.cn-hangzhou.aliyuncs.com/bybbr/bbrdocker:v1


Origin www.cnblogs.com/bbbr/p/12722098.html