Microservices: sleuth + zipkin distributed tracing + nacos configuration center

Table of contents

1. Distributed Link Tracking

1.1. Link Tracking Sleuth Introduction

1.2. Integrating Sleuth

1.3. zipkin server

2. Configuration Center

2.1. Common configuration center components

2.2. Microservice clusters share a configuration file

2.2.1. Real-time refresh of configuration center data

2.2.2. Manually writing a configuration class for real-time refresh

2.3. Multiple microservices share a configuration


This article on distributed tracing continues from the earlier Microservice Gateway article.


1. Distributed Link Tracking

When a large system is built with microservices, the system is split into many services. Each is responsible for a different function, and combined they provide the full set of features. In this architecture, a single request often involves multiple services. These software modules may be developed by different teams, implemented in different programming languages, and deployed on thousands of servers spanning multiple data centers. This architectural style therefore raises some questions:

  • How to quickly find the problem?
  • How to judge the impact range of a fault?
  • How to sort out service dependencies?
  • How to analyze link performance issues and real-time capacity planning?

 

Distributed tracing restores a distributed request into its complete call chain: it records logs, monitors performance, and centrally displays the call status of the request, for example the time spent on each service node, the IP of the machine each request reaches, and the response status (200, 500, and so on) of each service node.

Common distributed tracing technologies include the following:

  • CAT, open-sourced by Dianping, is a real-time application monitoring platform developed in Java, covering both application monitoring and business monitoring. It is integrated by instrumenting code (interceptors, filters, and so on), so it is very intrusive, and the integration cost and risk are high.
  • Zipkin, open-sourced by Twitter, is a distributed tracing system used to collect timing data from services in order to diagnose latency problems in a microservice architecture. It covers data collection, storage, search, and graphical display. Combined with spring-cloud-sleuth it is simple to use and very convenient to integrate, but its feature set is relatively basic.
  • Pinpoint is a Korean open-source call-chain analysis and application monitoring tool based on bytecode injection. It supports many plugins, has a powerful UI, and requires no code changes in the monitored application.
  • SkyWalking [likely to see wider enterprise adoption] is a Chinese open-source call-chain analysis and application monitoring tool based on bytecode injection. Like Pinpoint, it supports many plugins, has a strong UI, and is non-intrusive on the application side. It has joined the Apache Incubator.
  • Sleuth logs every node on each call chain, the machine each node runs on, and the time consumed, writing this information to the service's log (via the logging framework, e.g. log4j/logback).

Sleuth is the distributed tracing solution that Spring Cloud itself provides for distributed systems.

Note: this article uses sleuth + zipkin.

 

1.1. Link Tracking Sleuth Introduction

The main function of Spring Cloud Sleuth is to provide a tracing solution for distributed systems. Its design borrows heavily from Google's Dapper. Let's first go over the terms and concepts in Sleuth.

Trace (a complete call chain, made up of many spans (microservice interface calls))

A set of spans sharing the same Trace ID (which runs through the entire chain) is connected into a tree structure. To implement request tracing, when a request arrives at the entry point of the distributed system, the tracing framework creates a unique identifier for it (the TraceId), and keeps passing this identifier along as the request flows through the system, until the request returns. This unique identifier can then be used to stitch all the calls together into a complete call chain.

Span

A span represents a basic unit of work. To measure the latency of each processing unit, when the request reaches each service component, a unique identifier (the SpanId) marks its start, intermediate steps, and end. From the span's start and end timestamps, the span's duration can be computed; metadata such as the event name and request information can also be captured.

Annotation

Annotations record events over time. The important annotations used internally are:

  • cs (Client Sent): the client sends a request; this marks the start of the request
  • sr (Server Received): the server receives the request and starts processing; sr - cs = network latency of the service call
  • ss (Server Sent): the server finishes processing and is ready to send the response to the client; ss - sr = request processing time on the server
  • cr (Client Received): the client receives the server's response; the request ends. cr - cs = total time of the request
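The arithmetic on these four annotation timestamps can be sketched in plain Java. The timestamp values below are made-up sample values (epoch microseconds), purely for illustration:

```java
// Sketch: deriving the Sleuth/Zipkin timing metrics from the four annotations.
// cs, sr, ss, cr are hypothetical timestamps in epoch microseconds.
public class SpanTiming {
    // sr - cs: time spent on the network before the server saw the request
    public static long networkDelay(long cs, long sr) { return sr - cs; }

    // ss - sr: time the server spent processing the request
    public static long serverProcessing(long sr, long ss) { return ss - sr; }

    // cr - cs: total time of the request as seen by the client
    public static long totalTime(long cs, long cr) { return cr - cs; }

    public static void main(String[] args) {
        long cs = 1_000, sr = 1_030, ss = 1_080, cr = 1_120; // sample values
        System.out.println("network delay     = " + networkDelay(cs, sr));      // 30
        System.out.println("server processing = " + serverProcessing(sr, ss));  // 50
        System.out.println("total time        = " + totalTime(cs, cr));         // 120
    }
}
```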

1.2. Integrating Sleuth

Sleuth records trace information in the microservice logs.

(1) Add sleuth to the microservices

The parent project introduces dependencies

<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-sleuth</artifactId>
    </dependency>
</dependencies>

Running log:

 

All services in one call chain share the same traceId, and each module has its own spanId. Sleuth strings together the spans that share a traceId to form the complete call chain. In principle, the duration of each span could be worked out by subtracting the timestamps in the logs, but doing that by hand is very tedious. Is there a component that collects the logs generated by sleuth and displays them graphically? Yes: zipkin.
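For illustration, a Sleuth-annotated log line looks roughly like the following (the service name is taken from the later example; the timestamps, IDs, and class name are made up):

```text
2023-07-15 10:20:30.123  INFO [qy165-product,8a3f2d1c9b7e4a65,6d1b8cf0a2e34f77,true] 3344 --- [nio-8080-exec-1] c.e.ProductController : query product
```

The bracketed section is [service name, traceId, spanId, exportable], where "exportable" indicates whether the span is reported to zipkin.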

1.3. zipkin server

ZIPKIN official website: https://zipkin.io

The first thing: install and start the zipkin server

1. Download the jar package (zipkin-server-2.12.9-exec.jar; uploaded in the article resources)

2. Run it: java -jar zipkin-server-2.12.9-exec.jar

3. Open the zipkin UI in a browser:

http://localhost:9411/zipkin

 

The second thing: point each microservice at the zipkin server

1. Introduce the zipkin dependency in the microservices

The parent project introduces dependencies:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>

2. Specify the address of the zipkin server in each microservice

In the configuration file add:

# Specify the zipkin server address
spring.zipkin.base-url=http://localhost:9411/
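One detail worth knowing: by default Sleuth samples only a fraction of requests (10% in Sleuth 2.x), so not every trace reaches zipkin. For development, the sampling rate can be raised. A minimal sketch, assuming Sleuth 2.x property names:

```properties
# Report 100% of traces to zipkin (default is 0.1); suitable for development only
spring.sleuth.sampler.probability=1.0
```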

Disadvantage: when zipkin restarts, the collected traces are lost, because they are stored in memory by default.

Solution: store them in a MySQL database.

Make sure the MySQL database allows remote connections, then create the zipkin tables:

CREATE TABLE IF NOT EXISTS zipkin_spans (
  `trace_id_high` BIGINT NOT NULL DEFAULT 0 COMMENT 'If non zero, this means the trace uses 128 bit traceIds instead of 64 bit',
  `trace_id` BIGINT NOT NULL,
  `id` BIGINT NOT NULL,
  `name` VARCHAR(255) NOT NULL,
  `parent_id` BIGINT,
  `debug` BIT(1),
  `start_ts` BIGINT COMMENT 'Span.timestamp(): epoch micros used for endTs query and to implement TTL',
  `duration` BIGINT COMMENT 'Span.duration(): micros used for minDuration and maxDuration query'
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED CHARACTER SET=utf8 COLLATE utf8_general_ci;

ALTER TABLE zipkin_spans ADD UNIQUE KEY(`trace_id_high`, `trace_id`, `id`) COMMENT 'ignore insert on duplicate';
ALTER TABLE zipkin_spans ADD INDEX(`trace_id_high`, `trace_id`, `id`) COMMENT 'for joining with zipkin_annotations';
ALTER TABLE zipkin_spans ADD INDEX(`trace_id_high`, `trace_id`) COMMENT 'for getTracesByIds';
ALTER TABLE zipkin_spans ADD INDEX(`name`) COMMENT 'for getTraces and getSpanNames';
ALTER TABLE zipkin_spans ADD INDEX(`start_ts`) COMMENT 'for getTraces ordering and range';

CREATE TABLE IF NOT EXISTS zipkin_annotations (
  `trace_id_high` BIGINT NOT NULL DEFAULT 0 COMMENT 'If non zero, this means the trace uses 128 bit traceIds instead of 64 bit',
  `trace_id` BIGINT NOT NULL COMMENT 'coincides with zipkin_spans.trace_id',
  `span_id` BIGINT NOT NULL COMMENT 'coincides with zipkin_spans.id',
  `a_key` VARCHAR(255) NOT NULL COMMENT 'BinaryAnnotation.key or Annotation.value if type == -1',
  `a_value` BLOB COMMENT 'BinaryAnnotation.value(), which must be smaller than 64KB',
  `a_type` INT NOT NULL COMMENT 'BinaryAnnotation.type() or -1 if Annotation',
  `a_timestamp` BIGINT COMMENT 'Used to implement TTL; Annotation.timestamp or zipkin_spans.timestamp',
  `endpoint_ipv4` INT COMMENT 'Null when Binary/Annotation.endpoint is null',
  `endpoint_ipv6` BINARY(16) COMMENT 'Null when Binary/Annotation.endpoint is null, or no IPv6 address',
  `endpoint_port` SMALLINT COMMENT 'Null when Binary/Annotation.endpoint is null',
  `endpoint_service_name` VARCHAR(255) COMMENT 'Null when Binary/Annotation.endpoint is null'
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED CHARACTER SET=utf8 COLLATE utf8_general_ci;

ALTER TABLE zipkin_annotations ADD UNIQUE KEY(`trace_id_high`, `trace_id`, `span_id`, `a_key`, `a_timestamp`) COMMENT 'Ignore insert on duplicate';
ALTER TABLE zipkin_annotations ADD INDEX(`trace_id_high`, `trace_id`, `span_id`) COMMENT 'for joining with zipkin_spans';
ALTER TABLE zipkin_annotations ADD INDEX(`trace_id_high`, `trace_id`) COMMENT 'for getTraces/ByIds';
ALTER TABLE zipkin_annotations ADD INDEX(`endpoint_service_name`) COMMENT 'for getTraces and getServiceNames';
ALTER TABLE zipkin_annotations ADD INDEX(`a_type`) COMMENT 'for getTraces';
ALTER TABLE zipkin_annotations ADD INDEX(`a_key`) COMMENT 'for getTraces';
ALTER TABLE zipkin_annotations ADD INDEX(`trace_id`, `span_id`, `a_key`) COMMENT 'for dependencies job';

CREATE TABLE IF NOT EXISTS zipkin_dependencies (
  `day` DATE NOT NULL,
  `parent` VARCHAR(255) NOT NULL,
  `child` VARCHAR(255) NOT NULL,
  `call_count` BIGINT
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED CHARACTER SET=utf8 COLLATE utf8_general_ci;

ALTER TABLE zipkin_dependencies ADD UNIQUE KEY(`day`, `parent`, `child`);

Then run the zipkin server with MySQL storage:

java -jar zipkin-server-2.12.9-exec.jar --STORAGE_TYPE=mysql --MYSQL_HOST=127.0.0.1 --MYSQL_TCP_PORT=3306 --MYSQL_DB=zipkin --MYSQL_USER=root --MYSQL_PASS=123456789
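The command above assumes a MySQL database named zipkin (MYSQL_DB=zipkin) already exists and that the table-creation script has been executed inside it. If the database is missing, create it first, for example:

```sql
-- Create the database the zipkin server will write to (name must match MYSQL_DB)
CREATE DATABASE IF NOT EXISTS zipkin DEFAULT CHARACTER SET utf8;
```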

 

2. Configuration Center

Consider:

  • (1) Each microservice may be deployed as a cluster of n instances, all with the same configuration. To change one setting, you would have to change it on every instance.
  • (2) Different microservices may share some identical configuration.

Idea: hand configuration over to one component for unified management: a configuration center.

2.1. Common configuration center components

Apollo

Apollo is a distributed configuration center open-sourced by Ctrip. It has many features: configuration changes take effect in real time, grayscale release is supported, and all configurations support version management and operation auditing. It also provides an open platform API, with very detailed documentation. Very useful.

Disconf

Disconf is a distributed configuration center open-sourced by Baidu. It is based on Zookeeper and pushes change notifications in real time, so configuration changes take effect immediately.

SpringCloud Config

This is the configuration center component in Spring Cloud. It integrates seamlessly with Spring and is very convenient to use, and its configuration storage supports Git. However, it has no visual management interface, and configuration does not take effect in real time: a restart or refresh is needed.

Nacos

This is a component of the Spring Cloud Alibaba technology stack, which we have already used as a service registry. It also integrates service configuration functionality, so we can use it directly as a service configuration center.

We use nacos as the configuration center. Installing the nacos server was covered earlier, so we skip it here.

2.2. Microservice clusters share a configuration file

In a production cluster, the instances must be deployed on different servers.

(1) Create a configuration file in the nacos configuration center

Its name (Data ID) must be: <microservice name>.<suffix>
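For example, with spring.application.name=qy165-product and the default suffix, the Data ID would be qy165-product.properties. Its content is ordinary Spring configuration; the keys and values below are hypothetical:

```properties
# Hypothetical content of qy165-product.properties in the nacos configuration center
server.port=8080
product.name=demo-product
```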

 

 

(2) Use the configuration file from the nacos configuration center in the microservice

The parent project introduces dependencies:

<!-- dependency for the nacos configuration center -->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
</dependency>

(3) bootstrap.properties must be used; it is loaded before application.properties and is what pulls in the external configuration file

# The microservice name -- must match the Data ID in the configuration center
spring.application.name=qy165-product
# The address of the configuration center
spring.cloud.nacos.config.server-addr=localhost:8848
# The group the configuration file belongs to (default: DEFAULT_GROUP)
spring.cloud.nacos.config.group=aaa
# The suffix of the configuration file (default: properties)
#spring.cloud.nacos.config.file-extension=yml

Modify ProductController to read a value from the configuration center.
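The original screenshot of the modified controller is missing. A minimal sketch of what it might look like; the property key product.name, the package, and the endpoint path are illustrative assumptions, not from the original:

```java
// Hypothetical sketch: injecting a config-center property into a controller.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ProductController {

    @Value("${product.name}") // resolved from qy165-product.properties in nacos
    private String productName;

    @GetMapping("/product/name")
    public String getProductName() {
        return productName;
    }
}
```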

test

 

 

2.2.1. Real-time refresh of configuration center data
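The screenshots for this section are missing. As a hedged sketch (class and property names are assumptions): nacos pushes configuration changes to the client automatically, but a plain @Value field only reflects them if its bean is annotated with @RefreshScope, which re-creates the bean when a refresh event arrives:

```java
// Hypothetical sketch: making @Value fields pick up nacos config changes.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RefreshScope // bean is rebuilt when the config center pushes a change
public class ProductController {

    @Value("${product.name}")
    private String productName;

    @GetMapping("/product/name")
    public String getProductName() {
        return productName; // reflects the latest value after a nacos update
    }
}
```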

 

2.2.2. Manually writing a configuration class for real-time refresh
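The screenshots here are also missing. As a hedged sketch (names are assumptions): instead of putting @RefreshScope on every bean, the refreshed keys can be gathered into a dedicated configuration class. Spring Cloud rebinds @ConfigurationProperties beans automatically when the environment changes, so such a class always exposes the latest values:

```java
// Hypothetical sketch: a configuration class bound to the "product.*" keys.
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

@Component
@ConfigurationProperties(prefix = "product") // rebound on each config refresh
public class ProductProperties {

    private String name; // binds the product.name key

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```

Other beans then inject ProductProperties and call getName(), rather than using @Value directly.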

 

 

2.3. Multiple microservices share a configuration

(1) Extract the content shared by the two services' configuration files into a public configuration file

 

(2) Have each microservice reference the public file in its bootstrap.properties:

 

# Reference an additional public configuration file
spring.cloud.nacos.config.extension-configs[0].data-id=gg.properties
spring.cloud.nacos.config.extension-configs[0].group=aaa
spring.cloud.nacos.config.extension-configs[0].refresh=true


Origin blog.csdn.net/WQGuang/article/details/131775014