Typical Application Scenarios of ZooKeeper (1)

This article is based on Chapter 6 of "From Paxos to ZooKeeper: Principles and Practice of Distributed Consensus".

  1. Data publish/subscribe (configuration center)
  2. Load balancing (DNS resolution)
  3. Naming service (sequential node feature)
  4. Distributed coordination/notification (Watcher mechanism)

1.1 Data Publish / Subscribe

A data publish/subscribe (Publish/Subscribe) system, that is, a configuration center, is by definition one in which publishers publish data to one or a series of ZooKeeper nodes for subscribers to consume, thereby achieving dynamic data acquisition: dynamic updates of configuration information and centralized management of data.

Publish/subscribe systems typically follow one of two design patterns: push (Push) mode and pull (Pull) mode. In push mode, the server actively sends data updates to all subscribed clients; in pull mode, the client itself requests the latest data, usually by polling on a timer. ZooKeeper uses a combination of push and pull: clients register with the server for the nodes they care about; once a node's data changes, the server sends a Watcher event notification to the relevant clients, and after receiving the notification, each client must actively fetch the latest data from the server.

If configuration information is stored and centrally managed on ZooKeeper, then in the usual case an application actively fetches the configuration from the ZooKeeper server at startup and, at the same time, registers a Watcher on the specified node; as long as the configuration changes, the server notifies all subscribed clients in real time, so each client can obtain the latest configuration in real time.
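This push-then-pull interaction can be sketched with a small in-memory simulation. Note that this is purely illustrative: ConfigNode and ConfigClient are made-up names, not the ZooKeeper client API, and real Watchers fire across the network rather than as local callbacks.

```python
# Minimal in-memory simulation of ZooKeeper's push-then-pull pattern.
# The node only pushes a change *event* (no payload); the client then
# pulls the latest data itself and re-registers its watcher.

class ConfigNode:
    def __init__(self, data):
        self.data = data
        self._watchers = []               # one-shot watchers, as in ZooKeeper

    def get_data(self, watcher=None):
        if watcher is not None:
            self._watchers.append(watcher)  # register interest in changes
        return self.data

    def set_data(self, data):
        self.data = data
        watchers, self._watchers = self._watchers, []
        for w in watchers:                # push: notify, but send no payload
            w(self)

class ConfigClient:
    def __init__(self, node):
        self.node = node
        self.config = node.get_data(watcher=self._on_change)

    def _on_change(self, node):
        # pull: fetch the latest data and re-register the one-shot watcher
        self.config = node.get_data(watcher=self._on_change)

node = ConfigNode("dbcp.username=admin")
client = ConfigClient(node)
node.set_data("dbcp.username=root")
print(client.config)   # -> dbcp.username=root
```

The key point the sketch preserves is that watchers are one-shot: the notification carries no data, so the client both pulls the new value and re-registers its watcher.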

Example: in everyday application development, we often meet requirements like this: the system needs some common configuration information, such as the machine list, runtime switches, or database connection settings. Such globally shared configuration has the following three characteristics:

  • The amount of data is usually relatively small.
  • The data content changes dynamically at runtime.
  • Every machine in the cluster shares the same configuration.

For this kind of configuration, the usual practice is to store it in a local configuration file or in an in-memory variable; an in-memory variable can then be updated in real time at runtime, for example via JMX.

But once the number of machines grows large and the configuration changes frequently, we need a more distributed-oriented solution.

Below we walk through a "database switchover" scenario to see how ZooKeeper handles configuration management.

1.1.1 Configuration Storage

Before using the configuration, we first store the initial configuration on ZooKeeper, for example under /app1/database_config (hereinafter the "configuration node"), and write the following data to the node:

dbcp.driverClassName=com.mysql.jdbc.Driver
dbcp.dbJDBCUrl=jdbc:mysql://localhost:3306/taokeeper
dbcp.characterEncoding=GBK
dbcp.username=admin
dbcp.password=root
dbcp.maxActive=30
dbcp.maxIdle=10
dbcp.maxWait=10000
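The node content above is in the familiar key=value properties format. As an illustrative sketch (parse_config is a hypothetical helper, not part of any ZooKeeper API), a client might turn the fetched node data into a dictionary like this:

```python
# Sketch: turn the configuration node's raw text into a dict.
# The key names follow the sample data above; how the application
# consumes them (e.g. building a DBCP data source) is up to the caller.

def parse_config(raw: str) -> dict:
    config = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip blank lines and comments
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

raw = """\
dbcp.driverClassName=com.mysql.jdbc.Driver
dbcp.dbJDBCUrl=jdbc:mysql://localhost:3306/taokeeper
dbcp.maxActive=30
"""
cfg = parse_config(raw)
print(cfg["dbcp.maxActive"])   # -> 30 (as a string)
```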

1.1.2 Configuration Acquisition

When each machine in the cluster starts up, it reads the database configuration from the ZooKeeper configuration node mentioned above during its initialization phase; at the same time, the client registers a data-change Watcher on the configuration node, so that once the node's data changes, all subscribed clients receive the change notification.

1.1.3 Configuration Changes

While the system is running, it may become necessary to switch databases. With ZooKeeper's Watcher mechanism, the data-change notification is sent to every client; after receiving the notification, each client can re-fetch the latest data.

1.2 Load Balancing

According to the definition on Wikipedia, load balancing (Load Balance) is a fairly common computer networking technique that distributes load across multiple computers (a computer cluster), network links, CPUs, disk drives, or other resources, in order to optimize resource usage, maximize throughput, minimize response time, and avoid overload. Load balancing can generally be divided into hardware and software load balancing; this section focuses on ZooKeeper in "soft" load balancing scenarios.

In distributed systems, load balancing is a very common technique; essentially every distributed system needs it. When explaining the characteristics of distributed systems in the first chapter of the book, we mentioned that a distributed system has peer-to-peer characteristics: to ensure high availability, data and services are usually deployed as replicas. Consumers then need a way to choose one of these peer servers to execute the relevant business logic, a typical example being DNS. In this section, we describe in detail how to use ZooKeeper to solve the load-balancing problem (see also the first chapter of In-Depth Analysis of Java Web Technology).

1.2.1 A Dynamic DNS Service

DNS is an acronym for Domain Name System. The DNS system can be viewed as a very large-scale distributed mapping table between domain names and IP addresses, which makes it easy for people to access websites by domain name.

Under normal circumstances, we can apply for domains from a domain name registrar, but the biggest drawback of this approach is that only registered domain names can be used, and they are limited in number:

In day-to-day development we often run into this situation: inside a company, say Company1, a server cluster for application App1 needs a domain name configured for resolution. Readers with front-line development experience will know that in this case we usually need a domain name such as app1.company1.com, mapped to one of the cluster's server addresses. With a small number of systems, traditional DNS configuration can cope; however, once the company grows and applications multiply endlessly, it becomes hard to manage them uniformly this way.

Therefore, in actual development, local HOST bindings are often used to do the domain name resolution job. Since the specifics of local HOST binding are not the focus of the book, and there is plenty of information about it online, we will not explain it further here. With local HOST bindings, the domain shortage problem is easily solved: essentially every system can decide the domain name and IP address of its own systems on its own, which greatly improves development and debugging efficiency. (The HOST file is modified so that domain names map directly to IPs, cutting out resolution time.) However, this seemingly perfect solution also has a fatal flaw:

When the number of application machines stays within a certain range and domain names do not change particularly often, local HOST binding is a very efficient and simple approach. However, once the number of machines grows large, we often run into this situation: when an application goes live, the domain name must be bound on every one of its machines, which is rather inconvenient at very large scale. In addition, if a domain name needs a temporary update, every machine must be changed individually, which is time-consuming and therefore makes real-time updates completely impossible to guarantee.

Now, let's introduce a dynamic DNS scheme implemented on ZooKeeper (referred to as "DDNS", for Dynamic DNS).

1.2.2 Domain Name Configuration

First, create a node on ZooKeeper for the domain name configuration. Under its own node, such as /DDNS/app1, each domain name keeps its resolution configuration, and multiple IPs are supported:

 

192.168.0.1:8080, 192.168.0.2:8080

1.2.3 Domain Name Resolution

In traditional DNS resolution, the application takes no part in the resolution process; it is all handled by the operating system's domain-name-to-IP mapping mechanism (local HOST bindings) or by a dedicated DNS server (for domains registered with a service provider). Here, then, DDNS differs greatly from traditional resolution: in DDNS, each application is responsible for its own domain name resolution. First, the application fetches the IP addresses and ports from the domain node and performs the resolution itself. At the same time, each application registers a data-change Watcher on the domain node so as to be notified of domain name changes in time.
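The resolution step can be sketched as follows, assuming the comma-separated "IP:PORT" node content shown earlier; random selection stands in for whatever policy a real deployment would use (round-robin, weighted, and so on):

```python
# Sketch of the DDNS resolution step: read the domain node's content
# (a comma-separated "IP:PORT" list) and pick one entry by some policy.
import random

def parse_endpoints(node_content: str) -> list:
    """Split '192.168.0.1:8080, 192.168.0.2:8080' into (ip, port) pairs."""
    endpoints = []
    for entry in node_content.split(","):
        ip, _, port = entry.strip().rpartition(":")
        endpoints.append((ip, int(port)))
    return endpoints

def resolve(node_content: str):
    """Pick one endpoint; a smarter selection policy could go here."""
    return random.choice(parse_endpoints(node_content))

content = "192.168.0.1:8080, 192.168.0.2:8080"
print(resolve(content))    # e.g. ('192.168.0.2', 8080)
```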

1.2.4 Domain Name Change

During operation, the IP address or port corresponding to a domain name will inevitably change, and at that point a domain name change operation is needed. In DDNS, we only need to perform an update on the specified domain node; ZooKeeper sends the change event notification to the subscribed clients, and after receiving the notification, each application fetches the domain configuration again.

Above, we showed how to use ZooKeeper to implement a dynamic DNS system. Implementing a dynamic DNS service through ZooKeeper avoids, on the one hand, the centralized maintenance cost brought by unlimited growth in the number of domain names; on the other hand, when a domain name changes, it avoids the tedious work of updating local HOST bindings machine by machine.

1.2.5 Automated DNS Service

From the explanation above, readers should basically be able to implement a dynamic DNS service with ZooKeeper. But if you look closely at the domain-name-change step of the implementation above, you will find that when the IP address corresponding to a domain name changes, we still need human intervention to modify the IP address and port on the domain node. Next, let's look at a more automated DNS service implementation built on ZooKeeper, mainly aimed at automated service location.

First, let's introduce the components of the overall dynamic DNS system architecture and their more important responsibilities.

 

  • Register cluster: responsible for dynamic domain name registration.
  • Dispatcher cluster: responsible for domain name resolution.
  • Scanner cluster: responsible for detecting and maintaining service status (probing service availability, masking abnormal service nodes, etc.).
  • SDK: provides access protocols for various languages, offering service registration and query interfaces.
  • Monitor: responsible for collecting service information and monitoring the state of DDNS itself.
  • Controller: the administration console, responsible for authorization management, flow control, static service configuration, manual service masking, and other functions; operations staff use it to manage the Register, Dispatcher, and Scanner clusters.

The core of the system is of course the ZooKeeper cluster, which is responsible for data storage and for coordinating a whole series of distributed operations. Let's look in detail at how the whole system works. In this architecture model, we abstract those IP addresses and ports as service providers, and those who need to use the domain names as service consumers.

.1 Domain Name Registration

 


Domain name registration is aimed at service providers. The domain name registration process can be summarized as: each service provider, as it starts up, registers its own domain name information with the Register cluster.

 

  1. The service provider sends its domain name, IP address, and port to the Register cluster through the API interface provided by the SDK. For example, machine A provides the service for serverA.xxx.com, so it sends Register a "domain name -> IP:PORT" mapping: "serverA.xxx.com -> 192.168.0.1:8080".
  2. Register receives the domain name, IP address, and port configuration and, according to the domain name, writes the information into the corresponding domain node on ZooKeeper.
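The Register side of step 2 can be sketched like this; the "->" mapping format follows the example above, while register and domain_table are made-up names standing in for the actual write to the ZooKeeper domain node:

```python
# Sketch of the Register step: parse the "domain -> IP:PORT" string a
# service provider sends and merge it into the per-domain endpoint list
# that would be written to the domain node on ZooKeeper.
def register(mapping: str, domain_table: dict):
    domain, _, endpoint = (part.strip() for part in mapping.partition("->"))
    endpoints = domain_table.setdefault(domain, [])
    if endpoint not in endpoints:          # ignore duplicate registrations
        endpoints.append(endpoint)

table = {}
register("serverA.xxx.com->192.168.0.1:8080", table)
register("serverA.xxx.com->192.168.0.2:8080", table)
print(table)
# -> {'serverA.xxx.com': ['192.168.0.1:8080', '192.168.0.2:8080']}
```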

.2 Domain Name Resolution

Domain name resolution is aimed at service consumers and is exactly the reverse of the registration process: when a service consumer uses a domain name, it sends a resolution request to Dispatcher. On receiving the request, Dispatcher reads the corresponding IP:PORT list from the specified domain node on ZooKeeper, selects one entry according to some policy, and returns it to the front-end application.

.3 Domain Name Probing

Domain name probing means that the DDNS system needs to check the availability of all the IP addresses and ports registered under a domain name, commonly known as "health checking". Health checks generally come in two forms: in the first, the server actively initiates heartbeat checks, which usually requires establishing a long-lived TCP connection between server and client; in the second, the client actively sends heartbeat reports to the server. Domain name probing in the DDNS architecture uses the second form: every service provider periodically reports its own status to Scanner.

Scanner records the time of each service provider's most recent status report. Once more than 5 seconds pass without a report, it considers that IP address and port unavailable and starts the domain cleanup process: Scanner finds the domain node corresponding to that domain on ZooKeeper and removes the IP address and port configuration from the node's content.
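Scanner's timeout rule can be sketched as below. The 5-second threshold comes from the text; expire_endpoints is a hypothetical helper, and the actual removal from the ZooKeeper domain node is omitted:

```python
# Sketch of Scanner's timeout-based cleanup. last_report maps each
# "IP:PORT" endpoint to the timestamp of its latest status report; any
# endpoint silent for more than TIMEOUT seconds is dropped from the list.
TIMEOUT = 5.0

def expire_endpoints(last_report: dict, endpoints: list, now: float) -> list:
    """Return the endpoints still considered alive at time `now`."""
    alive = []
    for ep in endpoints:
        if now - last_report.get(ep, float("-inf")) <= TIMEOUT:
            alive.append(ep)
    return alive

last_report = {"192.168.0.1:8080": 100.0, "192.168.0.2:8080": 97.0}
endpoints = ["192.168.0.1:8080", "192.168.0.2:8080"]
print(expire_endpoints(last_report, endpoints, now=103.0))
# -> ['192.168.0.1:8080'] (the second endpoint has been silent 6 seconds)
```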

3.1 Naming Service

A naming service (Name Service) is another fairly common scenario in distributed systems. In a distributed system, the named entities are usually machines in a cluster, service addresses, remote objects, and so on; we can collectively call them names (Name). A fairly common example is the service address list in distributed service frameworks (such as RPC and RMI): through a naming service, client applications can obtain a resource's entity, service address, provider information, and so on by a specified name.

JNDI in the Java language is a typical naming service. JNDI is short for Java Naming and Directory Interface, one of the important specifications in the J2EE architecture, and standard J2EE containers all provide an implementation of it. In actual development, therefore, developers often use the application server's built-in JNDI implementation for data source configuration and management. With JNDI, developers can work without caring about any database-related details, including the database type, the JDBC driver type, and the database credentials.

The naming service ZooKeeper provides is similar to JNDI in that both help an application locate and use a resource through a resource reference. Moreover, in the broad sense, what a naming service locates need not be a real physical resource: in a distributed environment, the upper-layer application often just needs a globally unique name, much like a unique primary key in a database. Below we look at how to use ZooKeeper to implement a distributed, globally unique ID allocation mechanism.

A so-called ID is an identifier that can uniquely identify some object. In the relational databases we are familiar with, every table needs a primary key to uniquely identify each record; that primary key is exactly such a unique ID. In the single-database, single-table systems of the past, the auto_increment attribute of a database column could automatically generate a unique ID for each record, and the database guaranteed that this ID was globally unique. But as database sizes kept growing, database and table sharding appeared, and auto_increment can only generate IDs for the records of a single table, so in this situation it can no longer be relied on to uniquely identify a record. We must therefore look for a way to generate globally unique IDs in a distributed environment.

Speaking of globally unique IDs, readers will surely think of UUIDs. Indeed, UUID is short for Universally Unique Identifier, a standard widely used in distributed systems for uniquely identifying elements; its most typical implementation is the GUID (Globally Unique Identifier), and the mainstream ORM framework Hibernate supports UUIDs directly.

UUID is indeed a very good way to generate globally unique IDs, and it guarantees uniqueness in a distributed environment very simply. A standard UUID is a string containing 32 hexadecimal characters and 4 hyphens, for example "550e8400-e29b-41d4-a716-446655440000". Its advantages need no elaboration; let's focus instead on its drawbacks.

  • Too long: compared with the database INT type, storing a UUID costs much more space.
  • No meaning: a UUID carries no semantics, which hurts the efficiency of troubleshooting and debugging.

Next, we take a distributed task scheduling system as an example to see how ZooKeeper can be used to generate such globally unique IDs.

ZooKeeper's node-creation API can create a sequential node, and the API's return value contains that node's complete name. Using this feature, we can rely on ZooKeeper to generate globally unique IDs.

 

  1. Every client, according to its own task type, creates a sequential node under the node for that task type by calling the create() interface, for example a "job-" node.
  2. After the node is created, the create() interface returns the complete node name, for example "job-0000000003".
  3. After the client receives this return value, it prepends its type, for example "type2-job-0000000003", which can then serve as a globally unique ID.

In ZooKeeper, every data node maintains a sequence for its child nodes: when a client creates a sequential child node under it, ZooKeeper automatically appends a sequence number as a suffix to the child node's name. This scenario makes use of exactly that feature. Below is the blogger's own test:
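The sequence-suffix behavior, and the ID format built on it, can be simulated locally. FakeSequentialParent is an in-memory stand-in, not the ZooKeeper API; only the 10-digit zero-padded suffix mirrors what ZooKeeper actually appends:

```python
# A local simulation of ZooKeeper sequential nodes, illustrating the ID
# scheme above. FakeSequentialParent only mimics the zero-padded suffix
# that ZooKeeper appends to a sequential child's name.
class FakeSequentialParent:
    def __init__(self):
        self._counter = 0

    def create(self, prefix: str) -> str:
        """Mimic create() with the SEQUENTIAL flag: return the full name."""
        name = "%s%010d" % (prefix, self._counter)
        self._counter += 1
        return name

def next_global_id(parent: FakeSequentialParent, task_type: str) -> str:
    node_name = parent.create("job-")          # e.g. "job-0000000003"
    return "%s-%s" % (task_type, node_name)    # e.g. "type2-job-0000000003"

parent = FakeSequentialParent()
ids = [next_global_id(parent, "type2") for _ in range(3)]
print(ids)
# -> ['type2-job-0000000000', 'type2-job-0000000001', 'type2-job-0000000002']
```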


Additionally, if there are too many child nodes and connection reads time out, you can raise the initLimit and syncLimit values in the configuration appropriately (even tenfold is fine).

 

4.1 Distributed Coordination/Notification

A distributed coordination/notification service is an indispensable part of a distributed system and the key to combining different distributed components organically. For an application deployed and running across many machines, a coordinator (Coordinator) is usually needed to control the running flow of the whole system, for example handling distributed transactions or coordinating between machines. Introducing such a coordinator also makes it easy to separate the responsibility of distributed coordination out of the application itself, which greatly reduces coupling between systems and significantly improves the system's scalability.

ZooKeeper's distinctive Watcher registration and asynchronous notification mechanism lends itself well to coordination and notification between different machines, or even different systems, in a distributed environment, enabling real-time handling of data changes. The usual approach to building distributed coordination and notification on ZooKeeper is for different clients to register Watchers on the same data node, listening for changes to that node (including the node itself and its children); when the node changes, all subscribed clients receive the corresponding Watcher notification and handle it accordingly.

4.1.1 MySQL Data Replication Bus: MySQL_Replicator

The MySQL data replication bus (hereinafter the "replication bus") is a real-time data replication framework for asynchronous data replication and data-change notification between different MySQL database instances. The whole system is a data bus built from MySQL database clusters, a message queue system, a task management and monitoring platform, and a ZooKeeper cluster, comprising data producers, replication pipelines, and data consumers.


In this system, ZooKeeper is mainly responsible for a series of distributed coordination tasks. In the concrete implementation, the data replication component is divided by function into three core submodules: Core, Server, and Monitor. Each module runs as a separate process, and they exchange data through ZooKeeper.

 

  • Core implements the core logic of data replication: it wraps replication into pipelines and abstracts out the two concepts of producer and consumer, where the producer is usually the binlog of a MySQL database.
  • Server is responsible for starting and stopping replication tasks.
  • Monitor is responsible for monitoring the running state of tasks and raises an alert if an exception or failure occurs during data replication.

The relationship among the three submodules is shown in the figure below:


Each module runs as an independent process on the server side, with runtime data and configuration all kept on ZooKeeper; the web console obtains the background processes' data through ZooKeeper and also publishes control information through it.

 

4.1.2 Task Registration

When a Core process starts, it first registers its task under the /mysql_replicator/tasks node (hereinafter the "task list node"). For example, for a "replicate hot items" task, the machine hosting the Task first creates a child node under the task list node on startup, for example /mysql_replicator/tasks/copy_hot_item (hereinafter the "task node"), as shown below:


If it finds during registration that this child node already exists, then some other Task machine has already registered the task, so it does not need to create the node itself.

 

4.1.3 Task Hot Backup

To cope with failure of a replication task, or of the host the task runs on, the replication component adopts "hot backup" fault tolerance: the same replication task is deployed on different hosts, which we call "task machines", and the primary and standby task machines monitor each other's health through ZooKeeper.

To implement this hot-backup scheme, regardless of whether the task node was created in the first step, every task machine registers its own hostname under the /mysql_replicator/tasks/copy_hot_item/instances node. Note that the node type registered here is special: an ephemeral sequential node. After registering this child node, a complete node name typically looks like /mysql_replicator/tasks/copy_hot_item/instances/[Hostname]-1, where the trailing sequence number is the essence of the ephemeral sequential node.

After creating its child node, each task machine can obtain the complete name of the node it created as well as the list of all child nodes, and then determine by comparison whether its own node has the smallest sequence number among all children. If it does, it sets its running state to RUNNING, and the remaining task machines set themselves to STANDBY. We call this hot-backup strategy the "smallest sequence number first" strategy.
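The "smallest sequence number first" decision can be sketched over instance node names like the ones above (extract_seq assumes the "[Hostname]-N" naming from the text):

```python
# Sketch of the "smallest sequence number first" election, operating on
# the instance child-node names each task machine reads back.
def extract_seq(node_name: str) -> int:
    """Pull the trailing sequence number out of 'hostname-N'."""
    return int(node_name.rsplit("-", 1)[1])

def decide_role(my_node: str, all_children: list) -> str:
    """RUNNING if my node has the smallest sequence number, else STANDBY."""
    smallest = min(all_children, key=extract_seq)
    return "RUNNING" if my_node == smallest else "STANDBY"

children = ["host-b-2", "host-a-1", "host-c-3"]
print(decide_role("host-a-1", children))   # -> RUNNING
print(decide_role("host-b-2", children))   # -> STANDBY
```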

4.1.4 Hot-Backup Switchover

Once the running states are marked, the task client machines can work normally: machines marked RUNNING perform the actual data replication, while machines marked STANDBY enter a standby state. "Standby" here means that once the machine marked RUNNING fails and stops executing the task, a new RUNNING machine must be elected from all the machines marked STANDBY, again following the "smallest sequence number first" strategy. Concretely, every STANDBY machine registers a "child node list changed" Watcher on the /mysql_replicator/tasks/copy_hot_item/instances node to subscribe to changes in the set of task execution machines. Once the RUNNING machine crashes and disconnects from ZooKeeper, its node disappears, the other machines receive the change notification, and a new round of the RUNNING election begins.

4.1.5 Recording Execution State

Since hot backup is used, the RUNNING task machine needs to preserve its runtime context for the STANDBY task machines. In this scenario, the most important context is the progress information of the replication process, such as the binlog consumption position, so this information must be saved on ZooKeeper for sharing. In the design of Mysql_Replicator, /mysql_replicator/tasks/copy_hot_item/lastCommit was chosen as the storage node for the binlog consumption position, and the RUNNING task machine periodically writes the current position to it.

4.1.6 Console Coordination

Above we mainly explained how the Core component performs distributed task coordination; now let's look at how Server manages the Core component. In Mysql_Replicator, Server's main job is task control: it uses ZooKeeper to control and coordinate the different tasks. Server writes the metadata of each replication task's producer, i.e. database information such as the database name, table name, username, and password, together with the consumer's related information, as configuration into the task node /mysql_replicator/tasks/copy_hot_item, so that all the task machines for that task can share the replication task's configuration.

4.1.7 Cold-Backup Switchover

By now we have a basic understanding of how Mysql_Replicator works, so let's come back to the hot backup mentioned above. In that scheme, at least two task machines are allocated to every task for hot backup. But in a large Internet company of a certain scale, there are often many MySQL instances that need data replication, and every database instance corresponds to one replication task; if every task were hot-backed by a pair of machines, far too many machines would clearly be consumed.

Therefore, a cold-backup scheme was designed as well. It differs from the hot-backup scheme in that all tasks are grouped, as follows:


The big difference from hot backup is that each Core process is configured with the Group it belongs to. For example, if a Core process is tagged group1, then after it starts it goes to the corresponding group1 node on ZooKeeper and fetches the full list of Tasks. Suppose it finds the task "copy_hot_item": it then inspects that Task's instances node and, whenever the node has no children yet, creates an ephemeral sequential node such as /mysql_replicator/task-groups/group1/copy_hot_item/instances/[Hostname]-1. Of course, during this process other Core processes will also create similar child nodes under the same instances node. As with the "smallest sequence number first" strategy in hot backup, the Core process with the smallest sequence marks itself RUNNING; the difference is that the other Core processes automatically delete the child nodes they created and then move on to the next Task node. We call this process the "cold-backup scan". In this way, all Core processes continually perform cold-backup scans over the Tasks under their respective Group within each scan cycle. The whole process is shown in the figure below:
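One pass of the cold-backup scan can be sketched as follows. This is a simplified, single-process simulation: `tasks` is an in-memory stand-in for the group's Task nodes on ZooKeeper, and the race between concurrent Core processes is collapsed into the withdraw branch:

```python
# Sketch of the cold-backup scan: for each task in its group, a Core
# process takes the task only if no instance is registered yet; if it
# turns out not to hold the smallest sequence, it withdraws and moves on.
def cold_backup_scan(tasks: dict, hostname: str):
    """Return the task this Core process ends up RUNNING, or None."""
    for task, instances in tasks.items():
        if instances:
            continue                     # someone already runs it: next task
        my_node = "%s-%d" % (hostname, len(instances) + 1)
        instances.append(my_node)        # register an ephemeral sequential node
        if instances[0] == my_node:      # smallest sequence: we run this task
            return task
        instances.remove(my_node)        # lost the race: withdraw, keep scanning
    return None

tasks = {"copy_hot_item": [], "copy_cold_item": ["other-host-1"]}
print(cold_backup_scan(tasks, "my-host"))   # -> copy_hot_item
```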

 

4.1.8 Cold vs. Hot Backup

From the explanation above we now have a basic understanding of both the hot-backup and cold-backup modes of operation, so let's compare them. The hot-backup scheme uses two machines per task; with the help of ZooKeeper's Watcher notification mechanism and the properties of ephemeral sequential nodes, they can coordinate with each other in near real time, but the drawback is the relatively high machine cost. The cold-backup scheme adopts a scanning mechanism, which reduces the real-timeness of task coordination but saves machine resources. (The blogger's summary of the difference: in hot backup, one instance runs while several others wait; in cold backup, the system polls each task to see whether it has a runner, moving on to the next task whenever one does, and taking the task over itself whenever none does.)

4.1.9 A General Inter-Machine Communication Scheme for Distributed Systems

In the vast majority of distributed systems, communication between machines comes down to three types: heartbeat detection, work progress reporting, and system scheduling. Next, around these three types of machine communication, we explain how to implement an inter-machine communication scheme for distributed systems on top of ZooKeeper.

.1 Heartbeat Detection

Heartbeat detection between machines means that in a distributed environment, different machines need to detect whether the others are running normally; for example, machine A needs to know whether machine B is running normally. In traditional development, we usually judge this by whether the hosts can PING each other; a more sophisticated approach establishes long-lived connections between machines and uses TCP's built-in heartbeat mechanism to implement heartbeat detection for the upper layers. These are indeed very common heartbeat detection methods. Based on ZooKeeper's ephemeral node feature, however, different machines can each create an ephemeral child node under a designated ZooKeeper node, and machines can judge whether the corresponding client machine is alive from the presence of that ephemeral node. With this approach, the detecting system and the detected system do not need to be directly associated; they are associated through a node on ZooKeeper, which greatly reduces system coupling.
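The ephemeral-node heartbeat idea can be sketched with an in-memory stand-in for ZooKeeper sessions (FakeZooKeeper is made up; in the real system, the node deletion happens automatically when a client's session expires):

```python
# Sketch of ephemeral-node heartbeat detection. Closing a session deletes
# its ephemeral children, just as a real ZooKeeper client's session
# expiry would; liveness is then read off the surviving child nodes.
class FakeZooKeeper:
    def __init__(self):
        self.nodes = set()
        self._by_session = {}

    def create_ephemeral(self, session: str, path: str):
        self.nodes.add(path)
        self._by_session.setdefault(session, []).append(path)

    def close_session(self, session: str):
        for path in self._by_session.pop(session, []):
            self.nodes.discard(path)       # ephemeral nodes vanish

    def alive_machines(self, parent: str) -> list:
        prefix = parent.rstrip("/") + "/"
        return sorted(p[len(prefix):] for p in self.nodes
                      if p.startswith(prefix))

zk = FakeZooKeeper()
zk.create_ephemeral("session-A", "/cluster/machines/A")
zk.create_ephemeral("session-B", "/cluster/machines/B")
zk.close_session("session-B")              # machine B goes down
print(zk.alive_machines("/cluster/machines"))   # -> ['A']
```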

.2 Work Progress Reporting

In a typical task distribution system, after tasks are dispatched to different machines for execution, each machine needs to report its task execution progress to the distribution system in real time. This can be implemented with ZooKeeper: pick a node on ZooKeeper and have every task client create an ephemeral child node under it, which achieves two things:

  • whether the ephemeral node exists determines whether the task machine is alive;
  • each task machine writes its task execution progress to its ephemeral node in real time, so that the central system can obtain the execution progress in real time.

.3 System Scheduling

With ZooKeeper, yet another scheduling pattern can be implemented: a distributed system consisting of a console and a number of client systems, where the console's job is to send command information to all the clients in order to control their business logic. The operations administrators perform on the console in fact modify the data of certain nodes on ZooKeeper, and ZooKeeper then pushes these data changes to the corresponding subscribed clients in the form of event notifications.

In short, using ZooKeeper to implement inter-machine communication in distributed systems not only saves a great deal of repetitive work on low-level network communication and protocol design; more importantly, it greatly reduces coupling between systems and makes flexible communication between heterogeneous systems very easy.



Author: 李文文丶
Link: https://www.jianshu.com/p/52ed785f1d07
Source: 简书 (Jianshu)
The copyright belongs to the author. For any form of reproduction, please contact the author for authorization and indicate the source.

Origin blog.csdn.net/demon7552003/article/details/92055707