1 million connections: how is the Graphite Docs WebSocket gateway structured?

A few words up front

In the reader exchange groups (50+) of Nien, a 40-year-old architect, many friends have obtained interview invitations from first-tier Internet companies such as Alibaba, NetEase, Youzan, Shein, Baidu, and Didi.

Recently, Nien guided a friend's resume and helped him write up a "High Concurrency Gateway Project". This project earned him interview invitations from ByteDance, Alibaba, Weibo, and Autohome, so it is an impressive project.

In order to help you get more interview opportunities and get more offers from big companies,

Nien decided to release a video chapter in September introducing the architecture and hands-on practice of this project, "Chapter 33: 10Wqps High Concurrency Netty Gateway Architecture and Practical Operation", expected at the end of the month, followed by one-on-one resume guidance to make your resume shine and be completely transformed.

"Chapter 33: 10Wqps High Concurrency Netty Gateway Architecture and Practical Operation" The poster is as follows:

In conjunction with "Chapter 33: 10Wqps High Concurrency Netty Gateway Architecture and Practical Operation", Nien will sort out several industrial-grade, production-grade gateway cases as architecture and design materials.

Cases sorted out earlier

In addition to the six cases above, Nien found another beautiful production-grade case:

" 1 million connections, how is the graphite document WebSocket gateway structured? 》,

Note: this is another very impressive industrial-grade, production-grade gateway case.

These cases are not Nien's original work; they were collected from the Internet while preparing "Chapter 33: 10Wqps High Concurrency Netty Gateway Architecture and Practical Operation", for everyone to learn from and discuss.

For the PDFs of "Nien Architecture Notes", "Nien High Concurrency Trilogy", and "Nien Java Interview Guide", please go to the official account [Technical Freedom Circle] to obtain them.

1 million connections: how is the Graphite Docs WebSocket gateway structured?

Author: Graphite Docs technical team

Several Graphite Docs businesses, such as document sharing, commenting, slide presentations, and following along in documents and tables, involve real-time synchronization of data across multiple clients and bulk pushes of data from the server. Plain HTTP cannot satisfy the need for server-initiated pushes, so we chose WebSocket for these business scenarios.

As the Graphite Docs business grew, the daily peak connection count reached the million level. The growing number of user connections, combined with an architecture no longer suited to that scale, caused memory and CPU usage to climb sharply, so we decided to refactor the long-connection gateway.

This article shares the evolution of the Graphite Docs long-connection gateway from the 1.0 architecture to 2.0, and summarizes the whole hands-on performance-optimization process.

1. Problems faced by v1.0 architecture

The v1.0 long-connection gateway was developed in Node.js on top of a modified Socket.IO, and it served the business scenarios well at the user scale of that time.

1.1 Architecture introduction

Version 1.0 architecture design diagram:

Version 1.0 client connection process:

  • 1) The user connects to the gateway through NGINX, and the business service is aware of this connection;
  • 2) After sensing the user connection, the business service queries the relevant user data and publishes (Pub) a message to Redis;
  • 3) The gateway service receives the message through a Redis subscription (Sub);
  • 4) The gateway cluster looks up the user's session data and pushes the message to the client.

1.2 Problems faced

Although version 1.0 of the long-connection gateway runs well online, it cannot well support subsequent business expansion.

And there are several problems that need to be solved:

  • 1) Resource consumption: Nginx was used only for TLS termination and request passthrough, wasting considerable resources; meanwhile the Node gateway performed poorly and consumed a lot of CPU and memory;
  • 2) Maintenance and observability: the gateway was not connected to Graphite Docs' monitoring system and could not integrate with existing monitoring and alerting, making maintenance difficult;
  • 3) Business coupling: business logic and gateway functionality lived in the same service, so the business part could not be scaled horizontally on its own to address its performance cost; to solve the performance problems and allow future module expansion, the services needed to be decoupled.

2. v2.0 architecture evolution practice

2.1 Overview

The v2.0 version of the long-connection gateway system needs to solve many problems.

For example, Graphite Docs has many internal components (documents, tables, slides, forms, etc.). In version 1.0, components could call the gateway via Redis, Kafka, and HTTP interfaces, so call sources could not be traced and were hard to govern.

In addition, from the perspective of performance optimization, the original service needed to be decoupled, splitting the 1.0 gateway into a gateway-function part and a business-processing part.

Specifically:

  • 1) The gateway-function part, WS-Gateway: integrates user authentication, TLS certificate verification, WebSocket connection management, etc.;
  • 2) The business-processing part, WS-API: component services communicate with this service directly via gRPC.

Moreover:

  • 1) Specific modules can be scaled in a targeted way;
  • 2) Refactoring the service and removing Nginx significantly reduced overall hardware consumption;
  • 3) The service was integrated into Graphite Docs' monitoring system.

2.2 Overall architecture

Version 2.0 architecture design diagram:

Version 2.0 client connection process:

  • 1) The client establishes a WebSocket connection with the WS-Gateway service through the handshake process;
  • 2) After the connection is established, the WS-Gateway service stores the session on the node, caches the connection-mapping information in Redis, and pushes a client-online message to WS-API through Kafka;
  • 3) WS-API receives client-online messages and client upstream messages through Kafka;
  • 4) WS-API preprocesses and assembles the message, including fetching from Redis the data needed for the push and running the push-filtering logic, then publishes the message to Kafka;
  • 5) WS-Gateway obtains the messages the server needs to return by subscribing to Kafka, and pushes them to the clients one by one.

2.3 Handshake process

When the network is good, steps 1 to 6 in the figure below complete and the WebSocket flow begins directly.

When the network is poor, WebSocket communication degrades to HTTP mode: the client pushes messages to the server via POST, and reads data returned by the server via GET long polling.

The handshake process when the client first requests the server to establish a connection:

The process description is as follows:

  • 1) The client sends a GET request to try to establish a connection;
  • 2) The server returns the connection data; sid is the unique Socket ID generated for this connection, used as the credential in subsequent interactions:
    {"sid":"xxx","upgrades":["websocket"],"pingInterval":xxx,"pingTimeout":xxx}
  • 3) The client makes another request, carrying the sid parameter from step 2;
  • 4) The server returns 40, indicating the request was received successfully;
  • 5) The client sends a POST request to confirm the state of the later fallback channel;
  • 6) The server returns ok, completing the first phase of the handshake;
  • 7) A WebSocket connection is then attempted: the 2probe/3probe probe packets are exchanged first, and once the channel is confirmed to be open, normal WebSocket communication begins.
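
The step-2 payload above can be parsed as in this minimal sketch. The field names come from the example payload; the struct and function names are ours, for illustration only:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// HandshakeInfo mirrors the payload the server returns in step 2.
// pingInterval and pingTimeout are in milliseconds in Socket.IO-style handshakes.
type HandshakeInfo struct {
	Sid          string   `json:"sid"`
	Upgrades     []string `json:"upgrades"`
	PingInterval int      `json:"pingInterval"`
	PingTimeout  int      `json:"pingTimeout"`
}

// canUpgradeToWebSocket reports whether the server offers a WebSocket
// upgrade in its handshake response, returning the sid credential.
func canUpgradeToWebSocket(raw []byte) (string, bool, error) {
	var h HandshakeInfo
	if err := json.Unmarshal(raw, &h); err != nil {
		return "", false, err
	}
	for _, u := range h.Upgrades {
		if u == "websocket" {
			return h.Sid, true, nil
		}
	}
	return h.Sid, false, nil
}

func main() {
	raw := []byte(`{"sid":"abc123","upgrades":["websocket"],"pingInterval":25000,"pingTimeout":5000}`)
	sid, ok, err := canUpgradeToWebSocket(raw)
	fmt.Println(sid, ok, err) // abc123 true <nil>
}
```

If the upgrades list is empty, the client stays on the POST/GET long-polling channel described above.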

2.4 TLS memory consumption optimization

Connections between the client and server use the wss protocol. In version 1.0, the TLS certificate was mounted on Nginx, which handled the HTTPS handshake. To cut the machine cost of Nginx, in version 2.0 we mounted the certificate in the service itself.

Analyzing the service's memory, as shown in the figure below, the memory consumed during TLS handshakes accounted for about 30% of total memory consumption.

This part of the memory consumption cannot be avoided, and we had two options:

2.5 Socket ID design

A unique code must be generated for each connection; duplicates would lead to crossed sessions and chaotic message pushes.

The SnowFlake algorithm was chosen to generate these unique codes.

In a physical-machine scenario, fixing the number of the machine hosting each replica guarantees that the Socket IDs generated by the service on every replica are unique.

In a K8s scenario this is not feasible, so a register-and-issue scheme is used: after each WS-Gateway replica starts, it writes its startup information to the database to obtain a replica number, which is then used as the worker parameter of the SnowFlake algorithm, and Socket IDs are generated from it. When a service restarts, it inherits its existing replica number; when a new version is released, new replica numbers are issued from an auto-incrementing ID.

At the same time, each WS-Gateway replica writes heartbeat information to the database as the basis for health checks on the gateway service itself.
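
The register-and-issue scheme above can be sketched as a Snowflake-style generator. The bit widths here (41-bit timestamp, 10-bit replica number, 12-bit sequence) follow the classic Snowflake layout and are assumptions, not necessarily Graphite Docs' exact parameters:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// SocketIDGen is a minimal Snowflake-style generator. The replica
// number issued by the registry plays the "machine ID" role.
type SocketIDGen struct {
	mu       sync.Mutex
	replica  int64 // issued at startup, 0..1023
	lastTs   int64 // last millisecond an ID was issued in
	sequence int64 // per-millisecond counter
}

func NewSocketIDGen(replica int64) *SocketIDGen {
	return &SocketIDGen{replica: replica & 0x3FF}
}

// Next returns a unique, roughly time-ordered Socket ID.
func (g *SocketIDGen) Next() int64 {
	g.mu.Lock()
	defer g.mu.Unlock()
	ts := time.Now().UnixMilli()
	if ts == g.lastTs {
		g.sequence = (g.sequence + 1) & 0xFFF // 12-bit sequence within one ms
		if g.sequence == 0 {                  // sequence exhausted: wait for next ms
			for ts <= g.lastTs {
				ts = time.Now().UnixMilli()
			}
		}
	} else {
		g.sequence = 0
	}
	g.lastTs = ts
	return ts<<22 | g.replica<<12 | g.sequence
}

func main() {
	g := NewSocketIDGen(7)
	fmt.Println(g.Next() != g.Next()) // prints true: consecutive IDs differ
}
```

Because the replica number is embedded in every ID, two replicas can never collide even when generating IDs in the same millisecond.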

2.6 Cluster session management solution: event broadcast

After the client completes the handshake, its session data is stored in the memory of the current gateway node, and part of the serializable data is stored in Redis.

The Redis session storage structure is described below:

  • ws:user:clients:${uid} — stores the mapping between a user and their WebSocket connections, as a sorted set;
  • ws:guid:clients:${guid} — stores the mapping between a file and its WebSocket connections, as a sorted set;
  • ws:client:${socket.id} — stores all user and file relationship data under the current WebSocket connection, as a Redis hash whose keys are user and guid.
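
As a minimal sketch of the key layout above (the key patterns come from the table; the helper names are ours, and the actual ZADD/HSET calls against Redis are omitted):

```go
package main

import "fmt"

// Key builders for the three session mappings described above.
// They only format key names; real Redis commands would be issued
// against these keys (sorted sets for the first two, a hash for the third).
func userClientsKey(uid string) string  { return fmt.Sprintf("ws:user:clients:%s", uid) }
func guidClientsKey(guid string) string { return fmt.Sprintf("ws:guid:clients:%s", guid) }
func clientKey(socketID string) string  { return fmt.Sprintf("ws:client:%s", socketID) }

func main() {
	fmt.Println(userClientsKey("42"))  // ws:user:clients:42
	fmt.Println(guidClientsKey("g-1")) // ws:guid:clients:g-1
	fmt.Println(clientKey("7001"))     // ws:client:7001
}
```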

For a message push triggered by a client or a component service, the WS-API service uses the structures stored in Redis to look up the Socket IDs of the target clients for the message body, and the WS-Gateway cluster then consumes and delivers it.

If a Socket ID is not on the current node, the node-to-session mapping must be queried to find the WS-Gateway node that actually holds the client's Socket ID. There are two common solutions (shown below).

  • Event broadcast — advantage: simple to implement; disadvantage: the number of broadcast messages grows with the number of nodes.
  • Registry — advantage: a clear mapping between sessions and nodes; disadvantage: a strong dependency on the registry, with extra operations and maintenance cost.

After deciding to transmit messages between gateway nodes via event broadcast, the next choice was which message middleware to use. Three candidates were compared (shown below).

  • Redis — written in C; single-machine throughput 100k+; master-slave architecture; simple feature set; excellent performance for payloads under 10 KB, suited to simple business scenarios.
  • Kafka — written in Scala; single-machine throughput 100k+; distributed architecture; extremely high throughput and availability; supports core MQ features, but lacks message query and message traceback.
  • RocketMQ — written in Java; single-machine throughput 100k+; distributed architecture; rich features, strong customizability, high throughput and availability; supports core MQ features with strong extensibility.

We then benchmarked Redis and the other MQ middleware with one million enqueue and dequeue operations, and found that Redis performed very well when the payload was under 10 KB.

Combining this with the actual situation — the broadcast payload is about 1 KB, the business scenario is simple and fixed, and compatibility with historical business logic is required — Redis was finally chosen for message broadcasting.

Later, WS-API and WS-Gateway can also be connected pairwise, using gRPC bidirectional streaming to save intranet traffic.

2.7 Heartbeat mechanism

After sessions are stored in node memory and Redis, the client must continually refresh the session timestamp through heartbeat reports. The client reports heartbeats at the interval issued by the server; the reported timestamp is first updated in memory, and a separate periodic task syncs it to Redis, which avoids the pressure that a large number of clients reporting heartbeats simultaneously would put on Redis.

specific process:

  • 1) After the client successfully establishes the WebSocket connection, the server sends the heartbeat-reporting parameters;
  • 2) The client sends heartbeat packets based on those parameters, and the server updates the session timestamp on receipt;
  • 3) Other upstream data from the client also triggers the corresponding session-timestamp update;
  • 4) The server periodically cleans up timed-out sessions and executes the active-close process;
  • 5) The mappings among WebSocket connections, users, and files are cleaned using the timestamp data updated in Redis.

Session cleanup logic for node memory and the Redis cache:

for {
   select {
   case <-t.C:
      var now = time.Now().Unix()
      var clients = make([]*Connection, 0)
      // Scan every connection held on this node.
      dispatcher.clients.Range(func(_, v interface{}) bool {
         client := v.(*Connection)
         lastTs := atomic.LoadInt64(&client.LastMessageTS)
         if now-lastTs > int64(expireTime) {
            // Heartbeat expired: collect the connection for closing.
            clients = append(clients, client)
         } else {
            // Still alive: clean up stale Redis mappings based on clearTimeout.
            dispatcher.clearRedisMapping(client.Id, client.Uid, lastTs, clearTimeout)
         }
         return true
      })
      // Close expired connections outside the Range callback.
      for _, cli := range clients {
         cli.WsClose()
      }
   }
}

On top of the existing two-level cache refresh mechanism, the server-side pressure from heartbeat reporting is further reduced by a dynamic reporting frequency. By default, the client reports a heartbeat to the server every 1 s. Assuming a single machine currently carries 500,000 connections, the QPS is: QPS1 = 500000 / 1.

From the perspective of server performance, dynamic intervals are applied while heartbeats are healthy: for every x normal heartbeats reported, the heartbeat interval grows by a, up to an upper limit of y. The minimum dynamic QPS is then: QPS2 = 500000 / y.

In the extreme case, the QPS generated by heartbeats drops by a factor of y. After a single heartbeat timeout, the server immediately resets the interval to 1 s and retries. With this strategy, the heartbeat performance cost on the server is reduced while connection quality is preserved.

2.8 Custom Headers

Kafka custom headers are used to avoid the performance cost of decoding the message body at the gateway layer.

After a client's WebSocket connection is established, it performs a series of business operations. We chose to put the operation instructions and necessary parameters exchanged between WS-Gateway and WS-API into Kafka headers: for example, reading the broadcast instruction from X-XX-Operator and the file GUID from X-XX-Guid, then pushing the message to all users in that file.

  • X-ID — WebSocket ID (connection ID)
  • X-Uid — user ID
  • X-Guid — file ID
  • X-Inner — gateway internal operation instruction (user join, user leave)
  • X-Event — gateway event (Connect / Message / Disconnect)
  • X-Locale — language setting
  • X-Operator — API-layer operation instruction (unicast, broadcast, gateway internal operation)
  • X-Auth-Type — user authentication type (SDK v2, main site, WeChat, mobile, desktop)
  • X-Client-Version — client version
  • X-Server-Version — gateway (server) version
  • X-Push-Client-ID — client ID
  • X-Trace-ID — trace ID

A trace ID and timestamp are written into the Kafka headers, so the complete consumption path of a message, and the time spent at each stage, can be traced.
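
The routing metadata above can be sketched as follows. The Header shape (a string key and a byte-slice value) mirrors a Kafka record header; the helper names are illustrative, not Graphite Docs' actual code:

```go
package main

import "fmt"

// Header mirrors the shape of a Kafka record header.
type Header struct {
	Key   string
	Value []byte
}

// buildHeaders assembles routing metadata (key names from the table above)
// so WS-Gateway can dispatch a message without decoding its body.
func buildHeaders(socketID, uid, guid, operator, traceID string) []Header {
	return []Header{
		{Key: "X-ID", Value: []byte(socketID)},
		{Key: "X-Uid", Value: []byte(uid)},
		{Key: "X-Guid", Value: []byte(guid)},
		{Key: "X-Operator", Value: []byte(operator)},
		{Key: "X-Trace-ID", Value: []byte(traceID)},
	}
}

// headerValue fetches a header by key; the message body is never touched.
func headerValue(hs []Header, key string) (string, bool) {
	for _, h := range hs {
		if h.Key == key {
			return string(h.Value), true
		}
	}
	return "", false
}

func main() {
	hs := buildHeaders("sock-1", "u-9", "g-3", "broadcast", "t-abc")
	op, _ := headerValue(hs, "X-Operator")
	fmt.Println(op) // broadcast
}
```

Because the gateway only reads headers, the (potentially large) body stays opaque bytes until it reaches the client.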

2.9 Message receiving and sending

type Packet struct {
   // ...
}

type Connect struct {
   *websocket.Conn
   send chan Packet
}

func NewConnect(conn *websocket.Conn) *Connect {
   c := &Connect{
      Conn: conn,
      send: make(chan Packet, N),
   }
   go c.reader() // goroutine: read loop
   go c.writer() // goroutine: drains the send channel
   return c
}

The first version of the client-server message interaction was written roughly as above.

A stress test on the demo showed that each WebSocket connection occupied 3 goroutines. Each goroutine needs its own memory stack, so the number of connections a single machine could carry was very limited.

The main constraint was the large memory usage, and c.writer() was idle most of the time.

So we considered whether the interaction could be completed with only 2 goroutines.

type Packet struct {
   // ...
}

type Connect struct {
   *websocket.Conn
   mux sync.RWMutex
}

func NewConnect(conn *websocket.Conn) *Connect {
   c := &Connect{
      Conn: conn,
   }
   go c.reader() // only the read loop keeps a dedicated goroutine
   return c
}

func (c *Connect) Write(data []byte) (err error) {
   c.mux.Lock() // writes are serialized by the mutex instead of a writer goroutine
   defer c.mux.Unlock()
   // ...
   return nil
}

The c.reader() goroutine is retained: if polling were used to read from a buffer instead, read latency or lock-contention problems could occur. c.writer() was changed from a goroutine that continuously listens to a method that is called actively, reducing memory consumption.

We also investigated lightweight, high-performance event-driven network libraries such as gev and gnet, but found that message delays could occur under large numbers of connections, so they were not used in production.

2.10 Core Object Cache

With the receive/send logic settled, the core object of the gateway is the Connection object, around which functions such as run, read, write, and close are developed.

sync.Pool is used to cache Connection objects and reduce GC pressure: when a connection is created, a Connection object is obtained from the object pool.

When its life cycle ends, the Connection object is reset and put back into the pool.

In actual coding, it is recommended to wrap these operations in GetConn() and PutConn() functions to centralize data initialization, object reset, and similar steps.

var ConnectionPool = sync.Pool{
   New: func() interface{} {
      return &Connection{}
   },
}

func GetConn() *Connection {
   cli := ConnectionPool.Get().(*Connection)
   return cli
}

func PutConn(cli *Connection) {
   cli.Reset()             // reset state before reuse
   ConnectionPool.Put(cli) // put back into the pool
}

2.11 Optimization of data transmission process

During message flow, the transmission efficiency of the message body needs to be optimized: MessagePack is used to serialize the message body and shrink its size, and the MTU value is tuned to avoid fragmentation. Defining a as the probe packet size, the following command probes the MTU limit toward a target service IP:

ping -s {a} {ip}

When a = 1400, the actual transmitted packet size is 1428.

The extra 28 bytes consist of 8 (the ICMP echo request/reply header) and 20 (the IP header).

If a is set too large, the response times out; and when the real packet size exceeds the MTU, fragmentation occurs.

While tuning a suitable MTU value, the message body is serialized with MessagePack to further compress the packet size and reduce CPU consumption.
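
The 1428-byte figure can be checked with a one-line calculation (illustrative only):

```go
package main

import "fmt"

// onWireSize relates the ping -s probe payload to the on-wire packet:
// on-wire = payload + 8 (ICMP header) + 20 (IP header), as noted above.
func onWireSize(payload int) int {
	const icmpHeader, ipHeader = 8, 20
	return payload + icmpHeader + ipHeader
}

func main() {
	fmt.Println(onWireSize(1400)) // 1428, matching the measurement above
}
```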

2.12 Infrastructure support

The EGO framework is used for service development: business log printing, asynchronous log output, dynamic log-level adjustment, and other features ease online troubleshooting and improve logging efficiency; the microservice monitoring system tracks CPU, P99 latency, memory, goroutine counts, and so on.

Client Redis monitoring:

Client Kafka monitoring:

Customized monitoring dashboard:

3. Time to check the results: Performance stress test

3.1 Preparation for stress test

The test platform:

  • 1) One virtual machine with 4 cores and 8 GB was chosen as the server machine, with a target of 480,000 connections;
  • 2) Eight virtual machines with 4 cores and 8 GB were chosen as clients, each opening 60,000 ports.

3.2 Simulation Scenario 1

Users coming online: 500,000 online users.

  • WS-Gateway — 16 cores / 32 GB, 1 instance — CPU 22.38%, memory 70.59%

The peak rate of connections established on a single WS-Gateway was 16,000/s, and each connection occupied about 47 KB of memory.
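
As a quick sanity check (an illustrative calculation, not from the article), the per-connection figure is consistent with the memory percentage reported above:

```go
package main

import "fmt"

// estimatedMemoryGB is a back-of-the-envelope estimate:
// total memory for N connections at k KB each.
func estimatedMemoryGB(connections int, perConnKB float64) float64 {
	return perConnKB * float64(connections) / (1024 * 1024)
}

func main() {
	// 500,000 connections at ~47 KB each against 32 GB of RAM:
	fmt.Printf("%.1f GB\n", estimatedMemoryGB(500000, 47)) // 22.4 GB, ~70% of 32 GB
}
```

About 22.4 GB out of 32 GB is roughly 70%, matching the observed 70.59% memory usage.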

3.3 Simulation Scenario 2

Test duration 15 minutes, 500,000 online users; one message pushed to all users every 5 seconds, with user receipts.

The push content is:

42["message",{"type":"xx","data":{"type":"xx","clients":[{"id":xx,"name":"xx","email":"[email protected]","avatar":"ZgG5kEjCkT6mZla6.png","created_at":1623811084000,"name_pinyin":"","team_id":13,"team_role":"member","merged_into":0,"team_time":1623811084000,"mobile":"+xxxx","mobile_account":"","status":1,"has_password":true,"team":null,"membership":null,"is_seat":true,"team_role_enum":3,"register_time":1623811084000,"alias":"","type":"anoymous"}],"userCount":1,"from":"ws"}}]

After 5 minutes of testing, the service restarted abnormally because memory usage exceeded the limit.

Analyzing why the memory limit was exceeded:

The new broadcast code uses 9.32% of memory:

The part that receives user receipt messages consumes 10.38% of memory:

The test rules were adjusted: test duration 15 minutes, 480,000 online users; one message pushed to all users every 5 seconds, with user receipts.

The push content is:

42["message",{"type":"xx","data":{"type":"xx","clients":[{"id":xx,"name":"xx","email":"[email protected]","avatar":"ZgG5kEjCkT6mZla6.png","created_at":1623811084000,"name_pinyin":"","team_id":13,"team_role":"member","merged_into":0,"team_time":1623811084000,"mobile":"+xxxx","mobile_account":"","status":1,"has_password":true,"team":null,"membership":null,"is_seat":true,"team_role_enum":3,"register_time":1623811084000,"alias":"","type":"anoymous"}],"userCount":1,"from":"ws"}}]

  • WS-Gateway — 16 cores / 32 GB, 1 instance — CPU 44%, memory 91.75%

Peak connections established: 10,000/s; peak messages received: 96,000/s; peak messages sent: 96,000/s.

3.4 Simulation Scenario 3

Test duration 15 minutes, 500,000 online users; one message pushed to all users every 5 seconds, without user receipts.

The push content is:

42["message",{"type":"xx","data":{"type":"xx","clients":[{"id":xx,"name":"xx","email":"[email protected]","avatar":"ZgG5kEjCkT6mZla6.png","created_at":1623811084000,"name_pinyin":"","team_id":13,"team_role":"member","merged_into":0,"team_time":1623811084000,"mobile":"+xxxx","mobile_account":"","status":1,"has_password":true,"team":null,"membership":null,"is_seat":true,"team_role_enum":3,"register_time":1623811084000,"alias":"","type":"anoymous"}],"userCount":1,"from":"ws"}}]

  • WS-Gateway — 16 cores / 32 GB, 1 instance — CPU 30%, memory 93%

Peak connections established: 11,000/s; peak messages sent: 100,000/s. No abnormality occurred other than the high memory usage.

Memory consumption was extremely high; the flame graph shows that most of it was consumed in the broadcast operation scheduled every 5 seconds.

3.5 Simulation Scenario 4

Test duration 15 minutes, 500,000 online users; one message pushed to all users every 5 seconds, with user receipts; 40,000 users going online and offline per second.

The push content is:

42["message",{"type":"xx","data":{"type":"xx","clients":[{"id":xx,"name":"xx","email":"[email protected]","avatar":"ZgG5kEjCkT6mZla6.png","created_at":1623811084000,"name_pinyin":"","team_id":13,"team_role":"member","merged_into":0,"team_time":1623811084000,"mobile":"+xxxx","mobile_account":"","status":1,"has_password":true,"team":null,"membership":null,"is_seat":true,"team_role_enum":3,"register_time":1623811084000,"alias":"","type":"anoymous"}],"userCount":1,"from":"ws"}}]

  • WS-Gateway — 16 cores / 32 GB, 1 instance — CPU 46.96%, memory 65.6%

Peak connections established: 18,570/s; peak messages received: 329,949/s; peak messages sent: 393,542/s; no abnormality occurred.

3.6 Stress test summary

Under 16-core / 32 GB hardware and 500,000 connections on a single machine, stress tests of the four scenarios above — including users going online/offline and message receipts — showed memory and CPU consumption in line with expectations, and the service remained stable under prolonged load.

The results basically meet the resource-saving requirements at the current scale, and we believe feature development can continue on this foundation.

4. Summary

Facing an ever-growing user base, refactoring the gateway service was imperative.

This refactor mainly covered:

  • 1) decoupling the gateway service from business services and removing the dependency on Nginx, making the overall architecture clearer;
  • 2) analyzing the whole flow from connection establishment to message push by the underlying business, and optimizing each step concretely.

The 2.0 long-connection gateway consumes fewer resources, has lower per-user memory cost, and has a more complete monitoring and alerting system, making the gateway service itself more reliable.

The optimizations above mainly cover:

  • 1) a degradable handshake process;
  • 2) Socket ID generation;
  • 3) optimization of client heartbeat handling;
  • 4) custom headers that avoid message decoding and strengthen tracing and monitoring;
  • 5) structural optimization of the message send/receive code;
  • 6) object resource pools, using caching to reduce GC frequency;
  • 7) serialization and compression of message bodies;
  • 8) integration with the service-observation infrastructure to guarantee stability.

While ensuring gateway performance, a further step was to converge the ways underlying component services call the gateway: from the previous HTTP, Redis, and Kafka calls to unified gRPC calls, ensuring call sources are traceable and controllable, and laying a better foundation for onboarding future business.

A final word: if you have problems, seek advice from an old architect

The road to architecture is full of bumps.

Architecture is different from senior development: architecture problems are open-ended, with no standard answers.

Because of this, many friends, despite spending a great deal of energy and money, regrettably never complete the architecture upgrade in their whole career.

So if, during an architecture upgrade or transition, you really cannot find an effective solution, you can turn to Nien, the 40-year-old architect, for help.

A while ago, a friend who had switched into Java from another major faced the difficulty of moving into architecture; after a few rounds of Nien's guidance, he successfully received Java architect and big-data architect offers. So when your career hits a rough patch, asking an old architect for help makes things much smoother.

Recommended reading

Tens of billions of visits: how to design the cache architecture

Multi-level cache architecture design

Message push architecture design

Alibaba round 2: how many nodes do you deploy? With 10M concurrency, how should you deploy?

Meituan round 2: 99.999% (five-nines) high availability — how to achieve it?

NetEase round 1: 20M TPS on a single node — how does Kafka do it?

ByteDance round 1: transaction compensation and transaction retry — what is the relationship?

NetEase round 1: 250K QPS high-throughput writes to MySQL, 1M rows written in 4 seconds — how?

Hundreds of millions of short videos: how to architect them?

Explosive: passing a JD.com first round by "bragging", 40K monthly salary

Impressive: passing an SF Express first round by "bragging", 30K monthly salary

Brutal: 40 grilling questions in a JD.com first round — pass and it's 500K+

Grilled hard: 27 grilling questions in an Alibaba first round — pass and it's 600K+

Baidu grilled him for 3 hours; big-company offer in hand — this guy is tough!

Ele.me is brutal: interviewing for senior Java, they pull out all the hard, tough stuff

ByteDance grilled him for an hour; offer in hand — brutal!

Landing a Didi offer: from this guy's three interview rounds, what should you learn?

For the PDFs of "Nien Architecture Notes", "Nien High Concurrency Trilogy", and "Nien Java Interview Guide", go to the official account [Technical Freedom Circle] below ↓↓↓


Origin blog.csdn.net/crazymakercircle/article/details/133236681