Evolution of Distributed Architecture

Author: Li Xiaochong
Link: https://www.zhihu.com/question/22764869/answer/31277656
Source: Zhihu
Copyright belongs to the author. For commercial reprints, please contact the author for authorization; for non-commercial reprints, please indicate the source.



System Architecture Evolution History - Initial Phase Architecture


In the initial stage, a small system keeps all resources, such as the application, database, and files, on a single server, in the setup commonly known as LAMP.

Features:
All resources, such as the application, database, and files, reside on one server.

Description:
The server usually runs Linux as its operating system, the application is developed in PHP and deployed on Apache, and the database is MySQL. With a handful of free open-source software packages and one cheap server, development of the system can begin.

Evolution of System Architecture - Separation of Application Services and Data Services


The good times didn't last long: as system traffic kept growing, the pressure on the single machine rose to a fairly high level during peak periods. At this point it was time to consider splitting the application, the database, and the file storage onto separate servers.

Features:
The application, database, and files are each deployed on independent servers.

Description:
As data volume grows, the performance and storage space of a single server become insufficient. Separating the application from the data greatly improves both concurrent processing capability and data storage capacity.

System Architecture Evolution History - Using Cache to Improve Performance


Features:
The small portion of the database's data that is accessed most intensively is kept on a cache server, which reduces the number of database accesses and thus the load on the database.

Description:
System access patterns follow the 80/20 rule (Pareto principle): 80% of business accesses are concentrated on 20% of the data.
Caches come in two kinds: local cache and remote distributed cache. A local cache is faster to access, but the amount of data it can hold is limited, and it contends for memory with the application itself.
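The caching idea above is usually implemented as the cache-aside pattern: check the cache first, fall back to the database on a miss, then populate the cache. The following is a minimal sketch in which a plain `HashMap` stands in for a local or remote cache such as Memcached or Redis, and the loader function stands in for the real database read:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal cache-aside sketch: serve from the cache when possible,
// reach the (simulated) database only on a miss.
public class CacheAside {
    private final Map<String, String> cache = new HashMap<>();
    private int dbHits = 0; // counts how often we had to reach the database

    public String get(String key, Function<String, String> dbLoader) {
        String value = cache.get(key);
        if (value == null) {             // cache miss
            value = dbLoader.apply(key); // expensive database read
            dbHits++;
            cache.put(key, value);       // populate for next time
        }
        return value;
    }

    public int dbHitCount() { return dbHits; }

    public static void main(String[] args) {
        CacheAside c = new CacheAside();
        Function<String, String> db = k -> "row-for-" + k;
        c.get("user:42", db); // miss: goes to the database
        c.get("user:42", db); // hit: served from cache
        System.out.println(c.dbHitCount()); // 1
    }
}
```

With the 80/20 access pattern described above, even a small cache like this absorbs most reads, which is why the database pressure drops so sharply.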

Evolution of System Architecture - Using Application Server Clusters


After the sub-database and sub-table work was done, database pressure dropped to a comfortable level, and life was good: traffic surged day after day. Then suddenly one day, system access started to slow down again. Checking the database showed normal pressure; checking the web server showed Apache blocking a large number of requests. The application server handled each individual request quickly, so the problem was simply too many requests: they had to queue up, and response times grew.

Features:
Multiple servers provide service simultaneously behind a load balancer, which removes the processing-capacity and storage limits of a single server.

Description:
Clustering is the common way for a system to handle high concurrency and massive data. By adding servers to the cluster, the system's concurrent processing capability grows, so that server load is no longer the bottleneck of the whole system.
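The load balancer in front of the cluster can use several strategies; the simplest is round-robin. The sketch below is illustrative only (real deployments use Nginx, LVS, or an F5 appliance), but it shows the core rotation idea:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative round-robin load balancer: requests are spread evenly
// across the application servers in the cluster.
public class RoundRobin {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger(0);

    public RoundRobin(List<String> servers) { this.servers = servers; }

    public String pick() {
        // floorMod keeps the index valid even after integer overflow
        int i = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(i);
    }

    public static void main(String[] args) {
        RoundRobin lb = new RoundRobin(List.of("app-1", "app-2", "app-3"));
        for (int i = 0; i < 4; i++) {
            System.out.println(lb.pick()); // app-1, app-2, app-3, app-1
        }
    }
}
```

Adding capacity then means nothing more than adding an entry to the server list, which is exactly why clusters scale out so easily.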

System Architecture Evolution Process - Database Read and Write Separation


After enjoying rapid growth in system traffic for a while, the system began to slow down yet again. What was it this time? Investigation showed that contention among database connections for write and update operations was very intense, dragging the whole system down.

Features:
The database is split into a master that handles writes and one or more read replicas, with data kept consistent through replication; the data-access layer routes each query to the appropriate side.

Description:
Read/write separation relieves the write contention on a single database: reads, which usually dominate, are served by the replicas, while the master handles writes and updates. This is typically implemented with database master-slave replication plus a routing layer in the application.
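At its core, read/write splitting is a routing decision in the data-access layer. The sketch below is a deliberately simplified illustration ("master" and "replica" are placeholder names; a real implementation would hold JDBC DataSources and handle replication lag):

```java
// Sketch of read/write splitting at the data-access layer:
// SELECTs go to a replica, everything that mutates state goes to the master.
public class ReadWriteRouter {
    public static String route(String sql) {
        String s = sql.trim().toLowerCase();
        // SELECTs can be served by a replica; INSERT/UPDATE/DELETE must hit the master
        return s.startsWith("select") ? "replica" : "master";
    }

    public static void main(String[] args) {
        System.out.println(route("SELECT * FROM orders")); // replica
        System.out.println(route("UPDATE orders SET status = 'paid'")); // master
    }
}
```

A production router must also consider read-your-own-writes consistency (e.g., pinning a session to the master right after a write), which this sketch omits.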

Evolution of System Architecture - Reverse Proxy and CDN Acceleration


Features:
A CDN and a reverse proxy are adopted to speed up access to the system.

Description:
To cope with complex network conditions and users accessing from different regions, a CDN and a reverse proxy accelerate user access while reducing the load on the back-end servers. The basic principle behind both is caching.

Evolution of System Architecture - Distributed File System and Distributed Database


As the system keeps running, data volume grows substantially, and queries against the already-split databases turn slow again. Following the same idea as splitting the database, the work of splitting tables begins.

Features:
The database adopts a distributed database, and the file system adopts a distributed file system.

Description:
No single server, however powerful, can meet the ever-growing business needs of a large-scale system. Read/write separation, too, will eventually fall short as the business develops; at that point a distributed database and a distributed file system are needed to carry the load.
A distributed database is the last resort for splitting a database, used only when single-table data volume is extremely large. The more common splitting method is business sub-database: deploying the databases of different business lines on different physical servers.
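Business sub-database routing is simple in concept: each business line's tables live in their own physical database, and the application picks the right connection by business name. A minimal sketch (the host names and JDBC URLs below are hypothetical examples, not taken from the original text):

```java
import java.util.Map;

// Sketch of business sub-database routing: each business line owns a
// physically separate database. The mapping here is a hypothetical example;
// real systems configure it per deployment.
public class BusinessDbRouter {
    private static final Map<String, String> DB_BY_BUSINESS = Map.of(
        "user",  "jdbc:mysql://db-user:3306/user_db",
        "order", "jdbc:mysql://db-order:3306/order_db",
        "item",  "jdbc:mysql://db-item:3306/item_db");

    public static String dataSourceFor(String business) {
        String url = DB_BY_BUSINESS.get(business);
        if (url == null) {
            throw new IllegalArgumentException("unknown business: " + business);
        }
        return url;
    }

    public static void main(String[] args) {
        System.out.println(dataSourceFor("order"));
    }
}
```

Only when one of these per-business databases still has tables that are too large does table sharding (and eventually a distributed database) come into play, as the description above notes.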

Evolution of System Architecture - Using NoSQL and Search Engines


Features:
The system introduces NoSQL database and search engine.

Description:
As the business grows more and more complex, so do the requirements for data storage and retrieval. The system comes to need non-relational stores such as NoSQL databases and non-database retrieval technologies such as search engines. The application servers access all of these through a unified data access module, which spares the application the trouble of managing many data sources itself.
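The "unified data access module" described above can be sketched as a facade that decides which backend serves each kind of request. The store names below are placeholders for real clients (JDBC, a Redis client, an Elasticsearch client, and so on), and the query types are illustrative assumptions:

```java
// Sketch of a unified data access module: the application asks one facade
// for data, and the facade routes the request to the relational database,
// the NoSQL store, or the search engine.
public class UnifiedDataAccess {
    public static String backendFor(String queryType) {
        switch (queryType) {
            case "transaction": return "relational-db"; // ACID writes and joins
            case "kv":          return "nosql-store";   // high-volume key lookups
            case "fulltext":    return "search-engine"; // text retrieval
            default: throw new IllegalArgumentException("unknown query type: " + queryType);
        }
    }

    public static void main(String[] args) {
        System.out.println(backendFor("fulltext")); // search-engine
    }
}
```

The value of the facade is that application code never hard-codes which physical store holds which data; swapping a backend only changes the routing layer.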

Evolution of system architecture - business split


Features:
The system is split and transformed according to the business, and the application servers are deployed separately according to the business.

Description:
To cope with increasingly complex business scenarios, divide and conquer is usually applied: the whole system's business is split into different product lines. Applications are linked to one another through hyperlinks, data can be distributed between them through message queues, and most often they form one coherent, associated system by accessing the same data storage systems.

Vertical splitting: split one large application into multiple small applications. If a new business is relatively independent, design and deploy it as an independent web application system.

Vertical splitting is relatively simple: the related business is simply carved out on its own.

Horizontal splitting: extract the reused services and deploy them independently as distributed services; new business then only needs to call these distributed services.

Horizontal splitting requires identifying the reusable services, designing the service interfaces, and standardizing the dependencies on those services.
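Horizontal splitting in miniature: a reusable capability is defined as an interface that every business application programs against, while one deployment provides the implementation. The `UserService` name and its method are hypothetical examples; in a framework such as Dubbo, the consumer's reference would be a remote proxy rather than a local object:

```java
// The interface is the published contract of the reusable service.
interface UserService {
    String getUserName(long userId);
}

// One provider implementation, deployed independently of its consumers.
class UserServiceImpl implements UserService {
    @Override
    public String getUserName(long userId) {
        return "user-" + userId; // stands in for a real database lookup
    }
}

public class ServiceSplitDemo {
    public static void main(String[] args) {
        // In a distributed service framework this would be a remote proxy
        // obtained from a registry, not a direct instantiation.
        UserService svc = new UserServiceImpl();
        System.out.println(svc.getUserName(42L)); // user-42
    }
}
```

The key design point is that consumers depend only on the interface, which is what makes independent deployment and upgrade of the provider possible.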


Evolution of System Architecture - Distributed Services


Features:
Common application modules are extracted and deployed on distributed servers for application server calls.

Description:
As the business is split into smaller and smaller pieces, the overall complexity of the application system grows exponentially. Since every application needs to connect to every database system, database connection resources will eventually run out and service will be denied.

Q: What problems will distributed service applications face?

A:
(1) As services multiply, managing service URL configuration becomes very difficult, and the single-point pressure on the F5 hardware load balancer keeps growing.
(2) As things develop further, the dependencies between services become tangled and intricate: it is no longer clear which application must start before which, and architects can no longer fully describe the application's architectural relationships.
(3) Service call volume keeps rising, exposing capacity problems: how many machines does this service need? When should machines be added?
(4) With more services, communication costs start to rise: whom do you contact when a service call fails? What are the conventions for the service's parameters?
(5) A service has multiple business consumers; how is service quality guaranteed?
(6) As services are continually upgraded, unexpected things keep happening, such as a memory overflow caused by a bad cache write. Failures are inevitable, and every time a core service goes down a wide area is affected and everyone panics. How can the impact of a failure be contained? Can the service be functionally degraded? Or its resources degraded?

Java distributed application technology foundation


Key technologies under distributed services: message queue architecture


A message queue decouples the system through message objects: different subsystems process the same message independently.
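The decoupling can be shown with an in-process stand-in for a message queue: the producer only knows the queue, not the consumers, which is exactly how a broker decouples subsystems. A real system would use a message broker (e.g., a RocketMQ- or Kafka-style product) rather than an in-memory queue:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// In-process sketch of message-queue decoupling: producer and consumer
// share only the queue, never a direct reference to each other.
public class QueueDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        // Producer: publishes an event and moves on, without knowing
        // who will consume it or when.
        queue.put("order-created:1001");

        // Consumer: a different subsystem picks the message up later.
        String msg = queue.take();
        System.out.println("consumed " + msg);
    }
}
```

Because the producer never blocks on the consumer's processing, the queue also absorbs traffic spikes, which is the other classic benefit of this architecture.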

Key Technologies under Distributed Services: Message Queuing Principle


Key Technologies under Distributed Services: Service Framework Architecture


A service framework decouples the system through interfaces: different subsystems provide services against the same interface description.
A service framework is a point-to-point model.
A service framework is oriented to homogeneous systems.
Suitable for: mobile applications, Internet applications, and external systems.

Key Technologies under Distributed Services: Principles of Service Framework


Key Technologies under Distributed Services: Service Bus Architecture


Like a service framework, a service bus decouples the system through interfaces: different subsystems provide services against the same interface description.
A service bus is a bus-style model.
A service bus is oriented to both homogeneous and heterogeneous systems.
Suitable for: internal systems.

Key Technologies under Distributed Services: Service Bus Principle


Five communication modes for interaction between systems under distributed architecture

Request/response mode (synchronous): the client initiates a request and blocks until the server returns the response.

Callback (asynchronous mode): the client sends an RPC request to the server; after processing it, the server sends a message to the callback endpoint the sender provided. This suits scenarios such as: component A sends an RPC request to B, and once B finishes processing, A must be notified to do follow-up work.

Future mode: after sending the request, the client goes on with its own work and holds a Future object that will contain the result. When the client needs the result, it calls the Future's .get(); if no result has arrived yet, the call blocks until one does.

Oneway mode: the client continues executing after the call, regardless of whether the receiver succeeds.

Reliable mode: to guarantee reliable communication, a message center is used to achieve reliable message delivery. Requests are persisted and delivered when the receiver comes online, and the message center handles retries on failure.
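As a sketch not tied to any particular RPC framework, the Future mode above can be illustrated with Java's standard `CompletableFuture`: the client fires the request, keeps working, and only blocks when it finally asks for the result.

```java
import java.util.concurrent.CompletableFuture;

// Future-mode sketch: the "remote call" runs on another thread while the
// client continues with other work; get() blocks only if the result has
// not arrived yet.
public class FutureModeDemo {
    public static void main(String[] args) throws Exception {
        CompletableFuture<String> reply = CompletableFuture.supplyAsync(() -> {
            // Simulated remote call executing asynchronously.
            return "response-payload";
        });

        // ... the client does other useful work here instead of waiting ...

        // Blocks now, and only for as long as the result is still pending.
        System.out.println(reply.get());
    }
}
```

The same object also supports the callback mode described above via `thenAccept(...)`, so the two asynchronous styles differ mainly in who drives the continuation: the caller (Future) or the framework (callback).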

Implementation of Five Communication Modes-Synchronous Point-to-Point Service Mode


Implementation of Five Communication Modes - Asynchronous Point-to-Point Messaging Mode 1


Implementation of five communication modes - asynchronous point-to-point message mode 2


Implementation of Five Communication Modes - Asynchronous Broadcast Message Mode


Service Governance under Distributed Architecture
Service governance is the core function of a service framework or service bus. It refers to the agreements between service providers and consumers that guarantee high service quality. Service governance can, for example, direct a certain share of traffic to a certain batch of machines, restrict malicious access by unauthorized consumers, and refuse new requests once a provider's load reaches a threshold.
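One concrete governance measure named above, refusing new requests once the provider is at capacity, can be sketched with a counting semaphore that caps concurrent calls. This is a simplified illustration of the idea, not how any particular framework implements it:

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Sketch of provider-side overload protection: a counting semaphore caps
// the number of concurrent calls; tryAcquire fails fast instead of queueing.
public class ConcurrencyLimiter {
    private final Semaphore permits;

    public ConcurrencyLimiter(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    /** Returns the handler's result, or null when the provider rejects the call. */
    public String call(Supplier<String> handler) {
        if (!permits.tryAcquire()) {
            return null; // over capacity: reject rather than degrade everyone
        }
        try {
            return handler.get();
        } finally {
            permits.release(); // always free the permit, even on exceptions
        }
    }

    public static void main(String[] args) {
        ConcurrencyLimiter limiter = new ConcurrencyLimiter(2);
        System.out.println(limiter.call(() -> "ok"));
    }
}
```

Rejecting excess requests early is what keeps a saturated provider from dragging down its consumers, the same motivation behind the functional and resource degradation questions in the Q&A above.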

Service governance based on the service framework Dubbo - service management
Shows, for your system, how many services are provided externally; you can upgrade, downgrade, or deactivate them, adjust weights, and perform other operations.
Shows who is using the services you provide; driven by business needs, you can block or deactivate a given consumer.

Service Governance Based on Service Framework Dubbo - Service Monitoring


The number of requests per second, average response time, call volume, and peak times of services can be collected as reference metrics for service-cluster capacity planning and performance tuning.

Service Governance-Service Routing Based on Service Framework Dubbo


Service Governance-Service Protection Based on Service Framework Dubbo


Service Governance Based on Service Bus OSB - Function Introduction


Service Governance Based on Service Bus OSB


Q: What on earth is Dubbo?
A:

Taobao's open-source high-performance, transparent RPC remote-call service framework
An SOA service governance solution

Q: What is the principle of Dubbo?
A:

-end-
