A walk through the evolution of software system architecture

A mature system is not perfect in every respect from day one, nor does it address high concurrency and high availability up front. As time passes, the problems of the existing architecture gradually surface: the number of users surges, traffic keeps climbing, and new problems keep appearing. To solve them, the technical architecture undergoes major changes, and systems with different business characteristics have different points of emphasis. For example, Taobao and similar sites must solve massive-scale product search and order payment, while Tencent must solve message delivery for hundreds of millions of users. Each business ends up with its own system architecture.

Let's take a simple Java Web e-commerce system as an example. Assume it has three business modules: a user module, a product module, and a payment module.

Stage 1: Monolithic architecture

In the early days of a website, everything often runs on a single machine: all functionality lives in one JAR, and the database and the application share one server. The initial focus is development efficiency, and in the early Internet era, when user numbers were small, a monolithic architecture could support the load.

Stage 2: Separate the application server and the database server

As the website goes live and traffic grows, so do the performance demands on the server. A web server mainly handles network connections and resource requests, so it needs high bandwidth, high concurrency, and plenty of memory, while its CPU requirements are comparatively modest. Optimizations made for a web server are clearly unsuitable for a database server, whose main job is to execute SQL statements and manage the data stored on disk: it performs heavy disk I/O and places extremely high demands on the buffer pool. In short, the web server and the database server have different roles and different optimization points; forcing them onto one machine seriously hurts the performance of both. So, gradually, the application and the database are deployed on separate servers.

Stage 3: Application server cluster

As time passes, traffic keeps growing and a single server can no longer meet demand. If the database server has not yet hit its bottleneck, we can add application servers and distribute user requests across them as a cluster, increasing load capacity. Nginx's reverse proxy and load balancing can route each user request to one of the cluster's servers. At this point the application servers do not interact with each other directly; they all interact through the shared database.
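The load-balancing idea can be sketched in a few lines. This is a minimal round-robin dispatcher, the same default policy Nginx's upstream module uses; the server addresses are made up for illustration.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin load balancer: each incoming request is handed to
// the next application server in turn.
class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger counter = new AtomicInteger(0);

    RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    // Pick the next server; floorMod keeps the index valid even after
    // the counter overflows.
    String next() {
        int i = Math.floorMod(counter.getAndIncrement(), servers.size());
        return servers.get(i);
    }
}

class Stage3Demo {
    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(
                List.of("app-1:8080", "app-2:8080", "app-3:8080"));
        for (int r = 0; r < 4; r++) {
            System.out.println("request " + r + " -> " + lb.next());
        }
    }
}
```

In a real deployment the dispatching happens inside Nginx, not in application code; the sketch only shows the policy.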

Stage 4: Database read-write separation

Architecture evolution does not end here. The steps above improved the performance of the application layer, but the database load has become too heavy. To improve database performance, we follow the same idea as before: deploy the database as a cluster and spread database requests across multiple machines. However, once the database is clustered, problems such as data synchronization, read-write separation, and splitting databases and tables need to be solved.
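Read-write separation boils down to a routing decision per statement. Here is a minimal sketch, assuming one primary and a list of replicas (the server names are invented); real setups usually do this in a data-access layer or middleware such as MyCat or ShardingSphere.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Sketch of read-write separation: writes go to the primary database,
// reads are spread across the replicas.
class ReadWriteRouter {
    private final String primary;
    private final List<String> replicas;

    ReadWriteRouter(String primary, List<String> replicas) {
        this.primary = primary;
        this.replicas = replicas;
    }

    // Route by statement type: SELECTs may hit any replica; everything
    // else (INSERT/UPDATE/DELETE/DDL) must hit the primary.
    String route(String sql) {
        if (sql.trim().toLowerCase().startsWith("select")) {
            return replicas.get(ThreadLocalRandom.current().nextInt(replicas.size()));
        }
        return primary;
    }
}

class Stage4Demo {
    public static void main(String[] args) {
        ReadWriteRouter router = new ReadWriteRouter(
                "db-primary:3306", List.of("db-replica-1:3306", "db-replica-2:3306"));
        System.out.println(router.route("SELECT * FROM product"));
        System.out.println(router.route("UPDATE product SET stock = stock - 1"));
    }
}
```

The sketch ignores replication lag: a read right after a write may not see the new data on a replica, which is one of the data-synchronization problems mentioned above.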


Stage 5: Use a search engine to relieve pressure on the read database

A relational database used as a read store handles fuzzy (LIKE) queries poorly, yet search is a core feature of an e-commerce site, and read-write separation alone cannot solve this effectively. So at this point a search engine is introduced. A search engine greatly improves query efficiency, but it also brings its own problems, such as index maintenance.
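The reason a search engine is fast is the inverted index. The toy version below maps each word to the ids of the products whose description contains it, so a keyword lookup becomes a map access instead of a full-table scan with `LIKE '%word%'`; engines such as Elasticsearch build on this same structure (with tokenization, ranking, and much more on top).

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Tiny inverted index: word -> set of product ids containing that word.
class InvertedIndex {
    private final Map<String, Set<Integer>> postings = new HashMap<>();

    // Index a product description, word by word.
    void add(int productId, String description) {
        for (String word : description.toLowerCase().split("\\s+")) {
            postings.computeIfAbsent(word, k -> new HashSet<>()).add(productId);
        }
    }

    // Keyword lookup is a single map access, independent of table size.
    Set<Integer> search(String word) {
        return postings.getOrDefault(word.toLowerCase(), Set.of());
    }
}

class Stage5Demo {
    public static void main(String[] args) {
        InvertedIndex index = new InvertedIndex();
        index.add(1, "red running shoes");
        index.add(2, "blue running jacket");
        System.out.println(index.search("running")); // both products match
    }
}
```

The index-maintenance cost mentioned above is visible even here: every product insert or update must also update the postings map.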

Stage 6: Introduce a cache to relieve database pressure (hot data)

As traffic keeps growing, many users end up accessing the same small set of data. There is no need to query the database on every request for such hot data; we can use caching technologies such as Memcached or Redis as an application-layer cache. Some data fits neither model well: for example, counters used to limit the access frequency of certain user IPs are awkward to keep purely in memory and too cumbersome to keep in a relational database. In such scenarios a NoSQL store such as MongoDB can be used in place of the traditional database.
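The usual way to wire in such a cache is the cache-aside pattern: look in the cache first and only fall through to the database on a miss. A minimal sketch, with a `HashMap` standing in for Redis/Memcached and a function standing in for the real database query:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Cache-aside pattern: check the cache, fall through to the database on a
// miss, then populate the cache so the next read is served from memory.
class CacheAside {
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> database; // stand-in for a DB query
    int dbHits = 0; // counts how often we actually reached the "database"

    CacheAside(Function<String, String> database) {
        this.database = database;
    }

    String get(String key) {
        String value = cache.get(key);
        if (value == null) {            // cache miss
            value = database.apply(key);
            dbHits++;
            cache.put(key, value);      // warm the cache for next time
        }
        return value;                   // cache hit on subsequent calls
    }
}

class Stage6Demo {
    public static void main(String[] args) {
        CacheAside cache = new CacheAside(key -> "row-for-" + key);
        cache.get("hot-product"); // miss: goes to the database
        cache.get("hot-product"); // hit: served from memory
        System.out.println("database hits: " + cache.dbHits); // prints 1
    }
}
```

A production cache also needs expiry and an invalidation strategy on writes, which this sketch omits.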

Stage 7: Vertical and horizontal database splitting

As the website evolves, the user, product, and transaction data still live in the same database. Even with caching and read-write separation in place, the pressure on the database keeps rising and it remains a major bottleneck. So we consider splitting the data vertically and horizontally:

Vertical splitting: Split different business data in the database into different databases
Horizontal splitting: Split the data in the same table into different databases.
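Vertical splitting is mostly a matter of pointing each module at its own database; horizontal splitting additionally needs a rule for deciding which shard a row lives on. A common choice is hashing the sharding key, sketched below; the database names and the choice of user id as the key are illustrative assumptions.

```java
// Horizontal split: rows of one logical table are spread across several
// physical databases by taking the user id modulo the shard count.
class ShardRouter {
    private final int shardCount;

    ShardRouter(int shardCount) {
        this.shardCount = shardCount;
    }

    // The same user id always maps to the same shard, so all of one
    // user's orders stay together.
    String shardFor(long userId) {
        return "order_db_" + Math.floorMod(userId, (long) shardCount);
    }
}

class Stage7Demo {
    public static void main(String[] args) {
        ShardRouter router = new ShardRouter(4);
        System.out.println(router.shardFor(10001L)); // order_db_1
    }
}
```

The trade-off is that queries spanning shards (and changing the shard count later) become much harder, which is why middleware such as ShardingSphere exists.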

Stage 8: Microservice split

As the business develops, more and more applications put pressure on the servers, and keeping every function in one service becomes inconvenient: any change forces the whole system to be rebuilt, retested, and redeployed, and a traffic spike in one business, such as product detail pages, can bring down all the other services with it. As a result, the microservice architecture has become popular.
The system is divided along business lines, and each business becomes an independent project that can run on its own; combined, they form the whole system. Each microservice is essentially its own independent project, with its own corresponding R&D team. This structure lets microservices be developed in parallel and iterated quickly, avoiding the development-stage bottleneck of pouring all R&D into one near-single-point project. That independence also means efficient development and better fault isolation: even if one service fails, the others keep running.
Of course, this is only a brief introduction. As the system's business deepens, it may introduce still more components, such as MQ, HBase, and so on, not only Redis and Elasticsearch.
And of course the Internet keeps developing: the microservice architecture is not the end, and the optimization and evolution of architectures continue.


Origin blog.csdn.net/qq_45171957/article/details/123930895