The Concentrated Essence of Architecture Evolution

The Design Process of Architecture Evolution

That business drives technology is an enduring truth. In the beginning, the business is small and its complexity low, so the technology it adopts is relatively simple, just enough to meet users' functional needs. As IT becomes ubiquitous and more transactions move onto the network, growing data volumes and frequent access become problems that have to be solved, so caching, clustering, and other technical means are gradually introduced, while the business's demands for scalability and flexibility grow ever more sophisticated. High concurrency, high availability, scalability, extensibility, and sufficient security have become the goals of architectural design. In this article we look at the stages architecture has passed through, the problems each stage solves, and the new problems each in turn raises. The aim is to get everyone thinking: adopting the appropriate technical means at each stage of business development, and embracing change with change, is what IT practitioners pursue.

Application and Data Integrated Mode

The first business applications were websites, OA systems, and the like. The number of visitors was limited, and a single server could handle them. Typically, the application and the database were deployed on the same server, as shown in Figure 1. At this stage the LAMP stack (Linux, Apache, MySQL, PHP) could get a site up quickly, and all of these tools were open source; for a long time there were plenty of open-source templates for building this kind of application. This mode makes essentially no allowance for high concurrency, and its availability is poor. Some servers are hosted, with several different business applications installed on them, and once the server has a problem, every application on it goes down. However, its development and deployment costs are low, which suits a service that has just started. Figure 1 depicts this mode of running the application and the database on a single server; we call it the application and data integrated mode.
Figure 1: Application and data integrated mode

Application and Data Separation Mode

As the business grows, the number of users and requests gradually increases, and server performance problems appear. The simplest solution is to add resources and separate the application from the data storage; the structure is shown in Figure 2. The application server handles large numbers of service requests, so it has real demands on CPU and memory, while the database server stores data and indexes and performs heavy I/O, so disk speed and memory matter more there. This separation solves the performance problem: we add more hardware resources and let each server do its own job, so the system can handle more user requests. The business itself is still coupled, but separating at the hardware level gives much better availability than the all-in-one design.
Figure 2: Application and data separation mode
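To make the change concrete, here is a minimal plain-JDBC sketch of this mode. The host db-server, the schema shop, the credentials, and the orders table are all hypothetical, and a MySQL driver is assumed to be on the classpath; the only real difference from the integrated mode is that the application no longer connects to localhost.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SeparatedDataAccess {
    // The application server now reaches the database over the network,
    // instead of talking to a database on the same machine.
    private static final String DB_URL =
            "jdbc:mysql://db-server:3306/shop"; // hypothetical host and schema

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(DB_URL, "app", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM orders")) {
            if (rs.next()) {
                System.out.println("orders: " + rs.getLong(1));
            }
        }
    }
}
```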

Adding Caching

As the Internet develops and information systems spread, business volume, user numbers, and data volume all keep growing. We also find that some data is requested in especially large amounts, such as news, product information, and trending topics. Previously this information was fetched straight from the database, so the database's I/O performance suffered, and the database became the bottleneck of the whole system, one that simply adding servers is unlikely to resolve. So caching takes the stage; the structure is shown in Figure 3. The caching techniques discussed here divide into the client browser cache, the application server's local cache, and the cache server's cache.

    • The client browser cache: a user's request reaches the server as an HTTP request initiated by the browser. If HTTP responses are cached at the browser, the pressure on the application server is reduced.
    • The application server's local cache: this is an in-process cache, also known as a managed-heap cache. Taking Java as an example, this cache lives on the JVM's managed heap and is therefore affected by the garbage collection algorithm. Because it runs in memory, it responds to requests for data very quickly, and we usually place hot data here. When the in-process cache misses, the request goes to the cache server for the information, and only if that misses too does it go to the database.
    • The cache server's cache: relative to the application server's local cache, this is an out-of-process cache. It can be deployed on the same server as the application or on a different one. In practice, to simplify management and use resources sensibly, it is deployed on dedicated cache servers, and since a cache occupies memory, such servers are configured with comparatively large memory.
Figure 3 depicts the order in which a request consults the caches: first the client-side cache, then the local in-process cache, then the cache server, and finally the database. If the information is found at any level, it is returned and the levels below are not visited; otherwise the lookup proceeds in this order all the way down to the database.
Figure 3: Adding caching
Caching is added to improve the performance of the system. Since the cache sits in memory and memory reads are much faster than disk reads, it can respond to user requests quickly; for hot data the advantage is especially evident. Availability also improves noticeably: even if the database server fails for a short time, the hot or core data stored in the cache server can still satisfy users' access temporarily. Availability will be optimized further in later stages.
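As a concrete illustration of that fall-through order on the server side, here is a minimal sketch of a two-level read-through cache. The in-memory map standing in for the cache server and the loader function standing in for the database are assumptions made to keep the sketch self-contained; a real system would use something like Redis or Memcached for level 2.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class TieredCache {
    // Level 1: in-process cache on the JVM heap (subject to GC).
    private final Map<String, String> localCache = new ConcurrentHashMap<>();
    // Level 2: out-of-process cache server (e.g. Redis), modeled here
    // as a plain map so the sketch stays self-contained.
    private final Map<String, String> remoteCache = new ConcurrentHashMap<>();
    // Level 3: the database, modeled as a loader function.
    private final Function<String, String> database;

    public TieredCache(Function<String, String> database) {
        this.database = database;
    }

    public String get(String key) {
        String value = localCache.get(key);          // 1. in-process cache
        if (value != null) return value;

        value = remoteCache.get(key);                // 2. cache server
        if (value != null) {
            localCache.put(key, value);              // backfill level 1
            return value;
        }

        value = database.apply(key);                 // 3. database
        if (value != null) {                         // backfill both levels
            remoteCache.put(key, value);
            localCache.put(key, value);
        }
        return value;
    }
}
```

Note how each miss backfills the level above it, so a hot item is served from memory on every request after the first.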

Adding Server Clusters

After the first three stages of evolution, the system supports its volume of user requests quite well. In effect, what we have been solving all along is high performance and availability, the core issues that run through the entire evolution of the system architecture. As the volume of user requests keeps growing, another problem emerges: concurrency. The Chinese word for it, 并发, is worth taking apart: 并 can be read as "together, at the same time", and 发 as "to issue", that is, to send a request; together they describe multiple users requesting the application server at the same moment. If what the original system faced was merely a large amount of data, what we now face is many users requesting simultaneously. Under the previous stage's architecture, a single application server can no longer meet the demands of high concurrency. At this point the server cluster joins the battle; the structure is shown in Figure 4. A server cluster simply means putting multiple servers together, using more servers to share the load of a single server and improve performance and availability. Put plainly, it increases the number of requests processed per unit of time: what used to be handled by one server is now handled by a group of them, much as a bank adds tellers to serve more customers.
Figure 4: Adding server clusters
Compared with the previous stage, this architecture increases the number of application servers, forming a cluster. The application deployed on each server does not change; what is added between the users and the servers is a load balancer, which routes each user request to the appropriate server. Adding servers is a sign that the system's bottleneck lies in handling users' concurrent requests: the database and the cache are untouched, so simply increasing the number of servers is enough to relieve the request pressure. Requests that a single server once had to process alone are now shared across the cluster, and with the system running on many servers at once it can handle large numbers of users' simultaneous requests. Three cobblers together outdo a Zhuge Liang: in this sense, the hardware requirements on any single server in the cluster are reduced. Note the load balancer's balancing algorithms, such as round-robin and weighted round-robin. We need to ensure that user requests are evenly distributed across the servers, that requests belonging to the same session are handled on the same server, and that traffic can be adjusted dynamically according to the strengths and weaknesses of each server's resources. Moreover, since the load balancer sits between the Internet and the application servers and handles all incoming user traffic, it can also monitor that traffic and authenticate users' identities and permissions.
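Here is a minimal sketch of the two scheduling algorithms just named, round-robin and weighted round-robin, with a hard-coded server list as an assumption for illustration; production load balancers such as Nginx or HAProxy implement these (plus session affinity and health checks) out of the box.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinBalancer {
    private final List<String> servers;  // e.g. ["app-1", "app-2", "app-3"]
    private final int[] weights;         // per-server weights for the weighted variant
    private final AtomicInteger plain = new AtomicInteger();
    private final AtomicInteger weighted = new AtomicInteger();

    public RoundRobinBalancer(List<String> servers, int[] weights) {
        this.servers = servers;
        this.weights = weights;
    }

    // Plain round-robin: hand out servers strictly in rotation.
    public String next() {
        int i = Math.floorMod(plain.getAndIncrement(), servers.size());
        return servers.get(i);
    }

    // Weighted round-robin: a server with weight 3 is chosen three times
    // as often as a server with weight 1.
    public String nextWeighted() {
        int total = 0;
        for (int w : weights) total += w;
        int slot = Math.floorMod(weighted.getAndIncrement(), total);
        for (int i = 0; i < servers.size(); i++) {
            slot -= weights[i];
            if (slot < 0) return servers.get(i);
        }
        return servers.get(0); // unreachable when all weights are positive
    }
}
```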

Database Read/Write Splitting

The cache we added solves part of the problem of reading hot data, but cache capacity is limited, and data outside the hot set is still read from the database. The database does not perform the same for writes as for reads. Writing data takes row locks or table locks, and any other write operations arriving concurrently at that moment have to queue; reads are faster than writes and can be accelerated through indexes, database caching, and the like. Hence the introduction of database read/write splitting, whose structure is shown in Figure 5. A master/slave pair of databases is provided: the master is used mainly for writing data, and updates are synchronized to the slave by replaying the binlog. The application server only needs to access the master when writing data, and only the slave when reading it.
Figure 5: Database read/write splitting
With read/write splitting, the database's read and write duties are separated. Since reads are the more efficient side, more slaves can be added to serve users' read requests; after all, in real-world scenarios most operations are reads. In addition, from a technical standpoint the data synchronization can be synchronous replication, asynchronous replication, or semi-synchronous replication. While enjoying the benefits of read/write splitting, the architecture must also consider reliability. For example, if the master goes down, how does a slave take over its work? After the master recovers, does it become a slave or resume as master, and how does it resynchronize data with the slaves?
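A minimal sketch of what this routing looks like from the application's side, assuming one master URL, a list of slave URLs, and hypothetical credentials; middleware such as Sharding JDBC (discussed later) can perform the same routing transparently, but the core decision is just that writes go to the master and reads go to a slave.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class ReadWriteRouter {
    private final String masterUrl;       // e.g. "jdbc:mysql://master:3306/shop"
    private final List<String> slaveUrls; // e.g. ["jdbc:mysql://slave1:3306/shop", ...]

    public ReadWriteRouter(String masterUrl, List<String> slaveUrls) {
        this.masterUrl = masterUrl;
        this.slaveUrls = slaveUrls;
    }

    // All writes go to the master, which replicates to the slaves via binlog.
    public Connection writeConnection() throws SQLException {
        return DriverManager.getConnection(masterUrl, "app", "secret");
    }

    // Reads are spread across the slaves. With asynchronous replication,
    // a read may briefly see data that lags behind the master.
    public Connection readConnection() throws SQLException {
        String url = slaveUrls.get(ThreadLocalRandom.current().nextInt(slaveUrls.size()));
        return DriverManager.getConnection(url, "app", "secret");
    }
}
```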

Reverse Proxy and CDN

As the Internet has grown more popular, people's requirements for network security and user experience have risen. Previously, users reached the application server directly through their clients, which left the application server exposed to the Internet and vulnerable to attack. If a reverse proxy server is placed between the application server and the Internet, it receives the user's request and forwards it to the application server on the internal network, serving as a buffer between the external and internal networks. The reverse proxy only forwards requests; no application runs on it, so when someone attacks it, the application servers on the internal network are unaffected. This effectively shields the application servers and improves security. The proxy also plays a role in adaptation and speed conversion between the Internet and the intranet. For example, if the application server needs to serve both the public network and the education network, which run at different speeds, you can place two reverse proxy servers between the application server and the Internet, one connected to the public network and the other to the education network, shielding the difference between the networks and serving more user groups. In Figure 6, clients come from two different networks, the public network and a campus network; since the two networks' access speeds differ, a proxy server is provided for each, and users on either network reach the system through the proxy for their network.
Figure 6: Adding reverse proxy servers
Having talked about reverse proxies, let's talk about the CDN. Its full name is Content Delivery Network. If you imagine the Internet as a great web, then every server and every client is a node in a distributed network. The distances between nodes vary, and a user's request jumps from one node to another until it finally reaches the application server and obtains its information. The fewer hops the request makes, the faster the information arrives, so information can be stored on nodes close to the client, letting the user fetch it through fewer hops. Because this kind of information is not updated frequently, it is recommended to store static data this way: JavaScript files, static HTML, image files, and so on. The client can then fetch these resources from the nearest network node, which greatly improves the user experience and transfer efficiency. The architecture after adding a CDN is shown in Figure 7.
Figure 7: Adding a CDN
Adding the CDN noticeably speeds up users' access and reduces the pressure on the application server: requests that once had to travel through layers of network to reach the application server directly can now find the nearest network node holding the resource. From the perspective of the requested resource, however, this approach has its limits: it works only for static resources, and the resources on the CDN servers must be refreshed periodically. Together, the reverse proxy and the CDN address security, availability, and performance.
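What tells the CDN edge and the browser that a static resource may be kept is largely its Cache-Control header. Below is a minimal sketch of an origin server marking a resource as cacheable, using the JDK's built-in HttpServer; the path, body, and max-age values are assumptions for illustration, and a real setup would more likely configure this at the reverse proxy or CDN itself.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class StaticOriginServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // Static content: safe for the CDN edge node and the browser to cache.
        server.createContext("/static/logo.txt", exchange -> {
            byte[] body = "pretend this is an image".getBytes(StandardCharsets.UTF_8);
            // max-age applies to the browser; s-maxage applies to shared
            // caches such as CDN nodes. Both say how long the copy may be
            // served without revisiting the origin.
            exchange.getResponseHeaders().set("Cache-Control",
                    "public, max-age=3600, s-maxage=86400");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("origin listening on :8080");
    }
}
```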

Distributed Database: Splitting Tables and Databases

After the preceding stages, the system's software architecture is relatively stable. As the system's uptime grows, the data accumulated in the database grows with it, and the system also records large amounts of process data, such as operation records and logs, which adds to the database's burden. Even with indexes and caches in place, the database is stretched thin when querying massive amounts of data. If read/write splitting allocates database resources along the read/write dimension, then the distributed database allocates resources along the business and data dimensions.

    • For data tables: when a table contains too many records, split it into multiple tables for storage. For example, ten million member records can be divided into two tables of five million each. Tables can also be split by column, storing some of a table's columns in another table linked to the main table by a foreign key; the columns split off this way are usually data that is accessed infrequently.
    • For databases: the maximum number of connections each database can withstand, and each connection pool, have upper limits. To improve the efficiency of data access, the database is divided according to business needs, so that different services access different databases. Of course, this may mean the same data being stored by different services in different databases.
When these split databases are placed on different database servers, we have a distributed database design. Since the tables and databases differ and may even sit on different servers, the code that performs database operations grows more complex; database middleware can be introduced at this point to hide these differences. In the architecture of Figure 8, the split data is placed in tables 1 and 2, which reside on different database servers, and the synchronization of data between the databases must also be considered. Because the data is deployed in a dispersed way, the business application relies on the database middleware to help it fetch data.
Figure 8: Distributed database with split tables and databases
Splitting tables and databases, and distributed design in general, bring a performance improvement but also increase the difficulty of managing and accessing the database. Where we used to access one table in one database, we now need to work across multiple tables and multiple databases.
From the software programming point of view, database middleware offers some best practices, for example MyCat and Sharding JDBC. In addition, from the point of view of managing the database servers, their availability needs to be monitored; and from the point of view of managing the data, capacity expansion and data governance must be considered.
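Here is a minimal sketch of the kind of routing such middleware performs internally, assuming a hypothetical member table split in two across two database servers and keyed by member id; MyCat and Sharding JDBC exist precisely so the application never has to write this by hand.

```java
public class ShardRouter {
    private static final long TABLE_COUNT = 2; // member_0, member_1
    private static final String[] DB_URLS = {  // hypothetical shard servers
            "jdbc:mysql://db-0:3306/shop",
            "jdbc:mysql://db-1:3306/shop"
    };

    // Which physical table holds this member's row.
    static String tableFor(long memberId) {
        return "member_" + Math.floorMod(memberId, TABLE_COUNT);
    }

    // Which database server that table lives on (one shard per server here).
    static String databaseFor(long memberId) {
        return DB_URLS[(int) Math.floorMod(memberId, (long) DB_URLS.length)];
    }

    public static void main(String[] args) {
        long memberId = 8_000_001L;
        // Middleware like MyCat or Sharding JDBC rewrites
        // "SELECT ... FROM member WHERE id = ?" to hit the right table and server.
        System.out.println("member " + memberId + " -> "
                + tableFor(memberId) + " on " + databaseFor(memberId));
    }
}
```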

Business Split

With the problem of storing massive data solved, the system can store more data, which means it can handle more business. Growing business volume and visit numbers are a severe test that any software system faces at all times. From the stages studied so far, we know that systems basically scale by trading space for time, using more resources and space to handle more user requests. As business complexity increases and high concurrency arrives, some companies begin to split the business system apart and deploy the pieces separately; a diagram of this stage is shown in Figure 9. If the earlier server-cluster mode copies the same application onto different servers, business splitting divides one application into several and deploys them onto different servers. On top of that, the core applications can be scaled horizontally across multiple servers. Although the application has been split, associations between the pieces remain, and with them come problems of calling, communication, and coordination between applications. For this reason, middleware such as message queues, service registration and discovery, and message centers are introduced; they help manage the services distributed across different servers and network nodes.
Figure 9: Business split
After the split, the business forms a collection of application services: domain business services such as a goods service and an order service, as well as basic services such as message push and permission validation. These applications and services, along with the database servers, are distributed across different containers, servers, and network nodes, and their communication, coordination, management, and monitoring all become problems we need to solve.
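A minimal in-memory sketch of the register-then-discover pattern that such middleware provides, with hypothetical service names and addresses; real deployments would use a registry such as ZooKeeper, Eureka, or Consul, plus health checks and change notifications.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ThreadLocalRandom;

public class ServiceRegistry {
    // service name -> instances currently offering it
    private final Map<String, List<String>> instances = new ConcurrentHashMap<>();

    // Each split-out application registers itself on startup.
    public void register(String service, String address) {
        instances.computeIfAbsent(service, s -> new CopyOnWriteArrayList<>()).add(address);
    }

    // Callers look the service up instead of hard-coding an address.
    public String discover(String service) {
        List<String> addrs = instances.get(service);
        if (addrs == null || addrs.isEmpty()) {
            throw new IllegalStateException("no instance of " + service);
        }
        return addrs.get(ThreadLocalRandom.current().nextInt(addrs.size()));
    }

    public static void main(String[] args) {
        ServiceRegistry registry = new ServiceRegistry();
        registry.register("order-service", "10.0.0.11:8080"); // hypothetical addresses
        registry.register("order-service", "10.0.0.12:8080");
        registry.register("goods-service", "10.0.0.21:8080");
        System.out.println("order call goes to " + registry.discover("order-service"));
    }
}
```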

Distributed Architecture and Microservices

In recent years the microservices architecture has been the most popular approach. It cuts business applications even more finely, into still smaller business modules, achieving high cohesion within each module and low coupling between them. Each module can exist independently and be maintained by a separate team; internally, each module can adopt specific technologies without worrying about the technology of other modules. Modules are deployed and run in containers and call each other through interfaces and protocols, and any module can open its capabilities for other modules to call. Hot modules can be scaled horizontally to improve the system's performance, and when one instance of a module has a problem, another identical module can take over its work, improving availability.
Summing up, microservices have the following characteristics: finely divided services, autonomy, technological heterogeneity, high performance, and high availability. Microservices resemble a distributed architecture, so let's look at the difference between them, as shown in Figure 10.
Conceptually both perform a "split", but they differ in the following respects.

    • Different purposes of splitting: the distributed architecture is designed to solve the problem of a monolithic application's limited resources, where one server cannot support higher user traffic, so the application is broken into parts and deployed to different servers to share the pressure of high concurrency. Microservices componentize services at a finer grain in order to decouple them better, so that combining services yields high performance, high availability, scalability, and extensibility.
    • Different ways of splitting: the distributed architecture splits the system by business and technical categories; the goal is to take the services the original monolith carried and let each business stand on its own. Microservices split more finely on that distributed basis, into still smaller service modules, more specialized and with a finer division of labor, each module able to run independently.
    • Different deployment: after splitting, a distributed system's services are usually deployed on different servers. Microservices may likewise place different service modules on different servers, but several microservices may also be deployed on one server, and the same microservice may run as multiple copies, deployed in multiple containers.
Figure 10: The differences between distributed architecture and microservices
Although distributed architecture and microservices differ in the ways above, in practice they are both built on the idea of distributed architecture: microservices are an evolved version of the distributed architecture, and a subset of it. They face the same problems of service splitting, inter-service communication, coordination, management, scheduling, and so on.

Summary

Following the idea that technology changes with the business, this article has described the development from the monolithic architecture to clusters, and on to the distributed architecture and microservices. It has covered the characteristics of the software architecture at each stage of change and the cause-and-effect relationships between successive architectures, showing that software architecture will always evolve in whatever direction the business develops, pursuing the goals of high performance, high availability, scalability, extensibility, and security.


Source: www.cnblogs.com/liu2020/p/12570271.html