Distributed + microservice system architecture foundations (1)

(Architecture diagram)
Step 1: The client sends a request.
Step 2: Nginx performs load balancing: it receives each request and forwards it to one of the servers in the cluster.
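In practice this distribution is handled by Nginx's upstream configuration rather than application code; the short Python sketch below is only meant to illustrate the round-robin idea, and the server addresses in it are made-up placeholders.

```python
# Minimal sketch of round-robin load balancing (the job Nginx does in step 2).
# The server addresses are placeholders, not real hosts.
from itertools import cycle

SERVERS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]
_next_server = cycle(SERVERS)

def dispatch(request_id: int) -> str:
    """Hand the next incoming request to the next server in the cluster."""
    server = next(_next_server)
    print(f"request {request_id} -> {server}")
    return server

if __name__ == "__main__":
    for i in range(6):   # six requests spread evenly over three servers
        dispatch(i)
```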
Step 3: After a server receives the request, it may call interfaces exposed by other servers in the cluster in order to complete the overall business logic.
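As a rough illustration of such a cross-service call, the sketch below has one service invoke another service's HTTP interface. The `order-service` address, the `/orders` path, and the payload fields are all hypothetical, and the `requests` library plus a reachable service are assumed.

```python
# Hypothetical sketch of one service calling another service's interface (step 3).
import requests

ORDER_SERVICE = "http://order-service:8080"   # made-up internal service address

def place_order(user_id: int, product_id: int) -> dict:
    """Call the (hypothetical) order service's create-order interface."""
    resp = requests.post(
        f"{ORDER_SERVICE}/orders",
        json={"userId": user_id, "productId": product_id},
        timeout=3,   # always bound cross-service calls with a timeout
    )
    resp.raise_for_status()
    return resp.json()
```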
Step 4: Perform database read (query) or write operations.

  • Database read operation: first check whether the target data exists in a cache database such as Redis; if it is not there, query the relational database and then store the result in the cache, which speeds up subsequent queries. Each cached entry has an expiration (clearing) time: if nobody queries the data before it expires, it is cleared, and if it is queried again within that period, the expiration is postponed by one full cycle from the time of that second query (a cache-aside sketch follows after this list);
    • Queries may run into cache breakdown or cache penetration:
      • Cache breakdown: data that is normally hot-loaded in the cache is pushed out (for example, replaced after malicious queries), so a large number of normal users' requests suddenly fall through to the database and can bring it down.
      • Cache penetration: the cache holds hot data (frequently queried content), but the requested keyword is not among that hot data, so the second step of the lookup, querying the database itself, is required every time.
  • Database write operation: after writing to the primary (master) database, the change must be synchronized to the replica (slave) databases.
  • There are usually multiple replicas, because the database is written far less often than it is read, so reads can be spread across them (see the read/write-splitting sketch below);
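The read path in the first bullet is the classic cache-aside pattern. The sketch below assumes a local Redis reachable through the redis-py client and uses a small in-memory dict as a stand-in for the real database; it refreshes the TTL on every hit (the "postponed by one cycle" behaviour) and caches an empty marker for keys missing from the database, which also softens cache penetration.

```python
# Cache-aside read sketch: check Redis first, fall back to the database,
# then cache the result with a TTL that is refreshed on every hit.
# Assumes a local Redis; FAKE_DB stands in for the real relational database.
import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
TTL_SECONDS = 600            # "clearing time": entries expire if nobody reads them
NULL_MARKER = "__NULL__"     # cached for keys that do not exist in the database

FAKE_DB = {"user:1": {"id": 1, "name": "alice"}}   # stand-in for the relational DB

def query_db(key: str):
    """Stand-in for the real database lookup; returns None when the row is missing."""
    return FAKE_DB.get(key)

def get_data(key: str):
    cached = cache.get(key)
    if cached is not None:                        # cache hit
        cache.expire(key, TTL_SECONDS)            # postpone expiry by one more cycle
        return None if cached == NULL_MARKER else json.loads(cached)

    value = query_db(key)                         # cache miss: query the database
    if value is None:
        cache.set(key, NULL_MARKER, ex=TTL_SECONDS)    # blunt cache penetration
        return None
    cache.set(key, json.dumps(value), ex=TTL_SECONDS)  # speed up the next query
    return value
```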
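For the write path and the multiple replicas, the replication itself is done by the database (for example MySQL primary/replica replication); the hypothetical sketch below only shows the routing decision an application or middleware layer makes, sending writes to the primary and spreading reads over the replicas.

```python
# Read/write splitting sketch: writes go to the primary, reads to a random replica.
# The connection strings are hypothetical placeholders.
import random

PRIMARY = "mysql://primary:3306/app"
REPLICAS = ["mysql://replica1:3306/app", "mysql://replica2:3306/app"]

def pick_node(sql: str) -> str:
    """Route a statement: writes to the primary, reads to any replica."""
    verb = sql.lstrip().split(" ", 1)[0].upper()
    return PRIMARY if verb in {"INSERT", "UPDATE", "DELETE"} else random.choice(REPLICAS)

print(pick_node("SELECT * FROM orders WHERE id = 1"))    # -> one of the replicas
print(pick_node("INSERT INTO orders (id) VALUES (1)"))   # -> the primary
```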

Step 5: After the core business processing is completed, the follow-up tasks in the business flow (work that does not have to happen immediately) are placed on a message server and processed at the scheduled time.
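As a sketch of handing follow-up work to a message server, the example below publishes a task to RabbitMQ with the pika client; the queue name and task fields are hypothetical, and a separate consumer process would pick the message up and handle it later.

```python
# Sketch of deferring follow-up work (step 5) by publishing it to a message queue.
# Assumes a local RabbitMQ and the pika client; queue name and payload are made up.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="followup_tasks", durable=True)   # survive broker restarts

task = {"order_id": 1, "action": "send_confirmation_email"}
channel.basic_publish(
    exchange="",
    routing_key="followup_tasks",
    body=json.dumps(task),
    properties=pika.BasicProperties(delivery_mode=2),          # persistent message
)
connection.close()
```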


Origin blog.csdn.net/TDLDDMZ/article/details/127786491