"Routines" of Architecture

    Scanning the titles on recruitment websites and at technical conferences, the keywords of the past few years were big data, distributed systems, tens of millions of users, and hundred-million-level PVs. This year they have become artificial intelligence and blockchain.

    Buzzwords change every year or two, but what actually changes is usually just the scene or the technology label: the same old wine. Where did the hundred thousand newly minted "XX technical experts" of the past year come from? New bottles, old wine.

Case 1:

    Someone recently asked: how do you build a flash-sale system for tens of millions of users?
    
    A short answer:
         1. Push static files to a CDN in advance; keep the logic stateless and re-entrant/idempotent;
         2. Decrement stock with atomic in-memory operations (e.g. Redis DECR);
         3. Break the front-end interaction into finer steps, stretching the timeline of user actions to flatten the system's peak load;
         4. Handle user interaction asynchronously, both the front-end response and the back-end processing (via an MQ);
         5. At the right point, persist to the back end and generate a corresponding order; all later interactions only need the order ID.
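Step 2 above can be sketched in plain Python. This is a minimal stand-in, not a real Redis client: the lock below emulates the atomicity that a single Redis instance gives a `DECR` (in production you would issue `DECR stock` and roll back with `INCR`, or wrap the check-and-decrement in a short Lua script).

```python
import threading

class StockCounter:
    """In-memory stock counter emulating an atomic Redis DECR.

    The lock stands in for the single-threaded atomicity Redis
    provides; the check-then-decrement never interleaves.
    """

    def __init__(self, initial_stock: int):
        self._stock = initial_stock
        self._lock = threading.Lock()

    def try_reserve(self) -> bool:
        """Atomically take one unit of stock; False when sold out."""
        with self._lock:
            if self._stock <= 0:
                return False
            self._stock -= 1
            return True

# 100 units of stock, 500 concurrent buyers: exactly 100 can succeed.
counter = StockCounter(100)
results = []
results_lock = threading.Lock()

def buyer():
    ok = counter.try_reserve()
    with results_lock:
        results.append(ok)

threads = [threading.Thread(target=buyer) for _ in range(500)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(results))  # 100 — never oversold
```

The point of the pattern is that the hot path touches only memory; persisting the resulting order (step 5) happens afterwards, off the peak.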


    Follow-up 1:
         What if a single Redis instance cannot handle the load?
    Response: 
         1. Run multiple Redis instances, split the stock among them, and spread the request load on the front end (e.g. with LRU-style routing);
         2. With several independently changing stock counters (each Redis instance holds its own dynamic stock variable), the "remaining stock" shown to users is bound to be inaccurate, but that does not block the user flow (think of 12306 during the Spring Festival rush; who hasn't experienced that? :D)
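A hypothetical sketch of this sharding idea, with plain lists standing in for separate Redis servers: total stock is split evenly across N shards, each request is routed to one shard, and a dry shard simply rejects even if another shard still has stock. That trade-off keeps every decrement local to one instance.

```python
# Illustrative only: `shards` models N independent Redis instances,
# each holding its slice of the stock. Routing by user_id is one
# simple, stable choice; random routing would also work.

NUM_SHARDS = 4
TOTAL_STOCK = 100
shards = [TOTAL_STOCK // NUM_SHARDS] * NUM_SHARDS

def try_reserve(user_id: int) -> bool:
    shard = user_id % NUM_SHARDS      # cheap, stable per-user routing
    if shards[shard] > 0:
        shards[shard] -= 1            # local decrement on one "instance"
        return True
    return False

def displayed_remaining() -> int:
    # Summing across shards is only a snapshot; by the time the user
    # sees it, it is already stale -- hence the inaccurate counter.
    return sum(shards)

sold = sum(try_reserve(uid) for uid in range(1000))
print(sold)  # 100 — total never exceeds the stock
```

The displayed counter is approximate by construction, which is exactly the point made above: accuracy of the visible number is sacrificed so that the hot decrement never crosses instances.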
 
     Follow-up 2:
          Considering global users, how do you handle reads and writes that do not happen in the same place (with latency in between)?
     Response:
         (What? You're digging a trap on purpose. No sane design lets reads and writes come from regions with noticeable latency between them... but let's continue anyway.)
          1. Stamp the global data with a time-related sequence ID (clock issues are a separate topic, not covered here);
          2. After a successful write, the back end returns an ID1, which is saved locally (browser or client, either works);
          3. When issuing a read, send ID1 along; if the data read back has an IDn greater than ID1, use it; if it is smaller than ID1, take the returned data and splice the locally saved data into the front-end display.
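The three steps above amount to a client-side read-your-writes check. Here is a minimal sketch under stated assumptions: the names (`merge_read`, `last_write_id`, the dict payloads) are illustrative, not a real API, and the "splice" is modeled as overlaying the locally saved fields on top of the stale replica copy.

```python
def merge_read(replica_data: dict, replica_id: int,
               local_data: dict, last_write_id: int) -> dict:
    """Use the replica's copy if its sequence ID already covers our
    last write; otherwise splice our locally saved fields on top."""
    if replica_id >= last_write_id:
        return replica_data              # replica has caught up
    merged = dict(replica_data)          # stale replica:
    merged.update(local_data)            # overlay what we wrote locally
    return merged

# The client wrote {"nickname": "neo"} and the back end returned ID1 = 42.
local = {"nickname": "neo"}

# A lagging replica (IDn = 41) still shows the old nickname; splice wins.
stale = merge_read({"nickname": "old", "level": 3}, 41, local, 42)

# A caught-up replica (IDn = 42) is used as-is.
fresh = merge_read({"nickname": "neo", "level": 3}, 42, local, 42)

print(stale, fresh)  # both show nickname "neo"
```

Either branch shows the user their own write, which is the whole goal; the sequence ID only has to be monotonic, not an accurate clock.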

    In so-called distributed systems, the hardest part is choosing how to handle data under CAP. The "routine" I personally lean toward is as described above:

         1. Subdivide the steps to shave peaks;

         2. Insert a data-persistence step in the middle, so a failure at any single step costs less in user experience, or in users;

         3. Pick the one property in CAP that can be "sacrificed (and made up for later)".

 

    to be continued...
