Shangpin (Product) Summary II: Product Details Module (interview notes)

1. Business Introduction

     The product details page, simply put, displays the detailed information of a single SKU from the shopper's perspective.

     This page differs from a traditional CRUD details page: the user is not an administrator who views, modifies, or deletes records; instead, a shopper clicks to buy, adds the item to the cart, switches colors, and so on.

     Another characteristic is the page's very high traffic. Although it only performs queries, the frequent visits mean its performance must be optimized as much as possible.

The data required to build product details is as follows:

  1. Basic SKU information
  2. SKU image information
  3. SKU category information
  4. SPU sales attribute information
  5. The sales attribute values of the current SKU, selected by default
  6. Real-time SKU price
  7. Product introduction content (poster/description images)
  8. Attributes of the SKU (platform attributes / specifications)

2. Use caching to optimize

    Although we have implemented the functionality the page requires, the page is accessed very frequently, so its performance must be optimized as much as possible.

Generally, the biggest performance bottleneck of a system is database I/O. The database is also the most cost-effective place to start tuning.

Tuning happens at two levels: improving the performance of the database and its SQL, and avoiding direct queries to the database altogether.

To improve the performance of the database itself, start by optimizing SQL: use indexes, reduce unnecessary joins against large tables, and limit the number of rows and columns each query returns. When the data volume is huge, sharding into multiple databases and tables can also be considered to relieve the pressure on a single node.

Our focus here is the second level: avoiding direct queries to the database whenever possible.

The solution is caching.

The cache can be thought of as a protective umbrella for the database: any request that hits the cache never reaches the database. A cache typically serves requests 10-100 times faster than the database.

We will use Redis as the cache system for optimization.

Why use Redis rather than a local (in-process) cache?

Local cache:

Small memory capacity.

The cached data lives and dies with the application process, so it cannot be shared across instances.

Redis, by contrast, is an in-memory, NoSQL (non-relational) database. Its advantages:

  1. Redis stores data in memory, which reduces disk I/O
  2. A value can be retrieved quickly by its key
  3. There is no SQL parsing overhead
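
For illustration, here is a minimal cache-aside read in Java. This is only a sketch: it assumes Spring's StringRedisTemplate and fastjson are available, and the SkuInfoMapper and key format are made up for the example.

import java.util.concurrent.TimeUnit;
import com.alibaba.fastjson.JSON;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class SkuCacheReader {

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    @Autowired
    private SkuInfoMapper skuInfoMapper; // hypothetical MyBatis mapper for the sku_info table

    public SkuInfo getSkuInfo(Long skuId) {
        String key = "sku:" + skuId + ":info";
        // 1. Try the cache first
        String json = stringRedisTemplate.opsForValue().get(key);
        if (json != null && !json.isEmpty()) {
            return JSON.parseObject(json, SkuInfo.class);
        }
        // 2. Cache miss: query the database
        SkuInfo skuInfo = skuInfoMapper.selectById(skuId);
        if (skuInfo != null) {
            // 3. Backfill the cache with an expiration time
            stringRedisTemplate.opsForValue().set(key, JSON.toJSONString(skuInfo), 24, TimeUnit.HOURS);
        }
        return skuInfo;
    }
}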

Structure diagram (omitted).

3. Common caching problems and their solutions (the most important part)

The three most common problems with caching:

1. Cache penetration

Concept: a query asks for data that is in neither the cache nor MySQL. Every such request falls through the cache to the MySQL database, putting pressure on MySQL and defeating the purpose of the cache.

Solutions:

1. If the data is not in the cache, query the database; if the database does not have it either, create an object whose fields are all empty (an empty object) and put it into the Redis cache with a fairly short expiration time. The data itself is useless; it merely blocks cache penetration for a period of time.

2. Bloom filter: a structure that can tell whether a piece of data exists (with a small false-positive rate).

Redisson's built-in Bloom filter can be used: when a product SKU is added, add its skuId to the Bloom filter. When querying, check the cache first; on a cache miss, look up the skuId in the Bloom filter. If the filter says it exists, query the database and load the result into the cache so that later queries are served from the cache. If the filter says it does not exist, there is no need to query the database at all; simply return that the data does not exist.
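
A minimal sketch of that flow with Redisson's Bloom filter; a RedissonClient bean is assumed to exist, and the filter name and sizing parameters are illustrative only.

import org.redisson.api.RBloomFilter;
import org.redisson.api.RedissonClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class SkuBloomFilter {

    @Autowired
    private RedissonClient redissonClient;

    private RBloomFilter<Long> filter() {
        RBloomFilter<Long> bloomFilter = redissonClient.getBloomFilter("sku:bloom:filter");
        // Initialize with expected insertions and an acceptable false-positive rate;
        // tryInit does nothing if the filter has already been initialized
        bloomFilter.tryInit(1000000L, 0.01);
        return bloomFilter;
    }

    // Called when a new SKU is saved
    public void addSku(Long skuId) {
        filter().add(skuId);
    }

    // Called on a cache miss, before deciding whether to query the database
    public boolean mightExist(Long skuId) {
        return filter().contains(skuId);
    }
}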

2. Cache Avalanche

  Concept: the Redis cache becomes unavailable, or a large number of cached keys fail at once. Typical causes:

  1. The Redis server can no longer serve requests.
  2. A large number of keys expire at the same time.

Solutions: for server unavailability, ensure Redis high availability by deploying Redis as a cluster.

      For keys expiring together, add a random offset (jitter) to each key's expiration time, as sketched below.
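
A minimal sketch of the jitter idea, assuming Spring's StringRedisTemplate; the 24-hour base TTL and one-hour jitter range are arbitrary example values.

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import org.springframework.data.redis.core.StringRedisTemplate;

public class CacheTtlJitter {

    private final StringRedisTemplate redisTemplate;

    public CacheTtlJitter(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public void setWithJitter(String key, String value) {
        long baseSeconds = TimeUnit.HOURS.toSeconds(24);
        // Add a random 0-3600s offset so keys written together do not expire together
        long jitterSeconds = ThreadLocalRandom.current().nextLong(0, 3600);
        redisTemplate.opsForValue().set(key, value, baseSeconds + jitterSeconds, TimeUnit.SECONDS);
    }
}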

3. Cache breakdown

 Concept: a hot key has just expired, and at that moment a flood of queries arrives. All of these requests hit the database at once, multiplying the pressure on it.

Penetration: the data exists in neither the cache nor the database.

Breakdown: a hot key's cache entry has expired, but the data still exists in the database. As soon as one request queries the database, the data can be reloaded into the Redis cache.

Solution:

      Distributed locks.

A local lock such as synchronized is a JVM-level lock: it only locks within a single server instance and cannot lock across services.

      During testing, Apache Bench (ab) stress testing is used to fire a large number of requests from many threads.

Implementation of distributed locks:

  1. Redis SETNX command + key expiration time + Lua script

SET ... NX only sets the value when the key does not exist; if the key already exists, nothing is written.

By contrast, a plain Redis SET creates the key when it does not exist and overwrites it when it does.

           Expiration time on the key: it guards against failures in the business logic after the lock has been acquired. If the lock can never be deleted explicitly, the expiration time releases it automatically and prevents deadlock.

Purpose of the Lua script: when releasing the lock, it prevents a thread from deleting a lock that belongs to another thread. A sketch of acquiring such a lock follows; the matching Lua-based release appears later in this section.
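
A minimal sketch of acquiring such a lock with setIfAbsent (the Java counterpart of SET key value NX PX ...), assuming Spring's StringRedisTemplate; the class and method names are made up for illustration.

import java.util.UUID;
import java.util.concurrent.TimeUnit;
import org.springframework.data.redis.core.StringRedisTemplate;

public class SimpleRedisLock {

    private final StringRedisTemplate redisTemplate;

    public SimpleRedisLock(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    /**
     * Tries to acquire the lock; returns the owner token on success, null on failure.
     * The random UUID identifies the owner so that only the owner may release the lock.
     */
    public String tryLock(String lockKey, long ttlMillis) {
        String token = UUID.randomUUID().toString();
        Boolean ok = redisTemplate.opsForValue()
                .setIfAbsent(lockKey, token, ttlMillis, TimeUnit.MILLISECONDS);
        return Boolean.TRUE.equals(ok) ? token : null;
    }
}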

  2. Distributed lock based on Redisson.

The Redisson framework provides a lock object (RLock).

        API: lock.lock() or lock.tryLock() acquires the lock;

             lock.unlock() releases it.

        Redisson's distributed lock also has a watchdog mechanism that renews the lock automatically.

To use the watchdog, do not pass an explicit lease time for the key; the default lease is 30 seconds. If the 30 seconds elapse and unlock() has not been called yet, Redisson assumes the business logic has not finished and extends the expiration automatically.

The lock still has an expiration time to prevent deadlock, while the watchdog prevents the lock from expiring while your business logic is still running, which would otherwise let other threads in prematurely.

Specific steps when querying product data (using a Bloom filter):

  1. When a SKU is added, add its skuId to the Bloom filter
  2. On a query, check the cache first
  3. On a cache miss, check whether the skuId exists in the Bloom filter
  4. If it exists, acquire the distributed lock and query the database; if it does not exist, return empty directly without querying the database

Specific steps when querying product data (without a Bloom filter, using empty objects; a sketch follows the list):

  1. Check the cache first to see whether the data is cached
  2. On a cache miss, acquire the distributed lock, query the database, and check whether the database has the data
  3. If the database has it, put it into the cache; if not, create an empty object and cache it with a shorter expiration time
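
A minimal sketch of these steps, assuming Spring's StringRedisTemplate, a RedissonClient, fastjson, and the project's getSkuInfoDB(skuId) database query; key names and TTLs are illustrative.

import java.util.concurrent.TimeUnit;
import com.alibaba.fastjson.JSON;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class SkuQueryService {

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    @Autowired
    private RedissonClient redissonClient;

    public SkuInfo getSkuInfoWithCache(Long skuId) {
        String key = "sku:" + skuId + ":info";
        // 1. Check the cache first
        String cached = stringRedisTemplate.opsForValue().get(key);
        if (cached != null && !cached.isEmpty()) {
            return JSON.parseObject(cached, SkuInfo.class);
        }
        // 2. Cache miss: take the distributed lock before hitting the database
        RLock lock = redissonClient.getLock("sku:" + skuId + ":lock");
        lock.lock();
        try {
            // Double-check: another thread may have filled the cache while we waited
            cached = stringRedisTemplate.opsForValue().get(key);
            if (cached != null && !cached.isEmpty()) {
                return JSON.parseObject(cached, SkuInfo.class);
            }
            SkuInfo skuInfo = getSkuInfoDB(skuId);
            if (skuInfo != null) {
                // 3a. Found in the database: cache it with a normal TTL
                stringRedisTemplate.opsForValue().set(key, JSON.toJSONString(skuInfo), 24, TimeUnit.HOURS);
            } else {
                // 3b. Not in the database: cache an empty object with a shorter TTL
                skuInfo = new SkuInfo();
                stringRedisTemplate.opsForValue().set(key, JSON.toJSONString(skuInfo), 5, TimeUnit.MINUTES);
            }
            return skuInfo;
        } finally {
            lock.unlock();
        }
    }

    private SkuInfo getSkuInfoDB(Long skuId) {
        // Placeholder for the real database query (e.g. via a mapper or a Feign call)
        return null;
    }
}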

Optimizing away the code redundancy:

 A custom annotation + AOP dynamic proxies + the Redisson distributed lock, with the lock-and-cache logic extracted for reuse.

Design pattern ideas involved: dynamic proxy pattern + template pattern.

Template pattern: suppose a class has ten methods and several of them share the same piece of code. We can extract that shared code into a single method so it is written only once; any method that needs it simply calls the extracted method. The extracted method acts as the template. A tiny illustration follows.
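
A tiny, generic illustration of that idea (not the project's actual code): the shared steps are written once in a "template" method, and each concrete class supplies only the part that differs.

public abstract class CacheTemplate<T> {

    // The template: shared logic written once and reused everywhere
    public final T load(String key) {
        T cached = readFromCache(key);
        if (cached != null) {
            return cached;
        }
        T fromDb = loadFromDb(key);   // the step that varies, supplied by subclasses
        writeToCache(key, fromDb);
        return fromDb;
    }

    protected abstract T loadFromDb(String key);

    protected T readFromCache(String key) {
        // Placeholder: a real implementation would read from Redis here
        return null;
    }

    protected void writeToCache(String key, T value) {
        // Placeholder: a real implementation would write to Redis here
    }
}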

Define a custom annotation:

      Purpose: to declare which methods are affected; only methods marked with this annotation are controlled by the distributed lock.

Distributed-lock business handling:

     I wrote an aspect class that contains the shared logic: acquiring the lock, querying the database, filling the cache, and releasing the lock.

When is this locking and unlocking done?

      An advice is defined in the aspect class; the advice specifies when the target method's behavior is dynamically enhanced.

* AOP: Aspect-Oriented Programming
* Pointcut: where the enhancement applies, i.e. which code automatically gets extra behavior.
*   It can be one or more packages, a class, a method, or an annotation.
* Aspect: what to do, the concrete logic to be executed.
* Advice: when to do it.
*   Around, before, after, and after-throwing advice.
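
A generic Spring AOP skeleton, just to show how pointcut, aspect, and advice fit together; the annotation name com.example.DemoAnnotation is made up, and the project's real aspect appears later in this article.

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect      // this class is an aspect: it holds the "what to do"
@Component
public class DemoAspect {

    // Pointcut: any method annotated with our (hypothetical) custom annotation
    @Around("@annotation(com.example.DemoAnnotation)")
    public Object around(ProceedingJoinPoint joinPoint) throws Throwable {
        // Around advice: code before the target method
        System.out.println("before the target method");
        Object result = joinPoint.proceed();   // invoke the original method
        // Code after the target method
        System.out.println("after the target method");
        return result;
    }
}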

Differences from cache avalanche:

1. Breakdown is the expiration of a single hot key.

2. Avalanche is the collective expiration of many keys.

Solution:

As the business grows, a system that was originally deployed on a single machine evolves into a distributed cluster. Because a distributed system is multi-threaded, multi-process, and spread across different machines, the concurrency-control locking strategy that worked in the single-machine deployment no longer works, and the plain Java API cannot provide distributed locks. To solve this, we need a cross-JVM mutual-exclusion mechanism to control access to shared resources; that is exactly the problem distributed locks solve.

Implement the distributed lock with Redis: the SETNX command plus a key expiration time, for example:

# set skuid:1:info "OK" NX PX 10000

EX seconds: set the key's expiration time in seconds.

PX milliseconds: set the key's expiration time in milliseconds.

NX: only set the key if it does not already exist.

XX: only set the key if it already exists.

The plain Redis SET command sets the value of a given key; if the key already holds a value, SET overwrites it regardless of type.

Problem: Delete operations lack atomicity.

Scenario:

1. When thread index1 is about to delete the lock, the lock value it reads does equal its own UUID.

2. Before index1 actually deletes it, the lock reaches its expiration time and Redis releases it automatically.

3. Thread index2 then acquires the lock.

4. index1 now executes its delete and removes index2's lock.

Solution: use a Lua script to make the check-and-delete atomic, for example:
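
A minimal sketch that pairs with the earlier tryLock sketch, assuming Spring's StringRedisTemplate: the Lua script compares the stored value with the caller's token and deletes the key only on a match, all inside Redis, so the check and the delete cannot be interleaved with another client.

import java.util.Collections;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.core.script.DefaultRedisScript;

public class SimpleRedisUnlock {

    private static final String UNLOCK_LUA =
            "if redis.call('get', KEYS[1]) == ARGV[1] then " +
            "  return redis.call('del', KEYS[1]) " +
            "else " +
            "  return 0 " +
            "end";

    private final StringRedisTemplate redisTemplate;

    public SimpleRedisUnlock(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public boolean unlock(String lockKey, String token) {
        DefaultRedisScript<Long> script = new DefaultRedisScript<>(UNLOCK_LUA, Long.class);
        Long deleted = redisTemplate.execute(script, Collections.singletonList(lockKey), token);
        return deleted != null && deleted == 1L;
    }
}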

Using the Redisson distributed lock

Redisson is a Redis Java client / toolkit library.

Official document address: https://github.com/redisson/redisson/wiki


RLock lock = redisson.getLock("anyLock");

// Most commonly used: block until the lock is acquired
lock.lock();

// Acquire the lock and automatically unlock it 10 seconds after locking;
// there is no need to call unlock() manually
lock.lock(10, TimeUnit.SECONDS);

// Try to acquire the lock: wait up to 100 seconds, auto-unlock 10 seconds after locking
boolean res = lock.tryLock(100, 10, TimeUnit.SECONDS);
if (res) {
   try {
     ...
   } finally {
       lock.unlock();
   }
}

4. Implementing the cache with distributed locks + AOP

With caching and distributed locks added to the business, the business code becomes more complicated: besides the business logic itself, the cache and the distributed lock also have to be handled, which increases the programmer's workload and the development difficulty. The caching routine is very similar to transactions, and declarative transactions are implemented with the idea of AOP.

The @Transactional annotation serves as the pointcut marker, so any method annotated with @Transactional is known to need proxying.

The aspect logic behind @Transactional is similar to an @Around advice.

Imitating transactions, the cache can be implemented like this:

1. A custom cache annotation @GmallCache (analogous to @Transactional for transactions)

2. An aspect class that uses around advice to encapsulate the caching logic

Define the annotation:

package com.atguigu.gmall.common.cache;

import java.lang.annotation.*;

@Target({ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface GmallCache {

    /**
     * Prefix of the cache key
     * @return
     */
    String prefix() default "cache";
}

Define an aspect class that enhances methods marked with the annotation:

package com.atguigu.gmall.common.cache;

import java.util.Arrays;
import com.alibaba.fastjson.JSONObject;
import org.apache.commons.lang3.StringUtils;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.reflect.MethodSignature;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;

@Component
@Aspect
public class GmallCacheAspect {

    @Autowired
    private RedisTemplate redisTemplate;

    @Autowired
    private RedissonClient redissonClient;

    /**
     * 1. Return type: Object
     * 2. Parameter: ProceedingJoinPoint
     * 3. Throws Throwable
     * 4. proceedingJoinPoint.proceed(args) executes the target business method
     */
    @Around("@annotation(com.atguigu.gmall.common.cache.GmallCache)")
    public Object cacheAroundAdvice(ProceedingJoinPoint point) throws Throwable {

        Object result = null;
        // Get the join point signature
        MethodSignature signature = (MethodSignature) point.getSignature();
        // Get the GmallCache annotation on the join point
        GmallCache gmallCache = signature.getMethod().getAnnotation(GmallCache.class);
        // Get the cache key prefix
        String prefix = gmallCache.prefix();

        // Assemble the cache key
        String key = prefix + Arrays.asList(point.getArgs()).toString();

        // 1. Query the cache
        result = this.cacheHit(signature, key);
        if (result != null) {
            return result;
        }

        // Initialize the distributed lock
        RLock lock = this.redissonClient.getLock("gmallCache");
        // Acquire the lock to prevent cache penetration
        lock.lock();

        // Check the cache again: under high concurrency, another thread may have
        // populated the cache while we were waiting for the lock
        result = this.cacheHit(signature, key);
        if (result != null) {
            lock.unlock();
            return result;
        }

        // 2. Execute the business logic, which queries the database
        result = point.proceed(point.getArgs());
        // Put the result into the cache
        this.redisTemplate.opsForValue().set(key, JSONObject.toJSONString(result));

        // Release the lock
        lock.unlock();

        return result;
    }

    /**
     * Query the cache
     *
     * @param signature
     * @param key
     * @return
     */
    private Object cacheHit(MethodSignature signature, String key) {
        // 1. Query the cache
        String cache = (String) redisTemplate.opsForValue().get(key);
        if (StringUtils.isNotBlank(cache)) {
            // Cache hit: deserialize and return directly
            Class returnType = signature.getReturnType(); // the method's return type
            // Cannot use parseArray<cache, T>, because the generic type of List<T> is unknown
            return JSONObject.parseObject(cache, returnType);
        }
        return null;
    }
}

Caching is then enabled simply by using the annotation:

@GmallCache(prefix = RedisConst.SKUKEY_PREFIX)
@Override
public SkuInfo getSkuInfo(Long skuId) {
    return getSkuInfoDB(skuId);
}

5. Use asynchronous threads to optimize product details

Interview questions:

1) Have you done parallel tasks in your project?

Yes: the asynchronous orchestration described below.

2) What is the concurrency of your project?

     I do not have the production figures, but I stress tested the interface I wrote, and its concurrency is between 500 and 700.

Problem: the logic for building the product details page is complicated, and fetching the data requires remote calls, which inevitably takes extra time.

The product details service (service-item) remotely calls the product management service (service-product).

Suppose each query needed by the product details page takes the time marked below:

// 1. Get the basic SKU information: 0.5s
// 2. Get the SKU image information: 0.5s
// 3. Get all sales attributes of the SPU: 1s
// 4. Get the SKU price: 1.5s
// 5. Update the hot score: 1.0s
// Possibly also a call to the review interface
...

Then the user would wait 4.5 seconds to see the product details page, which is obviously unacceptable.

If multiple threads execute these steps at the same time, the response may take only about 1.5s.

Calling the dependencies in parallel with multiple threads improves the interface's response time and execution speed.

  Thread resources are managed by a custom thread pool, and the pool is a singleton. How is the singleton implemented? Not by hand-writing a lazy or eager singleton, but via Spring IoC: a Spring bean's scope is singleton by default, so the thread pool we create is registered in the Spring container with the @Bean annotation.

1. Use CompletableFuture to implement asynchronous threads and optimize the product details query

@Service
public class ItemServiceImpl implements ItemService {

    @Autowired
    private ProductFeignClient productFeignClient;

    @Autowired
    private ThreadPoolExecutor threadPoolExecutor;

    @Override
    public Map<String, Object> getBySkuId(Long skuId) {
        Map<String, Object> result = new HashMap<>();

        // Query skuInfo by skuId
        CompletableFuture<SkuInfo> skuCompletableFuture = CompletableFuture.supplyAsync(() -> {
            SkuInfo skuInfo = productFeignClient.getSkuInfo(skuId);
            // Save skuInfo
            result.put("skuInfo", skuInfo);
            return skuInfo;
        }, threadPoolExecutor);

        // Sales attributes: echo the SKU's selected attribute values and lock them
        CompletableFuture<Void> spuSaleAttrCompletableFuture = skuCompletableFuture.thenAcceptAsync(skuInfo -> {
            List<SpuSaleAttr> spuSaleAttrList = productFeignClient.getSpuSaleAttrListCheckBySku(skuInfo.getId(), skuInfo.getSpuId());
            // Save the data
            result.put("spuSaleAttrList", spuSaleAttrList);
        }, threadPoolExecutor);

        // Query the attribute-value -> skuId map by spuId (used for switching SKUs)
        CompletableFuture<Void> skuValueIdsMapCompletableFuture = skuCompletableFuture.thenAcceptAsync(skuInfo -> {
            Map skuValueIdsMap = productFeignClient.getSkuValueIdsMap(skuInfo.getSpuId());
            String valuesSkuJson = JSON.toJSONString(skuValueIdsMap);
            // Save valuesSkuJson
            result.put("valuesSkuJson", valuesSkuJson);
        }, threadPoolExecutor);

        // Get the latest product price
        CompletableFuture<Void> skuPriceCompletableFuture = CompletableFuture.runAsync(() -> {
            BigDecimal skuPrice = productFeignClient.getSkuPrice(skuId);
            result.put("price", skuPrice);
        }, threadPoolExecutor);

        // Get the category information
        CompletableFuture<Void> categoryViewCompletableFuture = skuCompletableFuture.thenAcceptAsync(skuInfo -> {
            BaseCategoryView categoryView = productFeignClient.getCategoryView(skuInfo.getCategory3Id());
            // Category information
            result.put("categoryView", categoryView);
        }, threadPoolExecutor);

        // Wait for all futures to complete
        CompletableFuture.allOf(skuCompletableFuture, spuSaleAttrCompletableFuture, skuValueIdsMapCompletableFuture,
                skuPriceCompletableFuture, categoryViewCompletableFuture).join();

        return result;
    }
}

package com.atguigu.gmall.item.config;

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ThreadPoolConfig {

    @Bean
    public ThreadPoolExecutor threadPoolExecutor() {
        /**
         * Core pool size
         * Maximum pool size
         * Keep-alive time of idle threads
         * Unit of the keep-alive time
         * Blocking queue used to buffer tasks
         * Omitted:
         *  threadFactory: the factory used to create threads
         *  handler: the policy applied when workQueue is full and the pool has reached
         *           maximumPoolSize, i.e. when the pool rejects new tasks
         */
        return new ThreadPoolExecutor(50, 500, 30, TimeUnit.SECONDS, new ArrayBlockingQueue<>(10000));
    }
}
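
The two omitted parameters can also be supplied explicitly. A possible variant (a sketch, reusing the same illustrative pool sizes) with a named thread factory and a caller-runs rejection policy:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedThreadPoolFactory {

    public static ThreadPoolExecutor create() {
        AtomicInteger counter = new AtomicInteger(1);
        // Name the threads so they are easy to spot in thread dumps
        ThreadFactory threadFactory = runnable ->
                new Thread(runnable, "item-pool-" + counter.getAndIncrement());
        return new ThreadPoolExecutor(
                50, 500, 30, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(10000),
                threadFactory,
                // When the queue and the pool are both full, run the task on the calling
                // thread instead of throwing RejectedExecutionException
                new ThreadPoolExecutor.CallerRunsPolicy());
    }
}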
 

2. Resume:

Responsibilities

  1. Responsible for developing and optimizing the product details module, including the details page display, SKU locking, SKU switching, and related features
  2. Responsible for ...
  3. Responsible for ...
  4. Participated in ...
  5. Participated in ...

Technical description

  1. The project uses Redis as a distributed cache and Redisson to solve cache breakdown, uses ...
  2. The project uses asynchronous orchestration to solve ...
  3. The project uses AOP
  4. The project uses
  5. The project uses Nacos
  6. The project uses Gateway for CORS, authentication, and request forwarding

7~8

Breakpoint debugging (debugger buttons):

1: Jump to the current breakpoint
2: Step over (execute the next line)
3: Step into a method (our own methods only, not library source)
4: Force step into the method, including library source
5: Step out of the method
6: Run to the line where the cursor is
7: Evaluate an expression and return the result
8: Resume: release the current breakpoint and run to the next one; if there is none, the program simply continues
9: View all the breakpoints that have been set
10: Mute (disable) breakpoints
