Redis in practice: implementing friend follows & feed push

Follow and unfollow

When a user opens the note detail page, the front end automatically sends a request to check whether the current user follows the note's author. We need to implement two interfaces for this.

Requirements: based on the tb_follow table structure, implement two interfaces:

  • Follow / unfollow interface

  • Interface that checks whether the current user follows a given user

 A follow is a relationship between two users: a blogger and a fan. The database records it in a tb_follow table; to associate a user with a blogger we only need to insert a row into this table. The primary key should be set to auto-increment to keep the implementation simple.
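For reference, here is a minimal sketch of the Follow entity that the code below relies on. The field names are inferred from the getters/setters used in this article; the Lombok and MyBatis-Plus annotations follow the usual conventions and are an assumption, not code shown by the original author.

import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;

import java.time.LocalDateTime;

@Data
@TableName("tb_follow")
public class Follow {
    // auto-increment primary key, as recommended above
    @TableId(value = "id", type = IdType.AUTO)
    private Long id;
    // the follower (fan)
    private Long userId;
    // the followed blogger
    private Long followUserId;
    // when the follow relationship was created
    private LocalDateTime createTime;
}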

 

 FollowController


// follow / unfollow
@PutMapping("/{id}/{isFollow}")
public Result follow(@PathVariable("id") Long followUserId, @PathVariable("isFollow") Boolean isFollow) {
    return followService.follow(followUserId, isFollow);
}

// check whether the current user follows the given user
@GetMapping("/or/not/{id}")
public Result isFollow(@PathVariable("id") Long followUserId) {
    return followService.isFollow(followUserId);
}

 To determine whether the current user follows a blogger, we only need to check whether a matching record exists in the tb_follow table. It is also advisable to add non-null conditions to the equality query to avoid exceptions.

@Override
public Result isFollow(Long followUserId) {
    // the currently logged-in user
    Long userId = UserHolder.getUser().getId();
    LambdaQueryWrapper<Follow> queryWrapper = new LambdaQueryWrapper<>();
    queryWrapper.eq(followUserId != null, Follow::getFollowUserId, followUserId)
            .eq(userId != null, Follow::getUserId, userId);
    Integer count = followMapper.selectCount(queryWrapper);
    return Result.ok(count > 0);
}

The front end sends a boolean isFollow with the request, which tells the back end whether this is a follow or an unfollow. If it is true, we insert a record into the database to establish the follow; if it is false, the user already follows the blogger and wants to unfollow, so we delete the record.

@Override
public Result follow(Long followUserId, Boolean isFollow) {
    Long userId = UserHolder.getUser().getId();
    String key = "follows:" + userId;
    if (isFollow) {
        // build the Follow record
        Follow follow = new Follow();
        follow.setFollowUserId(followUserId);
        follow.setUserId(userId);
        follow.setCreateTime(LocalDateTime.now());
        // insert into the database
        boolean isSuccess = save(follow);

    } else {
        // unfollow: delete the matching record
        LambdaQueryWrapper<Follow> queryWrapper = new LambdaQueryWrapper<>();
        queryWrapper.eq(Follow::getFollowUserId, followUserId)
                .eq(Follow::getUserId, userId);
        boolean isSuccess = remove(queryWrapper);

    }
    return Result.ok();
}

Common follows

To see the users you both follow, you first open the blogger's personal page, which fires two requests:

1. Query the user's details

2. Query the user's notes

 

// UserController: query a user by id
@GetMapping("/{id}")
public Result queryUserById(@PathVariable("id") Long userId){
	// query the user details
	User user = userService.getById(userId);
	if (user == null) {
		return Result.ok();
	}
	UserDTO userDTO = BeanUtil.copyProperties(user, UserDTO.class);
	// return
	return Result.ok(userDTO);
}

// BlogController: query a blogger's notes by user id
@GetMapping("/of/user")
public Result queryBlogByUserId(
		@RequestParam(value = "current", defaultValue = "1") Integer current,
		@RequestParam("id") Long id) {
	// query the blogger's notes, paged
	Page<Blog> page = blogService.query()
			.eq("user_id", id).page(new Page<>(current, SystemConstants.MAX_PAGE_SIZE));
	// get the records of the current page
	List<Blog> records = page.getRecords();
	return Result.ok(records);
}

Common-follows requirement: use an appropriate Redis data structure to show, on a blogger's personal page, the users that both the current user and the blogger follow. A Redis set is a good fit, because sets come with built-in intersection, union and difference commands. We store the ids that each user follows in a set per user, and then take the intersection of the two sets.
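As a quick illustration of the intersection API (the user ids 1010 and 1011 below are made up for the example, not taken from the article):

// hypothetical data: user 1010 follows users 2, 3 and 4; user 1011 follows users 3, 4 and 5
stringRedisTemplate.opsForSet().add("follows:1010", "2", "3", "4");
stringRedisTemplate.opsForSet().add("follows:1011", "3", "4", "5");
// SINTER follows:1010 follows:1011  ->  ["3", "4"]
Set<String> common = stringRedisTemplate.opsForSet().intersect("follows:1010", "follows:1011");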

 First, we modify the follow logic: when inserting the follow record into the database, we also add the followed blogger's user id to the current user's set in Redis, so that the intersection can be computed later. When unfollowing, the id must be removed from the set as well.

@Override
public Result follow(Long followUserId, Boolean isFollow) {
    Long userId = UserHolder.getUser().getId();
    String key = "follows:" + userId;
    if (isFollow) {
        // build the Follow record
        Follow follow = new Follow();
        follow.setFollowUserId(followUserId);
        follow.setUserId(userId);
        follow.setCreateTime(LocalDateTime.now());
        // insert into the database
        boolean isSuccess = save(follow);
        if (isSuccess) {
            // also record the followed user id in the current user's Redis set
            stringRedisTemplate.opsForSet().add(key, followUserId.toString());
        }
    } else {
        // unfollow: delete the matching record
        LambdaQueryWrapper<Follow> queryWrapper = new LambdaQueryWrapper<>();
        queryWrapper.eq(Follow::getFollowUserId, followUserId)
                .eq(Follow::getUserId, userId);
        boolean isSuccess = remove(queryWrapper);
        if (isSuccess) {
            // remove the followed user id from the current user's Redis set
            stringRedisTemplate.opsForSet().remove(key, followUserId.toString());
        }
    }
    return Result.ok();
}

The controller layer defines the common-follows endpoint; the incoming id is the id of the note's blogger.

 @GetMapping("/common/{id}")
    public Result common(@PathVariable("id") Long id){
        System.out.println(id);
        return followService.commonFollow(id);
    }

Service layer

@Override
public Result commonFollow(Long id) {
    // the current user
    Long userId = UserHolder.getUser().getId();
    String key1 = "follows:" + userId;
    String key2 = "follows:" + id;
    // SINTER of the two follow sets
    Set<String> intersect = stringRedisTemplate.opsForSet().intersect(key1, key2);
    if (intersect == null || intersect.isEmpty()) {
        // no common follows
        return Result.ok(Collections.emptyList());
    }
    // parse the set of user ids
    List<Long> userIds = intersect.stream().map(Long::valueOf).collect(Collectors.toList());
    List<UserDTO> users = userService.selectByIds(userIds).stream()
            .map(user -> BeanUtil.copyProperties(user, UserDTO.class))
            .collect(Collectors.toList());
    return Result.ok(users);
}

Feed stream implementation

When a user we follow posts new content, it should be pushed to us; this kind of requirement is called a feed stream. Follow push is also known as a Feed stream, literally "feeding": the system continuously supplies the user with an "immersive" experience, delivering new information through endless pull-down refreshes.

In the traditional content-retrieval model, users have to find the content they want themselves, via search engines or other means.

With a feed stream, users no longer need to search: the system works out what the user wants and pushes the content to them directly, so users save time and do not have to look for it actively. This is the big-data push model used by platforms such as Bilibili and Douyin.

Two modes of feed streams

There are two common modes for feed-stream products:

Timeline: no content filtering; posts are simply sorted by publish time. Typically used for feeds of friends or followed users, e.g. the WeChat Moments feed.

  • Advantages: complete information, nothing is missed, and the implementation is relatively simple

  • Disadvantages: a lot of information noise; users are not necessarily interested in all of it, so content-discovery efficiency is low

Intelligent sorting: use algorithms to filter out content that breaks the rules or that the user is not interested in, and push content the user is likely to want, to keep them engaged.

  • Advantages: users are fed content they are interested in, so engagement and stickiness are high

  • Disadvantages: if the algorithm is inaccurate, it can backfire.

The personal page in this example shows notes from the people the user follows, so it uses the Timeline mode. There are three ways to implement this mode.

Implementation options for the Timeline mode

Pull mode: also called read diffusion.

The core idea: when Zhang San, Li Si and Wang Wu post messages, each message is stored only in its sender's own box. When Zhao Liu wants to read, the system pulls the messages from everyone he follows, merges them and sorts them by time.

Advantages: saves space, because messages are not duplicated for every reader; once Zhao Liu has read them, his inbox can be cleared.

Disadvantages: relatively high latency. The data is fetched from the followed users at read time, so if a user follows a large number of people, a huge amount of content has to be pulled at once, which puts great pressure on the server.
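To make the pull model concrete, here is a rough sketch in this project's terms. It is only an illustration of read diffusion, assuming the same followService/blogService and entities used elsewhere in this article; it is not part of the actual implementation, which (as shown later) uses the push model.

// Pull mode (read diffusion): nothing is pushed at write time; at read time we
// collect everyone the current user follows and pull their latest notes.
public List<Blog> pullFeed(Long userId, int limit) {
    // ids of the users that userId follows
    List<Long> followeeIds = followService.query()
            .eq("user_id", userId).list().stream()
            .map(Follow::getFollowUserId)
            .collect(Collectors.toList());
    if (followeeIds.isEmpty()) {
        return Collections.emptyList();
    }
    // pull their notes and sort by publish time
    return blogService.query()
            .in("user_id", followeeIds)
            .orderByDesc("create_time")
            .last("LIMIT " + limit)
            .list();
}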

Push mode: also called write diffusion.

In push mode there is no pulling from the sender's box: when Zhang San posts something, the content is actively delivered to each of his fans' inboxes. When Li Si comes to read, everything is already waiting in his inbox and nothing has to be pulled on the fly.

Advantages: low latency; nothing needs to be pulled at read time.

Disadvantages: high memory pressure. If a big V with many followers posts something, a copy has to be written for every fan.

Push-pull combined mode: also called read-write hybrid; it combines the advantages of the push and pull modes.

The push-pull model is a compromise. On the sender side: an ordinary user has relatively few fans, so we use write diffusion and push the data directly to all of their fans' inboxes, which is cheap. A big V instead writes one copy to their own outbox and pushes another copy only to the inboxes of their active fans. On the receiver side: active fans receive posts from both big Vs and ordinary users directly in their inbox, while ordinary fans, who are online less frequently, pull from the outbox when they come online. In other words, push or pull is chosen per user based on how active that user is.

Push to fan inbox

Requirements:

  • Modify the business of adding store-visit notes so that, besides saving the note to the database, the blog is also pushed to the fans' inboxes.

  • The inbox must support sorting by timestamp, which has to be implemented with a suitable Redis data structure (a sorted set).

  • Querying the inbox must support paginated queries.

 Pagination query plan

The data in a feed stream is constantly being updated, so the position (index) of each item keeps shifting, which means traditional page/size pagination cannot be used.

For example: at time t1 we read the first page with page = 1 and size = 5 and get records 10~6. At time t2 a new record (11) is published. At time t3 we read the second page with page = 2 and size = 5; that page now actually starts at record 6, so we get 6~2 and read record 6 twice. This duplicated data is why a feed stream cannot be paginated with the original scheme.

Scroll pagination of feed stream

Instead, we record the last item returned by each query and start the next read from that position.

For example: at t1 we fetch the first page and get 10~6, remembering the last record taken, 6. At t2 a new record 11 is published and lands at the top, but that does not affect the 6 we recorded. At t3 we fetch the second page: we continue just below 6 and get records 5~1, with no duplicates. A sorted set lets us do this: it supports range queries by score, and we can remember the minimum timestamp of the data fetched so far, which gives us rolling (scroll) pagination. The newly inserted record 11 simply appears on the next pull-down refresh, so nothing is missed.

 

Pagination demo

First, create a ZSET and add a few members with increasing scores.

With the command below we fetch the top three records in descending score order, which looks fine so far. Next we want the following batch of records, but the data in a feed stream keeps changing: suppose a new member m8 is added at exactly this moment.

zrevrange z1 0 2 withscores

In theory, the next page is just the next three records. But if we query them by index, m5 comes back again, a duplicate that the business cannot accept.

 This problem can be avoided by querying by score instead: we only need to remember the minimum score from the previous query and use it as the max for the next query. The first argument after LIMIT is the offset: with offset 0 the result starts at the element whose score equals max, while offset 1 skips that element and starts from the next one, whose score is smaller than max.

 zrevrangebyscore z1 6 0 withscores limit 1 3
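For reference, the same query expressed with Spring Data Redis; this is the call shape used later in queryBlogOfFollow, and the key z1 and the numbers are simply the demo values from the command above.

// ZREVRANGEBYSCORE z1 6 0 WITHSCORES LIMIT 1 3
// arguments: key, min = 0, max = 6, offset = 1 (skip the element whose score equals max), count = 3
Set<ZSetOperations.TypedTuple<String>> tuples = stringRedisTemplate.opsForZSet()
        .reverseRangeByScoreWithScores("z1", 0, 6, 1, 3);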

Push to the inbox

After a user publishes a note, the note must also be pushed to each fan's inbox, i.e. added to a ZSET whose key contains the fan's user id.

@Override
public Result saveBlog(Blog blog) {
    // 1. get the logged-in user
    UserDTO user = UserHolder.getUser();
    blog.setUserId(user.getId());
    // 2. save the note
    boolean isSuccess = save(blog);
    if(!isSuccess){
        return Result.fail("Failed to save the note!");
    }
    // 3. find all fans of the author: select * from tb_follow where follow_user_id = ?
    List<Follow> follows = followService.query().eq("follow_user_id", user.getId()).list();
    // 4. push the note id to every fan
    for (Follow follow : follows) {
        // 4.1. fan id
        Long userId = follow.getUserId();
        // 4.2. push into the fan's inbox (ZSET), scored by timestamp
        String key = FEED_KEY + userId;
        stringRedisTemplate.opsForZSet().add(key, blog.getId().toString(), System.currentTimeMillis());
    }
    // 5. return the note id
    return Result.ok(blog.getId());
}
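The FEED_KEY prefix used above (and again in the inbox query below) is not defined anywhere in this article; a minimal assumed definition, consistent with the feed:<userId> key pattern, would be:

// assumed constant (name kept as in the code above); the exact value "feed:" is an assumption
public static final String FEED_KEY = "feed:";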

Implementing the paginated inbox query

In the "Follow" card on your personal homepage, query and display the pushed Blog information:

The specific operations are as follows:

1. After each query we record the minimum timestamp among the returned items; it becomes the condition (the maximum timestamp) for the next query.

2. We also count how many of the returned items share that minimum timestamp and use the count as the offset, so the next query skips the items that have already been returned.

In summary: the request must carry lastId (the minimum timestamp from the previous query) and offset.

For the first request the front end supplies these two parameters itself; for subsequent requests it passes back the values returned by the previous response.

 

Define the return entity class

@Data
public class ScrollResult {
    // the page of results
    private List<?> list;
    // minimum timestamp in this page; used as lastId of the next request
    private Long minTime;
    // number of items sharing minTime; used as offset of the next request
    private Integer offset;
}

BlogController

Note: @RequestParam binds parameters from the URL query string; when the method parameter name differs from the query parameter name, @RequestParam specifies the mapping.

@GetMapping("/of/follow")
public Result queryBlogOfFollow(
    @RequestParam("lastId") Long max, @RequestParam(value = "offset", defaultValue = "0") Integer offset){
    return blogService.queryBlogOfFollow(max, offset);
}

BlogServiceImpl  

@Override
public Result queryBlogOfFollow(Long max, Integer offset) {
    // 1. get the current user
    Long userId = UserHolder.getUser().getId();
    // 2. query the inbox: ZREVRANGEBYSCORE key max 0 LIMIT offset count
    String key = FEED_KEY + userId;
    Set<ZSetOperations.TypedTuple<String>> typedTuples = stringRedisTemplate.opsForZSet()
        .reverseRangeByScoreWithScores(key, 0, max, offset, 2); // page size hard-coded to 2 here
    // 3. non-empty check
    if (typedTuples == null || typedTuples.isEmpty()) {
        return Result.ok();
    }
    // 4. parse the data: blog ids, minTime (timestamp) and the next offset
    List<Long> ids = new ArrayList<>(typedTuples.size());
    long minTime = 0;
    int os = 1;
    for (ZSetOperations.TypedTuple<String> tuple : typedTuples) { // e.g. scores 5 4 4 2 2
        // 4.1. blog id
        ids.add(Long.valueOf(tuple.getValue()));
        // 4.2. score (timestamp)
        long time = tuple.getScore().longValue();
        if(time == minTime){
            // another element with the same minimum timestamp
            os++;
        }else{
            // a smaller timestamp: reset the counter
            minTime = time;
            os = 1;
        }
    }
    // if this whole page shares the previous minimum timestamp (minTime == max), the elements
    // skipped by the incoming offset have that score too, so the old offset must be added on
    os = minTime == max ? os + offset : os;
    // 5. query the blogs by id, keeping the order returned by Redis (ORDER BY FIELD)
    String idStr = StrUtil.join(",", ids);
    List<Blog> blogs = query().in("id", ids).last("ORDER BY FIELD(id," + idStr + ")").list();

    for (Blog blog : blogs) {
        // 5.1. fill in the blog author info
        queryBlogUser(blog);
        // 5.2. fill in whether the current user has liked the blog
        isBlogLiked(blog);
    }

    // 6. wrap and return
    ScrollResult r = new ScrollResult();
    r.setList(blogs);
    r.setOffset(os);
    r.setMinTime(minTime);

    return Result.ok(r);
}
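As a usage sketch, the scroll parameters chain between calls roughly like this, assuming Result exposes its payload through a getData() getter (which this article does not show):

// first page: lastId = current timestamp, offset = 0
Result first = blogService.queryBlogOfFollow(System.currentTimeMillis(), 0);
ScrollResult page1 = (ScrollResult) first.getData();
// next page: feed the returned minTime and offset straight back in as lastId and offset
Result second = blogService.queryBlogOfFollow(page1.getMinTime(), page1.getOffset());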


Origin blog.csdn.net/weixin_64133130/article/details/133211561