[Golang Implementation] Thinking Through and Simply Implementing Bilibili's Like Feature

Foreword

For learning purposes, let's take a look at how Bilibili's like feature works under the hood. Or better: let's implement a like feature ourselves.
Of course, I am not a Bilibili employee and have not been involved in any of Bilibili's work, so this is purely our own thinking: if it were up to us, how would we build the like feature for an app with over 100 million daily active users?

1. Demand Analysis

Let's clarify the requirements first. The like feature mainly involves two relationships:

  • The viewer-like relationship, i.e. whether this viewer has liked the video. Visually, this drives the state of the like button.
  • The video-like relationship, i.e. how many likes this video has. This affects how the video is recommended.

With the requirements clear, let's think further. How do we find out, among hundreds of millions of users, whether a given user has liked a given video? By hitting the database every time? That obviously won't work!


2. Research on technical solutions

2.1 Implementing the viewer-like relationship

2.1.1 Ideas

Since direct database io won't cut it, we cache, of course! So how do we design the cache key?

My idea: maintain a per-user queue of liked videos. In practice, a Bilibili video tops out at around 2 million likes, and most Bilibili users are freeloaders who watch without ever liking.

So this key can be designed like this:

video:like:user:{user_id}

The value holds the ids of the videos this user has liked.

This is how it is implemented in Go:

func VideoUserLikeKey(userId int64) string {
	return fmt.Sprintf("video:like:user:%d", userId)
}
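As a quick sanity check, here is a standalone sketch showing the keys this builder produces (the surrounding `package main` and example user id are mine, for illustration only):

```go
package main

import "fmt"

// VideoUserLikeKey builds the Redis key that holds one user's recent likes.
func VideoUserLikeKey(userId int64) string {
	return fmt.Sprintf("video:like:user:%d", userId)
}

func main() {
	fmt.Println(VideoUserLikeKey(42)) // video:like:user:42
}
```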

2.1.2 Storage

Then in the service layer we only need to accept the user_id and video_id:

videoInfo, err := json.Marshal(map[string]interface{}{
  "user_id":      req.UserId,
  "video_id":     req.VideoId,
  "created_time": req.CreatedTime,
})
if err != nil {
  log.LogrusObj.Infoln(err)
  return
}

// Push into Redis on the user dimension, so we can tell whether this user
// has liked this video; keep the user's 500 most recent likes.
err = cache.GlobalRedisClient.LPush(cache.VideoUserLikeKey(req.UserId), req.VideoId).Err()
if err != nil {
  log.LogrusObj.Infoln(err)
  return
}

In this way, we have stored the relationship between the video and the user. Whenever we need to judge who has liked what, we can simply check the cache~
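To make that lookup concrete, here is a minimal sketch of the membership check, modeling the Redis list as an in-memory slice (a real implementation would read the list back with LRANGE on the key above; `hasLiked` and `recentLikes` are my own names):

```go
package main

import "fmt"

// hasLiked reports whether videoId appears in the user's recent-likes list.
// recentLikes stands in for the Redis list at video:like:user:{user_id}.
func hasLiked(recentLikes []int64, videoId int64) bool {
	for _, id := range recentLikes {
		if id == videoId {
			return true
		}
	}
	return false
}

func main() {
	recent := []int64{301, 205, 77} // newest first, as LPUSH would leave them
	fmt.Println(hasLiked(recent, 205)) // true
	fmt.Println(hasLiked(recent, 999)) // false
}
```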

For most users, the flow is: a video is pushed to them, they tap in, watch it, and like it. Searching the video out later just to cancel the like is much rarer, but it does happen, so how do we handle that case? Simply traverse the 500 most recently stored entries and, if the video is among them, delete that entry. Cancel-like operations are far less frequent than likes.
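A sketch of that cancel path under the same in-memory model (on the Redis side this could be a single `LREM key 1 videoId` instead of a hand-rolled scan; `removeLike` is a hypothetical helper of mine):

```go
package main

import "fmt"

// removeLike deletes the first occurrence of videoId from the recent-likes
// list, mirroring what LREM with count 1 would do on the Redis side.
func removeLike(recentLikes []int64, videoId int64) []int64 {
	for i, id := range recentLikes {
		if id == videoId {
			return append(recentLikes[:i], recentLikes[i+1:]...)
		}
	}
	return recentLikes // not in the recent window: nothing to delete
}

func main() {
	recent := []int64{301, 205, 77}
	fmt.Println(removeLike(recent, 205)) // [301 77]
}
```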

Of course, the front end presumably also does some delayed sending or request debouncing. I don't know the details, but there should be some (we only focus on the back end here).

So after we have solved the first requirement, let's look at the second requirement.

2.2 Implementing the video-like relationship

So how do we update a video's like count? Directly increment the value in the database? Definitely not: database connection io is slow, and this is a write operation on top of that!

We can use delayed consumption here. Didn't we already cache the user-like relationship in the previous step?

Here we asynchronously send that message to MQ, and MQ accumulates the backlog and consumes it on a schedule.

Send to MQ:

// Video dimension: RabbitMQ absorbs the backlog for accumulated consumption;
// push to MQ, then update the video's like count on a schedule.
err = rabbitmq.SendMessage(ctx, video_consts.RabbitMqLikeQueue, videoInfo)
if err != nil {
  log.LogrusObj.Infoln(err)
  return
}

Consume from MQ and let the counts accumulate in the middleware until they are applied. That part is the middleware team's business and has nothing to do with us CRUD boys~

func (s *VideoSync) RunVideoLike(ctx context.Context) error {
	rabbitMqQueue := video_consts.RabbitMqLikeQueue
	likeVideoInfo, err := rabbitmq.ConsumeMessage(ctx, rabbitMqQueue)
	if err != nil {
		return err
	}
	var forever chan struct{}

	go func() {
		for d := range likeVideoInfo {
			log.LogrusObj.Infof("Received video like message: %s", d.Body)

			// Persist to the database
			reqRabbitMQ := new(video_types.LikeVideoReq)
			err = json.Unmarshal(d.Body, reqRabbitMQ)
			if err != nil {
				log.LogrusObj.Infof("Failed to unmarshal like message: %s", err)
			}

			err = service.LikeVideoMQ2MySQL(ctx, reqRabbitMQ)
			if err != nil {
				log.LogrusObj.Infof("Failed to persist like: %s", err)
			}
		}
	}()

	log.LogrusObj.Infoln(err)
	<-forever

	return nil
}

We update the database on a schedule because, on the consumer side, this data is not critical; what matters is the user experience. And what about the cancel case mentioned above? Same idea: maintain an MQ queue that is consumed on a schedule, except the accumulated value is subtracted instead of added.
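A minimal sketch of how that scheduled consumer could fold likes and cancels into per-video deltas before touching MySQL, so each video gets one write per flush instead of one per like. `applyEvent` and `flushSQL` are hypothetical names, and the `videos` table and `like_count` column are assumptions:

```go
package main

import "fmt"

// applyEvent folds one MQ message into the per-video delta map:
// +1 for a like, -1 for a cancel.
func applyEvent(deltas map[int64]int64, videoId int64, cancel bool) {
	if cancel {
		deltas[videoId]--
	} else {
		deltas[videoId]++
	}
}

// flushSQL renders one UPDATE per video, applying the accumulated delta
// in a single write instead of one write per like.
func flushSQL(deltas map[int64]int64) []string {
	stmts := make([]string, 0, len(deltas))
	for id, d := range deltas {
		stmts = append(stmts, fmt.Sprintf(
			"UPDATE videos SET like_count = like_count + (%d) WHERE id = %d", d, id))
	}
	return stmts
}

func main() {
	deltas := map[int64]int64{}
	applyEvent(deltas, 42, false)
	applyEvent(deltas, 42, false)
	applyEvent(deltas, 42, true) // one cancel
	fmt.Println(flushSQL(deltas)) // one UPDATE adding the net +1 for video 42
}
```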

The general process is as follows

(flow diagram omitted)

Of course, the real scenario is much more complicated than this: with many machines come problems like message loss, duplicate consumption, idempotency, read-write splitting, load balancing, and so on. We have simplified a great deal here.


Origin blog.csdn.net/weixin_45304503/article/details/129543643