Project: Echo Forum System

1. Login and registration module

1. Registration function

1.1. Registration flow chart

 

1.2. Registration code
/**
 * Register a new user.
 * @param user the registration form data
 * @return a Map of error messages; an empty map means registration succeeded
 */
public Map<String, Object> register(User user) {
    Map<String, Object> map = new HashMap<>();

    if (user == null) {
        throw new IllegalArgumentException("Parameter must not be null");
    }
    if (StringUtils.isBlank(user.getUsername())) {
        map.put("usernameMsg", "Username must not be blank");
        return map;
    }

    if (StringUtils.isBlank(user.getPassword())) {
        map.put("passwordMsg", "Password must not be blank");
        return map;
    }

    if (StringUtils.isBlank(user.getEmail())) {
        map.put("emailMsg", "Email must not be blank");
        return map;
    }

    // Check whether the username is already taken
    User u = userMapper.selectByName(user.getUsername());
    if (u != null) {
        map.put("usernameMsg", "This username already exists");
        return map;
    }

    // Check whether the email is already registered
    u = userMapper.selectByEmail(user.getEmail());
    if (u != null) {
        map.put("emailMsg", "This email is already registered");
        return map;
    }

    // Register the user
    user.setSalt(CommunityUtil.generateUUID().substring(0, 5)); // salt
    user.setPassword(CommunityUtil.md5(user.getPassword() + user.getSalt())); // salted MD5 hash
    user.setType(0); // default: ordinary user
    user.setStatus(0); // default: not yet activated
    user.setActivationCode(CommunityUtil.generateUUID()); // activation code
    // Random avatar (the user can change it after logging in)
    user.setHeaderUrl(String.format("http://images.nowcoder.com/head/%dt.png", new Random().nextInt(1000)));
    user.setCreateTime(new Date()); // registration time
    userMapper.insertUser(user);

    // Send the activation email to the new user
    Context context = new Context();
    context.setVariable("email", user.getEmail());
    // http://localhost:8080/echo/activation/{userId}/{activationCode}
    String url = domain + contextPath + "/activation/" + user.getId() + "/" + user.getActivationCode();
    context.setVariable("url", url);
    String content = templateEngine.process("/mail/activation", context);
    mailClient.sendMail(user.getEmail(), "Activate your Echo account", content);

    return map;
}

1.3. Performance optimization

Use a thread pool to send emails asynchronously: whenever an email needs to be sent, a thread from the pool handles it in the background. In Spring this only requires adding the @Async annotation to the mail-sending method and the @EnableAsync annotation to the startup class.
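The same idea can be sketched without Spring, using a plain ExecutorService as the thread pool; sendMailAsync here is a hypothetical stand-in for the project's mailClient.sendMail, with the actual SMTP work stubbed out:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncMailSketch {
    // A small fixed pool plays the role of Spring's @Async executor.
    private static final ExecutorService MAIL_POOL = Executors.newFixedThreadPool(4);
    static final AtomicInteger sent = new AtomicInteger();

    // Stub for mailClient.sendMail(...): the caller returns immediately while a
    // pool thread does the (slow) SMTP work in the background.
    public static Future<?> sendMailAsync(String to, String subject, String content) {
        return MAIL_POOL.submit(() -> {
            // ... the real code would talk to the SMTP server here ...
            sent.incrementAndGet();
        });
    }

    // Convenience for demos/tests: send and wait for completion.
    public static void sendAndWait(String to, String subject, String content) {
        try {
            sendMailAsync(to, subject, content).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void shutdown() { MAIL_POOL.shutdown(); }
}
```

The web request that triggers the email returns immediately; only this demo helper waits for the pool thread to finish.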

 

2. Login module

2.1. Login page

2.2. Login verification code problem

First of all, a verification code (captcha) is randomly generated when logging in. How do we associate this code with the current user so that it can be checked later?

Obviously, since the user has not logged in yet, there is no user ID to tie the code to. So we generate a random temporary ID to stand in for this user, store the (temporary ID, verification code) pair in Redis with a 60-second expiry, and store the temporary ID in a cookie, also valid for 60 seconds.

Generating the verification code and checking it are two separate URL endpoints.

This way, when the user clicks the login button, the temporary ID is read from the cookie, the expected verification code is looked up in Redis under that ID, and it is compared with the code the user entered.
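A minimal sketch of this flow, with a ConcurrentHashMap plus stored deadlines standing in for Redis and its 60-second TTL (the name kaptchaOwner and the class itself are illustrative, not the project's actual code):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class CaptchaStore {
    private static final long TTL_MS = 60_000; // 60-second validity, as in the text
    private record Entry(String code, long expiresAt) {}
    private final Map<String, Entry> store = new ConcurrentHashMap<>();

    // Called by the "generate captcha" endpoint: returns the temporary id
    // that the controller would also write into a cookie.
    public String save(String code) {
        String kaptchaOwner = UUID.randomUUID().toString();
        store.put(kaptchaOwner, new Entry(code, System.currentTimeMillis() + TTL_MS));
        return kaptchaOwner;
    }

    // Called by the login endpoint: owner comes from the cookie, input from the form.
    public boolean check(String kaptchaOwner, String input) {
        Entry e = store.get(kaptchaOwner);
        if (e == null || System.currentTimeMillis() > e.expiresAt) {
            return false; // expired or never issued
        }
        return e.code.equalsIgnoreCase(input);
    }
}
```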

2.3. Login authentication and user status issues

After the user enters the username and password and the verification code checks out, the login succeeds. But how do we keep the user's login state across requests, and how do we echo the user's information back?

The approach can be to design a class as shown below:

To explain: every time a user logs in successfully, we generate a random and unique login-credential object, LoginTicket (containing the user id, the credential string ticket, a validity status, and an expiration time). This object is stored in Redis (the key is the ticket string, the value is the LoginTicket object), and the ticket string itself is saved in a cookie. An "invalid" login credential means the user is no longer making requests, so the credential's expiration time is no longer refreshed; on the next request, the current time is compared with the credential's expiration time to decide whether it has expired.

Once the LoginTicket is stored, we can determine the user's status from it. We define an interceptor, LoginTicketInterceptor, which before each request reads the ticket from the cookie and then looks up the LoginTicket in Redis to check whether the login credential is valid and unexpired. The request is executed only if the credential is valid and not expired; otherwise the user is redirected to the login page.

If the user's login credential is valid and unexpired, we can hold the user's information for this request. How? We store it in a ThreadLocal, which keeps a separate copy of the user information in each thread: every thread accesses only its own copy, achieving thread isolation. Let's look at the following class, HostHolder:
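A minimal HostHolder along these lines (the nested User record is a stand-in for the project's User entity):

```java
// Holds the current user for the duration of one request.
// ThreadLocal gives each request-handling thread its own copy.
public class HostHolder {
    private final ThreadLocal<User> users = new ThreadLocal<>();

    public void setUser(User user) { users.set(user); }
    public User getUser()          { return users.get(); }
    public void clear()            { users.remove(); } // called after the request completes

    // Minimal stand-in for the project's User entity.
    public record User(int id, String username) {}
}
```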

        Therefore, the information to be saved after successful login is:

Save the generated credential to Redis with an expiration time, and set its status to 1, where the key is the ticket string and the value is the LoginTicket object.

Each request then passes through the interceptor first, which obtains the ticket from the cookie and fetches the LoginTicket from Redis by that ticket. If it exists and is valid, the user information is queried using the user ID inside the LoginTicket and saved into the ThreadLocal; otherwise the request is intercepted.
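The interceptor's core check can be sketched as plain logic; here a millisecond timestamp stands in for the credential's Date-typed expiration field, and the status semantics (1 = valid) follow the description above:

```java
public class TicketCheck {
    // Minimal stand-in for the project's LoginTicket entity.
    public record LoginTicket(int userId, String ticket, int status, long expiredAtMillis) {}

    // What LoginTicketInterceptor decides before each request:
    // proceed only if the credential exists, is valid, and has not expired.
    public static boolean isUsable(LoginTicket t, long nowMillis) {
        return t != null
                && t.status() == 1              // 1 = valid, per the text above
                && t.expiredAtMillis() > nowMillis;
    }
}
```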

 

2.4. Exit function

Delete the LoginTicket from Redis based on the ticket credential, and call ThreadLocal's remove() method to clear the user information.

2.5. Performance optimization

Because every request must obtain the ticket from the cookie in the interceptor and then fetch the LoginTicket from Redis, and, once that check passes, query the user information from the database, every single access request would put heavy pressure on the database.

To avoid this, the interceptor first queries Redis for the user information and, if it exists, saves it directly into the ThreadLocal. Otherwise it queries the database for the user information and then writes it back to Redis.
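The cache-aside pattern just described, sketched with in-memory maps standing in for Redis and MySQL so the saved database hits are visible (class and field names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class UserCacheSketch {
    private final Map<Integer, String> redis = new HashMap<>(); // stand-in for Redis
    private final Map<Integer, String> db = new HashMap<>();    // stand-in for MySQL
    int dbHits = 0;

    public UserCacheSketch() { db.put(1, "alice"); }

    // Cache-aside read: try the cache first, fall back to the DB, then populate the cache.
    public String getUser(int id) {
        String user = redis.get(id);
        if (user != null) return user;         // cache hit: no DB access
        user = db.get(id);                     // cache miss: query the database
        dbHits++;
        if (user != null) redis.put(id, user); // write back (the real code also sets a TTL)
        return user;
    }
}
```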

2.6. Process
  • User login -> generate a login credential and store it in Redis, with the ticket as the key and the credential object as the value; the ticket string is also stored in a cookie.
  • Before each request is executed, the interceptor uses the cookie to look the credential up in Redis and checks whether it is valid and unexpired. Clicking "Remember me" extends the credential's expiration time; logging out invalidates it.
  • The user information is queried from the database using the user ID stored in the credential.
  • A ThreadLocal holds this user information throughout the request.
  • Optimization point: querying the database before every request is expensive at high access frequency, so the information of a successfully logged-in user is cached in Redis for a while, and the interceptor checks Redis first before querying.
2.7. Flowchart

2. Post module

1. Publish a post

A front-end rich text editor is used here, so that users can post text as well as upload images and videos.

Uploading pictures, deleting pictures, and downloading pictures use the Alibaba Cloud OSS function.

Uploading videos, deleting videos, and playing videos use the Alibaba Cloud video on demand function.

As shown in the picture above, posts have category modules; a category can be chosen when publishing a post.


2. Entity class

The entity stores the author's user ID, the article content, image URLs, the video playback URL, the content type, the like count, the comment count, the status, the creation time, the modification time, and so on.

3. Effect display

MyBatis-Plus paging is combined with the Alibaba Cloud OSS image function and the Alibaba Cloud VOD function to display the final content intuitively.

4. Popular post ranking function

The content on the forum homepage is ranked by popularity; the requirement is to display the top 10 most popular posts, with paging.

The first is the entity class as follows:

Post table

Like table

The idea for counting the number of likes on a post:

A scheduled task periodically queries the likes table by post ID and writes the like count back to the post table in the database.

All posts are then sorted in descending order of like count with ORDER BY, and output with paging.

This yields the popular post information.
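A dependency-free sketch of these two steps, counting likes per post and then taking a page of the most-liked posts (the real project does this with a scheduled task plus an ORDER BY ... DESC paged query):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class HotPostsSketch {
    // Step 1: what the scheduled task computes — like count per post id,
    // from the rows of the likes table.
    public static Map<Integer, Long> countLikes(List<Integer> likedPostIds) {
        return likedPostIds.stream()
                .collect(Collectors.groupingBy(id -> id, Collectors.counting()));
    }

    // Step 2: what "ORDER BY like_count DESC LIMIT pageSize" returns — the hottest post ids.
    public static List<Integer> topPosts(Map<Integer, Long> likeCounts, int pageSize) {
        return likeCounts.entrySet().stream()
                .sorted(Map.Entry.<Integer, Long>comparingByValue().reversed())
                .limit(pageSize)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
```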

5. Performance optimization

Redis+Canal+MySQL binlog achieves cache consistency

As shown in the picture above, because posts receive heavy traffic, Redis is used to cache popular posts. However, this introduces a consistency problem between the database and the cache; here Redis + Canal + the MySQL binlog are used to keep the cache consistent.

  • First enable the binlog in MySQL.
  • Then deploy the Canal middleware on Linux, configure MySQL's IP address, port, username, and password in its configuration file, and start the Canal service.
  • Then integrate Canal into Spring Boot using canal-spring-boot-starter.
  • Write a listener that implements the EntryHandler interface and overrides its insert, update, and delete methods. Once the database changes, the MySQL binlog can be observed and the cache updated accordingly.

For this project, Canal watches the database binlog: whenever a post is modified, the change shows up in the MySQL binlog, Canal picks it up, and the data in Redis is then updated.

Use the Guava Bloom filter mechanism to solve the cache penetration problem

Introduce Guava into the Java project, add a configuration class, and set the false positive rate, typically 0.05. Then expose a request interface specifically for filling the Bloom filter with data; it is filled with the IDs of all current posts.

On subsequent requests, if the Bloom filter reports that the data may exist, the lookup proceeds to Redis; if the filter reports that it definitely does not exist, the request is rejected directly without touching Redis.
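A dependency-free sketch of this request flow; a HashSet stands in for the Guava filter here (a real Bloom filter answers "may exist" with a small false-positive rate but never gives a false negative, so the rejection branch is always safe):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class PenetrationGuard {
    private final Set<Integer> filter = new HashSet<>();        // stand-in for the Bloom filter
    private final Map<Integer, String> redis = new HashMap<>(); // stand-in for the post cache
    int cacheLookups = 0;

    public void fillFilter(int postId) { filter.add(postId); }  // the "fill" endpoint
    public void cachePost(int postId, String post) { redis.put(postId, post); }

    // Request flow: filter says "definitely absent" -> reject without touching Redis/DB.
    public String getPost(int postId) {
        if (!filter.contains(postId)) {
            return null; // cache penetration blocked
        }
        cacheLookups++;
        return redis.get(postId); // real code falls through to MySQL on a cache miss
    }
}
```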

3. Comment module

1. Post comments

If the user comments directly under a post, the comment is a post-level comment: it is marked as such in the database, and a unique comment identifier is generated and associated with the post's id.

2. User replies to comments

If the user clicks the reply function on a comment, the comment is a user-level reply: it is marked as such in the database, and a unique comment identifier is generated and associated with the target user's id.

There is also the ability for one user to reply to another user within the comments. This works the same way as above; the front end only needs to sort by the current comment id and the saved user id, paging by time.

3. Entity class

4. Private message module

1. Effect display

2. Detailed steps

2.1. Private message list

This feature shows how many users the current user is chatting with, displays those users' information along with read/unread status, and provides a list view.

To find out which users the current user has chatted with, match directly on the conversation_id field. When this field is saved, the two user IDs are joined as strings with the separator "_", so string splitting recovers the two user IDs.

To avoid double counting, a Set is used for deduplication, and conversation_id is built in dictionary order, with the smaller ID on the left and the larger on the right. For example, the conversation between users 102 and 101 is saved as "101_102". All related rows can then be fetched from the database with a fuzzy match, put into the Set, and split to obtain the two user IDs, giving both the number and the IDs of conversation partners.

Each conversation record saves the sender id, receiver id, status, conversation identifier, and time.
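The conversation_id construction and splitting described above, as a small utility (class and method names are illustrative):

```java
public class ConversationIds {
    // Smaller id on the left, larger on the right, joined by "_",
    // so both directions of a chat map to the same conversation_id.
    public static String build(int userA, int userB) {
        return Math.min(userA, userB) + "_" + Math.max(userA, userB);
    }

    // Recover the two user ids from a stored conversation_id.
    public static int[] split(String conversationId) {
        String[] parts = conversationId.split("_");
        return new int[]{Integer.parseInt(parts[0]), Integer.parseInt(parts[1])};
    }
}
```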

2.2. Detailed dialogue

The entity class has from_id and to_id, corresponding to the sender and the receiver respectively. Once the conversation_id is obtained, both parties' messages can be fetched by it and displayed in time order.

For example, suppose the two user IDs are 101 and 102. For the current user 101, if he sends a message, the sender is himself and the receiver is 102, and vice versa. Conversation messages can then be fetched sorted by time for the current user ID and displayed with paging.

2.3. Send message

To send a message, build conversation_id in the conversation table from the current login user's id and the other party's id in dictionary order; save the sender id and receiver id, set the status to 0 (unread) by default, and store the message content and send time.

2.4. Statistics on the number of unread messages

First, using the logged-in user's ID, fuzzy-match conversation_id to find all rows related to that user's conversations. From this data, select the rows where to_id equals the logged-in user's ID and status = 0; counting them gives the number of unread messages.
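The unread-message count, sketched over an in-memory message list; the real project expresses the same condition in SQL (to_id = ? AND status = 0), and the record here is a stand-in for the message entity:

```java
import java.util.List;

public class UnreadCountSketch {
    // Minimal stand-in for the message entity described in this module.
    public record Message(int fromId, int toId, String conversationId, int status) {}

    // status semantics from the entity description: 0 = unread, 1 = read.
    public static long countUnread(List<Message> messages, int me) {
        return messages.stream()
                .filter(m -> m.toId() == me)  // addressed to the logged-in user
                .filter(m -> m.status() == 0) // still unread
                .count();
    }
}
```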

3. Entity class

  • id: the unique identifier of private message/system notification
  • from_id: sender id
  • to_id: receiver id
  • conversation_id: identifies the conversation between two users. For example, whether user 112 sends a message to 113 or 113 sends one to 112, the conversation_id is 112_113 in both cases. This lets us find all private messages between 112 and 113 through this field. The field is redundant, since it can be derived from from_id and to_id, but having it simplifies subsequent queries and other operations.
  • content: Content of private message/system notification
  • status: read status of the notification on the receiver's side
    • 0 - Unread (default)
    • 1 - Read
  • create_time: the sending time of private messages/system notifications

5. Like module

1. Redis key-value pair design

  • key: (String) the target id and the liking user's id joined together, with _ as the separator
  • value: (HashMap) stores the like status (0: liked, 1: like cancelled) and a timestamp of the update time

key = targetId_likingUserId

value = {time: long}

Both user likes and post likes are designed this way.

2. Count the number of likes

One advantage of this design: to count the likes on a post or on a user's comments, a fuzzy match on the target ID in Redis is enough.

Set<String> keys = redisTemplate.keys("targetId_" + "*");
int size = keys.size();

3. Determine whether it is liked 

To see whether a post or comment has been liked by a given user, just query the key <targetId, userId>; whether a value exists tells you whether it has been liked.

Set<String> keys = redisTemplate.keys(RedisUtils.setKey(targetId, userId));
if (keys.isEmpty()) {
    System.out.println("Not liked");
} else {
    System.out.println("Liked");
}



4. Like

Save to Redis with key <targetId, userId> and value <timestamp>.

redisTemplate.opsForSet().add(RedisUtils.setKey(targetId, userId), RedisUtils.setValue());

A scheduled task periodically flushes this data to MySQL and updates the total like count in the statistics table.

5. Cancel likes

To cancel a like, simply delete the key <targetId, userId>; the like is removed and the target's total like count decreases by 1.

Boolean delete = redisTemplate.delete(RedisUtils.setKey("targetId", "userId"));
if (Boolean.TRUE.equals(delete)) {
    System.out.println("Cancellation successful");
} else {
    System.out.println("Cancellation failed");
}



6. The total number of likes obtained by the liked person

The total number of likes a user receives is made up of: the total likes on their posts + the total likes on their comments.

The total like count is queried from the MySQL statistics table.

Here is the RedisUtils utility class:

public class RedisUtils {
    /**
     * Build the Redis key for a like.
     * @param id1 target id
     * @param id2 liking user's id
     * @return the key string
     */
    public static String setKey(String id1, String id2) {
        return id1 + "_" + id2;
    }

    /**
     * Build the Redis value for a like:
     * a map holding the like timestamp.
     * @return the value map
     */
    public static Map<String, Long> setValue() {
        Map<String, Long> map = new HashMap<>();
        Instant instant = LocalDateTime.now().toInstant(ZoneOffset.ofHours(8));
        long millisecond = instant.toEpochMilli();
        map.put("time", millisecond);
        return map;
    }
}

7. Entity class

7.1. Like statistics table

7.2. Like information table

6. System notification module

1 Overview

System notification is a very common and necessary requirement. When a like, follow, or comment operation occurs, the system will send a notification to the corresponding user.

For social networking sites with huge traffic, the volume of system notifications is enormous. Simply using Ajax for asynchronous behavior, as in the private-message or posting features, is clearly not enough. Therefore, to guarantee system performance, a message queue is necessary (the three major benefits of message queues: decoupling, asynchrony, and peak shaving). Echo uses Kafka.

Overall, there are only two requirements, sending system notifications and displaying system notifications:

1.1. Send system notification:
  • A likes B's post or comment: a like-type system notification is sent to B ( TOPIC_LIKE)
  • A follows B: a follow-type system notification is sent to B ( TOPIC_FOLLOW)
  • A comments on B's post or comment: a comment-type system notification is sent to B ( TOPIC_COMMNET)

The overall logic is that when a like operation occurs, a like event is published to the message queue, and a consumer then consumes this event. The concrete consumption logic is to insert a row into the system notification table (system notifications reuse the private-message table  message), with  from_id hard-coded to 1 in the code to indicate that the sender is the system. This is why, when deploying, you must make sure the user table contains a user with id = 1.

1.2. Display system notification:
  • System notification list (displaying three types of notifications: likes, comments, and follow)
  • System notification details (display system notifications included in a certain type in pages)
  • Show the number of unread messages

2. Encapsulate event objects

If the consumer is to insert a record into the message table by consuming a message, then that message (event) should carry all the fields of the message table, or at least enough information for them to be derived.

In addition, MQ follows a publish/subscribe, one-to-many model: messages are grouped by topic, producers publish messages to a topic, and consumers subscribe to that topic. Take the like event as an example, see the picture below:

The sender is whoever performs the like, follow, or comment operation; the receiver is the corresponding affected user.

The effect is similar to Bilibili's notification center.

When users like, comment, or follow, event data is produced, and the received data is then classified and identified by type.
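Based on this description, the event object can be sketched as a plain class with fluent setters; the field names follow the consumer code shown below (topic, userId, entityType, entityId, entityUserId, plus a data map for extra per-event fields), though the project's actual Event class may differ in detail:

```java
import java.util.HashMap;
import java.util.Map;

// Encapsulates one notification event published to Kafka.
public class Event {
    private String topic;       // TOPIC_LIKE / TOPIC_FOLLOW / TOPIC_COMMNET
    private int userId;         // who triggered the event
    private int entityType;     // what was acted on (post, comment, user)
    private int entityId;
    private int entityUserId;   // owner of the entity, i.e. the notification receiver
    private Map<String, Object> data = new HashMap<>(); // extra fields for the message content

    public String getTopic() { return topic; }
    public Event setTopic(String topic) { this.topic = topic; return this; }
    public int getUserId() { return userId; }
    public Event setUserId(int userId) { this.userId = userId; return this; }
    public int getEntityType() { return entityType; }
    public Event setEntityType(int entityType) { this.entityType = entityType; return this; }
    public int getEntityId() { return entityId; }
    public Event setEntityId(int entityId) { this.entityId = entityId; return this; }
    public int getEntityUserId() { return entityUserId; }
    public Event setEntityUserId(int entityUserId) { this.entityUserId = entityUserId; return this; }
    public Map<String, Object> getData() { return data; }
    public Event setData(String key, Object value) { this.data.put(key, value); return this; }
}
```

The fluent setters let the producer build and publish an event in one expression before handing it to KafkaTemplate.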

Code:

/**
 * Consume comment, like, and follow events.
 * @param record the Kafka record
 */
@KafkaListener(topics = {TOPIC_COMMNET, TOPIC_LIKE, TOPIC_FOLLOW})
public void handleMessage(ConsumerRecord record) {
    if (record == null || record.value() == null) {
        logger.error("Message content is empty");
        return;
    }
    Event event = JSONObject.parseObject(record.value().toString(), Event.class);
    if (event == null) {
        logger.error("Malformed message");
        return;
    }

    // Send the system notification
    Message message = new Message();
    message.setFromId(SYSTEM_USER_ID);
    message.setToId(event.getEntityUserId());
    message.setConversationId(event.getTopic());
    message.setCreateTime(new Date());

    Map<String, Object> content = new HashMap<>();
    content.put("userId", event.getUserId());
    content.put("entityType", event.getEntityType());
    content.put("entityId", event.getEntityId());
    if (!event.getData().isEmpty()) { // copy the extra fields carried in the Event's data map
        for (Map.Entry<String, Object> entry : event.getData().entrySet()) {
            content.put(entry.getKey(), entry.getValue());
        }
    }
    message.setContent(JSONObject.toJSONString(content));

    messageService.addMessage(message);
}

7. Project Difficulties

1. Use Bloom filters to solve the cache penetration problem of Redis

Guava is used here to implement the Bloom filter.

First ask the Bloom filter whether the data might exist. If the filter reports that it does not exist, the data is definitely absent, so the request can be rejected immediately. If the filter reports that it may exist, the lookup then proceeds to Redis (and, on a miss, to the database).


Origin blog.csdn.net/weixin_55127182/article/details/131998298