Xuecheng Online notes + pitfalls (9) - course publishing, xxl-job + message SDK for distributed transactions, page staticization, Hystrix circuit breaking and degradation

navigation:

[Dark Horse Java Notes + Pitfalls Summary] JavaSE + JavaWeb + SSM + SpringBoot + Reggie Takeaway + SpringCloud + Dark Horse Travel + Guli Mall + Xuecheng Online + Nowcoder Interview Questions_java dark horse notes

Table of contents

1 Business process: database write + cache + ES + MinIO static page storage

2 Distributed Transaction Technology Solution

2.1 Review of local transactions and distributed transactions

2.2 What is CAP theory

2.3 Distributed transaction control scheme

2.3.1 Strong Consistency of CP and Strong Availability of AP

2.3.2 BASE Theory: Basic Availability, Soft State, Eventual Consistency

2.3.3 Implementation of strong consistency and eventual consistency

2.4 Transaction control scheme for course publishing - xxl-job achieves eventual consistency

2.4.1 Analysis

2.4.2 Technical solution, message table

2.4.3 Business Process

3 Course Publishing Interface

3.1 Interface definition, publish courses according to the course id

3.3 Interface development

3.3.1 Create message table and message history table

3.3.2 Business process

3.3.3 Business implementation, publish courses according to course id and institution id 

3.3.4 Completing the interface

3.4 Testing

4 [Message Module] Message Processing SDK

4.1 Analysis

4.1.1 Why extract the message module?

4.1.2 Why is the message module a general code component instead of a general service?

4.1.3 SDK does not need to provide logic for executing tasks

4.1.4 How to ensure the idempotence of tasks?

4.1.5 How to ensure that tasks are not repeated?

4.1.6 How to prevent repeated execution of stage small tasks?

4.2 Realization of message service

4.2.0 Initialize message sdk module

4.2.1 Scanning and completion of messages, query and completion of stage small tasks

4.2.2 Task processing abstract class to facilitate task class inheritance

4.3 xxl-job integrates the message SDK to schedule the task of publishing courses

4.3.1 Environment preparation, importing dependencies + adding a message table of type "publish course"

4.3.2 [Content Module] Course Publishing Task Class

4.3.3 The xxl-job scheduling center starts the executor and tasks, and executes them every 10s

4.3.4 Testing

5 Template engine for page staticization (upload the static course preview page to MinIO)

5.1 What is page staticization

5.2 Freemarker course preview page static test

5.3 Upload file test

5.3.1 Environment preparation, multi-file upload dependency, configuration class

5.3.2 [Media Assets Service] Add parameter "object name" to multi-file upload interface

5.3.3 Remote call test

5.4 Hystrix circuit breaking and degradation

5.4.1 Avalanche, circuit breaking, degradation

5.4.2 Circuit breaking and degradation handling

5.5 Course staticization development

5.5.1 Staticization implementation

5.5.2 Testing

5.5.3 Browsing the details page


1 Business process: database write + cache + ES + MinIO static page storage

After a course passes review, the teaching institution's staff can publish it; the course is then shown publicly on the website for students to browse, choose, and study.

Displaying course information on the website raises a performance problem: if pages load slowly (network speed aside), the user experience suffers.

How can courses be searched quickly? Is it feasible for the course details page to hit the database on every visit?

To speed up the site, course information must be cached, and it should also be written to the ES index library so it can be searched.

Course publishing business process:

1. Store the course release information in the course release table of the content management database, and update the release status in the course basic information table to released.

2. Store course cache information to Redis.

3. Store course index information to Elasticsearch.

4. Staticize the course preview page and store it in the MinIO file system so the course details page can be browsed quickly.

Detailed business process:

1. Insert a record into the course publish table course_publish; the record comes from the course pre-publish table. If a record already exists it is updated. Publish status: published.

2. Update this course's publish status in course_base to: published.

3. Delete the corresponding record from the course pre-publish table.

4. Insert a message into the message table mq_message. The message type is course_publish, i.e. the message represents a course publish task; "business key 1" is the course id.

5. The inserted message stands for a course publish task, which stores the course cache information to Redis, the course index information to Elasticsearch, and the staticized course preview page to the MinIO file system.

The data in the course publish table comes from the course pre-publish table; their structures are basically the same, except that the status field in the course publish table holds the publish status, as shown in the following figure:

The course cache information in Redis is the course publish record converted to JSON.

The course index information in Elasticsearch indexes the course name, introduction, and other fields according to search needs.

The static course pages (HTML files) are stored in MinIO; the course details page is browsed by serving the page from the file system.
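The five database-side steps above can be sketched with plain in-memory maps. This is purely illustrative: the table and status names follow the text, but the real project uses MyBatis-Plus mappers, not maps.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the course publish flow: course_publish upsert,
// course_base status update, pre-publish delete, mq_message insert.
public class PublishFlowSketch {
    static Map<Long, Map<String, String>> coursePublishPre = new HashMap<>();
    static Map<Long, Map<String, String>> coursePublish = new HashMap<>();
    static Map<Long, String> courseBaseStatus = new HashMap<>();
    static Map<Long, String> mqMessage = new HashMap<>();

    static void publish(long courseId) {
        Map<String, String> pre = coursePublishPre.get(courseId);
        if (pre == null) throw new IllegalStateException("no pre-publish record");
        // 1. upsert into course_publish with status "published"
        Map<String, String> record = new HashMap<>(pre);
        record.put("status", "published");
        coursePublish.put(courseId, record);
        // 2. update the publish status in course_base
        courseBaseStatus.put(courseId, "published");
        // 3. delete the pre-publish record
        coursePublishPre.remove(courseId);
        // 4. insert a course_publish message keyed by the course id
        mqMessage.put(courseId, "course_publish");
    }

    public static void main(String[] args) {
        Map<String, String> pre = new HashMap<>();
        pre.put("name", "Java basics");
        coursePublishPre.put(2L, pre);
        publish(2L);
        System.out.println(coursePublish.get(2L).get("status")); // published
        System.out.println(coursePublishPre.containsKey(2L));    // false
        System.out.println(mqMessage.get(2L));                   // course_publish
    }
}
```

Step 5 (Redis/ES/MinIO synchronization) is deliberately absent here: it runs later, driven by the message record, as the rest of the chapter explains.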

2 Distributed Transaction Technology Solution

2.1 Review of local transactions and distributed transactions

See: Spring Cloud Foundations 6 - Distributed Transactions, Seata (vincewm's CSDN blog)

What is a local transaction?

Usually we control transactions in a program through Spring's @Transactional annotation, relying on the transaction support of the database itself, so they are called database transactions. Because the application relies on a single relational database that belongs only to that application, such transactions are also called local transactions.

Local transactions have the four ACID properties: all operations in a transaction are combined into an inseparable execution unit that either all succeeds or all fails; if any single operation fails, the whole transaction is rolled back.

What is a distributed transaction?

A distributed transaction is one whose participants, transaction-supporting servers, resource servers, and transaction manager are located on different nodes of different distributed systems.

Scenarios for distributed transactions:

Cross-JVM process: remote calls under a microservice architecture.

Cross-database: a single service operates multiple databases:

Note that this means multiple databases, not multiple tables within one database.

Multiple services, one database:

2.2 What is CAP theory

CAP is the abbreviation of Consistency, Availability, and Partition tolerance, which represent consistency, availability, and partition tolerance respectively.

  • Consistency C: Data Synchronization
  • Availability A: The node can be accessed normally
  • Partition fault tolerance P: When a partition occurs in the cluster, the entire system must continue to provide external services

The client accesses the two nodes of the user service through the gateway:

Consistency means that whichever node the user accesses, the data returned is the latest. For example, two queries for Xiaoming's information must not return different results while the data has not changed.

Availability means a query for user information always returns a result, though not necessarily the latest data.

Partition tolerance (also called partition fault tolerance): when the system uses a distributed architecture and a partition occurs because of abnormal network communication (interrupted requests or lost messages), the system must still provide service.

  • Partition: due to network failure or other reasons, some nodes in the distributed system lose connection with other nodes, forming an independent partition.
  • Fault tolerance: When the cluster is partitioned, the entire system must continue to provide external services

CAP theory states that a distributed system cannot satisfy all three properties at once; it must choose to guarantee either CP or AP.

Since the system is distributed, partition tolerance must be satisfied: network failures between services are inevitable, and the whole system must not become unavailable because of a local network fault.

If P is satisfied then C and A cannot be satisfied at the same time:

For example, if we add the information of a user Xiaoming, the information is first added to node 1, and then synchronized to node 2, as shown in the figure below:

To satisfy C (consistency), the system must wait until Xiaoming's information finishes synchronizing before it can be used (otherwise a request reaching node 2 would find no data, violating consistency). The system is unavailable while synchronizing, so A cannot be satisfied at the same time as C.

To satisfy availability, the system must always be usable without waiting for synchronization to finish, in which case consistency cannot be satisfied.

Therefore, for distributed transaction control in a distributed system, either CP or AP is guaranteed.
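The C/A trade-off can be made concrete with a toy two-node model (an assumption for illustration, not project code): a write lands on node 1 and is replicated to node 2 later; a CP read refuses to answer until replication finishes, while an AP read always answers, possibly with stale data.

```java
// Toy model of the C/A trade-off in a partition-tolerant design.
public class CapTradeoff {
    static String node1;        // node holding the latest write
    static String node2;        // replica that lags behind
    static boolean replicated;  // has node2 caught up?

    static void write(String value) { node1 = value; replicated = false; }
    static void replicate()         { node2 = node1; replicated = true; }

    // CP: reject the read instead of returning stale data
    static String cpRead() {
        if (!replicated) throw new IllegalStateException("unavailable: replication in progress");
        return node2;
    }

    // AP: always answer, even if node2 has not caught up
    static String apRead() { return node2; }

    public static void main(String[] args) {
        write("XiaoMing v2");
        System.out.println(apRead()); // stale (here: null) - availability over consistency
        try {
            cpRead();
        } catch (IllegalStateException e) {
            System.out.println("CP read rejected: " + e.getMessage());
        }
        replicate();
        System.out.println(cpRead()); // now consistent
    }
}
```

During the replication window exactly one of the two reads misbehaves: `cpRead` sacrifices availability, `apRead` sacrifices consistency; neither can avoid both.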

2.3 Distributed transaction control scheme

Distributed transaction control requires a trade-off between C and A: guarantee consistency at the cost of availability, or availability at the cost of consistency. First decide whether you need CP or AP, judged by the application scenario.

2.3.1 Strong Consistency of CP and Strong Availability of AP

CP scenario: satisfy C, give up A; consistency is emphasized. Suitable for scenarios with high timeliness requirements.

Inter-bank transfer: a transfer request must wait for both banks' systems to complete the whole transaction before it counts as done; if either side fails, the other rolls back.

Account opening: opening an account in the business system must also open one with the operator. If either side fails, the user cannot use the account, so CP must be satisfied.

AP scenario: satisfy A, give up C; availability is emphasized. Suitable for scenarios with low timeliness requirements.

Order refund: the refund succeeds today and the money arrives tomorrow, as long as the user accepts that it arrives within a certain period.

Sign-up rewards: points are credited within 24 minutes of successful registration.

Payment SMS notification: after a successful payment, the SMS may be delayed or may even fail to send.

In practice, many scenarios fit AP. Although AP gives up consistency C, the data does end up consistent, satisfying eventual consistency. This is why the industry defined the BASE theory.

2.3.2 BASE Theory: Basic Availability, Soft State, Eventual Consistency

BASE is an acronym for Basically Available, Soft state, and Eventually consistent.

Basic availability: when the system cannot keep everything available, it is enough to keep the core services available. For example, a takeaway system sees high concurrency around 12:00 noon; at that time the services on the ordering path must stay available, while other services may be temporarily unavailable.

Soft state: an intermediate state is allowed. For example, when printing your own social security statement, the operation does not show the result immediately but reminds you that printing is in progress and to check back after some time. Intermediate states occur, but the final state is correct.

Eventual consistency: a refund is not processed immediately, but the money arrives after a certain period; strong consistency is given up in favor of eventual consistency.

2.3.3 Implementation of strong consistency and eventual consistency

Implementing CP means achieving strong consistency:

Using the Seata framework, AT mode.

Using the Seata framework, TCC mode.

Implementing AP means guaranteeing eventual data consistency:

Using message queue notification: a failed notification is retried automatically, and after the maximum number of retries it is handed over for manual processing;

Using task scheduling: start a scheduled task that synchronizes course information from the database to Elasticsearch, MinIO, and Redis.
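The "automatic retry, then manual handling" idea behind the message-queue option can be sketched in a few lines. The `Supplier` below stands in for a real synchronization call (e.g. writing the course to Elasticsearch); the method name and return strings are illustrative assumptions.

```java
import java.util.function.Supplier;

// Sketch: retry a failing synchronization up to maxAttempts times;
// if it still fails, flag it for manual handling.
public class RetryNotify {
    static String sync(Supplier<Boolean> call, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (call.get()) return "ok after attempt " + attempt;
        }
        return "needs manual handling";
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // fails twice, succeeds on the third attempt
        Supplier<Boolean> flaky = () -> ++calls[0] >= 3;
        System.out.println(sync(flaky, 5));       // ok after attempt 3
        System.out.println(sync(() -> false, 3)); // needs manual handling
    }
}
```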

2.4 Transaction control scheme for course publishing - xxl-job achieves eventual consistency

2.4.1 Analysis

Thinking: with the theory covered, return to course publishing. After the publish operation, four pieces of data must be written: database, Redis, Elasticsearch, and MinIO. Which scheme fits this scenario?

Conclusion: course publishing guarantees AP eventual consistency, using xxl-job task scheduling.

Satisfy CP?

Satisfying CP would mean that after the publish operation, the four writes to the database, Redis, Elasticsearch, and MinIO either all succeed, or all roll back as soon as any one fails.

Satisfy AP?

After the publish operation, first update the course publish status in the database, then write the course information to Redis, Elasticsearch, and MinIO; it is enough that the data eventually lands in all three within a certain time.

Since we already have experience with task scheduling, we choose the task scheduling scheme for distributed transaction control here; course publishing satisfies AP eventual consistency.

2.4.2 Technical solution, message table

The following figure is the specific technical solution:

1. Add a message table in the content management service's database, in the same database as the course publish table.

2. Clicking course publish writes the course publish information to the course publish table and, within the same local transaction, writes a course-publish message to the message table. The database guarantees that whenever the insert into the course publish table succeeds, the message table records the publish task for that course.

3. The task scheduling system periodically invokes the content management service to scan the message table.

4. When a course-publish message is found, the data is synchronized to Redis, Elasticsearch, and MinIO.

5. When the synchronization task completes, the message table record is deleted.

Message table:

2.4.3 Business Process

The following figure shows the process of course release operation:

1. Execute the publish operation: while saving to the course publish table in the content management service, also add a "course publish task" record to the message table. A local transaction guarantees that the publish information and the message record are saved together.

2. The task scheduling service periodically invokes the content management service to scan the message table. Since the publish operation inserted a course publish task, one task is found.

3. Fetch the task and execute it, storing the data in Redis, Elasticsearch, and the file system respectively.

4. Delete the message table record when the task completes.
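Steps 2-4 above can be sketched with an in-memory message table. Real scheduling is left to xxl-job; here `scanOnce()` stands in for one scheduler trigger, and the three log entries stand in for the Redis, Elasticsearch, and MinIO synchronizations.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of the scan-execute-delete loop driven by the scheduler.
public class MessageScanSketch {
    static class MqMessage {
        long courseId;
        MqMessage(long id) { courseId = id; }
    }

    static List<MqMessage> messageTable = new ArrayList<>();
    static List<String> log = new ArrayList<>();

    static void scanOnce() {
        Iterator<MqMessage> it = messageTable.iterator();
        while (it.hasNext()) {
            MqMessage msg = it.next();
            // execute the task: sync the course to Redis, ES, MinIO
            log.add("redis:" + msg.courseId);
            log.add("es:" + msg.courseId);
            log.add("minio:" + msg.courseId);
            it.remove(); // delete the record once the task completes
        }
    }

    public static void main(String[] args) {
        messageTable.add(new MqMessage(2L));
        scanOnce();
        System.out.println(log);                 // [redis:2, es:2, minio:2]
        System.out.println(messageTable.size()); // 0
    }
}
```

Because the record is deleted only after the work is done, a crash mid-task leaves the message in place, and the next scheduled scan retries it; that is the eventual-consistency guarantee.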

3 Course Publishing Interface

3.1 Interface definition, publish courses according to the course id

According to the distributed transaction control scheme of course publishing, the course publishing operation first writes course publishing information to the course publishing table through local transactions and inserts a message into the message table. The course publishing interface defined here should realize this function.

Define the course publishing interface in the content management interface project.

/**
 * @description Course preview and publish
 * @author Mr.M
 * @date 2022/9/16 14:48
 * @version 1.0
 */
@Api(value = "课程预览发布接口", tags = "课程预览发布接口")
@Controller
public class CoursePublishController {
...
    @ApiOperation("课程发布")
    @ResponseBody
    @PostMapping("/coursepublish/{courseId}")
    public void coursepublish(@PathVariable("courseId") Long courseId) {

    }
}

3.3 Interface development

3.3.1 Create message table and message history table

1. Create the mq_message message table and the message history table in the content management database (the history table stores completed messages).

The message table structure is as follows:

2. Generate the po classes and mapper interfaces for the mq_message message table and the course_publish course publish table.

A general message-handling component will be developed later, so no code is generated for the message table here.

3.3.2 Business process

1. Insert a record into the course publish table course_publish; the record comes from the course pre-publish table. If a record already exists it is updated. Publish status: published.

2. Update this course's publish status in course_base to: published.

3. Delete the corresponding record from the course pre-publish table.

4. Insert a message into the message table mq_message. The message type is course_publish, i.e. the message represents a course publish task; "business key 1" is the course id.

5. The inserted message stands for a course publish task, which stores the course cache information to Redis, the course index information to Elasticsearch, and the staticized HTML course preview page to the MinIO file system.

Constraints:

1. A course can only be published after its review has passed.

2. An institution may only publish its own courses.

3.3.3 Business implementation, publish courses according to course id and institution id

CoursePublishServiceImpl:

/**
 * @description Publish a course
 * @param companyId institution id
 * @param courseId  course id
 */
@Transactional
@Override
public void publish(Long companyId, Long courseId) {

    // Constraint check: query the course pre-publish table
    CoursePublishPre coursePublishPre = coursePublishPreMapper.selectById(courseId);
    if (coursePublishPre == null) {
        XueChengPlusException.cast("请先提交课程审核,审核通过才可以发布");
    }
    // An institution may only publish its own courses
    if (!coursePublishPre.getCompanyId().equals(companyId)) {
        XueChengPlusException.cast("不允许提交其它机构的课程。");
    }

    // Audit status: only approved courses may be published
    String auditStatus = coursePublishPre.getStatus();
    if (!"202004".equals(auditStatus)) {
        XueChengPlusException.cast("操作失败,课程审核通过方可发布。");
    }

    // Save the course publish information
    saveCoursePublish(courseId);

    // Save the message table record
    saveCoursePublishMessage(courseId);

    // Delete the corresponding record from the course pre-publish table
    coursePublishPreMapper.deleteById(courseId);
}

/**
 * @description Save the course publish information
 * @param courseId course id
 */
private void saveCoursePublish(Long courseId) {
    // Assemble the publish information from the course pre-publish table
    CoursePublishPre coursePublishPre = coursePublishPreMapper.selectById(courseId);
    if (coursePublishPre == null) {
        XueChengPlusException.cast("课程预发布数据为空");
    }

    CoursePublish coursePublish = new CoursePublish();
    // Copy into the course publish object and mark it as published
    BeanUtils.copyProperties(coursePublishPre, coursePublish);
    coursePublish.setStatus("203002");
    // Insert, or update if a record already exists
    CoursePublish coursePublishUpdate = coursePublishMapper.selectById(courseId);
    if (coursePublishUpdate == null) {
        coursePublishMapper.insert(coursePublish);
    } else {
        coursePublishMapper.updateById(coursePublish);
    }
    // Update the publish status in the course base table
    CourseBase courseBase = courseBaseMapper.selectById(courseId);
    courseBase.setStatus("203002");
    courseBaseMapper.updateById(courseBase);
}

/**
 * @description Save a message table record
 * @param courseId course id
 */
private void saveCoursePublishMessage(Long courseId) {
    // The message service is injected directly rather than called remotely,
    // because the message SDK is a shared component, not a service.
    // Message type "course_publish" marks this as a course publish task;
    // "business key 1" carries the course id.
    MqMessage mqMessage = mqMessageService.addMessage("course_publish",
            String.valueOf(courseId), null, null);
    if (mqMessage == null) {
        XueChengPlusException.cast(CommonError.UNKOWN_ERROR);
    }
}

}

Add the addMessage method to the message module's service:

    @Override
    public MqMessage addMessage(String messageType, String businessKey1, String businessKey2, String businessKey3) {
        MqMessage mqMessage = new MqMessage();
        mqMessage.setMessageType(messageType);
        mqMessage.setBusinessKey1(businessKey1);
        mqMessage.setBusinessKey2(businessKey2);
        mqMessage.setBusinessKey3(businessKey3);
        int insert = mqMessageMapper.insert(mqMessage);
        if(insert>0){
            return mqMessage;
        }else{
            return null;
        }

    }

3.3.4 Completing the interface

@ApiOperation("课程发布")
@ResponseBody
@PostMapping("/coursepublish/{courseId}")
public void coursepublish(@PathVariable("courseId") Long courseId) {
    // use a hard-coded institution id for now
    Long companyId = 1232141425L;
    coursePublishService.publish(companyId, courseId);
}

3.4 Testing

First test with an HTTP client file:

### 课程发布
POST {{content_host}}/content/coursepublish/2

Test the constraints first:

1. Publish a course whose review has not been submitted.

2. Publish a course whose review has not passed.

Normal flow test:

1. Submit the course for review.

2. Manually set the audit status in the course pre-publish table and course base table to approved.

3. Execute course publish.

4. Check that the course publish table record is correct, the course pre-publish record has been deleted, and the publish status in both the course base table and the course publish table is "published".

Then test with front-end/back-end joint debugging.

4 [Message Module] Message Processing SDK

4.1 Analysis

After the course publish operation executes, the message table must be scanned. The message-related operations are:

1. Insert into the message table.

2. Scan the message table.

3. Update the message table.

4. Delete from the message table.

4.1.1 Why extract the message module?

Because every service uses the same message table design, extracting a message module improves code reuse.

Analysis:

Using a message table in each business to achieve eventual transaction consistency always follows the same pattern:

If every service implemented its own logic for periodically scanning and processing the message table, the code would be largely duplicated, reusability would be poor, and the cost too high.

How can this be solved?

The natural answer is to turn the message-handling logic into a shared code component.

4.1.2 Why is the message module a general code component instead of a general service?

Because it is not a standalone function, and it would need to connect to the databases of multiple services.

Analysis:

A general service completes a common standalone function and provides an independent network interface, such as the project's file system service, which provides distributed file storage.

A code component also completes a common standalone function, but it usually provides an API for external systems to call, such as fastjson or the Apache Commons toolkits.

If message processing were made a general service, that service would need to connect to multiple databases (to scan the message table in each microservice's database) and provide a network interface for communicating with each microservice; developing that just for the current need is too costly.

Making message processing an SDK toolkit, compared with a general service, meets the need for generalized message processing while keeping the cost down.

Therefore, this project determines that the processing related to the message table will be made into an SDK component for use by various microservices, as shown in the following figure:

SDK is short for Software Development Kit.

The following describes the design content of the message SDK:

4.1.3  SDK does not need to provide logic for executing tasks

It does not. The "message type" field distinguishes different tasks, and the business_key fields carry the task parameters.

Take the course publish task as an example: executing it means synchronizing data to Redis, the index library, and so on, while other tasks have different execution logic. So the SDK does not implement task logic itself; it only provides an abstract method, and each concrete task implements its own execution.

4.1.4  How to ensure the idempotence of tasks?

After a task finishes, its message state is set to "completed" and the record is moved from the message table to the message history table. If a message's state is completed, or the record no longer exists in the message table, the task must not be executed again.

Review: in the video processing chapter, the idempotency scheme was optimistic locking; claiming a task means updating its status to "processing".

https://blog.csdn.net/qq_40991313/article/details/129766117

Review: how to guarantee task idempotency

1) Database constraints, e.g. unique index or primary key: the same primary key cannot be inserted twice.

2) Optimistic locking (used here): add a version field to the table and check it on update. A duplicate submission finds the version already changed and does not proceed.

3) Redis unique serial number: the Redis key is the task id, the value a random UUID. Generate a unique serial number before the request and carry it with the request; during execution, record the serial number in Redis to mark that this request has been handled. Seeing the same serial number again means a repeated execution.
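Scheme 2), the one used here, maps to SQL of the shape `UPDATE ... SET state = ?, version = version + 1 WHERE id = ? AND version = ?`. A minimal in-process sketch of the same idea, using `AtomicInteger.compareAndSet` to play the role of the conditional UPDATE:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of optimistic-lock idempotency: an update only succeeds
// if the version the caller read is still the current version.
public class OptimisticLockDemo {
    static AtomicInteger version = new AtomicInteger(0);

    // returns true if this caller "won" and may process the task
    static boolean tryStart(int expectedVersion) {
        return version.compareAndSet(expectedVersion, expectedVersion + 1);
    }

    public static void main(String[] args) {
        int read = version.get();           // both workers read version 0
        System.out.println(tryStart(read)); // true  : first worker wins
        System.out.println(tryStart(read)); // false : duplicate is rejected
    }
}
```

The second attempt with the same stale version fails, just as a second worker's conditional UPDATE would affect zero rows.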

4.1.5  How to ensure that tasks are not repeated?

Besides guaranteeing task idempotency, task scheduling uses shard broadcast, executors fetch tasks by their shard parameters, and the blocking strategy is to discard overlapping triggers.

The scheme is the same as in the earlier video processing chapter:

https://blog.csdn.net/qq_40991313/article/details/129766117

Note: this is a data synchronization task. Even if it runs twice there is no loss, so the task-preemption approach is no longer used to prevent repeated execution.

4.1.6 How to prevent repeated execution of stage small tasks?

Still the optimistic-locking idea: design a status field in the message table for each stage's small task.

There is another problem with guaranteeing idempotency only by whether the message record exists or by its task status: a task may contain several small tasks. For example, the course publish task performs three synchronizations: store the course in Redis, store it in the index library, and store the course page in the file system. If one small task has already completed, it must not run again. How should this be designed?

Treat the small tasks as stages of the big task, and design a stage status for each in the message table.

Each time a stage completes, a completion flag is written to the corresponding stage status field. Even if the big task is re-executed before it fully completes, any stage whose small task is already done will not be repeated.
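The stage-status idea can be sketched as follows. The static `stageState1..3` strings mirror the message table's stage status fields ("0" pending, "1" done); this is an illustration, not the SDK's code.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: each small task records completion in its own stage field,
// so re-running the big task skips stages that are already done.
public class StageTaskSketch {
    static String stageState1 = "0", stageState2 = "0", stageState3 = "0";
    static List<String> executed = new ArrayList<>();

    static void runCoursePublishTask() {
        if ("0".equals(stageState1)) { executed.add("redis"); stageState1 = "1"; }
        if ("0".equals(stageState2)) { executed.add("es");    stageState2 = "1"; }
        if ("0".equals(stageState3)) { executed.add("minio"); stageState3 = "1"; }
    }

    public static void main(String[] args) {
        stageState1 = "1";            // the Redis stage finished in an earlier run
        runCoursePublishTask();
        System.out.println(executed); // [es, minio] - stage 1 was skipped
        runCoursePublishTask();
        System.out.println(executed); // unchanged: all stages already done
    }
}
```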

4.2 Realization of message service

4.2.0 Initialize message sdk module

1. Create a message table and a message history table in the content management database

2. Copy xuecheng-plus-message-sdk in the course materials to the project directory, as shown below:

4.2.1 Message scanning and completion, stage task query and completion

To summarize: besides basic add/delete/update/query operations on the message table, the message SDK provides the following interface:

package com.xuecheng.messagesdk.service;

/**
 * Message service interface
 */
public interface MqMessageService extends IService<MqMessage> {

    /**
     * @description Add a message record
     * @param messageType message type, e.g. course_publish
     * @return MqMessage the inserted record, or null on failure
     */
    public MqMessage addMessage(String messageType, String businessKey1, String businessKey2, String businessKey3);

    /**
     * @description Scan message table records, using the same approach as scanning the video processing table
     * @param shardIndex xxl-job executor shard index
     * @param shardTotal total number of shards
     * @param count      number of records to fetch
     * @return java.util.List message records
     */
    public List<MqMessage> getMessageList(int shardIndex, int shardTotal, String messageType, int count);

    /**
     * @description Complete the task
     * @param id message id
     * @return int 1 on success
     */
    public int completed(long id);

    /**
     * @description Complete a stage task
     * @param id message id
     * @return int 1 on success
     */
    public int completedStageOne(long id);
    public int completedStageTwo(long id);
    public int completedStageThree(long id);
    public int completedStageFour(long id);

    /**
     * @description Query a stage's status
     * @param id message id
     * @return int stage status
     */
    public int getStageOne(long id);
    public int getStageTwo(long id);
    public int getStageThree(long id);
    public int getStageFour(long id);

}

Implementation class:

@Slf4j
@Service
public class MqMessageServiceImpl extends ServiceImpl<MqMessageMapper, MqMessage> implements MqMessageService {

    @Autowired
    MqMessageMapper mqMessageMapper;

    @Autowired
    MqMessageHistoryMapper mqMessageHistoryMapper;


    @Override
    public List<MqMessage> getMessageList(int shardIndex, int shardTotal, String messageType,int count) {
        return mqMessageMapper.selectListByShardIndex(shardTotal,shardIndex,messageType,count);
    }

    @Override
    public MqMessage addMessage(String messageType, String businessKey1, String businessKey2, String businessKey3) {
        MqMessage mqMessage = new MqMessage();
        mqMessage.setMessageType(messageType);
        mqMessage.setBusinessKey1(businessKey1);
        mqMessage.setBusinessKey2(businessKey2);
        mqMessage.setBusinessKey3(businessKey3);
        int insert = mqMessageMapper.insert(mqMessage);
        if(insert>0){
            return mqMessage;
        }else{
            return null;
        }

    }

    @Transactional
    @Override
    public int completed(long id) {
        MqMessage mqMessage = new MqMessage();
        //完成任务
        mqMessage.setState("1");
        int update = mqMessageMapper.update(mqMessage, new LambdaQueryWrapper<MqMessage>().eq(MqMessage::getId, id));
        if(update>0){

            mqMessage = mqMessageMapper.selectById(id);
            //添加到历史表
            MqMessageHistory mqMessageHistory = new MqMessageHistory();
            BeanUtils.copyProperties(mqMessage,mqMessageHistory);
            mqMessageHistoryMapper.insert(mqMessageHistory);
            // delete from the message table
            mqMessageMapper.deleteById(id);
            return 1;
        }
        return 0;

    }

    @Override
    public int completedStageOne(long id) {
        MqMessage mqMessage = new MqMessage();
        // mark stage-1 task completed
        mqMessage.setStageState1("1");
        return mqMessageMapper.update(mqMessage,new LambdaQueryWrapper<MqMessage>().eq(MqMessage::getId,id));
    }

    @Override
    public int completedStageTwo(long id) {
        MqMessage mqMessage = new MqMessage();
        // mark stage-2 task completed
        mqMessage.setStageState2("1");
        return mqMessageMapper.update(mqMessage,new LambdaQueryWrapper<MqMessage>().eq(MqMessage::getId,id));
    }

    @Override
    public int completedStageThree(long id) {
        MqMessage mqMessage = new MqMessage();
        // mark stage-3 task completed
        mqMessage.setStageState3("1");
        return mqMessageMapper.update(mqMessage,new LambdaQueryWrapper<MqMessage>().eq(MqMessage::getId,id));
    }

    @Override
    public int completedStageFour(long id) {
        MqMessage mqMessage = new MqMessage();
        // mark stage-4 task completed
        mqMessage.setStageState4("1");
        return mqMessageMapper.update(mqMessage,new LambdaQueryWrapper<MqMessage>().eq(MqMessage::getId,id));
    }

    @Override
    public int getStageOne(long id) {
        return Integer.parseInt(mqMessageMapper.selectById(id).getStageState1());
    }

    @Override
    public int getStageTwo(long id) {
        return Integer.parseInt(mqMessageMapper.selectById(id).getStageState2());
    }

    @Override
    public int getStageThree(long id) {
        return Integer.parseInt(mqMessageMapper.selectById(id).getStageState3());
    }

    @Override
    public int getStageFour(long id) {
        return Integer.parseInt(mqMessageMapper.selectById(id).getStageState4());
    }


}

 dao:

public interface MqMessageMapper extends BaseMapper<MqMessage> {

    @Select("SELECT t.* FROM mq_message t WHERE t.id % #{shardTotal} = #{shardindex} and t.state='0' and t.message_type=#{messageType} limit #{count}")
    List<MqMessage> selectListByShardIndex(@Param("shardTotal") int shardTotal, @Param("shardindex") int shardindex, @Param("messageType") String messageType,@Param("count") int count);

}
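As a quick illustration of the sharded query above: each executor only picks up the messages whose id modulo the shard total equals its own shard index, so no message is fetched by two executors. A minimal sketch (plain Java, class and method names are made up for illustration):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class ShardDemo {
    // mirrors the WHERE clause: id % shardTotal = shardIndex
    static List<Long> idsForShard(List<Long> ids, int shardIndex, int shardTotal) {
        return ids.stream()
                .filter(id -> id % shardTotal == shardIndex)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Long> ids = LongStream.rangeClosed(1, 6).boxed().collect(Collectors.toList());
        // with 2 executors, executor 0 gets the even ids and executor 1 the odd ids,
        // so the same message is never processed by two executors at once
        System.out.println(idsForShard(ids, 0, 2)); // [2, 4, 6]
        System.out.println(idsForShard(ids, 1, 2)); // [1, 3, 5]
    }
}
```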

4.2.2  Task processing abstract class to facilitate task class inheritance

The message SDK provides an abstract class for message processing . This abstract class is for the user to inherit and use, as follows:

package com.xuecheng.messagesdk.service;
/**
 * @description message processing abstract class
 */
@Slf4j
@Data
public abstract class MessageProcessAbstract {

    @Autowired
    MqMessageService mqMessageService;


    /**
     * @param mqMessage the message/task to process
     * @return boolean true: processed successfully, false: failed
     * @description process one task; implemented by the subclass
     * @author Mr.M
     * @date 2022/9/21 19:47
     */
    public abstract boolean execute(MqMessage mqMessage);


    /**
     * @description scan the message table and execute tasks on multiple threads
     * @param shardIndex shard index
     * @param shardTotal total number of shards
     * @param messageType  message type
     * @param count  max number of tasks to fetch at a time
     * @param timeout estimated task execution time in seconds; if tasks have not finished by then, waiting is cut short
     * @return void
     * @author Mr.M
     * @date 2022/9/21 20:35
    */
    public void process(int shardIndex, int shardTotal,  String messageType,int count,long timeout) {

        try {
            // scan the message table to get the task list
            List<MqMessage> messageList = mqMessageService.getMessageList(shardIndex, shardTotal,messageType, count);
            // number of tasks
            int size = messageList.size();
            log.debug("取出待处理消息"+size+"条");
            if(size<=0){
                return ;
            }

            // thread pool sized to this batch of tasks
            ExecutorService threadPool = Executors.newFixedThreadPool(size);
            // latch used to wait for all tasks to finish
            CountDownLatch countDownLatch = new CountDownLatch(size);
            messageList.forEach(message -> {
                threadPool.execute(() -> {
                    log.debug("开始任务:{}",message);
                    try {
                        // process the task
                        boolean result = execute(message);
                        if(result){
                            // task done: remove it from the message table and add it to the history table
                            int completed = mqMessageService.completed(message.getId());
                            if (completed>0){
                                log.debug("任务执行成功:{}",message);
                            }else{
                                log.debug("任务执行失败:{}",message);
                            }
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                        log.debug("任务出现异常:{},任务:{}",e.getMessage(),message);
                    } finally {
                        // always count down, even on failure, so await() is not blocked
                        countDownLatch.countDown();
                    }
                    log.debug("结束任务:{}",message);

                });
            });

            // wait with a generous timeout to avoid waiting forever; when the timeout is reached the batch is abandoned
            countDownLatch.await(timeout,TimeUnit.SECONDS);
            threadPool.shutdown();
            log.debug("结束....");
        } catch (InterruptedException e) {
           e.printStackTrace();

        }

    }

}

4.3 xxl-job integrates the message SDK to schedule the task of publishing courses

4.3.1 Environment preparation, importing dependencies + adding a message table of type "publish course"

1. Create a message table and a message history table in the content management database (completed)

2. Copy xuecheng-plus-message-sdk in the course materials to the project directory, as shown below:

3. Add sdk dependencies to the content management service project

<dependency>
    <groupId>com.xuecheng</groupId>
    <artifactId>xuecheng-plus-message-sdk</artifactId>
    <version>0.0.1-SNAPSHOT</version>
</dependency>

4. Go back to the course publishing method and add a message table:

@Transactional
@Override
public void publish(Long companyId, Long courseId) {

。。。
 //save a record to the message table (same local transaction as the business data)
 saveCoursePublishMessage(courseId);

。。。

}
 /**
  * @description save a record to the message table
  * @param courseId  course id
  */
private void saveCoursePublishMessage(Long courseId){
 //message type course_publish means this message drives the course publish task; businessKey1 is the course id
 MqMessage mqMessage = mqMessageService.addMessage("course_publish", String.valueOf(courseId), null, null);
 if(mqMessage==null){
  XueChengPlusException.cast(CommonError.UNKOWN_ERROR);
 }
}

test:

Publish a course and observe whether the message table is adding messages normally.

You need to manually set the course audit status to Approved before executing the publish operation. After publishing, you can set the publish status back to off-the-shelf and publish again to repeat the test.

4.3.2 [Content Module] Course Publishing Task Class

Task flow: 

  1. The task class implements the MessageProcessAbstract class in the sdk
  2. Set the task scheduling entry, and call the method of the abstract class to execute the task. Query the task list, set the counter to prevent infinite waiting, and traverse the open thread to execute the execute method rewritten below.
  3. Override the execute method:
    1. Get message-related business information
    2. Store course cache information to Redis
    3. Store course index information to Elasticsearch
    4. Request distribution file system minIO to store course static pages (ie html pages)  
package com.xuecheng.content.service.jobhandler;

@Slf4j
@Component
public class CoursePublishTask extends MessageProcessAbstract {
    // task scheduling entry
    @XxlJob("CoursePublishJobHandler")
    public void coursePublishJobHandler() throws Exception {

        // shard parameters
        int shardIndex = XxlJobHelper.getShardIndex();// index of this executor, starting from 0
        int shardTotal = XxlJobHelper.getShardTotal();// total number of executors
        // call the abstract class to run tasks: query the task list, use a latch to avoid waiting forever, and run the overridden execute method on worker threads
        process(shardIndex,shardTotal, "course_publish",30,60);


    }

    // course publish task handler
    @Override
    public boolean execute(MqMessage mqMessage) {
        // business info carried by the message: businessKey1 is the course id
        String businessKey1 = mqMessage.getBusinessKey1();
        long courseId = Long.parseLong(businessKey1);
        // course staticization
        generateCourseHtml(mqMessage,courseId);
        // course index
        saveCourseIndex(mqMessage,courseId);
        // course cache
        saveCourseCache(mqMessage,courseId);
        return true;
    }


    // generate the static course page and upload it to the file system
    public void generateCourseHtml(MqMessage mqMessage,long courseId){

        log.debug("开始进行课程静态化,课程id:{}",courseId);
        // message id
        Long id = mqMessage.getId();
        // message-processing service
        MqMessageService mqMessageService = this.getMqMessageService();
        // idempotence: skip if stage one has already been completed
        int stageOne = mqMessageService.getStageOne(id);
        if(stageOne >0){
            log.debug("课程静态化已处理直接返回,课程id:{}",courseId);
            return ;
        }
        try {
            TimeUnit.SECONDS.sleep(10);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        // record that stage one is completed
        mqMessageService.completedStageOne(id);

    }

    //TODO: cache course information in redis
    public void saveCourseCache(MqMessage mqMessage,long courseId){
        log.debug("将课程信息缓存至redis,课程id:{}",courseId);
        try {
            TimeUnit.SECONDS.sleep(2);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }


    }
    //TODO: save course index information
    public void saveCourseIndex(MqMessage mqMessage,long courseId){
        log.debug("保存课程索引信息,课程id:{}",courseId);
        try {
            TimeUnit.SECONDS.sleep(2);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }

    }

}

4.3.3 The xxl-job scheduling center starts the executor and tasks, and executes them every 10s

1. First add xxl-job dependency in the content management service project

<dependency>
    <groupId>com.xuxueli</groupId>
    <artifactId>xxl-job-core</artifactId>
</dependency>

2. Configure the executor

Configure in content-service-dev.yaml in nacos

xxl:
  job:
    admin:
      addresses: http://192.168.101.65:8088/xxl-job-admin
    executor:
      appname: coursepublish-job
      address:
      ip:
      port: 8999
      logpath: /data/applogs/xxl-job/jobhandler
      logretentiondays: 30
    accessToken: default_token

3. Copy a XxlJobConfig configuration class from the media asset management service layer project to the content management service project.

Add executor in xxl-job-admin console

4. Write the task scheduling entry

@Slf4j
@Component
public class CoursePublishTask extends MessageProcessAbstract {

    //任务调度入口
    @XxlJob("CoursePublishJobHandler")
    public void coursePublishJobHandler() throws Exception {
        // 分片参数
        int shardIndex = XxlJobHelper.getShardIndex();
        int shardTotal = XxlJobHelper.getShardTotal();
        log.debug("shardIndex="+shardIndex+",shardTotal="+shardTotal);
        //参数:分片序号、分片总数、消息类型、一次最多取到的任务数量、一次任务调度执行的超时时间
        process(shardIndex,shardTotal,"course_publish",30,60);
    }
    ....

5. Add the task in xxl-job

The task configuration is as follows:

At this point, the SDK development and integration are complete. The next step is to implement the tasks performed after course publishing: page staticization, course caching, and course indexing.

4.3.4 Testing

Add a course publishing message in the message table, the message type is course_publish, and business_key1 is the ID of the published course

1. Test whether it can be scheduled and executed normally.

2. Test task idempotency

Set a breakpoint at saveCourseCache(mqMessage,courseId);, run until it is hit, and check the database: the stage-one state field of the message should now be marked 1.

Kill the process, restart it, and observe that the stage-one task is skipped instead of being executed again.

3. After the task completes, verify that the record has been deleted from the message table and inserted into the history table with the state field set to 1.

5 The template engine implements static pages (the static preview page of the course is uploaded to MinIO)

5.1 What is page static

In short: previously the page was rendered into html at request time; after staticization, the page is rendered and saved to the file system whenever the content is created or modified.

According to the course publishing workflow, after a course is published its detail page should be staticized: an html page is generated and uploaded to the file system.

What is page static?

The course preview function uses template engine technology to fill data into a page template and produce an html page. In that flow, the server renders the html page at the moment the client sends the request, then responds to the browser. The concurrency that server-side rendering can sustain is limited.

Page staticization moves html generation forward: the template engine produces the html page in advance, and the client request fetches the finished html directly. Because static pages can be served by high-performance web servers such as nginx and apache, concurrency is much higher.

When can I use page static technology?

When the data changes infrequently, that is, once a static page is generated it rarely changes for a long time, page staticization is a good fit. If the data changes frequently, the static page must be regenerated on every change, which makes maintaining static pages costly.

According to the business requirements of course publishing, course information can still be modified after release, but each change must pass review, so the modification frequency is low and the page is well suited to staticization.
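The timing difference can be reduced to a few lines (renderPage below is a hypothetical stand-in for the template engine, not project code):

```java
import java.util.HashMap;
import java.util.Map;

public class StaticizeSketch {
    // stand-in for the template engine: fill data into a template
    static String renderPage(Map<String, Object> model) {
        return "<h1>" + model.get("courseName") + "</h1>";
    }

    public static void main(String[] args) {
        Map<String, Object> model = new HashMap<>();
        model.put("courseName", "Java");
        // server-side rendering would call renderPage on every request;
        // staticization calls it once, at publish/modify time...
        String html = renderPage(model);
        // ...and every later request simply serves the saved html as a plain file
        System.out.println(html); // <h1>Java</h1>
    }
}
```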

5.2 Freemarker course preview page static test

The following uses freemarker technology to statically generate html pages for the pages.

1. Add freemarker dependency in the content management service project

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-freemarker</artifactId>
</dependency>

2. Write the test method

Business Process:

  1. The Configuration object specifies the template directory and encoding
  2. Prepare course preview page data
  3. The template file is statically converted into an html string, freemarker template tool class
  4. Convert html characters to input stream, then convert output stream to write to file

The freemarker template tool class converts templates into html strings:

String FreeMarkerTemplateUtils.processTemplateIntoString(Template template, Object model);

package com.xuecheng.content;

/**
 * @description freemarker测试
 */
@SpringBootTest
public class FreemarkerTest {

    @Autowired
    CoursePublishService coursePublishService;


    // test page staticization
    @Test
    public void testGenerateHtmlByTemplate() throws IOException, TemplateException {
        // the Configuration object specifies the template directory and encoding
        Configuration configuration = new Configuration(Configuration.getVersion());

        // load the template from templates/ under the classpath
        String classpath = this.getClass().getResource("/").getPath();
        configuration.setDirectoryForTemplateLoading(new File(classpath + "/templates/"));
        // character encoding
        configuration.setDefaultEncoding("utf-8");

        // template file name
        Template template = configuration.getTemplate("course_template.ftl");

        // prepare the course preview page data
        CoursePreviewDto coursePreviewInfo = coursePublishService.getCoursePreviewInfo(2L);

        Map<String, Object> map = new HashMap<>();
        map.put("model", coursePreviewInfo);

        // staticize: render the template into an html string
        // arg 1: template, arg 2: data model
        String content = FreeMarkerTemplateUtils.processTemplateIntoString(template, map);
        System.out.println(content);
        // write the staticized content to a file
        InputStream inputStream = IOUtils.toInputStream(content, "utf-8");
        // output stream
        FileOutputStream outputStream = new FileOutputStream("D:\\develop\\test.html");
        IOUtils.copy(inputStream, outputStream);

    }

}

test:

Execute the test method and observe whether D:\\develop\\test.html is successfully generated.

5.3 Upload file test

5.3.1 Environment preparation, multi-file upload dependency, configuration class

The statically generated files need to be uploaded to the distributed file system. According to the division of responsibilities among microservices, the media asset service maintains the files in the file system, so the content management service must call the media asset service's upload-file interface to upload the statically generated html files. As shown below:

There will inevitably be remote calls between microservices. In Spring Cloud, Feign can be used for remote calls.

Feign is a declarative http client, official address: GitHub - OpenFeign/feign: Feign makes writing java http clients easier

Its role is to help us elegantly implement the sending of http requests and solve the problems mentioned above.

Next, prepare the development environment of Feign first:

1. Add dependencies to the content-service project of content management:

<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
<!-- Spring Cloud 微服务远程调用 -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
<dependency>
    <groupId>io.github.openfeign</groupId>
    <artifactId>feign-httpclient</artifactId>
</dependency>
<!--feign支持Multipart格式传参-->
<dependency>
    <groupId>io.github.openfeign.form</groupId>
    <artifactId>feign-form</artifactId>
    <version>3.8.0</version>
</dependency>
<dependency>
    <groupId>io.github.openfeign.form</groupId>
    <artifactId>feign-form-spring</artifactId>
    <version>3.8.0</version>
</dependency>

2. Configure fuse in feign-dev.yaml of nacos

feign:
  #Hystrix is similar to Sentinel: it protects microservices through isolation, circuit breaking, downgrading, etc.
  hystrix:
    enabled: true
  #circuit breaker
  circuitbreaker:
    enabled: true
hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 30000  #fuse timeout
ribbon:
  ConnectTimeout: 60000 #connection timeout
  ReadTimeout: 60000 #read timeout
  MaxAutoRetries: 0 #retry count
  MaxAutoRetriesNextServer: 1 #retry count when switching to another instance

3. Introduce this configuration file in both the content management service project and the content management api project

shared-configs:
  - data-id: feign-${spring.profiles.active}.yaml
    group: xuecheng-plus-common
    refresh: true

4. Multi-file upload configuration class

Configure feign to support Multipart in the content management service project, and create MultipartSupportConfig under the config package under the content-service project.

@Configuration
public class MultipartSupportConfig {

    @Autowired
    private ObjectFactory<HttpMessageConverters> messageConverters;

    @Bean
    @Primary//prefer this bean when multiple beans of the same type are injected
    @Scope("prototype")
    public Encoder feignEncoder() {
        return new SpringFormEncoder(new SpringEncoder(messageConverters));
    }

    // convert a File to a MultipartFile
    public static MultipartFile getMultipartFile(File file) {
        FileItem item = new DiskFileItemFactory().createItem("file", MediaType.MULTIPART_FORM_DATA_VALUE, true, file.getName());
        try (FileInputStream inputStream = new FileInputStream(file);
             OutputStream outputStream = item.getOutputStream();) {
            IOUtils.copy(inputStream, outputStream);

        } catch (Exception e) {
            e.printStackTrace();
        }
        return new CommonsMultipartFile(item);
    }
}

5.3.2 [Media Assets Service] Add parameter "object name" to multi-file upload interface

Now the course static file needs to be uploaded to minio and stored under a dedicated course directory. The object name of the file is "course id.html", so the original file upload interface needs an additional objectName parameter.

The upload interface of the media asset service is extended below.

 The upload method of MediaFilesController adds the objectName parameter:

 @ApiOperation("上传图片")
 @RequestMapping(value = "/upload/coursefile",consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
public UploadFileResultDto upload(@RequestPart("filedata")MultipartFile filedata,
                                  @RequestParam(value= "objectName",required=false) String objectName) throws IOException {

    //prepare the upload file info
     UploadFileParamsDto uploadFileParamsDto = new UploadFileParamsDto();
     //original file name
     uploadFileParamsDto.setFilename(filedata.getOriginalFilename());
     //file size
     uploadFileParamsDto.setFileSize(filedata.getSize());
     //file type
     uploadFileParamsDto.setFileType("001001");
     //create a temporary file
     File tempFile = File.createTempFile("minio", ".temp");
     filedata.transferTo(tempFile);
     Long companyId = 1232141425L;
     //file path on disk
     String localFilePath = tempFile.getAbsolutePath();

     //call the service to upload the file
     UploadFileResultDto uploadFileResultDto = mediaFileService.uploadFile(companyId
, uploadFileParamsDto, localFilePath,objectName);

     return uploadFileResultDto;
 }


The service interface also adds the parameter "object name":

/**
 * upload a file
 * @param companyId institution id
 * @param uploadFileParamsDto upload file info
 * @param localFilePath file path on disk
 * @param objectName object name
 * @return file info
 */
public UploadFileResultDto uploadFile(Long companyId, UploadFileParamsDto uploadFileParamsDto, String localFilePath,String objectName);

Modify the original uploadFile method: if objectName is empty, fall back to the year-month-day style path.

//object name (with directory) stored in minio
if(StringUtils.isEmpty(objectName)){
    objectName =  defaultFolderPath + fileMd5 + extension;
}
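The defaulting rule can be seen in isolation below. The folder path, md5 and extension are made-up sample values, and resolveObjectName is a hypothetical helper written for illustration only, not project code:

```java
public class ObjectNameDemo {
    // if the caller passes no object name, fall back to year/month/day folder + md5 + extension
    static String resolveObjectName(String objectName, String defaultFolderPath,
                                    String fileMd5, String extension) {
        if (objectName == null || objectName.isEmpty()) {
            return defaultFolderPath + fileMd5 + extension;
        }
        return objectName;
    }

    public static void main(String[] args) {
        // course static page: an explicit object name is passed
        System.out.println(resolveObjectName("course/2.html", "2022/09/23/", "abc123", ".html")); // course/2.html
        // ordinary media file: object name derived from the md5
        System.out.println(resolveObjectName(null, "2022/09/23/", "abc123", ".html")); // 2022/09/23/abc123.html
    }
}
```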

5.3.3 Remote call test

Write feign interface under content-service

package com.xuecheng.content.feignclient;

/**
 * @description media asset management service remote interface
 */
 @FeignClient(value = "media-api",configuration = MultipartSupportConfig.class)
public interface MediaServiceClient {

 @RequestMapping(value = "/media/upload/coursefile",consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
 String uploadFile(@RequestPart("filedata") MultipartFile upload,@RequestParam(value = "objectName",required=false) String objectName);
}

Add the @EnableFeignClients annotation to the startup class

@EnableFeignClients(basePackages={"com.xuecheng.content.feignclient"})

write test method

package com.xuecheng.content;

/**
 * @description test uploading a file remotely via feign
 */
@SpringBootTest
public class FeignUploadTest {

    @Autowired
    MediaServiceClient mediaServiceClient;

    // remote call: upload a file
    @Test
    public void test() {

        MultipartFile multipartFile = MultipartSupportConfig.getMultipartFile(new File("D:\\develop\\test.html"));
        mediaServiceClient.uploadFile(multipartFile,"course/test.html");
    }

}

Next, test it: start the media asset service, execute the test method, and after the file uploads successfully, open minIO to view the file.

Visit: http://192.168.101.65:9000/mediafiles/course/74b386417bb9f3764009dc94068a5e44.html

Check whether it can be accessed normally.

5.4 Hystrix fuse downgrade processing

5.4.1 Avalanche, fuse, downgrade

Remote calls between services are unavoidable in microservices. For example, the content management service remotely calls the upload-file interface of the media asset service. When a microservice is not running normally it cannot be called successfully, and if such failures are not handled they may trigger an avalanche effect.

The avalanche effect shows up in inter-service calls: when one service cannot provide service, the services that depend on it may also go down. For example, service B calls service A; an exception in A makes B respond slowly, requests pile up, and eventually B and the services that depend on it (such as C) also become unavailable. A chain of failures across multiple services caused by a single service is the avalanche effect of microservices, as shown in the following figure:

How to solve the avalanche effect caused by microservice exceptions?

It can be solved by fusing and downgrading .

Fusing and downgrading both aim to prevent the microservice system from collapsing, but they are two different techniques, and the two are related.

Circuit breaking (fusing): an abnormal downstream service trips the circuit breaker.

The upstream calls the downstream; when the downstream service misbehaves, the circuit opens and interaction with it is cut off, like a fuse blowing. The downstream anomaly trips the fuse, which protects the upstream service from being dragged down.

Downgrade:

When the downstream service exception triggers the fuse , the upstream service no longer calls the abnormal microservice but executes the downgrade processing logic . This downgrade processing logic can be a separate local method.

Both protect the system: fusing cuts off an abnormal downstream service, and downgrading is how the upstream service responds once the fuse has tripped.

5.4.2 Fuse downgrade processing

The project uses the Hystrix framework to implement fuse and downgrade processing, which is configured in feign-dev.yaml.

1. Turn on the Feign fuse protection

feign:
  hystrix:
    enabled: true
  circuitbreaker:
    enabled: true

2. Set the fuse timeout. To prevent a long-running request from tripping the fuse, also set the connection and read timeouts, as follows:

hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 30000  #fuse timeout
ribbon:
  ConnectTimeout: 60000 #connection timeout
  ReadTimeout: 60000 #read timeout
  MaxAutoRetries: 0 #retry count
  MaxAutoRetriesNextServer: 1 #retry count when switching to another instance

3. Define the downgrade logic

Two methods:

1) Method 1: fallback

@FeignClient(value = "media-api",configuration = MultipartSupportConfig.class
,fallback = MediaServiceClientFallback.class)
@RequestMapping("/media")
public interface MediaServiceClient{
...

Define a fallback class MediaServiceClientFallback, which implements the MediaServiceClient interface.

With the first method, the fallback cannot access the exception that triggered the fuse; the second method solves this by defining a MediaServiceClientFallbackFactory.

2) Method 2: fallbackFactory (recommended; the fallback can access the fuse exception)

The second method specifies fallbackFactory in FeignClient, as follows:

@FeignClient(value = "media-api",configuration = MultipartSupportConfig.class
,fallbackFactory = MediaServiceClientFallbackFactory.class)

Define MediaServiceClientFallbackFactory as follows:

@Slf4j
@Component
public class MediaServiceClientFallbackFactory implements FallbackFactory<MediaServiceClient> {
    @Override
    public MediaServiceClient create(Throwable throwable) {
        return new MediaServiceClient(){
            @Override
            public String uploadFile(MultipartFile upload, String objectName) {
                //downgrade (fallback) logic
                log.debug("调用媒资管理服务上传文件时发生熔断,异常信息:{}",throwable.toString(),throwable);
                return null;
            }
        };
    }
}

Downgrade processing logic:

The fallback returns null; when the upstream caller gets null from the interface, it knows the downgrade logic was executed.
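The idea can be reduced to a small self-contained sketch. All names here are hypothetical; it only mimics how a FallbackFactory captures the cause and how the caller interprets a null return:

```java
public class FallbackDemo {
    interface UploadClient { String uploadFile(String objectName); }

    // mimics Feign's FallbackFactory: build a degraded client that knows the cause
    static UploadClient fallbackFor(Throwable cause) {
        return objectName -> {
            System.out.println("upload degraded, cause: " + cause.getMessage());
            return null; // null signals "downgrade logic ran" to the caller
        };
    }

    public static void main(String[] args) {
        UploadClient client = fallbackFor(new RuntimeException("connect timeout"));
        String result = client.uploadFile("course/2.html");
        if (result == null) {
            // the caller treats null as failure and can retry on the next schedule
            System.out.println("upload failed, will retry later");
        }
    }
}
```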

test:

Stop the media asset management service, or artificially create an exception, and observe whether the downgrade logic is executed.

5.5 Course staticization development

The course page staticization and the remote upload of static pages have both passed testing. The next step is to develop the course staticization function and schedule it through the message processing SDK.

5.5.1 Staticization implementation

Course static includes two parts: generating course static pages and uploading static pages to the file system.

Write these two parts in the service published by the course, and finally schedule and execute them through messages.

1. Interface definition

/**
 * @description staticize the course page
 * @param courseId  course id
 * @return File the static html file
 * @author Mr.M
 * @date 2022/9/23 16:59
*/
public File generateCourseHtml(Long courseId);
/**
 * @description upload the static course page
 * @param file  the static html file
 * @return void
 * @author Mr.M
 * @date 2022/9/23 16:59
*/
public void  uploadCourseHtml(Long courseId,File file);

2. Interface implementation

Copy the previously written static test code and upload static file test code to use

@Override
    public File generateCourseHtml(Long courseId) {

        // the static html file
        File htmlFile  = null;

        try {
            // configure freemarker
            Configuration configuration = new Configuration(Configuration.getVersion());

            // load the template from templates/ under the classpath
            String classpath = this.getClass().getResource("/").getPath();
            configuration.setDirectoryForTemplateLoading(new File(classpath + "/templates/"));
            // character encoding
            configuration.setDefaultEncoding("utf-8");

            // template file name
            Template template = configuration.getTemplate("course_template.ftl");

            // prepare the data model
            CoursePreviewDto coursePreviewInfo = this.getCoursePreviewInfo(courseId);

            Map<String, Object> map = new HashMap<>();
            map.put("model", coursePreviewInfo);

            // staticize: arg 1 is the template, arg 2 the data model
            String content = FreeMarkerTemplateUtils.processTemplateIntoString(template, map);
//            System.out.println(content);
            // write the staticized content to a file
            InputStream inputStream = IOUtils.toInputStream(content, "utf-8");
            // create the static file
            htmlFile = File.createTempFile("course",".html");
            log.debug("课程静态化,生成静态文件:{}",htmlFile.getAbsolutePath());
            // output stream
            FileOutputStream outputStream = new FileOutputStream(htmlFile);
            IOUtils.copy(inputStream, outputStream);
        } catch (Exception e) {
            log.error("课程静态化异常:{}",e.toString());
            XueChengPlusException.cast("课程静态化异常");
        }

        return htmlFile;
    }

    @Override
    public void uploadCourseHtml(Long courseId, File file) {
        MultipartFile multipartFile = MultipartSupportConfig.getMultipartFile(file);
        String course = mediaServiceClient.uploadFile(multipartFile, "course/"+courseId+".html");
        if(course==null){
            XueChengPlusException.cast("上传静态文件异常");
        }
    }

Improve the code of the course publishing task CoursePublishTask class:

// generate the static course page and upload it to the file system
public void generateCourseHtml(MqMessage mqMessage,long courseId){
    log.debug("开始进行课程静态化,课程id:{}",courseId);
    // message id
    Long id = mqMessage.getId();
    // message-processing service
    MqMessageService mqMessageService = this.getMqMessageService();
    // idempotence: skip if stage one has already been completed
    int stageOne = mqMessageService.getStageOne(id);
    if(stageOne == 1){
        log.debug("课程静态化已处理直接返回,课程id:{}",courseId);
        return ;
    }

    // generate the static page
    File file = coursePublishService.generateCourseHtml(courseId);
    // upload the static page
    if(file!=null){
        coursePublishService.uploadCourseHtml(courseId,file);
    }
    // record that stage one is completed
    mqMessageService.completedStageOne(id);

}

5.5.2 Testing

1. Start the gateway and media assets management service project.

2. Configure FeignClient on the startup class of the content management api project

@EnableFeignClients(basePackages={"com.xuecheng.content.feignclient"})

Reference feign-dev.yaml in bootstrap.yml

- data-id: feign-${spring.profiles.active}.yaml
  group: xuecheng-plus-common
  refresh: true  #profiles defaults to dev

Start the content management interface project.

Put a breakpoint in the execute method of the CoursePublishTask class.

3. Publish a course so that an unprocessed message record is saved in the message table.

4. Start the xxl-job scheduling center, start the course publishing task, and wait for the scheduled scheduling.

5. Observe the task scheduling log to see if the task can be processed normally.

6. Enter the file system after the processing is completed, and check whether there is an html file named after the course id in the mediafiles bucket

If it does not exist, it means that there is a problem with the static course, and then carefully check the execution log to troubleshoot the problem.

If it exists, it means that the course is static and uploaded to minio successfully.

5.5.3 Browse the detail page

After the course is successfully staticized, you can use a browser to access the html file to see if it can be browsed normally. The figure below shows that it can be browsed normally.

The page has no styling yet; you need to configure a virtual directory in nginx, under the www.51xuecheng.cn server block:

location /course/ { 
        proxy_pass http://fileserver/mediafiles/course/;
}

Reload the nginx configuration file.

Visit: http://www.51xuecheng.cn/course/2.html

2.html is an html file named after the course id.

Origin blog.csdn.net/qq_40991313/article/details/130063816