A detailed explanation of the back-end implementation of a seckill (flash sale) system

Seckill back-end implementation

This article explains how to solve the following problems in a real-world seckill project:

1) Implement asynchronous ordering for the seckill, and learn how to ensure that neither producer nor consumer messages are lost

2) Prevent malicious order brushing

3) Prevent the same user from repeatedly seckilling the same product

4) Hide the order interface

5) Rate-limit the order interface

1 Seckill asynchronous ordering

When a user places an order, the login information must be authenticated based on the JWT token to determine who the current order belongs to.

For the special business scenario of a seckill, relying only on techniques such as object caching or static pages is not enough to relieve the server pressure: the pressure on the database is still very large, so orders need to be placed asynchronously. Asynchronous ordering is the best solution here, but it brings some additional complexity.

1.1 Seckill service: order implementation
1) Put the TokenDecode utility class into the seckill service configuration and declare it as a bean

@SpringBootApplication
public class SeckillApplication {

	public static void main(String[] args) {
		SpringApplication.run(SeckillApplication.class, args);
	}

	@Bean
	public TokenDecode tokenDecode() {
		return new TokenDecode();
	}
}
2) Update the seckill service startup class and add the Redis configuration
/**
 * Configure the serialization of the RedisTemplate
 * @param redisConnectionFactory
 * @return
 */
@Bean
public RedisTemplate<Object, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
	// 1. Create the RedisTemplate instance
	RedisTemplate<Object, Object> template = new RedisTemplate<>();
	// 2. Attach the redisConnectionFactory
	template.setConnectionFactory(redisConnectionFactory);
	// 3. Create the serializer
	GenericToStringSerializer genericToStringSerializer = new GenericToStringSerializer(Object.class);
	// 4. Set the conversion format for values and keys
	template.setValueSerializer(genericToStringSerializer);
	template.setKeySerializer(new StringRedisSerializer());
	template.afterPropertiesSet();
	return template;
}
3) Create a new order controller and declare the ordering method
@RestController
@CrossOrigin
@RequestMapping("/seckillorder")
public class SecKillOrderController {

	@Autowired
	private TokenDecode tokenDecode;

	@Autowired
	private SecKillOrderService secKillOrderService;

	/**
	 * Seckill ordering
	 * @param time current time slot
	 * @param id   seckill goods id
	 * @return
	 */
	@RequestMapping("/add")
	public Result add(String time, Long id) {
		// Get the currently logged-in user
		String username = tokenDecode.getUserInfo().get("username");
		boolean result = secKillOrderService.add(id, time, username);
		if (result) {
			return new Result(true, StatusCode.OK, "Order placed successfully");
		} else {
			return new Result(false, StatusCode.ERROR, "Failed to place the order");
		}
	}
}
4) Create the service interface
public interface SecKillOrderService {

	/**
	 * Seckill ordering
	 * @param id       goods id
	 * @param time     time slot
	 * @param username logged-in user name
	 * @return
	 */
	boolean add(Long id, String time, String username);
}
5) Update the seckill goods preloading

When the seckill goods are preloaded, the stock of each product is also loaded into the cache in advance. Subsequent stock deduction first deducts the stock in the cache and then deducts the MySQL data asynchronously.

The pre-deduction of stock is implemented with Redis atomic operations.

for (SeckillGoods seckillGoods : seckillGoodsList) {
	redisTemplate.boundHashOps(SECKILL_GOODS_KEY + redisExtName).put(seckillGoods.getId(), seckillGoods);
	// Preload the stock information
	redisTemplate.opsForValue().set(SECKILL_GOODS_STOCK_COUNT_KEY + seckillGoods.getId(), seckillGoods.getStockCount());
}
6) Implementation of the seckill order business layer

Business logic:

Get the seckill goods data and stock data; if there is no stock, return immediately. Execute the Redis stock pre-deduction and obtain the remaining stock value. If the remaining stock value is <= 0, delete the corresponding goods and stock information in Redis. The stock change is synchronized to MySQL asynchronously via MQ (eventual consistency).

Note: the stock value is taken from Redis and converted from a String.

@Service
public class SecKillOrderServiceImpl implements SecKillOrderService {

	@Autowired
	private RedisTemplate redisTemplate;

	@Autowired
	private IdWorker idWorker;

	@Autowired
	private CustomMessageSender customMessageSender;

	/**
	 * Seckill ordering
	 * @param id       goods id
	 * @param time     time slot
	 * @param username logged-in user name
	 * @return
	 */
	@Override
	public boolean add(Long id, String time, String username) {
		// Get the goods data and the cached stock
		SeckillGoods goods = (SeckillGoods) redisTemplate.boundHashOps("SeckillGoods_" + time).get(id);
		String redisStock = (String) redisTemplate.boundValueOps("StockCount_" + id).get();
		if (StringUtils.isEmpty(redisStock)) {
			return false;
		}
		int value = Integer.parseInt(redisStock);
		// If there is no stock, return directly
		if (goods == null || value <= 0) {
			return false;
		}
		// Pre-deduct the stock in Redis
		Long stockCount = redisTemplate.boundValueOps("StockCount_" + id).decrement();
		if (stockCount <= 0) {
			// Stock is exhausted: delete the goods information
			redisTemplate.boundHashOps("SeckillGoods_" + time).delete(id);
			// Delete the corresponding stock information
			redisTemplate.delete("StockCount_" + goods.getId());
		}
		// There is stock: create the seckill order
		SeckillOrder seckillOrder = new SeckillOrder();
		seckillOrder.setId(idWorker.nextId());
		seckillOrder.setSeckillId(id); // record which seckill goods this order is for
		seckillOrder.setUserId(username);
		seckillOrder.setSellerId(goods.getSellerId());
		seckillOrder.setCreateTime(new Date());
		seckillOrder.setStatus("0");
		// Send the message to RabbitMQ (see section 1.2, step 5)
		return true;
	}
}

1.2 Ensuring the producer's messages are not lost

From basic RabbitMQ knowledge, the producer sends a message to the message server. In a real production environment, however, a message may be lost after reaching the message server because of a problem on the server side, such as a crash: by default the message server stores messages in memory, so once it goes down those messages are gone. Therefore, to ensure that the producer's messages are not lost, persistence must be enabled.

RabbitMQ persistence: exchange persistence, queue persistence, message persistence

But even with persistence enabled for these parts, messages can still be lost, because the message server may go down while persistence is in progress. Therefore a protection mechanism is needed to guarantee that the message has actually been persisted; otherwise the message keeps being sent.

Transaction mechanism
	The transaction mechanism protects data with a database-like transaction: when a message reaches the message server, a transaction is opened and the data is persisted to disk. Only if persistence succeeds is the transaction committed and a success acknowledgement returned to the producer; once the producer receives the success acknowledgement it stops sending that message. If an exception occurs, a failure notification is returned, and the producer keeps resending the message.
	Although the transaction mechanism guarantees data safety, it is synchronous, so it blocks messages between systems and reduces the message throughput of the whole system. Because of this performance penalty it is not recommended.
Confirm mechanism
	Confirm mode is set on the channel. Once a message is delivered to a queue, the message queue sends a confirmation to the producer; if the queue and the message are durable, the confirmation is only sent after the message has been successfully written to disk. The confirm mechanism performs well mainly because it is asynchronous: after sending one message, the producer can continue sending the next while waiting for the confirmation, and when the confirmation arrives it is handled in a callback method. If the MQ server goes down, a nack is returned, and the producer handles that in the same callback.
1.2.1 Enable the confirm mechanism
1) Change the seckill service configuration file
rabbitmq:
  host: 192.168.200.128
  publisher-confirms: true # enable the confirm mechanism
2) Turn on queue persistence
@Configuration
public class RabbitMQConfig {

	// Seckill order message queue
	public static final String SECKILL_ORDER_KEY = "seckill_order";

	@Bean
	public Queue queue() {
		// Enable queue durability (second argument: durable = true)
		return new Queue(SECKILL_ORDER_KEY, true);
	}
}
3) View the source code of message persistence
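The original step refers to viewing the Spring AMQP source in the IDE. For reference, the relevant detail (quoted from memory, so treat it as a sketch rather than the exact source) is that org.springframework.amqp.core.MessageProperties declares persistent delivery as the default, which is why messages sent through RabbitTemplate survive a broker restart as long as the queue itself is durable:

public class MessageProperties {
	// Spring AMQP marks messages as persistent by default
	public static final MessageDeliveryMode DEFAULT_DELIVERY_MODE = MessageDeliveryMode.PERSISTENT;
	// ... other fields and accessors omitted ...
}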

4) Enhance the RabbitTemplate
@Component
public class CustomMessageSender implements RabbitTemplate.ConfirmCallback {

	static final Logger log = LoggerFactory.getLogger(CustomMessageSender.class);

	private static final String MESSAGE_CONFIRM = "message_confirm_";

	@Autowired
	private RabbitTemplate rabbitTemplate;

	@Autowired
	private RedisTemplate redisTemplate;

	public CustomMessageSender(RabbitTemplate rabbitTemplate) {
		this.rabbitTemplate = rabbitTemplate;
		rabbitTemplate.setConfirmCallback(this);
	}

	@Override
	public void confirm(CorrelationData correlationData, boolean ack, String cause) {
		if (ack) {
			// Success acknowledgement: delete the related data from Redis
			redisTemplate.delete(correlationData.getId());
			redisTemplate.delete(MESSAGE_CONFIRM + correlationData.getId());
		} else {
			// Failure notification: read the cached message details and resend
			Map<String, String> map = (Map<String, String>) redisTemplate.opsForHash()
					.entries(MESSAGE_CONFIRM + correlationData.getId());
			String exchange = map.get("exchange");
			String routingKey = map.get("routingKey");
			String sendMessage = map.get("sendMessage");
			// Resend the message
			rabbitTemplate.convertAndSend(exchange, routingKey, sendMessage);
		}
	}

	// Custom send method
	public void sendMessage(String exchange, String routingKey, String message) {
		// Set a unique message identifier and store the message in the cache
		CorrelationData correlationData = new CorrelationData(UUID.randomUUID().toString());
		redisTemplate.opsForValue().set(correlationData.getId(), message);
		Map<String, String> map = new HashMap<>();
		map.put("exchange", exchange);
		map.put("routingKey", routingKey);
		map.put("sendMessage", message);
		redisTemplate.opsForHash().putAll(MESSAGE_CONFIRM + correlationData.getId(), map);
		// Send the message with the unique identifier attached
		rabbitTemplate.convertAndSend(exchange, routingKey, message, correlationData);
	}
}
5) Send a message

Change the order business layer implementation

@Autowired 
private CustomMessageSender customMessageSender;
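The step above only shows the injected sender. A minimal sketch of the actual send, assuming it is placed at the end of SecKillOrderServiceImpl.add() (the empty default exchange with the queue name as routing key is an assumption that matches the Queue bean declared in RabbitMQConfig):

// In SecKillOrderServiceImpl.add(), after the SeckillOrder has been populated:
// serialize the order and hand it to the confirm-aware sender, so a missing confirm triggers a resend.
customMessageSender.sendMessage("", RabbitMQConfig.SECKILL_ORDER_KEY, JSON.toJSONString(seckillOrder));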

1.3 Seckill order service: updating inventory

1.3.1 Asynchronous order consumer service (service_consume)
1) Add dependencies
<dependencies>
	<dependency>
		<groupId>com.changgou</groupId>
		<artifactId>changgou_common_db</artifactId>
		<version>1.0-SNAPSHOT</version>
	</dependency>
	<dependency>
		<groupId>org.springframework.cloud</groupId>
		<artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
	</dependency>
	<dependency>
		<groupId>com.changgou</groupId>
		<artifactId>changgou_service_order_api</artifactId>
		<version>1.0-SNAPSHOT</version>
	</dependency>
	<dependency>
		<groupId>com.changgou</groupId>
		<artifactId>changgou_service_seckill_api</artifactId>
		<version>1.0-SNAPSHOT</version>
	</dependency>
	<dependency>
		<groupId>com.changgou</groupId>
		<artifactId>changgou_service_goods_api</artifactId>
		<version>1.0-SNAPSHOT</version>
	</dependency>
	<dependency>
		<groupId>org.springframework.amqp</groupId>
		<artifactId>spring-rabbit</artifactId>
	</dependency>
</dependencies>
2) Create application.yml
server:
  port: 9022
spring:
  jackson:
    time-zone: GMT+8
  application:
    name: sec-consume
  datasource:
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://192.168.200.128:3306/changgou_seckill?useUnicode=true&characterEncoding=utf-8&useSSL=false&allowMultiQueries=true&serverTimezone=GMT%2b8
    username: root
    password: root
  main:
    allow-bean-definition-overriding: true # whether a bean definition may override an existing one with the same name
  redis:
    host: 192.168.200.128
  rabbitmq:
    host: 192.168.200.128
eureka:
  client:
    service-url:
      defaultZone: http://127.0.0.1:6868/eureka
  instance:
    prefer-ip-address: true
feign:
  hystrix:
    enabled: true
  client:
    config:
      default: # global feign timeout configuration; a per-service configuration would override these defaults
        connectTimeout: 60000 # timeout for the consumer to establish a connection to the provider, in milliseconds
        readTimeout: 20000 # timeout for the call to the provider's service, in milliseconds
# hystrix configuration
hystrix:
  command:
    default:
      execution:
        timeout:
          # if enabled is set to false, request timeouts are handed over to ribbon
          enabled: true
        isolation:
          strategy: SEMAPHORE
          thread:
            # circuit breaker timeout, default: 1000 ms
            timeoutInMilliseconds: 20000
3) Create a new startup class
@SpringBootApplication 
@EnableDiscoveryClient 
@MapperScan(basePackages = {"com.changgou.consume.dao"}) 
public class OrderConsumerApplication { 
	public static void main(String[] args) { 
		SpringApplication.run(OrderConsumerApplication.class,args); 
	} 
}
1.3.2 Consumer-side manual ACK implementation

From basic RabbitMQ knowledge, when a message consumer receives a message it consumes it and automatically notifies the message server to delete it; this is the consumer auto-acknowledgement mechanism. However, this approach is unsafe: in production, after the consumer receives a message, something unexpected may happen while processing it and the message would be lost. We need to make sure the message server only deletes the message after the consumer has processed it successfully. To achieve this, the automatic acknowledgement must be switched to a manual acknowledgement: only after the consumer has finished processing the message does it notify the message server to delete it.

1) Change the configuration file
rabbitmq:
  host: 192.168.200.128
  listener:
    simple:
      acknowledge-mode: manual # manual acknowledgement
2) Define the listener class
@Component
public class ConsumeListener {

	@Autowired
	private SecKillOrderService secKillOrderService;

	@RabbitListener(queues = RabbitMQConfig.SECKILL_ORDER_KEY)
	public void receiveSecKillOrderMessage(Channel channel, Message message) {
		// Convert the message
		SeckillOrder seckillOrder = JSON.parseObject(message.getBody(), SeckillOrder.class);
		// Synchronize the order into MySQL
		int rows = secKillOrderService.createOrder(seckillOrder);
		if (rows > 0) {
			// Acknowledge success
			try {
				channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
			} catch (IOException e) {
				e.printStackTrace();
			}
		} else {
			// Report failure
			try {
				// second argument (multiple): false rejects only this message, true would reject all unacknowledged messages up to this delivery tag
				// third argument (requeue): false does not requeue the message (it is discarded or dead-lettered), true would put it back on the queue
				channel.basicNack(message.getMessageProperties().getDeliveryTag(), false, false);
			} catch (IOException e) {
				e.printStackTrace();
			}
		}
	}
}

3) Define the business layer interface and implementation class

public interface SecKillOrderService {

	/**
	 * Create the seckill order
	 * @param seckillOrder
	 * @return
	 */
	int createOrder(SeckillOrder seckillOrder);
}
@Service
public class SecKillOrderServiceImpl implements SecKillOrderService {

	@Autowired
	private SeckillGoodsMapper seckillGoodsMapper;

	@Autowired
	private SeckillOrderMapper seckillOrderMapper;

	/**
	 * Create the order
	 * @param seckillOrder
	 * @return
	 */
	@Override
	@Transactional
	public int createOrder(SeckillOrder seckillOrder) {
		// Deduct the stock in MySQL
		int result = seckillGoodsMapper.updateStockCount(seckillOrder.getSeckillId());
		if (result <= 0) {
			return result;
		}
		// Save the order
		result = seckillOrderMapper.insertSelective(seckillOrder);
		if (result <= 0) {
			return result;
		}
		return 1;
	}
}
About the database unsigned attribute: unsigned columns cannot hold negative values and can be applied to numeric types such as int. Declaring the remaining stock column as unsigned prevents it from being updated below zero:

ALTER TABLE tb_seckill_goods MODIFY COLUMN stock_count int(11) UNSIGNED DEFAULT NULL COMMENT 'remaining stock count';
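The updateStockCount dao method used by createOrder() is not shown in the original text. A minimal sketch, assuming it decrements the stock by one and relies on the stock_count > 0 guard (reinforced by the unsigned column above) to prevent overselling:

public interface SeckillGoodsMapper extends Mapper<SeckillGoods> {

	/**
	 * Decrement the remaining stock of a seckill goods record by one.
	 * If no row matches (stock already exhausted), 0 is returned and createOrder stops before inserting the order.
	 * Note: this signature is an assumption; the original article does not show the mapper.
	 */
	@Update("update tb_seckill_goods set stock_count = stock_count - 1 where id = #{seckillId} and stock_count > 0")
	int updateStockCount(@Param("seckillId") Long seckillId);
}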

1.5 Traffic peak clipping

In the high-concurrency scenario of a seckill, tens of thousands or even hundreds of thousands of messages may be produced per second. If the amount of messages a consumer handles is not limited, the consumer may go down because too many messages accumulate. Therefore it is recommended to set a prefetch limit (the maximum number of unacknowledged messages fetched at once) for each consumer.

Setting the prefetch value too small or too large is both harmful. Too small, and the message throughput of the whole system drops, wasting performance; too large, and too many messages pile up in the consumer, which can lead to an OOM. The RabbitMQ documentation suggests that values in the 100 to 300 range usually give good throughput.

1) Update the consumer.

// Set the prefetch count
channel.basicQos(300);
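A sketch of where the call could live, assuming it is placed at the top of the listener method from 1.3.2 (setting the spring.rabbitmq.listener.simple.prefetch property would be an alternative way to achieve the same limit):

@RabbitListener(queues = RabbitMQConfig.SECKILL_ORDER_KEY)
public void receiveSecKillOrderMessage(Channel channel, Message message) throws IOException {
	// Limit the number of unacknowledged messages delivered to this consumer at one time
	channel.basicQos(300);
	// ... message conversion and manual ack/nack as shown in 1.3.2 ...
}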

1.6 Seckill rendering service: order implementation

1) Define the feign interface
@FeignClient(name = "seckill")
public interface SecKillOrderFeign {

	/**
	 * Seckill ordering
	 * @param time current time slot
	 * @param id   seckill goods id
	 * @return
	 */
	@RequestMapping("/seckillorder/add")
	Result add(@RequestParam("time") String time, @RequestParam("id") Long id);
}
2) Define the controller
@Controller
@CrossOrigin
@RequestMapping("/wseckillorder")
public class SecKillOrderController {

	@Autowired
	private SecKillOrderFeign secKillOrderFeign;

	/**
	 * Seckill ordering
	 * @param time current time slot
	 * @param id   seckill goods id
	 * @return
	 */
	@RequestMapping("/add")
	@ResponseBody
	public Result add(String time, Long id) {
		Result result = secKillOrderFeign.add(time, id);
		return result;
	}
}

2 Preventing malicious order brushing

In a production scenario, some users may maliciously submit orders over and over. For the system, such operations cause business errors, dirty data, and extra pressure on the back end.

Generally this problem has to be handled both on the front end and on the back end. On the back end it can be solved with the atomic Redis incr operation.

2.1 Update the seckill service order method
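The original step does not show the change itself. A minimal sketch, assuming the add() method from 1.1 calls the preventRepeatCommit helper shown in 2.2 below before doing any other work:

@Override
public boolean add(Long id, String time, String username) {
	// Reject duplicate submissions from the same user for the same goods (helper method shown in 2.2)
	String repeatResult = this.preventRepeatCommit(username, id);
	if ("fail".equals(repeatResult)) {
		return false;
	}
	// ... the original ordering logic from 1.1 continues here ...
}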

2.2 Implementation of the anti-duplicate-submission method
// Prevent duplicate submissions
private String preventRepeatCommit(String username, Long id) {
	String redisKey = "seckill_user_" + username + "_id_" + id;
	long count = redisTemplate.opsForValue().increment(redisKey, 1);
	if (count == 1) {
		// Set a five-minute expiration
		redisTemplate.expire(redisKey, 5, TimeUnit.MINUTES);
		return "success";
	}
	return "fail";
}

3 Preventing repeated seckill purchases of the same product

3.1 Modify the order business layer implementation
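The original step does not show the change itself. A minimal sketch, assuming a SeckillOrderMapper is injected into SecKillOrderServiceImpl and the check runs before the Redis stock pre-deduction, using the dao query defined in 3.2 below:

// In SecKillOrderServiceImpl.add(), before pre-deducting stock:
// if this user already has a seckill order for this goods, refuse to place another one.
SeckillOrder existingOrder = seckillOrderMapper.getSecKillOrderByUserNameAndGoodsId(username, id);
if (existingOrder != null) {
	return false;
}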

3.2 New query method for dao layer
public interface SeckillOrderMapper extends Mapper<SeckillOrder> {

	/**
	 * Query the seckill order information
	 * @param username
	 * @param id
	 * @return
	 */
	@Select("select * from tb_seckill_order where user_id=#{username} and seckill_id=#{id}")
	SeckillOrder getSecKillOrderByUserNameAndGoodsId(@Param("username") String username, @Param("id") Long id);
}

4 Hiding the seckill order interface

Even though ordering is only possible after the user has logged in, a malicious user could still guess the order interface address after logging in and use it to brush orders. Therefore the seckill order interface address needs to be hidden.

Every time the user clicks Buy, a random string is generated first and saved in Redis; the user then carries that random string when calling the seckill order interface. The order interface first fetches the random string from Redis and matches it: if the match succeeds, the subsequent ordering steps are executed; if it fails, the request is treated as illegal access.

4.1 Put the random number tool class into the common project
public class RandomUtil {

	public static String getRandomString() {
		int length = 15;
		String base = "abcdefghijklmnopqrstuvwxyz0123456789";
		Random random = new Random();
		StringBuilder sb = new StringBuilder();
		for (int i = 0; i < length; i++) {
			int number = random.nextInt(base.length());
			sb.append(base.charAt(number));
		}
		return sb.toString();
	}

	public static void main(String[] args) {
		String randomString = RandomUtil.getRandomString();
		System.out.println(randomString);
	}
}
4.2 The seckill rendering service defines the random number interface
/**
 * Interface hiding
 * Generate a random string and store it in Redis with a 10-second expiration
 */
@GetMapping("/getToken")
@ResponseBody
public String getToken() {
	String randomString = RandomUtil.getRandomString();
	String cookieValue = this.readCookie();
	redisTemplate.boundValueOps("randomcode_" + cookieValue).set(randomString, 10, TimeUnit.SECONDS);
	return randomString;
}

// Read the user identifier from the cookie
private String readCookie() {
	HttpServletRequest request = ((ServletRequestAttributes) RequestContextHolder.getRequestAttributes()).getRequest();
	String cookieValue = CookieUtil.readCookie(request, "uid").get("uid");
	return cookieValue;
}
4.3 js modification

Modify the js order method

// Seckill ordering
add: function (id) {
	app.msg = 'Placing the order...';
	// Get the random string
	axios.get("/api/wseckillorder/getToken").then(function (response) {
		var random = response.data;
		axios.get("/api/wseckillorder/add?time=" + moment(app.dateMenus[0]).format("YYYYMMDDHH") + "&id=" + id + "&random=" + random).then(function (response) {
			if (response.data.flag) {
				app.msg = 'Order grabbed successfully, proceeding to payment!';
			} else {
				app.msg = 'Failed to grab the order';
			}
		})
	})
}

4.4 Seckill rendering service changes

Modify the order interface of the spike rendering service

/**
 * Seckill ordering
 * @param time current time slot
 * @param id   seckill goods id
 * @return
 */
@RequestMapping("/add")
@ResponseBody
public Result add(String time, Long id, String random) {
	// Verify that the random string is valid (it was stored under the user's cookie value in getToken)
	String randomcode = (String) redisTemplate.boundValueOps("randomcode_" + this.readCookie()).get();
	if (StringUtils.isEmpty(randomcode) || !random.equals(randomcode)) {
		return new Result(false, StatusCode.ERROR, "Invalid access");
	}
	Result result = secKillOrderFeign.add(time, id);
	return result;
}

5 Rate limiting the seckill order interface

Because of the special business scenario of a seckill, the access traffic of the seckill order interface may need to be controlled in production to prevent too many requests from reaching the back-end servers. For rate limiting we have already covered nginx rate limiting and gateway rate limiting, but both of them restrict access to a whole service. What if you only want to restrict a single interface method inside a service? A common approach is to use RateLimiter from Google's Guava toolkit, which internally implements rate limiting with the token bucket algorithm.

1) Add the dependency
<dependency> 
	<groupId>com.google.guava</groupId> 
	<artifactId>guava</artifactId> 
	<version>28.0-jre</version> 
</dependency>
2) Custom rate-limit annotation
@Documented
@Target({ElementType.METHOD, ElementType.FIELD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
public @interface AccessLimit {
}
3) Custom aspect
@Component
@Scope
@Aspect
public class AccessLimitAop {

	@Autowired
	private HttpServletResponse httpServletResponse;

	// Allow at most 20 permits per second (token bucket)
	private RateLimiter rateLimiter = RateLimiter.create(20.0);

	@Pointcut("@annotation(com.changgou.webSecKill.aspect.AccessLimit)")
	public void limit() {}

	@Around("limit()")
	public Object around(ProceedingJoinPoint proceedingJoinPoint) {
		// Try to acquire a token without blocking
		boolean flag = rateLimiter.tryAcquire();
		Object obj = null;
		try {
			if (flag) {
				obj = proceedingJoinPoint.proceed();
			} else {
				String errorMessage = JSON.toJSONString(new Result(false, StatusCode.ERROR, "fail"));
				outMessage(httpServletResponse, errorMessage);
			}
		} catch (Throwable throwable) {
			throwable.printStackTrace();
		}
		return obj;
	}

	private void outMessage(HttpServletResponse response, String errorMessage) {
		ServletOutputStream outputStream = null;
		try {
			response.setContentType("application/json;charset=UTF-8");
			outputStream = response.getOutputStream();
			outputStream.write(errorMessage.getBytes("UTF-8"));
		} catch (IOException e) {
			e.printStackTrace();
		} finally {
			try {
				if (outputStream != null) {
					outputStream.close();
				}
			} catch (IOException e) {
				e.printStackTrace();
			}
		}
	}
}
4) Use the custom rate-limit annotation
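The original step does not show the usage. A minimal sketch, assuming the annotation is applied to the order method of the seckill rendering controller from 1.6/4.4 so that the aspect above intercepts it:

@AccessLimit
@RequestMapping("/add")
@ResponseBody
public Result add(String time, Long id, String random) {
	// ... random-string check and feign call as shown in 4.4 ...
}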

You are welcome to share your own opinions so we can discuss them together!


Origin blog.csdn.net/yychuyu/article/details/108477629