Leaf distributed ID generation system: source code walkthrough


Leaf, a distributed ID generation system, has two ways to generate IDs:

  • Number segment mode
  • Snowflake mode

Number segment mode

Since segment mode depends on a database table, let's look at that table first. Its fields are:

biz_tag: isolates different businesses. If capacity needs to grow later, you only need to shard the table by biz_tag.
max_id: the current maximum ID of this business's segment, used to compute the next segment.
step: the step length, i.e. how many IDs are fetched from the database at a time.
random_step: the random increment length applied on each getId call.

The corresponding entity class is as follows:

import java.util.concurrent.atomic.AtomicLong;

/**
 * @author left
 */
public class Segment {

	private AtomicLong value = new AtomicLong(0); // atomically updated long; this is the ID value handed out

	private volatile long max; // maximum id of the current segment

	private volatile int step; // how many IDs are cached per fetch

	private volatile int randomStep; // random increment

	private final SegmentBuffer buffer; // the double buffer this segment belongs to

	public Segment(SegmentBuffer buffer) {
		this.buffer = buffer;
	}
}

Viewed this way, the database's auto-increment has simply been moved into memory: a layer of cache that reduces the number of database accesses. But Leaf actually does better than that: through a double-buffer optimization, the program pre-fetches the next segment in advance, hiding the latency of the network request.
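As a minimal, hypothetical sketch (not Leaf's actual code) of what "auto-increment in memory" means: IDs are handed out from an in-memory window with a single atomic increment, and the database only needs to be touched when the window runs out.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: hand out IDs from a cached segment until it is exhausted.
class SegmentSketch {
    final AtomicLong value; // next id to hand out
    final long max;         // exclusive upper bound of this segment

    SegmentSketch(long start, long max) {
        this.value = new AtomicLong(start);
        this.max = max;
    }

    /** Returns the next id, or -1 when the segment is used up and a new one must be fetched. */
    long nextId() {
        long id = value.getAndIncrement();
        return id < max ? id : -1;
    }
}
```

With a step of 100, one database round trip buys 100 calls to `nextId()`; every other call is a lock-free in-memory increment.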

Double buffer optimization

The idea: while IDs are handed out from the current segment, the next segment is loaded asynchronously into the second buffer (in Leaf this kicks in once roughly 90% of the current segment has been consumed), so switching segments never has to wait on the database.

The entity class corresponding to the database table is as follows:

/**
 * @author leaf
 */
public class LeafAlloc {

	private String key;

	private long maxId;

	private int step;

	private String updateTime;

	private int randomStep;
}

The following class implements the cache buffer:

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SegmentBuffer {

	private String key; // corresponds to biz_tag in the database

	/**
	 * The double buffer
	 */
	private final Segment[] segments;

	/**
	 * Index of the segment currently in use
	 */
	private volatile int currentPos;

	/**
	 * Whether the next segment is ready to be switched to
	 */
	private volatile boolean nextReady;

	/**
	 * Whether initialization has completed
	 */
	private volatile boolean initOk;

	/**
	 * Whether the loader thread is running
	 */
	private final AtomicBoolean threadRunning;

	private final ReadWriteLock lock;

	private volatile int step;

	private volatile int minStep;

	private volatile long updateTimestamp;

	public SegmentBuffer() {
		segments = new Segment[]{new Segment(this), new Segment(this)};
		currentPos = 0;
		nextReady = false;
		initOk = false;
		threadRunning = new AtomicBoolean(false);
		lock = new ReentrantReadWriteLock();
	}
}

Here is how this looks in the code. When the application starts, SegmentService is instantiated. Its constructor first creates an IDAllocDao; note that the DAO layer does not rely on the @Mapper annotation to generate an implementation, but provides one explicitly in IDAllocDaoImpl (most likely because, as an imported library, its mapper annotations might not be scanned and the corresponding SQL would not be found).

@Service("SegmentService")
public class SegmentService {

	private final Logger logger = LoggerFactory.getLogger(SegmentService.class);

	private final IDGen idGen;

	public SegmentService(DataSource dataSource) throws InitException {
		// Config Dao
		IDAllocDao dao = new IDAllocDaoImpl(dataSource);

		// Config ID Gen
		idGen = new SegmentIDGenImpl();
		((SegmentIDGenImpl) idGen).setDao(dao);
		if (idGen.init()) {
			logger.info("Segment Service Init Successfully");
		}
		else {
			throw new InitException("Segment Service Init Fail");
		}
	}
}

IDAllocDaoImpl wires up MyBatis by hand rather than through Spring:

public IDAllocDaoImpl(DataSource dataSource) {
    // Create the transaction factory
    TransactionFactory transactionFactory = new JdbcTransactionFactory();

    // Create the MyBatis environment
    Environment environment = new Environment("development", transactionFactory, dataSource);

    // Create the MyBatis configuration object
    Configuration configuration = new Configuration(environment);

    // Register the IDAllocMapper mapping
    configuration.addMapper(IDAllocMapper.class);

    // Build the SqlSessionFactory
    sqlSessionFactory = new SqlSessionFactoryBuilder().build(configuration);
}

Next, idGen.init() initializes the generator: it calls updateCacheFromDb() to load the cache and updateCacheFromDbAtEveryMinute() to keep it refreshed. In updateCacheFromDb(), `SegmentBuffer buffer = new SegmentBuffer();` creates a SegmentBuffer for each new tag; its constructor allocates the Segment[] pair that backs the double buffer, along with the other initial values.
The main code and comments are as follows:

@Override
	public boolean init() {
		logger.info("Init ...");
		// Only counts as initialized once the key/values have been loaded
		updateCacheFromDb();
		initOk = true;
		// Refresh the segments from the db every 60s
		updateCacheFromDbAtEveryMinute();
		return initOk;
	}

private void updateCacheFromDb() {
		logger.info("update cache from db");
		try {
			List<String> dbTags = dao.getAllTags();
			if (dbTags == null || dbTags.isEmpty()) {
				return;
			}
			List<String> cacheTags = new ArrayList<String>(cache.keySet());
			Set<String> insertTagsSet = new HashSet<>(dbTags);
			Set<String> removeTagsSet = new HashSet<>(cacheTags);
			// Pour tags newly added in the db into the cache
			for (int i = 0; i < cacheTags.size(); i++) {
				String tmp = cacheTags.get(i);
				insertTagsSet.remove(tmp);
			}
			for (String tag : insertTagsSet) {
				SegmentBuffer buffer = new SegmentBuffer();
				buffer.setKey(tag);
				// Take the segment at the current position (the first one, initially)
				Segment segment = buffer.getCurrent();
				// Initialize it to 0
				segment.setValue(new AtomicLong(0));
				segment.setMax(0);
				segment.setStep(0);
				// Cache it
				cache.put(tag, buffer);
				logger.info("Add tag {} from db to IdCache, SegmentBuffer {}", tag, buffer);
			}
			// Walk the db tags; any tag still present in the db is kept out of removeTagsSet
			for (int i = 0; i < dbTags.size(); i++) {
				String tmp = dbTags.get(i);
				removeTagsSet.remove(tmp);
			}
			// Tags no longer in the db are evicted from the cache
			for (String tag : removeTagsSet) {
				cache.remove(tag);
				logger.info("Remove tag {} from IdCache", tag);
			}
		}
		catch (Exception e) {
			logger.warn("update cache from db exception", e);
		}
	}

updateCacheFromDbAtEveryMinute() schedules a task that re-runs updateCacheFromDb() periodically:

private void updateCacheFromDbAtEveryMinute() {
		ScheduledExecutorService service = Executors.newSingleThreadScheduledExecutor(new ThreadFactory() {
			// Configure the worker thread
			@Override
			public Thread newThread(Runnable r) {
				Thread t = new Thread(r);
				t.setName("check-idCache-thread");
				// Daemon thread, so it exits when the main thread does
				t.setDaemon(true);
				return t;
			}
		});
		// Run every 60s, starting 60s after startup
		service.scheduleWithFixedDelay(new Runnable() {
			@Override
			public void run() {
				updateCacheFromDb();
			}
		}, 60, 60, TimeUnit.SECONDS);
	}

Getting an ID

Here, a user ID is obtained when a user registration interface is called; the registration service invokes the getId service through a Feign call. The code to obtain the id is as follows:

@Override
	public Result get(final String key) {
		if (!initOk) {
			return new Result(EXCEPTION_ID_IDCACHE_INIT_FALSE, Status.EXCEPTION);
		}
		// Look up the cached buffer for this key
		SegmentBuffer buffer = cache.get(key);
		if (buffer != null) {
			if (!buffer.isInitOk()) {
				// Not yet initialized: lock this buffer so no other thread can modify it
				synchronized (buffer) {
					// Double-checked: another thread may have initialized it while we waited
					if (!buffer.isInitOk()) {
						try {
							// Load a segment from the database
							updateSegmentFromDb(key, buffer.getCurrent());
							logger.info("Init buffer. Update leafkey {} {} from db", key, buffer.getCurrent());
							buffer.setInitOk(true);
						}
						catch (Exception e) {
							logger.warn("Init buffer {} exception", buffer.getCurrent(), e);
						}
					}
				}
			}
			return getIdFromSegmentBuffer(cache.get(key));
		}
		return new Result(EXCEPTION_ID_KEY_NOT_EXISTS, Status.EXCEPTION);
	}

The first core piece: updateSegmentFromDb, which refreshes a segment from the database.

①: If the buffer has not been initialized, first bump the segment maximum in the database and read back the result; this is effectively fetching one cache-full of IDs. The update increases max_id by one step. With step set to 100 in the table, each fetch takes 100 numbers.
②: The second case: the buffer is initialized but its step has not been adjusted yet; a separate thread fills the second-layer cache here.
③④: If a segment lasts less than its 15-minute lifetime (i.e. it was consumed quickly), nextStep is expanded to 2x as normal and the database is updated.
⑤: Conversely, a segment that lasts well beyond that lifetime has its step shrunk back, never going below minStep.
⑥⑦: Once the step has been settled, max_id for the key is updated to max_id + step; any unused IDs in an abandoned segment are effectively discarded after the 15 minutes.
⑧: After the three ifs above complete, the returned values are used to set up the current segment.
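The step adjustment in ③④⑤ can be sketched as follows. This is a hypothetical simplification; the 15-minute duration comes from the text above, while the names `StepAdjuster`, `MAX_STEP`, and the exact cap are assumptions for illustration.

```java
// Hypothetical sketch of the dynamic step adjustment described above:
// a segment consumed in under 15 minutes doubles the step (up to an assumed cap);
// one that lasts much longer shrinks back toward minStep.
class StepAdjuster {
    static final long SEGMENT_DURATION = 15 * 60 * 1000L; // 15 minutes, in ms
    static final int MAX_STEP = 1_000_000;                // assumed upper cap

    static int nextStep(int step, int minStep, long millisSinceLastUpdate) {
        if (millisSinceLastUpdate < SEGMENT_DURATION) {
            return Math.min(step * 2, MAX_STEP);          // consumed fast: expand
        } else if (millisSinceLastUpdate < SEGMENT_DURATION * 2) {
            return step;                                  // about right: keep
        } else {
            return Math.max(step / 2, minStep);           // consumed slowly: shrink
        }
    }
}
```

So a hot tag quickly grows its step (fewer database round trips), while an idle tag shrinks it (fewer IDs wasted when segments are discarded).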

The second core piece: getIdFromSegmentBuffer, which takes an ID from the cache.
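The illustrations for this step do not survive here, so below is a simplified, single-threaded sketch of the double-buffer switch: take IDs from the current segment until it is exhausted, make sure the other buffer holds a fresh segment, then flip `currentPos`. The real Leaf code additionally uses a read/write lock and loads the next segment on a background thread before the current one runs out; `loadSegment` and `dbMaxId` here merely stand in for the database update.

```java
// Simplified, hypothetical sketch of the double-buffer read path.
class DoubleBufferSketch {
    final long[] max = new long[2];   // per-buffer segment max (exclusive)
    final long[] value = new long[2]; // per-buffer next id
    int currentPos = 0;
    boolean nextReady = false;
    long dbMaxId = 0;                 // stands in for max_id in the database
    final int step = 100;

    // Stands in for updateSegmentFromDb: max_id += step, segment = [max_id - step, max_id)
    void loadSegment(int pos) {
        dbMaxId += step;
        value[pos] = dbMaxId - step;
        max[pos] = dbMaxId;
    }

    long nextId() {
        if (value[currentPos] < max[currentPos]) {
            return value[currentPos]++;           // fast path: current segment has ids left
        }
        if (!nextReady) {                         // real code prepares this asynchronously
            loadSegment(1 - currentPos);
            nextReady = true;
        }
        currentPos = 1 - currentPos;              // switch buffers
        nextReady = false;
        return value[currentPos]++;
    }
}
```

Because the next segment is normally ready before the switch, the flip itself is just an index change; only a cold start or a fully drained pair of buffers has to wait on the database.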


Reposted from: blog.csdn.net/qq_40454136/article/details/134061663