The previous article analyzed TaskExecuteEngine
and its two subclasses NacosDelayTaskExecuteEngine
and NacosExecuteTaskExecuteEngine
. Readers who have not seen it can click here to view 5. Nacos service registration server source code analysis (4) . This article starts from NacosTask
and analyzes the subsequent logic.
Task Analysis
NacosTask
has only one method, boolean shouldProcess()
, which decides whether the task should be executed.
It has two abstract subclasses: AbstractExecuteTask
and AbstractDelayTask
AbstractExecuteTask
Its implementation is trivial: it simply returns true
, so the task is executed whenever it is scheduled.
@Override
public boolean shouldProcess() {
return true;
}
AbstractDelayTask
is executed with a delay, so its shouldProcess() checks the delay interval.
@Override
public boolean shouldProcess() {
    // Return false if the time since the last processing has not yet reached the task interval
    return (System.currentTimeMillis() - this.lastProcessTime >= this.taskInterval);
}

// Abstract method: subclasses decide whether a newly added task can be merged into this one
public abstract void merge(AbstractDelayTask task);
Here we again use the previously analyzed PushDelayTask
to see what this method does.
@Override
public void merge(AbstractDelayTask task) {
    if (!(task instanceof PushDelayTask)) {
        return;
    }
    PushDelayTask oldTask = (PushDelayTask) task;
    if (isPushToAll() || oldTask.isPushToAll()) {
        // Either task pushes to all clients, so the merged task does too; no special handling needed
        pushToAll = true;
        targetClients = null;
    } else {
        // Otherwise union the target client sets; these will be pushed one by one,
        // mainly for data whose previous push failed
        targetClients.addAll(oldTask.getTargetClients());
    }
    setLastProcessTime(Math.min(getLastProcessTime(), task.getLastProcessTime()));
    Loggers.PUSH.info("[PUSH] Task merge for {}", service);
}
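To see the merge semantics in isolation, here is a minimal, self-contained sketch (DemoPushTask is a simplified stand-in I made up, not the actual Nacos type): merging a push-to-all task with a targeted task widens the result to push-to-all, while merging two targeted tasks unions their client sets.

```java
import java.util.HashSet;
import java.util.Set;

// Simplified stand-in for PushDelayTask, illustrating the merge semantics above.
class DemoPushTask {
    boolean pushToAll;
    Set<String> targetClients;
    long lastProcessTime = System.currentTimeMillis();

    DemoPushTask() {                 // push to every subscriber
        this.pushToAll = true;
        this.targetClients = null;
    }

    DemoPushTask(String clientId) {  // push to a single client (e.g. a retry after a failed push)
        this.pushToAll = false;
        this.targetClients = new HashSet<>();
        this.targetClients.add(clientId);
    }

    // Same logic as PushDelayTask#merge: push-to-all absorbs everything,
    // otherwise the two target sets are unioned.
    void merge(DemoPushTask old) {
        if (pushToAll || old.pushToAll) {
            pushToAll = true;
            targetClients = null;
        } else {
            targetClients.addAll(old.targetClients);
        }
        lastProcessTime = Math.min(lastProcessTime, old.lastProcessTime);
    }
}

public class MergeDemo {
    public static void main(String[] args) {
        DemoPushTask a = new DemoPushTask("client-1");
        a.merge(new DemoPushTask("client-2"));
        System.out.println(a.targetClients);   // union of the two target sets

        a.merge(new DemoPushTask());           // widened to push-to-all
        System.out.println(a.pushToAll);       // true
    }
}
```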
NacosDelayTaskExecuteEngine
This corresponds to the previously analyzed NacosExecuteTaskExecuteEngine
. NacosExecuteTaskExecuteEngine
needs no delay: by default its tasks execute immediately, one by one. NacosDelayTaskExecuteEngine
, by contrast, performs delayed processing, mainly for failed pushes. Since these are failures, the 100ms polling interval is too short to handle them individually, so they are put into a collection and then processed together once the delay time is up. The delay time here is 1s.
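The delay-engine idea described above can be sketched as a toy model (this is my simplified illustration, not the real NacosDelayTaskExecuteEngine): tasks are stored keyed, re-adding a task under the same key merges it with the pending one, and a processor (polling every 100ms in Nacos) only runs tasks whose delay interval has elapsed.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch of a keyed delay-task engine, NOT the actual Nacos implementation.
public class MiniDelayEngine {

    static class DemoDelayTask {
        long lastProcessTime = System.currentTimeMillis();
        final long taskInterval;   // e.g. 1000ms in the real PushDelayTask
        int mergedCount = 1;

        DemoDelayTask(long taskInterval) { this.taskInterval = taskInterval; }

        // Same shape as AbstractDelayTask#shouldProcess
        boolean shouldProcess() {
            return System.currentTimeMillis() - lastProcessTime >= taskInterval;
        }

        void merge(DemoDelayTask old) {
            mergedCount += old.mergedCount;
            lastProcessTime = Math.min(lastProcessTime, old.lastProcessTime);
        }
    }

    private final Map<String, DemoDelayTask> tasks = new LinkedHashMap<>();

    // Re-adding a task under the same key merges it with the pending one,
    // so repeated changes to one service collapse into a single push.
    public synchronized void addTask(String key, DemoDelayTask task) {
        DemoDelayTask pending = tasks.get(key);
        if (pending != null) {
            task.merge(pending);
        }
        tasks.put(key, task);
    }

    // One polling round: execute and remove every task whose delay is up.
    public synchronized int processTasks() {
        int processed = 0;
        Iterator<DemoDelayTask> it = tasks.values().iterator();
        while (it.hasNext()) {
            if (it.next().shouldProcess()) {
                it.remove();
                processed++;
            }
        }
        return processed;
    }

    public synchronized int pendingCount() { return tasks.size(); }
}
```

Calling addTask twice with the same key leaves a single pending task, and processTasks does nothing until the interval has elapsed, which is exactly the batching behavior the delay engine relies on.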
Let's continue from the last piece of code analyzed previously:
delayTaskEngine.getPushExecutor().doPushWithCallback(each, subscriber, wrapper,
        new ServicePushCallback(each, subscriber, wrapper.getOriginalData(), delayTask.isPushToAll()));
The first step is to obtain delayTaskEngine.getPushExecutor()
. Tracing this field, we find it is passed in through the constructor; tracing further up, we can see the class is managed by Spring
, and the injected implementation is PushExecutorDelegate
.
Delegate
means "entrusted", indicating the delegate pattern. Let's look at some member variables and the core method of this class.
// rpc executor, used by the V2 protocol
private final PushExecutorRpcImpl rpcPushExecuteService;

// udp executor, used by the V1 protocol
private final PushExecutorUdpImpl udpPushExecuteService;

public PushExecutorDelegate(PushExecutorRpcImpl rpcPushExecuteService, PushExecutorUdpImpl udpPushExecuteService) {
    this.rpcPushExecuteService = rpcPushExecuteService;
    this.udpPushExecuteService = udpPushExecuteService;
}

private PushExecutor getPushExecuteService(String clientId, Subscriber subscriber) {
    Optional<SpiPushExecutor> result = SpiImplPushExecutorHolder.getInstance()
            .findPushExecutorSpiImpl(clientId, subscriber);
    if (result.isPresent()) {
        return result.get();
    }
    // use nacos default push executor
    // Decide by the connected client's id whether to push via udp or rpc
    return clientId.contains(IpPortBasedClient.ID_DELIMITER) ? udpPushExecuteService : rpcPushExecuteService;
}
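The routing rule in the last line of getPushExecuteService hinges on the id format. A small sketch of the check, under the assumption (from Nacos 2.x source) that IpPortBasedClient.ID_DELIMITER is "#", so a V1 udp client id looks like "ip:port#ephemeral" while a V2 gRPC connection id contains no "#":

```java
public class RoutingDemo {
    // Assumption: "#" is the value of IpPortBasedClient.ID_DELIMITER in Nacos 2.x.
    private static final String ID_DELIMITER = "#";

    // Same check as the default branch of getPushExecuteService.
    static String route(String clientId) {
        return clientId.contains(ID_DELIMITER) ? "udp" : "rpc";
    }

    public static void main(String[] args) {
        System.out.println(route("192.168.1.10:8848#true"));        // udp: a V1 ip-port client id
        System.out.println(route("1677000000000_127.0.0.1_53210")); // rpc: a gRPC-style connection id
    }
}
```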
The source code analyzed in this article is version 2.2.0
, so pushes naturally use the rpc version. As for why udp
was abandoned, you can read the citations in the first article, 1. Nacos service registration client source code analysis ; I will not repeat them here.
We continue along this train of thought to com.alibaba.nacos.naming.push.v2.executor.PushExecutorRpcImpl#doPushWithCallback
@Override
public void doPushWithCallback(String clientId, Subscriber subscriber, PushDataWrapper data,
        NamingPushCallback callBack) {
    // Get the service info
    ServiceInfo actualServiceInfo = getServiceInfo(data, subscriber);
    callBack.setActualServiceInfo(actualServiceInfo);
    // Build a NotifySubscriberRequest and send it to the client over grpc
    pushService.pushWithCallback(clientId, NotifySubscriberRequest.buildNotifySubscriberRequest(actualServiceInfo),
            callBack, GlobalExecutor.getCallbackExecutor());
}
public void pushWithCallback(String connectionId, ServerRequest request, PushCallBack requestCallBack,
        Executor executor) {
    // Get the client's connection
    Connection connection = connectionManager.getConnection(connectionId);
    if (connection != null) {
        try {
            // Send the request asynchronously
            connection.asyncRequest(request, new AbstractRequestCallBack(requestCallBack.getTimeout()) {

                @Override
                public Executor getExecutor() {
                    return executor;
                }

                @Override
                public void onResponse(Response response) {
                    if (response.isSuccess()) {
                        requestCallBack.onSuccess();
                    } else {
                        requestCallBack.onFail(new NacosException(response.getErrorCode(), response.getMessage()));
                    }
                }

                @Override
                public void onException(Throwable e) {
                    requestCallBack.onFail(e);
                }
            });
        } catch (ConnectionAlreadyClosedException e) {
            connectionManager.unregister(connectionId);
            requestCallBack.onSuccess();
        } catch (Exception e) {
            Loggers.REMOTE_DIGEST
                    .error("error to send push response to connectionId ={},push response={}", connectionId,
                            request, e);
            requestCallBack.onFail(e);
        }
    } else {
        requestCallBack.onSuccess();
    }
}
A NotifySubscriberRequest
is built here. Recall the code analyzed earlier: each Request
represents one type of network request, and there must be a corresponding Handler
to process it. Here the handler is com.alibaba.nacos.client.naming.remote.gprc.NamingPushRequestHandler
.
Note that this class lives in the client
package. Thanks to grpc
's bidirectional stream, the server's request can be received and processed directly by the client. Let's take a brief look at the client's handling.
public class NamingPushRequestHandler implements ServerRequestHandler {

    private final ServiceInfoHolder serviceInfoHolder;

    public NamingPushRequestHandler(ServiceInfoHolder serviceInfoHolder) {
        this.serviceInfoHolder = serviceInfoHolder;
    }

    @Override
    public Response requestReply(Request request) {
        if (request instanceof NotifySubscriberRequest) {
            NotifySubscriberRequest notifyRequest = (NotifySubscriberRequest) request;
            // Processing logic
            serviceInfoHolder.processServiceInfo(notifyRequest.getServiceInfo());
            return new NotifySubscriberResponse();
        }
        return null;
    }
}
public ServiceInfo processServiceInfo(ServiceInfo serviceInfo) {
    String serviceKey = serviceInfo.getKey();
    if (serviceKey == null) {
        return null;
    }
    // Get the old service info
    ServiceInfo oldService = serviceInfoMap.get(serviceInfo.getKey());
    if (isEmptyOrErrorPush(serviceInfo)) {
        //empty or error push, just ignore
        return oldService;
    }
    // Put it into the client-side cache
    serviceInfoMap.put(serviceInfo.getKey(), serviceInfo);
    // Compare to see whether anything changed
    boolean changed = isChangedServiceInfo(oldService, serviceInfo);
    if (StringUtils.isBlank(serviceInfo.getJsonFromServer())) {
        serviceInfo.setJsonFromServer(JacksonUtils.toJson(serviceInfo));
    }
    MetricsMonitor.getServiceInfoMapSizeMonitor().set(serviceInfoMap.size());
    if (changed) {
        NAMING_LOGGER.info("current ips:({}) service: {} -> {}", serviceInfo.ipCount(), serviceInfo.getKey(),
                JacksonUtils.toJson(serviceInfo.getHosts()));
        // If changed, publish a change event
        NotifyCenter.publishEvent(new InstancesChangeEvent(notifierEventScope, serviceInfo.getName(), serviceInfo.getGroupName(),
                serviceInfo.getClusters(), serviceInfo.getHosts()));
        // Also write a copy to the disk cache
        DiskCache.write(serviceInfo, cacheDir);
    }
    return serviceInfo;
}
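The essence of processServiceInfo is: cache the pushed data, diff it against the previous copy, and notify listeners only when something actually changed. Here is a condensed sketch of that flow (simplified types and a plain String payload of my own, not the Nacos ServiceInfo API):

```java
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Condensed sketch of the cache-then-diff-then-notify flow of processServiceInfo.
public class MiniServiceInfoHolder {
    private final Map<String, String> serviceInfoMap = new ConcurrentHashMap<>();
    private final Consumer<String> changeListener;

    public MiniServiceInfoHolder(Consumer<String> changeListener) {
        this.changeListener = changeListener;
    }

    // key = service key, hosts = serialized instance list pushed by the server
    public void process(String key, String hosts) {
        String old = serviceInfoMap.put(key, hosts);    // update the local cache
        boolean changed = !Objects.equals(old, hosts);  // diff against the old copy
        if (changed) {
            changeListener.accept(key);                 // analogous to publishing InstancesChangeEvent
        }
    }
}
```

An identical repeated push updates the cache but fires no event, which is why clients are not flooded with spurious change notifications.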
Summary
This article does not contain much content; it mainly wraps up the previous one. In the next article I will go over the registration server again as a whole. After all, the explanation was split across several chapters, so the knowledge points are relatively fragmented. You can click the link below to view it directly.
7. Nacos service registration server source code analysis (summary)