[Learning Source Code Together - Microservices] Netflix Eureka Source Code, Part 8: The exquisite design of EurekaClient registry fetching, analyzed!

Foreword

Recap of the previous post

In the previous post we used a unit test to walk through how EurekaClient registers with the server and how the server processes that request. The most important thing to keep in mind is the data structure of the registry: ConcurrentHashMap<String, Map<String, Lease<InstanceInfo>>>()
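To make that structure concrete, here is a tiny standalone sketch (my own illustration, not Eureka's actual classes; the placeholder record types and the register helper are invented for this example) showing how the nested map is organized: app name as the outer key, instance id as the inner key, and a lease wrapping the instance metadata as the value.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Placeholder types standing in for Eureka's InstanceInfo and Lease (names only, not the real classes).
record InstanceInfo(String appName, String instanceId, String status) {}
record Lease<T>(T holder) {}

class RegistrySketch {
    // appName -> (instanceId -> lease wrapping the instance metadata)
    private final ConcurrentHashMap<String, Map<String, Lease<InstanceInfo>>> registry =
            new ConcurrentHashMap<>();

    void register(InstanceInfo info) {
        registry.computeIfAbsent(info.appName(), k -> new ConcurrentHashMap<>())
                .put(info.instanceId(), new Lease<>(info));
    }

    public static void main(String[] args) {
        RegistrySketch sketch = new RegistrySketch();
        sketch.register(new InstanceInfo("ServiceA", "instance-1", "UP"));
        sketch.register(new InstanceInfo("ServiceA", "instance-2", "UP"));
        System.out.println(sketch.registry); // {ServiceA={instance-1=..., instance-2=...}}
    }
}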

Contents of this post

Looking back at my earlier posts, I never gave a table of contents; I just dove straight into the source code analysis. Starting with this article I will add a directory up front and finish with a summary of the takeaways from reading the source. I hope this series of posts keeps getting better.

Contents are as follows:

  1. Client-side logic for the first full registry fetch
  2. Server-side multi-level caching mechanism for serving registry information
  3. Server-side registry multi-level cache expiration mechanisms: active + timed + passive
  4. Client-side logic for incremental registry fetches

Technical highlights:

  1. The multi-level caching mechanism used for registry fetches
  2. Incremental fetches return the hashCode of the full registry; comparing it with the hashCode of the local data guarantees data consistency

One more bit of rambling before we start. When I went through the EurekaClient registration logic I had plenty to complain about, but reading the registry fetch logic today I could not help admiring the subtlety of the design. What I mean is the exquisite design on the EurekaServer side: the registry read logic, the caching logic, and the hashCode consistency check used for incremental fetches. It really is wonderful and I feel I learned a lot; I was genuinely excited after finishing this code early in the morning. Let's take a look.

Explanation

Writing original content is not easy; if you reprint this article, please credit the source: 一枝花算不算浪漫.

EurekaClient full registry fetch logic

I have been thinking about how best to express my understanding after reading the code, so this time I will try a new format: first a diagram, then the source code, then the interpretation.

04_EurekaClient full registry fetch logic.png

The diagram looks very simple: the client sends an HTTP request to the server, and the server returns the full registry to the client. Next we will follow the code step by step; for now just keep this general picture in mind.

Source code analysis

  1. The client sends the request to fetch the full registry
@Inject
DiscoveryClient(ApplicationInfoManager applicationInfoManager, EurekaClientConfig config, AbstractDiscoveryClientOptionalArgs args,
                Provider<BackupRegistry> backupRegistryProvider, EndpointRandomizer endpointRandomizer) {

    // ... a lot of irrelevant code omitted ...

    if (clientConfig.shouldFetchRegistry() && !fetchRegistry(false)) {
        fetchRegistryFromBackup();
    }

}

private boolean fetchRegistry(boolean forceFullRegistryFetch) {
    Stopwatch tracer = FETCH_REGISTRY_TIMER.start();

    try {
        // If the delta is disabled or if it is the first time, get all
        // applications
        Applications applications = getApplications();

        if (clientConfig.shouldDisableDelta()
                || (!Strings.isNullOrEmpty(clientConfig.getRegistryRefreshSingleVipAddress()))
                || forceFullRegistryFetch
                || (applications == null)
                || (applications.getRegisteredApplications().size() == 0)
                || (applications.getVersion() == -1)) //Client application does not have latest library supporting delta
        {
            logger.info("Disable delta property : {}", clientConfig.shouldDisableDelta());
            logger.info("Single vip registry refresh property : {}", clientConfig.getRegistryRefreshSingleVipAddress());
            logger.info("Force full registry fetch : {}", forceFullRegistryFetch);
            logger.info("Application is null : {}", (applications == null));
            logger.info("Registered Applications size is zero : {}",
                    (applications.getRegisteredApplications().size() == 0));
            logger.info("Application version is -1: {}", (applications.getVersion() == -1));
            getAndStoreFullRegistry();
        } else {
            getAndUpdateDelta(applications);
        }
        applications.setAppsHashCode(applications.getReconcileHashCode());
        logTotalInstances();
    } catch (Throwable e) {
        logger.error(PREFIX + "{} - was unable to refresh its cache! status = {}", appPathIdentifier, e.getMessage(), e);
        return false;
    } finally {
        if (tracer != null) {
            tracer.stop();
        }
    }

    // ... some code trimmed ...

    // registry was fetched successfully, so return true
    return true;
}

private void getAndStoreFullRegistry() throws Throwable {
    long currentUpdateGeneration = fetchRegistryGeneration.get();

    logger.info("Getting all instance registry info from the eureka server");

    Applications apps = null;
    EurekaHttpResponse<Applications> httpResponse = clientConfig.getRegistryRefreshSingleVipAddress() == null
            ? eurekaTransport.queryClient.getApplications(remoteRegionsRef.get())
            : eurekaTransport.queryClient.getVip(clientConfig.getRegistryRefreshSingleVipAddress(), remoteRegionsRef.get());
    if (httpResponse.getStatusCode() == Status.OK.getStatusCode()) {
        apps = httpResponse.getEntity();
    }
    logger.info("The response status is {}", httpResponse.getStatusCode());

    if (apps == null) {
        logger.error("The application is null for some reason. Not storing this information");
    } else if (fetchRegistryGeneration.compareAndSet(currentUpdateGeneration, currentUpdateGeneration + 1)) {
        localRegionApps.set(this.filterAndShuffle(apps));
        logger.debug("Got full registry with apps hashcode {}", apps.getAppsHashCode());
    } else {
        logger.warn("Not updating applications as another thread is updating it already");
    }
}

I won't walk through how the client assembles and sends the request step by step here, because the earlier unit-test post already made it clear that the class accepting the request on the server side is ApplicationsResource.java; the core client-side code is all in DiscoveryClient.java.

The code above is the same code we have read many times before, with a lot of content omitted so that only the parts we need to analyze are shown.
clientConfig.shouldFetchRegistry() defaults to true, so fetchRegistry is called; since this is the first fetch and the full registry is needed, it goes into getAndStoreFullRegistry(). Let's continue.

In the getAndStoreFullRegistry method you can see that an HTTP request is sent to the server, and the client then waits for the server to return the full registry.

The full fetch itself is performed by eurekaTransport.queryClient.getApplications(remoteRegionsRef.get()).

Tracking that call all the way down leads to the getApplicationsInternal method in AbstractJersey2EurekaHttpClient.java, which sends a GET request. On the server side this request is handled by the GET method getContainers in ApplicationsResource.java; let's look at that logic.
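As a side note, the full-registry fetch is ultimately just an HTTP GET against the server's applications resource. Below is a minimal, hand-rolled sketch (not Eureka code) using java.net.http that shows roughly what goes over the wire; the base URL and the /eureka/apps path are assumptions for a locally running Spring Cloud Eureka server (the raw Eureka REST endpoint is /eureka/v2/apps).

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FullFetchSketch {
    public static void main(String[] args) throws Exception {
        // Assumed local server address; adjust host, port and path to your setup.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8761/eureka/apps"))
                .header("Accept", "application/json")   // request JSON (the server can also return XML)
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());            // the full registry, wrapped in an "applications" document
    }
}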

Server-side multi-level caching mechanism for serving registry information

We have just seen the client-side logic that sends the full registry fetch; now let's go to the server side and look at the GET method getContainers in ApplicationsResource.java. Here is that part of the source:

private final ResponseCache responseCache;

@GET
public Response getContainers(@PathParam("version") String version,
                              @HeaderParam(HEADER_ACCEPT) String acceptHeader,
                              @HeaderParam(HEADER_ACCEPT_ENCODING) String acceptEncoding,
                              @HeaderParam(EurekaAccept.HTTP_X_EUREKA_ACCEPT) String eurekaAccept,
                              @Context UriInfo uriInfo,
                              @Nullable @QueryParam("regions") String regionsStr) {

    // ... some code omitted ...

    Key cacheKey = new Key(Key.EntityType.Application,
            ResponseCacheImpl.ALL_APPS,
            keyType, CurrentRequestVersion.get(), EurekaAccept.fromString(eurekaAccept), regions
    );

    Response response;
    if (acceptEncoding != null && acceptEncoding.contains(HEADER_GZIP_VALUE)) {
        response = Response.ok(responseCache.getGZIP(cacheKey))
                .header(HEADER_CONTENT_ENCODING, HEADER_GZIP_VALUE)
                .header(HEADER_CONTENT_TYPE, returnMediaType)
                .build();
    } else {
        response = Response.ok(responseCache.get(cacheKey))
                .build();
    }
    CurrentRequestVersion.remove();
    return response;
}

After the server receives the client's request, it goes into responseCache to fetch the full data.
As the field name suggests, the data is obtained from a cache.

ResponseCacheImpl.java

String get(final Key key, boolean useReadOnlyCache) {
    Value payload = getValue(key, useReadOnlyCache);
    if (payload == null || payload.getPayload().equals(EMPTY_PAYLOAD)) {
        return null;
    } else {
        return payload.getPayload();
    }
}

Value getValue(final Key key, boolean useReadOnlyCache) {
    Value payload = null;
    try {
        if (useReadOnlyCache) {
            final Value currentPayload = readOnlyCacheMap.get(key);
            if (currentPayload != null) {
                payload = currentPayload;
            } else {
                payload = readWriteCacheMap.get(key);
                readOnlyCacheMap.put(key, payload);
            }
        } else {
            payload = readWriteCacheMap.get(key);
        }
    } catch (Throwable t) {
        logger.error("Cannot get value for key : {}", key, t);
    }
    return payload;
}

The method to focus on here is getValue. There are two maps: readOnlyCacheMap and readWriteCacheMap. As the names suggest, one is a read-only cache and the other a read-write cache, so the cache has two layers: if the entry is present in the read-only cache it is returned directly; if not, the read-write cache is queried and the result is put back into the read-only cache.
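As a standalone illustration of that fall-through pattern (my own sketch, not Eureka's code), a read first tries the read-only map and, on a miss, loads the value through the read-write layer and back-fills the read-only map:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal two-level read-through cache sketch: level 1 is a read-only view for request
// threads, level 2 is the authoritative layer that can load missing values.
public class TwoLevelCacheSketch<K, V> {
    private final Map<K, V> readOnly = new ConcurrentHashMap<>();
    private final Map<K, V> readWrite = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    public TwoLevelCacheSketch(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key, boolean useReadOnly) {
        if (useReadOnly) {
            V value = readOnly.get(key);
            if (value != null) {
                return value;                           // level-1 hit
            }
            value = readWrite.computeIfAbsent(key, loader);
            readOnly.put(key, value);                   // back-fill level 1 on a miss
            return value;
        }
        return readWrite.computeIfAbsent(key, loader);  // bypass the read-only layer
    }
}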

We will explain these caches in more detail; keep reading.

Server-side registry multi-level cache expiration mechanisms: active + timed + passive

Continuing with the cache code: since a multi-level cache is used here, you might have two questions:

  1. How are the two caches kept in sync?
  2. How does cached data expire?

With these questions in mind, let's continue reading the source code.

private final ConcurrentMap<Key, Value> readOnlyCacheMap = new ConcurrentHashMap<Key, Value>();
private final LoadingCache<Key, Value> readWriteCacheMap;

ResponseCacheImpl(EurekaServerConfig serverConfig, ServerCodecs serverCodecs, AbstractInstanceRegistry registry) {
    // ... some code omitted ...

    long responseCacheUpdateIntervalMs = serverConfig.getResponseCacheUpdateIntervalMs();
    this.readWriteCacheMap =
            CacheBuilder.newBuilder().initialCapacity(serverConfig.getInitialCapacityOfResponseCache())
                    .expireAfterWrite(serverConfig.getResponseCacheAutoExpirationInSeconds(), TimeUnit.SECONDS)
                    .removalListener(new RemovalListener<Key, Value>() {
                        @Override
                        public void onRemoval(RemovalNotification<Key, Value> notification) {
                            Key removedKey = notification.getKey();
                            if (removedKey.hasRegions()) {
                                Key cloneWithNoRegions = removedKey.cloneWithoutRegions();
                                regionSpecificKeys.remove(cloneWithNoRegions, removedKey);
                            }
                        }
                    })
                    .build(new CacheLoader<Key, Value>() {
                        @Override
                        public Value load(Key key) throws Exception {
                            if (key.hasRegions()) {
                                Key cloneWithNoRegions = key.cloneWithoutRegions();
                                regionSpecificKeys.put(cloneWithNoRegions, key);
                            }
                            Value value = generatePayload(key);
                            return value;
                        }
                    });

    // ... some code omitted ...

}
  1. readOnlyCacheMap is a ConcurrentHashMap, which is thread-safe.
    readWriteCacheMap is a Guava LoadingCache. If you are not familiar with it, Guava Cache is an in-memory cache from Google's open-source Guava project with a Map-like structure internally; I have explained it in a previous blog post.

  2. The main thing to look at is this Guava cache. The initial capacity, serverConfig.getInitialCapacityOfResponseCache(), defaults to 1000, which is also the initial size of the underlying map.
    expireAfterWrite sets the expiration time after a write; serverConfig.getResponseCacheAutoExpirationInSeconds() defaults to 180 seconds.
    Finally comes the build method with a CacheLoader: when a key is missing from readWriteCacheMap, the loader's load method is invoked, which calls generatePayload to build the registry payload. (A small standalone sketch of this loading behavior follows right after this list.)
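To make the loading-cache semantics concrete, here is a tiny standalone Guava sketch (my own example, not Eureka code): entries expire 180 seconds after being written, and a missing key triggers the loader, analogous to how a readWriteCacheMap miss triggers generatePayload.

import java.util.concurrent.TimeUnit;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class LoadingCacheSketch {
    public static void main(String[] args) throws Exception {
        LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .initialCapacity(1000)                       // same default initial capacity as the response cache
                .expireAfterWrite(180, TimeUnit.SECONDS)     // entries auto-expire 180s after being written
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        // stands in for generatePayload(key): compute the value on a cache miss
                        return "payload-for-" + key;
                    }
                });

        // The first get() misses and invokes load(); later gets within 180s hit the cache.
        System.out.println(cache.get("ALL_APPS"));
    }
}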

Let's continue into the generatePayload method:

private Value generatePayload(Key key) {
    Stopwatch tracer = null;
    try {
        String payload;
        switch (key.getEntityType()) {
            case Application:
                boolean isRemoteRegionRequested = key.hasRegions();

                if (ALL_APPS.equals(key.getName())) {
                    if (isRemoteRegionRequested) {
                        tracer = serializeAllAppsWithRemoteRegionTimer.start();
                        payload = getPayLoad(key, registry.getApplicationsFromMultipleRegions(key.getRegions()));
                    } else {
                        tracer = serializeAllAppsTimer.start();
                        payload = getPayLoad(key, registry.getApplications());
                    }
                } else if (ALL_APPS_DELTA.equals(key.getName())) {
                    if (isRemoteRegionRequested) {
                        tracer = serializeDeltaAppsWithRemoteRegionTimer.start();
                        versionDeltaWithRegions.incrementAndGet();
                        versionDeltaWithRegionsLegacy.incrementAndGet();
                        payload = getPayLoad(key,
                                registry.getApplicationDeltasFromMultipleRegions(key.getRegions()));
                    } else {
                        tracer = serializeDeltaAppsTimer.start();
                        versionDelta.incrementAndGet();
                        versionDeltaLegacy.incrementAndGet();
                        payload = getPayLoad(key, registry.getApplicationDeltas());
                    }
                }
                break;
        }
        return new Value(payload);
    } finally {
        if (tracer != null) {
            tracer.stop();
        }
    }
}

Part of this code has been trimmed. Incremental registry fetches also go through this logic: ALL_APPS means a full fetch and ALL_APPS_DELTA means an incremental fetch. Make a mental note of this; we will come back to the incremental fetch logic later.

In the logic above, we only need to pay attention to registry.getApplicationsFromMultipleRegions, which is where the registry is actually read. Let's continue down the call chain:

AbstractInstanceRegistry.java

public Applications getApplicationsFromMultipleRegions(String[] remoteRegions) {
    // whether applications from remote regions should also be included (used further below)
    boolean includeRemoteRegion = null != remoteRegions && remoteRegions.length != 0;

    Applications apps = new Applications();
    apps.setVersion(1L);
    for (Entry<String, Map<String, Lease<InstanceInfo>>> entry : registry.entrySet()) {
        Application app = null;

        if (entry.getValue() != null) {
            for (Entry<String, Lease<InstanceInfo>> stringLeaseEntry : entry.getValue().entrySet()) {
                Lease<InstanceInfo> lease = stringLeaseEntry.getValue();
                if (app == null) {
                    app = new Application(lease.getHolder().getAppName());
                }
                app.addInstance(decorateInstanceInfo(lease));
            }
        }
        if (app != null) {
            apps.addApplication(app);
        }
    }
    if (includeRemoteRegion) {
        for (String remoteRegion : remoteRegions) {
            RemoteRegionRegistry remoteRegistry = regionNameVSRemoteRegistry.get(remoteRegion);
            if (null != remoteRegistry) {
                Applications remoteApps = remoteRegistry.getApplications();
                for (Application application : remoteApps.getRegisteredApplications()) {
                    if (shouldFetchFromRemoteRegistry(application.getName(), remoteRegion)) {
                        logger.info("Application {}  fetched from the remote region {}",
                                application.getName(), remoteRegion);

                        Application appInstanceTillNow = apps.getRegisteredApplications(application.getName());
                        if (appInstanceTillNow == null) {
                            appInstanceTillNow = new Application(application.getName());
                            apps.addApplication(appInstanceTillNow);
                        }
                        for (InstanceInfo instanceInfo : application.getInstances()) {
                            appInstanceTillNow.addInstance(instanceInfo);
                        }
                    } else {
                        logger.debug("Application {} not fetched from the remote region {} as there exists a "
                                        + "whitelist and this app is not in the whitelist.",
                                application.getName(), remoteRegion);
                    }
                }
            } else {
                logger.warn("No remote registry available for the remote region {}", remoteRegion);
            }
        }
    }
    apps.setAppsHashCode(apps.getReconcileHashCode());
    return apps;
}

Doesn't registry.entrySet() look familiar? It is the Map<String, Map<String, Lease<InstanceInfo>>> we covered in the previous post, where the client's registration information is stored. Sure enough, all registration information is read from it here and then wrapped into an Applications object.

At the end, apps.setAppsHashCode(...) is called; remember this, since it is closely tied to the incremental synchronization logic we will discuss later. Now let's go back and look at what happens to the data returned into readWriteCacheMap.

if (shouldUseReadOnlyResponseCache) {
    timer.schedule(getCacheUpdateTask(),
            new Date(((System.currentTimeMillis() / responseCacheUpdateIntervalMs) * responseCacheUpdateIntervalMs)
                    + responseCacheUpdateIntervalMs),
            responseCacheUpdateIntervalMs);
}

private TimerTask getCacheUpdateTask() {
    return new TimerTask() {
        @Override
        public void run() {
            logger.debug("Updating the client cache from response cache");
            for (Key key : readOnlyCacheMap.keySet()) {
                if (logger.isDebugEnabled()) {
                    logger.debug("Updating the client cache from response cache for key : {} {} {} {}",
                            key.getEntityType(), key.getName(), key.getVersion(), key.getType());
                }
                try {
                    CurrentRequestVersion.set(key.getVersion());
                    Value cacheValue = readWriteCacheMap.get(key);
                    Value currentCacheValue = readOnlyCacheMap.get(key);
                    if (cacheValue != currentCacheValue) {
                        readOnlyCacheMap.put(key, cacheValue);
                    }
                } catch (Throwable th) {
                    logger.error("Error while updating the client cache from response cache for key {}", key.toStringCompact(), th);
                } finally {
                    CurrentRequestVersion.remove();
                }
            }
        }
    };
}

A scheduled task is started here. It periodically compares the first-level (read-only) cache with the second-level (read-write) cache and, if they differ, overwrites the read-only cache with the value from the read-write cache. This answers the first question above about keeping the two caches consistent: the task runs every 30 seconds by default, so the two caches may be inconsistent for up to 30 seconds. This is the idea of eventual consistency.


The data read from the read-write cache is then written back into the read-only cache, which is the ResponseCacheImpl.java logic shown above. At this point we have read through all the code for the full registry fetch; the main highlight is the two-level caching strategy used to serve the response data.

Next, let's sort out the expiration mechanisms, which also answers the second question raised above.

First, a picture to summarize:

05_EurekaServer多节缓存过期机制.png

  1. Active expiration
    readWriteCacheMap, the read-write cache (see the sketch after this list).

    When a new service instance registers, goes offline, or fails, readWriteCacheMap is refreshed (when a client registers, the register method in AbstractInstanceRegistry ends with a call to invalidateCache()).

    For example, say service A (ServiceA) has a new instance, Instance010, being registered. After the registration completes, the cache has to be refreshed, so ResponseCache.invalidate() is called and the previously cached entry for the ALL_APPS cache key is expired.

    In other words, the entry corresponding to the ALL_APPS cache key in readWriteCacheMap is expired and removed.

  2. Timed expiration

    When readWriteCacheMap is built, an automatic expiration time is specified, 180 seconds by default. So once a piece of data is written into readWriteCacheMap, it is automatically expired 180 seconds later.

  3. Passive expiration

    How does readOnlyCacheMap expire?
    By default, a scheduled TimerTask runs every 30 seconds. It compares the data in readOnlyCacheMap with the data in readWriteCacheMap; if the two differ, the data from readWriteCacheMap is copied into readOnlyCacheMap.

    For example, if the cache entry for the ALL_APPS key disappears from readWriteCacheMap, then after at most 30 seconds that change is synchronized into readOnlyCacheMap.
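To make the active-expiration path concrete, here is a simplified sketch (my own illustration with hypothetical names, not Eureka's actual classes): a registration immediately invalidates the full-registry key in the read-write cache, so the next read recomputes the payload.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Simplified illustration of active cache invalidation on registration.
public class ActiveExpireSketch {
    static final String ALL_APPS = "ALL_APPS";

    private final Map<String, String> readWriteCache = new ConcurrentHashMap<>();
    private final Function<String, String> loader;    // stands in for generatePayload(key)

    ActiveExpireSketch(Function<String, String> loader) {
        this.loader = loader;
    }

    void register(String appName, String instanceId) {
        // ... store the instance in the registry map (omitted) ...
        invalidate(ALL_APPS);                          // analogous to invalidateCache() at the end of register()
    }

    void invalidate(String key) {
        readWriteCache.remove(key);                    // throw the stale payload away
    }

    String get(String key) {
        return readWriteCache.computeIfAbsent(key, loader); // recomputed on the next read
    }

    public static void main(String[] args) {
        ActiveExpireSketch cache = new ActiveExpireSketch(k -> "payload@" + System.nanoTime());
        System.out.println(cache.get(ALL_APPS));       // computed and cached
        cache.register("ServiceA", "Instance010");     // invalidates ALL_APPS
        System.out.println(cache.get(ALL_APPS));       // recomputed, now reflecting the new instance
    }
}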

Client-side incremental registry fetch logic

The code for the full registry fetch has been covered above, so now let's talk about incremental fetching. The entry point is still DiscoveryClient.java: after DiscoveryClient finishes initializing, it runs initScheduledTasks() to set up scheduled tasks, one of which performs an incremental registry fetch every 30 seconds.

I won't follow that logic step by step here; after reading the code above it should already be fairly clear, so let's jump straight to the server-side code.

Remember the mental note we made earlier: the full fetch uses ALL_APPS and the incremental fetch uses ALL_APPS_DELTA, so here we only need to look at the incremental logic.

else if (ALL_APPS_DELTA.equals(key.getName())) {
    if (isRemoteRegionRequested) {
        tracer = serializeDeltaAppsWithRemoteRegionTimer.start();
        versionDeltaWithRegions.incrementAndGet();
        versionDeltaWithRegionsLegacy.incrementAndGet();
        payload = getPayLoad(key,
                registry.getApplicationDeltasFromMultipleRegions(key.getRegions()));
    } else {
        tracer = serializeDeltaAppsTimer.start();
        versionDelta.incrementAndGet();
        versionDeltaLegacy.incrementAndGet();
        payload = getPayLoad(key, registry.getApplicationDeltas());
    }
}

The snippet above is only an excerpt; the main logic to look at is registry.getApplicationDeltasFromMultipleRegions, whose name differs from the full-fetch method only by the word Deltas.

public Applications getApplicationDeltasFromMultipleRegions(String[] remoteRegions) {
    if (null == remoteRegions) {
        remoteRegions = allKnownRemoteRegions; // null means all remote regions.
    }

    boolean includeRemoteRegion = remoteRegions.length != 0;

    if (includeRemoteRegion) {
        GET_ALL_WITH_REMOTE_REGIONS_CACHE_MISS_DELTA.increment();
    } else {
        GET_ALL_CACHE_MISS_DELTA.increment();
    }

    Applications apps = new Applications();
    apps.setVersion(responseCache.getVersionDeltaWithRegions().get());
    Map<String, Application> applicationInstancesMap = new HashMap<String, Application>();
    try {
        write.lock();
        Iterator<RecentlyChangedItem> iter = this.recentlyChangedQueue.iterator();
        logger.debug("The number of elements in the delta queue is :{}", this.recentlyChangedQueue.size());
        while (iter.hasNext()) {
            Lease<InstanceInfo> lease = iter.next().getLeaseInfo();
            InstanceInfo instanceInfo = lease.getHolder();
            logger.debug("The instance id {} is found with status {} and actiontype {}",
                    instanceInfo.getId(), instanceInfo.getStatus().name(), instanceInfo.getActionType().name());
            Application app = applicationInstancesMap.get(instanceInfo.getAppName());
            if (app == null) {
                app = new Application(instanceInfo.getAppName());
                applicationInstancesMap.put(instanceInfo.getAppName(), app);
                apps.addApplication(app);
            }
            app.addInstance(new InstanceInfo(decorateInstanceInfo(lease)));
        }

        if (includeRemoteRegion) {
            for (String remoteRegion : remoteRegions) {
                RemoteRegionRegistry remoteRegistry = regionNameVSRemoteRegistry.get(remoteRegion);
                if (null != remoteRegistry) {
                    Applications remoteAppsDelta = remoteRegistry.getApplicationDeltas();
                    if (null != remoteAppsDelta) {
                        for (Application application : remoteAppsDelta.getRegisteredApplications()) {
                            if (shouldFetchFromRemoteRegistry(application.getName(), remoteRegion)) {
                                Application appInstanceTillNow =
                                        apps.getRegisteredApplications(application.getName());
                                if (appInstanceTillNow == null) {
                                    appInstanceTillNow = new Application(application.getName());
                                    apps.addApplication(appInstanceTillNow);
                                }
                                for (InstanceInfo instanceInfo : application.getInstances()) {
                                    appInstanceTillNow.addInstance(new InstanceInfo(instanceInfo));
                                }
                            }
                        }
                    }
                }
            }
        }

        Applications allApps = getApplicationsFromMultipleRegions(remoteRegions);
        apps.setAppsHashCode(allApps.getReconcileHashCode());
        return apps;
    } finally {
        write.unlock();
    }
}

There is quite a lot of code here, but we only need to grasp the key points:

  1. The registration information is read from recentlyChangedQueue; as the name implies, this is a queue of recently changed client registration information.
  2. A write lock is used, because the incremental registration information is read from this queue; without the write lock, new data could be added to the queue while we are reading it, and that new data would be missed.

Based on the first point, let's see how this queue works:

  1. Data structure: ConcurrentLinkedQueue<RecentlyChangedItem> recentlyChangedQueue
  2. When AbstractInstanceRegistry.java is initialized, it starts a scheduled task that runs every 30 seconds by default and evicts entries whose last update time is more than 180 seconds in the past, so the queue only holds recent changes.

The specific code in AbstractInstanceRegistry.java is as follows:

protected AbstractInstanceRegistry(EurekaServerConfig serverConfig, EurekaClientConfig clientConfig, ServerCodecs serverCodecs) {
    this.serverConfig = serverConfig;
    this.clientConfig = clientConfig;
    this.serverCodecs = serverCodecs;
    this.recentCanceledQueue = new CircularQueue<Pair<Long, String>>(1000);
    this.recentRegisteredQueue = new CircularQueue<Pair<Long, String>>(1000);

    this.renewsLastMin = new MeasuredRate(1000 * 60 * 1);

    this.deltaRetentionTimer.schedule(getDeltaRetentionTask(),
            serverConfig.getDeltaRetentionTimerIntervalInMs(),
            serverConfig.getDeltaRetentionTimerIntervalInMs());
}

private TimerTask getDeltaRetentionTask() {
    return new TimerTask() {

        @Override
        public void run() {
            Iterator<RecentlyChangedItem> it = recentlyChangedQueue.iterator();
            while (it.hasNext()) {
                if (it.next().getLastUpdateTime() <
                        System.currentTimeMillis() - serverConfig.getRetentionTimeInMSInDeltaQueue()) {
                    it.remove();
                } else {
                    break;
                }
            }
        }

    };
}

From this you can see that an incremental fetch returns the client changes that the EurekaServer has recorded within the last 3 minutes.
Finally, there is one more highlight. As mentioned above, whether the fetch is full or incremental, the hashCode of the full registry is returned at the end; the code is apps.setAppsHashCode(allApps.getReconcileHashCode()), where apps is the Applications object being returned and appsHashCode is one of its properties. Let's look at how this hashCode is used.
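As I understand it, this appsHashCode is a simple string built from instance counts per status, something like UP_3_DOWN_1_; the sketch below shows that idea, but treat the exact format as an assumption and check Applications.getReconcileHashCode for the real implementation.

import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Assumed, simplified reconstruction of how an apps hash code could be derived:
// count instances per status and concatenate "STATUS_count_" pairs in sorted order.
public class AppsHashCodeSketch {
    static String reconcileHashCode(List<String> instanceStatuses) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String status : instanceStatuses) {
            counts.merge(status, 1, Integer::sum);
        }
        StringBuilder hash = new StringBuilder();
        counts.forEach((status, count) -> hash.append(status).append('_').append(count).append('_'));
        return hash.toString();
    }

    public static void main(String[] args) {
        // e.g. three UP instances and one DOWN instance
        System.out.println(reconcileHashCode(List.of("UP", "UP", "DOWN", "UP")));  // -> DOWN_1_UP_3_
    }
}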

Go back to DiscoveryClient.java, find the refreshRegistry method, and trace it down to the getAndUpdateDelta method. I won't paste all of that code here; the process is as follows:

  1. Fetch the incremental (delta) data
  2. Merge the incremental data with the local registry data
  3. Compute the hashCode of the merged local registry information
  4. If the local hashCode does not match the hashCode returned by the server, fetch the full registry once more (see the sketch after this list)
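Here is a minimal sketch of that reconciliation flow (my own pseudocode-style Java with hypothetical helper names such as merge, computeHashCode and replaceWithFullFetch; the real logic lives in DiscoveryClient's getAndUpdateDelta):

import java.util.Objects;

// Hypothetical sketch of the client-side delta reconciliation: merge the delta,
// recompute the local hash, and fall back to a full fetch on mismatch.
public class DeltaReconcileSketch {

    static class Delta {
        String serverAppsHashCode;   // hash of the server's full registry, sent along with the delta
        // ... changed instances omitted ...
    }

    interface Registry {
        void merge(Delta delta);     // step 2: merge the delta into the local registry
        String computeHashCode();    // step 3: recompute the hash of the local registry
        void replaceWithFullFetch(); // step 4: fall back to a full registry fetch
    }

    static void getAndUpdateDelta(Registry local, Delta delta) {
        local.merge(delta);
        String localHash = local.computeHashCode();
        if (!Objects.equals(localHash, delta.serverAppsHashCode)) {
            // local data diverged from the server; re-sync with a full fetch
            local.replaceWithFullFetch();
        }
    }
}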

One last picture summarizing the incremental registry fetch flow:

06_EurekaClient增量抓取注册表流程.png

Summary & insights

This article is a bit long and was genuinely hard to write. I think this multi-level caching mechanism plus the hashCode consistency check for incremental data is excellent work; if I ever implement full-plus-incremental data synchronization myself, I will benefit a lot from this scheme.

Reading source code like this is a way to learn from other people's designs. For a summary, refer to the diagrams above. That's it for the registry fetch source code; next I plan to look at the heartbeat mechanism, the self-preservation mechanism, clusters, and so on.

Finally, a question to think about after reading this source code:

Suppose a service instance registers, goes offline, or fails. Other services that call it may only perceive the change after up to 30 seconds. Why? Because when those services fetch the registry, the multi-level caching mechanism means the cache is only updated every 30 seconds at most.

Declaration

This article was first published on my blog: https://www.cnblogs.com/wang-meng and on my WeChat public account: 一枝花算不算浪漫. If you reprint it, please credit the source!

If you are interested, feel free to follow my personal WeChat public account: 一枝花算不算浪漫.

