Netflix Eureka Source Code Analysis (14): How the Eureka Client Fetches the Registry Delta Every 30 Seconds (Consistency Hash Comparison Mechanism)

Copyright notice: This is an original article by the author and may not be reproduced without permission. https://blog.csdn.net/A_Story_Donkey/article/details/82908741

When the eureka client starts up, it fetches the full registry once; that mechanism was already walked through in the source code earlier in this series. At startup it also starts a scheduled thread that, every 30 seconds, sends a request to the eureka server to fetch the incremental (delta) registry.

What does "incremental registry" mean? It means fetching only the part that has changed since the last fetch; there is no need to pull the full registry every time.

(1) A scheduled task that runs every 30 seconds

@Singleton
public class DiscoveryClient implements EurekaClient {
    
    /**
     * Initializes all scheduled tasks.
     */
    private void initScheduledTasks() {
        if (clientConfig.shouldFetchRegistry()) {
            // registry cache refresh timer
            int registryFetchIntervalSeconds = clientConfig.getRegistryFetchIntervalSeconds();
            int expBackOffBound = clientConfig.getCacheRefreshExecutorExponentialBackOffBound();
            scheduler.schedule(
                    new TimedSupervisorTask(
                            "cacheRefresh",
                            scheduler,
                            cacheRefreshExecutor,
                            registryFetchIntervalSeconds,
                            TimeUnit.SECONDS,
                            expBackOffBound,
                            new CacheRefreshThread()
                    ),
                    registryFetchIntervalSeconds, TimeUnit.SECONDS);
        }
    }
    
}

@Singleton
@ProvidedBy(DefaultEurekaClientConfigProvider.class)
public class DefaultEurekaClientConfig implements EurekaClientConfig {
   
    @Override
    public int getRegistryFetchIntervalSeconds() {
        return configInstance.getIntProperty(
                namespace + REGISTRY_REFRESH_INTERVAL_KEY, 30).get();
    }
}

(2) Because a cached Applications already exists locally, subsequent registry fetches take the incremental (delta) fetch path.
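Conceptually, that decision looks something like the minimal sketch below (illustrative only; the LocalRegistry type and the method names are made up here and are not Eureka's actual API): when there is no usable local cache, or delta fetching is disabled, or a full fetch is forced, the client falls back to fetching the complete registry, otherwise it only asks for the delta.

public class RegistryFetchDecision {

    // Stand-in for the locally cached Applications; illustrative only.
    static class LocalRegistry {
        final int size; // number of locally cached applications
        LocalRegistry(int size) { this.size = size; }
        boolean isEmpty() { return size == 0; }
    }

    static String decide(LocalRegistry local, boolean deltaDisabled, boolean forceFull) {
        // No usable local cache, delta disabled, or full fetch forced:
        // fetch the complete registry.
        if (local == null || local.isEmpty() || deltaDisabled || forceFull) {
            return "FULL_FETCH";
        }
        // Otherwise only ask the server for what changed since the last fetch.
        return "DELTA_FETCH";
    }

    public static void main(String[] args) {
        System.out.println(decide(new LocalRegistry(0), false, false)); // FULL_FETCH
        System.out.println(decide(new LocalRegistry(5), false, false)); // DELTA_FETCH
    }
}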

(3) This goes through EurekaHttpClient's getDelta() method, which issues a GET request to http://localhost:8080/v2/apps/delta.

public abstract class AbstractJersey2EurekaHttpClient implements EurekaHttpClient {

    @Override
    public EurekaHttpResponse<Applications> getDelta(String... regions) {
        return getApplicationsInternal("apps/delta", regions);
    }

}

(4) On the eureka server side, the request goes through the same multi-level cache mechanism, only with the cache key ALL_APPS_DELTA. The one real difference is what the readWriteCacheMap loads from the registry: registry.getApplicationDeltasFromMultipleRegions() returns the incremental registry, i.e. the instances that have changed since the last registry fetch.

@Path("/{version}/apps")
@Produces({"application/xml", "application/json"})
public class ApplicationsResource {    

    @Path("delta")
    @GET
    public Response getContainerDifferential(
            @PathParam("version") String version,
            @HeaderParam(HEADER_ACCEPT) String acceptHeader,
            @HeaderParam(HEADER_ACCEPT_ENCODING) String acceptEncoding,
            @HeaderParam(EurekaAccept.HTTP_X_EUREKA_ACCEPT) String eurekaAccept,
            @Context UriInfo uriInfo, @Nullable @QueryParam("regions") String regionsStr) {

        boolean isRemoteRegionRequested = null != regionsStr && !regionsStr.isEmpty();

        // If the delta flag is disabled in discovery or if the lease expiration
        // has been disabled, redirect clients to get all instances
        if ((serverConfig.shouldDisableDelta()) || (!registry.shouldAllowAccess(isRemoteRegionRequested))) {
            return Response.status(Status.FORBIDDEN).build();
        }

        String[] regions = null;
        if (!isRemoteRegionRequested) {
            EurekaMonitors.GET_ALL_DELTA.increment();
        } else {
            regions = regionsStr.toLowerCase().split(",");
            Arrays.sort(regions); // So we don't have different caches for same regions queried in different order.
            EurekaMonitors.GET_ALL_DELTA_WITH_REMOTE_REGIONS.increment();
        }

        CurrentRequestVersion.set(Version.toEnum(version));
        KeyType keyType = Key.KeyType.JSON;
        String returnMediaType = MediaType.APPLICATION_JSON;
        if (acceptHeader == null || !acceptHeader.contains(HEADER_JSON_VALUE)) {
            keyType = Key.KeyType.XML;
            returnMediaType = MediaType.APPLICATION_XML;
        }

        // cacheKey is the cache key used for the delta (incremental) fetch
        Key cacheKey = new Key(Key.EntityType.Application,
                ResponseCacheImpl.ALL_APPS_DELTA,
                keyType, CurrentRequestVersion.get(), EurekaAccept.fromString(eurekaAccept), regions
        );

        if (acceptEncoding != null
                && acceptEncoding.contains(HEADER_GZIP_VALUE)) {
            return Response.ok(responseCache.getGZIP(cacheKey))
                    .header(HEADER_CONTENT_ENCODING, HEADER_GZIP_VALUE)
                    .header(HEADER_CONTENT_TYPE, returnMediaType)
                    .build();
        } else {
            return Response.ok(responseCache.get(cacheKey))
                    .build();
        }
    }
}

public class ResponseCacheImpl implements ResponseCache {

    ResponseCacheImpl(EurekaServerConfig serverConfig, ServerCodecs serverCodecs, AbstractInstanceRegistry registry) {
        this.readWriteCacheMap =
                CacheBuilder.newBuilder().initialCapacity(1000)
                        .expireAfterWrite(serverConfig.getResponseCacheAutoExpirationInSeconds(), TimeUnit.SECONDS)
                        .removalListener(new RemovalListener<Key, Value>() {
                            @Override
                            public void onRemoval(RemovalNotification<Key, Value> notification) {
                                Key removedKey = notification.getKey();
                                if (removedKey.hasRegions()) {
                                    Key cloneWithNoRegions = removedKey.cloneWithoutRegions();
                                    regionSpecificKeys.remove(cloneWithNoRegions, removedKey);
                                }
                            }
                        })
                        .build(new CacheLoader<Key, Value>() {
                            @Override
                            public Value load(Key key) throws Exception {
                                if (key.hasRegions()) {
                                    Key cloneWithNoRegions = key.cloneWithoutRegions();
                                    regionSpecificKeys.put(cloneWithNoRegions, key);
                                }
                                Value value = generatePayload(key);
                                return value;
                            }
                        });
    }


    /*
     * Generate pay load for the given key.
     */
    private Value generatePayload(Key key) {
        Stopwatch tracer = null;
        try {
            String payload;
            switch (key.getEntityType()) {
                case Application:
                    boolean isRemoteRegionRequested = key.hasRegions();

                    if (ALL_APPS.equals(key.getName())) {
                        if (isRemoteRegionRequested) {
                            tracer = serializeAllAppsWithRemoteRegionTimer.start();
                            payload = getPayLoad(key, registry.getApplicationsFromMultipleRegions(key.getRegions()));
                        } else {
                            tracer = serializeAllAppsTimer.start();
                            payload = getPayLoad(key, registry.getApplications());
                        }
                    } else if (ALL_APPS_DELTA.equals(key.getName())) {
                        if (isRemoteRegionRequested) {
                            tracer = serializeDeltaAppsWithRemoteRegionTimer.start();
                            versionDeltaWithRegions.incrementAndGet();
                            versionDeltaWithRegionsLegacy.incrementAndGet();
                            payload = getPayLoad(key,
                                    registry.getApplicationDeltasFromMultipleRegions(key.getRegions()));
                        } else {
                            tracer = serializeDeltaAppsTimer.start();
                            versionDelta.incrementAndGet();
                            versionDeltaLegacy.incrementAndGet();
                            // getApplicationDeltas() returns the instance change records from the last 3 minutes
                            payload = getPayLoad(key, registry.getApplicationDeltas());
                        }
                    } else {
                        tracer = serializeOneApptimer.start();
                        payload = getPayLoad(key, registry.getApplication(key.getName()));
                    }
                    break;
                case VIP:
                case SVIP:
                    tracer = serializeViptimer.start();
                    payload = getPayLoad(key, getApplicationsForVip(key, registry));
                    break;
                default:
                    logger.error("Unidentified entity type: " + key.getEntityType() + " found in the cache key.");
                    payload = "";
                    break;
            }
            return new Value(payload);
        } finally {
            if (tracer != null) {
                tracer.stop();
            }
        }
    }
}

(5) recentlyChangedQueue holds the recently changed service instances, for example instances that newly registered, went offline, and so on. When the registry is constructed, a scheduled task is set up (by default running every 30 seconds) that checks whether each instance change record has stayed in the queue for more than 180 seconds (3 minutes); if it has, the record is removed from the queue. In other words, this queue keeps only the last 3 minutes of instance change records, and that is exactly the delta. A simplified, self-contained sketch of this retention idea follows the code snippets below.

    /**
     * Create a new, empty instance registry.
     */
    protected AbstractInstanceRegistry(EurekaServerConfig serverConfig, EurekaClientConfig clientConfig, ServerCodecs serverCodecs) {
        this.serverConfig = serverConfig;
        this.clientConfig = clientConfig;
        this.serverCodecs = serverCodecs;
        this.recentCanceledQueue = new CircularQueue<Pair<Long, String>>(1000);
        this.recentRegisteredQueue = new CircularQueue<Pair<Long, String>>(1000);

        this.renewsLastMin = new MeasuredRate(1000 * 60 * 1);

        // scheduled task registered when the registry is constructed
        this.deltaRetentionTimer.schedule(getDeltaRetentionTask(),
                serverConfig.getDeltaRetentionTimerIntervalInMs(),
                serverConfig.getDeltaRetentionTimerIntervalInMs());
    }

@Singleton
public class DefaultEurekaServerConfig implements EurekaServerConfig {

    @Override
    public long getDeltaRetentionTimerIntervalInMs() {
        return configInstance.getLongProperty(
                namespace + "deltaRetentionTimerIntervalInMs", (30 * 1000))
                .get();
    }
}
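As a reference for step (5), here is a small, self-contained sketch of the retention idea: change records go into a queue, and a background timer sweeps out anything older than 3 minutes every 30 seconds. The names (RecentChangeQueueSketch, ChangeRecord, the constants) are illustrative and not Eureka's own; the real logic lives in AbstractInstanceRegistry's delta retention task.

import java.util.Iterator;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.ConcurrentLinkedQueue;

public class RecentChangeQueueSketch {

    // One change record; illustrative stand-in for Eureka's RecentlyChangedItem.
    static class ChangeRecord {
        final String instanceId;
        final long changedAtMs;
        ChangeRecord(String instanceId, long changedAtMs) {
            this.instanceId = instanceId;
            this.changedAtMs = changedAtMs;
        }
    }

    static final long RETENTION_MS = 3 * 60 * 1000;   // keep roughly 3 minutes of changes
    static final long SWEEP_INTERVAL_MS = 30 * 1000;  // sweep every 30 seconds

    final ConcurrentLinkedQueue<ChangeRecord> recentlyChanged = new ConcurrentLinkedQueue<>();
    final Timer retentionTimer = new Timer("delta-retention", true);

    void start() {
        retentionTimer.schedule(new TimerTask() {
            @Override
            public void run() {
                long cutoff = System.currentTimeMillis() - RETENTION_MS;
                Iterator<ChangeRecord> it = recentlyChanged.iterator();
                // Records are enqueued in time order, so stop at the first one that is still fresh.
                while (it.hasNext()) {
                    if (it.next().changedAtMs < cutoff) {
                        it.remove();
                    } else {
                        break;
                    }
                }
            }
        }, SWEEP_INTERVAL_MS, SWEEP_INTERVAL_MS);
    }

    // Called whenever an instance registers, deregisters, or otherwise changes.
    void recordChange(String instanceId) {
        recentlyChanged.add(new ChangeRecord(instanceId, System.currentTimeMillis()));
    }
}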

(6) So when the eureka client fetches the registry every 30 seconds, the server returns the service instances that changed within the last 3 minutes.

(7) The fetched delta registry is then merged into the local registry, applying the corresponding add, update, and delete operations on service instances, as sketched below.
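A rough, illustrative sketch of that merge (the DeltaEntry/Action types and the Map-based registry are made up for this example; the real client applies the delta to its cached Applications based on the action type carried by each changed instance):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DeltaMergeSketch {

    enum Action { ADDED, MODIFIED, DELETED }

    // One change record in the delta; illustrative stand-in for an instance plus its action type.
    static class DeltaEntry {
        final String instanceId;
        final String payload; // stand-in for the instance metadata
        final Action action;
        DeltaEntry(String instanceId, String payload, Action action) {
            this.instanceId = instanceId;
            this.payload = payload;
            this.action = action;
        }
    }

    // Apply each change record to the local view of the registry.
    static void merge(Map<String, String> localRegistry, Iterable<DeltaEntry> delta) {
        for (DeltaEntry entry : delta) {
            switch (entry.action) {
                case ADDED:
                case MODIFIED:
                    localRegistry.put(entry.instanceId, entry.payload);
                    break;
                case DELETED:
                    localRegistry.remove(entry.instanceId);
                    break;
            }
        }
    }

    public static void main(String[] args) {
        Map<String, String> local = new HashMap<>();
        local.put("i-1", "old");
        merge(local, List.of(
                new DeltaEntry("i-1", "new", Action.MODIFIED),
                new DeltaEntry("i-2", "fresh", Action.ADDED),
                new DeltaEntry("i-3", "gone", Action.DELETED)));
        System.out.println(local); // i-1 updated, i-2 added, i-3 not present
    }
}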

(8) After the merge, the client computes a hash over the updated local registry. The delta response also carries a hash of the full registry on the eureka server side. The client compares the hash of its merged local registry against the server's full-registry hash; if they differ, the local registry has drifted from the server, and the client re-fetches the full registry from the eureka server and refreshes its local cache with it.
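A minimal sketch of that comparison. Eureka's actual "apps hash code" is derived from instance counts per status (roughly of the form DOWN_1_UP_3_); the code below only illustrates the idea of computing the same deterministic digest on both sides and falling back to a full fetch when they differ.

import java.util.Map;
import java.util.TreeMap;

public class ReconcileHashSketch {

    // Build a deterministic digest from status -> instance count (sorted so both sides agree).
    static String hashOf(Map<String, Integer> statusCounts) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Integer> e : new TreeMap<>(statusCounts).entrySet()) {
            sb.append(e.getKey()).append('_').append(e.getValue()).append('_');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String clientHash = hashOf(Map.of("UP", 3, "DOWN", 1)); // merged local view
        String serverHash = hashOf(Map.of("UP", 4, "DOWN", 1)); // carried in the delta response

        if (!clientHash.equals(serverHash)) {
            // The local view has drifted from the server: fall back to a full registry fetch.
            System.out.println("hash mismatch (" + clientHash + " vs " + serverHash + "), refetching full registry");
        }
    }
}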

 

Summary:

(1) Design approach for incremental data: if you need to keep a copy of only the most recent changes, push each change into a linked queue (Eureka uses a ConcurrentLinkedQueue), and run a background task that periodically removes entries that have stayed in the queue longer than a fixed retention window, so that the queue always holds just the last few minutes of incremental changes.

(2) Hash comparison for data synchronization: when data has to be kept in sync between two places in a distributed system, compute a hash of the data on one side, compute a hash on the other side, and verify that the two hashes match, which confirms nothing was lost or corrupted along the way; a mismatch tells you the two copies have diverged and need to be reconciled.

(3) Flow diagram of the eureka client's incremental registry fetch

 
