1. Self-preservation optimization
Common self-preservation configuration
```yaml
eureka:
  server:
    # Self-preservation switch, enabled by default
    enable-self-preservation: true
    # Self-preservation threshold, default 0.85
    renewal-percent-threshold: 0.85
    # Eviction interval, in milliseconds, default 60s
    eviction-interval-timer-in-ms: 60000
    # Expected client renewal interval, default 30s
    expected-client-renewal-interval-seconds: 30
    # Threshold update interval, in milliseconds, default 15 * 60 * 1000
    renewal-threshold-update-interval-ms: 900000
```
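To make the interaction between these settings concrete, here is a small sketch (hypothetical code, not Eureka's actual source; the method name `renewsPerMinThreshold` is our own) of how an expected renews-per-minute threshold can be derived from `renewal-percent-threshold` and the client renewal interval:

```java
// Illustrative sketch: deriving the self-preservation threshold
// from the client count, renewal interval, and percent threshold.
public class SelfPreservationSketch {

    static int renewsPerMinThreshold(int clientCount,
                                     int renewalIntervalSeconds,
                                     double renewalPercentThreshold) {
        // Each client renews every renewalIntervalSeconds,
        // so it sends 60 / interval heartbeats per minute.
        int expectedRenewsPerMin = clientCount * (60 / renewalIntervalSeconds);
        return (int) (expectedRenewsPerMin * renewalPercentThreshold);
    }

    public static void main(String[] args) {
        // 10 clients renewing every 30s: 20 expected renews/min,
        // threshold = 20 * 0.85 = 17
        System.out.println(renewsPerMinThreshold(10, 30, 0.85));
    }
}
```

With 10 clients the server expects 20 renews per minute and would enter self-preservation once fewer than 17 arrive; losing 3 clients (6 renews) drops the count to 14, below the threshold.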
Self-preservation scenario optimization
Scenario | Number of services | Heartbeats lost | Renewal ratio after loss | Default threshold | Enable self-preservation?
---|---|---|---|---|---
Scenario one | 10 (small) | 3 | 70% | 85% | No
Scenario two | 1000 (large) | 3 | 99.7% | 85% | Yes
Scenario one:
With only 10 services, if three go down and self-preservation is enabled, a large share of requests may still be routed to the three dead instances. That is unacceptable, so self-preservation should be disabled so that broken instances are evicted. With self-preservation off, you can also shorten eviction-interval-timer-in-ms (default 60s) so that clients stop being directed to the disconnected services sooner, which amounts to a fast offline.
Scenario two:
The number of services is large, so losing three is not a big problem. It is therefore recommended to keep self-preservation enabled: if the three services lost their heartbeats because of network jitter, they get a chance to reconnect instead of being evicted.
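For scenario one, the recommendations above translate into configuration along these lines (the 3s eviction interval is an illustrative value, not a prescribed one):

```yaml
eureka:
  server:
    # Small cluster: disable self-preservation so dead instances are evicted
    enable-self-preservation: false
    # Evict faster than the 60s default, e.g. every 3s, for fast offline
    eviction-interval-timer-in-ms: 3000
```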
2. Three-level cache optimization
What is the three-level cache
Eureka Server keeps service registration information in three structures: registry, readWriteCacheMap, and readOnlyCacheMap.
```java
public abstract class AbstractInstanceRegistry implements InstanceRegistry {
    // Level 3: the registry itself, the source of truth for registrations
    private final ConcurrentHashMap<String, Map<String, Lease<InstanceInfo>>> registry
            = new ConcurrentHashMap<String, Map<String, Lease<InstanceInfo>>>();
}
```

```java
public class ResponseCacheImpl implements ResponseCache {
    // Level 2: read-only cache, refreshed from readWriteCacheMap on a timer
    private final ConcurrentMap<Key, Value> readOnlyCacheMap = new ConcurrentHashMap<Key, Value>();
    // Level 1: read-write cache (a Guava LoadingCache)
    private final LoadingCache<Key, Value> readWriteCacheMap;
}
```
Three-level cache workflow
By default, a scheduled task synchronizes readWriteCacheMap to readOnlyCacheMap every 30s, and another task runs every 60s to evict nodes that have not renewed for more than 90s. The Eureka Client pulls service registration information from readOnlyCacheMap every 30s, while service registrations update the registry directly.
Advantages of three-level cache
It minimizes read-write contention on the in-memory registry and ensures that the large volume of requests hitting Eureka Server is served straight from memory, giving very high performance.
Optimization in the production environment
By default a fetch reads from readOnlyCacheMap, and readWriteCacheMap is only synchronized to it every 30s, so the data is not strongly consistent. This is why Eureka implements AP rather than CP.
CAP:
- Consistency: every node sees the same, latest copy of the data.
- Availability: every request receives a (non-error) response, but the data returned is not guaranteed to be the latest.
- Partition tolerance: in practical terms a partition is a communication time limit; if the system cannot reach data consistency within that limit, a partition has occurred, and the current operation must then choose between C and A.
Optimization:
```yaml
eureka:
  server:
    # Stop reading the registry from readOnlyCacheMap; read readWriteCacheMap directly
    use-read-only-response-cache: false
    # Shorten the sync interval between readWriteCacheMap and readOnlyCacheMap
    response-cache-update-interval-ms: 1000
```
use-read-only-response-cache defaults to true, which reads from readOnlyCacheMap; setting it to false reads from readWriteCacheMap instead, so updates become visible sooner.
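A minimal sketch (hypothetical code, not ResponseCacheImpl's real implementation; plain Strings stand in for Eureka's Key/Value types) of the read-path difference this flag controls: with the read-only layer on, a reader can see stale data until the periodic sync runs.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ResponseCacheSketch {
    private final ConcurrentMap<String, String> readWriteCacheMap = new ConcurrentHashMap<>();
    private final ConcurrentMap<String, String> readOnlyCacheMap = new ConcurrentHashMap<>();

    // A registration update lands in the read-write cache first.
    public void update(String key, String value) {
        readWriteCacheMap.put(key, value);
    }

    // In Eureka, this copy runs on a timer every response-cache-update-interval-ms.
    public void syncReadOnlyCache() {
        for (Map.Entry<String, String> e : readWriteCacheMap.entrySet()) {
            readOnlyCacheMap.put(e.getKey(), e.getValue());
        }
    }

    // use-read-only-response-cache=true reads the possibly-stale readOnly layer;
    // false goes straight to readWriteCacheMap.
    public String get(String key, boolean useReadOnlyCache) {
        return useReadOnlyCache ? readOnlyCacheMap.get(key) : readWriteCacheMap.get(key);
    }
}
```

Until syncReadOnlyCache() runs, a read through the read-only layer returns the old value (or nothing), while a direct read sees the update immediately.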
3. Timer optimization
The Eureka source code uses Timer for a large number of scheduled tasks, but Timer has the following defects:
- Timer creates only one thread to execute all of its tasks, so when there are multiple tasks they run serially.
- Because there is only that one thread, if a TimerTask throws an uncaught exception the Timer thread terminates, and all other scheduled tasks stop with it.
- Timer relies on the absolute system time for periodic tasks; if the system clock changes, the execution schedule changes with it.
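The second defect can be demonstrated with a standalone sketch (the 200 ms sleep is an arbitrary settling delay): after a TimerTask throws, the Timer is dead and rejects new tasks, whereas a ScheduledThreadPoolExecutor keeps serving other tasks after a failing one.

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TimerDefectDemo {

    // Returns true if the Timer refuses new tasks after one task threw.
    static boolean timerDiesAfterException() throws InterruptedException {
        Timer timer = new Timer();
        timer.schedule(new TimerTask() {
            @Override public void run() { throw new RuntimeException("boom"); }
        }, 0);
        Thread.sleep(200); // let the failing task kill the Timer thread
        try {
            timer.schedule(new TimerTask() { @Override public void run() {} }, 0);
            return false; // Timer unexpectedly still alive
        } catch (IllegalStateException e) {
            return true;  // "Timer already cancelled": all other tasks are gone
        }
    }

    // Returns true if the pool still runs tasks after another task threw.
    static boolean poolSurvivesException() throws Exception {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);
        pool.submit((Runnable) () -> { throw new RuntimeException("boom"); });
        boolean ran = pool.schedule(() -> true, 100, TimeUnit.MILLISECONDS).get();
        pool.shutdown();
        return ran;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Timer dead after exception: " + timerDiesAfterException());
        System.out.println("Pool survives exception:    " + poolSurvivesException());
    }
}
```

This is why replacing Timer with a ScheduledThreadPoolExecutor (multiple threads, per-task failure isolation, relative-delay scheduling) addresses all three defects listed above.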