First, the architecture diagram
Second, the core modules
-
ConfigService
1) Provides an interface for fetching configuration
2) Provides an interface for configuration push notifications (the server side uses Spring DeferredResult for asynchronous handling, which greatly increases the number of long connections a single instance can hold; the embedded Tomcat currently defaults to a maximum of 10,000 connections (adjustable), and testing shows a 4C8G virtual machine can support about 10,000 connections, which meets the need, since each application instance initiates one long connection)
3) Serves the Apollo Client
-
AdminService
1) Provides interfaces for configuration management
2) Provides interfaces for modifying and publishing configuration
3) Serves the Portal
-
Client
1) Fetches configuration for the application and supports real-time updates
2) Obtains the ConfigService address list from MetaServer
3) Invokes ConfigService using client-side software load balancing (SLB)
-
Portal
1) Provides the configuration management UI
2) Obtains the AdminService address list from MetaServer
3) Invokes AdminService using client-side software load balancing (SLB)
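Both the Client and the Portal fetch an address list from MetaServer and then balance load across it on the client side. A minimal sketch of that pattern follows; the class and method names are illustrative, not Apollo's actual classes:

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative client-side load balancer: pick a random instance from the
// address list returned by MetaServer. Retrying against the remaining
// instances on failure would live in the caller.
class ServiceLocator {
    private final List<String> addresses; // e.g. fetched from MetaServer

    ServiceLocator(List<String> addresses) {
        if (addresses.isEmpty()) {
            throw new IllegalArgumentException("no service instances available");
        }
        this.addresses = addresses;
    }

    /** Randomly choose one instance, spreading load across all of them. */
    String pick() {
        return addresses.get(ThreadLocalRandom.current().nextInt(addresses.size()));
    }
}
```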
Third, the auxiliary service discovery modules
-
Eureka
1) Provides service registration and discovery, based on Eureka and Spring Cloud Netflix
2) ConfigService / AdminService register themselves with Eureka and report heartbeats periodically
3) Deployed in the same process as ConfigService (why? to simplify deployment)
4) Config Service and Admin Service are deployed as multiple stateless instances, so each must register itself with Eureka and keep up its heartbeat
-
MetaServer
1) The Portal accesses MetaServer by domain name to obtain the AdminService address list
2) The Client accesses MetaServer by domain name to obtain the ConfigService address list
3) Acts as a Eureka proxy: MetaServer wraps Eureka's service discovery behind a simple HTTP interface for the Portal and the Client, so they never obtain Admin Service and Config Service addresses from Eureka directly
4) A logical role only: it is deployed together with ConfigService in the same JVM process
5) MetaServer + Eureka is roughly equivalent to a Kubernetes Service (cf. the Endpoints feature, which exposes the real addresses behind a service)
-
NginxLB
1) Works with DNS to help the Portal access MetaServer and obtain the AdminService address list
2) Works with DNS to help the Client access MetaServer and obtain the ConfigService address list
3) Works with DNS to help users access the Portal for configuration management
Fourth, the server-side design
-
Real-time push after a release
1) A user performs a release operation in the Portal
2) The Portal calls Admin Service's interface to perform the release
3) After publishing the configuration, Admin Service sends a ReleaseMessage to each Config Service
4) After receiving the ReleaseMessage, Config Service notifies the corresponding clients
-
Implementation of ReleaseMessage delivery (a simple message queue built on the database)
1) After publishing a configuration, Admin Service inserts a row into the ReleaseMessage table; the message content is the AppId + Cluster + Namespace of the published configuration
2) Config Service runs a thread that scans the ReleaseMessage table every second to check for new message rows
3) If Config Service finds new message rows, it notifies all message listeners, such as NotificationControllerV2
4) NotificationControllerV2 takes the AppId + Cluster + Namespace of the published configuration and notifies the corresponding clients
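The scan loop above can be sketched as follows. This is a minimal model, not Apollo's actual code: an in-memory list stands in for the ReleaseMessage table, and `scanOnce()` is called directly instead of running on a once-per-second timer.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Models Config Service's periodic scan of the ReleaseMessage table.
// `table` stands in for the DB table; each message is "AppId+Cluster+Namespace".
class ReleaseMessageScanner {
    private final List<String> table = new ArrayList<>();       // ReleaseMessage rows; id = index
    private final List<Consumer<String>> listeners = new ArrayList<>();
    private int maxScannedId = 0;                               // high-water mark of scanned rows

    void addListener(Consumer<String> listener) {               // e.g. NotificationControllerV2
        listeners.add(listener);
    }

    void insert(String message) {                               // Admin Service inserts after a release
        table.add(message);
    }

    // One scan pass: deliver every row newer than the high-water mark,
    // then advance the mark so rows are delivered exactly once.
    void scanOnce() {
        while (maxScannedId < table.size()) {
            String message = table.get(maxScannedId++);
            for (Consumer<String> listener : listeners) {
                listener.accept(message);
            }
        }
    }
}
```

Tracking a high-water mark instead of deleting rows keeps the "queue" idempotent across multiple Config Service instances, each of which scans independently.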
-
How Config Service notifies clients
1) The client sends an HTTP request to Config Service's notifications/v2 interface
2) NotificationControllerV2 does not return a result immediately; it parks the request via Spring DeferredResult
3) If no configuration the client cares about is published within 60 seconds, HTTP status code 304 is returned to the client
4) If a configuration the client cares about is published, NotificationControllerV2 calls DeferredResult's setResult method with the namespaces whose configuration has changed, and the request returns immediately. After the client learns the changed namespaces from the returned result, it immediately requests Config Service to fetch the latest configuration of those namespaces
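The long-poll semantics above can be modeled without Spring: a dependency-free sketch below uses a CompletableFuture in place of DeferredResult (complete within the timeout → changed namespace; timeout → 304). The class and method names are illustrative.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Models the notifications/v2 long poll: the server parks each request,
// and either a release completes it or the client gets 304 on timeout.
class LongPollServer {
    private final ConcurrentMap<String, CompletableFuture<String>> watches = new ConcurrentHashMap<>();

    // Client side: park until the namespace changes, or give up after timeoutMs.
    String poll(String namespace, long timeoutMs) {
        CompletableFuture<String> watch = watches.computeIfAbsent(namespace, ns -> new CompletableFuture<>());
        try {
            return "200 changed: " + watch.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            return "304 Not Modified";   // nothing the client cares about was published
        } catch (Exception e) {
            throw new IllegalStateException(e);
        } finally {
            watches.remove(namespace, watch);
        }
    }

    // Server side: a ReleaseMessage for this namespace completes the pending poll.
    void release(String namespace) {
        CompletableFuture<String> watch = watches.get(namespace);
        if (watch != null) {
            watch.complete(namespace);
        }
    }
}
```

In the real server, DeferredResult frees the servlet thread while the request is parked, which is what allows one instance to hold tens of thousands of long connections.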
Fifth, the client design
-
The client and server maintain a long connection, so the client can receive pushed configuration updates at the earliest possible moment
-
The client also periodically pulls the application's latest configuration from the Apollo configuration center server
1) This is a fallback mechanism, to prevent configuration updates from being missed if the push mechanism fails
2) When pulling, the client reports its local version, so under normal circumstances the server returns 304 - Not Modified for these periodic pulls
3) The periodic pull defaults to once every 5 minutes; clients can also override it at runtime with the System Property apollo.refreshInterval, in minutes (i.e. even if a push never arrives, a released configuration will still be picked up automatically)
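Resolving the fallback-poll interval as the text describes can be sketched as below (the class name is illustrative; the `apollo.refreshInterval` property name and the 5-minute default come from the text):

```java
import java.util.concurrent.TimeUnit;

// Resolves the fallback-poll interval: default 5 minutes, overridable at
// runtime via -Dapollo.refreshInterval=<minutes>.
class RefreshIntervalResolver {
    static final long DEFAULT_MINUTES = 5;

    static long refreshIntervalMillis() {
        String override = System.getProperty("apollo.refreshInterval");
        long minutes = DEFAULT_MINUTES;
        if (override != null) {
            try {
                minutes = Long.parseLong(override);
            } catch (NumberFormatException e) {
                // Ignore a malformed value and keep the default.
            }
        }
        return TimeUnit.MINUTES.toMillis(minutes);
    }
}
```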
-
After the client fetches the application's latest configuration from the Apollo configuration center server, it keeps the configuration in memory
-
The client also caches a copy of the configuration fetched from the server in a file on the local file system, so that when the service is unavailable or the network is down, the configuration can still be restored from the local cache
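The local-file fallback can be sketched with `java.util.Properties` (a simplified model, not Apollo's actual cache format or class names):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Models the local-file fallback: every successful fetch is mirrored to a
// cache file, and when the server is unreachable the cached copy is loaded.
class LocalCache {
    private final Path cacheFile;

    LocalCache(Path cacheFile) {
        this.cacheFile = cacheFile;
    }

    // Called after each successful fetch from Config Service.
    void save(Properties config) throws IOException {
        try (OutputStream out = Files.newOutputStream(cacheFile)) {
            config.store(out, "apollo local cache");
        }
    }

    // Called when Config Service is unreachable: fall back to the last copy.
    Properties load() throws IOException {
        Properties config = new Properties();
        try (InputStream in = Files.newInputStream(cacheFile)) {
            config.load(in);
        }
        return config;
    }
}
```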
-
Applications obtain the latest configuration from the Apollo client and can subscribe to configuration update notifications (the Apollo client initiates the configuration-change callbacks)
Sixth, related monitoring
-
The Apollo client and server currently support automatic CAT instrumentation, so if your company has deployed CAT internally, Apollo will enable CAT tracing points automatically as long as cat-client is introduced
-
If you do not use CAT, don't worry: as long as cat-client is not introduced, Apollo will not enable CAT instrumentation
Seventh, availability design
| Scenario | Impact | Degradation | Reason |
| --- | --- | --- | --- |
| One Config Service instance goes offline | No impact | | Config Service is stateless; clients reconnect to other Config Service instances |
| All Config Service instances go offline | Clients cannot read the latest configuration; the Portal is unaffected | When a client restarts, it can read the locally cached configuration file | |
| One Admin Service instance goes offline | No impact | | Admin Service is stateless; the Portal reconnects to other Admin Service instances |
| All Admin Service instances go offline | Clients are unaffected; configuration cannot be updated through the Portal | | |
| One Portal instance goes offline | No impact | | The Portal domain binds multiple servers through SLB; retries are directed to an available server |
| All Portal instances go offline | Clients are unaffected; configuration cannot be updated through the Portal | | |
| One data center goes offline | No impact | | Multi-data-center deployment with fully synchronized data; the Meta Server / Portal domains switch automatically via SLB to the other surviving data center |