Spring Cloud components: Eureka


What is Eureka?

Eureka is one of the Netflix submodules, and a core one. It has two components: the Eureka Server (a separate project), which is the middle layer for locating services so that load balancing and failover can be implemented on the server side, and the Eureka Client (our microservices), which interacts with the server and makes that interaction very simple: a service can be reached just by its service identifier.
The relationship with Spring Cloud:
Spring Cloud wraps the Eureka module developed by Netflix to implement service registration and discovery (you can compare it with Zookeeper).
Eureka uses a client/server (CS) architecture. The Eureka Server acts as the server side of the registration function; it is the service registry.
The other microservices in the system use the Eureka Client to connect to the Eureka Server and maintain a heartbeat connection. This lets maintainers use the Eureka Server to monitor whether each microservice is running properly. Some other Spring Cloud modules (such as Zuul) can also discover the other microservices through the Eureka Server and carry out their own logic.
Role diagram:

How to use it?

Add the following dependencies to the Spring Cloud project:

Eureka client:

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
    </dependency>

Eureka server:

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
    </dependency>

Add the following configuration to the Eureka server project:
server:
  port: 3000
eureka:
  server:
    enable-self-preservation: false      # turn off the self-preservation mechanism
    eviction-interval-timer-in-ms: 4000  # cleanup interval in ms (the default is 60 * 1000)
  instance:
    hostname: localhost
  client:
    registerWithEureka: false  # as a client, do not register this server with itself
    fetchRegistry: false       # no need to fetch registry information from the server (it is the server itself, and self-registration is disabled)
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka
Of course, not all of these settings are required; I just copied my own configuration here and added comments.
Then add the @EnableEurekaServer annotation to the Spring Boot startup class and you can start the project:
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

    @EnableEurekaServer      // turn this application into a Eureka registry server
    @SpringBootApplication
    public class AppEureka {

        public static void main(String[] args) {
            SpringApplication.run(AppEureka.class, args);
        }
    }
If you see the following picture, it means your setup succeeded:

The warning on that page just says that you have turned off the self-preservation mechanism.
Eureka client configuration:
server:
  port: 6000
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:3000/eureka/  # registration address; this is the path exposed by the Eureka server configured above
  instance:
    instance-id: power-1        # unique instance id under which this instance registers with the Eureka server
    prefer-ip-address: true     # whether to show the IP address
    leaseRenewalIntervalInSeconds: 10     # how often the Eureka client sends a heartbeat to the server to show it is still alive; default 30 (this and the next setting are both in seconds)
    leaseExpirationDurationInSeconds: 30  # how long the Eureka server waits after the last received heartbeat before it may remove this instance; default 90

spring:
  application:
    name: server-power          # the name under which this instance registers with the Eureka server
Then add the @EnableEurekaClient annotation to the Spring Boot startup class of the client project and start it; a minimal sketch of such a class is shown below, followed by a screenshot of the result:
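A minimal sketch of the client startup class, assuming it is named AppPower (the class name is only illustrative; any Spring Boot main class with these annotations and the configuration above would do):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

    // Hypothetical client startup class; the name AppPower is assumed for illustration
    @EnableEurekaClient      // register this application with the Eureka server configured above
    @SpringBootApplication
    public class AppPower {

        public static void main(String[] args) {
            SpringApplication.run(AppPower.class, args);
        }
    }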

Here we can see that the service named server-power (the dashboard shows it in uppercase) with instance id power-1 has registered with our Eureka server. At this point, a simple Eureka setup is complete.

Eureka cluster:
How an Eureka cluster works
After a service starts and registers with Eureka, the Eureka Server synchronizes the registration information to the other Eureka Servers. When a service consumer calls a service provider, it obtains the provider's address from the service registry and caches it locally; on the next call it takes the address directly from the local cache and completes the call.
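To make the consumer side concrete, here is a minimal sketch of calling a provider through its Eureka service name with a load-balanced RestTemplate. The service name server-power is the one configured above; the /power endpoint, the /call-power path, and the class names are assumptions made only for illustration:

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.cloud.client.loadbalancer.LoadBalanced;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;
    import org.springframework.web.client.RestTemplate;

    @Configuration
    class RestTemplateConfig {

        // @LoadBalanced lets the RestTemplate resolve service names (like "server-power")
        // against the Eureka registry instead of real hostnames.
        @Bean
        @LoadBalanced
        public RestTemplate restTemplate() {
            return new RestTemplate();
        }
    }

    @RestController
    class ConsumerController {

        @Autowired
        private RestTemplate restTemplate;

        @GetMapping("/call-power")
        public String callPower() {
            // "server-power" is the spring.application.name the provider registered with;
            // "/power" is a hypothetical endpoint, used here only for illustration.
            return restTemplate.getForObject("http://server-power/power", String.class);
        }
    }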
Eureka cluster configuration
We just learned that an Eureka Server synchronizes its registration information to the other Eureka Servers, so which servers do we have to declare?
Assume we have three Eureka Servers, as in the figure:

Now, how do we declare the servers in a cluster environment? Let's look at a diagram:

It may look a bit abstract, so let's look at the concrete configuration:
server:
  port: 3000
eureka:
  server:
    enable-self-preservation: false
    eviction-interval-timer-in-ms: 4000
  instance:
    hostname: eureka3000.com
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://eureka3001.com:3001/eureka,http://eureka3002.com:3002/eureka
To make the cluster easier to understand, we mapped a few domain names here (I don't happen to have three laptops to test with...). As for how to map the domain names: briefly, modify your hosts file (on Win10 it is under C:\Windows\System32\drivers\etc; for other systems, look up the location yourself). Here is my hosts file:
127.0.0.1 eureka3000.com
127.0.0.1 eureka3001.com
127.0.0.1 eureka3002.com

Back to the subject. We can see that the cluster configuration differs from the single-node one in one point: instead of registering the service with itself, each server now registers with the other servers.
Why not register with itself? As we said at the top, an Eureka server synchronizes its registration information with the other servers, so there is no need for it to register with itself here; the other two servers will include this server in their configuration. (This may sound a bit convoluted; refer back to the cluster diagram above, or configure it yourself. The configuration of the other two Eureka servers is similar to this one and is not shown in full; just make sure each one registers with the other servers, as in the sketch below.)
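For example, under the same assumptions, the configuration of eureka3001.com might look like the following sketch (eureka3002.com is analogous, registering with the other two):

server:
  port: 3001
eureka:
  server:
    enable-self-preservation: false
    eviction-interval-timer-in-ms: 4000
  instance:
    hostname: eureka3001.com
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://eureka3000.com:3000/eureka,http://eureka3002.com:3002/eureka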
After all three Eurekas are configured, start them all and you can see the effect:
Of course, so far we have only configured the servers. How should the client be configured? Less talk, here is the code:
client:
  serviceUrl:
    defaultZone: http://localhost:3000/eureka/,http://eureka3001.com:3001/eureka,http://eureka3002.com:3002/eureka
Only the part that needs to change is shown here. Where we previously registered with a single address, we now write all three Eureka registration addresses. This does not mean the client registers three times: as we said, the Eureka servers synchronize their registration information, so registering once is enough. Why write all three addresses, then? Because it gives you a highly available configuration. Suppose there are three servers and one of them suddenly goes down; the other two are still alive and our services can still register with them. In other words, as long as at least one server is still up, the service can be registered. That is the point to understand here.
No screenshot is shown here; it looks no different from the single-node case, except that whichever Eureka server the service registers with, the other Eureka servers will also have that service.

What the CAP theorem means:

In 1998, Eric Brewer, a computer scientist at the University of California, proposed that a distributed system has three properties:
Consistency
Availability
Partition tolerance
Their first letters are C, A, and P.
Eric Brewer stated that it is impossible to achieve all three at the same time. This conclusion is called the CAP theorem.
Partition tolerance
Most distributed systems are spread across several sub-networks, each of which is called a partition. Partition tolerance means that communication between partitions may fail. For example, one server is local and another is somewhere else (possibly in another province, or even abroad); these are two partitions, and they may be unable to communicate with each other.

In the figure above, S1 and S2 are two servers in different partitions. S1 sends a message to S2, and S2 may never receive it. The system design must take this situation into account.
In general, partition tolerance cannot be avoided, so we can consider that the P in CAP always holds. The CAP theorem then tells us that the remaining C and A cannot both be achieved at the same time.

Consistency
Consistency means that a read that follows a write must return the written value. For example, a record holds the value v0, and a user sends a write operation to S1 that changes it to v1.

Next, when the user reads, they get v1. This is what consistency means.

The problem is that the user may just as well send the read operation to S2. Because the value on S2 has not changed, S2 returns v0, so reads against S1 and S2 are inconsistent, which violates consistency.

To make S2 return the same value as S1, when the write operation is performed on S1, S1 must also send a message to S2 asking S2 to change its value to v1 as well.

This way, when the user sends a read operation to S2, they also get v1.

Availability
Availability means that as long as a user's request is received, the server must give a response.
The user may send the read operation to either S1 or S2. Whichever server receives the request must tell the user whether the value is v0 or v1; otherwise availability is not satisfied.

Why consistency and availability conflict
Why can consistency and availability not hold at the same time? The answer is simple: because communication may fail (a partition may appear).
If we want to guarantee the consistency of S2, then during a write S1 must lock S2 for reads and writes, and only reopen it after the data has been synchronized. While it is locked, S2 cannot be read or written, so availability is lost.
If we want to guarantee the availability of S2, then S2 must not be locked, so consistency cannot be guaranteed.
In summary, S2 cannot achieve consistency and availability at the same time. A system design can only pursue one of the two goals: if it pursues consistency, it cannot guarantee the availability of every node; if it pursues the availability of every node, it cannot achieve consistency.
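To make the argument above concrete, here is a toy sketch in Java, not how Eureka or Zookeeper is actually implemented: a write reaches S1 but cannot be replicated to S2 during a partition, so S2 must choose between rejecting reads (consistency kept, availability lost) and answering with a possibly stale value (availability kept, consistency lost).

    /** A toy model of the consistency/availability trade-off between two replicas S1 and S2. */
    public class CapDemo {

        static String s1 = "v0";             // value stored on server S1
        static String s2 = "v0";             // value stored on server S2
        static boolean partitioned = true;   // assume the link between S1 and S2 is down

        /** Write a new value to S1 and try to replicate it to S2. */
        static void write(String value) {
            s1 = value;
            if (!partitioned) {
                s2 = value;                  // replication succeeds only when the link is up
            }
        }

        /** CP choice: S2 refuses to answer until it has caught up, sacrificing availability. */
        static String readConsistent() {
            if (partitioned) {
                throw new IllegalStateException("S2 not in sync, read rejected (availability lost)");
            }
            return s2;
        }

        /** AP choice: S2 always answers, possibly with a stale value, sacrificing consistency. */
        static String readAvailable() {
            return s2;
        }

        public static void main(String[] args) {
            write("v1");                                 // reaches S1 but not S2
            System.out.println(readAvailable());         // prints v0: available but stale
            try {
                readConsistent();
            } catch (IllegalStateException e) {
                System.out.println(e.getMessage());      // consistent but unavailable
            }
        }
    }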

Eureka compared with Zookeeper:
Zookeeper follows the CP principle in its design, i.e. consistency. Zookeeper can run into the following situation: when the master node loses contact with the other nodes because of a network failure, the remaining nodes re-elect a leader. The problem is that the leader election takes too long, 30 to 120 seconds, and the whole Zookeeper cluster is unavailable during the election, which leaves the registry paralyzed for that period. In a cloud deployment environment, it is fairly likely that the master node of a Zookeeper cluster is lost because of the network environment. Although the service eventually recovers, the long election time, during which the registration service is unavailable, cannot be tolerated.
Eureka follows the AP principle in its design, i.e. availability. All Eureka nodes (services) are equal; there is no master/slave distinction. A few nodes going down does not affect the normal operation of the rest, and the remaining nodes (services) can still provide registration and query services. When an Eureka client fails to connect to one Eureka node while registering or discovering services, it automatically switches to another node. In other words, as long as one Eureka node is still up, registration remains possible (availability is guaranteed), although the information returned by queries may not be the latest (strong consistency is not guaranteed). In addition, Eureka has a self-preservation mechanism: if the heartbeat renewals received within 15 minutes fall below the expected threshold (85% by default), Eureka considers that there is a network failure between the clients and the registry, and the following happens:
1: Eureka no longer removes services from the registration list merely because their leases expired after a long time without heartbeats.
2: Eureka can still accept registration and query requests for new services, but it will not synchronize them to the other nodes (i.e. it guarantees that the current node stays usable).
3: When the network becomes stable again, the new registration information on the current instance is synchronized to the other nodes.
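As a reference, here is a hedged sketch of the Eureka server settings that control this mechanism; the property names are taken from spring-cloud-netflix as I understand them and the values shown are the defaults, so verify them against your version:

eureka:
  server:
    enable-self-preservation: true                 # the mechanism described above; enabled by default
    renewal-percent-threshold: 0.85                # the expected heartbeat-renewal ratio (the 85% threshold)
    renewal-threshold-update-interval-ms: 900000   # how often the threshold is recomputed (15 minutes)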

Origin blog.csdn.net/weixin_42840066/article/details/90106749