Consul research

 

 

1. Build a cluster environment

192.168.32.144    n1    server1

192.168.32.192    n2    server2

192.168.21.120    n3    server3 (local machine)

 

Experimental steps:

1. Start the 192.168.32.144 environment in -bootstrap mode:

./consul agent -server -bootstrap -data-dir /opt/consul/data -config-dir=./conf -node=n1 -bind=192.168.32.144

 

2. Start the 192.168.32.192 environment and join the cluster:

./consul agent -server -data-dir /opt/consul/data -config-dir=./conf -node=n2 -bind=192.168.32.192

./consul join 192.168.32.144

 

3. Start the 192.168.21.120 environment and join the cluster:

consul agent -server -data-dir e:\consul\data -node=n3 -bind=192.168.21.120

consul join 192.168.32.144

 

4. Run the following commands on each node to check the cluster status and see which node is the leader:

./consul members

./consul info
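The same information is available over the HTTP API: /v1/status/leader returns the leader's Raft address (ip:port), and /v1/catalog/nodes lists the cluster nodes. A minimal Python sketch that matches one against the other, with the payloads hard-coded in the shape this cluster would return (a live run would GET them from port 8500):

```python
import json

def leader_node(leader_raft_addr, nodes):
    """Return the node name whose address matches the leader's Raft address."""
    leader_ip = leader_raft_addr.rsplit(":", 1)[0]
    for node in nodes:
        if node["Address"] == leader_ip:
            return node["Node"]
    raise LookupError("leader %s not found among nodes" % leader_raft_addr)

# Sample bodies as returned by /v1/status/leader and /v1/catalog/nodes.
leader = json.loads('"192.168.32.144:8300"')
nodes = json.loads('[{"Node": "n1", "Address": "192.168.32.144"},'
                   ' {"Node": "n2", "Address": "192.168.32.192"},'
                   ' {"Node": "n3", "Address": "192.168.21.120"}]')
print(leader_node(leader, nodes))  # n1
```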

 

5. Verify automatic election:

Stop the first node with consul leave and observe the re-election and state changes on the second and third nodes.

 

6. Rejoin the first node to the cluster. Note that the -bootstrap option must not be used this time:

./consul agent -server -data-dir /opt/consul/data -node=n1 -bind=192.168.32.144

./consul join 192.168.32.192

 

 

2. Configuring a service

192.168.32.144    n1    server1

192.168.32.192    n2    server2

192.168.21.120    c1    client1 (local machine)

 

Experimental steps:

1. Install Tomcat and rcp-webclient on 192.168.32.144 and 192.168.32.192 (steps omitted).

 

2. Create a conf directory under /opt/consul, and in it create a service.json file as shown below:

{
  "service": {
    "name": "web1",
    "tags": ["master"],
    "address": "127.0.0.1",
    "port": 8080,
    "checks": [
      {
        "http": "http://localhost:8080/rcp-webclient/ping.txt",
        "interval": "10s"
      }
    ]
  }
}

Add the -config-dir option at startup, for example:

./consul agent -server -bootstrap -data-dir /opt/consul/data -config-dir=./conf -node=n1 -bind=192.168.32.144

./consul agent -server -data-dir /opt/consul/data -config-dir=./conf -node=n2 -bind=192.168.32.192
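When several such service files need to be produced, the definition can be generated rather than hand-written. A small sketch that rebuilds the service.json above (the helper name and defaults are ours, not part of Consul):

```python
import json

def service_definition(name, port, ping_url, tags=None,
                       address="127.0.0.1", interval="10s"):
    """Build a service.json-style definition with an HTTP health check."""
    return {
        "service": {
            "name": name,
            "tags": list(tags or []),
            "address": address,
            "port": port,
            "checks": [{"http": ping_url, "interval": interval}],
        }
    }

payload = service_definition(
    "web1", 8080, "http://localhost:8080/rcp-webclient/ping.txt",
    tags=["master"])
print(json.dumps(payload, indent=2))
```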

 

3. View nodes and services:

curl -s http://localhost:8500/v1/catalog/nodes

curl -s http://localhost:8500/v1/catalog/services
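/v1/catalog/services returns a JSON object mapping each service name to its tag list. A sketch that summarizes such a body, with the response hard-coded in the shape expected once web1 is registered (a live run would fetch it with curl or an HTTP client):

```python
import json

def summarize_services(raw):
    """Turn a /v1/catalog/services JSON body into sorted (name, tags) pairs."""
    services = json.loads(raw)
    return sorted((name, tags) for name, tags in services.items())

# Shape of the response once web1 from service.json is registered.
body = '{"consul": [], "web1": ["master"]}'
for name, tags in summarize_services(body):
    print(name, tags)
# consul []
# web1 ['master']
```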

 

4. Start the local consul in client mode and join the cluster:

consul agent -data-dir e:\consul\data -node=c1 -bind=192.168.21.120

consul join 192.168.32.144

 

Access from a browser on the local machine:

http://localhost:8500/v1/catalog/nodes

http://localhost:8500/v1/catalog/services

http://localhost:8500/v1/catalog/service/web1

 

5. Health check

Check whether the service is available:

http://localhost:8500/v1/health/state/any

Run the query while the local Tomcat is up and again after stopping it; the check results will differ.
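Each entry returned by /v1/health/state/any carries a Status field (passing, warning, critical), so the Tomcat experiment can be read off by grouping checks by status. A sketch over a hard-coded sample body (field names follow the Consul API; the sample values are ours):

```python
import json
from collections import defaultdict

def checks_by_status(raw):
    """Group /v1/health/state/any entries by their Status field."""
    grouped = defaultdict(list)
    for check in json.loads(raw):
        grouped[check["Status"]].append(check["CheckID"])
    return dict(grouped)

# Sample body after Tomcat on this node has been stopped.
body = json.dumps([
    {"Node": "n1", "CheckID": "serfHealth", "Status": "passing"},
    {"Node": "n1", "CheckID": "service:web1", "Status": "critical"},
])
print(checks_by_status(body))
# {'passing': ['serfHealth'], 'critical': ['service:web1']}
```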

 

 

3. Other

3.1 Test web UI

Download the web UI package and place it in /opt/consul/consul_0.6.3_web_ui, then start consul with the -ui-dir option, as shown below:

./consul agent -server -bootstrap -data-dir /opt/consul/data -config-dir=./conf -node=n1 -bind=192.168.32.144 -ui-dir=/opt/consul/consul_0.6.3_web_ui

Browser access example:

http://192.168.32.144:8500/

3.2 Notes

1. On Linux, the 0.6.3 64-bit build was used; experiments showed it fails to start on 32-bit SUSE 9.

2. Leader election was only verified successfully with three servers; two servers cannot complete an election.

3. Exit nodes with consul leave to avoid leaving the cluster in an abnormal state.

4. When rebuilding the cluster, clear each node's data directory first, otherwise the cluster may fail to form.
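The observation in the notes that two servers cannot complete an election follows from Raft quorum arithmetic: a candidate needs floor(n/2) + 1 votes, so in a two-server cluster the survivor of a failure cannot reach quorum by itself, while three servers tolerate one failure:

```python
def quorum(servers):
    """Votes needed to elect a Raft leader among `servers` members."""
    return servers // 2 + 1

def tolerated_failures(servers):
    """How many servers can fail while a quorum can still form."""
    return servers - quorum(servers)

print(quorum(2), tolerated_failures(2))  # 2 0 -> one failure stalls the cluster
print(quorum(3), tolerated_failures(3))  # 2 1 -> survives one failure
```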

3.3 Verify that port 8500 can only be accessed locally, not remotely

After Consul starts, it turns out that its HTTP API cannot be reached from a browser on other machines; for example, the following URL served by 192.168.32.144 is unreachable from 192.168.21.120:

http://192.168.32.144:8500/v1/catalog/services

Checking with netstat -ano | grep 8500 shows that port 8500 is listening on the loopback address (127.0.0.1), which is why other machines cannot reach it.

 

3.4 How to enable remote web access to consul

The operation steps are as follows:

Add a config.json file to the conf directory, as shown below:

{
  "addresses": {
    "http": "192.168.32.144"
  }
}

Restart with the following command:

./consul agent -server -bootstrap -data-dir /opt/consul/data -config-dir=./conf -node=n1 -bind=192.168.32.144

 

Checking again with netstat -ano | grep 8500 shows port 8500 now listening on 192.168.32.144, so other machines can access it.

 

As for the rationale: each machine runs its own agent, and the local agent is the most reliable endpoint for processes on that machine to talk to. If they instead pointed at a remote agent's address, they would have to choose which server to depend on, and would lose access whenever that server went down; hence the HTTP API binds to loopback by default.

 

 
