Nacos 2.0.4 Cluster Deployment: Microservice Registry and Configuration Center

The middleware stacks at different companies are actually quite similar, but the service registries vary with experience: my first company used ZooKeeper, which is a fairly old choice for a registry; my previous company used Eureka and Consul; most projects at my current company use Nacos.

Nacos can serve as both the registry and the configuration center, so there is no need to build a separate configuration center. Since I am currently working on a new project, I needed to deploy a fresh set of Nacos, so I am recording the process here. Normally, different systems can share one cluster and be separated through different namespaces, but because of network isolation here, a new environment had to be deployed.

Required software and versions:

- nacos 2.0.4
- jdk 1.8
- nginx 1.16
- mysql 8.0

I won't list how to obtain the packages here; if needed, send me a private message in the background.

One point deserves attention (it is also a problem I ran into later): since version 2.0, Nacos uses two new ports, 9848 and 9849. If nginx is used for load balancing, you need to make sure these ports are reachable as well, otherwise services will later fail to register.
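The two extra ports are not arbitrary: Nacos 2.x derives them from the main HTTP port by fixed offsets, which matters if you ever run Nacos on a non-default port. A quick illustration of the rule:

```shell
# Nacos 2.x gRPC ports are derived from the main HTTP port by fixed offsets
MAIN_PORT=8848
echo "client gRPC port:  $((MAIN_PORT + 1000))"   # 9848, used by SDK clients
echo "cluster gRPC port: $((MAIN_PORT + 1001))"   # 9849, used for server-to-server sync
```

So if nginx sits in front, both derived ports have to be forwarded along with 8848.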

Deployment is actually very simple. First configure the JDK environment: unzip the JDK package and add the following variables in /etc/profile.

```shell
export JAVA_HOME=/opt/jdk-1.8/jdk1.8.0_261
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
```

Then extract the Nacos tar package. After extraction, only two configuration files need to be modified.

```shell
cd ./nacos/conf
vim cluster.conf
```

Add the cluster node list to cluster.conf:

```
ip1:8848
ip2:8848
ip3:8848
```

Next is the service configuration:

```shell
vim application.properties
```

```properties
server.servlet.contextPath=/nacos
server.contextPath=/nacos
server.port=8848
spring.datasource.platform=mysql
db.num=1
db.url.0=jdbc:mysql://mysqlip:port/nacos?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
db.user.0=mysqluser
db.password.0=mysqlpasswd
db.pool.config.connectionTimeout=30000
db.pool.config.validationTimeout=10000
db.pool.config.maximumPoolSize=20
db.pool.config.minimumIdle=2
management.metrics.export.elastic.enabled=false
management.metrics.export.influx.enabled=false
server.tomcat.accesslog.enabled=true
server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D %{User-Agent}i
server.tomcat.basedir=
nacos.security.ignore.urls=/,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-fe/public/**,/v1/auth/login,/v1/console/health/**,/v1/cs/**,/v1/ns/**,/v1/cmdb/**,/actuator/**,/v1/console/server/**
nacos.istio.mcp.server.enabled=false
nacos.core.auth.system.type=nacos
nacos.core.auth.enabled=true
```

After this node is configured, apply the same configuration on the other two nodes, then run ./startup.sh in the bin directory on each.

Once the nodes are up, access port 8848 of one of them through the web console. The default credentials are nacos/nacos; just change the password.

Another point to note: it is recommended to configure a separate user for each project. The password must not contain special characters, otherwise the application will fail to read it and report an error.

Finally, configure load balancing and proxying on nginx. A simple configuration for reference:

```nginx
upstream nacos {
    server ip1:8848;
    server ip2:8848;
    server ip3:8848;
}

server {
    listen 8848;
    location / {
        proxy_pass http://nacos;
    }
}
```
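The configuration above only load-balances the HTTP port. For Nacos 2.x clients, the gRPC port must also be reachable through nginx, and since gRPC is raw TCP from nginx's point of view, that takes a `stream` block (at the top level of nginx.conf, outside `http {}`) rather than an `http` one. A sketch, assuming the same three backend IPs; note the listen port must be exactly the proxied main port + 1000 (here 9848), because clients compute it from the address they were given:

```nginx
stream {
    upstream nacos-grpc {
        server ip1:9848;
        server ip2:9848;
        server ip3:9848;
    }
    server {
        listen 9848;           # must equal the proxied main port + 1000
        proxy_pass nacos-grpc;
    }
}
```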

Then you can access Nacos through the nginx IP, and services register by pointing at that IP. If there are multiple nginx instances, you need to put an SLB or F5 in front of them.
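On the application side, assuming a Spring Cloud Alibaba client, pointing services at the nginx address looks roughly like this (`ng-ip` and the namespace value are placeholders; the property names are the standard `spring.cloud.nacos` ones):

```yaml
spring:
  cloud:
    nacos:
      discovery:
        server-addr: ng-ip:8848    # nginx front; gRPC traffic goes to ng-ip:9848
      config:
        server-addr: ng-ip:8848
        namespace: your-namespace-id   # placeholder: per-project namespace ID from the console
```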

Problems encountered:

During use I ran into a problem worth briefly recording. When a version 2.0 node starts, it first runs in 1.x compatibility mode, then checks the version of every node in the cluster; only after confirming they are all 2.0 or later does the upgrade to 2.x mode succeed.

Because there was a problem with the cluster configuration at the beginning, the cluster did not form correctly, and one of the node instances stayed in the DOWN state.

As a result, when microservices tried to register, the server kept reporting that it was still on version 1.x.

The fix in the end was to clear the data directories on all three machines and restart; after that, registration worked normally.

Origin: blog.csdn.net/smallbird108/article/details/125466672