TiDB 2.0 GA cluster installation, deployment, and handling of related errors

Make sure all hosts can resolve each other:


[root@tidb1 ~]# cat /etc/hosts 
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.137.161 tidb1
192.168.137.162 tidb2
192.168.137.163 tidb3
192.168.137.164 tidb4
192.168.137.165 tidb5
192.168.137.166 tidb6
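To double-check the file, a quick loop over the hostnames above can be run on every node; this is only a sketch that assumes the six names defined in /etc/hosts.

# Ping each peer once; a failure points at a typo in /etc/hosts or a network problem.
for h in tidb1 tidb2 tidb3 tidb4 tidb5 tidb6; do
    ping -c 1 -W 1 "$h" >/dev/null 2>&1 && echo "$h ok" || echo "$h FAILED"
done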








useradd tidb 
passwd tidb


The tidb user only needs to be created manually on the central control machine; the deployment target machines are handled later by create_users.yml.
[root@tidb1 tidb-ansible]# cd /etc/yum.repos.d/
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo 
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
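After replacing the repo files it is worth refreshing the yum metadata so the new mirrors are actually used; the commands below are standard yum housekeeping.

yum clean all        # drop cached metadata from the old repos
yum makecache        # rebuild the cache from the Aliyun mirrors
yum repolist         # confirm that base and epel are listed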


[root@tidb1 ~]# yum install -y ansible 






Run visudo (or edit /etc/sudoers) and add the following line:
tidb ALL=(ALL) NOPASSWD: ALL

Then switch to the tidb user and generate an SSH key pair:
su - tidb
ssh-keygen -t rsa
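If you prefer to skip the interactive prompts of ssh-keygen, the key can also be generated non-interactively; a minimal sketch assuming the default key path ~/.ssh/id_rsa and an empty passphrase.

# Run as the tidb user on the central control machine.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa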




[tidb@tidb1 ~]$ git clone -b release-2.0 https://github.com/pingcap/tidb-ansible.git


[root@tidb1 ~]# cd /home/tidb/
[root@tidb1 ~]# chmod -R 777 tidb-ansible
cd tidb-ansible
















reference documentation:
https://github.com/pingcap/docs-cn/blob/master/op-guide/ansible-deployment.md (section: How to use Ansible to deploy the NTP service)


How to use Ansible to deploy NTP service


Refer to "Download TiDB-Ansible to the central control machine" to download TiDB-Ansible, then add the IPs of your deployment target machines to the [servers] block of hosts.ini. The value of the ntp_server variable, pool.ntp.org, can be replaced with another NTP server. Before starting the NTP service, the playbook runs ntpdate against that NTP server. The NTP service installed by Ansible uses the default server list from the installation package; see the server parameter in the configuration file /etc/ntp.conf.


$ vi hosts.ini
[servers]
192.168.137.161
192.168.137.162
192.168.137.163
192.168.137.164
192.168.137.165
192.168.137.166






[all:vars]
username = tidb
ntp_server = pool.ntp.org

Execute the following command and enter the root password of the deployment target machines when prompted.


$ ansible-playbook -i hosts.ini deploy_ntp.yml -k
This command can be executed by either the root or the tidb user on the central control machine.
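Once the playbook finishes, NTP can be spot-checked on any target machine; ntpstat and ntpq ship with the ntp package on CentOS 7.

ntpstat        # reports whether the clock is synchronised to an NTP server
ntpq -p        # lists the peers actually being used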




$ ansible-playbook -i hosts.ini create_users.yml -k
This command can only be executed by the tidb user on the central control machine.


ssh localhost
ssh tidb1 
ssh tidb2 
ssh tidb3 
ssh tidb4 
ssh tidb5 
ssh tidb6 
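Equivalently, the checks above can be run in one loop from the control machine; a sketch assuming the six hostnames and passwordless SSH as the tidb user.

# BatchMode makes ssh fail instead of prompting, so a missing key shows up immediately.
for h in tidb1 tidb2 tidb3 tidb4 tidb5 tidb6; do
    ssh -o BatchMode=yes "$h" hostname || echo "passwordless ssh to $h FAILED"
done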








[root@tidb1 tidb-ansible]# cat inventory.ini 
## TiDB Cluster Part
[tidb_servers]
192.168.137.161
192.168.137.162


[tikv_servers]


192.168.137.164
192.168.137.165
192.168.137.166
[pd_servers]


192.168.137.161
192.168.137.162
192.168.137.163


[spark_master]
192.168.137.164


[spark_slaves]


192.168.137.165
192.168.137.166
## Monitoring Part
# prometheus and pushgateway servers
[monitoring_servers]


192.168.137.161
[grafana_servers]
192.168.137.161
# node_exporter and blackbox_exporter servers
[monitored_servers]
192.168.137.161
192.168.137.162
192.168.137.163
192.168.137.164
192.168.137.165
192.168.137.166


[alertmanager_servers]
192.168.137.161
## Binlog Part
[pump_servers:children]
tidb_servers


## Group variables
[pd_servers:vars]
# location_labels = ["zone","rack","host"]


## Global variables
[all:vars]
deploy_dir = /home/tidb/deploy


## Connection
# ssh via normal user
#ansible_user = tidb


# ssh via root:
ansible_user = root
ansible_become = true
ansible_become_user = tidb


cluster_name = test-cluster


tidb_version = v2.0.0-rc.3


# deployment methods, [binary, docker]
deployment_method = binary


# process supervision, [systemd, supervise]
process_supervision = systemd


# timezone of deployment region
timezone = Asia/Shanghai
set_timezone = True


enable_firewalld = False
# check NTP service
enable_ntpd = True
set_hostname = False


## binlog trigger
enable_binlog = False
# zookeeper address of kafka cluster, example:
# zookeeper_addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181"
zookeeper_addrs = ""


# store slow query log into separate file
enable_slow_query_log = False


# enable TLS authentication in the TiDB cluster
enable_tls = False


# KV mode
deploy_without_tidb = False


# Optional. Set if you already have an alertmanager server.
# Format: alertmanager_host:alertmanager_port
alertmanager_target = ""
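Before running any playbook it helps to confirm that Ansible can reach every host in inventory.ini; a sketch run from the tidb-ansible directory (append -k if the root login still requires a password, as in the playbook runs below).

ansible -i inventory.ini all -m ping                  # expect "pong" from every host
ansible -i inventory.ini all -m shell -a 'whoami'     # shows which user the tasks run as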




Use local_prepare.yml to download the TiDB binaries to the central control machine:
ansible-playbook local_prepare.yml
This command can only be executed by the tidb user on the central control machine.
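After the playbook completes, the downloaded packages can be inspected on the control machine; the directory names below are those used by the release-2.0 tidb-ansible layout and may differ in other versions.

ls downloads/          # downloaded tarballs
ls resources/bin/      # unpacked tidb/tikv/pd binaries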


Initialize the system environment and modify the kernel parameters.
If the service user has not been created yet, this initialization step will create it automatically.
ansible-playbook bootstrap.yml
This command can only be executed by the tidb user on the central control machine.






If Ansible connects to the remote hosts as root and a password is required, add the -k parameter when executing the playbook (the same applies to the other playbooks):
ansible-playbook bootstrap.yml -k
This command can only be executed by the tidb user on the central control machine.












Because this deployment was done in a virtual-machine test environment, the following errors were reported while executing ansible-playbook bootstrap.yml:




[tidb_servers]: Ansible UNREACHABLE! => playbook: bootstrap.yml; TASK: pre-ansible : disk space check - fail when disk is full; message: {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname tidb_servers: Name or service not known\r\n", "unreachable": true}


[tikv_servers]: Ansible UNREACHABLE! => playbook: bootstrap.yml; TASK: pre-ansible : disk space check - fail when disk is full; message: {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname tikv_servers: Name or service not known\r\n", "unreachable": true}


[pd_servers]: Ansible UNREACHABLE! => playbook: bootstrap.yml; TASK: pre-ansible : disk space check - fail when disk is full; message: {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname pd_servers: Name or service not known\r\n", "unreachable": true}


[spark_master]: Ansible UNREACHABLE! => playbook: bootstrap.yml; TASK: pre-ansible : disk space check - fail when disk is full; message: {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname spark_master: Name or service not known\r\n", "unreachable": true}


[spark_slaves]: Ansible UNREACHABLE! => playbook: bootstrap.yml; TASK: pre-ansible : disk space check - fail when disk is full; message: {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname spark_slaves: Name or service not known\r\n", "unreachable": true}


[192.168.137.161]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_optional : Preflight check - Check TiDB server's CPU; message: {"changed": false, "msg": "This machine does not have sufficient CPU to run TiDB, at least 8 cores."}


[192.168.137.162]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_optional : Preflight check - Check TiDB server's CPU; message: {"changed": false, "msg":"This machine does not have sufficient CPU to run TiDB, at least 8 cores."}






TASK [check_system_optional : Preflight check - Check TiDB server's CPU] ************************************************************************
fatal: [192.168.137.162]: FAILED! => {"changed": false, "msg": "This machine does not have sufficient CPU to run TiDB, at least 8 cores."}


NO MORE HOSTS LEFT ******************************************************************************************************************************
to retry, use: --limit @/root/tidb-ansible/retry_files/bootstrap.retry


PLAY RECAP **************************************************************************************************************************************
192.168.137.161            : ok=0    changed=0    unreachable=1    failed=0   
192.168.137.162            : ok=30   changed=8    unreachable=0    failed=1   
192.168.137.163            : ok=30   changed=8    unreachable=0    failed=0   
192.168.137.164            : ok=33   changed=10   unreachable=0    failed=0   
192.168.137.165            : ok=33   changed=10   unreachable=0    failed=0   
192.168.137.166            : ok=33   changed=10   unreachable=0    failed=0   
localhost                  : ok=1    changed=0    unreachable=0    failed=0   




ERROR MESSAGE SUMMARY ***************************************************************************************************************************
[192.168.137.161]: Ansible UNREACHABLE! => playbook: bootstrap.yml; TASK: pre-ansible : disk space check - fail when disk is full; message: {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable": true}


[192.168.137.162]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_optional : Preflight check - Check TiDB server's CPU; message: {"changed": false, "msg": "This machine does not have sufficient CPU to run TiDB, at least 8 cores."}
















Edit bootstrap.yml and comment out the following:


- name: check system
  hosts: all
  any_errors_fatal: true
  roles:
    - check_system_necessary
    # - { role: check_system_optional, when: not dev_mode }


If the test machines do not use SSDs, it is best to also comment out the following:
- name: tikv_servers machine benchmark
  hosts: tikv_servers
  gather_facts: false
  roles:
    # - { role: machine_benchmark, when: not dev_mode }
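After saving these changes, re-run the playbook; a minimal sketch.

# Add -k if the remote root login requires a password, as in the earlier runs.
ansible-playbook bootstrap.yml -k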










Deploy the TiDB cluster software:
ansible-playbook deploy.yml
This command can only be executed by the tidb user on the central control machine.


Start the TiDB cluster:
ansible-playbook start.yml
This command can only be executed by the tidb user on the central control machine.


Stop the entire cluster:
ansible-playbook stop.yml
This command can only be executed by the tidb user on the central control machine.




Test the cluster
Test the connection to the TiDB cluster. It is recommended to configure a load balancer in front of TiDB to provide a unified SQL interface to the outside world.


Use the MySQL client for a connection test; TCP port 4000 is the default port of the TiDB service.


netstat -anp | grep 4000 




mysql -u root -h 192.168.137.161 -P 4000




[root@tidb1 ~]# mysql -h 127.0.0.1 -uroot -P 4000 -D mysql
bash: mysql: command not found.
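The error above simply means no MySQL client is installed on the control machine. The rpm used in the next step is assumed to already be present locally; if it is not, it can usually be fetched from MySQL's download service first (the URL below is an assumption based on the package filename and should be verified).

# Assumed download location for the repo rpm named below -- verify before use.
wget https://dev.mysql.com/get/mysql57-community-release-el7-11.noarch.rpm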






rpm -ivh mysql57-community-release-el7-11.noarch.rpm 
yum install -y mysql-community-client.x86_64 




mysql -h 127.0.0.1 -uroot -P 4000
mysql -h 127.0.0.1 -uroot -P 4000 -D mysql
mysql -h 192.168.137.161 -uroot -P 4000 
mysql -h 192.168.137.161 -uroot -P 4000 -D mysql 
  
  
  
  
  
[root@tidb1 ~]# mysql -h 127.0.0.1 -uroot -P 4000
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.7.10-TiDB-v2.0.0-rc.3 MySQL Community Server (Apache License 2.0)


Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.


Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.


Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
4 rows in set (0.00 sec)
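A slightly deeper smoke test can create and drop a throwaway table in the test database; a sketch reusing the same connection parameters as above (the table name smoke is arbitrary).

# Create, query, and drop a small table to confirm writes go through TiDB.
mysql -h 192.168.137.161 -uroot -P 4000 -D test -e "
  CREATE TABLE IF NOT EXISTS smoke (id INT PRIMARY KEY, v VARCHAR(32));
  INSERT INTO smoke VALUES (1, 'hello tidb');
  SELECT * FROM smoke;
  DROP TABLE smoke;"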




























