TiDB Database from Entry to Proficiency, Part 3: Simulating a Production-Environment Cluster Deployment on a Single Machine
- 1. Prepare the environment
- 2. Perform the deployment
  - 1. Download and install TiUP
  - 2. Declare global environment variables
  - 3. Install the cluster component of TiUP
  - 4. If TiUP cluster is already installed on the machine, update it to the latest version
  - 5. Increase the connection limit of the sshd service as the root user (required for the simulated multi-machine deployment)
  - 6. Create and start the cluster
  - 7. Execute the cluster deployment command
  - 8. Start the cluster
  - 9. Access the cluster
Applicable scenario: you want to use a single Linux server to experience the smallest complete TiDB cluster topology and simulate the deployment steps of a production environment.
1. Prepare the environment
Prepare a deployment host and make sure its software meets the requirements:
- CentOS 7.3 or later is recommended
- The host must be able to access the Internet to download TiDB and the related software packages
The smallest TiDB cluster topology (all instances share one host in this walkthrough):
- 1 TiDB instance
- 3 TiKV instances
- 1 PD instance
- 1 TiFlash instance
- 1 monitoring (Prometheus) instance and 1 Grafana instance
Deployment host software and environment requirements:
- Deployment requires the root user and password of the deployment host
- The firewall on the deployment host is disabled, or the ports required between the TiDB cluster nodes are open
- TiUP Cluster currently supports deploying TiDB clusters on the x86_64 (AMD64) and ARM architectures:
  - On AMD64, CentOS 7.3 or a later Linux release is recommended
  - On ARM, CentOS 7.6 (1810) is recommended
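To confirm the host meets these requirements, a quick check with standard Linux commands:

cat /etc/redhat-release   # OS release, on CentOS
uname -m                  # x86_64 (AMD64) or aarch64 (ARM)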
2. Perform the deployment
You can log in to the host as any regular Linux user or as the root user; the following steps use the root user as an example.
1. Download and install TiUP
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
2. Declare global environment variables
After the installation completes, TiUP prints the absolute path of the corresponding shell profile file. Before running the source command below, replace ${your_shell_profile} with that actual path.
source ${your_shell_profile}
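For example, if the installer reported ~/.bash_profile as the profile file (an assumption; substitute the path from your own output), the step and a quick follow-up check that tiup is now on the PATH look like this:

source ~/.bash_profile
tiup --version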
3. Install the cluster component of TiUP
tiup cluster
4. If TiUP cluster is already installed on the machine, update it to the latest version
tiup update --self && tiup update cluster
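To confirm which version of the cluster component is now installed, TiUP can print the path of the component binary, which includes the version number:

tiup --binary cluster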
5. Increase the connection limit of the sshd service as the root user (required for the simulated multi-machine deployment)
- Modify /etc/ssh/sshd_config and raise MaxSessions to 20
- Restart the sshd service:
service sshd restart
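If you prefer a non-interactive edit, a minimal sketch (it assumes a stock sshd_config in which MaxSessions may still be commented out; back the file up first):

sed -i 's/^#\?MaxSessions.*/MaxSessions 20/' /etc/ssh/sshd_config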
6. Create and start the cluster
Edit the configuration file according to the template below and save it as topo.yaml, where:
- user: "tidb": the cluster is managed internally through the tidb system user (created automatically during deployment); port 22 is used by default to log in to the target machine via SSH
- replication.enable-placement-rules: this PD parameter is set to ensure that TiFlash runs normally
- host: set to the IP address of the deployment host
The configuration template is as follows:
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

server_configs:
  tidb:
    instance.tidb_slow_log_threshold: 300
  tikv:
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]
  tiflash:
    logger.level: "info"

pd_servers:
  - host: 10.0.1.1

tidb_servers:
  - host: 10.0.1.1

tikv_servers:
  - host: 10.0.1.1
    port: 20160
    status_port: 20180
    config:
      server.labels: { host: "logic-host-1" }
  - host: 10.0.1.1
    port: 20161
    status_port: 20181
    config:
      server.labels: { host: "logic-host-2" }
  - host: 10.0.1.1
    port: 20162
    status_port: 20182
    config:
      server.labels: { host: "logic-host-3" }

tiflash_servers:
  - host: 10.0.1.1

monitoring_servers:
  - host: 10.0.1.1

grafana_servers:
  - host: 10.0.1.1
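Optionally, before deploying, TiUP can pre-check the host against the topology file; the user and password flags mirror the deploy command in the next step:

tiup cluster check ./topo.yaml --user root -p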
7. Execute the cluster deployment command
tiup cluster deploy <cluster-name> <version> ./topo.yaml --user root -p
- <cluster-name> sets the cluster name
- <version> sets the TiDB cluster version, for example v7.1.1; run tiup list tidb to see the versions currently available for deployment
- -p means a password is used to log in to the target machine
Note: If the host uses a key for SSH authentication, please use the -i parameter to specify the key file path, and -i and -p cannot be used at the same time.
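Filled in, the command might look like this (tidb-test is a hypothetical cluster name; v7.1.1 is the example version mentioned above):

tiup cluster deploy tidb-test v7.1.1 ./topo.yaml --user root -p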
Follow the prompts, entering "y" and then the root password, to complete the deployment:
Do you want to continue? [y/N]: y
Input SSH password:
8. Start the cluster
tiup cluster start <cluster-name>
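Continuing the hypothetical tidb-test example from the deployment step:

tiup cluster start tidb-test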
9. Access the cluster
Install the MySQL client (skip this step if it is already installed):
yum -y install mysql
Access the TiDB database (the password of the root user is empty):
mysql -h 10.0.1.1 -P 4000 -u root
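Once connected, a quick sanity check can also be run directly from the shell; tidb_version() is a built-in TiDB function that reports the server's build details:

mysql -h 10.0.1.1 -P 4000 -u root -e "select tidb_version();"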
To access TiDB's Grafana monitoring:
Visit the cluster's Grafana monitoring page at http://{grafana-ip}:3000. The default username and password are both admin.
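Since every component runs on the same host in this walkthrough, a quick reachability check from the shell (assuming the host IP 10.0.1.1 used in topo.yaml) could be:

curl -sI http://10.0.1.1:3000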
To access the TiDB Dashboard:
Visit the cluster's TiDB Dashboard monitoring page at http://{pd-ip}:2379/dashboard. The default username is root and the password is empty.
Execute the following command to confirm the list of currently deployed clusters:
tiup cluster list
Execute the following command to view the topology and status of the cluster:
tiup cluster display <cluster-name>