Greenplum in Practice: Building a Standby for the Greenplum Database Master Node [repost]

The Greenplum database supports a data redundancy mechanism similar to Oracle's physical Data Guard. The mirror of the Master node is called the Standby, and the mirror of a Segment node is called a Mirror. This article describes how to add a Standby to a Master node that does not yet have one.

Note that while building the Standby for the Master node, Greenplum automatically shuts down the database, starts the Master in utility mode, modifies the gp_segment_configuration catalog to add the Standby information, shuts the Master down again to copy the Master's data to the Standby node, and finally starts the database. Adding a Standby to the Master should therefore be done during an idle period, or it will affect the business.
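
Before starting, you can confirm that the Master does not yet have a Standby by querying the gp_segment_configuration catalog (the same catalog is queried at the end of this article). A minimal check, run as gpadmin on the Master and assuming the dbdream database used later in this article; rows with content = -1 are the Master (role p) and, once added, the Standby (role m):

[gpadmin@mdw ~]$ psql -d dbdream -c "select dbid, role, hostname from gp_segment_configuration where content = -1;"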

When building the Standby, the Greenplum database software must first be installed on the Standby host. This is no different from a normal Greenplum software installation; for standardization, use the same installation path as on the Master host. The following briefly walks through the installation process.

1. Create gpadmin user and installation directory

[root@mdw-std ~]# groupadd -g 520 gpadmin
[root@mdw-std ~]# useradd -u 520 -g gpadmin gpadmin
[root@mdw-std ~]# passwd gpadmin
Changing password for user gpadmin.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@mdw-std ~]# mkdir -p /gpdb/app
[root@mdw-std ~]# mkdir -p /gpdb/gpdata
[root@mdw-std ~]# chown -R gpadmin:gpadmin /gpdb/

2. Configure the hosts file to add information about all hosts

[gpadmin@mdw config]$ vi /etc/hosts
10.9.15.20      mdw
10.9.15.22      mdw-std
10.9.15.24      sdw1
10.9.15.26      sdw2
10.9.15.28      sdw3

In addition to the Standby host, the hosts files of all other hosts (the Master node and all Segment nodes) must also contain the Standby host's entry; that is, the hosts file of every host in the Greenplum cluster must contain all of the entries shown above.
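
Once the SSH trust in step 10 has been configured, a quick way to confirm that every host has the Standby entry is to grep the hosts files across the cluster with gpssh (a hypothetical spot check using the hostlist file created in step 10):

[gpadmin@mdw config]$ gpssh -f hostlist -e 'grep mdw-std /etc/hosts'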

3. Upload installation files and unzip

[gpadmin@mdw-std gpdb]$ scp 10.9.15.20:/gpdb/*.zip .
The authenticity of host '10.9.15.20 (10.9.15.20)' can't be established.
RSA key fingerprint is 61:72:68:57:16:28:40:d4:bc:9e:68:f0:bc:ac:65:e9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.9.15.20' (RSA) to the list of known hosts.
gpadmin@10.9.15.20's password:
greenplum-db-4.3.6.2-build-1-RHEL5-x86_64.zip                        100%  134MB 133.7MB/s   00:01
[gpadmin@mdw-std gpdb]$ unzip greenplum-db-4.3.6.2-build-1-RHEL5-x86_64.zip
Archive:  greenplum-db-4.3.6.2-build-1-RHEL5-x86_64.zip
  inflating: README_INSTALL
  inflating: greenplum-db-4.3.6.2-build-1-RHEL5-x86_64.bin

4. Configure the kernel parameters and add the following

[root@mdw-std ~]# vi /etc/sysctl.conf

kernel.shmmax = 500000000
kernel.shmall = 4000000000
kernel.shmmni = 4096
kernel.sem = 250 512000 100 2048
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.conf.default.arp_filter = 1
net.ipv4.ip_local_port_range = 1025 65535
net.core.netdev_max_backlog = 10000
vm.overcommit_memory = 2

Apply the settings with the sysctl command.

[root@mdw-std ~]# sysctl -p
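
sysctl -p prints each value as it is applied; as an optional spot check, a few of the settings can also be queried individually afterwards:

[root@mdw-std ~]# sysctl kernel.shmmax kernel.sem vm.overcommit_memory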

5. Configure resource limits (limits.conf)

[root@mdw-std ~]# vi /etc/security/limits.conf
gpadmin  soft  nofile  65536
gpadmin  hard  nofile  65536
gpadmin  soft  nproc  131072
gpadmin  hard  nproc  131072
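
These limits apply to new login sessions of the gpadmin user; a simple way to verify them is to log in again as gpadmin and check the open-file and process limits:

[gpadmin@mdw-std ~]$ ulimit -n -u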

6. Change the disk I/O scheduler to deadline and add elevator=deadline to the /boot/grub/menu.lst file.

[root@mdw-std ~]# echo deadline > /sys/block/sda/queue/scheduler
[root@mdw-std ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq

[root@mdw-std ~]# vi /boot/grub/menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/sda3
#          initrd /initrd-[generic-]version.img
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Oracle Linux Server Unbreakable Enterprise Kernel (3.8.13-16.2.1.el6uek.x86_64)
        root (hd0,0)
        kernel /vmlinuz-3.8.13-16.2.1.el6uek.x86_64 ro root=UUID=1fb9c4cf-3cf4-47db-bf4a-0a87958d477d rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16   KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-3.8.13-16.2.1.el6uek.x86_64.img
title Oracle Linux Server Red Hat Compatible Kernel (2.6.32-431.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=UUID=1fb9c4cf-3cf4-47db-bf4a-0a87958d477d rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM elevator=deadline rhgb quiet
        initrd /initramfs-2.6.32-431.el6.x86_64.img
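
The echo into /sys takes effect immediately but is lost on reboot; the elevator=deadline kernel parameter above makes it permanent. On hosts with several data disks, a hypothetical one-liner to switch them all at once on the running system:

[root@mdw-std ~]# for q in /sys/block/sd*/queue/scheduler; do echo deadline > $q; done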

7. Configure disk read-ahead blocks

[root@mdw-std ~]# /sbin/blockdev --setra 65535 /dev/sda3
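
blockdev --getra reports the current read-ahead value (in 512-byte sectors), so the change can be verified. Note that this setting is also not persistent; one common approach (an assumption about your setup) is to reapply it at boot, for example from /etc/rc.local.

[root@mdw-std ~]# /sbin/blockdev --getra /dev/sda3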

8. Install the Greenplum database software

[gpadmin@mdw-std gpdb]$ /bin/bash greenplum-db-4.3.6.2-build-1-RHEL5-x86_64.bin

After paging through a pile of license text, you will be prompted to accept the license agreement.

********************************************************************************
Do you accept the Pivotal Database license agreement? [yes|no]
********************************************************************************

After entering yes to confirm, it prompts for the installation path.

yes

********************************************************************************
Provide the installation path for Greenplum Database or press ENTER to
accept the default installation path: /usr/local/greenplum-db-4.3.6.2
********************************************************************************

Here, enter the same installation path used on the Master.

/gpdb/app

********************************************************************************
Install Greenplum Database into </gpdb/app>? [yes|no]
********************************************************************************

Enter yes to confirm the installation path.

yes

Extracting product to /gpdb/app

********************************************************************************
Installation complete.
Greenplum Database is installed in /gpdb/app

Pivotal Greenplum documentation is available
for download at http://docs.gopivotal.com/gpdb
********************************************************************************

The Greenplum database software is now installed. Next, configure the gpadmin user's environment variables by adding the following.

[gpadmin@mdw-std app]$ vi /home/gpadmin/.bash_profile
source /gpdb/app/greenplum_path.sh

Make it take effect.

[gpadmin@mdw-std app]$ . /home/gpadmin/.bash_profile
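
Optionally, many installations also export MASTER_DATA_DIRECTORY in the same .bash_profile so that utilities such as gpstate can find the data directory without extra options; the path below simply mirrors this article's directory layout and should be adjusted to your environment.

export MASTER_DATA_DIRECTORY=/gpdb/gpdata/master/gpseg-1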

9. On the Standby host, create the database data directory and the filespace directory; they must be the same as the Master's directories.

[gpadmin@mdw-std gpdata]$ mkdir /gpdb/gpdata/master
[gpadmin@mdw-std gpdata]$ mkdir /gpdb/gpdata/fspc_master
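
Before gpinitstandby is run, it is worth a quick check that both directories exist, are empty, and are owned by gpadmin, since gpinitstandby requires empty target locations on the Standby host:

[gpadmin@mdw-std gpdata]$ ls -ld /gpdb/gpdata/master /gpdb/gpdata/fspc_master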

All of the steps above are performed on the Standby host; the following steps are performed on the Master host.

10. On the Master node, configure SSH mutual trust between all nodes and the Standby node.

[gpadmin@mdw config]$ vi hostlist

mdw
mdw-std
sdw1
sdw2
sdw3

[gpadmin@mdw config]$ gpssh-exkeys -f hostlist
[STEP 1 of 5] create local ID and authorize on local host
  ... /home/gpadmin/.ssh/id_rsa file exists ... key generation skipped

[STEP 2 of 5] keyscan all hosts and update known_hosts file

[STEP 3 of 5] authorize current user on remote hosts
  ... send to mdw-std
  ***
  *** Enter password for mdw-std:
  ... send to sdw1
  ... send to sdw2
  ... send to sdw3

[STEP 4 of 5] determine common authentication file content

[STEP 5 of 5] copy authentication files to all remote hosts
  ... finished key exchange with mdw-std
  ... finished key exchange with sdw1
  ... finished key exchange with sdw2
  ... finished key exchange with sdw3

[INFO] completed successfully

Use the gpssh tool to verify that the SSH trust has been set up successfully.

[gpadmin@mdw config]$ gpssh -f hostlist -e 'date'
[mdw-std] date
[mdw-std] Tue Feb 16 17:50:16 CST 2016
[    mdw] date
[    mdw] Tue Feb 16 17:50:16 CST 2016
[   sdw2] date
[   sdw2] Tue Feb 16 17:50:16 CST 2016
[   sdw1] date
[   sdw1] Tue Feb 16 17:50:16 CST 2016
[   sdw3] date
[   sdw3] Tue Feb 16 17:50:16 CST 2016

11. On the Master node, add the Standby with the gpinitstandby command.

[gpadmin@mdw config]$ gpinitstandby -s mdw-std

The filespace locations on the master must be mapped to
locations on the standby.  These locations must be empty on the
standby master host.  The default provided is the location of
the filespace on the master (except if the master and the
standby are hosted on the same node or host). In most cases the
defaults can be used.

Enter standby filespace location for filespace fspc1 (default: /gpdb/gpdata/fspc_master/gpseg-1):

Here gpinitstandby prompts for the filespace directories recorded on the Master; the default is the same path as on the Master. You may use a different path on the Standby, but the same path is recommended. Note that before this step the same database data directory as on the Master (here /gpdb/gpdata/master) must already exist on the Standby, otherwise an error is reported that the path cannot be found. If the filespace uses the default path, just press Enter.
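
If you want to know in advance which filespaces gpinitstandby will prompt for, they can be read from the catalog on the Master; a hypothetical query against the pg_filespace and pg_filespace_entry catalogs (the exact output depends on your configuration):

[gpadmin@mdw ~]$ psql -d dbdream -c "select fsname, fselocation from pg_filespace f join pg_filespace_entry e on f.oid = e.fsefsoid;"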

>
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Validating environment and parameters for standby initialization...
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Checking for filespace directory /gpdb/gpdata/master/gpseg-1 on mdw-std
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Checking for filespace directory /gpdb/gpdata/fspc_master/gpseg-1 on mdw-std
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master initialization parameters
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum master hostname               = mdw
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum master data directory         = /gpdb/gpdata/master/gpseg-1
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum master port                   = 5432
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master hostname       = mdw-std
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master port           = 5432
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master data directory = /gpdb/gpdata/master/gpseg-1
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum update system catalog         = On
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:- Filespace locations
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-pg_system -> /gpdb/gpdata/master/gpseg-1
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-fspc1 -> /gpdb/gpdata/fspc_master/gpseg-1
Do you want to continue with standby master initialization? Yy|Nn (default=N):

Enter y to confirm the Standby initialization.

> y
20160216:17:54:56:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Syncing Greenplum Database extensions to standby
20160216:17:54:56:011967 gpinitstandby:mdw:gpadmin-[INFO]:-The packages on mdw-std are consistent.
20160216:17:54:56:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Adding standby master to catalog...
20160216:17:54:56:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Database catalog updated successfully.
20160216:17:54:56:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Updating pg_hba.conf file...
20160216:17:55:02:011967 gpinitstandby:mdw:gpadmin-[INFO]:-pg_hba.conf files updated successfully.
20160216:17:55:04:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Updating filespace flat files...
20160216:17:55:04:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Filespace flat file updated successfully.
20160216:17:55:04:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Starting standby master
20160216:17:55:04:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Checking if standby master is running on host: mdw-std  in directory: /gpdb/gpdata/master/gpseg-1
20160216:17:55:05:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Cleaning up pg_hba.conf backup files...
20160216:17:55:11:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Backup files of pg_hba.conf cleaned up successfully.
20160216:17:55:11:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Successfully created standby master on mdw-std
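
If the Standby ever needs to be rebuilt (for example after a failed initialization), gpinitstandby can also remove it from the system catalog first; a minimal example, to be run on the Master:

[gpadmin@mdw config]$ gpinitstandby -r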

The Standby is now built; you can check its state with the gpstate command.

[gpadmin@mdw config]$ gpstate -s
20160216:17:55:37:012095 gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -s
20160216:17:55:37:012095 gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.6.2 build 1'
20160216:17:55:37:012095 gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.6.2 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Nov 12 2015 23:50:28'
20160216:17:55:37:012095 gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20160216:17:55:37:012095 gpstate:mdw:gpadmin-[INFO]:-Gathering data from segments...
.
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:--Master Configuration & Status
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Master host                    = mdw
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Master postgres process ID     = 2474
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Master data directory          = /gpdb/gpdata/master/gpseg-1
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Master port                    = 5432
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Master current role            = dispatch
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Greenplum initsystem version   = 4.3.6.2 build 1
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Greenplum current version      = PostgreSQL 8.2.15 (Greenplum Database 4.3.6.2 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Nov 12 2015 23:50:28
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Postgres version               = 8.2.15
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Master standby                 = mdw-std
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Standby master state           = Standby host passive
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-Segment Instance Status Report
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Segment Info
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Hostname                          = sdw1
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Address                           = sdw1
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Datadir                           = /gpdb/gpdata/primary/gpseg0
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Port                              = 40000
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Status
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      PID                               = 2378
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Configuration reports status as   = Up
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Database status                   = Up
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Segment Info
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Hostname                          = sdw2
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Address                           = sdw2
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Datadir                           = /gpdb/gpdata/primary/gpseg1
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Port                              = 40000
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Status
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      PID                               = 2362
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Configuration reports status as   = Up
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Database status                   = Up
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Segment Info
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Hostname                          = sdw3
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Address                           = sdw3
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Datadir                           = /gpdb/gpdata/primary/gpseg2
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Port                              = 40000
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Status
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      PID                               = 2384
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Configuration reports status as   = Up
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Database status                   = Up

You can also check the Standby's state by looking at the processes running on the Standby host.

[gpadmin@mdw-std gpdata]$ ps -ef | grep gpadmin
gpadmin   4362     1  0 17:55 ?        00:00:00 /gpdb/app/bin/postgres -D /gpdb/gpdata/master/gpseg-1 -p 5432 -b 5 -z 3 --silent-mode=true -i -M master -C -1 -x 0 -y -E
gpadmin   4378  4362  0 17:55 ?        00:00:00 postgres: port  5432, master logger process
gpadmin   4379  4362  0 17:55 ?        00:00:00 postgres: port  5432, startup process   recovering 000000010000000000000002
gpadmin   4390  4362  0 17:55 ?        00:00:00 postgres: port  5432, wal receiver process

Judging from the process names: the first process is the Standby's postmaster, i.e. the daemon; the second is the logger process; the third is the startup process, which replays (recovers) the WAL received from the Master; and the fourth is the WAL receiver process, which maintains the replication connection to the Master and effectively acts as the heartbeat between the Standby and the Master.
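
On the Master, gpstate also provides a dedicated view of the standby master that reports the WAL sender/receiver state and sync progress, which confirms what the process list above suggests:

[gpadmin@mdw config]$ gpstate -f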

With the Standby in place, its information can now be queried from the database.

[gpadmin@mdw config]$ psql -d dbdream
psql (8.2.15)
Type "help" for help.

dbdream=# select * from gp_segment_configuration;
 dbid | content | role | preferred_role | mode | status | port  | hostname | address | replication_port | san_mounts
------+---------+------+----------------+------+--------+-------+----------+---------+------------------+--------
    1 |      -1 | p    | p              | s    | u      |  5432 | mdw      | mdw     |                  |
    2 |       0 | p    | p              | s    | u      | 40000 | sdw1     | sdw1    |                  |
    3 |       1 | p    | p              | s    | u      | 40000 | sdw2     | sdw2    |                  |
    4 |       2 | p    | p              | s    | u      | 40000 | sdw3     | sdw3    |                  |
    5 |      -1 | m    | m              | s    | u      |  5432 | mdw-std  | mdw-std |                  |
(5 rows)

 

Origin blog.csdn.net/murkey/article/details/105625540