Greenplum Study 11 -- Adding Nodes with gpexpand, Part 2 (Add a New Host, Migrate Data First) (Edit the Node Configuration File)

Copyright notice: This is an original post by the author and may not be reposted without permission. https://blog.csdn.net/qq_35767346/article/details/54340680

    The method in the previous post also works if you first add a new segment host and then add segments on it. The first half of this post covers migrating existing segments onto the new host; the second half adds new segments by editing the gpexpand input file.

1. First, configure passwordless SSH trust (as both root and gpadmin)

    (edit /etc/hosts and /gp/all_hosts2 to add the new host)
     [gpadmin@master ~]$ source /usr/local/greenplum-db/greenplum_path.sh
     [gpadmin@master ~]$ gpssh-exkeys -f /gp/all_hosts2
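For reference, /gp/all_hosts2 would simply list every host in the cluster, one hostname per line (hypothetical content, matching the hostnames used throughout this post):

```
master
segment1
segment2
segment3
```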

2. Copy the configured .bashrc file to the new host

    scp .bashrc segment3:`pwd`


3. Create the data directories on the new host and grant ownership

     mkdir -p /data/primary /data/mirror
     chown -R gpadmin:gpadmin /data

4. Start the master only

    [gpadmin@master ~]$ gpstart -m                                   -- master-only mode

    [gpadmin@master ~]$ PGOPTIONS="-c gp_session_role=utility" psql  -- connect in utility mode

    xx=# set allow_system_table_mods='dml';                          -- allow modifying system catalogs


5. Modify the catalog and migrate the segment

Before the change:
xx=# SELECT * from gp_segment_configuration ;
  dbid | content | role | preferred_role | mode | status | port  | hostname | address  | replication_port | san_mounts 
------+---------+------+----------------+------+--------+-------+----------+----------+------------------+------------
    1 |      -1 | p    | p              | s    | u      |  5432 | master   | master   |                  | 
    2 |       0 | p    | p              | s    | u      | 40000 | segment1 | segment1 |            43000 | 
    4 |       0 | m    | m              | s    | u      | 41000 | segment2 | segment2 |            42000 | 
    3 |       1 | p    | p              | s    | u      | 40000 | segment2 | segment2 |            43000 | 
    5 |       1 | m    | m              | s    | u      | 41000 | segment1 | segment1 |            42000 | 
(5 rows)

-- Mark the mirror to be migrated as down ('d') and repoint it at the new host:
update gp_segment_configuration set hostname='segment3',address='segment3',status='d' where(content in(0) and role='m');
-- Switch the matching primary from synchronized ('s') to change tracking ('c'):
update gp_segment_configuration set mode='c' where(content in (0) and role='p');

After the change:
xx=# SELECT * from gp_segment_configuration ;
  dbid | content | role | preferred_role | mode | status | port  | hostname | address  | replication_port | san_mounts 
------+---------+------+----------------+------+--------+-------+----------+----------+------------------+------------
    1 |      -1 | p    | p              | s    | u      |  5432 | master   | master   |                  | 
    3 |       1 | p    | p              | s    | u      | 40000 | segment2 | segment2 |            43000 | 
    5 |       1 | m    | m              | s    | u      | 41000 | segment1 | segment1 |            42000 | 
    4 |       0 | m    | m              | s    | d      | 41000 | segment3 | segment3 |            42000 | 
    2 |       0 | p    | p              | c    | u      | 40000 | segment1 | segment1 |            43000 | 
(5 rows)
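The two catalog updates above follow a fixed pattern; as a minimal sketch (a hypothetical helper, not part of Greenplum), the statements for relocating the mirror of any content ID to a new host could be generated like this:

```python
def relocate_mirror_sql(content_id, new_host):
    """Build the two catalog UPDATE statements used in step 5 (hypothetical helper).

    Marks the mirror of `content_id` as down ('d') and repoints it at
    `new_host`, then switches the matching primary into change tracking ('c')
    so that gprecoverseg -F can rebuild the mirror on the new host.
    """
    mark_mirror = (
        "update gp_segment_configuration "
        f"set hostname='{new_host}',address='{new_host}',status='d' "
        f"where (content in ({content_id}) and role='m');"
    )
    track_primary = (
        "update gp_segment_configuration "
        f"set mode='c' where (content in ({content_id}) and role='p');"
    )
    return mark_mirror, track_primary

# Example: the statements for content 0 moving to segment3
m, p = relocate_mirror_sql(0, "segment3")
print(m)
print(p)
```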

6. Run a full recovery

    [gpadmin@master~]$  gprecoverseg -F

After it completes without errors, check mirror status with gpstate -m:
    20160802:15:17:53:032633 gpstate:master:gpadmin-[INFO]:--------------------------------------------------------------
    20160802:15:17:53:032633 gpstate:master:gpadmin-[INFO]:-   Mirror     Datadir               Port    Status    Data Status      
    20160802:15:17:53:032633 gpstate:master:gpadmin-[INFO]:-    segment3   /data/mirror/gpseg0   41000   Passive   Resynchronizing
    20160802:15:17:53:032633 gpstate:master:gpadmin-[INFO]:-   segment1   /data/mirror/gpseg1   41000   Passive   Synchronized
    20160802:15:17:53:032633 gpstate:master:gpadmin-[INFO]:--------------------------------------------------------------
    ============================================================================================
    20160802:15:18:31:032685 gpstate:master:gpadmin-[INFO]:--------------------------------------------------------------
    20160802:15:18:31:032685 gpstate:master:gpadmin-[INFO]:-   Mirror     Datadir               Port    Status    Data Status    
    20160802:15:18:31:032685 gpstate:master:gpadmin-[INFO]:-    segment3   /data/mirror/gpseg0   41000   Passive   Synchronized
    20160802:15:18:31:032685 gpstate:master:gpadmin-[INFO]:-   segment1   /data/mirror/gpseg1   41000   Passive   Synchronized
    20160802:15:18:31:032685 gpstate:master:gpadmin-[INFO]:--------------------------------------------------------------

7. Shut down the database

Then rename the old gpseg0 directory on segment2 so it is out of the way, and start the database again. Once it comes up cleanly, delete that old directory; the segment migration is now complete.
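The rename can be done by hand with mv; a small sketch (hypothetical helper) that retires a segment data directory by renaming it with a .bak suffix, equivalent to `mv /data/mirror/gpseg0 /data/mirror/gpseg0.bak`:

```python
import os

def retire_seg_dir(path):
    """Rename an obsolete segment data directory out of the way.

    Hypothetical helper: the directory is kept (not deleted) until the
    cluster has restarted cleanly, then it can be removed.
    """
    bak = path + ".bak"
    os.rename(path, bak)
    return bak
```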


8. Edit the gpexpand input file

    [ gpadmin@master  gp]$ vi gpexpand_inputfile_20160803_091436
     segment3:segment3:40001:/data/primary/gpseg2:6:2:p:43001
     segment2:segment2:41001:/data/mirror/gpseg2:7:2:m:42001
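Each line of the input file packs eight colon-separated fields. Assuming the Greenplum 4.x layout used above (hostname:address:port:datadir:dbid:content:preferred_role:replication_port), a quick sanity-check parser could look like this:

```python
from collections import namedtuple

# Field layout of one gpexpand input-file line (assumption based on the
# two example lines above, GP 4.x format with mirrors).
ExpandLine = namedtuple(
    "ExpandLine",
    "hostname address port datadir dbid content preferred_role replication_port",
)

def parse_expand_line(line):
    """Split one input-file line and coerce the numeric fields."""
    parts = line.strip().split(":")
    if len(parts) != 8:
        raise ValueError(f"expected 8 fields, got {len(parts)}: {line!r}")
    rec = ExpandLine(*parts)
    return rec._replace(
        port=int(rec.port),
        dbid=int(rec.dbid),
        content=int(rec.content),
        replication_port=int(rec.replication_port),
    )

rec = parse_expand_line("segment3:segment3:40001:/data/primary/gpseg2:6:2:p:43001")
print(rec.hostname, rec.content, rec.preferred_role)  # → segment3 2 p
```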

9. Run gpexpand -i to add the new segments

    [ gpadmin@master  gp]$ gpexpand -i gpexpand_inputfile_20160803_091436 
With no errors reported, check the status:
    [ gpadmin@master  gp]$ gpstate -m
    20160803:09:28:28:036223 gpstate:master:gpadmin-[INFO]:--------------------------------------------------------------
    20160803:09:28:28:036223 gpstate:master:gpadmin-[INFO]:-   Mirror     Datadir               Port    Status    Data Status    
    20160803:09:28:28:036223 gpstate:master:gpadmin-[INFO]:-   segment3   /data/mirror/gpseg0   41000   Passive    Synchronized
    20160803:09:28:28:036223 gpstate:master:gpadmin-[INFO]:-   segment1   /data/mirror/gpseg1   41000   Passive    Synchronized
    20160803:09:28:28:036223 gpstate:master:gpadmin-[INFO]:-   segment2   /data/mirror/gpseg2   41001   Passive    Synchronized
    20160803:09:28:28:036223 gpstate:master:gpadmin-[INFO]:--------------------------------------------------------------

Before data redistribution:
    x=# select gp_segment_id,count(1) from tab_x1 group by 1 order by 1;
     gp_segment_id | count 
    ---------------+-------
                 0 | 36550
                 1 | 26119
    (2 rows)

10. Start the table redistribution (-d 60:00:00 allows a window of up to 60 hours)

    [ gpadmin@master ~]$  gpexpand -d 60:00:00
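The -d argument is an HH:MM:SS duration; a one-liner makes it easy to see how large the window is in seconds:

```python
def duration_seconds(spec):
    """Convert a gpexpand -d HH:MM:SS duration string into seconds."""
    h, m, s = (int(x) for x in spec.split(":"))
    return h * 3600 + m * 60 + s

print(duration_seconds("60:00:00"))  # → 216000
```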

After data redistribution:
    xx=# select gp_segment_id,count(1) from tab_x1 group by 1 order by 1;
     gp_segment_id | count 
    ---------------+-------
                 0 | 20896
                 1 | 20878
                 2 | 20895
    (3 rows)
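The row counts above show why redistribution matters. A quick way to quantify imbalance is the ratio of the largest segment's row count to the per-segment average (a simple sketch, not a Greenplum utility; 1.0 means perfectly even):

```python
def skew_ratio(counts):
    """Largest per-segment row count divided by the average; 1.0 is ideal."""
    avg = sum(counts) / len(counts)
    return max(counts) / avg

# Counts taken from the tab_x1 example above
print(round(skew_ratio([36550, 26119]), 3))         # before: 2 segments
print(round(skew_ratio([20896, 20878, 20895]), 3))  # after: 3 segments
```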


11. gpexpand -i created an extra gpexpand schema; remove it

    [gpadmin@master ~]$ gpexpand -c     -- remove the gpexpand schema

    Do you want to dump the gpexpand.status_detail table to file? Yy|Nn (default=Y):

    > y

