A detailed record of a one-way OGG DDL replication configuration

OS level configuration list

GoldenGate (GG) supports disaster recovery between heterogeneous platforms, so at the OS level the main concerns are compatibility between the respective platforms and matching versions of the GG software. Because GoldenGate's heterogeneous disaster recovery transmits data over the network using TCP/IP, smooth network communication between the hosts must be ensured.


OS level preparation checklist

Configuration process (source and target)

1.1 Modify IP address, hostname, add hosts information

vi /etc/sysconfig/network
vi /etc/sysconfig/network-scripts/ifcfg-eth0
# Fill in and modify the source and target entries according to the table above.
vi /etc/hosts
service network restart
ping <hostname>

/etc/hosts must map not only the local hostname to its IP, but also the peer host's.

1.2 Create related users and groups, and allocate directories

[Figure: user, group, and directory assignments]

chown -R oracle:oinstall /u01    # run as root: give the directory to the oracle user and oinstall group

1.3 Environment variables for the oracle user (~/.bash_profile)

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

ORACLE_BASE=/u01
ORACLE_HOME=$ORACLE_BASE/oracle
ORACLE_SID=prod
PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_BASE ORACLE_HOME ORACLE_SID PATH

alias sqlplus='rlwrap sqlplus'
alias rman='rlwrap rman'

NLS_LANG="SIMPLIFIED CHINESE_CHINA.AL32UTF8"
export NLS_LANG
export NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/network/lib

Note that ORACLE_SID must point to a different instance name on each host, to distinguish the source from the target.
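As a small illustrative sketch (the hostnames source01/target01 and the SID names are placeholders, not from the original), the profile could derive ORACLE_SID from the hostname so that one file serves both machines:

```shell
# Hypothetical hostname-to-SID mapping; adjust names to your environment.
sid_for_host() {
  case "$1" in
    source01) echo prod    ;;  # source instance
    target01) echo standby ;;  # target instance
    *)        echo unknown ;;
  esac
}

ORACLE_SID=$(sid_for_host "$(hostname -s)")
export ORACLE_SID
```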

DATABASE level configuration list

Basic Information

[Figure: basic database information]

Create the database with DBCA

export DISPLAY=<client address>:0.0
Run DBCA as the oracle user, and double-check the environment variable settings before installing.

SOFT level configuration list

1.1 Introduction to the principle of one-way source-to-target replication

1.1.1 OGG general system and process


Most source and target processes come in pairs, and each side has a manager process (mgr) that uniformly manages all extract, transmission, and replicat processes.


Target side: 1. At the request of the source extract (pump) process, the mgr process scans for an available port and binds it for the requesting extract; 2. The collector process receives the trail data sent by the source extract and writes it to the target trail file; the replicat process then reads the trail and applies the data to the target database (this can run in step with data capture or be executed later). These are background processes with no direct user interaction.


The trail file stores the data changes captured by the source extract. Use add exttrail ./dirdat, extract eora_1 to add the local trail file; the pump process then sends it to the target over TCP/IP, where the trail file queue is consumed by the replicat process.

extract → local trail → pump → TCP/IP → collector → remote trail (rmttrail) → replicat

1.1.2 Process 1: Eini_1 (source) – Rini_1 (target)

  • The initial-load process pair, used to initialize and synchronize tables on the two sides. Verify the table structures and data first; if the two tables are already identical, the initial load can be skipped.

  • For larger tables that must be replicated to another host for disaster recovery, it is better to do the initial load with Oracle Data Pump export/import, which is faster.

  • Tables being synchronized should ideally have primary key constraints; GoldenGate does not identify rows by the table rowid, so without a key efficiency suffers.
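As a quick check for the primary-key point above, you could list replicated tables that lack one (a sketch assuming the replicated schema is SCOTT, as used later in this article):

```sql
-- List tables in the replicated schema that have no primary key
-- constraint; such tables replicate less efficiently.
SELECT t.owner, t.table_name
FROM   dba_tables t
WHERE  t.owner = 'SCOTT'
AND    NOT EXISTS (SELECT 1
                   FROM   dba_constraints c
                   WHERE  c.owner = t.owner
                   AND    c.table_name = t.table_name
                   AND    c.constraint_type = 'P');
```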

1.1.3 Process 2: Eora_1 (source)

  • The source data-extraction process. This main extract process captures the DML changes made to the source table data.

  • The captured change vectors are stored in local queue (trail) files under $OGG_HOME/dirdat while awaiting transmission; run add exttrail ./dirdat/aa, extract eora_1 to create the local trail file.

  • Depending on business requirements, it can run on either the source or the target, capturing the system's data changes.

  • It can also capture DDL changes if the business requires, which needs additional configuration.

  • The initial-load process is itself a special form of extract.

1.1.4 Process 3: Pora_1 (source)

  • The data pump process on the source side. This second extract process, called the data pump, sends the data to the target over the TCP/IP network. It protects the captured data against problems with the network or the target trail, and rules can be set so that GoldenGate automatically purges trail files under certain conditions, improving the availability of the data captured on both sides.
  • If a data pump is not used, the main extract process must send the captured operations directly to the target trail;
  • If a data pump is configured, the extract process writes the captured data to the local trail; the data pump reads that trail and sends it over the network to the target trail. Configure the remote trail with add rmttrail ./dirdat/pa, extract pora_1.

1.1.5 Process 4: Rora_1 (target)

  • The replicat (apply) process on the target side. Once mgr is configured it starts automatically, reads the local queue files under $OGG_HOME/dirdat/,
    and applies the source table data changes:
add REPLICAT  RORA_1, EXTTRAIL ./dirdat/pa
  • The replicat process runs on the target to read the trail files, reconstruct the DML and DDL, and apply them to the target database.

1.1.6 Process 5: Mgr (source & target)

  • During disaster recovery, manager processes run on both the source and the target to start, monitor, and restart the other processes, allocate storage space, and report errors.

1.1.7 Trail files

  • Trail files store the change data captured by extract. Depending on the GoldenGate configuration, trails may reside on the source system, the target system, or an intermediary system.
  • Storing the captured data in trail files decouples it from the extract and replicat processes, giving more options for how and when the data reaches the target; for example, extraction and storage can happen first, with the data sent to the target later.

1.1.8 Flow chart at a glance


1.2 Practical one-way source-to-target configuration

1.2.1 Database configuration


1. Add Database Supplemental Log

 alter database add supplemental log data;

2. Add database Force Logging

 alter database force logging;

3. Open the archive log for the database

 SQL>shutdown immediate;
 SQL>startup mount;
 SQL>alter database archivelog;
 SQL>shutdown immediate;
 SQL>startup;
 SQL>archive log list;

4. Create exclusive ogg users and their table spaces in the database

create tablespace tbs_gguser datafile '+DATA' size 50M autoextend on;
create user ogg identified by Ogg default tablespace tbs_gguser temporary tablespace TEMP quota unlimited on tbs_gguser;

5. Assign relevant permissions for ogg users

[Figure: privileges to grant to the ogg user]

The ogg user's privileges can be granted per the reference list, but in practice insufficient privileges may cause the extract process to fail; you can also consider granting DBA up front and revoking it later.
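As a sketch of explicit grants (an alternative to temporary DBA; DBMS_GOLDENGATE_AUTH is available from 11.2.0.4, and the exact privilege set may vary by version):

```sql
-- Typical baseline privileges for a GoldenGate user (illustrative).
GRANT CREATE SESSION, ALTER SESSION TO ogg;
GRANT SELECT ANY DICTIONARY, SELECT ANY TABLE TO ogg;
GRANT FLASHBACK ANY TABLE TO ogg;
GRANT ALTER ANY TABLE TO ogg;   -- needed by ADD TRANDATA
-- Packaged grant for capture and apply (11.2.0.4 and later):
EXEC dbms_goldengate_auth.grant_admin_privilege('OGG');
-- Or, as noted above: GRANT DBA TO ogg; ... REVOKE DBA FROM ogg; later.
```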

6. Adjust the parameters

alter system set enable_goldengate_replication=true;

Enables GoldenGate replication support.

alter system set recyclebin=off deferred;

If the DDL replication feature is used on the Oracle 10g platform, Oracle's recycle bin must be turned off.

7. Set up OGG DDL support

$ cd /ogg
$ sqlplus / as sysdba
SQL> @marker_setup
SQL> @ddl_setup
SQL> @role_setup
SQL> GRANT GGS_GGSUSER_ROLE TO ogg;

Grant succeeded.

SQL> @ddl_enable

Trigger altered.

SQL> @marker_status.sql
Please enter the name of a schema for the GoldenGate database objects:
ogg
Setting schema name to OGG

MARKER TABLE
-------------------------------
OK

MARKER SEQUENCE
-------------------------------
OK
SQL> @?/rdbms/admin/dbmspool.sql

Package created.


Grant succeeded.

SQL> @ddl_pin.sql
Enter value for 1: ogg

PL/SQL procedure successfully completed.

Enter value for 1: ogg

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.

8. Check the internal user and table status of the database

SQL> select username,account_status from dba_users where account_status='OPEN' and username not in ('OGG','SYS','SYSTEM');

USERNAME		       ACCOUNT_STATUS
------------------------------ --------------------------------
SCOTT			       OPEN



SQL> select owner,table_name,logging from dba_tables where logging = 'NO' and owner in ('SCOTT');

no rows selected

Check the LOGGING status of the enabled accounts and tables in the database

1.2.2 Configuration at the OGG software level

1. Add trandata for the specified tables

> DBLOGIN USERID ogg@prod, PASSWORD Ogg    -- log in to the database from within GGSCI
> add trandata scott.*                     -- add trandata for the tables
> info trandata scott.*                    -- verify

2. Configure Mgr and create some directories

> EDIT PARAMS MGR
port 7809
DYNAMICPORTLIST 7810-7860
PURGEOLDEXTRACTS ./dirdat/*,usecheckpoints, minkeepdays 7
LAGREPORTHOURS 1
LAGINFOMINUTES 30
LAGCRITICALMINUTES 45
AUTORESTART EXTRACT *,RETRIES 15,WAITMINUTES 3,RESETMINUTES 180
AUTOSTART EXTRACT *
The target side is configured the same way.

> start mgr
> info mgr
> create subdirs

3. Configure the initial-load extract process on the source (optional)

> ADD EXTRACT EINI_1, SOURCEISTABLE
> EDIT PARAMS EINI_1

EXTRACT EINI_1: declares an extract process named EINI_1
SETENV: environment variables
USERID: the database OGG user
PASSWORD: the password of the database user OGG
RMTHOST: the target address; if it already resolves via /etc/hosts, the hostname can be used
MGRPORT: the port the target MGR management process listens on
RMTTASK REPLICAT: the group name of the replicat process on the target
TABLE: the source tables whose data is to be initialized
After editing the capture process EINI_1, a replicat process must also be configured on the target, with the name given by the RMTTASK REPLICAT parameter of EINI_1; that is, RINI_1 must also be configured on the target.
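Putting those parameters together, a hypothetical EINI_1 parameter file could look like the following (the hostname target01 is a placeholder; adapt names to your environment):

```
EXTRACT EINI_1
SETENV (NLS_LANG=AMERICAN_AMERICA.AL32UTF8)
USERID ogg@prod, PASSWORD Ogg
RMTHOST target01, MGRPORT 7809
RMTTASK REPLICAT, GROUP RINI_1
TABLE scott.*;
```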

4. Configure the initial-load replicat process on the target (optional)

> ADD REPLICAT RINI_1, SPECIALRUN
> EDIT PARAMS RINI_1

REPLICAT RINI_1: declares a replicat (apply) process named RINI_1
SETENV: language/environment variables, same as for the capture process EINI_1
ASSUMETARGETDEFS: tells OGG that the source and target tables to be synchronized have exactly the same structure (table name, column names, column types, column lengths), so OGG need not check the definitions; if the structures differ, use the SOURCEDEFS parameter instead (see the official OGG documentation)
USERID, PASSWORD: as described for the capture process EINI_1
DISCARDFILE: the location and naming rule for the discard (error) file
MAP: the name of the table captured on the source
TARGET: the name of the table synchronized on the target; it need not be in the same schema
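Assembled from the parameter descriptions above, a hypothetical RINI_1 parameter file might look like this (the discard file path and table mappings are illustrative, not from the original):

```
REPLICAT RINI_1
SETENV (NLS_LANG=AMERICAN_AMERICA.AL32UTF8)
USERID ogg, PASSWORD Ogg
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/RINI_1.dsc, PURGE
MAP scott.*, TARGET scott.*;
```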

5. Start the initial load on the source (optional)


>start extract eini_1    -- start only on the source; the corresponding RINI_1 process starts automatically on the target
>info eini_1
>view report eini_1

The initial-load process runs only once. When it finishes, the tables on both ends are in sync and the process group stops automatically. If the process abends during initialization, inspect it with view report or check $OGG_HOME/ggserr.log.

In practice, for the initial load (steps 3, 4, 5) it is more common to export from the source as of a specific SCN, import into the target, and finally run

> start <replicat_name>, aftercsn <xxxxxxxxxx>

to complete the data synchronization quickly.

6. Configure the extract process on the source system

> EDIT PARAMS EORA_1
Extract EORA_1
setenv (NLS_LANG=AMERICAN_AMERICA.AL32UTF8)
userid ogg@prod,password Ogg
tranlogoptions  asmuser sys@ASM1,asmpassword <xxxx> 
If the ASM password has been forgotten, TRANLOGOPTIONS DBLOGREADER can be used instead on the Oracle 11g platform.
threadoptions maxcommitpropagationdelay 60000
warnlongtrans 2h,checkinterval 3m
dynamicresolution
gettruncates
reportcount every 1 minutes,rate
discardfile ./dirrpt/extsmz.dsc,append,megabytes 1024
exttrail ./dirdat/sz
ddl include mapped
ddloptions addtrandata, report
table scott.*;

> ADD EXTRACT EORA_1, TRANLOG, BEGIN NOW 
  In cluster (RAC) mode, add the THREADS <n> keyword according to the number of nodes to specify the extraction.

  

7. Define the GoldenGate local trail

>add exttrail ./dirdat/sz, extract EORA_1, megabytes 200
  Adds the local trail configuration for this process: the location and file prefix for the extracted/captured data, with a maximum of 200 MB per file.

8. Start the extract process EORA_1

>start eora_1
>info eora_1
$ll $OGG_HOME/dirdat/

Check whether the captured trail files under the dirdat directory begin with the sz prefix.

9. Configure the pump process on the source system

>add extract Pora_1 ,exttrailsource ./dirdat/sz
Adds the pump process PORA_1 to OGG and points it at the local trail file.
>edit param Pora_1

extract Pora_1
passthru    -- PASSTHRU means the data pump does not transform the data
rmthost 192.167.3.99, mgrport 7809,compress
numfiles 5000
dynamicresolution
rmttrail ./dirdat/sz    -- the trail location on the target, for the target replicat process to apply
table scott.*;

10. Add the GoldenGate remote trail on the source system

>add rmttrail ./dirdat/sz,extract Pora_1
> START EXTRACT PORA_1
$ ll dirdat/    # on the target, verify that the specified trail files appear under the OGG installation's dirdat directory

11. Configure the replicat process on the target system

> ADD REPLICAT RORA_1, EXTTRAIL ./dirdat/sz
Adds the replicat process and its trail files; the files come from the source pump process.
> EDIT PARAM RORA_1
REPLICAT RORA_1
SETENV (NLS_LANG=AMERICAN_AMERICA.AL32UTF8)
USERID ogg, PASSWORD Ogg
HANDLECOLLISIONS
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/RORA_aa.DSC, PURGE
MAP scott.emp, TARGET scott.emp;
MAP scott.dept, TARGET scott.dept;

12. Create the GLOBALS parameter file on the target system

> EDIT PARAMS ./GLOBALS
CHECKPOINTTABLE ogg.ggschkpt
>exit
> ll GLOBALS
> DBLOGIN USERID ogg, PASSWORD Ogg
>> ADD CHECKPOINTTABLE
Adds the checkpoint table, enabling replication to resume after an interruption; once configured, a corresponding checkpoint table also appears under the ogg user on the target.

13. Configure an SCN-based expdp on the source system

SQL> col current_scn for 9999999999999999999999999999999999
SQL> select current_scn from v$database;

			CURRENT_SCN
-----------------------------------
			   41423196


The directory object EXP_DIR (owner SYS) points to /backup/expdp_ogg/.


nohup expdp "'/ as sysdba'"  directory=EXP_DIR dumpfile=expdpscott2019101211_%U.dmp  logfile=expdpscott2019101211.log schemas=scott content=all  flashback_scn=42256141 parallel=2 cluster=N &

14. Configure the corresponding impdp on the target system

nohup impdp "'/ as sysdba'"  directory=EXP_DIR dumpfile=expdpscott2019101211_%U.dmp  logfile=impdpscott2019101211.log parallel=2 cluster=N &

15. Start the replicat process on the target system

$ cd /ogg
$ ./ggsci

>start Rora_1, aftercsn 42256141

Finally, a general configuration sequence diagram

Origin blog.csdn.net/weixin_42145355/article/details/105429052