Syncing Oracle Data to Kafka with OGG

OGG (Oracle GoldenGate) is Oracle's data replication tool.
Source and target configurations:

- Source: Oracle Release 11.2.0.1.0; OGG: Oracle GoldenGate 11.2.1.0.3 for Oracle on Linux x86-64
- Target: Kafka 0.10.1 and Confluent 3.1.2; OGG: Oracle GoldenGate for Big Data 12.3.1.1.1 on Linux x86-64

1. Download

Older versions can be looked up and downloaded from Oracle's site.
Note: the source and target packages are different; the target needs Oracle GoldenGate for Big Data, while the source needs Oracle GoldenGate for Oracle.

2. Source (Oracle) Configuration

Note: the source is a machine where Oracle is already installed and the Oracle environment variables are already configured.

2.1 Unpack

First create the ogg directory:

mkdir -p /opt/ogg
unzip V34339-01.zip

Unzipping yields a tar archive; extract that too:

tar xf fbo_ggs_Linux_x64_ora11g_64bit.tar -C /opt/ogg
chown -R oracle:oinstall /opt/ogg (give the oracle user ownership of ogg; some later steps only succeed when run as oracle)

2.2 Configure OGG environment variables

Edit as the oracle user:

vi /home/oracle/.bash_profile
export PATH
export ORACLE_BASE=/opt/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
#export PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_SID=ora11g
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib:/usr/lib64
export OGG_HOME=/opt/ogg
export PATH=$OGG_HOME:$PATH:$ORACLE_HOME/bin

Apply the changes:

source /home/oracle/.bash_profile

Test the ogg command:

ggsci

If the command works you can move on; otherwise recheck the previous steps.

2.3 Enable archive log mode in Oracle

su - oracle
sqlplus / as sysdba

Run the following to check whether archive log mode is currently enabled:

SQL> archive log list
Database log mode          No Archive Mode
Automatic archival         Disabled
Archive destination        USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     12
Current log sequence           14

If it shows Disabled, enable it manually:

conn / as sysdba (connect to the database as DBA)
shutdown immediate (shut the database down immediately)
startup mount (start the instance and mount the database without opening it)
alter database archivelog; (switch the database to archive log mode)
alter database open; (open the database)
alter system archive log start; (enable automatic archiving)

Run the check again:

archive log list
Database log mode          Archive Mode
Automatic archival         Enabled
Archive destination        USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     12
Next log sequence to archive   14
Current log sequence           14

Automatic archival now shows Enabled, so archive log mode was turned on successfully.

2.4 Enable force logging and supplemental logging in Oracle

OGG relies on supplemental logging for real-time capture, so the relevant logging must be enabled to guarantee that transaction content can be read. Check the current state with:

select force_logging, supplemental_log_data_min from v$database;
FORCE_ SUPPLEMENTAL_LOG
------ ----------------
NO     NO

If either value is NO, change it with:

alter database force logging;
alter database add supplemental log data;

Query again; YES for both means you are done:

SQL> select force_logging, supplemental_log_data_min from v$database;

FORCE_ SUPPLEMENTAL_LOG
------ ----------------
YES    YES

2.5 Create the replication user in Oracle

First, as root, create the data directory and give it to oracle:

mkdir -p /home/oracle/oggdata/orcl
chown -R oracle:oinstall /home/oracle/oggdata/orcl

Then run the following SQL:

SQL> create tablespace oggtbs datafile '/home/oracle/oggdata/orcl/oggtbs01.dbf' size 1000M autoextend on;

Tablespace created.

SQL>  create user ogg identified by ogg default tablespace oggtbs;

User created.

SQL> grant dba to ogg;

Grant succeeded.
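
A quick sanity check that the new account works (a minimal sketch; run as the oracle user):

sqlplus ogg/ogg
SQL> select user from dual;
SQL> exit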

2.6 Initialize OGG

cd /opt/ogg/
ggsci
create subdirs

The session looks like this:

ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.3 14400833 OGGCORE_11.2.1.0.3_PLATFORMS_120823.1258_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Aug 23 2012 20:20:21

Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.



GGSCI (ambari.master.com) 1> create subdirs

Creating subdirectories under current directory /root

Parameter files                /root/dirprm: created
Report files                   /root/dirrpt: created
Checkpoint files               /root/dirchk: created
Process status files           /root/dirpcs: created
SQL script files               /root/dirsql: created
Database definitions files     /root/dirdef: created
Extract data files             /root/dirdat: created
Temporary files                /root/dirtmp: created
Stdout files                   /root/dirout: created


GGSCI (ambari.master.com) 2>

2.7 Create a test table in Oracle

Create a user and, under it, a test table; the username, password, and table name are all test_ogg.

create user test_ogg  identified by test_ogg default tablespace users;
grant dba to test_ogg;
conn test_ogg/test_ogg;
create table test_ogg(id int, name varchar(20), primary key(id));

3. Target (Kafka) Configuration

3.1 Unpack

mkdir -p /opt/ogg
unzip 123111_ggs_Adapters_Linux_x64.zip 
tar xf ggs_Adapters_Linux_x64.tar  -C /opt/ogg/

3.2 Environment variables

vim /etc/profile
export OGG_HOME=/opt/ogg
export PATH=$OGG_HOME:$PATH
export JAVA_HOME=/usr/java/jdk1.8.0_77
export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/amd64:$JAVA_HOME/jre/lib/amd64/server:$JAVA_HOME/jre/lib/amd64/libjsig.so:$JAVA_HOME/jre/lib/amd64/server/libjvm.so:$OGG_HOME/lib
source /etc/profile

Test the ogg command here as well:

ggsci

3.3 Initialize directories

cd /opt/ogg/
ggsci
create subdirs

4. OGG Source Configuration

4.1 Configure OGG global parameters

Switch to the oracle user first:

su - oracle
cd /opt/ogg
ggsci
GGSCI (ambari.master.com) 1> dblogin userid ogg password ogg
Successfully logged into database.

GGSCI (ambari.master.com) 2> edit param ./globals

Then add the following (the editor works like vim):

oggschema ogg

4.2 Configure the manager (mgr)

GGSCI (ambari.master.com) 3> edit param mgr
PORT 7809
DYNAMICPORTLIST 7810-7909
AUTORESTART EXTRACT *,RETRIES 5,WAITMINUTES 3
PURGEOLDEXTRACTS ./dirdat/*,usecheckpoints, minkeepdays 3

Notes: PORT is the manager's listening port; DYNAMICPORTLIST is a list of ports from which one is chosen when the specified mgr port is unavailable (the range may cover at most 256 ports); the AUTORESTART setting restarts all EXTRACT processes, at most 5 times with 3-minute intervals; PURGEOLDEXTRACTS handles periodic cleanup of trail files.
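
After saving, you can review the file and check the manager from GGSCI; view param and info are standard GGSCI commands (mgr itself is started later, in 6.1):

view param mgr
info mgr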

4.3 Add the replicated table

GGSCI (ambari.master.com) 4> add trandata test_ogg.test_ogg

Logging of supplemental redo data enabled for table TEST_OGG.TEST_OGG.

GGSCI (ambari.master.com) 5> info trandata test_ogg.test_ogg

Logging of supplemental redo log data is enabled for table TEST_OGG.TEST_OGG.

Columns supplementally logged for table TEST_OGG.TEST_OGG: ID

4.4 Configure the extract process

GGSCI (ambari.master.com) 6> edit param extkafka
extract extkafka
dynamicresolution
SETENV (ORACLE_SID = "ora11g")
SETENV (NLS_LANG = "american_america.AL32UTF8")
userid ogg,password ogg
exttrail /opt/ogg/dirdat/to
table test_ogg.test_ogg;

Notes: the first line names the extract process; dynamicresolution enables dynamic name resolution; SETENV sets environment variables, here the Oracle SID and the character set; userid ogg, password ogg is the account OGG uses to connect to Oracle, i.e. the replication user created in 2.5; exttrail defines where trail files are stored and their name prefix (the prefix may only be two characters, OGG pads the rest); table names the replicated table, supports * wildcards, and the line must end with a semicolon.

Add the extract process:

GGSCI (ambari.master.com) 16> add extract extkafka,tranlog,begin now
EXTRACT added.

(Note: if this reports

ERROR: Could not create checkpoint file /opt/ogg/dirchk/EXTKAFKA.cpe (error 2, No such file or directory).

the subdirectories were created in the wrong place; the transcript in 2.6 created them under /root. Run the command below from /opt/ogg in GGSCI, then add the extract again.

create subdirs

)

Define the trail file and bind it to the extract process:

GGSCI (ambari.master.com) 17> add exttrail /opt/ogg/dirdat/to,extract extkafka
EXTTRAIL added.
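
Optionally, confirm what was registered; these standard GGSCI info commands show the extract and its bound trail:

info extract extkafka, detail
info exttrail /opt/ogg/dirdat/to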

4.5 Configure the pump process

A pump process is essentially also an extract; its only job is to ship the trail files to the target. Its configuration is much like the extract's, it is just logically called a pump process.

GGSCI (ambari.master.com) 18> edit param pukafka
extract pukafka
passthru
dynamicresolution
userid ogg,password ogg
rmthost 10.68.7.185 mgrport 7809
rmttrail /opt/ogg/dirdat/to
table test_ogg.test_ogg;

Notes: the first line names the extract (pump) process; passthru stops OGG from interacting with Oracle, which is fine here since the pump only forwards data; dynamicresolution enables dynamic name resolution; userid ogg, password ogg is the OGG database account; rmthost and mgrport are the address and manager listening port of the target (Kafka) side's OGG; rmttrail is the storage location and name of the trail files on the target.
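
Since the pump pushes trail data over the network, it helps to confirm first that the target manager port is reachable from the source host; a simple sketch (any TCP check will do):

telnet 10.68.7.185 7809
(or: nc -vz 10.68.7.185 7809)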

Bind the local trail file and the target-side trail file to the pump process:

GGSCI (ambari.master.com) 1> add extract pukafka,exttrailsource /opt/ogg/dirdat/to
EXTRACT added.
GGSCI (ambari.master.com) 2> add rmttrail /opt/ogg/dirdat/to,extract pukafka
RMTTRAIL added.

4.6 Configure the definition file

Transfers between Oracle and MySQL or Hadoop-ecosystem systems (HDFS, Hive, Kafka, etc.) are heterogeneous, so the mapping between table definitions must be described explicitly. In the OGG command line run:

GGSCI (ambari.master.com) 3> edit param test_ogg
defsfile /opt/ogg/dirdef/test_ogg.test_ogg
userid ogg,password ogg
table test_ogg.test_ogg;

Then run the following from the OGG home directory (as the oracle user):

./defgen paramfile dirprm/test_ogg.prm

***********************************************************************
        Oracle GoldenGate Table Definition Generator for Oracle
 Version 11.2.1.0.3 14400833 OGGCORE_11.2.1.0.3_PLATFORMS_120823.1258
   Linux, x64, 64bit (optimized), Oracle 11g on Aug 23 2012 16:58:29

Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.


                    Starting at 2018-05-23 05:03:04
***********************************************************************

Operating System Version:
Linux
Version #1 SMP Wed Apr 12 15:04:24 UTC 2017, Release 3.10.0-514.16.1.el7.x86_64
Node: ambari.master.com
Machine: x86_64
                         soft limit   hard limit
Address Space Size   :    unlimited    unlimited
Heap Size            :    unlimited    unlimited
File Size            :    unlimited    unlimited
CPU Time             :    unlimited    unlimited

Process id: 13126

***********************************************************************
**            Running with the following parameters                  **
***********************************************************************
defsfile /opt/ogg/dirdef/test_ogg.test_ogg
userid ogg,password ***
table test_ogg.test_ogg;
Retrieving definition for TEST_OGG.TEST_OGG



Definitions generated for 1 table in /opt/ogg/dirdef/test_ogg.test_ogg

Copy the generated /opt/ogg/dirdef/test_ogg.test_ogg to the dirdef directory under the target's OGG home:

scp -r /opt/ogg/dirdef/test_ogg.test_ogg root@10.68.7.185:/opt/ogg/dirdef/

5. OGG Target Configuration

5.1 Start the Kafka service

This walkthrough uses the Kafka that ships with HDP (whose default broker port, 6667, shows up in confluent.properties later).

5.2 Configure the manager (mgr)

GGSCI (ambari.slave1.com) 1>  edit param mgr
PORT 7809
DYNAMICPORTLIST 7810-7909
AUTORESTART EXTRACT *,RETRIES 5,WAITMINUTES 3
PURGEOLDEXTRACTS ./dirdat/*,usecheckpoints, minkeepdays 3

5.3 Configure the checkpoint

The checkpoint is a recorded offset that makes the replication position traceable; just add a checkpoint table to the global configuration.

edit  param  ./GLOBALS
CHECKPOINTTABLE test_ogg.checkpoint

5.4 Configure the replicat process

GGSCI (ambari.slave1.com) 4> edit param rekafka
REPLICAT rekafka
sourcedefs /opt/ogg/dirdef/test_ogg.test_ogg
TARGETDB LIBFILE libggjava.so SET property=dirprm/kafka.props
REPORTCOUNT EVERY 1 MINUTES, RATE 
GROUPTRANSOPS 10000
MAP test_ogg.test_ogg, TARGET test_ogg.test_ogg;

Notes: REPLICAT rekafka names the replicat process; sourcedefs points at the table definition file produced on the source in 4.6; TARGETDB LIBFILE loads the Kafka adapter library together with its configuration file, which lives at dirprm/kafka.props under the OGG home; REPORTCOUNT sets how often the replication report is generated; GROUPTRANSOPS groups transactions together when applying them, reducing I/O; MAP defines the source-to-target mapping.
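
MAP also accepts wildcards; a hypothetical variant of the last line (not used in this walkthrough) that would replicate every table in the schema:

MAP test_ogg.*, TARGET test_ogg.*;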

5.5 Configure kafka.props

The Kafka Connect handler is used here, which requires Confluent; just download the matching version and unpack it.
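
A sketch of that installation, assuming the 3.1.2 tarball and the /root/zzz layout that gg.classpath references below (the archive name depends on the exact download):

mkdir -p /root/zzz
cd /root/zzz
tar xzf confluent-3.1.2-2.11.tar.gz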

cd /opt/ogg/dirprm/
vim kafka.props
gg.handlerlist=kafkaconnect

#The handler properties
gg.handler.kafkaconnect.type=kafkaconnect
gg.handler.kafkaconnect.kafkaProducerConfigFile=confluent.properties
gg.handler.kafkaconnect.mode=op
#The following selects the topic name based on the fully qualified table name
gg.handler.kafkaconnect.topicMappingTemplate=test_ogg
#The following selects the message key using the concatenated primary keys
gg.handler.kafkaconnect.keyMappingTemplate=id

#The formatter properties
gg.handler.kafkaconnect.messageFormatting=row
gg.handler.kafkaconnect.insertOpKey=I
gg.handler.kafkaconnect.updateOpKey=U
gg.handler.kafkaconnect.deleteOpKey=D
gg.handler.kafkaconnect.truncateOpKey=T
gg.handler.kafkaconnect.treatAllColumnsAsStrings=false
gg.handler.kafkaconnect.iso8601Format=false
gg.handler.kafkaconnect.pkUpdateHandling=abend
gg.handler.kafkaconnect.includeTableName=true
gg.handler.kafkaconnect.includeOpType=true
gg.handler.kafkaconnect.includeOpTimestamp=true
gg.handler.kafkaconnect.includeCurrentTimestamp=true
gg.handler.kafkaconnect.includePosition=true
gg.handler.kafkaconnect.includePrimaryKeys=false
gg.handler.kafkaconnect.includeTokens=false

goldengate.userexit.timestamp=utc
goldengate.userexit.writers=javawriter
javawriter.stats.display=TRUE
javawriter.stats.full=TRUE

gg.log=log4j
gg.log.level=INFO

gg.report.time=30sec

gg.classpath=/root/zzz/confluent-3.1.2/share/java/kafka-serde-tools/*:/root/zzz/confluent-3.1.2/share/java/kafka/*:/root/zzz/confluent-3.1.2/share/java/confluent-common/*

javawriter.bootoptions=-Xmx512m -Xms32m -Djava.class.path=.:ggjava/ggjava.jar:./dirprm

vi confluent.properties
bootstrap.servers=10.68.7.185:6667

value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
key.serializer=org.apache.kafka.common.serialization.ByteArraySerializer

value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true

internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter=org.apache.kafka.connect.json.JsonConverter

Note: any trailing comments must be removed from confluent.properties; OGG does not recognize comments in this file and will report an error if they remain.
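
If you start from a template that still carries trailing comments, stripping them can be scripted; a rough sketch (it deletes everything from # to end of line, so review the result before use):

sed -i 's/[[:space:]]*#.*$//' /opt/ogg/dirprm/confluent.properties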

5.6 Add the trail file to the replicat process

GGSCI (ambari.slave1.com) 2> add replicat rekafka exttrail /opt/ogg/dirdat/to,checkpointtable test_ogg.checkpoint
REPLICAT added.

6. Testing

6.1 Start all processes

In the OGG command line on both the source and the target, start each process with start [process name].
The start order is: source mgr, then target mgr, then source extract, then source pump, then target replicat.
In every case, run ggsci from the OGG home directory to enter the OGG command line.
On the source, in order:

start mgr
start extkafka
start pukafka

On the target:

start mgr
start rekafka

Check status with info all or info [process name]; success means every process is RUNNING.
On the source:

GGSCI (ambari.master.com) 5> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     EXTKAFKA    04:50:21      00:00:03    
EXTRACT     RUNNING     PUKAFKA     00:00:00      00:00:03

On the target:

GGSCI (ambari.slave1.com) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
REPLICAT    RUNNING     REKAFKA     00:00:00      00:00:01

6.2 Troubleshooting

If any process is not RUNNING, inspect the logs to find and fix the problem, in either of two ways:

vim ggserr.log

or from the OGG command line, taking the rekafka process as an example:

GGSCI (ambari.slave1.com) 2> view report rekafka

6.3 Test the synchronization

Now run some SQL on the source:

conn test_ogg/test_ogg
insert into test_ogg values(1,'test');
commit;
update test_ogg set name='zhangsan' where id=1;
commit;
delete test_ogg where id=1;
commit;

Check the trail file on the source:

ls -l /opt/ogg/dirdat/to*
-rw-rw-rw- 1 oracle oinstall 1464 May 23 10:31 /opt/ogg/dirdat/to000000

Check the trail file on the target:

ls -l /opt/ogg/dirdat/to*
-rw-r----- 1 root root 1504 May 23 10:31 /opt/ogg/dirdat/to000000

Check whether Kafka automatically created the corresponding topic:

bin/kafka-topics.sh --list --zookeeper localhost:2181

If test_ogg shows up in the list, the topic side is fine.
Use a consumer to check whether the changes were synchronized:

bin/kafka-console-consumer.sh --bootstrap-server 192.168.44.129:9092 --topic test_ogg --from-beginning

6.4 Data formats

Different converter settings in confluent.properties yield different message formats:

key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false

Struct{table=TEST_OGG.TEST_OGG,op_type=I,op_ts=2018-06-22 11:24:50.737881,current_ts=2018-06-22 11:24:57.102000,pos=00000000000000001764,ID=3.0,NAME=ss}

value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false

{"table":"TEST_OGG.TEST_OGG","op_type":"I","op_ts":"2018-06-22 11:27:09.737641","current_ts":"2018-06-22 11:27:14.680000","pos":"00000000000000001901","ID":4.0,"NAME":"jj"}

value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true

{"schema":{"type":"struct","fields":[{"type":"string","optional":true,"field":"table"},{"type":"string","optional":true,"field":"op_type"},{"type":"string","optional":true,"field":"op_ts"},{"type":"string","optional":true,"field":"current_ts"},{"type":"string","optional":true,"field":"pos"},{"type":"double","optional":true,"field":"ID"},{"type":"string","optional":true,"field":"NAME"}],"optional":false,"name":"TEST_OGG.TEST_OGG"},"payload":{"table":"TEST_OGG.TEST_OGG","op_type":"I","op_ts":"2018-06-22 11:28:40.737534","current_ts":"2018-06-22 11:28:45.620000","pos":"00000000000000002040","ID":5.0,"NAME":"kk"}}
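
For a quick tally of which operation types arrived, the schemaless JSON output can be filtered directly; a minimal sketch reusing the consumer command from 6.3 (--timeout-ms just makes the consumer exit once no more messages arrive):

bin/kafka-console-consumer.sh --bootstrap-server 192.168.44.129:9092 --topic test_ogg --from-beginning --timeout-ms 10000 | grep -o '"op_type":"[IUDT]"' | sort | uniq -c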

Reference:

https://dongkelun.com/2018/05/23/oggOracle2Kafka/
