Environment preparation:
Two MyCat servers, two Keepalived servers, and two HAProxy servers, plus two MySQL database servers.
Installation and configuration are shown here using two servers as an example:
192.168.0.42
192.168.0.40
MyCat installation:
Download mycat-server:
Mycat-server-1.6.7.4-release-linux.tar.gz
Upload it to the installation directory on the server, then extract it into the current directory:
tar -zxvf Mycat-server-1.6.7.4-release-linux.tar.gz
Enter the mycat directory:
cd mycat
Edit server.xml in the conf directory:
vim conf/server.xml
<user name="root" defaultAccount="true">
    <property name="password">root</property>
    <property name="schemas">order</property>
    <!-- used as the default schema before a "No MyCAT Database selected" error would be raised;
         if unset it is null and the error is thrown -->
    <!-- table-level DML privilege settings -->
    <!--
    <privileges check="false">
        <schema name="TESTDB" dml="0110" >
            <table name="tb01" dml="0000"></table>
            <table name="tb02" dml="1111"></table>
        </schema>
    </privileges>
    -->
</user>
<user name="user">
    <property name="password">user</property>
    <property name="schemas">order</property>
    <property name="readOnly">true</property>
</user>
The user tags define the accounts (two users are created here: root and user).
Username: root
Password: root (set this yourself)
schemas must be set to the name of the MySQL database, and it must be configured for both users.
Edit schema.xml (this is where the data sources and the read/write-splitting and sharding strategies are configured):
vim conf/schema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="order" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn41">
<!-- auto sharding by id (long) -->
<!-- splitTableNames: when enabled, the table "name" attribute may list several comma-separated tables that all share this configuration -->
<table name="orders" dataNode="dn41,dn42" rule="sharding-by-murmur">
<childTable name="order_items" joinKey="order_id" parentKey="id"/>
<childTable name="order_status" joinKey="order_id" parentKey="id"/>
</table>
<!-- <table name="oc_call" primaryKey="ID" dataNode="dn1$0-743" rule="latest-month-calldate"
/> -->
</schema>
<!-- <dataNode name="dn1$0-743" dataHost="localhost1" database="db$0-743"
/> -->
<dataNode name="dn41" dataHost="db41" database="order" />
<dataNode name="dn42" dataHost="db42" database="order" />
<!--<dataNode name="dn4" dataHost="sequoiadb1" database="SAMPLE" />
<dataNode name="jdbc_dn1" dataHost="jdbchost" database="db1" />
<dataNode name="jdbc_dn2" dataHost="jdbchost" database="db2" />
<dataNode name="jdbc_dn3" dataHost="jdbchost" database="db3" /> -->
<dataHost name="db41" maxCon="1000" minCon="10" balance="0"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<!-- can have multi write hosts -->
<writeHost host="M1" url="192.168.0.41:3306" user="hanye" password="HanYe@123456">
<!-- <readHost host="S1" url="192.168.0.40:3306" user="hanye" password="HanYe@123456"/> -->
</writeHost>
<!-- <writeHost host="M2" url="192.168.0.40:3306" user="hanye" password="HanYe@123456"/> -->
</dataHost>
<dataHost name="db42" maxCon="1000" minCon="10" balance="0"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<!-- can have multi write hosts -->
<writeHost host="M1" url="192.168.0.42:3306" user="hanye"
password="HanYe@123456">
</writeHost>
<!-- <writeHost host="hostM2" url="localhost:3316" user="root" password="123456"/> -->
</dataHost>
<!--
<dataHost name="sequoiadb1" maxCon="1000" minCon="1" balance="0" dbType="sequoiadb" dbDriver="jdbc">
<heartbeat> </heartbeat>
<writeHost host="hostM1" url="sequoiadb://1426587161.dbaas.sequoialab.net:11920/SAMPLE" user="jifeng" password="jifeng"></writeHost>
</dataHost>
<dataHost name="oracle1" maxCon="1000" minCon="1" balance="0" writeType="0" dbType="oracle" dbDriver="jdbc"> <heartbeat>select 1 from dual</heartbeat>
<connectionInitSql>alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss'</connectionInitSql>
<writeHost host="hostM1" url="jdbc:oracle:thin:@127.0.0.1:1521:nange" user="base" password="123456" > </writeHost> </dataHost>
<dataHost name="jdbchost" maxCon="1000" minCon="1" balance="0" writeType="0" dbType="mongodb" dbDriver="jdbc">
<heartbeat>select user()</heartbeat>
<writeHost host="hostM" url="mongodb://192.168.0.99/test" user="admin" password="123456" ></writeHost> </dataHost>
<dataHost name="sparksql" maxCon="1000" minCon="1" balance="0" dbType="spark" dbDriver="jdbc">
<heartbeat> </heartbeat>
<writeHost host="hostM1" url="jdbc:hive2://feng01:10000" user="jifeng" password="jifeng"></writeHost> </dataHost> -->
<!-- <dataHost name="jdbchost" maxCon="1000" minCon="10" balance="0" dbType="mysql"
dbDriver="jdbc"> <heartbeat>select user()</heartbeat> <writeHost host="hostM1"
url="jdbc:mysql://localhost:3306" user="root" password="123456"> </writeHost>
</dataHost> -->
</mycat:schema>
schema configures the logical database and its sharded tables.
The name="order" here must match the schema configured in server.xml.
rule="sharding-by-murmur" selects the sharding rule; the available rules are defined in rule.xml.
dataNode="dn41" sets the default data node; here it points to dn41.
childTable declares a child table whose rows are stored on the same shard as their parent (linked via joinKey/parentKey).
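The sharding-by-murmur rule referenced above is defined in conf/rule.xml. In the stock MyCat 1.6 file it hashes the id column into count buckets, and count must equal the number of data nodes (2 here, for dn41 and dn42). A sketch of the relevant entries; verify the sharding column and count against your own rule.xml:

```xml
<tableRule name="sharding-by-murmur">
    <rule>
        <columns>id</columns>                  <!-- sharding column -->
        <algorithm>murmur</algorithm>
    </rule>
</tableRule>
<function name="murmur" class="io.mycat.route.function.PartitionByMurmurHash">
    <property name="seed">0</property>
    <property name="count">2</property>        <!-- must match the number of data nodes -->
    <property name="virtualBucketTimes">160</property>
</function>
```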
<schema name="order" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn41">
<!-- auto sharding by id (long) -->
<!-- splitTableNames: when enabled, the table "name" attribute may list several comma-separated tables that all share this configuration -->
<table name="orders" dataNode="dn41,dn42" rule="sharding-by-murmur">
<childTable name="order_items" joinKey="order_id" parentKey="id"/>
<childTable name="order_status" joinKey="order_id" parentKey="id"/>
</table>
<!-- <table name="oc_call" primaryKey="ID" dataNode="dn1$0-743" rule="latest-month-calldate"
/> -->
</schema>
Data node configuration:
database="order" is the name of the actual backend database.
<dataNode name="dn41" dataHost="db41" database="order" />
<dataNode name="dn42" dataHost="db42" database="order" />
This section configures the data sources (the individual parameters are not covered in detail here; see the full configuration reference).
<dataHost name="db41" maxCon="1000" minCon="10" balance="0"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<!-- can have multi write hosts -->
<writeHost host="M1" url="192.168.0.41:3306" user="hanye" password="HanYe@123456">
<!-- <readHost host="S1" url="192.168.0.40:3306" user="hanye" password="HanYe@123456"/> -->
</writeHost>
<!-- <writeHost host="M2" url="192.168.0.40:3306" user="hanye" password="HanYe@123456"/> -->
</dataHost>
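Note that the physical order database, and the sharded tables themselves, must already exist on both backend MySQL servers before MyCat is started. A minimal DDL sketch, assuming the hanye account from schema.xml has DDL privileges; the columns are illustrative only, not prescribed by this guide. Run it through the mysql client on both 192.168.0.41 and 192.168.0.42:

```sql
CREATE DATABASE IF NOT EXISTS `order`;
USE `order`;
-- parent table (sharded by id via sharding-by-murmur)
CREATE TABLE IF NOT EXISTS orders       (id BIGINT PRIMARY KEY, amount DECIMAL(10,2));
-- child tables, co-located with their parent row via order_id
CREATE TABLE IF NOT EXISTS order_items  (id BIGINT PRIMARY KEY, order_id BIGINT, sku VARCHAR(64));
CREATE TABLE IF NOT EXISTS order_status (id BIGINT PRIMARY KEY, order_id BIGINT, status VARCHAR(32));
```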
This completes the MyCat configuration.
Go back to the mycat directory and manage the service with the following commands.
Start the MyCat server:
./bin/mycat start
Stop it:
./bin/mycat stop
Check its status:
./bin/mycat status
Run it in the foreground (console):
./bin/mycat console
Note: MyCat listens on port 8066 by default; the MyCat management interface listens on port 9066.
At this point you can connect to MyCat with Navicat (or any other MySQL client).
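To verify the setup from a client machine, connect with a standard mysql client using the root user defined in server.xml (a sketch, assuming MyCat is running on 192.168.0.42 and reachable from the client):

```shell
# SQL port 8066: normal queries against the logical schema
mysql -h192.168.0.42 -P8066 -uroot -proot -e 'SHOW DATABASES;'
# management port 9066: MyCat admin commands, e.g. list the data nodes
mysql -h192.168.0.42 -P9066 -uroot -proot -e 'show @@datanode;'
```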
HAProxy installation
Search for haproxy:
yum search haproxy
Install haproxy:
yum -y install haproxy.x86_64
Configure haproxy:
vim /etc/haproxy/haproxy.cfg
For the database traffic, set mode to tcp.
Configure the backend with the database (MyCat) server addresses.
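A minimal sketch of the relevant haproxy.cfg section, assuming MyCat listens on port 8066 on both nodes and HAProxy listens on port 5000 (the port the keepalived real_server entry below points at); adapt the names and health-check timings to your environment:

```
listen mycat
    bind 0.0.0.0:5000
    mode tcp                     # MySQL traffic is not HTTP, so TCP mode is required
    balance roundrobin
    server mycat1 192.168.0.42:8066 check inter 2000 rise 2 fall 3
    server mycat2 192.168.0.40:8066 check inter 2000 rise 2 fall 3
```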
Start haproxy and check the haproxy process:
haproxy -f /etc/haproxy/haproxy.cfg
ps -ef | grep haproxy
Keepalived installation
Search for keepalived:
yum search keepalived
Install keepalived:
yum -y install keepalived.x86_64
Configure keepalived:
vim /etc/keepalived/keepalived.conf
Start keepalived:
service keepalived start
Or start it with an explicit configuration file:
keepalived -f /etc/keepalived/keepalived.conf
Check the process:
ps -ef | grep keepalived
The killall command (used below for the haproxy health check):
killall -0 haproxy
killall must be installed before it can be used:
yum search killall
yum install psmisc.x86_64
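The -0 in killall -0 haproxy means "signal 0": nothing is actually delivered to the process; the command's exit status only reports whether a matching process exists, which is exactly what keepalived's health check relies on. A quick demonstration of the same semantics with the shell builtin kill:

```shell
sleep 60 &                                      # start a background process to probe
pid=$!
kill -0 "$pid" && echo "process $pid is alive"  # exit status 0: the process exists
kill "$pid"                                     # clean up
```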
Configuring the VIP
On every node, stop NetworkManager, which can conflict with the network interface configuration:
systemctl stop NetworkManager
systemctl disable NetworkManager
Copy the loopback interface file ifcfg-lo (under /etc/sysconfig/network-scripts/):
cp ifcfg-lo ifcfg-lo:1
Edit the copy:
vim ifcfg-lo:1
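A sketch of what ifcfg-lo:1 typically contains after editing, assuming the VIP 192.168.0.50 used throughout this guide (a /32 netmask so the address is bound only to this host's loopback):

```
DEVICE=lo:1
IPADDR=192.168.0.50
NETMASK=255.255.255.255
ONBOOT=yes
NAME=loopback
```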
Bring the alias interface up to apply the configuration:
ifup lo:1
At this point the keepalived + haproxy + MyCat high-availability database setup is complete.
Note: VIPs (virtual IPs) are not supported on Alibaba Cloud, and on Tencent Cloud they must be purchased.
The VIP is configured in /etc/keepalived/keepalived.conf; here it is set to 192.168.0.50:
virtual_ipaddress {
    192.168.0.50
}
The full configuration is given below:
! Configuration File for keepalived

global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict    # must stay commented out, otherwise the VIP does not take effect
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

# health check: verify the haproxy process is alive
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 2
}

vrrp_instance VI_1 {
    state MASTER              # master node; set BACKUP on the standby
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.50
    }
    # run the health check defined above
    track_script {
        check_haproxy
    }
}

# VIP (virtual server) configuration
virtual_server 192.168.0.50 6000 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP
    # real server: the connection address of the haproxy server
    real_server 192.168.0.40 5000 {
        weight 1
    }
}