Learn Big Data Fast -- Codis (Part 12)

Codis Cluster Installation

Overview

   Codis is a distributed Redis solution. To upstream applications, connecting to a Codis Proxy is essentially no different from connecting to a native Redis server (apart from a small list of unsupported commands), so an application can use it just like standalone Redis. Under the hood, Codis handles request forwarding, zero-downtime data migration, and so on. All of this is transparent to the client, which can simply treat the backend as a single Redis service with unlimited memory.

 

The architecture is as follows:

 

 

As the architecture diagram above shows, ZooKeeper stores the information for each codis-proxy; beneath each codis-proxy are codis-groups, and each codis-group contains codis-servers. A client only needs to connect to ZooKeeper (to discover the proxies), and ZooKeeper automatically tracks the running state of every machine.

 

Cluster Installation

The related software can be downloaded from http://pan.baidu.com/s/1bprMdjx (password: u6hq). If the link does not work, please contact the author.

1-1) Environment Preparation

A) Installing Go

[root@hadoop1 opt]# chmod  a+x  go1.7.5.linux-amd64.tar.gz

[root@hadoop1 opt]# tar -zxvf  go1.7.5.linux-amd64.tar.gz

This extracts a go directory into the current directory.

[root@hadoop1 opt]# mv go go1.7

 

Create the GOPATH workspace directory:

[root@hadoop1 opt]# mkdir goPath

 

Configure the environment variables:

[root@hadoop1 opt]# vi /etc/profile

export GOPATH=/opt/goPath

export GOROOT=/opt/go1.7

export PATH=$PATH:$GOPATH/bin:$GOROOT/bin

 

Apply the configuration:

[root@hadoop1 opt]# source /etc/profile

 

Check the version:

[root@hadoop1 opt]# go version

go version go1.7.5 linux/amd64

 

B) Installing Git

 

Install the packages needed to build Git:

[root@hadoop1 opt]# yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel

[root@hadoop1 opt]# yum install  gcc perl-ExtUtils-MakeMaker

 

 

[root@hadoop1 opt]# wget https://www.kernel.org/pub/software/scm/git/git-2.0.5.tar.gz

[root@hadoop1 opt]# tar xzf git-2.0.5.tar.gz

[root@hadoop1 opt]# cd git-2.0.5

[root@hadoop1 git-2.0.5]# make

[root@hadoop1 git-2.0.5]# make install

 

Configure the environment variables:

[root@hadoop1 git-2.0.5]# vi /etc/profile

export GIT_HOME=/opt/git-2.0.5

export PATH=$PATH:$GIT_HOME

 

[root@hadoop1 git-2.0.5]# source /etc/profile

 

 

Check the version:

[root@hadoop1 git-2.0.5]# git version

git version 2.0.5

 

1-2) Installing the Codis Cluster

A) Create the source directory for Codis

Create the directory:

[root@hadoop1 opt]# mkdir -p $GOPATH/src/github.com/CodisLabs

Download the Codis 3.2 source:

[root@hadoop1 opt]# cd $_ && git clone https://github.com/CodisLabs/codis.git -b release3.2

 

B) Enter the codis directory and build

[root@hadoop1 codis]# cd $GOPATH/src/github.com/CodisLabs/codis

[root@hadoop1 codis]# make

****************

go build -i -o bin/codis-dashboard ./cmd/dashboard

go build -i -tags "cgo_jemalloc" -o bin/codis-proxy ./cmd/proxy

go build -i -o bin/codis-admin ./cmd/admin

go build -i -o bin/codis-fe ./cmd/fe

 

List the compiled binaries:

[root@hadoop1 codis]# ll bin/

total 84504

drwxr-xr-x. 4 root root     4096 May 14 17:08 assets

-rwxr-xr-x. 1 root root 15470266 May 14 17:08 codis-admin

-rwxr-xr-x. 1 root root 17091830 May 14 17:07 codis-dashboard

-rwxr-xr-x. 1 root root 15363141 May 14 17:08 codis-fe

-rwxr-xr-x. 1 root root 19316366 May 14 17:07 codis-proxy

-rwxr-xr-x. 1 root root  7983034 May 14 17:07 codis-server

-rwxr-xr-x. 1 root root  5580567 May 14 17:07 redis-benchmark

-rwxr-xr-x. 1 root root  5712451 May 14 17:07 redis-cli

-rw-r--r--. 1 root root      169 May 14 17:07 version

 

Copy the compiled Codis 3.2 to the target directory:

[root@hadoop1 CodisLabs]# cp -r codis/ /opt/

 

[root@hadoop1 CodisLabs]# cd /opt/

[root@hadoop1 opt]# cd codis/

1-3) Installing ZooKeeper

For detailed ZooKeeper installation steps, see the ZooKeeper chapter.

 

1-4) Configuring Codis

A) Create the Codis configuration files

[root@hadoop1 codis]# mkdir codisConf/

[root@hadoop1 codis]# cd codisConf/

[root@hadoop1 codisConf]# mkdir redis_data_6379

[root@hadoop1 codisConf]# mkdir redis_data_6380

 

Add the configuration file for port 6379:

[root@hadoop1 codisConf]# vi redis6379.conf

daemonize yes

pidfile /opt/codis/codisConf/redis6379.pid

port 6379

timeout 86400

tcp-keepalive 60

loglevel notice

logfile /opt/codis/codisConf/redis6379.log

databases 16

#save 900 1

#save 300 10

#save 60 10000

stop-writes-on-bgsave-error no

rdbcompression yes

dbfilename dump6379.rdb

dir /opt/codis/codisConf/redis_data_6379

masterauth "xxxxx"

slave-serve-stale-data yes

repl-disable-tcp-nodelay no

slave-priority 100

maxmemory 10gb

maxmemory-policy allkeys-lru

appendonly no

appendfsync everysec

no-appendfsync-on-rewrite yes

auto-aof-rewrite-percentage 100

auto-aof-rewrite-min-size 64mb

lua-time-limit 5000

slowlog-log-slower-than 10000

slowlog-max-len 128

hash-max-ziplist-entries 512

hash-max-ziplist-value 64

list-max-ziplist-entries 512

list-max-ziplist-value 64

set-max-intset-entries 512

zset-max-ziplist-entries 128

zset-max-ziplist-value 64

client-output-buffer-limit normal 0 0 0

client-output-buffer-limit slave 0 0 0

hz 10

aof-rewrite-incremental-fsync yes

# 6379 is the master instance, so no slaveof directive is set here

 

 

Add the configuration file for port 6380:

[root@hadoop1 codisConf]# vi redis6380.conf

daemonize yes

pidfile "/opt/codis/codisConf/redis6380.pid"

port 6380

timeout 86400

tcp-keepalive 60

loglevel notice

logfile "/opt/codis/codisConf/redis6380.log"

databases 16

#save 900 1

#save 300 10

#save 60 10000

stop-writes-on-bgsave-error no

rdbcompression yes

dbfilename "dump6380.rdb"

dir "/opt/codis/codisConf/redis_data_6380"

 

slave-serve-stale-data yes

repl-disable-tcp-nodelay no

slave-priority 100

maxmemory 10gb

maxmemory-policy allkeys-lru

appendonly no

appendfsync everysec

no-appendfsync-on-rewrite yes

auto-aof-rewrite-percentage 100

auto-aof-rewrite-min-size 64mb

lua-time-limit 5000

slowlog-log-slower-than 10000

slowlog-max-len 128

hash-max-ziplist-entries 512

hash-max-ziplist-value 64

list-max-ziplist-entries 512

list-max-ziplist-value 64

set-max-intset-entries 512

zset-max-ziplist-entries 128

zset-max-ziplist-value 64

client-output-buffer-limit normal 0 0 0

client-output-buffer-limit slave 0 0 0

hz 10

aof-rewrite-incremental-fsync yes

# Generated by CONFIG REWRITE

slaveof 127.0.0.1 6379

 

 

Start the services:

[root@hadoop1 codis]# ./bin/codis-server /opt/codis/codisConf/redis6379.conf

[root@hadoop1 codis]# ./bin/codis-server /opt/codis/codisConf/redis6380.conf

 

 

Check the listening ports:

[root@hadoop1 codis]# netstat -nltp

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   

tcp        0      0 0.0.0.0:6379                0.0.0.0:*                   LISTEN      14953/./bin/codis-s

tcp        0      0 0.0.0.0:6380                0.0.0.0:*                   LISTEN      14959/./bin/codis-s

 

B) Start the dashboard

[root@hadoop1 bin]# ./codis-dashboard --default-config | tee dashboard.conf

[root@hadoop1 bin]# vi dashboard.conf

 

##################################################

#                                                #

#                  Codis-Dashboard               #

#                                                #

##################################################

 

# Set Coordinator, only accept "zookeeper" & "etcd" & "filesystem".

# Quick Start

coordinator_name = "zookeeper"

coordinator_addr = "192.168.132.140:2181,192.168.132.148:2181,192.168.132.139:2181"

 

# Set Codis Product Name/Auth.

product_name = "codis-demo"

product_auth = ""

 

# Set bind address for admin(rpc), tcp only.

admin_addr = "192.168.132.139:18080"

 

# Set configs for redis sentinel.

sentinel_quorum = 2

 

 

Parameter descriptions:

coordinator_name     external storage type; accepts zookeeper/etcd
coordinator_addr     external storage address
product_name         cluster name; must match the regex \w[\w\.\-]*
product_auth         cluster password; empty by default
admin_addr           RESTful API port

 

 

Start it:

[root@hadoop1 codis]# ./bin/codis-dashboard --ncpu=4 --config=/opt/codis/bin/dashboard.conf --log=/opt/codis/log/dashboard.log --log-level=WARN &

[1] 15055

 

Check the log:

[root@hadoop1 log]# vi dashboard.log.******

*************************

sentinel_client_reconfig_script = ""

2017/05/14 17:35:25 main.go:140: [WARN] [0xc42025d560] dashboard is working ...

2017/05/14 17:35:25 topom.go:424: [WARN] admin start service on 192.168.132.139:18080

 

Parameter descriptions:

--ncpu=N                 maximum number of CPUs to use
-c CONF, --config=CONF   startup configuration file
-l FILE, --log=FILE      log output file
--log-level=LEVEL        log level: INFO, WARN, DEBUG, ERROR; defaults to INFO, WARN recommended

- Multiple codis-proxy instances can be deployed for the same cluster.
- codis-dashboard keeps the state of the different codis-proxy instances in sync.

C) Start codis-proxy

[root@hadoop1 bin]# ./codis-proxy --default-config | tee proxy.conf

[root@hadoop1 bin]# vi proxy.conf

##################################################

#                                                #

#                  Codis-Proxy                   #

#                                                #

##################################################

 

# Set Codis Product Name/Auth.

# product_name = "codis-test"

# product_auth = ""

 

# Set bind address for admin(rpc), tcp only.

admin_addr = "192.168.132.139:11080"

 

# Set bind address for proxy, proto_type can be "tcp", "tcp4", "tcp6", "unix" or "unixpacket".

proto_type = "tcp4"

proxy_addr = "192.168.132.139:19000"

 

# Set jodis address & session timeout

#   1. jodis_name is short for jodis_coordinator_name, only accept "zookeeper" & "etcd".

#   2. jodis_addr is short for jodis_coordinator_addr

#   3. proxy will be registered as node:

#        if jodis_compatible = true (not suggested):

#          /zk/codis/db_{PRODUCT_NAME}/proxy-{HASHID} (compatible with Codis2.0)

#        or else

#          /jodis/{PRODUCT_NAME}/proxy-{HASHID}

jodis_name = "zookeeper"

jodis_addr = "192.168.132.140:2181,192.168.132.148:2181,192.168.132.139:2181"

jodis_timeout = "20s"

jodis_compatible = false

 

# Set datacenter of proxy.

proxy_datacenter = ""

 

# Set max number of alive sessions.

proxy_max_clients = 1000

 

 

Parameter descriptions:

product_name              cluster name; see the dashboard parameters
product_auth              cluster password; empty by default
admin_addr                RESTful API port
proto_type                Redis port type; accepts tcp/tcp4/tcp6/unix/unixpacket
proxy_addr                Redis port address or path
jodis_addr                ZooKeeper address for Jodis registration
jodis_timeout             Jodis registration session timeout, in seconds
backend_ping_period       health-check interval for codis-server, in seconds; 0 disables
session_max_timeout       maximum read timeout for client connections, in seconds; 0 disables
session_max_bufsize       read/write buffer size for client connections, in bytes
session_max_pipeline      maximum pipeline depth for client connections
session_keepalive_period  TCP keepalive interval for clients, tcp only; 0 disables

 

 

Start codis-proxy in the background:

[root@hadoop1 codis]# ./bin/codis-proxy --ncpu=4 --config=/opt/codis/bin/proxy.conf --log=/opt/codis/log/proxy.log --log-level=WARN &

[2] 15091

 

 

Check the proxy log:

[root@hadoop1 log]# vi proxy.log.******

 

*******************

metrics_report_statsd_period = "1s"

metrics_report_statsd_prefix = ""

2017/05/14 17:43:26 main.go:209: [WARN] [0xc4200e5b80] proxy waiting online ...

2017/05/14 17:43:26 proxy.go:402: [WARN] [0xc4200e5b80] proxy start service on 192.168.132.139:19000

2017/05/14 17:43:27 main.go:209: [WARN] [0xc4200e5b80] proxy waiting online ...

2017/05/14 17:43:28 main.go:209: [WARN] [0xc4200e5b80] proxy waiting online ...

2017/05/14 17:43:29 main.go:209: [WARN] [0xc4200e5b80] proxy waiting online ...

2017/05/14 17:43:30 main.go:209: [WARN] [0xc4200e5b80] proxy waiting online ...

2017/05/14 17:43:31 main.go:209: [WARN] [0xc4200e5b80] proxy waiting online ...

2017/05/14 17:43:32 main.go:209: [WARN] [0xc4200e5b80] proxy waiting online ...

 

 

Log notes

After startup, codis-proxy stays in the waiting state: it listens on proxy_addr but does not yet accept connections. Only after it has been added to the cluster and finished syncing the cluster state does it switch to online.

D) Add the proxy to the cluster (via the Add Proxy button in codis-fe, or with codis-admin):

[root@hadoop1 codis]# ./bin/codis-admin --dashboard=192.168.132.139:18080  --create-proxy  -x  192.168.132.139:11080

 

E) Configure and start the Codis FE cluster management UI

Generate the configuration file:

[root@hadoop1 bin]# ./codis-admin --dashboard-list --zookeeper=192.168.132.139 | tee codis.json

 

Check the file:

[root@hadoop1 bin]# vi codis.json

 

[

    {

        "name": "codis-test",

        "dashboard": "192.168.132.139:18080"

    }

]

 

Start codis-fe:

[root@hadoop1 bin]# ./codis-fe --ncpu=4 --log=/opt/codis/log/fe.log --log-level=WARN --dashboard-list=/opt/codis/bin/codis.json --listen=192.168.132.139:18090 &

 

 

Options:

--ncpu=N                         maximum number of CPUs to use
-d LIST, --dashboard-list=LIST   dashboard list file; refreshed automatically
-l FILE, --log=FILE              log output file
--log-level=LEVEL                log level: INFO, WARN, DEBUG, ERROR; defaults to INFO, WARN recommended
--listen=ADDR                    HTTP service port

1-5) One-click Codis Startup Script

[root@hadoop1 bin]# vi start-codis-all.sh

cd /opt/codis

 

./bin/codis-server /opt/codis/codisConf/redis6379.conf

./bin/codis-server /opt/codis/codisConf/redis6380.conf

 

./bin/codis-dashboard --ncpu=4 --config=/opt/codis/bin/dashboard.conf --log=/opt/codis/log/dashboard.log --log-level=WARN &

./bin/codis-proxy --ncpu=4 --config=/opt/codis/bin/proxy.conf --log=/opt/codis/log/proxy.log --log-level=WARN &

 

# sleep 10s to give the services time to start

sleep 10s

 

./bin/codis-admin --dashboard=192.168.132.139:18080  --create-proxy  -x  192.168.132.139:11080

 

./bin/codis-fe --ncpu=4 --log=/opt/codis/log/fe.log --log-level=WARN --dashboard-list=/opt/codis/bin/codis.json --listen=192.168.132.139:18090 &

 

1-6) Check the Running Processes

[root@hadoop1 /]# ps  -ef|grep  codis

root      15412      1  0 19:40 ?        00:00:00 ./bin/codis-server *:6379                             

root      15415      1  0 19:40 ?        00:00:00 ./bin/codis-server *:6380                             

root      15416      1  0 19:40 pts/3    00:00:00 ./bin/codis-dashboard --ncpu=4 --config=/opt/codis/bin/dashboard.conf --log=/opt/codis/log/dashboard.log --log-level=WARN

root      15417      1  0 19:40 pts/3    00:00:01 ./bin/codis-proxy --ncpu=4 --config=/opt/codis/bin/proxy.conf --log=/opt/codis/log/proxy.log --log-level=WARN

root      15445      1  0 19:41 pts/3    00:00:00 ./bin/codis-fe --ncpu=4 --log=/opt/codis/log/fe.log --log-level=WARN --dashboard-list=/opt/codis/bin/codis.json --listen=192.168.132.139:18090

root      15466  15336  0 19:49 pts/3    00:00:00 grep codis

1-7) Creating Groups and Slots in the Web UI

http://hadoop1:18090/ (Chrome is recommended)

 

 

A) Create a group

Enter the id 1 and click New Group; group 1 is created.

B) Add instances

Enter the group id, then the IP address and port of the Redis instance in the next box, and click Add Server to add the instance.

C) Assign slots to groups

Click Migrate Range to assign a range of slots to a group.

 

 

 

Quick assignment

 

 

Click Rebalance All Slots to distribute the slots across the groups automatically.

 

 

1-8) Managing Groups and Servers with codis-admin

A) Create a group

[root@hadoop1 bin]# ./codis-admin --dashboard=192.168.132.139:18080 --create-group --gid=1

 

B) Add the codis-server instances to the group (note: the address is the codis-server's, not the proxy's)

[root@hadoop1 bin]# ./codis-admin --dashboard=192.168.132.139:18080 --group-add --gid=1 --addr=192.168.132.139:6379
[root@hadoop1 bin]# ./codis-admin --dashboard=192.168.132.139:18080 --group-add --gid=1 --addr=192.168.132.139:6380

 

C) Sync the slave with the master (run against the slave's address)

[root@hadoop1 bin]# ./codis-admin --dashboard=192.168.132.139:18080 --sync-action --create --addr=192.168.132.139:6380

 

D) Promote a slave to master (if needed)

[root@hadoop1 bin]# ./codis-admin --dashboard=192.168.132.139:18080 --promote-server --gid=1 --addr=192.168.132.139:6380    (the slave's IP and port)

 

E) Initialize the slots and set the slot range served by each server group

Run the initialization only once, on a single node. Codis uses pre-sharding to partition the data: the key space is divided into 1024 slots (0-1023) by default, and for each key the owning slot is determined by SlotId = crc32(key) % 1024. Every slot must be assigned to exactly one server group (identified by its group id), which then serves that slot's data.
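To make the formula concrete, the following Java sketch computes the slot for a few keys with the standard-library CRC32 (the `CodisSlot` class name is ours, purely for illustration; clients never need to do this themselves, since Codis performs the mapping internally):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class CodisSlot {
    static final int NUM_SLOTS = 1024;

    // SlotId = crc32(key) % 1024, as described above
    static long slotOf(String key) {
        CRC32 crc = new CRC32();
        crc.update(key.getBytes(StandardCharsets.UTF_8));
        return crc.getValue() % NUM_SLOTS;
    }

    public static void main(String[] args) {
        for (String key : new String[] {"test", "codis-test", "foo"}) {
            System.out.println(key + " -> slot " + slotOf(key));
        }
    }
}
```

Because the mapping is a pure function of the key, the same key always lands in the same slot, which is what lets the dashboard migrate a slot between groups without clients noticing.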

 

 

[root@hadoop1 bin]# ./codis-admin --dashboard=192.168.132.139:18080 --slot-action --create-range --beg=0 --end=300 --gid=1

 

 

References for installing the Codis cluster:

https://github.com/CodisLabs/codis/blob/release3.2/doc/tutorial_zh.md

http://blog.csdn.net/xfg0218/article/details/72365808

 

 

Connecting Clients to Codis

1-1) Command-line Connection

[root@hadoop3 bin]# ./redis-cli -h 192.168.132.139 -p 19000

192.168.132.139:19000> SET "test" "test"

OK

192.168.132.139:19000> GET "test"

"test"

192.168.132.139:19000> SET "codis-test" "codis-test"

192.168.132.139:19000> slotsscan 0 0

"test"

"codis-test"

 

 

Both saved keys are listed; slotsscan shows the keys held in a slot.

 

 

 

1-2) Connecting via the API

Reference: https://github.com/CodisLabs/jodis

 

 

Add the following dependencies to the POM file:

<dependency>

  <groupId>io.codis.jodis</groupId>

  <artifactId>jodis</artifactId>

  <version>${jodis.version}</version>

</dependency>

 

<dependency>  

  <groupId>org.apache.curator</groupId>  

  <artifactId>curator-recipes</artifactId>  

  <version>2.8.0</version>  

</dependency>

 

 

It can then be used directly in code:

import io.codis.jodis.JedisResourcePool;
import io.codis.jodis.RoundRobinJedisPool;
import redis.clients.jedis.Jedis;

// Build a pool that discovers online proxies from ZooKeeper and round-robins requests between them
JedisResourcePool jedisPool = RoundRobinJedisPool.create()
        .curatorClient("192.168.132.140:2181,192.168.132.148:2181,192.168.132.139:2181", 5000)
        .zkProxyDir("/jodis/codis-test").build();

try (Jedis jedis = jedisPool.getResource()) {
    jedis.set("foo", "bar");
    String value = jedis.get("foo");
    System.out.println(value);
}

 

 

Here 192.168.132.140:2181,192.168.132.148:2181,192.168.132.139:2181 is the ZooKeeper address, and /jodis/codis-test is the proxy registration path for the instance, i.e. /jodis/{product_name}.

 

 

 

For more information, see:

https://github.com/CodisLabs/jodis

https://github.com/CodisLabs/codis


Reposted from blog.csdn.net/xfg0218/article/details/82343509