Manual Distributed Deployment of Hyperledger Fabric v2.0
The official test-network and first-network tutorials are fairly comprehensive, and deploying them on a single machine is straightforward, so I will not repeat them here. A custom distributed deployment is closer to a real production environment, so I will build a Fabric v2.0.0 network from scratch and deploy it across multiple machines.
Goal
Following this article, you will deploy a distributed Fabric network with Linux commands, install the Java version of the chaincode, and talk to the chaincode through the Java SDK to commit data to blocks and query it. All materials come from the official fabric-samples project.
fabric-samples download address
After downloading fabric-samples, switch the branch to the v2.0.0 tag.
1. Preparing the Base Environment
Install the following on every server.
Install yum-utils
yum install -y yum-utils device-mapper-persistent-data lvm2
Configure the yum repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install epel-release
yum install -y epel-release
Install Docker
# List available Docker versions
yum list docker-ce --showduplicates | sort -r
# Install Docker
yum install docker-ce -y
# Start the Docker service
systemctl start docker
# Enable Docker at boot
systemctl enable docker
Install docker-compose
# Install docker-compose
yum install docker-compose -y
Install Golang
# Install golang
yum install golang -y
Check the docker-compose, Docker, and Go versions
# Version check commands
docker -v && docker-compose -v && go version
# Version output
Docker version 19.03.12, build 48a66213fe
docker-compose version 1.18.0, build 8dd22a9
go version go1.13.14 linux/amd64
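Before moving on, it can help to confirm that all three tools really are on the PATH of every server. A small sketch; `check_tools` is a helper name of my own, not a system command:

```shell
# check_tools: report any listed command that is not on PATH.
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  [ -z "$missing" ] && echo "all tools found" || echo "missing:$missing"
}

# On the fabric hosts you would run: check_tools docker docker-compose go
check_tools sh
```

If anything is reported missing, go back to the install steps above before continuing.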
After the tools are installed on server 1, log in to server 2 and install them the same way, following this section.
2. Generating the Certificates
Download the fabric-samples project
Create a fabric folder under /usr/local and enter it
mkdir -p /usr/local/fabric && cd /usr/local/fabric
Clone the fabric-samples project
git clone https://gitee.com/peter_code_git/fabric-samples.git
Enter the fabric-samples folder and check out the v2.0.0 tag
cd fabric-samples && git checkout v2.0.0
Check the branch in the /usr/local/fabric/fabric-samples directory
# Show the current branch
git branch
# Output
* (HEAD detached at v2.0.0)
master
Generate the certificates
Fetch the cryptogen tool
go get -u github.com/hyperledger/fabric/cmd/cryptogen
This step can take a while, so be patient (fetching took me about 20 minutes...).
Alternatively, you can download the Fabric source and build the tools yourself, but all of my attempts got stuck with connection timeouts while fetching Go dependencies from Google-hosted servers.
Fetch the configtxgen tool
go get -u github.com/hyperledger/fabric/cmd/configtxgen
Set the environment variables
Check the Go environment variables
go env
Append the GOPATH and GOROOT settings to the end of /etc/profile
export GOROOT=/usr/lib/golang
export GOPATH=/root/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
Save and exit, then apply the changes
source /etc/profile
Enter the /usr/local/fabric/fabric-samples/first-network directory and run the byfn.sh script
cd /usr/local/fabric/fabric-samples/first-network && ./byfn.sh generate
The certificates generate successfully
# Run the byfn command
./byfn.sh generate
Generating certs and genesis block for channel 'mychannel' with CLI timeout of '10' seconds and CLI delay of '3' seconds
# Enter y
Continue? [Y/n] y
# Generation process
proceeding ...
/root/go/bin/cryptogen
##########################################################
##### Generate certificates using cryptogen tool #########
##########################################################
+ cryptogen generate --config=./crypto-config.yaml
org1.example.com
org2.example.com
+ res=0
+ set +x
Generate CCP files for Org1 and Org2
/root/go/bin/configtxgen
##########################################################
######### Generating Orderer Genesis block ##############
##########################################################
+ configtxgen -profile SampleMultiNodeEtcdRaft -channelID byfn-sys-channel -outputBlock ./channel-artifacts/genesis.block
2020-09-08 13:26:11.295 CST [common.tools.configtxgen] main -> INFO 001 Loading configuration
2020-09-08 13:26:11.311 CST [common.tools.configtxgen.localconfig] completeInitialization -> INFO 002 orderer type: etcdraft
2020-09-08 13:26:11.311 CST [common.tools.configtxgen.localconfig] completeInitialization -> INFO 003 Orderer.EtcdRaft.Options unset, setting to tick_interval:"500ms" election_tick:10 heartbeat_tick:1 max_inflight_blocks:5 snapshot_interval_size:16777216
2020-09-08 13:26:11.311 CST [common.tools.configtxgen.localconfig] Load -> INFO 004 Loaded configuration: /usr/local/fabric/fabric-samples/first-network/configtx.yaml
2020-09-08 13:26:11.314 CST [common.tools.configtxgen] doOutputBlock -> INFO 005 Generating genesis block
2020-09-08 13:26:11.314 CST [common.tools.configtxgen] doOutputBlock -> INFO 006 Writing genesis block
+ res=0
+ set +x
#################################################################
### Generating channel configuration transaction 'channel.tx' ###
#################################################################
+ configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel
2020-09-08 13:26:11.345 CST [common.tools.configtxgen] main -> INFO 001 Loading configuration
2020-09-08 13:26:11.360 CST [common.tools.configtxgen.localconfig] Load -> INFO 002 Loaded configuration: /usr/local/fabric/fabric-samples/first-network/configtx.yaml
2020-09-08 13:26:11.360 CST [common.tools.configtxgen] doOutputChannelCreateTx -> INFO 003 Generating new channel configtx
2020-09-08 13:26:11.363 CST [common.tools.configtxgen] doOutputChannelCreateTx -> INFO 004 Writing new channel tx
+ res=0
+ set +x
#################################################################
####### Generating anchor peer update for Org1MSP ##########
#################################################################
+ configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID mychannel -asOrg Org1MSP
2020-09-08 13:26:11.395 CST [common.tools.configtxgen] main -> INFO 001 Loading configuration
2020-09-08 13:26:11.409 CST [common.tools.configtxgen.localconfig] Load -> INFO 002 Loaded configuration: /usr/local/fabric/fabric-samples/first-network/configtx.yaml
2020-09-08 13:26:11.410 CST [common.tools.configtxgen] doOutputAnchorPeersUpdate -> INFO 003 Generating anchor peer update
2020-09-08 13:26:11.411 CST [common.tools.configtxgen] doOutputAnchorPeersUpdate -> INFO 004 Writing anchor peer update
+ res=0
+ set +x
#################################################################
####### Generating anchor peer update for Org2MSP ##########
#################################################################
+ configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org2MSPanchors.tx -channelID mychannel -asOrg Org2MSP
2020-09-08 13:26:11.443 CST [common.tools.configtxgen] main -> INFO 001 Loading configuration
2020-09-08 13:26:11.458 CST [common.tools.configtxgen.localconfig] Load -> INFO 002 Loaded configuration: /usr/local/fabric/fabric-samples/first-network/configtx.yaml
2020-09-08 13:26:11.458 CST [common.tools.configtxgen] doOutputAnchorPeersUpdate -> INFO 003 Generating anchor peer update
2020-09-08 13:26:11.459 CST [common.tools.configtxgen] doOutputAnchorPeersUpdate -> INFO 004 Writing anchor peer update
+ res=0
+ set +x
[root@localhost first-network]#
The channel configuration files are in the channel-artifacts folder under /usr/local/fabric/fabric-samples/first-network.
The /usr/local/fabric/fabric-samples/first-network/channel-artifacts structure is as follows (viewed with the tree command; if it is missing, install it with yum install tree -y)
.
├── channel.tx
├── genesis.block
├── Org1MSPanchors.tx
└── Org2MSPanchors.tx
The certificate files are in the crypto-config folder under /usr/local/fabric/fabric-samples/first-network.
The /usr/local/fabric/fabric-samples/first-network/crypto-config structure is as follows
.
├── ordererOrganizations
│ └── example.com
│ ├── ca
│ │ ├── ca.example.com-cert.pem
│ │ └── priv_sk
│ ├── msp
│ │ ├── admincerts
│ │ │ └── Admin@example.com-cert.pem
│ │ ├── cacerts
│ │ │ └── ca.example.com-cert.pem
│ │ └── tlscacerts
│ │ └── tlsca.example.com-cert.pem
│ ├── orderers
│ │ ├── orderer2.example.com
│ │ │ ├── msp
│ │ │ │ ├── admincerts
│ │ │ │ │ └── Admin@example.com-cert.pem
│ │ │ │ ├── cacerts
│ │ │ │ │ └── ca.example.com-cert.pem
│ │ │ │ ├── keystore
│ │ │ │ │ └── priv_sk
│ │ │ │ ├── signcerts
│ │ │ │ │ └── orderer2.example.com-cert.pem
│ │ │ │ └── tlscacerts
│ │ │ │ └── tlsca.example.com-cert.pem
│ │ │ └── tls
│ │ │ ├── ca.crt
│ │ │ ├── server.crt
│ │ │ └── server.key
│ │ ├── orderer3.example.com
│ │ │ ├── msp
│ │ │ │ ├── admincerts
│ │ │ │ │ └── Admin@example.com-cert.pem
│ │ │ │ ├── cacerts
│ │ │ │ │ └── ca.example.com-cert.pem
│ │ │ │ ├── keystore
│ │ │ │ │ └── priv_sk
│ │ │ │ ├── signcerts
│ │ │ │ │ └── orderer3.example.com-cert.pem
│ │ │ │ └── tlscacerts
│ │ │ │ └── tlsca.example.com-cert.pem
│ │ │ └── tls
│ │ │ ├── ca.crt
│ │ │ ├── server.crt
│ │ │ └── server.key
│ │ ├── orderer4.example.com
│ │ │ ├── msp
│ │ │ │ ├── admincerts
│ │ │ │ │ └── Admin@example.com-cert.pem
│ │ │ │ ├── cacerts
│ │ │ │ │ └── ca.example.com-cert.pem
│ │ │ │ ├── keystore
│ │ │ │ │ └── priv_sk
│ │ │ │ ├── signcerts
│ │ │ │ │ └── orderer4.example.com-cert.pem
│ │ │ │ └── tlscacerts
│ │ │ │ └── tlsca.example.com-cert.pem
│ │ │ └── tls
│ │ │ ├── ca.crt
│ │ │ ├── server.crt
│ │ │ └── server.key
│ │ ├── orderer5.example.com
│ │ │ ├── msp
│ │ │ │ ├── admincerts
│ │ │ │ │ └── Admin@example.com-cert.pem
│ │ │ │ ├── cacerts
│ │ │ │ │ └── ca.example.com-cert.pem
│ │ │ │ ├── keystore
│ │ │ │ │ └── priv_sk
│ │ │ │ ├── signcerts
│ │ │ │ │ └── orderer5.example.com-cert.pem
│ │ │ │ └── tlscacerts
│ │ │ │ └── tlsca.example.com-cert.pem
│ │ │ └── tls
│ │ │ ├── ca.crt
│ │ │ ├── server.crt
│ │ │ └── server.key
│ │ └── orderer.example.com
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ │ └── Admin@example.com-cert.pem
│ │ │ ├── cacerts
│ │ │ │ └── ca.example.com-cert.pem
│ │ │ ├── keystore
│ │ │ │ └── priv_sk
│ │ │ ├── signcerts
│ │ │ │ └── orderer.example.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.example.com-cert.pem
│ │ └── tls
│ │ ├── ca.crt
│ │ ├── server.crt
│ │ └── server.key
│ ├── tlsca
│ │ ├── priv_sk
│ │ └── tlsca.example.com-cert.pem
│ └── users
│ └── Admin@example.com
│ ├── msp
│ │ ├── admincerts
│ │ │ └── Admin@example.com-cert.pem
│ │ ├── cacerts
│ │ │ └── ca.example.com-cert.pem
│ │ ├── keystore
│ │ │ └── priv_sk
│ │ ├── signcerts
│ │ │ └── Admin@example.com-cert.pem
│ │ └── tlscacerts
│ │ └── tlsca.example.com-cert.pem
│ └── tls
│ ├── ca.crt
│ ├── client.crt
│ └── client.key
└── peerOrganizations
├── org1.example.com
│ ├── ca
│ │ ├── ca.org1.example.com-cert.pem
│ │ └── priv_sk
│ ├── msp
│ │ ├── admincerts
│ │ ├── cacerts
│ │ │ └── ca.org1.example.com-cert.pem
│ │ ├── config.yaml
│ │ └── tlscacerts
│ │ └── tlsca.org1.example.com-cert.pem
│ ├── peers
│ │ ├── peer0.org1.example.com
│ │ │ ├── msp
│ │ │ │ ├── admincerts
│ │ │ │ ├── cacerts
│ │ │ │ │ └── ca.org1.example.com-cert.pem
│ │ │ │ ├── config.yaml
│ │ │ │ ├── keystore
│ │ │ │ │ └── priv_sk
│ │ │ │ ├── signcerts
│ │ │ │ │ └── peer0.org1.example.com-cert.pem
│ │ │ │ └── tlscacerts
│ │ │ │ └── tlsca.org1.example.com-cert.pem
│ │ │ └── tls
│ │ │ ├── ca.crt
│ │ │ ├── server.crt
│ │ │ └── server.key
│ │ └── peer1.org1.example.com
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ ├── cacerts
│ │ │ │ └── ca.org1.example.com-cert.pem
│ │ │ ├── config.yaml
│ │ │ ├── keystore
│ │ │ │ └── priv_sk
│ │ │ ├── signcerts
│ │ │ │ └── peer1.org1.example.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.org1.example.com-cert.pem
│ │ └── tls
│ │ ├── ca.crt
│ │ ├── server.crt
│ │ └── server.key
│ ├── tlsca
│ │ ├── priv_sk
│ │ └── tlsca.org1.example.com-cert.pem
│ └── users
│ ├── Admin@org1.example.com
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ ├── cacerts
│ │ │ │ └── ca.org1.example.com-cert.pem
│ │ │ ├── config.yaml
│ │ │ ├── keystore
│ │ │ │ └── priv_sk
│ │ │ ├── signcerts
│ │ │ │ └── Admin@org1.example.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.org1.example.com-cert.pem
│ │ └── tls
│ │ ├── ca.crt
│ │ ├── client.crt
│ │ └── client.key
│ └── User1@org1.example.com
│ ├── msp
│ │ ├── admincerts
│ │ ├── cacerts
│ │ │ └── ca.org1.example.com-cert.pem
│ │ ├── config.yaml
│ │ ├── keystore
│ │ │ └── priv_sk
│ │ ├── signcerts
│ │ │ └── [email protected]
│ │ └── tlscacerts
│ │ └── tlsca.org1.example.com-cert.pem
│ └── tls
│ ├── ca.crt
│ ├── client.crt
│ └── client.key
└── org2.example.com
├── ca
│ ├── ca.org2.example.com-cert.pem
│ └── priv_sk
├── msp
│ ├── admincerts
│ ├── cacerts
│ │ └── ca.org2.example.com-cert.pem
│ ├── config.yaml
│ └── tlscacerts
│ └── tlsca.org2.example.com-cert.pem
├── peers
│ ├── peer0.org2.example.com
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ ├── cacerts
│ │ │ │ └── ca.org2.example.com-cert.pem
│ │ │ ├── config.yaml
│ │ │ ├── keystore
│ │ │ │ └── priv_sk
│ │ │ ├── signcerts
│ │ │ │ └── peer0.org2.example.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.org2.example.com-cert.pem
│ │ └── tls
│ │ ├── ca.crt
│ │ ├── server.crt
│ │ └── server.key
│ └── peer1.org2.example.com
│ ├── msp
│ │ ├── admincerts
│ │ ├── cacerts
│ │ │ └── ca.org2.example.com-cert.pem
│ │ ├── config.yaml
│ │ ├── keystore
│ │ │ └── priv_sk
│ │ ├── signcerts
│ │ │ └── peer1.org2.example.com-cert.pem
│ │ └── tlscacerts
│ │ └── tlsca.org2.example.com-cert.pem
│ └── tls
│ ├── ca.crt
│ ├── server.crt
│ └── server.key
├── tlsca
│ ├── priv_sk
│ └── tlsca.org2.example.com-cert.pem
└── users
├── Admin@org2.example.com
│ ├── msp
│ │ ├── admincerts
│ │ ├── cacerts
│ │ │ └── ca.org2.example.com-cert.pem
│ │ ├── config.yaml
│ │ ├── keystore
│ │ │ └── priv_sk
│ │ ├── signcerts
│ │ │ └── Admin@org2.example.com-cert.pem
│ │ └── tlscacerts
│ │ └── tlsca.org2.example.com-cert.pem
│ └── tls
│ ├── ca.crt
│ ├── client.crt
│ └── client.key
└── User1@org2.example.com
├── msp
│ ├── admincerts
│ ├── cacerts
│ │ └── ca.org2.example.com-cert.pem
│ ├── config.yaml
│ ├── keystore
│ │ └── priv_sk
│ ├── signcerts
│ │ └── User1@org2.example.com-cert.pem
│ └── tlscacerts
│ └── tlsca.org2.example.com-cert.pem
└── tls
├── ca.crt
├── client.crt
└── client.key
3. Opening Firewall Ports
Each server should open only the ports it needs.
The Fabric network and the Docker network use the ports below; I list them all here, so open them selectively as needed.
# Permanently open the ports
firewall-cmd --zone=public --add-port=7946/udp --permanent
firewall-cmd --zone=public --add-port=4789/udp --permanent
firewall-cmd --zone=public --add-port=5789/tcp --permanent
firewall-cmd --zone=public --add-port=4097/tcp --permanent
firewall-cmd --zone=public --add-port=2376/tcp --permanent
firewall-cmd --zone=public --add-port=2377/tcp --permanent
firewall-cmd --zone=public --add-port=7946/tcp --permanent
firewall-cmd --zone=public --add-port=4789/tcp --permanent
firewall-cmd --zone=public --add-port=7051/tcp --permanent
firewall-cmd --zone=public --add-port=7052/tcp --permanent
firewall-cmd --zone=public --add-port=8051/tcp --permanent
firewall-cmd --zone=public --add-port=8052/tcp --permanent
firewall-cmd --zone=public --add-port=9051/tcp --permanent
firewall-cmd --zone=public --add-port=9052/tcp --permanent
firewall-cmd --zone=public --add-port=10051/tcp --permanent
firewall-cmd --zone=public --add-port=10052/tcp --permanent
firewall-cmd --zone=public --add-port=7050/tcp --permanent
firewall-cmd --zone=public --add-port=8050/tcp --permanent
firewall-cmd --zone=public --add-port=9050/tcp --permanent
firewall-cmd --zone=public --add-port=10050/tcp --permanent
firewall-cmd --zone=public --add-port=11050/tcp --permanent
firewall-cmd --zone=public --add-port=5984/tcp --permanent
firewall-cmd --zone=public --add-port=6984/tcp --permanent
firewall-cmd --zone=public --add-port=7984/tcp --permanent
firewall-cmd --zone=public --add-port=8984/tcp --permanent
firewall-cmd --zone=public --add-port=7054/tcp --permanent
firewall-cmd --zone=public --add-port=8054/tcp --permanent
# Custom port for the sdk-api service
firewall-cmd --zone=public --add-port=20006/tcp --permanent
# Reload the firewall
firewall-cmd --reload
# List the open ports
firewall-cmd --zone=public --list-ports
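The long list above can also be driven by a loop. The sketch below (`open_ports` is a helper name of my own) prints the firewall-cmd invocations when DRY_RUN=1, so you can review them before running for real; pass it the same port/protocol pairs listed above:

```shell
# open_ports PORT/PROTO...: permanently open each port in the public zone.
# With DRY_RUN=1 the firewall-cmd invocations are only printed, not executed.
open_ports() {
  for spec in "$@"; do
    cmd="firewall-cmd --zone=public --add-port=$spec --permanent"
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "$cmd"
    else
      $cmd
    fi
  done
}

DRY_RUN=1 open_ports 7946/udp 4789/udp 7050/tcp 7051/tcp 5984/tcp
```

Remember to run `firewall-cmd --reload` afterwards, as shown above.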
4. Building the Docker Swarm Network
Creating the Docker network requires the base environment, with Docker and docker-compose installed on both servers.
The network can be created from any directory.
First, look at the Docker networks that exist by default
# List the Docker networks
docker network ls
# Output
NETWORK ID NAME DRIVER SCOPE
ccb1e0038f25 bridge bridge local
87fc29f75b6d host host local
c10f4448f821 none null local
Initialize the swarm on server 1
docker swarm init --advertise-addr 192.168.0.1
Output
Swarm initialized: current node (pro275l77dl95qesrisi97ymt) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-5iyfnf6ivy30jad6p07eh3hj1hzdqs2wc11v6kc7m1hoahfxp5-8yp41ggl9y6re59xbkgi74mhy 192.168.0.1:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
Generate a manager join token on server 1
docker swarm join-token manager
Output
To add a manager to this swarm, run the following command:
docker swarm join --token SWMTKN-1-5iyfnf6ivy30jad6p07eh3hj1hzdqs2wc11v6kc7m1hoahfxp5-bv0un3xmvap5nlws2klkuroow 192.168.0.1:2377
Copy the output. The token differs on every initialization, so copy the one from your own server, not the one in this article.
# Do not copy this article's token; use the one from your server
docker swarm join --token SWMTKN-1-5iyfnf6ivy30jad6p07eh3hj1hzdqs2wc11v6kc7m1hoahfxp5-bv0un3xmvap5nlws2klkuroow 192.168.0.1:2377
Log in to server 2, append --advertise-addr 192.168.0.2 to the join command copied from server 1, and join the swarm as a manager
docker swarm join --token SWMTKN-1-5iyfnf6ivy30jad6p07eh3hj1hzdqs2wc11v6kc7m1hoahfxp5-bv0un3xmvap5nlws2klkuroow 192.168.0.1:2377 --advertise-addr 192.168.0.2
If the following is printed, the firewall ports are not open and must be opened first
Error response from daemon: Timeout was reached before node joined. The attempt to join the swarm will continue in the background. Use the "docker info" command to see the current swarm status of your node.
If joining again after opening the ports prints the following, the node already belongs to a swarm and must leave it first
Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.
Run
# Leave the current swarm
docker swarm leave -f
# Output
Node left the swarm.
Joining the swarm again then prints
This node joined a swarm as a manager.
Create the Docker swarm overlay network
docker network create --attachable --driver overlay dev
Check the Docker networks on both server 1 and server 2
docker network ls
Output
NETWORK ID NAME DRIVER SCOPE
937340fd60f2 bridge bridge local
t3xo616snlkf dev overlay swarm
06a919394684 docker_gwbridge bridge local
25df6f72d8cd host host local
u6dcftrhdces ingress overlay swarm
cae3975d28ad none null local
The dev network is the one just created.
5. Deploying the Fabric Network
Write the docker-compose configuration files
Next we write the Fabric compose files. Since the deployment is distributed, two files are needed, node1 and node2, matching the layout described earlier:
node1 contains 2 peers, 2 CouchDB instances, and 3 orderer nodes;
node2 contains 2 orderers, 2 peers, 2 CouchDB instances, and 1 client.
We can write both files on server 1 and then send them to server 2.
On server 1, create a docker-compose folder under /usr/local/fabric and create a node1.yaml file inside it
mkdir -p /usr/local/fabric/docker-compose && cd /usr/local/fabric/docker-compose && touch node1.yaml
The compose files depend on the base, channel-artifacts, and crypto-config folders under first-network, so copy those folders into the docker-compose directory
cp -r /usr/local/fabric/fabric-samples/first-network/base /usr/local/fabric/docker-compose
cp -r /usr/local/fabric/fabric-samples/first-network/channel-artifacts /usr/local/fabric/docker-compose
cp -r /usr/local/fabric/fabric-samples/first-network/crypto-config /usr/local/fabric/docker-compose
Modify the official configuration; if this is forgotten, the chaincode cannot be instantiated.
Enter the /usr/local/fabric/docker-compose/base directory.
In peer-base.yaml, change the peer's container network to dev.
Before
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_byfn
After
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=dev
The node1.yaml file
version: '2'

volumes:
  peer0.org1.example.com:
  peer1.org1.example.com:
  orderer.example.com:
  orderer2.example.com:
  orderer3.example.com:

networks:
  byfn:
    external:
      name: dev

services:
  couchdb0:
    container_name: couchdb0
    image: couchdb:2.3
    # Populate the COUCHDB_USER and COUCHDB_PASSWORD to set an admin user and password
    # for CouchDB. This will prevent CouchDB from operating in an "Admin Party" mode.
    environment:
      - COUCHDB_USER=
      - COUCHDB_PASSWORD=
    # Comment/Uncomment the port mapping if you want to hide/expose the CouchDB service,
    # for example map it to utilize Fauxton User Interface in dev environments.
    volumes:
      - /var/hyperledger/couchdb0:/opt/couchdb/data
    ports:
      - "5984:5984"
    networks:
      - byfn

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    environment:
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0:5984
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
    depends_on:
      - couchdb0
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org1.example.com
    networks:
      - byfn

  couchdb1:
    container_name: couchdb1
    image: couchdb:2.3
    environment:
      - COUCHDB_USER=
      - COUCHDB_PASSWORD=
    volumes:
      - /var/hyperledger/couchdb1:/opt/couchdb/data
    ports:
      - "6984:5984"
    networks:
      - byfn

  peer1.org1.example.com:
    container_name: peer1.org1.example.com
    environment:
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb1:5984
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
    depends_on:
      - couchdb1
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org1.example.com
    networks:
      - byfn

  orderer.example.com:
    extends:
      file: base/docker-compose-base.yaml
      service: orderer.example.com
    container_name: orderer.example.com
    networks:
      - byfn

  orderer2.example.com:
    extends:
      file: base/peer-base.yaml
      service: orderer-base
    environment:
      - ORDERER_GENERAL_LISTENPORT=8050
    container_name: orderer2.example.com
    networks:
      - byfn
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/:/var/hyperledger/orderer/tls
    ports:
      - 8050:8050

  orderer3.example.com:
    extends:
      file: base/peer-base.yaml
      service: orderer-base
    environment:
      - ORDERER_GENERAL_LISTENPORT=9050
    container_name: orderer3.example.com
    networks:
      - byfn
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer3.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer3.example.com/tls/:/var/hyperledger/orderer/tls
    ports:
      - 9050:9050
The node2.yaml file
version: '2'

volumes:
  peer1.org2.example.com:
  peer0.org2.example.com:
  orderer4.example.com:
  orderer5.example.com:

networks:
  byfn:
    external:
      name: dev

services:
  couchdb2:
    container_name: couchdb2
    image: couchdb:2.3
    environment:
      - COUCHDB_USER=
      - COUCHDB_PASSWORD=
    volumes:
      - /var/hyperledger/couchdb2:/opt/couchdb/data
    ports:
      - "7984:5984"
    networks:
      - byfn

  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    environment:
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb2:5984
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
    depends_on:
      - couchdb2
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org2.example.com
    networks:
      - byfn

  couchdb3:
    container_name: couchdb3
    image: couchdb:2.3
    environment:
      - COUCHDB_USER=
      - COUCHDB_PASSWORD=
    volumes:
      - /var/hyperledger/couchdb3:/opt/couchdb/data
    ports:
      - "8984:5984"
    networks:
      - byfn

  peer1.org2.example.com:
    container_name: peer1.org2.example.com
    environment:
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb3:5984
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
    depends_on:
      - couchdb3
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org2.example.com
    networks:
      - byfn

  orderer4.example.com:
    extends:
      file: base/peer-base.yaml
      service: orderer-base
    environment:
      - ORDERER_GENERAL_LISTENPORT=10050
    container_name: orderer4.example.com
    networks:
      - byfn
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer4.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer4.example.com/tls/:/var/hyperledger/orderer/tls
    ports:
      - 10050:10050

  orderer5.example.com:
    extends:
      file: base/peer-base.yaml
      service: orderer-base
    environment:
      - ORDERER_GENERAL_LISTENPORT=11050
    container_name: orderer5.example.com
    networks:
      - byfn
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer5.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer5.example.com/tls/:/var/hyperledger/orderer/tls
    ports:
      - 11050:11050

  cli:
    container_name: cli
    image: hyperledger/fabric-tools:$IMAGE_TAG
    tty: true
    stdin_open: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - FABRIC_LOGGING_SPEC=INFO
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org2.example.com:9051
      - CORE_PEER_LOCALMSPID=Org2MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./chaincode/:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    networks:
      - byfn
At this point the /usr/local/fabric/docker-compose directory should contain the folders above, minus a chaincode folder, which is where the chaincode source will live. The compose files reference it, but it is fine to leave it out for now; once the chaincode is developed, drop the source into the chaincode folder.
Pay attention to the mounted paths: if you use the sample compose configuration, the directory layout must match the one described here.
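The host directories the compose files bind-mount can be created up front. A sketch; `prepare_layout` is my own helper name, and the base directory is parameterized so the same function works on both servers:

```shell
# prepare_layout BASE: create the host directories that node1.yaml/node2.yaml
# bind-mount into the cli container (chaincode and scripts), alongside the
# channel-artifacts and crypto-config folders copied earlier.
prepare_layout() {
  base="$1"
  mkdir -p "$base/chaincode" "$base/scripts"
}

# On the servers: prepare_layout /usr/local/fabric/docker-compose
```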
Pull the Docker images
Every server needs to pull the images
# Pull the peer image
docker pull hyperledger/fabric-peer:2.0
# Pull the tools (cli) image
docker pull hyperledger/fabric-tools:2.0
# Pull the orderer image
docker pull hyperledger/fabric-orderer:2.0
# Pull the javaenv image, which builds Java chaincode
docker pull hyperledger/fabric-javaenv:2.0
# Pull the ccenv image, which builds chaincode
docker pull hyperledger/fabric-ccenv:2.0
# Pull the CouchDB image
docker pull couchdb:2.3
Start the Fabric network
Set the environment variable: the official configuration references the image version through the $IMAGE_TAG variable, so assign it
export IMAGE_TAG=2.0
Setting the variable this way only lasts for the current session; you can instead hard-code 2.0 in the files.
The modified image line
# Pin version 2.0
image: hyperledger/fabric-xxxx:2.0
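If you prefer to keep the variable, the export above can also be made to survive new shell sessions by appending it to /etc/profile once, just as with GOPATH earlier. A sketch; `persist_var` is my own helper name, with the profile path parameterized:

```shell
# persist_var FILE NAME VALUE: append "export NAME=VALUE" to FILE unless an
# export for NAME is already present, so repeated runs stay idempotent.
persist_var() {
  profile="$1"; name="$2"; value="$3"
  grep -q "^export $name=" "$profile" 2>/dev/null || \
    echo "export $name=$value" >> "$profile"
}

# On the servers: persist_var /etc/profile IMAGE_TAG 2.0 && source /etc/profile
```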
Start the network from the /usr/local/fabric/docker-compose directory on server 1
cd /usr/local/fabric/docker-compose && docker-compose -f node1.yaml up -d
Check the started containers
docker ps -a
Output
Send the entire docker-compose directory under /usr/local/fabric on server 1 to the /usr/local/fabric directory on server 2
scp -r /usr/local/fabric/docker-compose <server2-user>@<server2-ip>:/usr/local/fabric
Log in to server 2 and set the environment variable
export IMAGE_TAG=2.0
Start the Fabric network
cd /usr/local/fabric/docker-compose && docker-compose -f node2.yaml up -d
Check the containers on server 2
docker ps -a
Output
The Fabric network is now up.
6. Modifying the javaenv Image
Since I use the Java version of the chaincode, the peer compiles the source with the javaenv image and deploys the chaincode through ccenv when the chaincode is installed. The stock javaenv image points at the official Maven repository, so its Maven repository address needs to be changed to the Aliyun mirror.
Note: the modified javaenv image must be placed on every server.
7. Developing the Chaincode
Creating the channel and deploying the chaincode both happen inside the cli container, so develop the chaincode first; then one cli session can create the channel and deploy the chaincode in a single pass.
Download the fabric-samples project under /usr/local/fabric on server 1 to your local machine.
The Java chaincode sits in /usr/local/fabric/fabric-samples/chaincode/abstore/java/src/main/java/org/hyperledger/fabric-samples.
The structure of /usr/local/fabric/fabric-samples/chaincode/abstore/java/src
.
└── main
└── java
└── org
└── hyperledger
└── fabric-samples
└── ABstore.java
The ABstore.java file on that path is the Java version of the chaincode.
After downloading it, adapt it; then upload the src directory and the pom file to the /usr/local/fabric/docker-compose/chaincode directory on server 2.
The structure of the chaincode directory
.
├── pom.xml
└── src
└── main
└── java
├── com
│ └── fabric
│ ├── ChainCode.java
│ └── common
│ ├── demo
│ │ ├── Craft.java
│ │ ├── PeterData.java
│ │ └── ProcessName.java
│ └── TimeUtil.java
└── reademe.txt
8. Creating the Channel & Deploying the Chaincode
The client was deployed on server 2, so log in to server 2.
# Enter the cli container
docker exec -it cli bash
# Set the peer0-org1 environment variables
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export CORE_PEER_ADDRESS=peer0.org1.example.com:7051
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
# Environment variables used when deploying the chaincode
export CHANNEL_NAME=mychannel
export CC_SRC_PATH=/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode/
export CC_RUNTIME_LANGUAGE=java
export VERSION=1
export SEQUENCE=1
export ORDERPEM=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
# Create the channel
peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls true --cafile $ORDERPEM
# Join peer0-org1 to the channel
peer channel join -b $CHANNEL_NAME.block
Output on successfully joining the channel
# Set peer0-org1 as Org1's anchor peer
peer channel update -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/${CORE_PEER_LOCALMSPID}anchors.tx --tls true --cafile $ORDERPEM
peer0-org1 is successfully set as the anchor peer
# Switch to the peer1-org1 environment variables
export CORE_PEER_ADDRESS=peer1.org1.example.com:8051
export CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/server.crt
export CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/server.key
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt
# Join peer1-org1 to the channel
peer channel join -b $CHANNEL_NAME.block
# Switch to the peer0-org2 environment variables
export CORE_PEER_LOCALMSPID=Org2MSP
export CORE_PEER_ADDRESS=peer0.org2.example.com:9051
export CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/server.crt
export CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/server.key
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
# Join peer0-org2 to the channel
peer channel join -b $CHANNEL_NAME.block
# Set peer0-org2 as Org2's anchor peer
peer channel update -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/${CORE_PEER_LOCALMSPID}anchors.tx --tls true --cafile $ORDERPEM
# Switch to the peer1-org2 environment variables
export CORE_PEER_ADDRESS=peer1.org2.example.com:10051
export CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/server.crt
export CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/server.key
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt
# Join peer1-org2 to the channel
peer channel join -b $CHANNEL_NAME.block
The channel setup is complete. Next, deploy the chaincode; only the endorsing peers, i.e. peer0-org1 and peer0-org2, need the chaincode installed.
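Both above and below, switching peers means re-typing the same four or five CORE_PEER_* variables. A helper in the spirit of the byfn scripts' setGlobals keeps each switch to one call. This is a sketch (`set_peer` is my own name); the paths follow the cli container's crypto mount used in this article:

```shell
# set_peer ORG PEER PORT: point the peer CLI at peerPEER.orgORG.example.com,
# exporting the MSP ID, address, TLS root cert, and admin MSP path.
CRYPTO=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto
set_peer() {
  org="$1"; peer="$2"; port="$3"
  export CORE_PEER_LOCALMSPID="Org${org}MSP"
  export CORE_PEER_ADDRESS="peer${peer}.org${org}.example.com:${port}"
  export CORE_PEER_TLS_ROOTCERT_FILE="$CRYPTO/peerOrganizations/org${org}.example.com/peers/peer${peer}.org${org}.example.com/tls/ca.crt"
  export CORE_PEER_MSPCONFIGPATH="$CRYPTO/peerOrganizations/org${org}.example.com/users/Admin@org${org}.example.com/msp"
}

# e.g. set_peer 2 0 9051 selects peer0-org2; set_peer 1 0 7051 selects peer0-org1
set_peer 2 0 9051
echo "$CORE_PEER_ADDRESS"
```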
# Switch to the peer0-org2 environment variables
export CORE_PEER_LOCALMSPID=Org2MSP
export CORE_PEER_ADDRESS=peer0.org2.example.com:9051
export CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/server.crt
export CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/server.key
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
# Package the chaincode
peer lifecycle chaincode package chaincode.tar.gz --path ${CC_SRC_PATH} --lang ${CC_RUNTIME_LANGUAGE} --label mycc_${VERSION}
The chaincode packages successfully
# Install the chaincode on peer0-org2
peer lifecycle chaincode install chaincode.tar.gz
peer0-org2 installs the chaincode successfully
# Query the installed chaincode
peer lifecycle chaincode queryinstalled
Query result
# Export the package ID as an environment variable (every installation produces a different ID; copy the one returned by your own query)
export CC_PACKAGE_ID=mycc_1:89442fef1c06cb2ede9d9681c53cf7a1808772a2ca8a63a3b7f1609d558397ec
# Approve the chaincode
peer lifecycle chaincode approveformyorg --channelID $CHANNEL_NAME --name mycc --version $VERSION --init-required --package-id $CC_PACKAGE_ID --sequence $SEQUENCE --tls true --cafile $ORDERPEM
peer0-org2 approves the chaincode successfully
# Check the chaincode approval status
peer lifecycle chaincode checkcommitreadiness --channelID $CHANNEL_NAME --name mycc --version $VERSION --init-required --sequence $SEQUENCE --tls true --cafile $ORDERPEM --output json
At this point org2 has approved the chaincode and org1 has not.
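Rather than copying the package ID by hand, it can be parsed out of the queryinstalled output. A sketch (`extract_package_id` is my own helper name), assuming the standard `Package ID: <label>:<hash>, Label: <label>` line format printed by peer lifecycle chaincode queryinstalled:

```shell
# extract_package_id LABEL: print the "label:hash" package ID for LABEL from
# `peer lifecycle chaincode queryinstalled` output read on stdin.
extract_package_id() {
  sed -n "s/^Package ID: \($1:[^,]*\), Label: .*$/\1/p"
}

# Demo with a captured line; inside the cli you would pipe the real output:
#   export CC_PACKAGE_ID=$(peer lifecycle chaincode queryinstalled | extract_package_id mycc_1)
echo 'Package ID: mycc_1:89442fef1c06cb2ede9d9681c53cf7a1808772a2ca8a63a3b7f1609d558397ec, Label: mycc_1' \
  | extract_package_id mycc_1
```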
# Switch to the peer0-org1 environment variables
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export CORE_PEER_ADDRESS=peer0.org1.example.com:7051
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
# Install the chaincode on peer0-org1
peer lifecycle chaincode install chaincode.tar.gz
# Query the installed chaincode
peer lifecycle chaincode queryinstalled
# Approve the chaincode
peer lifecycle chaincode approveformyorg --channelID $CHANNEL_NAME --name mycc --version $VERSION --init-required --package-id $CC_PACKAGE_ID --sequence $SEQUENCE --tls true --cafile $ORDERPEM
# Check the chaincode approval status
peer lifecycle chaincode checkcommitreadiness --channelID $CHANNEL_NAME --name mycc --version $VERSION --init-required --sequence $SEQUENCE --tls true --cafile $ORDERPEM --output json
After peer0-org1 approves, querying the approval status again shows that both org1 and org2 have approved.
# Commit the chaincode
peer lifecycle chaincode commit -o orderer.example.com:7050 --channelID $CHANNEL_NAME --name mycc --version $VERSION --sequence $SEQUENCE --init-required --tls true --cafile $ORDERPEM --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
The chaincode commits successfully
# Query the committed chaincode
peer lifecycle chaincode querycommitted --channelID $CHANNEL_NAME --name mycc
Result of querying the committed chaincode
# Instantiate the chaincode; my Init method takes no parameters, so the -c argument carries none
peer chaincode invoke -o orderer.example.com:7050 --isInit --tls true --cafile $ORDERPEM -C $CHANNEL_NAME -n mycc --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"Args":["Init", ""]}' --waitForEvent
The chaincode instantiates successfully
Containers running on server 1
Containers running on server 2
9. Developing the SDK API
10. Deploying the SDK API
11. Interacting with the Chaincode through the SDK API
The manual distributed deployment workflow:
1. Install Docker, Golang, and docker-compose;
2. Install the cryptogen and configtxgen tools;
3. Fetch the fabric-samples project (generate the certificates, genesis block, and channel configuration with the byfn.sh script);
4. Build the Docker network (reportedly public IPs also work, but I have not verified this);
5. Modify the fabric-javaenv image and copy it to every server (skip this step if you are not using Java chaincode);
6. Write the docker-compose files, taking care that the relative paths to the certificates and blocks are correct;
7. Develop the chaincode;
8. Start the Fabric network, enter the cli container, create the channel, and install the chaincode.