Hyperledger Fabric 1.0 multi-machine deployment
Environment setup
Before attempting a multi-machine deployment, make sure that the single-machine deployment succeeds on every server and that you can run the official example fabric/examples/e2e_cli.
For the specific build process, please refer to [https://blog.csdn.net/mellymengyan/article/details/77529390]
Node selection
The author uses 3 servers to host 5 nodes: 1 orderer node and 4 peer nodes.

Server | Nodes
---|---
192.168.2.2 | peer0.org1.example.com, peer1.org1.example.com, cli
192.168.2.3 | peer0.org2.example.com, peer1.org2.example.com
192.168.2.4 | orderer.example.com
Readers can allocate nodes according to the number of servers.
Generate public and private keys, certificates, etc.
1. On any server, enter the fabric/examples/e2e_cli directory and execute:

```
bash generateArtifacts.sh mychannel
```

Two directories are generated in this directory:
- `channel-artifacts`: used by the orderer to create the channel.
- `crypto-config`: used to secure communication between the nodes.
2. Copy these two directories to the same path on the other servers. If old copies already exist there, delete them and replace them with the new ones.
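The copy in step 2 can be scripted. Below is a minimal dry-run sketch that only prints the ssh/scp commands for each remote server; the e2e_cli path and the host list are assumptions based on the allocation above, and the `echo` prefixes must be removed to actually run the commands:

```shell
#!/bin/bash
# Dry-run sketch: print the commands that would refresh the artifacts on each
# remote server. The e2e_cli path and host list are assumptions; adjust as needed.
sync_artifacts() {
  local e2e_dir=$1; shift
  for host in "$@"; do
    # Delete any stale copies first, then copy the freshly generated ones.
    echo ssh "$host" "rm -rf $e2e_dir/channel-artifacts $e2e_dir/crypto-config"
    echo scp -r channel-artifacts crypto-config "$host:$e2e_dir/"
  done
}

# Run from the e2e_cli directory on the server that generated the artifacts.
sync_artifacts ~/fabric/examples/e2e_cli 192.168.2.3 192.168.2.4
```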
Set the configuration files of peer0.org1 and peer1.org1
192.168.2.2
Refer to the docker-compose-cli.yaml configuration file in the e2e_cli directory; you can make a copy of this file and modify it.
1. Execute in the e2e_cli directory:

```
cp docker-compose-cli.yaml docker-compose-peer.yaml
```

and open docker-compose-peer.yaml for editing.
The author has allocated the two org1 peer nodes and the cli to this server, so keep only the configuration for those two peers and the cli in the file; the rest can be deleted.
2. Each peer node also needs the domain-name-to-IP mapping of the orderer node; add:

```yaml
extra_hosts:
  - "orderer.example.com:192.168.2.4"
```
3. In the cli configuration, remove the now-invalid depends_on entries, keeping only the two org1 peers, and add the domain-name-to-IP mapping of every node:

```yaml
extra_hosts:
  - "peer0.org1.example.com:192.168.2.2"
  - "peer1.org1.example.com:192.168.2.2"
  - "peer0.org2.example.com:192.168.2.3"
  - "peer1.org2.example.com:192.168.2.3"
  - "orderer.example.com:192.168.2.4"
```
4. The final configuration information is as follows:
```yaml
services:
  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org1.example.com
    extra_hosts:
      - "orderer.example.com:192.168.2.4"

  peer1.org1.example.com:
    container_name: peer1.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org1.example.com
    extra_hosts:
      - "orderer.example.com:192.168.2.4"

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash -c './scripts/script.sh ${CHANNEL_NAME}; sleep $TIMEOUT'
    volumes:
      - /var/run/:/host/var/run/
      - ../chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode/go
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - peer0.org1.example.com
      - peer1.org1.example.com
    extra_hosts:
      - "peer0.org1.example.com:192.168.2.2"
      - "peer1.org1.example.com:192.168.2.2"
      - "peer0.org2.example.com:192.168.2.3"
      - "peer1.org2.example.com:192.168.2.3"
      - "orderer.example.com:192.168.2.4"
```
Set the configuration files of peer0.org2 and peer1.org2
192.168.2.3
The steps are basically the same as above, except that the author did not place the cli on this server, so there is no need to configure the cli section; just keep the two org2 peers.
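For reference, a sketch of what docker-compose-peer.yaml on 192.168.2.3 could look like; it simply mirrors the org1 file above with the cli section dropped and the org2 services kept:

```yaml
services:
  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org2.example.com
    extra_hosts:
      - "orderer.example.com:192.168.2.4"

  peer1.org2.example.com:
    container_name: peer1.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org2.example.com
    extra_hosts:
      - "orderer.example.com:192.168.2.4"
```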
Set orderer configuration file
192.168.2.4
The procedure is again similar: copy the docker-compose-cli.yaml file in the e2e_cli directory:

```
cp docker-compose-cli.yaml docker-compose-orderer.yaml
```

and keep only the orderer section. No extra_hosts entries are needed here:
```yaml
services:
  orderer.example.com:
    extends:
      file: base/docker-compose-base.yaml
      service: orderer.example.com
    container_name: orderer.example.com
```
Set the configuration file of the base directory
The services above extend the file docker-compose-base.yaml in the fabric/examples/e2e_cli/base directory; the port mappings of each node are configured there.
Because the original example runs on a single machine, each peer is mapped to a different host port in this file. Under the author's node allocation, peer0.org1 and peer1.org1 can keep their original mappings. peer0.org2 and peer1.org2 now run on a separate server, so their port mappings must be changed:
For peer0.org2, change the original

```yaml
ports:
  - 9051:7051
  - 9052:7052
  - 9053:7053
```

to

```yaml
ports:
  - 7051:7051
  - 7052:7052
  - 7053:7053
```

For peer1.org2, change the original

```yaml
ports:
  - 10051:7051
  - 10052:7052
  - 10053:7053
```

to

```yaml
ports:
  - 8051:7051
  - 8052:7052
  - 8053:7053
```
Remember that every server needs to be modified!
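The port edits above can also be applied with sed. A minimal sketch, assuming the stock e2e_cli port numbers shown above (a .bak backup of the original file is kept):

```shell
#!/bin/bash
# Sketch: remap the org2 host ports in base/docker-compose-base.yaml.
# Assumes the stock single-machine mappings (9xxx for peer0.org2,
# 10xxx for peer1.org2); keeps a .bak backup of the original file.
fix_base_ports() {
  local file=$1
  sed -i.bak \
      -e 's/9051:7051/7051:7051/' -e 's/9052:7052/7052:7052/' -e 's/9053:7053/7053:7053/' \
      -e 's/10051:7051/8051:7051/' -e 's/10052:7052/8052:7052/' -e 's/10053:7053/8053:7053/' \
      "$file"
}

# Usage (from the e2e_cli directory):
# fix_base_ports base/docker-compose-base.yaml
```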
Start each node
1. Start peer0.org1, peer1.org1 and cli
192.168.2.2
Execute in the e2e_cli directory:
```
docker-compose -f docker-compose-peer.yaml up -d
```

Then check with the docker ps command: if the cli container and the two peer containers have been created and are running, the nodes have started successfully.
2. Start peer0.org2, peer1.org2
192.168.2.3
Execute in the e2e_cli directory:
```
docker-compose -f docker-compose-peer.yaml up -d
```

Then check with the docker ps command: if the two peer containers have been created and are running, the nodes have started successfully.
3. Start the orderer
192.168.2.4
Execute in the e2e_cli directory:
```
docker-compose -f docker-compose-orderer.yaml up -d
```

Then check with the docker ps command: if the orderer container has been created and is running, the node has started successfully.
Modify scripts/script.sh
(Note: the author did not see this step in the other multi-machine deployment guides consulted, and consequently hit a problem where peer1.org1 could not join the channel; after analyzing and modifying script.sh, everything ran. Why those setups run without this modification is still unclear to the author; pointers from anyone who knows are welcome, thank you!)
Return to the server where the cli is located (192.168.2.2).
Before running the network, you need to modify scripts/script.sh in the e2e_cli directory.
In the setGlobals() function, the environment variable CORE_PEER_ADDRESS must specify each node's port. Since we remapped the ports in base/docker-compose-base.yaml, the port numbers in these environment variables must be updated to match:
Change `peer1.org1.example.com:7051` to `peer1.org1.example.com:8051`, and `peer1.org2.example.com:7051` to `peer1.org2.example.com:8051`.
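These two substitutions can likewise be done with sed; a minimal sketch (run in the e2e_cli directory; a .bak backup of the original script is kept):

```shell
#!/bin/bash
# Sketch: point CORE_PEER_ADDRESS at the host-mapped ports in scripts/script.sh.
fix_script_ports() {
  local file=$1
  sed -i.bak \
      -e 's/peer1\.org1\.example\.com:7051/peer1.org1.example.com:8051/' \
      -e 's/peer1\.org2\.example\.com:7051/peer1.org2.example.com:8051/' \
      "$file"
}

# Usage (from the e2e_cli directory):
# fix_script_ports scripts/script.sh
```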
(Of course, instead of using the script, readers can also try performing the operations manually: creating the channel, updating the anchor peers, joining peers to the channel, installing the chaincode, instantiating the chaincode, and so on; it is just more troublesome.)
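For readers who want to try the manual route, here is a sketch of roughly what script.sh automates for one peer, modeled on the Fabric 1.0 e2e_cli example (the channel name, chaincode name, and certificate path are that example's defaults; the anchor-peer update is omitted). It is meant to be run inside the cli container with the CORE_PEER_* environment variables already pointing at the target peer:

```shell
#!/bin/bash
# Sketch of the manual operations that scripts/script.sh automates, following
# the Fabric 1.0 e2e_cli example. Run inside the cli container; the CORE_PEER_*
# environment variables must already identify the peer you are operating as.
CHANNEL_NAME=mychannel
ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

manual_e2e() {
  # Create the channel (run once, by any org admin).
  peer channel create -o orderer.example.com:7050 -c "$CHANNEL_NAME" \
      -f ./channel-artifacts/channel.tx --tls --cafile "$ORDERER_CA"
  # Join the current peer to the channel.
  peer channel join -b "$CHANNEL_NAME.block"
  # Install the example chaincode on the current peer.
  peer chaincode install -n mycc -v 1.0 \
      -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02
  # Instantiate it on the channel (run once).
  peer chaincode instantiate -o orderer.example.com:7050 --tls --cafile "$ORDERER_CA" \
      -C "$CHANNEL_NAME" -n mycc -v 1.0 -c '{"Args":["init","a","100","b","200"]}' \
      -P "OR ('Org1MSP.member','Org2MSP.member')"
}
```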
Start the network
192.168.2.2
1. Execute in the e2e_cli directory:
```
docker exec -it cli bash
```

to enter the cli container.
2. Execute the script in the cli container:
```
./scripts/script.sh mychannel
```

When `All GOOD, End-2-End execution completed` appears at the end, the network is running successfully.