This article describes how to use IPFS to build your own private storage network. If you are not yet familiar with IPFS, I suggest you first read some introductory material on it.
1. Install IPFS
There are two installation methods. One is to clone the source code and compile it yourself (provided you have a Go toolchain installed):
git clone https://github.com/ipfs/go-ipfs.git
cd go-ipfs
make install
The project has many dependencies, so compiling takes some time; please be patient. The other method is much simpler: download the officially compiled executable from the latest stable release.
Download packages are provided for various systems; I downloaded the Linux 64-bit version.
After downloading, unpack and install it directly:
tar xzf go-ipfs_v0.4.19_linux-amd64.tar.gz
cd go-ipfs
sudo ./install.sh
2. Start the IPFS node
First, initialize the node:
ipfs init
initializing IPFS node at /home/rock/.ipfs
generating 2048-bit RSA keypair...done
peer identity: QmTrA1w1ux7jW55eqC8Vu7DCRyTMqdpHA5iAZUTRt7snuN
to get started, enter:
ipfs cat /ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv/readme
Initialization mainly generates the node ID, the initial configuration file, and the node's data directory.
Next, start the daemon:
ipfs daemon
Initializing daemon...
go-ipfs version: 0.4.19-
Repo version: 7
System version: amd64/linux
Golang version: go1.11.5
Swarm listening on /ip4/127.0.0.1/tcp/4001
Swarm listening on /ip4/172.17.0.1/tcp/4001
Swarm listening on /ip4/192.168.0.110/tcp/4001
Swarm listening on /ip4/192.168.56.1/tcp/4001
Swarm listening on /ip6/::1/tcp/4001
Swarm listening on /p2p-circuit
Swarm announcing /ip4/127.0.0.1/tcp/4001
Swarm announcing /ip4/172.17.0.1/tcp/4001
Swarm announcing /ip4/192.168.0.110/tcp/4001
Swarm announcing /ip4/192.168.56.1/tcp/4001
Swarm announcing /ip6/::1/tcp/4001
API server listening on /ip4/127.0.0.1/tcp/5001
WebUI: http://127.0.0.1:5001/webui
Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/8080
Daemon is ready
After starting, the daemon prints some configuration information about your node, such as its swarm listen addresses, the read-only file gateway, and the built-in WebUI management system. You can open the management interface by visiting http://127.0.0.1:5001/webui in a browser.
Now you can use the IPFS command-line tools to manage your node. For example, you can view the readme file with the command suggested during initialization:
ipfs cat /ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv/readme
Or add a file:
echo "hello world" > hello.txt
ipfs add hello.txt
added QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o hello.txt
12 B / 12 B [===========================================================================================================================] 100.00%
3. Build a private network
By default, IPFS connects to the global network through a set of bootstrap (seed) nodes, but since we are building a private network, we first need to remove this seed-node connection information.
There are two ways to do this. The standard way is to execute:
ipfs bootstrap rm --all
The other is the brute-force way: delete the bootstrap entries from the configuration file directly. Open ~/.ipfs/config with vim and find the Bootstrap section:
"Bootstrap": [
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
"/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
"/ip4/104.236.179.241/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
"/ip4/128.199.219.111/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu",
"/ip4/104.236.76.40/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
"/ip4/178.62.158.247/tcp/4001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd",
"/ip6/2604:a880:1:20::203:d001/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
"/ip6/2400:6180:0:d0::151:6001/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu",
"/ip6/2604:a880:800:10::4a:5001/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
"/ip6/2a03:b0c0:0:1010::23:1001/tcp/4001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd"
],
Just delete all the entries in that list. After you restart the node, it will be completely isolated.
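The brute-force edit can also be scripted. Below is a minimal Python sketch (my own helper, not part of IPFS) that empties the Bootstrap list in a config file, with the same effect as `ipfs bootstrap rm --all`; the default repo path is an assumption, adjust it if yours differs:

```python
import json
import os

# Default repo location; honor IPFS_PATH if it is set.
CONFIG_PATH = os.path.join(
    os.environ.get("IPFS_PATH", os.path.expanduser("~/.ipfs")), "config")

def clear_bootstrap(config_path=CONFIG_PATH):
    """Empty the Bootstrap list -- same effect as `ipfs bootstrap rm --all`."""
    with open(config_path) as f:
        cfg = json.load(f)
    cfg["Bootstrap"] = []          # drop all seed-node entries
    with open(config_path, "w") as f:
        json.dump(cfg, f, indent=2)
```

Restart the daemon afterwards so the change takes effect.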
Next, we build the private network. We need three running nodes; whether you use three physical machines, virtual machines, or containers does not matter.
Suppose they are named ipfs-master, ipfs-node1, and ipfs-node2, where master is the master (seed) node and node1 and node2 are ordinary nodes.
Note: The following operations assume that you have initialized three nodes and deleted their bootstrap seed nodes.
In the first step, we generate a shared key for the private network on the master node; this key is the credential other nodes need to join. Nodes without the shared key cannot join the private network.
We need to use the go-ipfs-swarm-key-gen tool to create a shared key. The installation method is very simple:
go get -u github.com/Kubuxu/go-ipfs-swarm-key-gen/ipfs-swarm-key-gen
The second step is to generate the shared key on the master node:
ipfs-swarm-key-gen > ~/.ipfs/swarm.key
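If you prefer not to install the Go tool, the key file format is simple: three lines, a codec header, an encoding marker, and 64 hex characters (32 random bytes). A hedged Python equivalent (assuming this standard PSK v1 layout):

```python
import os

def make_swarm_key(path="swarm.key"):
    """Write a private-network key in the PSK v1 format go-ipfs expects."""
    key_hex = os.urandom(32).hex()  # 32 random bytes, hex-encoded
    with open(path, "w") as f:
        f.write("/key/swarm/psk/1.0.0/\n/base16/\n" + key_hex + "\n")
    return key_hex
```

Drop the resulting file into ~/.ipfs/ just as you would with the file produced by ipfs-swarm-key-gen.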
The third step is to copy swarm.key into the ~/.ipfs/ directory of node1 and node2 respectively. Then obtain the connection address of the master node:
~$ ipfs id
{
"ID": "QmeXkxzGxUrChcYJbuQQfw34Ze5bmQhmegTNDLtANKHWLP",
"PublicKey": "CAASpgIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxNWcsgcNlD6DrYHLLNLeJt2y0x0mqaruSse6hhM11tcPocdJq7z03WL9Elu/sPoBZ0SfG6SKgS9xrXewNrJKIGR85qlJcv43c7/6xjP41liOpY5Gtw4UWQlEZ4gV40OZceILQFD5bnpym+bQh/3zDduvASwDOBOpNS+3liIDXpR4fDh8EWoIi4pFBqDinsIs6lkd0dJBchHnUgPT83ZKpTj1pWf+52MxNDMQq8bmI7ZioojhncZb+Qp5yrgD80XR21WtbUIfVrZyF9e5Yo+DUV1WTEWG+955Cl+3FmXP0IEkZBPZL0g5DGibS+p0XQFXqJd4rcPPw1J0Gq0fWv9VrAgMBAAE=",
"Addresses": [
"/ip4/127.0.0.1/tcp/4001/ipfs/QmeXkxzGxUrChcYJbuQQfw34Ze5bmQhmegTNDLtANKHWLP",
"/ip4/192.168.1.5/tcp/4001/ipfs/QmeXkxzGxUrChcYJbuQQfw34Ze5bmQhmegTNDLtANKHWLP",
"/ip4/192.168.1.2/tcp/4001/ipfs/QmeXkxzGxUrChcYJbuQQfw34Ze5bmQhmegTNDLtANKHWLP",
"/ip4/172.17.0.1/tcp/4001/ipfs/QmeXkxzGxUrChcYJbuQQfw34Ze5bmQhmegTNDLtANKHWLP",
"/ip6/::1/tcp/4001/ipfs/QmeXkxzGxUrChcYJbuQQfw34Ze5bmQhmegTNDLtANKHWLP",
"/ip6/240e:fa:ff02:6900:dea:2285:6549:3ce0/tcp/4001/ipfs/QmeXkxzGxUrChcYJbuQQfw34Ze5bmQhmegTNDLtANKHWLP",
"/ip6/240e:fa:ff02:6900:9bbf:20ba:9f76:5a1/tcp/4001/ipfs/QmeXkxzGxUrChcYJbuQQfw34Ze5bmQhmegTNDLtANKHWLP",
"/ip6/240e:fa:ff85:a100::1/tcp/4001/ipfs/QmeXkxzGxUrChcYJbuQQfw34Ze5bmQhmegTNDLtANKHWLP",
"/ip6/240e:fa:fffc:2500:dea:2285:6549:3ce0/tcp/4001/ipfs/QmeXkxzGxUrChcYJbuQQfw34Ze5bmQhmegTNDLtANKHWLP",
"/ip6/240e:fa:fffc:2500:1702:d952:9dec:84fa/tcp/4001/ipfs/QmeXkxzGxUrChcYJbuQQfw34Ze5bmQhmegTNDLtANKHWLP",
"/ip6/240e:fa:ff02:6900:45ce:6b27:8eae:62fb/tcp/4001/ipfs/QmeXkxzGxUrChcYJbuQQfw34Ze5bmQhmegTNDLtANKHWLP",
"/ip6/240e:fa:ff02:6900:6ea2:555d:866e:4445/tcp/4001/ipfs/QmeXkxzGxUrChcYJbuQQfw34Ze5bmQhmegTNDLtANKHWLP"
],
"AgentVersion": "go-ipfs/0.4.20/",
"ProtocolVersion": "ipfs/0.1.0"
}
Here you need to choose the LAN connection address, which usually starts with 192.168. My machine has two network cards, so two LAN IPs appear; either one will do.
Suppose we choose /ip4/192.168.1.5/tcp/4001/ipfs/QmeXkxzGxUrChcYJbuQQfw34Ze5bmQhmegTNDLtANKHWLP
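Picking the LAN address out of the `ipfs id` output can be automated. A small hypothetical helper (it simply assumes the first 192.168.x.x entry is the one you want):

```python
import json

def lan_addr(ipfs_id_output):
    """Return the first /ip4/192.168.* multiaddr from `ipfs id` JSON output."""
    info = json.loads(ipfs_id_output)
    for addr in info["Addresses"]:
        if addr.startswith("/ip4/192.168."):
            return addr
    return None
```

You could feed it the output of `ipfs id` captured with, say, subprocess.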
We deleted the seed nodes of node1 and node2 earlier, so they are now isolated, and we need to set the master node as their seed node. There are two ways to do this. One is to execute the following on node1 and node2 respectively:
ipfs bootstrap add /ip4/192.168.1.5/tcp/4001/ipfs/QmeXkxzGxUrChcYJbuQQfw34Ze5bmQhmegTNDLtANKHWLP
added /ip4/192.168.1.5/tcp/4001/ipfs/QmeXkxzGxUrChcYJbuQQfw34Ze5bmQhmegTNDLtANKHWLP
The other is the brute-force method from before: edit the configuration files of node1 and node2 directly and set the value of the Bootstrap option.
We can also set an environment variable on each node to force it to connect only to the private network at startup:
export LIBP2P_FORCE_PNET=1
After setting this up and restarting each node, you will notice two new lines in the startup log:
Swarm is limited to private network of peers with the swarm key
Swarm key fingerprint: c2fc00b19ee671210674155a5cf76ee8
This means the node is now restricted to the private network.
4. Test
Next we test: add files on each of the three nodes and check whether they can be downloaded from the other two. Here are my test results:
- A file added at any node in the network can be downloaded at any other node.
- Files larger than 256 KB are automatically split into chunks, but the chunks of one file are not spread across different nodes as you might imagine. No matter how large the added file is, even if it ends up split into 100 chunks, all 100 remain on the node that performed the add, and the .ipfs directories of the other nodes do not change noticeably. Presumably only the distributed hash table (DHT) is synchronized.
- When another node accesses a file for the first time, it automatically downloads a complete copy into its local cache; but if the node storing the file stops serving before then, the file cannot be downloaded from the other nodes.
- Files uploaded to an IPFS node are stored permanently and cannot be deleted. As long as the file hash is known, users can access the stored file through the gateway service provided by the node daemon, at:
http://127.0.0.1:8080/ipfs/{hash}
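The 256 KB figure above refers to IPFS's default block size. As an illustration of the idea only (the real chunker is configurable and builds a Merkle DAG rather than a flat list), fixed-size chunking looks like this:

```python
CHUNK_SIZE = 256 * 1024  # IPFS's default block size

def chunk(data, size=CHUNK_SIZE):
    """Split a byte string into fixed-size chunks (illustration only)."""
    return [data[i:i + size] for i in range(0, len(data), size)]
```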
5. External services
By default, the services provided by an IPFS node are bound to the local machine only. If you want your node to serve external clients (on the LAN or the public network), you need to modify the configuration file so that services bind to 0.0.0.0 instead of 127.0.0.1:
"Addresses": {
"API": "/ip4/0.0.0.0/tcp/5001",
"Announce": [],
"Gateway": "/ip4/0.0.0.0/tcp/8080",
"NoAnnounce": [],
"Swarm": [
"/ip4/0.0.0.0/tcp/4001",
"/ip6/::/tcp/4001"
]
},
If you want to call the API from a front-end web page, you also need to configure CORS (cross-origin) headers:
"API": {
"HTTPHeaders": {
"Access-Control-Allow-Methods": [
"PUT",
"GET",
"POST"
],
"Access-Control-Allow-Origin": [
"*"
]
}
}
If you want the whole network to serve external clients as a cluster, put a load balancer in front of all the nodes: Nginx is the simple option, and LVS works well if you are familiar with its configuration.
This article was first published by the author, a younger-generation proletarian code farmer.