First, the plan
1. swarm01 as the manager node, swarm02 and swarm03 as worker nodes.
# cat /etc/hosts
127.0.0.1 localhost
192.168.139.175 swarm01
192.168.139.176 swarm02
192.168.139.177 swarm03
2. Configure passwordless SSH login
# ssh-keygen -t rsa -P ''
# ssh-copy-id -i .ssh/id_rsa.pub [email protected]
# ssh-copy-id -i .ssh/id_rsa.pub [email protected]
Second, install Docker and Ansible
1. Install and configure Ansible
# yum -y install ansible
# cat /etc/ansible/hosts | grep -v ^# | grep -v ^$
[node]
192.168.139.176
192.168.139.177
# sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config
# ansible node -m copy -a 'src=/etc/selinux/config dest=/etc/selinux/'
# systemctl stop firewalld
# systemctl disable firewalld
# ansible node -a 'systemctl stop firewalld'
# ansible node -a 'systemctl disable firewalld'
Note: the firewall is disabled here for simplicity; in a real environment you can open only the required ports instead.
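If you prefer to keep firewalld running, Docker's documentation lists the ports a swarm needs. A minimal sketch for each node (for the worker nodes, the same commands can be pushed out through the ansible [node] group as above):
# firewall-cmd --permanent --add-port=2377/tcp    ### cluster management traffic (manager only)
# firewall-cmd --permanent --add-port=7946/tcp
# firewall-cmd --permanent --add-port=7946/udp    ### node-to-node communication
# firewall-cmd --permanent --add-port=4789/udp    ### overlay (VXLAN) network traffic
# firewall-cmd --reload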
2. Install Docker
- Install Docker on the manager node
# yum install -y yum-utils device-mapper-persistent-data lvm2
# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum list docker-ce --showduplicates | sort -r
# yum -y install docker-ce
# docker --version
Docker version 17.06.0-ce, build 02c1d87
# systemctl start docker
# systemctl status docker
# systemctl enable docker
- Use Ansible to install Docker on the worker nodes
# ansible node -m copy -a 'src=/etc/yum.repos.d/docker-ce.repo dest=/etc/yum.repos.d/'
# ansible node -m yum -a "state=present name=docker-ce"
# ansible node -a 'docker --version'
192.168.139.173 | SUCCESS | rc=0 >>
Docker version 17.06.0-ce, build 02c1d87
192.168.139.174 | SUCCESS | rc=0 >>
Docker version 17.06.0-ce, build 02c1d87
# ansible node -a 'systemctl start docker'
# ansible node -a 'systemctl status docker'
# ansible node -a 'systemctl enable docker'
Third, Docker Swarm cluster configuration
1. Create the Docker Swarm cluster
# docker swarm init --listen-addr 0.0.0.0
Swarm initialized: current node (a1tno675d14sm6bqlc512vf10) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3sp9uxzokgr252u1jauoowv74930s7f8f5tsmm5mlk5oim359e-dk52k5uul50w49gbq4j1y7zzb 192.168.139.175:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
2. Check the node list
# docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
a1tno675d14sm6bqlc512vf10 *  swarm01   Ready   Active        Leader
3. Show the command for adding a manager node to the cluster
# docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3sp9uxzokgr252u1jauoowv74930s7f8f5tsmm5mlk5oim359e-7tdlpdnkyfl1bnq34ftik9wxw 192.168.139.175:2377
4. Show the command for adding a worker node to the cluster
# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3sp9uxzokgr252u1jauoowv74930s7f8f5tsmm5mlk5oim359e-dk52k5uul50w49gbq4j1y7zzb 192.168.139.175:2377
5. Join the two worker nodes planned above to the cluster
# docker swarm join --token SWMTKN-1-3sp9uxzokgr252u1jauoowv74930s7f8f5tsmm5mlk5oim359e-dk52k5uul50w49gbq4j1y7zzb 192.168.139.175:2377
This node joined a swarm as a worker.
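Rather than pasting the join command into each worker by hand, the worker token can also be fetched on the manager and pushed out through the Ansible [node] group configured earlier. A sketch, assuming passwordless SSH to the workers is already in place:
# TOKEN=$(docker swarm join-token -q worker)
# ansible node -a "docker swarm join --token $TOKEN 192.168.139.175:2377"
The -q flag prints only the token itself, which makes it easy to capture in a variable.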
6. Verify that the worker nodes have joined the cluster
# docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
7zkbqgrjlsn8c09l3fagtfwre    swarm02   Ready   Active
a1tno675d14sm6bqlc512vf10 *  swarm01   Ready   Active        Leader
apy9zys2ch4dlwbmgdqwc0pn3    swarm03   Ready   Active
7. View the networks managed by Docker Swarm
# docker network ls
NETWORK ID    NAME             DRIVER   SCOPE
05efca714d2f  bridge           bridge   local
c9cd9c37edd7  docker_gwbridge  bridge   local
10ac9e48d81b  host             host     local
n60tdenc5jy7  ingress          overlay  swarm
a9284277dc18  none             null     local
At this point, the Docker Swarm cluster is built.
Fourth, build a Docker Swarm UI: Portainer
Portainer website: https://portainer.io/
1. Deploy Portainer with the following command
# docker service create \
  --name portainer \
  --publish 9000:9000 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer \
  -H unix:///var/run/docker.sock
# docker images | grep portainer
portainer/portainer  latest  07cde96d4789  2 weeks ago  10.4MB
# docker service ls    ### list the services in the cluster
ID            NAME       MODE        REPLICAS  IMAGE                       PORTS
p5bo3n0fmqgz  portainer  replicated  1/1       portainer/portainer:latest  *:9000->9000/tcp
The deployment is complete.
2. Open http://localhost:9000 in a browser to reach the UI. As shown, on first entering Portainer you are asked to set a password of at least 8 characters for the admin user.
After entering the password, click "Validate" to confirm it.
As shown below, enter the admin username and password to log in to Portainer.
The home page looks like this:
Check the swarm nodes in the Swarm module.
Images can be pulled in the Images module; here I pull nginx.
Create an nginx service under the Services module: Services > Add service, set three replicas, publish port 80, and finally click "Create Service".
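For reference, these UI steps correspond to a single service create on the manager's command line; a sketch matching the name, replica count, and published port chosen in the UI:
# docker service create --name Nginx --replicas 3 --publish 80:80 nginx:latest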
Refresh the service list to confirm it was created successfully.
3. Confirm from the command line
# docker images | grep nginx
nginx  latest  b8efb18f159b  7 days ago  107MB
# ansible node -m shell -a 'docker images | grep nginx'
192.168.139.177 | SUCCESS | rc=0 >>
nginx  latest  b8efb18f159b  8 days ago  107MB
192.168.139.176 | SUCCESS | rc=0 >>
nginx  latest  b8efb18f159b  8 days ago  107MB
# docker service ls    ### list the services in the cluster
ID            NAME       MODE        REPLICAS  IMAGE                       PORTS
emrs3rj73bwh  Nginx      replicated  3/3       nginx:latest                *:80->80/tcp
p5bo3n0fmqgz  portainer  replicated  1/1       portainer/portainer:latest  *:9000->9000/tcp
# docker service ps Nginx    ### list the tasks of the Nginx service
ID            NAME     IMAGE         NODE     DESIRED STATE  CURRENT STATE           ERROR  PORTS
0smpndfx0bwc  Nginx.1  nginx:latest  swarm03  Running        Running 15 minutes ago
werrrzlyfbf1  Nginx.2  nginx:latest  swarm01  Running        Running 15 minutes ago
l7puro0787cj  Nginx.3  nginx:latest  swarm02  Running        Running 15 minutes ago
Fifth, build a Docker Swarm UI: Shipyard
Shipyard's UI is also fairly simple, but more repetitive: every node must pull the required images before it can join the Shipyard UI.
1. First pull the images locally. Here I use the NetEase Hive registry (hub.c.163.com), which is fast and keeps fairly up-to-date images.
# docker pull hub.c.163.com/library/alpine:latest
# docker pull hub.c.163.com/library/rethinkdb:latest
# docker pull hub.c.163.com/longjuxu/microbox/etcd:latest
# docker pull hub.c.163.com/wangjiaen/shipyard/docker.io/shipyard/docker-proxy:latest
# docker pull hub.c.163.com/library/swarm:latest
# docker pull hub.c.163.com/wangjiaen/shipyard/docker.io/shipyard/shipyard:latest
2. Give these images new tags
# docker tag 7328f6f8b418 alpine
# docker tag 4a511141860c rethinkdb
# docker tag 6aef84b9ec5a microbox/etcd
# docker tag cfee14e5d6f2 shipyard/docker-proxy
# docker tag 0198d9ac25d1 swarm
# docker tag 36fb3dc0907d shipyard/shipyard
3. Build the Shipyard UI with the following command
# curl -sSL https://shipyard-project.com/deploy | bash -s
Deploying Shipyard
 -> Starting Database
 -> Starting Discovery
 -> Starting Cert Volume
 -> Starting Proxy
 -> Starting Swarm Manager
 -> Starting Swarm Agent
 -> Starting Controller
Waiting for Shipyard on 192.168.139.175:8080 ..
Shipyard available at http://192.168.139.175:8080
Username: admin Password: shipyard
4. As prompted, open http://localhost:8080 and log in to Shipyard with username admin and password shipyard.
5. The Shipyard home page shows the containers view.
6. In the Nodes module, only the manager node is listed so far.
7. On each worker node, pull and tag the images (i.e. repeat steps 1 and 2 above), then run the following command on that worker to add it to Shipyard:
# curl -sSL https://shipyard-project.com/deploy | ACTION=node DISCOVERY=etcd://192.168.139.175:4001 bash -s
Adding Node
 -> Starting Cert Volume
 -> Starting Proxy
 -> Starting Swarm Manager
 -> Starting Swarm Agent
Node added to Swarm: 192.168.139.176
The same applies to the other nodes.
Comparing the two UIs, both are fairly simple. Personally I think Portainer is better: pulling a single image on the manager node is enough to set up the UI.
Problems encountered:
- On the manager:
# docker swarm init --advertise-addr 192.168.139.175
- On a worker:
# docker swarm join --token SWMTKN-1-4dwtfbdvjmuf3limglbpy66k85ply2cn66hd0ugsaxfed5fj1d-3rp33pedt9k7ewpfizbzc9bvi 192.168.139.175:2377
Error response from daemon: Timeout was reached before node was joined. The attempt to join the swarm will continue in the background. Use the "docker info" command to see the current swarm status of your node.
When worker nodes fail to join the cluster like this, the listen address needs to be set to all zeros (0.0.0.0).
Source: https://www.centos.bz/2017/08/docker-swarm-cluster-shipyard-ui-manager/