(1) Configuration Description
Node Role | IP Address | CPU | RAM |
master, etcd | 192.168.128.110 | 4 cores | 2 GB |
node1/minion1 | 192.168.128.111 | 4 cores | 2 GB |
node2/minion2 | 192.168.128.112 | 4 cores | 2 GB |
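A quick way to confirm each host matches this spec before starting (ens33 is the NIC name assumed by the flannel configuration later in this guide; adjust if yours differs):
# nproc    # should print 4
# free -h | grep Mem    # should show about 2G of total RAM
# ip addr show ens33    # confirm the host's IP address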
(2) Building the Kubernetes container cluster management system
1) Install common software packages on all three hosts
bash-completion enables <Tab> key completion, vim is an upgraded vi editor, and wget is used to download the Aliyun yum repository file.
# yum -y install bash-completion vim wget
2) Configure the Aliyun yum repository on all three hosts
# mkdir /etc/yum.repos.d/backup
# mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/backup/
# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# yum clean all && yum list
3) Modify the hosts file
[root@kube-master ~]# vim /etc/hosts
192.168.128.110 kube-master
192.168.128.110 etcd
192.168.128.111 kube-node1
192.168.128.112 kube-node2
[root@kube-master ~]# scp /etc/hosts 192.168.128.111:/etc/
[root@kube-master ~]# scp /etc/hosts 192.168.128.112:/etc/
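To verify that name resolution works after copying the file, a quick check such as the following can be run on each host (the hostnames are the ones defined above):
# for h in kube-master etcd kube-node1 kube-node2; do ping -c 1 $h > /dev/null && echo "$h resolves"; done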
4) Install and configure the components and etcd on the master node
Install the K8s components on the master node
[root@kube-master ~]# yum install -y kubernetes etcd flannel ntp
Turn off the firewall, or open the ports used by the K8s components: etcd uses port 2379 by default, and the API Server uses port 8080 by default.
[root@kube-master ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
# Alternatively, open the required ports:
[root@kube-master ~]# firewall-cmd --permanent --zone=public --add-port={2379,8080}/tcp
success
[root@kube-master ~]# firewall-cmd --reload
success
[root@kube-master ~]# firewall-cmd --zone=public --list-ports
2379/tcp 8080/tcp
Modify the etcd configuration file, then start etcd and check its status
[root@kube-master ~]# vim /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"    # line 3, data storage directory
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.128.110:2379"    # line 6, addresses etcd listens on for client traffic, default port 2379; if set to 0.0.0.0 it listens on all interfaces
ETCD_NAME="default"    # line 9, node name; if the cluster has only one node this line can stay commented out, the default is "default"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.128.110:2379"    # line 21
[root@kube-master ~]# systemctl start etcd && systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@kube-master ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-01-14 14:02:31 CST
   CGroup: /system.slice/etcd.service
           └─12573 /usr/bin/etcd --name=default --data-dir=/var/lib/etcd/default.etcd --listen-client-urls=http://localhost:2379,http://192.168.128.110:2379
Jan 14 14:02:31 kube-master etcd[12573]: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
Jan 14 14:02:31 kube-master etcd[12573]: setting up the initial cluster version to 3.3
Jan 14 14:02:31 kube-master etcd[12573]: set the initial cluster version to 3.3
Jan 14 14:02:31 kube-master etcd[12573]: enabled capabilities for version 3.3
Jan 14 14:02:31 kube-master etcd[12573]: published {Name:default ClientURLs:[http://192.168.128.110:2379]} to cluster cdf818194e3a8c32
Jan 14 14:02:31 kube-master etcd[12573]: ready to serve client requests
Jan 14 14:02:31 kube-master etcd[12573]: ready to serve client requests
Jan 14 14:02:31 kube-master etcd[12573]: serving insecure client requests on 192.168.128.110:2379, this is strongly discouraged!
Jan 14 14:02:31 kube-master etcd[12573]: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
Jan 14 14:02:31 kube-master systemd[1]: Started Etcd Server.
[root@kube-master ~]# yum -y install net-tools    # net-tools provides the netstat utility used below
[root@kube-master ~]# netstat -antup | grep 2379
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 12573/etcd
tcp 0 0 192.168.128.110:2379 0.0.0.0:* LISTEN 12573/etcd
tcp 0 0 192.168.128.110:2379 192.168.128.110:49240 ESTABLISHED 12573/etcd
tcp 0 0 127.0.0.1:2379 127.0.0.1:35638 ESTABLISHED 12573/etcd
tcp 0 0 192.168.128.110:49240 192.168.128.110:2379 ESTABLISHED 12573/etcd
tcp 0 0 127.0.0.1:35638 127.0.0.1:2379 ESTABLISHED 12573/etcd
[root@kube-master ~]# etcdctl cluster-health    # check the etcd cluster health
member 8e9e05c52164694d is healthy: got healthy result from http://192.168.128.110:2379
cluster is healthy
[root@kube-master ~]# etcdctl member list    # list the etcd cluster members
8e9e05c52164694d: name=default peerURLs=http://localhost:2380 clientURLs=http://192.168.128.110:2379 isLeader=true
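As an optional sanity test, a throwaway key can be written and read back with etcdctl (/test here is just an example key, not used by the cluster):
[root@kube-master ~]# etcdctl set /test "hello"
hello
[root@kube-master ~]# etcdctl get /test
hello
[root@kube-master ~]# etcdctl rm /test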
Modify the common K8s configuration file on the master
[root@kube-master ~]# vim /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"    # line 13, log errors to standard error; if false, errors are logged to a file
KUBE_LOG_LEVEL="--v=0"    # line 16, log level
KUBE_ALLOW_PRIV="--allow-privileged=false"    # line 19, whether containers may run privileged; false means not allowed
KUBE_MASTER="--master=http://192.168.128.110:8080"    # line 22
Modify the API Server configuration file
[root@kube-master ~]# vim /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"    # line 8, API Server listens on all interfaces
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.128.110:2379"    # line 17, etcd storage address
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"    # line 20, IP address range provided to Pods and Services
# By default the following admission modules are allowed: NamespaceLifecycle, NamespaceExists, LimitRanger, SecurityContextDeny, ServiceAccount, ResourceQuota
KUBE_ADMISSION_CONTROL="--admission-control=AlwaysAdmit"    # line 23, admission modules to allow; no restrictions here
KUBE_API_ARGS=""    # line 26
The Controller Manager configuration file does not need to be modified; you can take a look at it
[root@kube-master ~]# vim /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS=""
[root@kube-master ~]# rpm -qf /etc/kubernetes/controller-manager
kubernetes-master-1.5.2-0.7.git269f928.el7.x86_64
Modify the Scheduler configuration file
[root@kube-master ~]# vim /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--address=0.0.0.0"
Modify the flanneld (overlay network) configuration file
[root@kube-master ~]# vim /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://192.168.128.110:2379"    # line 4, etcd storage address
FLANNEL_ETCD_PREFIX="/k8s/network"    # line 8, etcd storage configuration directory
FLANNEL_OPTIONS="--iface=ens33"    # line 11, specify the physical NIC used for communication
[root@kube-master ~]# mkdir -p /k8s/network
[root@kube-master ~]# etcdctl set /k8s/network/config '{"Network": "10.255.0.0/16"}'    # fill in the IP range
{"Network": "10.255.0.0/16"}
[root@kube-master ~]# etcdctl get /k8s/network/config    # from this range, flanneld running on the nodes will later assign IP addresses to docker automatically
{"Network": "10.255.0.0/16"}
[root@kube-master ~]# systemctl start flanneld && systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@kube-master ~]# ip a sh    # success
......
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet 10.255.29.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::723e:875f:5995:76d0/64 scope link flags 800
       valid_lft forever preferred_lft forever
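Once flanneld is running, each host's subnet lease is recorded under the configured prefix; assuming the /k8s/network prefix set above, the leases can be listed as follows (the exact subnet key will vary per host):
[root@kube-master ~]# etcdctl ls /k8s/network/subnets
/k8s/network/subnets/10.255.29.0-24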
Restart the API Server, Controller Manager, and Scheduler on the master and enable them at boot. Note: each component can be restarted individually after its configuration is modified, or all of them can be handled in one operation as shown here.
[root@kube-master ~]# systemctl restart kube-apiserver kube-controller-manager kube-scheduler
[root@kube-master ~]# systemctl enable kube-apiserver kube-controller-manager kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
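To confirm the master components came up, the API Server can be probed on its insecure port and the component statuses queried with kubectl; a minimal check (output abbreviated, exact formatting may vary):
[root@kube-master ~]# curl http://192.168.128.110:8080/healthz
ok
[root@kube-master ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}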
5) Install and configure the components on the node1/minion1 node
Install the K8s components on the node1/minion1 node
[root@kube-node1 ~]# yum -y install kubernetes flannel ntp
Turn off the firewall, or open the ports used by the K8s components: kube-proxy uses port 10249 by default, and kubelet uses ports 10248, 10250, and 10255 by default.
[root@kube-node1 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Configure the network; flanneld (overlay network) is used here. Then restart flanneld and enable it at boot.
[root@kube-node1 ~]# vim /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://192.168.128.110:2379"    # etcd storage address
FLANNEL_ETCD_PREFIX="/k8s/network"    # etcd storage directory
FLANNEL_OPTIONS="--iface=ens33"    # physical NIC used for communication
[root@kube-node1 ~]# systemctl restart flanneld && systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
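If flanneld started correctly, it writes the subnet it leased to a local environment file that docker picks up for its bridge; assuming the default file location used by the CentOS flannel package, it can be inspected like this (values are illustrative):
[root@kube-node1 ~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.255.0.0/16
FLANNEL_SUBNET=10.255.42.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false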
Modify the common K8s configuration file
[root@kube-node1 ~]# vim /etc/kubernetes/config
KUBE_MASTER="--master=http://192.168.128.110:8080"    # line 22, point to the master node
Take a look at the kube-proxy configuration; it listens on all IPs by default, so it does not need to be modified.
[root@kube-node1 ~]# grep -v '^#' /etc/kubernetes/proxy
KUBE_PROXY_ARGS=""    # listens on all IPs by default
Modify the kubelet configuration file. Note: KUBELET_POD_INFRA_CONTAINER specifies the address of the Pod base image. Every Pod, when it starts, launches a container from this base image; if the image is not present locally, kubelet downloads it from the external network.
[root@kube-node1 ~]# vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"    # line 5, listen on all IPs, so kubectl can connect to kubelet remotely to view Pod and container status
KUBELET_HOSTNAME="--hostname-override=kube-node1"    # line 11, override the hostname
KUBELET_API_SERVER="--api-servers=http://192.168.128.110:8080"    # line 14, point to the API Server
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"    # line 17, address of the Pod base image
KUBELET_ARGS=""    # line 20
Restart kube-proxy, kubelet, and docker (they were not actually started yet), and enable them at boot
[root@kube-node1 ~]# systemctl restart kube-proxy kubelet docker
[root@kube-node1 ~]# systemctl enable kube-proxy kubelet docker
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
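Since docker is now running, the Pod base image can optionally be pulled in advance so the first Pod does not have to wait on the download, and kubelet's local health endpoint can be probed:
[root@kube-node1 ~]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
[root@kube-node1 ~]# curl http://127.0.0.1:10248/healthz
ok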
Check the network interfaces and listening ports
[root@kube-node1 ~]# ip a sh
......
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet 10.255.42.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::a721:7a65:54ea:c2b/64 scope link flags 800
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:5c:5b:ae:8c brd ff:ff:ff:ff:ff:ff
    inet 10.255.42.1/24 scope global docker0
       valid_lft forever preferred_lft forever
[root@kube-node1 ~]# yum -y install net-tools
[root@kube-node1 ~]# netstat -antup | grep proxy
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 1473/kube-proxy
tcp 0 0 192.168.128.111:55342 192.168.128.110:8080 ESTABLISHED 1473/kube-proxy
tcp 0 0 192.168.128.111:55344 192.168.128.110:8080 ESTABLISHED 1473/kube-proxy
[root@kube-node1 ~]# netstat -antup | grep kubelet
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 1698/kubelet
tcp 0 0 192.168.128.111:55350 192.168.128.110:8080 ESTABLISHED 1698/kubelet
tcp 0 0 192.168.128.111:55351 192.168.128.110:8080 ESTABLISHED 1698/kubelet
tcp 0 0 192.168.128.111:55354 192.168.128.110:8080 ESTABLISHED 1698/kubelet
tcp 0 0 192.168.128.111:55356 192.168.128.110:8080 ESTABLISHED 1698/kubelet
tcp6 0 0 :::4194 :::* LISTEN 1698/kubelet
tcp6 0 0 :::10250 :::* LISTEN 1698/kubelet
tcp6 0 0 :::10255 :::* LISTEN 1698/kubelet
6) Install and configure the components on the node2/minion2 node
Repeat the operations performed on the node1/minion1 node, changing the hostname override in /etc/kubernetes/kubelet to kube-node2
7) Test: check the status of all cluster nodes from the master
[root@kube-master ~]# kubectl get nodes
NAME         STATUS    AGE
kube-node1   Ready     1h
kube-node2   Ready     2m
At this point the K8s container cluster management system is fully built, but it cannot yet be managed through a web page; it can only be operated with the kubectl command.
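As a final smoke test, a throwaway deployment can be created from the master to confirm that Pods are actually scheduled onto the nodes (nginx-test and the nginx image are only examples; the nodes need Internet access to pull the image):
[root@kube-master ~]# kubectl run nginx-test --image=nginx --replicas=2
[root@kube-master ~]# kubectl get pods -o wide    # the pods should land on kube-node1 and kube-node2
[root@kube-master ~]# kubectl delete deployment nginx-test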