Kubernetes cluster deployment
Architecture Description
| node   | IP address  |
|--------|-------------|
| master | 10.10.10.14 |
| node1  | 10.10.10.15 |
| node2  | 10.10.10.16 |
Change the hostname of each machine to master, node1, and node2 respectively, and configure the /etc/hosts file on all test machines:
[root@master ~]# cat /etc/hosts
10.10.10.14 master etcd node14
10.10.10.15 node1 node15
10.10.10.16 node2 node16
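On CentOS 7 the hostname can be set persistently with hostnamectl; the exact commands below are a sketch, not from the original text — run the matching command on each machine:

```shell
# On the master (use "node1" / "node2" on the other two machines)
hostnamectl set-hostname master
```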
Turn off the firewall service that comes with CentOS 7.
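A minimal sketch of disabling the firewall on all hosts; putting SELinux in permissive mode is an extra assumption common in test setups, not something the text above requires:

```shell
systemctl stop firewalld
systemctl disable firewalld
# Optional, test environments only: make SELinux permissive for this boot
setenforce 0
```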
Install
Initial system installation (all hosts): choose [minimal install], then run yum update to upgrade to the latest packages.
yum update
[root@master ~]# yum install -y etcd kubernetes-master ntp flannel
[root@node1 ~]# yum install -y kubernetes-node ntp flannel docker
Run the same node installation command on node2.
Time synchronization
On all hosts:
ntpdate ntp1.aliyun.com
hwclock -w
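ntpdate is a one-off sync; to keep the clocks aligned afterwards, the ntpd service (from the ntp package installed above) can also be enabled — an assumption beyond the original steps:

```shell
systemctl start ntpd
systemctl enable ntpd
```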
Configure etcd server
[root@master ~]# grep -v '^#' /etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://10.10.10.14:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://10.10.10.14:2379"
Start the service
systemctl start etcd; systemctl enable etcd
Check etcd cluster status
[root@master ~]# etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://10.10.10.14:2379
cluster is healthy
Check the etcd cluster member list, there is only one
[root@master ~]# etcdctl member list
8e9e05c52164694d: name=default peerURLs=http://localhost:2380 clientURLs=http://10.10.10.14:2379 isLeader=true
Configure the master server
1) Configure the kube-apiserver configuration file
[root@master ~]# grep -v '^#' /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://10.10.10.14:8080"
[root@master ~]# grep -v '^#' /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://10.10.10.14:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=AlwaysAdmit"
KUBE_API_ARGS=""
2) Configure the kube-controller-manager configuration file
[root@master ~]# grep -v '^#' /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS=""
3) Configure the kube-scheduler configuration file
[root@master ~]# grep -v '^#' /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--address=0.0.0.0"
4) Start the service
for i in kube-apiserver kube-controller-manager kube-scheduler; do systemctl restart $i; systemctl enable $i; done
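After the three services start, their health can be checked from the master; on this generation of Kubernetes the componentstatuses resource reports the scheduler, controller-manager, and etcd:

```shell
kubectl get componentstatuses   # short form: kubectl get cs
```

Each component should report Healthy before proceeding to the node setup.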
Configure node1 node server
1) Set the flannel network configuration in etcd (run on the master)
[root@master ~]# etcdctl set /atomic.io/network/config '{"Network": "10.255.0.0/16"}'
{"Network": "10.255.0.0/16"}
[root@master ~]# etcdctl get /atomic.io/network/config
{"Network": "10.255.0.0/16"}
2) Configure the node1 network. This example uses flannel; for other networking options, refer to the Kubernetes official documentation.
[root@node1 ~]# grep -v '^#' /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://10.10.10.14:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"
FLANNEL_OPTIONS=""
3) Configure node1 kube-proxy
[root@node1 ~]# grep -v '^#' /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://10.10.10.14:8080"
[root@node1 ~]# grep -v '^#' /etc/kubernetes/proxy
KUBE_PROXY_ARGS="--bind-address=0.0.0.0"
4) Configure node1 kubelet
[root@node1 ~]# grep -v '^#' /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname-override=10.10.10.15"
KUBELET_API_SERVER="--api-servers=http://10.10.10.14:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
5) Start the node1 service
for i in flanneld kube-proxy kubelet; do systemctl restart $i;systemctl enable $i;systemctl status $i ;done
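A quick way to confirm flannel picked up a subnet lease from etcd is to inspect the environment file it writes and the flannel interface; the subnet shown should fall inside the 10.255.0.0/16 range configured above:

```shell
cat /run/flannel/subnet.env   # FLANNEL_SUBNET should be within 10.255.0.0/16
ip addr show flannel0
```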
Configure node2 node server
The configuration of node2 is basically the same as that of node1, with one exception:
[root@node2 ~]# vi /etc/kubernetes/kubelet
KUBELET_HOSTNAME="--hostname-override=10.10.10.16"
View Nodes
[root@master ~]# kubectl get nodes
NAME STATUS AGE
10.10.10.15 Ready 18h
10.10.10.16 Ready 13h
k8s supports two ways of creating resources: directly through command-line parameters, or through a configuration file; configuration files can be written in JSON or YAML.
Command mode
Create a pod
kubectl run nginx --image=nginx --port=80 --replicas=2
Problems encountered
The creation succeeds, but kubectl get pods shows no result.
Error message: no API token found for service account default
Solution: edit /etc/kubernetes/apiserver, remove SecurityContextDeny and ServiceAccount from KUBE_ADMISSION_CONTROL, then restart the kube-apiserver.service service.
pod-infrastructure:latest image download failed
error message: image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request.
Solution: yum install *rhsm* -y
Check the result with kubectl
[root@master log]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-3449338310-h6l9d 1/1 Running 0 6m
nginx-3449338310-n4grl 1/1 Running 0 6m
[root@master log]# kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 2 2 2 2 13m
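The configuration-file method mentioned earlier can be sketched as follows. The file name nginx.yaml and the app label are illustrative, not from the original text, and on this older Kubernetes version Deployments live under the extensions/v1beta1 API group (the apiVersion may differ on other releases):

```shell
# Write a Deployment manifest equivalent to the kubectl run command above
cat > nginx.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
kubectl create -f nginx.yaml
```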