Kubernetes architecture
The cluster is divided into master and node roles. The master schedules and assigns work; the nodes actually carry out the work the master assigns. The apiserver is the master's user interface for running administrative commands, and all components communicate through the apiserver.
etcd stores cluster configuration and state
scheduler decides which node each pod runs on
controller-manager runs the controllers that drive the cluster toward its desired state
kubelet accepts scheduled work and manages the pods on its node
kube-proxy provides service discovery and load balancing
The pod is the smallest deployment unit in K8s. A pod wraps one or more containers; closely related containers are generally deployed in the same pod.
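As a sketch of this idea, the manifest below (all names are hypothetical, chosen for illustration) co-locates a web server with a closely related log-tailing helper container in a single pod, sharing a volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-sidecar   # hypothetical example name
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-tailer           # closely related helper, deployed in the same pod
    image: busybox
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
  volumes:
  - name: logs
    emptyDir: {}               # shared scratch volume, lives as long as the pod
```

Both containers are scheduled onto the same node and share the pod's network namespace and volumes, which is why tightly coupled containers belong in one pod.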
k8s installation
kubeadm wraps up the entire installation process and is easy to install (recommended)
binary installation gives a clearer understanding of the overall structure and is easier to debug (recommended)
minikube is single-node, spins up an environment quickly for testing
yum is not recommended
Installing k8s with kubeadm
Preparing the environment
Turn off the firewall:
$ systemctl stop firewalld
$ systemctl disable firewalld

Disable SELinux:
$ sed -i 's/enforcing/disabled/' /etc/selinux/config    # permanent
$ setenforce 0                                          # temporary

Disable swap:
$ swapoff -a        # temporary
$ vim /etc/fstab    # permanent: comment out the swap line

Add the hostname-to-IP mappings (remember to set the hostname on each machine):
$ cat /etc/hosts
192.168.31.61 k8s-master
192.168.31.62 k8s-node1
192.168.31.63 k8s-node2

Pass bridged IPv4 traffic to the iptables chains:
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system
Install Docker / kubeadm / kubelet on all nodes
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce-18.06.1.ce-3.el7
$ systemctl enable docker && systemctl start docker
$ docker --version
Docker version 18.06.1-ce, build e68fc7a
Add the Aliyun YUM repository
$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubeadm, kubelet, and kubectl
Because versions update frequently, the version to deploy is pinned here:
$ yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
$ systemctl enable kubelet
Deploy the Kubernetes Master
$ kubeadm init \
  --apiserver-advertise-address=192.168.31.61 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.15.0 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
Since the default image registry, k8s.gcr.io, is not reachable from mainland China, the Aliyun mirror repository address is specified here.
Set up the kubectl tool:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
Install a Pod network plugin (CNI)
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
Make sure you can reach quay.io, the registry hosting the image. If the download fails, you can change the image address in the manifest to: lizhenliang/flannel:v0.11.0-amd64
Join the Kubernetes Nodes
Run this on each node. To add a new node to the cluster, execute the kubeadm join command printed in the kubeadm init output:
$ kubeadm join 192.168.31.61:6443 --token esce21.q6hetwm8si29qxwn \
    --discovery-token-ca-cert-hash sha256:00603a05805807501d7181c3d60b478788408cfe6cedefedb1f97569708be9c5
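If the kubeadm init output was lost, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA certificate. The sketch below wraps the formula kubeadm uses (SHA-256 of the CA's DER-encoded public key) in a hypothetical helper function; the function name is an illustration, not part of kubeadm:

```shell
# ca_cert_hash <ca.crt>  - print the hash in the sha256:<hex> form kubeadm expects
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | awk '{print "sha256:" $NF}'
}

# On the master, the cluster CA certificate lives at /etc/kubernetes/pki/ca.crt:
# ca_cert_hash /etc/kubernetes/pki/ca.crt
```

A fresh bootstrap token for the join command can likewise be generated on the master with kubeadm token create (tokens expire after 24 hours by default).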
Test the Kubernetes cluster
Create a pod in the Kubernetes cluster to verify it is working correctly:
$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
Access address: http://NodeIP:Port
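For reference, the kubectl expose command above corresponds roughly to applying a Service manifest like this sketch (the app=nginx selector assumes the label that kubectl create deployment sets on its pods):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort          # exposes the service on a port of every node
  selector:
    app: nginx            # assumed label from `kubectl create deployment nginx`
  ports:
  - port: 80              # cluster-internal service port
    targetPort: 80        # container port on the nginx pods
    # nodePort is auto-assigned from 30000-32767 unless set explicitly
```

The assigned node port appears in the PORT(S) column of kubectl get svc output, e.g. 80:3XXXX/TCP.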
Deploy Dashboard
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

The default image is not accessible from mainland China; change the image address in the manifest to: lizhenliang/kubernetes-dashboard-amd64:v1.10.1

By default the Dashboard is only accessible from inside the cluster. Change the Service to type NodePort to expose it externally:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

$ kubectl apply -f kubernetes-dashboard.yaml

Access address: https://NodeIP:30001

Create a service account and bind it to the cluster's default cluster-admin administrator role:
$ kubectl create serviceaccount dashboard-admin -n kube-system
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
$ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Use the token from the output to log in to the Dashboard.