K8s build: 1 master, 2 workers (dashboard + ingress)

This article describes building a Kubernetes cluster with the latest version at the time of writing (v1.15.2).

The build covers the following steps:

  1. Basic configuration of each node
  2. Build the master node
  3. Build the worker nodes
  4. Install the dashboard
  5. Install ingress
  6. Common commands
  7. Notes on the Docker images

 

 

Basic configuration of each node (run the following commands on every node: master, worker1, worker2)

Change the IP addresses below according to your actual environment.

systemctl stop firewalld && systemctl disable firewalld

cat >> /etc/hosts << EOF
10.8.1.1 k8s-master1 api.k8s.cn
10.8.1.2 k8s-slave1
10.8.1.3 k8s-slave2
EOF

# create the iptables bridge configuration file
cat << EOF > net.iptables.k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# turn off the swap partition
sudo swapoff -a
# comment out the swap entry in fstab so it is not mounted again at boot
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# turn off SELinux
sudo setenforce 0
# disable SELinux at boot by modifying its config file
sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config






# apply the iptables configuration
sudo mv net.iptables.k8s.conf /etc/sysctl.d/ && sudo sysctl --system
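
To double-check that these settings took effect, the following should report 0 swap, Permissive, and both bridge flags set to 1 (if sysctl complains about an unknown key, load the module first with modprobe br_netfilter):

free -h        # the Swap row should show 0
getenforce     # should print Permissive
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables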

# add the Aliyun mirror yum repo for kubernetes
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum update

# install wget
sudo yum install -y wget

# install Docker
yum install -y docker.x86_64

# install the k8s tools
yum install -y kubelet kubeadm kubectl

# enable and start the kubelet service
systemctl enable kubelet && systemctl start kubelet
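
The command above installs whatever version the repo currently carries. Since this article is written against v1.15.2, you may want to pin the packages instead (assuming the Aliyun repo still provides the 1.15.2 builds):

yum install -y kubelet-1.15.2 kubeadm-1.15.2 kubectl-1.15.2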

Then pull the required Docker images (this step also needs to be performed separately on the master, worker1, and worker2 nodes):

vi init-docker-images.sh

# the content is as follows
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.2
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.2
docker pull registry.aliyuncs.com/google_containers/coredns:1.3.1
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.15.2
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.2
docker pull registry.aliyuncs.com/google_containers/etcd:3.2.24
docker pull registry.aliyuncs.com/google_containers/etcd:3.3.10
docker pull registry.aliyuncs.com/google_containers/pause:3.1
docker pull registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
docker pull registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
docker pull docker.io/jmgao1983/flannel:v0.11.0-amd64
docker pull quay-mirror.qiniu.com/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0

docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.2 k8s.gcr.io/kube-apiserver:v1.15.2
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.2 k8s.gcr.io/kube-controller-manager:v1.15.2
docker tag registry.aliyuncs.com/google_containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.15.2 k8s.gcr.io/kube-proxy:v1.15.2
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.2 k8s.gcr.io/kube-scheduler:v1.15.2
docker tag registry.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker tag registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
docker tag docker.io/jmgao1983/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
docker tag quay-mirror.qiniu.com/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0

Run the script and wait until all downloads complete:

chmod +x init-docker-images.sh
./init-docker-images.sh
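
When the script finishes, all the images kubeadm expects should be visible under their re-tagged names:

docker images | grep k8s.gcr.io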

  

Build the master node

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=0.0.0.0  

Then record the join command it prints, which looks similar to this:

kubeadm join api.k8s.cn:6443 --token b5jxzg.5jtj2odzoqujikk1 \
    --discovery-token-ca-cert-hash sha256:90d0ad57b39bf47bface0c7f4edec480aaf8352cab872f4d52072f998cf45105   
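
The token is only valid for 24 hours by default; if it expires before the workers join, generate a fresh join command on the master:

kubeadm token create --print-join-command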

 

At this point the cluster will be in the NotReady state (check on the master node with kubectl get nodes); you then need the following:

    mkdir -p $HOME/.kube

    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

    sudo chown $(id -u):$(id -g) $HOME/.kube/config


vi /var/lib/kubelet/kubeadm-flags.env
# delete the --network-plugin=cni text from the file,
# then save and restart the kubelet service
service kubelet restart
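
Deleting --network-plugin=cni makes kubelet fall back to its built-in no-op network. An alternative, given the flannel image pulled earlier, is to install flannel as the pod network instead of removing the flag (a sketch; the manifest URL may have moved since the coreos repo was renamed to flannel-io):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml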

 

Wait a moment and the master node becomes Ready (kubectl get nodes).

 

Build the worker nodes (perform separately on worker1 and worker2)

kubeadm join {master1's IP, replace with your own}:6443 --token rntn5f.vy9h28s4pxwx6eig \
    --discovery-token-ca-cert-hash sha256:62624adcc8aa5baa095dae607b8e57c8b619db956ad69e0e97f0e40c74542a92

vi /var/lib/kubelet/kubeadm-flags.env
# delete the --network-plugin=cni text from the file,
# then save and restart the kubelet service
service kubelet restart
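
Back on the master, the new nodes should now appear in kubectl get nodes. Their ROLES column shows <none> by default; optionally label them as workers (hostnames assumed from the /etc/hosts entries above):

kubectl label node k8s-slave1 node-role.kubernetes.io/worker=
kubectl label node k8s-slave2 node-role.kubernetes.io/worker=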

 

 

Install the dashboard (master node only)

wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

vi kubernetes-dashboard.yaml
# change the dashboard Service to NodePort; it is at the end of the file
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

kubectl apply -f kubernetes-dashboard.yaml
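
Before opening the browser, you can confirm the dashboard pod is Running and the Service exposes 443 on NodePort 30001:

kubectl -n kube-system get pods | grep dashboard
kubectl -n kube-system get svc kubernetes-dashboard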

Create a new account

vi dashboard-account.yaml

# the content is as follows
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aks-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: aks-dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: aks-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-head
  labels:
    k8s-app: kubernetes-dashboard-head
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-head
  namespace: kube-system


# apply it
kubectl apply -f dashboard-account.yaml  

You can then visit https://{master's IP address}:30001 using Firefox.

Then find the login token:

[root@k8s-master1 ~]# kubectl -n kube-system get secrets|grep aks-dashboard-admin
aks-dashboard-admin-token-gmjfv                  kubernetes.io/service-account-token   3      4h52m


[root@k8s-master1 ~]# kubectl -n kube-system describe secret aks-dashboard-admin-token-gmjfv
Name:         aks-dashboard-admin-token-gmjfv
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: aks-dashboard-admin
              kubernetes.io/service-account.uid: 87d4ec1b-1829-4420-98d6-e77c1519aed6

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJha3MtZGFzaGJvYXJkLWFkbWluLXRva2VuLWdtamZ2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFrcy1kYXNoYm9hcmQtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4N2Q0ZWMxYi0xODI5LTQ0MjAtOThkNi1lNzdjMTUxOWFlZDYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWtzLWRhc2hib2FyZC1hZG1pbiJ9.ELpsYbmWhW1sr3DOZfyupOkb87AbJ7sVoXEBitoTD46kuuNYcn8ajvwJcdfGruwrM9LwDcvMN7jD5UFF7-rgz1MUBEOZCoAjXFRrM1-Jn59TlXMk9W9JRD3DhMtuBRh6XUgPRjf755qr7WzR_DC8aCwjywAvFE1_R4N2oMZIU8gdmG0BsqwACHIbBnLJDAElBvgnKl8Jm4_XzKZW5ls-C45PSu-GC-yszt8qSN2bO5Z_rIUXhvK13Es5d0nUBvcanFBOsLjotWry195SWKEAuLiMp7qm6RJRrYWEpObh81w3MvbtrycZGMP7g-9H3s5vmHgs7HAnvjTEQht4c0F5qA
[root@k8s-master1 ~]# 

 

Log in with the token above.
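
The two commands above can be collapsed into a one-liner that prints just the decoded token (a convenience sketch using standard kubectl jsonpath):

kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secret | awk '/aks-dashboard-admin-token/{print $1}') \
  -o jsonpath='{.data.token}' | base64 -d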

 

Install ingress

# pinned to the 0.25.0 tag to match the controller image pulled earlier
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.25.0/deploy/static/mandatory.yaml
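
mandatory.yaml only creates the controller Deployment; on bare metal nothing exposes it outside the cluster yet. A minimal sketch of a NodePort Service for the controller (assuming the 0.25.0 label set; the 30080/30443 port numbers are arbitrary choices):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx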

  

 

Common commands 

# when kubeadm init or joining the cluster fails, run
kubeadm reset

# list all cluster nodes
kubectl get nodes

# list secrets
kubectl get secrets
kubectl -n kube-system get secrets

# show details of a specific pod, e.g. to see why it is not in the Running state
kubectl describe pod {pod name}
kubectl -n kube-system describe pod {pod name}
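
When describe is not enough, the pod's logs are usually the next place to look:

# view a pod's container logs
kubectl logs {pod name}
kubectl -n kube-system logs {pod name}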

  

Notes on the Docker images

Some of the Docker images are blocked by the GFW, so a default installation hangs while pulling them; the workaround is to download them in advance from domestic mirrors.

Images downloaded from a mirror live under a different namespace, so after the download completes you need to re-tag them with docker tag back to the original (foreign) namespace.

 

Shortcomings, to be solved later:

  • Master node high availability
  • etcd high availability (in fact included in the above)

 


Source: www.cnblogs.com/aarond/p/k8s.html