Hands-on: installing Kubernetes 1.15 with kubeadm

The original source of this article is also available on GitHub.

Part One: Prepare the environment (on all three servers)

Three CentOS 7.5 servers; Calico is used for the pod network.

IP address    Role     CPU   RAM   Hostname
10.0.1.45     master   2C    4G    k8s-master
10.0.1.20     node     2C    4G    node1
10.0.1.18     node     2C    4G    node2
1. Set the hostname.
hostnamectl set-hostname k8s-master

Set the hostname to match the table above; do the same for node1 and node2, as shown below.
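On the two worker servers that means:

hostnamectl set-hostname node1   # on 10.0.1.20

hostnamectl set-hostname node2   # on 10.0.1.18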

2. Edit the hosts file on all three servers to add name resolution.
cat <<EOF >>/etc/hosts
10.0.1.45 k8s-master
10.0.1.20 node1
10.0.1.18 node2
EOF
3. Turn off the firewall, SELinux, and swap.
systemctl stop firewalld

systemctl disable firewalld

Disable SELinux:

setenforce 0

vim /etc/selinux/config
SELINUX=disabled
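If you prefer a non-interactive edit instead of vim, the same change can be made with sed (assuming the file currently reads SELINUX=enforcing):

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config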

Turn off swap:

swapoff -a

sed -i 's/.*swap.*/#&/' /etc/fstab
4. Configure kernel parameters so that bridged IPv4 traffic is passed to iptables chains.
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl --system
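To verify that the settings took effect, you can query the keys directly; each should print a value of 1:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward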
5. Enable IPVS for kube-proxy.
Since IPVS has already been merged into the mainline kernel, the prerequisite for running kube-proxy in IPVS mode is that the following kernel modules are loaded:
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Create the file /etc/sysconfig/modules/ipvs.modules so that the required modules are loaded automatically whenever the node reboots. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check whether the required kernel modules have been loaded correctly.

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
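Note that loading the modules only satisfies the prerequisite; it does not by itself switch kube-proxy into IPVS mode. A common follow-up (a sketch, to be run after the cluster is initialized in Part Two) is to install the userspace tools, set mode: "ipvs" in the kube-proxy ConfigMap, and recreate the kube-proxy pods:

yum install -y ipset ipvsadm

# After cluster init: find mode: "" in the proxy configuration and change it to mode: "ipvs"
kubectl -n kube-system edit configmap kube-proxy

# Recreate the kube-proxy pods so they pick up the new mode
kubectl -n kube-system delete pod -l k8s-app=kube-proxy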
6. Install Docker.

The Docker versions currently validated against Kubernetes 1.15 are 1.13.1, 17.03, 17.06, 17.09, 18.06, and 18.09. Here we install Docker 18.09.7 on every node.

yum install -y yum-utils device-mapper-persistent-data lvm2 && \
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo && \
yum install -y docker-ce-18.09.7-3.el7
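The package install does not start the service, so enable and start Docker before continuing:

systemctl enable docker && systemctl start docker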
7. Change the Docker cgroup driver to systemd.

According to the CRI installation documentation, on Linux distributions that use systemd as the init system, configuring Docker to use systemd as its cgroup driver keeps the node more stable when resources are tight. So change Docker's cgroup driver to systemd on every node.

vim /etc/docker/daemon.json

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

# Restart docker
systemctl restart docker

# Verify the driver
docker info | grep Cgroup
Cgroup Driver: systemd
8. Install the Kubernetes tools.

Configure domestic (China mirror) yum sources:

yum install -y wget

mkdir /etc/yum.repos.d/bak && mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos7_base.repo

wget -O /etc/yum.repos.d/epel.repo http://mirrors.cloud.tencent.com/repo/epel-7.repo

yum clean all && yum makecache

Configure a domestic Kubernetes package source:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, and kubectl:

yum install -y kubeadm kubelet kubectl

systemctl enable kubelet
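Note: without a version, the yum install above pulls the newest packages in the repo, which may be newer than 1.15. To match this guide you can pin the versions explicitly (assuming the 1.15.0 packages are present in the mirror):

yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0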

Part Two: Configure the k8s-master node

1. Initialize the Kubernetes cluster with kubeadm.

kubeadm init --kubernetes-version=1.15.0 \
--apiserver-advertise-address=10.0.1.45 \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.244.0.0/16

Explanation of the flags:

--apiserver-advertise-address=10.0.1.45
# The API server address, i.e. the master's own IP.

--image-repository registry.aliyuncs.com/google_containers
# kubeadm pulls images from k8s.gcr.io by default; to avoid network problems, use the Aliyun mirror registry instead.

--pod-network-cidr=10.244.0.0/16
# Define the pod network CIDR as 10.244.0.0/16.

On success, cluster initialization prints join information like the following:

kubeadm join 10.0.1.45:6443 --token bybzi7.7201j7f7mtiwtmg4 \
--discovery-token-ca-cert-hash sha256:9186c9b0709af151079bcb034f1771f10f382341bfb45024e5d0c541a055f2eb
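The bootstrap token expires after 24 hours by default. If you need the join command again later, regenerate it on the master with:

kubeadm token create --print-join-command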
2. Configure the kubectl tool.
mkdir -p ~/.kube

cp /etc/kubernetes/admin.conf ~/.kube/config

# Check the cluster status and confirm all components are Healthy
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}

If you run into problems while initializing the cluster, you can clean up with the following command (use with caution):

kubeadm reset
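Note that kubeadm reset does not flush iptables or IPVS rules; its output reminds you to clean those up manually if needed, for example:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm -C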
3. Install the Calico v3.8 network service.
wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml

# The default pod CIDR in calico.yaml is 192.168.0.0/16; open calico.yaml, find this entry, and change it to 10.244.0.0/16.
vim calico.yaml

- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"

kubectl apply -f ./calico.yaml

# Watch the pods come up and wait until every pod's STATUS is Running.
watch kubectl get pods --all-namespaces
NAMESPACE    NAME                                       READY  STATUS   RESTARTS  AGE
kube-system  calico-kube-controllers-6ff88bf6d4-tgtzb   1/1    Running  0         2m45s
kube-system  calico-node-24h85                          1/1    Running  0         2m43s
kube-system  coredns-846jhw23g9-9af73                   1/1    Running  0         4m5s
kube-system  coredns-846jhw23g9-hmswk                   1/1    Running  0         4m5s
kube-system  etcd-jbaker-1                              1/1    Running  0         6m22s
kube-system  kube-apiserver-jbaker-1                    1/1    Running  0         6m12s
kube-system  kube-controller-manager-jbaker-1           1/1    Running  0         6m16s
kube-system  kube-proxy-8fzp2                           1/1    Running  0         5m16s
kube-system  kube-scheduler-jbaker-1                    1/1    Running  0         5m41s

4. Set environment variables.

# Append (not overwrite) the profile, then load it into the current shell
cat >> ~/.bash_profile << EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF

source ~/.bash_profile

Part Three: Configure the worker nodes (run on every worker node)

To add the worker nodes, run the following command on each one so that it joins the Kubernetes cluster.

kubeadm join 10.0.1.45:6443 --token bybzi7.7201j7f7mtiwtmg4    \
 --discovery-token-ca-cert-hash sha256:9186c9b0709af151079bcb034f1771f10f382341bfb45024e5d0c541a055f2eb

After the nodes join successfully, run the following command on k8s-master to check the cluster status.

kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   36m     v1.15.0
node1        Ready    <none>   3m10s   v1.15.0
node2        Ready    <none>   3m      v1.15.0

Part Four: Deploy the Dashboard (on k8s-master)

1. Download the Dashboard yaml file.
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

# Replace the source image used in the yaml file.
sed -i 's/k8s.gcr.io/loveone/g' kubernetes-dashboard.yaml
# Use NodePort mode to map port 30001 on all k8s hosts to the Dashboard (the resulting Service section is sketched below).
sed -i '/targetPort:/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' kubernetes-dashboard.yaml
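After these two sed edits, the Service at the bottom of kubernetes-dashboard.yaml should look roughly like this (443/8443 are the ports Dashboard v1.10.1 uses; shown for orientation, not to be typed in):

spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard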
2. Deploy the Dashboard.
kubectl apply -f kubernetes-dashboard.yaml
3. Once created, check that the related deployment and service are running.
kubectl get deployment kubernetes-dashboard -n kube-system
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1/1     1            1           3m

kubectl get services -n kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   40m
kubernetes-dashboard   NodePort    10.99.190.175   <none>        443:30001/TCP            4m
4. Open the Dashboard in Firefox at https://10.0.1.45:30001.
Note: because of the Dashboard's self-signed certificate, Chrome and IE may refuse to open the page, so use Firefox.

5. Get the authentication token for Dashboard access.
kubectl create serviceaccount  dashboard-admin -n kube-system
kubectl create clusterrolebinding  dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
6. Log in to the Dashboard with the token from the output.

Part Five: Create pods

1. Create pods from the command line.

[root@k8s-master mainfests]# kubectl run nginx-deploy --image=nginx:1.14 --port=80 --replicas=3

[root@k8s-master mainfests]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
nginx-deploy-bc9ff65dd-6kvfg   1/1     Running   0          16h
nginx-deploy-bc9ff65dd-ffcl5   1/1     Running   0          16h
nginx-deploy-bc9ff65dd-pvjzt   1/1     Running   0          17h
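In 1.15, kubectl run with --replicas is deprecated (it creates a Deployment through a deprecated generator). A sketch of the equivalent using the explicit Deployment commands:

kubectl create deployment nginx-deploy --image=nginx:1.14
kubectl scale deployment nginx-deploy --replicas=3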

2. Create a pod from a yaml file.

[root@k8s-master mainfests]# cat pod-demo.yml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: nginx:1.14
  - name: busybox
    image: busybox:latest
    command:
    - "bin/sh"
    - "-c"
    - "sleep 3600"

[root@k8s-master mainfests]# kubectl  create -f pod-demo.yml
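Since pod-demo has two containers, READY should show 2/2 once both are up (the output below is illustrative):

kubectl get pod pod-demo
NAME       READY   STATUS    RESTARTS   AGE
pod-demo   2/2     Running   0          1m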

Originally published at www.cnblogs.com/boy215/p/11276010.html