Trying Out Kubernetes 1.12.1 with the Latest kubeadm on CentOS 7

1. Environment Preparation

CentOS 7, docker-ce 18.06.1.ce, kubeadm, kubelet, kubectl

2. Installation

Install everything via yum. First, prepare the repo files.

docker:

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://download.docker.com/linux/centos/7/source/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

kubeadm, kubelet, kubectl:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
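
Save the two blocks above as /etc/yum.repos.d/docker-ce.repo and /etc/yum.repos.d/kubernetes.repo respectively, then install:
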
yum -y install docker-ce-18.06.1.ce-3.el7.x86_64
yum -y install kubeadm kubelet kubectl
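
Before going further, start Docker and enable kubelet so it survives reboots; these are the standard post-install steps for a kubeadm setup:

systemctl enable docker && systemctl start docker
systemctl enable kubelet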

3. Configuring Docker

Set Docker's data directory and storage driver in the ExecStart line:

# vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --graph=/data/docker --storage-driver=overlay2
Then route Docker's image pulls through a local proxy via systemd drop-ins:

# mkdir -p /etc/systemd/system/docker.service.d
# vim /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://10.10.23.74:8118" "NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.10.29.43,10.10.25.49,172.11.0.0,172.10.0.0,172.11.0.0/16,172.10.0.0/16,10.,172.,.evo.get.com,.kube.hpp.com,charts.gitlab.io,.mirror.ucloud.cn"

# vim /etc/systemd/system/docker.service.d/https-proxy.conf
[Service]
Environment="HTTPS_PROXY=http://10.10.23.74:8118" "NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.10.29.43,10.10.25.49,172.11.0.0,172.10.0.0,172.11.0.0/16,172.10.0.0/16,10.,172.,.evo.get.com,.kube.hpp.com,charts.gitlab.io,.mirror.ucloud.cn"

For installing shadowsocks (the proxy used above), see my other post: https://www.cnblogs.com/cuishuai/p/8463458.html

4. Initialization

Create the file /etc/sysctl.d/k8s.conf:

cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0

sysctl -p /etc/sysctl.d/k8s.conf
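
If sysctl complains that the net.bridge.* keys do not exist, load the br_netfilter module first and persist it across reboots:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf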
 
swapoff -a
kubeadm init --kubernetes-version=v1.12.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.12

--apiserver-advertise-address is the address the master's apiserver advertises and listens on; it defaults to the host's own IP. The pod network CIDR of 10.244.0.0/16 matches flannel's default.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

By default the master node carries the node-role.kubernetes.io/master:NoSchedule taint. Remove it temporarily, both for testing and for the flannel deployment below:

kubectl taint nodes ku node-role.kubernetes.io/master-

ku is the name of my master node; alternatively, pass --all to untaint every node.
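
Once testing is done, the taint can be restored; a minimal sketch, again using my node name ku:

kubectl taint nodes ku node-role.kubernetes.io/master=:NoSchedule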

Install flannel

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Edit kube-flannel.yml so flannel binds to the right host interface by adding --iface=eth0 to the flanneld args:

vim kube-flannel.yml
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth0

kubectl apply -f kube-flannel.yml
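
Watch the kube-system namespace until the flannel and coredns pods are Running:

kubectl get pods -n kube-system -o wide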

Since the master's taint was already removed above, flannel deploys directly. If you skipped that step, instead edit the tolerations in the flannel YAML; otherwise the DaemonSet pods will fail to schedule.

Change this:

spec:
  hostNetwork: true
  nodeSelector:
    beta.kubernetes.io/arch: amd64
  tolerations:
  - operator: Exists
    effect: NoSchedule

to:

tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoSchedule

Test DNS

kubectl run curl --image=radial/busyboxplus:curl -it
[ root@curl-5cc7b478b6-6cfqr:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

Enter the pod:

kubectl exec -it curl-5cc7b478b6-6cfqr -n default -- /bin/sh

Adding a node to the cluster:

Just run the join command printed at the end of kubeadm init; the node itself only needs kubelet and kubeadm installed. A sketch is shown below.
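
The join command looks like the following; the token and hash here are placeholders, substitute the values from your own kubeadm init output (kubeadm token create --print-join-command regenerates them if the token has expired):

kubeadm join 192.168.1.12:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>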

Removing a node from the cluster

On the master node:

kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2

node2 is the name of the node being removed.

Then on node2 itself:

kubeadm reset               # wipe kubeadm state from the node
ifconfig cni0 down
ip link delete cni0         # remove the CNI bridge
ifconfig flannel.1 down
ip link delete flannel.1    # remove flannel's VXLAN interface
rm -rf /var/lib/cni/        # clear leftover CNI state
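
kubeadm reset does not flush iptables rules; if stale rules linger, clear them manually, as kubeadm's own output suggests:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X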

Install Helm

wget https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
tar xf helm-v2.11.0-linux-amd64.tar.gz
cp linux-amd64/helm linux-amd64/tiller /usr/local/bin   # the tarball unpacks into linux-amd64/

Create the ServiceAccount tiller needs. To let helm deploy into every namespace, bind it to the cluster-admin ClusterRole. Create rbac-tiller.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

kubectl apply -f rbac-tiller.yaml

Initialize:

helm init --service-account tiller --upgrade

By default tiller is installed into the kube-system namespace; the namespace, image, and other options can be overridden.
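
For example, helm v2's --tiller-namespace and --tiller-image flags pick those explicitly (the values below just restate the defaults), and helm version confirms tiller is up:

helm init --service-account tiller --tiller-namespace kube-system --tiller-image gcr.io/kubernetes-helm/tiller:v2.11.0
helm version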

Images used:

# kubernetes
k8s.gcr.io/kube-apiserver:v1.12.1
k8s.gcr.io/kube-controller-manager:v1.12.1
k8s.gcr.io/kube-scheduler:v1.12.1
k8s.gcr.io/kube-proxy:v1.12.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/pause:3.1

# network and dns
quay.io/coreos/flannel:v0.10.0-amd64
k8s.gcr.io/coredns:1.2.2


# helm and tiller
gcr.io/kubernetes-helm/tiller:v2.11.0

# nginx ingress
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0
k8s.gcr.io/defaultbackend:1.4

# dashboard and metric-sever
k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
gcr.io/google_containers/metrics-server-amd64:v0.3.0
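
If pulling through the proxy is slow, kubeadm can also pre-pull the core set itself (kubeadm config images pull is available since v1.11):

kubeadm config images pull --kubernetes-version=v1.12.1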

Reposted from www.cnblogs.com/cuishuai/p/9767954.html