Offline deployment of Kubernetes 1.9.0

Original article: http://blog.51cto.com/ylw6006/2071885
This article walks through an offline deployment of Kubernetes 1.9.0 using kubeadm; 1.9 was the last release of 2017. kubeadm is the automated deployment tool officially recommended by Kubernetes: it deploys the Kubernetes components as pods on the master and node hosts and automatically takes care of steps such as certificate generation.








Environment:

Master node:
Hostname: vm1
IP address: 192.168.115.5/24

Node:
Hostname: vm2
IP address: 192.168.115.6/24

Software versions:
kubernetes: v1.9.0
docker: 17.03
etcd: 3.1.10
pause: 3.0
flannel: v0.9.1
kubernetes-dashboard: v1.8.1


Before starting, uninstall the k8s 1.5.2, docker, etcd and related packages configured in the previous article.


# yum -y remove kubernetes*  docker* docker-selinux etcd
By default kubeadm pulls its images from Google's registry, so for an offline deployment we import the pre-downloaded image files locally instead.
Download link: https://pan.baidu.com/s/1c2O1gIW
Extraction code: 9s92
MD5: b60ad6a638eda472b8ddcfa9006315ee
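Before importing, the downloaded bundle can be checked against the MD5 above (the filename matches the one extracted later in this article):

# md5sum k8s_images.tar.bz2

The printed checksum should match the MD5 above.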
I. Environment preparation (run on both master and node)
1. Set up SSH key trust between vm1 and vm2


On vm1:
# ssh-keygen
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@vm2
# scp -rp k8s_images vm2:/root


On vm2:
# ssh-keygen
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@vm1
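A quick check that the key-based trust works in both directions (run the first command on vm1, the second on vm2); each should print the peer's hostname without asking for a password:

# ssh vm2 hostname
# ssh vm1 hostname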
2. Disable the firewall and SELinux, and set kernel parameters


# systemctl stop firewalld && systemctl disable firewalld 
# getenforce 
Disabled
# echo "
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
" >> /etc/sysctl.conf
# sysctl -p
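If sysctl -p reports that the net.bridge keys are unknown, the br_netfilter kernel module is likely not loaded yet (common on a fresh CentOS 7 host); load it and retry:

# modprobe br_netfilter
# sysctl -p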
SELinux is disabled mainly so that containers are allowed to access the host filesystem, which the pod networks require. The kernel parameters are set to avoid routing anomalies on RHEL/CentOS 7: some users have reported traffic being routed incorrectly because iptables was bypassed.
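If getenforce reports Enforcing rather than Disabled, the usual way to turn SELinux off on RHEL/CentOS 7 is:

# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

The sed edit makes the change persist across reboots.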
II. Install docker 17.03.2-ce and import the image files (run on both master and node)


# yum install bzip2
# tar -xjvf k8s_images.tar.bz2 
# cd k8s_images
# yum -y localinstall docker-ce-*


# systemctl start docker && systemctl enable docker
# docker version


# cd k8s_images/docker_images/
# for i in $(ls *.tar);do docker load < $i;done
# cd ..
# docker load < kubernetes-dashboard_v1.8.1.tar 
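A quick check that all bundled images were imported; the list should include the etcd, pause, flannel, kube-* and dashboard images:

# docker images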


III. Install the Kubernetes 1.9.0 packages (run on both master and node)


# cd /root/k8s_images/
# rpm -ivh socat-1.7.3.2-2.el7.x86_64.rpm
# rpm -ivh kubernetes-cni-0.6.0-0.x86_64.rpm \
kubelet-1.9.0-0.x86_64.rpm \
kubectl-1.9.0-0.x86_64.rpm
# rpm -ivh kubeadm-1.9.0-0.x86_64.rpm
# rpm -qa |grep kube
# rpm -qa |grep socat


# systemctl enable kubelet && systemctl start kubelet
The kubelet's default cgroup driver differs from docker's: docker uses cgroupfs while the kubelet package defaults to systemd, so the two must be made consistent (here we switch the kubelet to cgroupfs). When deploying k8s 1.9 on virtual machines, the operating system's swap must also be turned off.
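One way to flip the driver, assuming 10-kubeadm.conf still contains the systemd default shipped with the RPM (the grep below verifies the result):

# sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf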


# swapoff -a
# grep -i 'cgroupfs' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
# systemctl daemon-reload
IV. Initialize the master with kubeadm (run on master only)

Kubernetes supports multiple network plugins such as flannel, weave, and calico. Here we use flannel, which requires the --pod-network-cidr flag. 10.244.0.0/16 is the default network configured in the kube-flannel.yml file; if you want a different range, --pod-network-cidr and kube-flannel.yml must be kept consistent.
# kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16 


[init] Using Kubernetes version: v1.9.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
        [WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Using existing up-to-date KubeConfig file: "admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 18.518775 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node vm1 as master by adding a label and a taint
[markmaster] Master vm1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 20049e.19abe8bacc412b0a
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy


Your Kubernetes master has initialized successfully!


To start using your cluster, you need to run the following as a regular user:


  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config


You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/


You can now join any number of machines by running the following on each node
as root:


  kubeadm join --token 20049e.19abe8bacc412b0a 192.168.115.5:6443 --discovery-token-ca-cert-hash sha256:b44f687a629fe0d56a6700f8e6bbee1837190a64baad0ea057070e30c6a28142


# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
# source ~/.bash_profile


# kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T20:55:30Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
If initialization fails and has to be rerun, reset first:


# kubeadm reset
V. Deploy the flannel network plugin on the master


# wget \
https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
# kubectl create -f kube-flannel.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
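Flannel runs as a DaemonSet, so one kube-flannel pod should eventually be running per node; a quick check (the app=flannel label is taken from the v0.9.1 manifest):

# kubectl get pods -n kube-system -l app=flannel -o wide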
VI. Join the node to the cluster (run on vm2)


# swapoff -a
# kubeadm join --token   \
20049e.19abe8bacc412b0a  \
 192.168.115.5:6443  \
--discovery-token-ca-cert-hash  \
sha256:b44f687a629fe0d56a6700f8e6bbee1837190a64baad0ea057070e30c6a28142


[preflight] Running pre-flight checks.
        [WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "192.168.115.5:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.115.5:6443"
[discovery] Requesting info from "https://192.168.115.5:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.115.5:6443"
[discovery] Successfully established connection with API Server "192.168.115.5:6443"


This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.


Run 'kubectl get nodes' on the master to see this node join the cluster.




# grep -i 'cgroupfs' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
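Had the grep shown the systemd default instead, the same one-line fix from step IV would apply before restarting the kubelet:

# sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf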








# systemctl daemon-reload
# systemctl restart kubelet
On the master, verify that the node has joined and that all pods come up:
# kubectl get node
# kubectl get pod --all-namespaces
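Both vm1 and vm2 should reach the Ready state once flannel is running; a simple way to watch until they do:

# kubectl get nodes -w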






VII. Deploy the dashboard


# kubectl create -f kubernetes-dashboard.yaml 
secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
The dashboard supports kubeconfig and token login by default; here we authenticate against the apiserver with basic auth instead.
Create /etc/kubernetes/pki/basic_auth_file to hold the credentials. Each line of the file uses the format password,user,uid.


# echo 'admin,admin,2' > /etc/kubernetes/pki/basic_auth_file
Add basic auth validation to kube-apiserver:


# grep 'auth' /etc/kubernetes/manifests/kube-apiserver.yaml
    - --enable-bootstrap-token-auth=true
    - --authorization-mode=Node,RBAC
    - --basic-auth-file=/etc/kubernetes/pki/basic_auth_file
There is no need to apply the manifest with kubectl: the kubelet watches /etc/kubernetes/manifests and automatically restarts the kube-apiserver static pod when the file changes.
Kubernetes has used the RBAC authorization model since 1.6. The cluster-admin ClusterRole carries full permissions by default, so binding the admin user to cluster-admin with a ClusterRoleBinding gives admin full cluster rights.


# kubectl create clusterrolebinding  \
login-on-dashboard-with-cluster-admin  \
--clusterrole=cluster-admin --user=admin
clusterrolebinding "login-on-dashboard-with-cluster-admin" created


# kubectl get clusterrolebinding/login-on-dashboard-with-cluster-admin -o yaml      


# curl --insecure --basic -u admin:admin https://vm1:6443


Test access from Firefox; because the certificate is self-signed, the browser will warn that it is not trusted.
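With basic auth and the cluster-admin binding in place, the dashboard is typically reached through the apiserver's service proxy URL; presumably this is what the screenshots showed (the service name and https scheme are assumptions based on the standard v1.8.1 manifest):

# curl -k --basic -u admin:admin https://192.168.115.5:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/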





