Installing Kubernetes 1.11.16 on Ubuntu 16

 
1. Pre-install preparation
Disable SELinux, the firewall, and the swap partition (otherwise kubeadm init will fail). These steps are basic, so they are not covered in detail here.
$ vi /etc/sysctl.d/k8s-sysctl.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
$ sysctl -p /etc/sysctl.d/k8s-sysctl.conf
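The preparation steps mentioned above can be sketched as follows; a lab-only sketch that assumes ufw as the firewall frontend (Ubuntu's default; note that stock Ubuntu uses AppArmor rather than SELinux, so there is usually nothing to disable on that front):

```shell
# Lab-only sketch: turn off swap now and across reboots, and stop the firewall.
swapoff -a                           # disable swap immediately (kubeadm init checks this)
sed -i '/ swap / s/^/#/' /etc/fstab  # comment out swap entries so swap stays off after reboot
ufw disable                          # stop Ubuntu's firewall frontend; lab setup only
```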
 
2. Install Docker
# Prerequisites for the Docker apt repository on Ubuntu
apt-get install -y apt-transport-https ca-certificates curl software-properties-common
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
 
# Set up the stable repository
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
 
# Update the apt package index
apt-get update
 
# Install Docker 18.06
apt install -y docker-ce=18.06.0~ce~3-0~ubuntu
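After the install it can help to pin the package and confirm the daemon; a small sketch (apt-mark ships with apt on Ubuntu 16.04):

```shell
apt-mark hold docker-ce    # keep apt upgrades from moving past the validated 18.06 release
systemctl enable docker    # start the daemon on boot
docker version --format '{{.Server.Version}}'   # confirm the running server version
```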
 
3. Install kubeadm, kubelet, and kubectl (the package versions should match the Kubernetes release you are deploying)
apt-get update && apt-get install -y apt-transport-https curl
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
 
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
 
apt-get update
 
apt-get install -y kubelet kubeadm kubectl
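The line above installs whatever is newest in the mirror, which can drift ahead of the images pulled later. A hedged alternative, assuming the target release is v1.13.3 (the version the pull script below uses):

```shell
# Pin all three components to one release and freeze them against future apt upgrades.
apt-get install -y kubelet=1.13.3-00 kubeadm=1.13.3-00 kubectl=1.13.3-00
apt-mark hold kubelet kubeadm kubectl
```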
 
 
4. Configure the cgroup driver used by kubelet on the master node (a small pitfall: in earlier deployments the Docker driver and the kubelet driver happened to match without any configuration, and skipping this step here caused errors later, with nothing about it in the logs)
Run docker info to check the driver, then edit /etc/default/kubelet.
On my machine it is cgroupfs; make sure the two drivers match. One more note, since many documents skip this: the configuration differs slightly between 1.13 and earlier releases. In older versions you edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf instead; kubelet's default driver setting is systemd.
cat /etc/default/kubelet
KUBELET_EXTRA_ARGS=--cgroup-driver=cgroupfs
 
Then kubelet needs to be restarted:
systemctl daemon-reload
systemctl restart kubelet
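A quick way to verify the two drivers actually match (a sketch; the grep pattern assumes the KUBELET_EXTRA_ARGS line shown above):

```shell
# Extract Docker's cgroup driver and the one configured for kubelet, then compare by eye.
docker_driver=$(docker info 2>/dev/null | awk -F': ' '/Cgroup Driver/{print $2}')
kubelet_driver=$(grep -o 'cgroup-driver=[a-z]*' /etc/default/kubelet | cut -d= -f2)
echo "docker=${docker_driver} kubelet=${kubelet_driver}"   # the two values must be identical
```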
 
5. Pull the required images on every node
root@ubuntu:~# vim /root/pull.sh
#!/bin/bash
Tar_List="k8s.gcr.io/kube-apiserver:v1.13.3
k8s.gcr.io/kube-controller-manager:v1.13.3
k8s.gcr.io/kube-proxy:v1.13.3
k8s.gcr.io/kube-scheduler:v1.13.3
k8s.gcr.io/coredns:1.2.6
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/pause:3.1
gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3"
 
docker pull quay.io/coreos/flannel:v0.10.0-amd64
docker pull quay.io/coreos/flannel:v0.10.0-arm
 
Tar_ali_Begin=registry.cn-hangzhou.aliyuncs.com/google_containers
 
for Project in ${Tar_List}; do
    echo "pulling ${Project}"
    # keep only the name:tag part, dropping the registry prefix
    Tar=$(echo ${Project} | awk -F/ '{print $NF}')
    # pull from the Aliyun mirror, retag under the original k8s.gcr.io name, drop the mirror tag
    docker pull ${Tar_ali_Begin}/${Tar}
    docker tag ${Tar_ali_Begin}/${Tar} ${Project}
    docker rmi ${Tar_ali_Begin}/${Tar}
done
 
$ hostnamectl set-hostname `hostname`
$ /bin/bash /root/pull.sh
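After the script runs, a quick check that everything landed under the expected names (a sketch):

```shell
# List the retagged images; each entry from Tar_List plus the flannel images should appear.
docker images | grep -E 'k8s\.gcr\.io|quay\.io/coreos/flannel'
```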
 
8. Initialize with kubeadm init
kubeadm init --pod-network-cidr=10.244.0.0/16
 
Once initialization succeeds,
save the token so nodes can be joined later:
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
kubeadm token list   (lists the generated tokens)
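Bootstrap tokens expire (24 hours by default), so the saved join line can go stale. If that happens, a fresh one can be printed on the master:

```shell
# Create a new token and print the full join command, including the CA cert hash.
kubeadm token create --print-join-command
```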
 
Run the following commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
# Deploy the Flannel network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
 
Check the node status:
kubectl get node
 
kubectl get pod --all-namespaces
At this point the master node installation is complete. If a pod is not in the Running state, check the system logs or run kubectl describe pod <pod-name> -n kube-system.
 
9. Copy the certificate files from the first master node to the remaining master nodes (the cluster's etcd here runs over HTTP without certificates)
USER=root # customizable
CONTROL_PLANE_IPS="172.16.10.2 172.16.10.3"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    #scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    #scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
    scp /etc/kubernetes/admin.conf "${USER}"@$host:
done
On the other master nodes:
USER=root # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /home/${USER}/ca.crt /etc/kubernetes/pki/
mv /home/${USER}/ca.key /etc/kubernetes/pki/
mv /home/${USER}/sa.pub /etc/kubernetes/pki/
mv /home/${USER}/sa.key /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf
 
Run kubeadm join to bring the node up:
kubeadm join 172.16.10.210:6443 --token r2tshb.svfb47rz7fdg874p --discovery-token-ca-cert-hash sha256:b2721cafb1e3979b19040aeebc544277998e15b683c93ee7360b28d363473c9a
 
Repeat the steps above on the remaining master nodes.
 
To add worker nodes to the cluster, run kubeadm join on each of them. If an error occurs and you want to redo the initialization, run kubeadm reset first.
 
 
# Deploy the Dashboard
cd kubernetes-dashboard/
kubectl apply -f kubernetes-dashboard.yaml
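Dashboard versions of this era ask for a token at login. One common way to obtain an admin token, sketched here with a hypothetical account name dashboard-admin:

```shell
# Create an admin ServiceAccount and bind it to cluster-admin (lab use; overly broad for production).
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:dashboard-admin
```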
 
 
## Allow the master to run pods as well (by default the master is tainted so pods are not scheduled on it)
kubectl taint node master1 node-role.kubernetes.io/master-
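To confirm the taint was removed (assuming the master node is named master1, as in the command above):

```shell
# After removal the Taints field should read <none>.
kubectl describe node master1 | grep -i taints
```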


Reposted from www.cnblogs.com/yunweiadmin/p/10441705.html