Deploying a Kubernetes Cluster with kubeadm

Version

  • OS: ubuntu 18.04 64bit
  • docker: 18.06.1
  • kubeadm: 1.12.2-00
  • kubectl: 1.12.2-00
  • kubelet: 1.12.2-00
  • kubernetes: v1.12.2
  • dashboard: v1.10.0
  • weave: 2.5.0
  • pause: 3.1
  • coredns: 1.2.2
  • etcd: 3.2.24

Note: the following must be installed on every server (all master and worker nodes):

  • docker
  • kubeadm
  • kubelet
  • kubectl

Install Docker

apt-get install -y apt-transport-https ca-certificates curl software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

apt-get update

// Install a specific version of docker-ce
// List available versions: apt-cache madison docker-ce
apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu
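
As an optional sanity check (not in the original post), confirm the installed version and make sure the Docker service is enabled at boot:

// verify the Docker version and enable the service at boot
docker --version
systemctl enable docker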

Install Kubeadm

// https://kubernetes.io/docs/setup/independent/install-kubeadm/

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

Alternatively, use the Aliyun Kubernetes mirror

// Add the Aliyun Kubernetes apt mirror
apt-get update && apt-get install -y apt-transport-https curl

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

apt-get update

apt-get install -y kubeadm=1.12.2-00 kubectl=1.12.2-00 kubelet=1.12.2-00

apt-mark hold kubelet kubeadm kubectl

During the kubeadm installation above, the kubeadm, kubectl, kubelet, and kubernetes-cni binaries are all installed automatically.

dpkg -l |grep kube
hi  kubeadm                               1.12.2-00                          amd64        Kubernetes Cluster Bootstrapping Tool
hi  kubectl                               1.12.2-00                          amd64        Kubernetes Command Line Tool
hi  kubelet                               1.12.2-00                          amd64        Kubernetes Node Agent
ii  kubernetes-cni                        0.6.0-00                           amd64        Kubernetes CNI

Enable kubectl command auto-completion

source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Disable swap

swapoff -a
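
swapoff -a only disables swap until the next reboot. To keep it off permanently, one common approach (an assumption, not in the original post) is to comment out the swap entry in /etc/fstab, for example:

// comment out any swap line in /etc/fstab so swap stays disabled after a reboot
sed -i '/ swap / s/^/#/' /etc/fstab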

Deploy the Kubernetes Master Node

Reference note: errors during kubeadm init

kubeadm init --kubernetes-version=v1.12.2
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
  
// Note: see the section "Install the network plugin: weave" below
  
You can now join any number of machines by running the following on each node
as root:

 kubeadm join 192.168.3.200:6443 --token 02x9mf.bqmy6orso9ka4xj9 --discovery-token-ca-cert-hash sha256:1dde890e407fcfdbbd54ee889a969b67a4a9650ee0a49b02d0aa41bb4d404213
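
If the master host has more than one network interface, the address the API server advertises can be pinned explicitly. A hedged variant of the init command above, using this post's example master IP:

kubeadm init --kubernetes-version=v1.12.2 --apiserver-advertise-address=192.168.3.200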

Install the network plugin: weave

See the weave documentation.

Install command:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Alternatively, first save the manifest to /etc/kubernetes/weave.conf and then install it as follows:

// step 1
wget -O /etc/kubernetes/weave.conf "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

// step 2
kubectl apply -f /etc/kubernetes/weave.conf

Check pod status

kubectl get pod --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-cb6pb             1/1     Running   0          9m22s
kube-system   coredns-576cbf47c7-sh82r             1/1     Running   0          9m22s
kube-system   etcd-k8s-master                      1/1     Running   0          72s
kube-system   kube-apiserver-k8s-master            1/1     Running   0          73s
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          72s
kube-system   kube-proxy-6682z                     1/1     Running   0          9m22s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          66s
kube-system   weave-net-cnrqv                      2/2     Running   0          100s

Check node status

kubectl get node
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   8m14s   v1.12.2

Deploy the Kubernetes Worker Nodes

The difference from the Master node is that during kubeadm init, after the kubelet starts, the Master node also automatically runs three system Pods: kube-apiserver, kube-scheduler, and kube-controller-manager.

To deploy a worker node:

Step 1: On all Worker nodes, carry out every step from the "Install Docker" and "Install Kubeadm" sections.

Step 2: Run the kubeadm join command that was generated when the Master node was deployed:

kubeadm join 192.168.3.200:6443 --token 02x9mf.bqmy6orso9ka4xj9 --discovery-token-ca-cert-hash sha256:1dde890e407fcfdbbd54ee889a969b67a4a9650ee0a49b02d0aa41bb4d404213
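
The bootstrap token expires after 24 hours by default. If it has already expired, print a fresh join command on the master with:

kubeadm token create --print-join-command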

Then check the node status on the master node:

kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   55m   v1.12.2
k8s-node01   Ready    <none>   93s   v1.12.2

Deploy the Kubernetes Dashboard (Master node)

Prepare the dashboard image

docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0
docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker image rm mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0

Download and edit kubernetes-dashboard.yaml

cd /etc/kubernetes
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

vim kubernetes-dashboard.yaml

Make the following changes:

// 1. Set the image pull policy (so the locally tagged image is used):
...
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
        imagePullPolicy: Never  # added
...

// 2. Configure the ports (Service section):
spec:
  type: NodePort  # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001  # added

Create the dashboard

kubectl apply -f kubernetes-dashboard.yaml

Check the dashboard pod status

kubectl -n kube-system get pod
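
To confirm that the NodePort configured above took effect, the Service can also be checked (an optional step, not in the original post):

kubectl -n kube-system get svc kubernetes-dashboard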

Create an admin user

Create the file /etc/kubernetes/kubernetes-dashboard-adminUser.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Create the admin-user

kubectl create -f kubernetes-dashboard-adminUser.yaml

Get the admin user's token

kubectl describe  secret admin-user --namespace=kube-system
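
To print only the token value itself (for pasting into the kubeconfig below), a possible one-liner, assuming the admin-user ServiceAccount created above:

kubectl -n kube-system get secret $(kubectl -n kube-system get sa admin-user -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode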

Add the token to the kubeconfig file

vim ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxxxxxxx
    server: https://192.168.3.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: xxxxxxxx
    client-key-data: xxxxxxxx
    token: xxxxxxxx  # added

Import the kubeconfig file

Open https://192.168.3.200:30001 in a browser and import the config file ~/.kube/config to log in.

At this point, the Kubernetes dashboard setup is complete.

Deploy the container storage plugin: Rook (Master node)

The Rook project is a Ceph-based Kubernetes storage plugin. It provides a large set of enterprise features such as horizontal scaling, migration, disaster backup, and monitoring, which makes it a complete, production-grade container storage plugin.

Create the Rook operator

kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/operator.yaml

Wait until all pods in the rook-ceph-system namespace are in the Running state:

// kubectl get namespaces

kubectl -n rook-ceph-system get pod
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-agent-m7dwt                 1/1     Running   0          18s
rook-ceph-operator-7bbc5b99b8-hbnpd   1/1     Running   0          105s
rook-discover-4lj8m                   1/1     Running   0          18s

Create the Rook Cluster

kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/cluster.yaml

Confirm the result

// kubectl get namespaces

kubectl -n rook-ceph get pod
NAME                                     READY   STATUS      RESTARTS   AGE
rook-ceph-mgr-a-558fbdbb5-8zcdf          1/1     Running     0          3m29s
rook-ceph-mon-a-6c9c4f85cb-7fj2g         1/1     Running     0          4m8s
rook-ceph-mon-b-845976fccc-2gg67         1/1     Running     0          3m57s
rook-ceph-mon-c-5fd6678f46-ffdfr         1/1     Running     0          3m42s
rook-ceph-osd-0-6c7fbc859b-jv6qc         1/1     Running     0          3m7s
rook-ceph-osd-prepare-k8s-node01-6x4sb   0/2     Completed   0          3m19s

Rook places all of its pods in namespaces that it manages itself.

At this point, the Rook-based persistent storage cluster is deployed.

Any pod subsequently created in Kubernetes can mount data volumes provided by Ceph inside its containers via PVs and PVCs.

The Rook project takes care of operational tasks such as lifecycle management and disaster backup for these data volumes.
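
As a rough sketch of how such a volume could be requested (not part of the original post; the StorageClass name rook-ceph-block is an assumption and must match a StorageClass created from Rook's examples):

// hypothetical PVC bound to an assumed Rook/Ceph StorageClass named "rook-ceph-block"
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF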

END.
