Manually deploying kubernetes-1.17.0 (without the etcd cluster and Flannel)

K8S base environment (deploying with kubeadm or another one-click tool is recommended; building by hand deepens understanding):
(1) ETCD cluster deployment
(2) Installing and configuring Flannel and Docker
(3) Manually deploying kubernetes-1.17.0
(4) K8S Hello World
(5) K8S Traefik ingress and Nginx ingress
(6) K8S official dashboard



IP address      Label    Components
192.168.1.54    master   apiserver, scheduler, controller-manager, etcd, docker, flannel
192.168.1.65    node     kubelet, kube-proxy, docker, flannel
192.168.1.105   node     kubelet, kube-proxy, docker, flannel

Environment initialization

Download the files

A note on the three tarballs below: kubernetes-server contains all the files in kubernetes-node, and kubernetes-node contains all the files in kubernetes-client, so kubernetes-server is the most complete package. The other two exist so that, when you only need the kubernetes-node or kubernetes-client files, you don't have to download the full kubernetes-server package; download them only if you actually need them.

wget https://dl.k8s.io/v1.17.0/kubernetes-server-linux-amd64.tar.gz
wget https://dl.k8s.io/v1.17.0/kubernetes-node-linux-amd64.tar.gz
wget https://dl.k8s.io/v1.17.0/kubernetes-client-linux-amd64.tar.gz
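If you want to confirm the nesting relationship described above yourself, you can list each archive's contents without unpacking it:

tar -tzf kubernetes-server-linux-amd64.tar.gz | head -20
tar -tzf kubernetes-node-linux-amd64.tar.gz | head -20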

Disable the firewall and SELinux

systemctl stop firewalld
systemctl disable firewalld

setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux

Disable swap and configure Docker-related kernel parameters

swapoff -a
# comment out the swap line in /etc/fstab so swap stays disabled after reboot
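# one way to do that (a sketch; check your fstab's formatting first):
sed -ri '/\sswap\s/s/^/#/' /etc/fstab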
echo 0 > /proc/sys/vm/swappiness # make swappiness=0 take effect immediately
cat >  /etc/sysctl.d/k8s.conf <<-EOF
vm.swappiness = 0
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
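# the net.bridge.* settings above require the br_netfilter kernel module
modprobe br_netfilter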
sysctl -p /etc/sysctl.d/k8s.conf # apply the configuration (a bare "sysctl -p" only reads /etc/sysctl.conf)

Set up cfssl for creating certificates (if the host has no Internet access, download these files elsewhere and copy them over)

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
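A quick sanity check that the binaries are installed and on the PATH:

cfssl version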

Create the k8s CA certificate

mkdir -p /etc/kubernetes/ssl && cd $_
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
cat << EOF | tee ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Generate the certificate and private key
This produces the files the CA needs, ca-key.pem (private key) and ca.pem (certificate), plus ca.csr (a certificate signing request, used for cross-signing or re-signing)

cfssl gencert -initca ca-csr.json | cfssljson -bare ca - && ll
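The cfssl-certinfo tool installed earlier can be used to inspect the certificate just generated:

cfssl-certinfo -cert ca.pem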

Create the etcd server certificate (this etcd certificate is not used in this article, so the step was not executed; feel free to skip it)

cat << EOF | tee etcd-csr.json
{
    "CN": "etcd",
    "hosts": [
    "192.168.1.54",
    "192.168.1.65",
    "192.168.1.105"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

Generate the apiserver certificate and private key
Note that hosts must list the apiserver host's IP address as well as 10.96.0.1, the first address of the service-cluster-ip-range defined for the cluster later

cat << EOF | tee server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "192.168.1.54",
      "127.0.0.1",
      "10.96.0.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
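To double-check that every entry from the hosts list made it into the certificate's Subject Alternative Names:

openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'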

Create the kube-proxy certificate and private key

cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Install the kube-apiserver service

# unpack the downloaded binary tarball kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/soft/k8s && cd $_
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp -p kube-apiserver /usr/bin/
mkdir -p /etc/kubernetes && mkdir -p /var/log/kubernetes

Generate the token.csv file

echo "`head -c 16 /dev/urandom | od -An -t x | tr -d ' '`,kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /etc/kubernetes/token.csv
cat /etc/kubernetes/token.csv
# contents of the generated token.csv
7a348d935970b45991367f8f02081535,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

Create the apiserver config file, setting the etcd endpoints in it

cat > /etc/kubernetes/apiserver <<-EOF
KUBE_API_OPTS="--etcd-servers=https://192.168.1.54:2379,https://192.168.1.65:2379,https://192.168.1.105:2379 \
--service-cluster-ip-range=10.96.0.0/24 \
--service-node-port-range=30000-32767 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota  \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--token-auth-file=/etc/kubernetes/token.csv \
--v=2 \
--etcd-cafile=/opt/soft/etcd/ssl/ca.pem \
--etcd-certfile=/opt/soft/etcd/ssl/server.pem \
--etcd-keyfile=/opt/soft/etcd/ssl/server-key.pem \
--tls-cert-file=/etc/kubernetes/ssl/server.pem  \
--tls-private-key-file=/etc/kubernetes/ssl/server-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--allow-privileged=true"
EOF

Configure the kube-apiserver systemd unit

cat > /usr/lib/systemd/system/kube-apiserver.service <<-EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
 
[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \$KUBE_API_OPTS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Start the service

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

Verify it is running

ps -ef |grep kube-apiserver
systemctl status kube-apiserver
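Beyond the process check, you can query the apiserver's health endpoint directly. This assumes the default RBAC bindings, which allow unauthenticated access to /healthz; it should print ok:

curl -k https://127.0.0.1:6443/healthz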

Install the kube-controller-manager service

Copy the binary

cd /opt/soft/k8s/kubernetes && cp -p server/bin/kube-controller-manager /usr/bin

Configure the startup options

cat > /etc/kubernetes/controller-manager <<-EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=2 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--bind-address=127.0.0.1 \
--service-cluster-ip-range=10.96.0.0/24 \
--cluster-name=kubernetes \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem"
EOF

Configure the systemd unit

cat << EOF | tee /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
 
[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
EOF

Start the service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

Verify it is running

ps -ef |grep kube-controller-manager
systemctl status kube-controller-manager

Install the kube-scheduler service

Copy the binary

cd /opt/soft/k8s/kubernetes && cp -p server/bin/kube-scheduler /usr/bin

Configure the startup options

cat << EOF | tee /etc/kubernetes/scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=2 --master=127.0.0.1:8080 --leader-elect"
EOF

Configure the systemd unit

cat > /usr/lib/systemd/system/kube-scheduler.service <<-EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target
EOF

Start the service

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

Verify it is running

ps -ef |grep kube-scheduler
systemctl status kube-scheduler

Configure environment variables

vim /etc/profile
# add the k8s binaries to PATH
export PATH=/opt/soft/k8s/kubernetes/server/bin:$PATH
# reload the profile
source /etc/profile

Check the cluster status

[root@server1 kubernetes]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok                  
componentstatus/controller-manager   Healthy   ok                  
componentstatus/etcd-1               Healthy   {"health":"true"}   
componentstatus/etcd-2               Healthy   {"health":"true"}   
componentstatus/etcd-0               Healthy   {"health":"true"} 

PS: the cs in the command is short for componentstatus


Deploy the nodes

Copy the kubelet and kube-proxy binaries from the tarball into /usr/bin on the nodes

cd /opt/soft/k8s/kubernetes/server/bin
scp kubelet kube-proxy [email protected]:/usr/bin
scp kubelet kube-proxy [email protected]:/usr/bin

Create the script file environment.sh (still working on the master)

mkdir -p /etc/kubernetes-node && cd $_ && touch environment.sh

Contents of environment.sh:

# the BOOTSTRAP_TOKEN value comes from /etc/kubernetes/token.csv
BOOTSTRAP_TOKEN=7a348d935970b45991367f8f02081535

# KUBE_APISERVER runs on the master, so this is the master's IP address
KUBE_APISERVER="https://192.168.1.54:6443"

# Create the kubelet bootstrapping kubeconfig
 
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
 
# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
 
# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
 
# use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# Create the kube-proxy kubeconfig
 
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Run environment.sh to generate the config files bootstrap.kubeconfig and kube-proxy.kubeconfig

sh environment.sh && ll
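To confirm the generated file points at the right cluster (the embedded certificate data is redacted in the output):

kubectl config view --kubeconfig=bootstrap.kubeconfig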

Create the kubelet parameters config file

cat << EOF | tee kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.1.54
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.96.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF
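The cgroupDriver above must match the cgroup driver Docker actually uses, or the kubelet will fail to register; check it with:

docker info 2>/dev/null | grep -i 'cgroup driver'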

Create the kubelet options file
Note: you may be unable to pull the k8s.gcr.io/pause:3.1 image referenced below because of the firewall. For an offline install you need to import this image onto each host manually, or you can consider using docker.io/xzxiaoshan/pause:3.1 instead (see the sketch after the file below).

cat << EOF | tee kubelet
KUBELET_OPTS="--logtostderr=true \
--v=2 \
--hostname-override=192.168.1.54 \
--kubeconfig=/etc/kubernetes-node/kubelet.kubeconfig \
--bootstrap-kubeconfig=/etc/kubernetes-node/bootstrap.kubeconfig \
--config=/etc/kubernetes-node/kubelet.config \
--cert-dir=/etc/kubernetes-node/ssl \
--pod-infra-container-image=k8s.gcr.io/pause:3.1"
EOF
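One way to work around the blocked registry (a sketch using the mirror image mentioned above): pull from the mirror, re-tag it to the name the kubelet expects, and, for fully offline nodes, move it as a tar archive:

docker pull docker.io/xzxiaoshan/pause:3.1
docker tag docker.io/xzxiaoshan/pause:3.1 k8s.gcr.io/pause:3.1
# for offline nodes: save on a connected machine, copy the tar over, then load
docker save k8s.gcr.io/pause:3.1 -o pause-3.1.tar
docker load -i pause-3.1.tar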

Create the kubelet.service file

cat << EOF | tee /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
 
[Service]
EnvironmentFile=/etc/kubernetes-node/kubelet
ExecStart=/usr/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process
 
[Install]
WantedBy=multi-user.target
EOF

Create the kube-proxy-config.yaml file

cat << EOF | tee /etc/kubernetes-node/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes-node/kube-proxy.kubeconfig"
  qps: 100
bindAddress: 192.168.1.54
healthzBindAddress: 192.168.1.54:10256
metricsBindAddress: 192.168.1.54:10249
enableProfiling: true
clusterCIDR: 10.244.0.0/16
hostnameOverride: 192.168.1.54
mode: "ipvs"
portRange: ""
kubeProxyIPTablesConfiguration:
  masqueradeAll: false
kubeProxyIPVSConfiguration:
  scheduler: rr
  excludeCIDRs: []
EOF
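Note that mode: "ipvs" requires the IPVS kernel modules on every node; if they are missing, kube-proxy falls back to iptables mode. On CentOS 7 they can typically be loaded like this (module names can vary with kernel version):

modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4
yum install -y ipvsadm ipset  # userspace tools, handy for inspecting IPVS rules later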

Create the kube-proxy.service file

cat << EOF | tee /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
 
[Service]
ExecStart=/usr/bin/kube-proxy \\
  --config=/etc/kubernetes-node/kube-proxy-config.yaml \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
 
[Install]
WantedBy=multi-user.target
EOF

At this point, the directory /etc/kubernetes-node contains the following files

ll /etc/kubernetes-node
bootstrap.kubeconfig  environment.sh  kubelet  kubelet.config  kube-proxy.kubeconfig  kube-proxy-config.yaml

Copy the files to the nodes

# copy the directory /etc/kubernetes-node to /etc/ on each node
scp -r /etc/kubernetes-node [email protected]:/etc/
scp -r /etc/kubernetes-node [email protected]:/etc/
# copy kubelet.service and kube-proxy.service to /usr/lib/systemd/system/ on each node
scp /usr/lib/systemd/system/{kubelet.service,kube-proxy.service} [email protected]:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/{kubelet.service,kube-proxy.service} [email protected]:/usr/lib/systemd/system/

Adjust the address and hostname-override settings on the nodes

On each node, edit the following files:
set address in /etc/kubernetes-node/kubelet.config to the node's own IP address
set hostname-override in /etc/kubernetes-node/kubelet to the node's own IP address
set the IP addresses in /etc/kubernetes-node/kube-proxy-config.yaml to the node's own IP address as well
PS: on the node itself you can replace them all at once with sed -i "s/192.168.1.54/192.168.1.65/g" /etc/kubernetes-node/{kubelet.config,kubelet,kube-proxy-config.yaml}, or script it from the master as sketched below
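A sketch for running that substitution on both nodes from the master over SSH (assumes root SSH access; each node's own IP is substituted for the master's):

for n in 192.168.1.65 192.168.1.105; do
  ssh root@$n "sed -i 's/192.168.1.54/$n/g' /etc/kubernetes-node/{kubelet.config,kubelet,kube-proxy-config.yaml}"
done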

On the master, bind the kubelet-bootstrap user to the system cluster role

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
# the command prints the following line
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

Start the kubelet service on every node

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

Verify the service status

ps -ef|grep kubelet
systemctl status kubelet

Approve the kubelet CSR requests; all of the kubectl operations below are executed on the master

# after a node's kubelet starts, it automatically sends a join request to the master; every entry shown as Pending in the CSR list is waiting for the master's approval
[root@server1 ~]# kubectl get csr
NAME                                                   AGE         REQUESTOR           CONDITION
node-csr-FyLBxx1d5NsScmsd9H8jHOEy4qnKR9IzddbeDKq1KmA   47s         kubelet-bootstrap   Pending
node-csr-KgAYOBy3eXgDYAQi-44ElYOo6pnNqUEQuiIIKBoMcg8   67s         kubelet-bootstrap   Pending

# manually approve the CSR requests (the argument is the NAME from the kubectl get csr output)
[root@server1 ~]# kubectl certificate approve node-csr-FyLBxx1d5NsScmsd9H8jHOEy4qnKR9IzddbeDKq1KmA
certificatesigningrequest.certificates.k8s.io/node-csr-FyLBxx1d5NsScmsd9H8jHOEy4qnKR9IzddbeDKq1KmA approved
[root@server1 ~]# kubectl certificate approve node-csr-KgAYOBy3eXgDYAQi-44ElYOo6pnNqUEQuiIIKBoMcg8
certificatesigningrequest.certificates.k8s.io/node-csr-KgAYOBy3eXgDYAQi-44ElYOo6pnNqUEQuiIIKBoMcg8 approved

# query the CSRs again; the CONDITION has changed
[root@server1 ~]# kubectl get csr
NAME                                                   AGE         REQUESTOR           CONDITION
node-csr-FyLBxx1d5NsScmsd9H8jHOEy4qnKR9IzddbeDKq1KmA   3m20s       kubelet-bootstrap   Approved,Issued
node-csr-KgAYOBy3eXgDYAQi-44ElYOo6pnNqUEQuiIIKBoMcg8   3m40s       kubelet-bootstrap   Approved,Issued
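When many nodes join at once, approving CSRs one by one gets tedious; a common shortcut is to approve everything still pending (review the list first):

kubectl get csr | awk '/Pending/ {print $1}' | xargs -r kubectl certificate approve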

Check the nodes

[root@cib-server1 ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
192.168.1.105   Ready    <none>   2m16s   v1.17.0
192.168.1.65    Ready    <none>   113s    v1.17.0

Label the nodes
For example, label 192.168.1.54 as master and the other nodes as node (any names you need will do)

kubectl label node 192.168.1.54  node-role.kubernetes.io/master='master'
kubectl label node 192.168.1.65  node-role.kubernetes.io/node='node'
kubectl label node 192.168.1.105 node-role.kubernetes.io/node='node'

Then run kubectl get nodes again and the ROLES column will be populated.


Start the kube-proxy service on every node

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

Verify the service status

ps -ef|grep kube-proxy
systemctl status kube-proxy
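Because kube-proxy runs in ipvs mode here, you can also confirm that virtual servers were programmed (uses the ipvsadm tool mentioned earlier):

ipvsadm -Ln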

At this point, the single-master plus multi-node deployment is complete.
Everything in this article was verified on both RedHat 7.4 and CentOS 7.6.


(END)


Reposted from blog.csdn.net/catoop/article/details/104837634