Binary Kubernetes installation - 0.7 Node: installing kubelet, kube-proxy, and CNI plugins


Create the node-related directories

mkdir -p /data/k8s/{kubelet,kube-proxy,cni,bin,cert}
mkdir -p /data/k8s/cni/net.d/

Download the kubelet and kube-proxy binaries and the basic CNI plugins into /data/k8s/bin

[root@node bin]# ls
bridge  host-local  kubelet  kube-proxy  loopback
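For reference, the binaries can be fetched roughly as follows; this is only a sketch, and the versions and download URLs are assumptions to adjust to your release and mirror:

cd /data/k8s/bin
# kubelet and kube-proxy from the Kubernetes node tarball (version assumed, e.g. v1.15.6 as shown at the end of this post)
wget https://dl.k8s.io/v1.15.6/kubernetes-node-linux-amd64.tar.gz
tar xf kubernetes-node-linux-amd64.tar.gz
cp kubernetes/node/bin/{kubelet,kube-proxy} /data/k8s/bin/
# basic CNI plugins (bridge, host-local, loopback) from a containernetworking/plugins release (version assumed)
wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
tar xf cni-plugins-linux-amd64-v0.8.6.tgz -C /data/k8s/bin/ ./bridge ./host-local ./loopback
chmod +x /data/k8s/bin/*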

Pull the CA files from the master.

[root@node cert]# scp 192.168.100.59:/data/k8s/cert/{ca.pem,ca-key.pem,ca-config.json} /data/k8s/cert/
[root@node cert]# ls
ca-config.json  ca-key.pem  ca.pem




Prepare the CNI configuration file

vim /data/k8s/cni/net.d/10-default.conf

{
    "name": "mynet",
    "cniVersion": "0.3.1",
    "type": "bridge",
    "bridge": "mynet0",
    "isDefaultGateway": true,
    "ipMasq": true,
    "hairpinMode": true,
    "ipam": {
        "type": "host-local",
        "subnet": "{{ pod_cni }}"
    }
}

Note: {{ pod_cni }} is the subnet available to pods; here I set it to 10.244.0.0/16.
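Once the placeholder is substituted, the file must be valid JSON; if Python is available on the node, a quick syntax check is:

python -m json.tool /data/k8s/cni/net.d/10-default.conf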



Enable IPVS

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
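kube-proxy in IPVS mode also relies on the ipset userspace tool, and ipvsadm is handy for inspecting the rules later; on a yum-based system the packages can be installed as follows (package names assumed):

yum install -y ipset ipvsadm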




kubelet configuration

Below, the kubelet.kubeconfig file is generated directly on the master and then copied to the corresponding node.

Prepare the kubelet certificate signing request

Run on the master:
mkdir -p /data/k8s/node/100.60
vim /data/k8s/node/100.60/kubelet-csr.json

{
  "CN": "system:node:192.168.100.60",
  "hosts": [
    "127.0.0.1",
    "192.168.100.60",
    "node01"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "SiChuan",
      "L": "ChengDu",
      "O": "system:nodes",
      "OU": "Lswzw"
    }
  ]
}

Note:

  • The IP above must be replaced with the node host's IP.
  • The node's hostname must be included in the certificate; otherwise kubectl logs fails because the certificate does not authorize the node.
Create the kubelet certificate and private key

cd /data/k8s/node/100.60

cfssl gencert \
  -ca=/data/k8s/cert/ca.pem \
  -ca-key=/data/k8s/cert/ca-key.pem \
  -config=/data/k8s/cert/ca-config.json \
  -profile=kubernetes kubelet-csr.json | cfssljson -bare kubelet
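Before distributing the certificate, its subject and SANs can be checked to confirm the CN is system:node:<node_ip>, the organization is system:nodes, and the hosts listed above are present:

openssl x509 -in kubelet.pem -noout -text | grep -E 'Subject:|DNS:|IP Address:'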
Create kubelet.kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/data/k8s/cert/ca.pem \
  --embed-certs=true \
  --server={{ KUBE_APISERVER }} \
  --kubeconfig=kubelet.kubeconfig

Note: {{ KUBE_APISERVER }} here is https://192.168.100.59:6443

Set the client credentials
kubectl config set-credentials system:node:{{ node_ip }} \
  --client-certificate=/data/k8s/node/100.60/kubelet.pem \
  --embed-certs=true \
  --client-key=/data/k8s/node/100.60/kubelet-key.pem \
  --kubeconfig=kubelet.kubeconfig

Note: {{ node_ip }} is the IP of the node being configured; here it is 192.168.100.60.

Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:node:{{ node_ip }} \
  --kubeconfig=kubelet.kubeconfig

Note: {{ node_ip }} is the IP of the node being configured; here it is 192.168.100.60.

Use the default context
kubectl config use-context default \
  --kubeconfig=kubelet.kubeconfig
Copy the kubelet certificate, key, and kubelet.kubeconfig to the corresponding node
scp kubelet.pem 192.168.100.60:/data/k8s/kubelet/
scp kubelet-key.pem 192.168.100.60:/data/k8s/kubelet/
scp kubelet.kubeconfig 192.168.100.60:/data/k8s/kubelet/

Create the RBAC binding for this node's user

This is important; without it the node cannot create pods.
The name at the end must match the user set in "Set the client credentials" above. Apply the manifest on the master as shown after it.
vim node60.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: basic-auth-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:node:192.168.100.60
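
Apply the binding on the master (filename from above):

kubectl apply -f node60.yaml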

... The following steps are performed on the node ...

Create the kubelet configuration file

vim /data/k8s/kubelet/kubelet-config.yaml

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: {{ node_ip }}
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /data/k8s/cert/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs
cgroupsPerQOS: true
clusterDNS:
- 10.44.0.2
clusterDomain: cluster.local.
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 3 
containerLogMaxSize: 10Mi
enforceNodeAllocatable:
- pods
- kube-reserved
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 200Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 40s
hairpinMode: hairpin-veth 
healthzBindAddress: {{ node_ip }}
healthzPort: 10248
httpCheckFrequency: 40s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
kubeReservedCgroup: /system.slice/kubelet.service
kubeReserved: {'cpu':'200m','memory':'500Mi','ephemeral-storage':'1Gi'}
kubeAPIBurst: 100
kubeAPIQPS: 50
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusReportFrequency: 1m0s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
# disable readOnlyPort 
readOnlyPort: 0
resolvConf: /etc/resolv.conf
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
tlsCertFile: /data/k8s/kubelet/kubelet.pem
tlsPrivateKeyFile: /data/k8s/kubelet/kubelet-key.pem

Note: {{ node_ip }} is the current node's IP; here it is 192.168.100.60.

Create the kubelet systemd unit file

vim /etc/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/data/k8s/kubelet
ExecStartPre=/bin/mount -o remount,rw '/sys/fs/cgroup'
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/cpuset/system.slice/kubelet.service
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/hugetlb/system.slice/kubelet.service
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/memory/system.slice/kubelet.service
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/pids/system.slice/kubelet.service
ExecStart=/data/k8s/bin/kubelet \
  --config=/data/k8s/kubelet/kubelet-config.yaml \
  --cni-bin-dir=/data/k8s/bin \
  --cni-conf-dir=/data/k8s/cni/net.d \
  --hostname-override={{ node_name }} \
  --kubeconfig=/data/k8s/kubelet/kubelet.kubeconfig \
  --network-plugin=cni \
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 \
  --root-dir=/data/k8s/kubelet \
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

Note: {{ node_name }} is the name shown by kubectl get node; here it is node01.

Start and enable the kubelet service
systemctl start kubelet
systemctl status kubelet
systemctl enable kubelet
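If systemd does not recognize the new unit on the first start, reload its unit files and start again. Once the service is running, the health endpoint configured above (healthzBindAddress, port 10248) can be checked locally, and the logs followed if the node does not register; the IP below is this node's:

systemctl daemon-reload
curl http://192.168.100.60:10248/healthz
journalctl -u kubelet -f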




kube-proxy configuration

Pull the kube-proxy.kubeconfig file from the master

This file was already generated in part 03.

scp 192.168.100.59:/data/k8s/conf/kube-proxy.kubeconfig /data/k8s/kube-proxy/
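A quick check that the pulled kubeconfig points at the expected apiserver (it should show https://192.168.100.59:6443):

grep 'server:' /data/k8s/kube-proxy/kube-proxy.kubeconfig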
Create the kube-proxy systemd unit file

vim /etc/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/data/k8s/kube-proxy
ExecStart=/data/k8s/bin/kube-proxy \
  --bind-address={{ node_ip }} \
  --cluster-cidr=10.244.0.0/16 \
  --hostname-override={{ node_name }} \
  --kubeconfig=/data/k8s/kube-proxy/kube-proxy.kubeconfig \
  --logtostderr=true \
  --proxy-mode=ipvs
Restart=always
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Note:

  • kube-proxy uses --cluster-cidr to distinguish in-cluster from external traffic; when --cluster-cidr or --masquerade-all is specified, kube-proxy performs SNAT on requests to Service IPs.
  • {{ node_ip }} is the node host's IP; here it is 192.168.100.60.
  • {{ node_name }} is the displayed node name; here it is node01.
Start and enable the kube-proxy service
systemctl start kube-proxy
systemctl status kube-proxy
systemctl enable kube-proxy
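With kube-proxy running in IPVS mode, the virtual servers it programs can be listed with ipvsadm (installed in the IPVS step above); at minimum the kubernetes Service VIP should appear. kube-proxy's own health endpoint listens on port 10256 by default:

ipvsadm -Ln
curl http://127.0.0.1:10256/healthz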




Verify the service on the master

[root@master conf]# kubectl get node
NAME     STATUS   ROLES    AGE   VERSION
node01   Ready    <none>   24m   v1.15.6
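The ROLES column shows <none> because worker nodes get no role label by default; if desired, one can be added from the master (an optional, purely cosmetic step):

kubectl label node node01 node-role.kubernetes.io/worker=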

Reposted from blog.csdn.net/lswzw/article/details/106146370