K8s binary installation: node nodes

Some notes on node deployment

  1. Node deployment does not involve copying certificates; authentication and authorization credentials are generated with kubectl config (every access to the cluster must pass both the authentication and the authorization stage)
  2. The node network can be provided either by Docker's built-in networking or by a CNI network plug-in such as flannel (the difference is that Docker's built-in network can only be used by Docker itself)
  3. Authorization is based on the user name; both kubelet and kube-proxy must be granted roles before they can access resources
  4. After one node is deployed, other nodes can reuse the same set of files; only the node display name has to be changed (--hostname-override=k8s-nodeX), because a node with a duplicate name cannot join the cluster
  5. After the node sends its authentication request, it must be confirmed on the master. If you run into a configuration error or change the configuration, delete all certificate files generated on the node, then delete the request on the master, and finally restart the kubelet service to rejoin the cluster
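Point 5 can be sketched as a small helper. The paths follow the layout used later in this article, and the CSR name is hypothetical; the sketch only prints the commands instead of executing them, so adapt it before use:

```shell
#!/usr/bin/env bash
# Dry-run sketch of resetting a node's join state (point 5 above).
# Paths are assumptions based on this article's layout; CSR name is hypothetical.
run() { echo "+ $*"; }                                # print instead of execute

run systemctl stop kubelet
run rm -f /usr/local/k8s/ssl/kubelet-client*.pem      # certs generated on the node
run rm -f /usr/local/k8s/conf/kubelet.kubeconfig      # auto-generated kubeconfig
# on the master: delete the node's certificate request, then restart kubelet
run kubectl delete csr node-csr-EXAMPLE               # hypothetical CSR name
run systemctl start kubelet
```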

Node components

  • kubelet: manages the lifecycle of containers
  • kube-proxy: writes rules to iptables or IPVS to implement Service mapping (allows access to Pod applications from outside the cluster)
mkdir -p /usr/local/k8s/{conf,logs}
#create the working directory
#copy the kubelet and kube-proxy binaries to /usr/local/bin

Generate authentication files

  • kubelet: token-based authentication
  • kube-proxy: certificate-based authentication

kubelet

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
#generate a random token

cat token.csv 
64c8a4bd1c2e920aa92049044b5197ba,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
#format: token,user name,UID,user group
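The two steps above can be combined into one small script; the file name token.csv, the user name, UID and group follow the format shown above:

```shell
# generate a random 32-hex-character token and write token.csv in the
# format: token,user name,UID,user group
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > token.csv
cat token.csv
```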

kube-proxy

cat kube-proxy-csr.json
{
    "CN": "kube-proxy",
    "hosts": [ ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "SHANGHAI",
            "ST": "SHANGHAI"
        }
    ]
}
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kube-proxy-csr.json | cfssljson -bare /opt/etcd/ssl/k8s/kube-proxy
#just note the name given to -profile; it must match a profile defined in the CA config (ca-config.json)
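The issued certificate can be sanity-checked with openssl. Since the real kube-proxy.pem depends on your CA, the inspection is demonstrated here on a throwaway self-signed certificate with the same CN:

```shell
# create a throwaway self-signed cert just to demonstrate the inspection
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.pem \
  -subj "/CN=kube-proxy" -days 1 2>/dev/null
# for the real file you would run:
#   openssl x509 -in /opt/etcd/ssl/k8s/kube-proxy.pem -noout -subject -dates
openssl x509 -in demo.pem -noout -subject
```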

Generating the kubeconfig files

On the node, the kubeconfig files are what kubelet and kube-proxy use to authenticate to the cluster

kubelet

kubectl config set-cluster k8s-master   --certificate-authority=/usr/local/k8s/ssl/ca.pem   --embed-certs=true   --server=https://192.168.12.2:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
#set the cluster parameters; adjust the CA public key path and the cluster server address

kubectl config set-credentials kubelet-bootstrap --token=64c8a4bd1c2e920aa92049044b5197ba --kubeconfig=kubelet-bootstrap.kubeconfig
#set the client credentials; the token value is the one in the generated token file

kubectl config set-context default --cluster=k8s-master --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
#set the context parameters; multiple cluster/user pairs can be defined, and the context associates a cluster entry with a user entry
#context name: default, cluster name: k8s-master, user used to access the cluster: kubelet-bootstrap

kube-proxy

kubectl config set-cluster k8s-master   --certificate-authority=/usr/local/k8s/ssl/ca.pem   --embed-certs=true   --server=https://192.168.12.2:6443 --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy --client-certificate=/usr/local/k8s/ssl/kube-proxy.pem \
--client-key=/usr/local/k8s/ssl/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
#the difference is that kube-proxy uses certificate authentication

kubectl config set-context default --cluster=k8s-master --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
#user kube-proxy

Copy the generated files to the node

User authorization

The kubeconfig files complete cluster authentication, but the users also need resource authorization before they can properly access the cluster

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

kubectl create clusterrolebinding kube-proxy \
--clusterrole=system:node-proxier \
--user=kube-proxy
#clusterrole means a cluster-wide role, i.e. one covering resources in all namespaces
#the resources granted to the two roles can be inspected with kubectl; system:node-bootstrapper and system:node-proxier are built-in cluster roles

Viewing roles

kubectl get clusterrole
#list all cluster roles

kubectl get clusterrole system:node-bootstrapper -o yaml
#show the YAML of the system:node-bootstrapper role
#verbs are the granted actions: create, get, list, watch

kubectl get clusterrolebinding kube-proxy -o wide
#show details of the kube-proxy clusterrolebinding

The preparation phase is over

kubelet configuration file

cat kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/usr/local/k8s/logs \
--hostname-override=k8s-node1 \
--kubeconfig=/usr/local/k8s/conf/kubelet.kubeconfig \
--bootstrap-kubeconfig=/usr/local/k8s/conf/kubelet-bootstrap.kubeconfig \
--cert-dir=/usr/local/k8s/ssl \
--cluster-dns=10.2.0.2 \
--cluster-domain=cluster.local. \
--fail-swap-on=false \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

kubelet --help

  • --hostname-override: node display name
  • --network-plugin: enable CNI #a third-party CNI needs extra configuration; the default docker network cannot be used here
  • --kubeconfig: an empty path; the file is generated automatically and later used to connect to the apiserver
  • --bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
  • --cert-dir: directory where the kubelet certificates are generated
  • --pod-infra-container-image: image of the container that manages the Pod network

pod-infra-container-image refers to the pause container, which provides the shared network namespace and mounts for the Pod

systemd management

cat /usr/lib/systemd/system/kubelet.service 
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/usr/local/k8s/conf/kubelet.conf
ExecStart=/usr/local/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

systemctl start kubelet.service
#start the service
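As noted at the beginning, after kubelet starts its certificate request still has to be confirmed on the master with kubectl get csr and kubectl certificate approve. The filter for pending requests can be illustrated on sample output (the CSR names below are hypothetical; real names differ per cluster):

```shell
# illustrative 'kubectl get csr' output, used only to demonstrate the filter
csr_output='NAME           AGE   REQUESTOR           CONDITION
node-csr-abc   1m    kubelet-bootstrap   Pending
node-csr-def   5m    kubelet-bootstrap   Approved,Issued'

# pick the names still pending approval
pending=$(echo "$csr_output" | awk 'NR>1 && $NF=="Pending" {print $1}')
echo "$pending"
# on a real cluster you would then run: kubectl certificate approve $pending
```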

kube-proxy configuration

cat  kube-proxy.conf
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/usr/local/k8s/logs \
--config=/usr/local/k8s/conf/kube-proxy-config.yml"

--config points to the file holding the configuration parameters

cat kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /usr/local/k8s/conf/kube-proxy.kubeconfig
hostnameOverride: k8s-node1
clusterCIDR: 10.2.0.0/24
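Following point 4 at the top, the only per-node value in this file is hostnameOverride, so the YAML can be templated per node. NODE_NAME is an assumed variable in this sketch:

```shell
# write kube-proxy-config.yml for one node; only NODE_NAME changes per node
NODE_NAME=k8s-node2
cat > kube-proxy-config.yml <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /usr/local/k8s/conf/kube-proxy.kubeconfig
hostnameOverride: ${NODE_NAME}
clusterCIDR: 10.2.0.0/24
EOF
grep hostnameOverride kube-proxy-config.yml
```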

kube-proxy --help

Reference parameters: https://github.com/kubernetes/kube-proxy/blob/20569a1933eee4b6a526bfe564d476dd7e29c020/config/v1alpha1/types.go#L136
Configuration template: https://github.com/ReSearchITEng/kubeadm-playbook/blob/master/group_vars/all/KubeProxyConfiguration.yml

systemd management

 cat /usr/lib/systemd/system/kube-proxy.service 
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/usr/local/k8s/conf/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Check status

kubectl get node -o wide
#show node details

kubectl run nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
#create a test Pod and expose it as a NodePort Service

kubectl get pods
#list Pods; the default namespace is default

The point of this build process is to understand each component; the configuration still needs tuning later. It is based on online articles and training videos, since upstream does not provide an official binary installation tutorial, so every setting here reflects my understanding of the components. For now the setup can only be said to be up and running, which is already a good basis for further study; whether individual settings are right or wrong can be adjusted and verified later.

Origin blog.csdn.net/yangshihuz/article/details/112313055