Binary installation of k8s - 07: node installation of kubelet, kube-proxy, and CNI plugins


Create node-related directories

mkdir -p /data/k8s/{kubelet,kube-proxy,cni,bin,cert}
mkdir -p /data/k8s/cni/net.d/

Download the kubelet and kube-proxy binaries and the basic CNI plugins into /data/k8s/bin
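
If you still need to fetch these, a minimal sketch is below. The versions (v1.15.6 node binaries to match the cluster version shown later, CNI plugins v0.8.6) and the download URLs are assumptions; adapt them to your environment and mirrors.

cd /tmp
# Kubernetes node binaries
wget https://dl.k8s.io/v1.15.6/kubernetes-node-linux-amd64.tar.gz
tar -xzf kubernetes-node-linux-amd64.tar.gz
cp kubernetes/node/bin/{kubelet,kube-proxy} /data/k8s/bin/
# Basic CNI plugins (bridge, host-local, loopback ship in one bundle)
wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
tar -xzf cni-plugins-linux-amd64-v0.8.6.tgz -C /data/k8s/bin/ ./bridge ./host-local ./loopback
chmod +x /data/k8s/bin/*

After this, /data/k8s/bin should contain: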

[root@node bin]# ls
bridge  host-local  kubelet  kube-proxy  loopback

Pull the CA files from the master.

[root@node cert]# scp 192.168.100.59:/data/k8s/cert/{ca.pem,ca-key.pem,ca-config.json} /data/k8s/cert/
[root@node cert]# ls
ca-config.json  ca-key.pem  ca.pem




Prepare the CNI configuration file

vim /data/k8s/cni/net.d/10-default.conf

{
  "name": "mynet",
  "cniVersion": "0.3.1",
  "type": "bridge",
  "bridge": "mynet0",
  "isDefaultGateway": true,
  "ipMasq": true,
  "hairpinMode": true,
  "ipam": {
    "type": "host-local",
    "subnet": "{{ pod_cni }}"
  }
}

Note: {{ pod_cni }} is the network segment available to pods. I set it to 10.244.0.0/16 here



Enable IPVS

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
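
kube-proxy in IPVS mode also relies on the ipset userland tool, and ipvsadm is handy for inspecting the rules later. Assuming a CentOS/RHEL host (yum), install them with:

yum install -y ipset ipvsadm conntrack-tools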




kubelet configuration section

Next, generate the kubelet.kubeconfig file directly on the master, then upload it to the corresponding node.

Prepare kubelet certificate signing request

Operation on the master
mkdir -p /data/k8s/node/100.60
vim /data/k8s/node/100.60/kubelet-csr.json

{
  "CN": "system:node:192.168.100.60",
  "hosts": [
    "127.0.0.1",
    "192.168.100.60",
    "node01"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "SiChuan",
      "L": "ChengDu",
      "O": "system:nodes",
      "OU": "Lswzw"
    }
  ]
}

Note:

  • The IP above needs to be changed to the node host's IP.
  • The hostname must be included in the certificate, otherwise kubectl logs will complain that the certificate does not allow the node.
Create kubelet certificate and private key

cd /data/k8s/node/100.60

cfssl gencert \
  -ca=/data/k8s/cert/ca.pem \
  -ca-key=/data/k8s/cert/ca-key.pem \
  -config=/data/k8s/cert/ca-config.json \
  -profile=kubernetes kubelet-csr.json | cfssljson -bare kubelet
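
Optionally sanity-check the generated certificate; the CN and the SAN list should match the node IP and hostname from the CSR:

openssl x509 -noout -subject -in kubelet.pem
openssl x509 -noout -text -in kubelet.pem | grep -A1 'Subject Alternative Name'
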
Create kubelet.kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/data/k8s/cert/ca.pem \
  --embed-certs=true \
  --server={{ KUBE_APISERVER }} \
  --kubeconfig=kubelet.kubeconfig

Note: my {{ KUBE_APISERVER }} is https://192.168.100.59:6443

Set client authentication parameters
kubectl config set-credentials system:node:{{ node_ip }} \
  --client-certificate=/data/k8s/node/100.60/kubelet.pem \
  --embed-certs=true \
  --client-key=/data/k8s/node/100.60/kubelet-key.pem \
  --kubeconfig=kubelet.kubeconfig

Note: {{ node_ip }} is the IP of the node being generated for; mine is 192.168.100.60

Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:node:{{ node_ip }} \
  --kubeconfig=kubelet.kubeconfig

Note: {{ node_ip }} is the IP of the node being generated for; mine is 192.168.100.60

Choose default context
kubectl config use-context default \
  --kubeconfig=kubelet.kubeconfig
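
Optionally confirm the result; the cluster, user, and context set above should all appear:

kubectl config view --kubeconfig=kubelet.kubeconfig
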
Copy the kubelet certificate && kubelet.kubeconfig to the corresponding node
scp kubelet.pem 192.168.100.60:/data/k8s/kubelet/
scp kubelet-key.pem 192.168.100.60:/data/k8s/kubelet/
scp kubelet.kubeconfig 192.168.100.60:/data/k8s/kubelet/

Create corresponding node user permissions

This is very important, otherwise pods cannot be created. The name in subjects is the user set in "Set client authentication parameters" above.

vim node60.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: basic-auth-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:node:192.168.100.60
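
Apply it on the master:

kubectl apply -f node60.yaml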

. . . The following operations are performed on the node . . .

Create kubelet configuration file

vim /data/k8s/kubelet/kubelet-config.yaml

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: {{ node_ip }}
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /data/k8s/cert/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs
cgroupsPerQOS: true
clusterDNS:
- 10.44.0.2
clusterDomain: cluster.local.
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 3 
containerLogMaxSize: 10Mi
enforceNodeAllocatable:
- pods
- kube-reserved
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 200Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 40s
hairpinMode: hairpin-veth 
healthzBindAddress: {{ node_ip }}
healthzPort: 10248
httpCheckFrequency: 40s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
kubeReservedCgroup: /system.slice/kubelet.service
kubeReserved: {'cpu':'200m','memory':'500Mi','ephemeral-storage':'1Gi'}
kubeAPIBurst: 100
kubeAPIQPS: 50
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusReportFrequency: 1m0s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
# disable readOnlyPort 
readOnlyPort: 0
resolvConf: /etc/resolv.conf
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
tlsCertFile: /data/k8s/kubelet/kubelet.pem
tlsPrivateKeyFile: /data/k8s/kubelet/kubelet-key.pem

Note: {{ node_ip }} is the IP of the current node; here it is 192.168.100.60

Create kubelet systemd file

vim /etc/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/data/k8s/kubelet
ExecStartPre=/bin/mount -o remount,rw '/sys/fs/cgroup'
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/cpuset/system.slice/kubelet.service
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/hugetlb/system.slice/kubelet.service
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/memory/system.slice/kubelet.service
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/pids/system.slice/kubelet.service
ExecStart=/data/k8s/bin/kubelet \
  --config=/data/k8s/kubelet/kubelet-config.yaml \
  --cni-bin-dir=/data/k8s/bin \
  --cni-conf-dir=/data/k8s/cni/net.d \
  --hostname-override={{ node_name }} \
  --kubeconfig=/data/k8s/kubelet/kubelet.kubeconfig \
  --network-plugin=cni \
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 \
  --root-dir=/data/k8s/kubelet \
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

Note: {{ node_name }} is the name displayed by kubectl get node; here it is node01

Enable kubelet service
systemctl daemon-reload
systemctl start kubelet
systemctl status kubelet
systemctl enable kubelet
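
With the configuration above, kubelet serves a health endpoint on the node IP at port 10248 (healthzBindAddress/healthzPort). A quick check, assuming node IP 192.168.100.60:

curl http://192.168.100.60:10248/healthz
# expected output: ok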




kube-proxy configuration section

Pull the kube-proxy.kubeconfig file from the master

This file was generated in part 03 of this series

scp 192.168.100.59:/data/k8s/conf/kube-proxy.kubeconfig /data/k8s/kube-proxy/
Create kube-proxy systemd file

vim /etc/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/data/k8s/kube-proxy
ExecStart=/data/k8s/bin/kube-proxy \
  --bind-address={{ node_ip }} \
  --cluster-cidr=10.244.0.0/16 \
  --hostname-override={{ node_name }} \
  --kubeconfig=/data/k8s/kube-proxy/kube-proxy.kubeconfig \
  --logtostderr=true \
  --proxy-mode=ipvs
Restart=always
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Note:

  • kube-proxy distinguishes traffic inside and outside the cluster according to --cluster-cidr. Once --cluster-cidr or --masquerade-all is specified, kube-proxy will SNAT requests that access a Service IP
  • {{ node_ip }} is the node host IP; here mine is 192.168.100.60
  • {{ node_name }} is the displayed node name; here it is node01
Enable kube-proxy service
systemctl daemon-reload
systemctl start kube-proxy
systemctl status kube-proxy
systemctl enable kube-proxy
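
Since --proxy-mode=ipvs, the virtual servers kube-proxy programs can be inspected with ipvsadm (installed in the IPVS step earlier); the kubernetes Service VIP forwarding to the apiserver should show up:

ipvsadm -Ln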




Verify service on master

[root@master conf]# kubectl get node
NAME     STATUS   ROLES    AGE   VERSION
node01   Ready    <none>   24m   v1.15.6
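
As a quick smoke test of the CNI setup, start a throwaway pod (the name and image here are arbitrary) and check that it receives an address from the 10.244.0.0/16 range configured earlier:

[root@master conf]# kubectl run test-nginx --image=nginx --restart=Never
[root@master conf]# kubectl get pod test-nginx -o wide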
