k8s installation notes

Solutions to common errors

Error 1: IsPrivilegedUser

[ERROR IsPrivilegedUser]: user is not running as root [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

kubeadm must be run as root, so run it with sudo:

  sudo kubeadm init --kubernetes-version=v1.15.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
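The error text also mentions --ignore-preflight-errors. That flag takes the check name shown in brackets and downgrades the check to a warning. It is not the right fix here, because kubeadm genuinely needs root for its later steps, but for a check you have consciously decided to accept it looks like this (illustrative only):

  kubeadm init --ignore-preflight-errors=IsPrivilegedUser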

Error 2: FileContent--proc-sys-net-bridge-bridge-nf-call-iptables

[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

# You can set these values manually, but they are lost after a reboot. A method that persists across reboots is given below.
sudo bash -c "echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables"
sudo bash -c "echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables"
# Method that takes effect at boot (persists across reboots):
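A sketch of the boot-persistent setup, following the standard approach from the Kubernetes install docs; the file names under /etc/modules-load.d and /etc/sysctl.d are conventional choices, not requirements:

  # Load br_netfilter at boot; without this module /proc/sys/net/bridge/ does not exist
  echo 'br_netfilter' | sudo tee /etc/modules-load.d/k8s.conf
  sudo modprobe br_netfilter

  # Set the bridge sysctls at boot
  echo 'net.bridge.bridge-nf-call-iptables = 1'  | sudo tee    /etc/sysctl.d/k8s.conf
  echo 'net.bridge.bridge-nf-call-ip6tables = 1' | sudo tee -a /etc/sysctl.d/k8s.conf

  # Reload now without rebooting
  sudo sysctl --system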

 

Error 3: DirAvailable--var-lib-etcd

[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty

Go into the /var/lib/etcd directory and run sudo rm -rf * to delete the stale files left over from a previous attempt.
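An equivalent sketch; kubeadm reset is the supported cleanup command and removes /var/lib/etcd along with the rest of the state left behind by a failed init:

  sudo rm -rf /var/lib/etcd/*
  # or, to undo everything a previous 'kubeadm init' created:
  sudo kubeadm reset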

Error 4: ImagePull

[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.15.1: output: Error response from daemon: manifest for k8s.gcr.io/kube-apiserver:v1.15.1 not found

Check the version you specified. Here v1.15.1 is wrong: no image with that tag could be found, so init must be re-run with a version whose images exist (the successful run below uses v1.15.0).
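A sketch of the fix, reusing the flags from above. The kubeadm config images subcommands let you list and pre-pull the images for a given version before re-running init:

  # See which image tags this kubeadm expects for the chosen version
  kubeadm config images list --kubernetes-version v1.15.0

  # Optionally pre-pull them (the log below also suggests this)
  sudo kubeadm config images pull --kubernetes-version v1.15.0

  # Re-run init with a version that actually exists
  sudo kubeadm init --kubernetes-version=v1.15.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12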

Startup log analysis

[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.11.11]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master01 localhost] and IPs [192.168.11.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master01 localhost] and IPs [192.168.11.11 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.505998 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7y3zbz.r80ie248lqrtof9g
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
# The next three lines must be run as the current (non-root) user: they copy the
# admin credential file, which kubectl depends on for its authentication information.

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
# The following command can be copied and run on each worker node to join it to the cluster.

kubeadm join 192.168.11.11:6443 --token 7y3zbz.r80ie248lqrtof9g \
    --discovery-token-ca-cert-hash sha256:a99e0c66d5ff741421dd2b8499f663b0145ab0f7c5007c723efb4ba1991589f0
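As the output says, a pod network add-on must be deployed before the node goes Ready. A sketch of the remaining steps, assuming flannel (an assumption; any add-on from the page listed above works, chosen here because --pod-network-cidr=10.244.0.0/16 matches flannel's default network):

  # Manifest URL as published in the flannel repository at the time
  kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

  # After a minute or two the control-plane node should report Ready,
  # and the CoreDNS pods should go Running once the network is up
  kubectl get nodes
  kubectl get pods -n kube-system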

  


Source: www.cnblogs.com/blueboz/p/11129771.html