After an abnormal physical-machine shutdown, a Linux virtual machine failed to mount its file system and would not boot; kubectl then refused to connect

The CentOS 7 virtual machine failed to mount its file system

The virtual and physical machines were not shut down before leaving work on Friday.
When the virtual machine was powered on this Monday, the operating system failed to boot.
The cause was exactly the same as the one described in this article.
After fixing the file-system mount and getting the operating system to boot,

The kubectl command failed to run

kubectl get nodes and every other kubectl command reported the same error:

The connection to the server 192.168.102.149:6443 was refused - did you specify the right host or port?

Running ss -tnl or netstat -tnl showed that nothing was listening on port 6443.
A Google search suggested that the apiserver had failed to start.
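The port check can also be wrapped in a small helper. This is a sketch (the function name and the sample line are mine, not from the post) that greps ss -tnl-style output for a listening port:

```shell
# check_listening: reads `ss -tnl`-style output on stdin and succeeds when
# some socket is listening on the given port (helper name is an assumption).
check_listening() {
  grep -Eq "[:.]$1[[:space:]]"
}

# In practice: ss -tnl | check_listening 6443
# Demo with a captured line (the port is followed by a space in ss output):
printf 'LISTEN 0 128 *:6443 *:*\n' | check_listening 6443 && echo "port 6443 is listening"
```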

docker ps -a | grep k8s_kube-apiserver
docker logs fd6330153fc3

The commands above confirmed that the apiserver was failing to start with:

addrConn.createTransport failed to connect to {127.0.0.1:2379

and, in the end, unable to create storage backend.
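The two messages fit together: the apiserver could not reach etcd on 127.0.0.1:2379, so it could not create its storage backend. One quick way to confirm that chain is to grep the apiserver log for the storage-backend error; the helper below is a sketch (its name is mine), and the sample lines are the ones quoted above:

```shell
# has_etcd_error: reads apiserver log text on stdin and succeeds when the
# storage-backend failure is present (helper name is an assumption).
has_etcd_error() {
  grep -q 'unable to create storage backend'
}

# In practice: docker logs <apiserver-container-id> 2>&1 | has_etcd_error
printf '%s\n' \
  'addrConn.createTransport failed to connect to {127.0.0.1:2379' \
  'unable to create storage backend' | has_etcd_error && echo "apiserver cannot reach etcd"
```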

Reinstalling the k8s (i.e. Kubernetes) cluster with kubeadm

After trying several ways to fix the apiserver without success, I decided to reinstall the Kubernetes cluster.
On the master:

kubeadm reset
kubeadm init --kubernetes-version=v1.14.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
rm -fr $HOME/.kube
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

(Note the order: admin.conf must be copied into $HOME/.kube/config before running kubectl apply, since kubectl needs that kubeconfig to reach the new cluster.)

On node1 and node2:

kubeadm reset
cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
cat /proc/sys/net/bridge/bridge-nf-call-iptables

echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
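The two echo commands take effect immediately but do not survive a reboot. A common way to persist them (the drop-in file path is my choice, not from the post) is a sysctl configuration file, loaded afterwards with sysctl --system:

```
# /etc/sysctl.d/k8s.conf (assumed path; apply with: sysctl --system)
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```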
    
kubeadm join 192.168.202.130:6443 --token ozs9n1.xz2k1w58i5ndsaim \
    --discovery-token-ca-cert-hash sha256:3ca6e686aaec53d11ae08ac29d7de3bf328fd513847c2ffb0d9f317d36ccde96 --ignore-preflight-errors=Swap
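The --discovery-token-ca-cert-hash value is not arbitrary: it is the SHA-256 digest of the cluster CA's public key, and it can be recomputed on the master if the original kubeadm init output is lost. This recipe comes from the kubeadm documentation; /etc/kubernetes/pki/ca.crt is kubeadm's default CA location. Likewise, if the token itself has expired (tokens last 24 hours by default), kubeadm token create --print-join-command prints a fresh join line.

```shell
# Recompute the CA public-key hash used by --discovery-token-ca-cert-hash
# (/etc/kubernetes/pki/ca.crt is kubeadm's default CA certificate path):
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```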

After the above steps, the Kubernetes cluster finally came back to life.

kubectl get cs
kubectl get nodes
kubectl get pods
kubectl get pods -n kube-system
kubectl get ns

Results are as follows:

[root@svn ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@svn ~]# kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
app.centos7.com   Ready    <none>   57m   v1.14.2
jks.centos7.com   Ready    <none>   55m   v1.14.2
svn.centos7.com   Ready    master   61m   v1.14.2
[root@svn ~]# kubectl get nodes -n kube-system
NAME              STATUS   ROLES    AGE   VERSION
app.centos7.com   Ready    <none>   57m   v1.14.2
jks.centos7.com   Ready    <none>   55m   v1.14.2
svn.centos7.com   Ready    master   61m   v1.14.2
[root@svn ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   61m
kube-node-lease   Active   61m
kube-public       Active   61m
kube-system       Active   61m
[root@svn ~]# kubectl get pods
No resources found.
[root@svn ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-2zv9n                   1/1     Running   3          62m
coredns-fb8b8dccf-wwmtk                   1/1     Running   3          62m
etcd-svn.centos7.com                      1/1     Running   1          61m
kube-apiserver-svn.centos7.com            1/1     Running   1          61m
kube-controller-manager-svn.centos7.com   1/1     Running   1          61m
kube-flannel-ds-amd64-989ld               1/1     Running   0          48m
kube-flannel-ds-amd64-bdnkg               1/1     Running   1          48m
kube-flannel-ds-amd64-mndjd               1/1     Running   0          48m
kube-proxy-2s2c9                          1/1     Running   0          58m
kube-proxy-5h7gp                          1/1     Running   1          62m
kube-proxy-ms7cr                          1/1     Running   0          57m
kube-scheduler-svn.centos7.com            1/1     Running   1          61m
[root@svn ~]#
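A transcript like the one above can also be checked mechanically. The helper below is a sketch (the function name is mine) that reads kubectl get nodes output and counts the nodes whose STATUS column is not Ready:

```shell
# not_ready_count: reads `kubectl get nodes` output on stdin and prints how
# many nodes are not in the Ready state (helper name is an assumption).
not_ready_count() {
  awk 'NR > 1 && $2 != "Ready" { n++ } END { print n + 0 }'
}

# In practice: kubectl get nodes | not_ready_count
# Demo with lines from the transcript above:
printf '%s\n' \
  'NAME              STATUS   ROLES    AGE   VERSION' \
  'app.centos7.com   Ready    <none>   57m   v1.14.2' \
  'svn.centos7.com   Ready    master   61m   v1.14.2' | not_ready_count
# prints 0
```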



Origin www.cnblogs.com/chenjo/p/10966901.html