Installing k8s on CentOS

I. Environment Preparation

Docker must be installed on both hosts first: https://blog.csdn.net/mshxuyi/article/details/108209796

master1    192.168.2.100

node2      192.168.2.102

1. Set the master hostname

hostnamectl set-hostname master1

2. Disable SELinux

sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
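
The sed command above only takes effect after a reboot. Optionally, to also stop SELinux from enforcing in the current session:

# switch SELinux to permissive mode right away, no reboot needed
setenforce 0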

3. Disable swap

# temporary
swapoff -a

# permanent: comment out the swap line in /etc/fstab
sed -i 's/.*swap.*/#&/' /etc/fstab
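
To confirm swap is really off, a quick optional check:

# the Swap line should show 0B total after swapoff
free -h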

4. Adjust kernel parameters

cat <<EOF > /etc/sysctl.d/k8s.conf
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# load the br_netfilter module
modprobe br_netfilter

# apply the settings
sysctl -p /etc/sysctl.d/k8s.conf
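
Optionally, verify that the module is loaded and the parameters are in effect:

# confirm the br_netfilter module is loaded
lsmod | grep br_netfilter

# both values should print 1
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward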

5. Change the Cgroup Driver

# eliminates the cgroup driver warning
vim /etc/docker/daemon.json

{
    "registry-mirrors": ["http://hub-mirror.c.163.com"],
    "exec-opts": ["native.cgroupdriver=systemd"]
}

# restart Docker for the change to take effect
systemctl restart docker
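
An optional check that Docker picked up the new driver:

# should print: Cgroup Driver: systemd
docker info | grep -i "cgroup driver"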

6. Configure the Kubernetes yum repository

# configure the yum repository
vim /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

7. Install

# list available versions
yum list kubelet --showduplicates | sort -r 

# install a specific version
yum install -y kubelet-1.17.2 kubeadm-1.17.2 kubectl-1.17.2 
  • kubelet: runs on every node in the cluster; it starts Pods, containers, and other objects

  • kubeadm: the tool for initializing and bootstrapping the cluster

  • kubectl: the command-line tool for talking to the cluster; used to deploy and manage applications, inspect resources, and create, delete, and update components
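
An optional sanity check that all three components were installed at the expected version:

# each of these should report v1.17.2
kubeadm version
kubelet --version
kubectl version --client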

8. Start kubelet

After starting kubelet, checking its status shows that it is not running. Inspecting the cause reveals that the file "/var/lib/kubelet/config.yaml" does not exist. This can be ignored for now; the file is created once kubeadm init has run.

# start and enable on boot
systemctl start kubelet && systemctl enable kubelet

# check the status
systemctl status kubelet

kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since 日 2019-03-31 16:18:55 CST; 7s ago
     Docs: https://kubernetes.io/docs/
  Process: 4564 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
 Main PID: 4564 (code=exited, status=255)
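
The actual failure reason (the missing /var/lib/kubelet/config.yaml) can be read from the kubelet logs:

# inspect the kubelet logs for the exit reason
journalctl -xeu kubelet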

II. Master Installation

1. Initialize k8s

kubeadm init \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.17.2 \
--apiserver-advertise-address 192.168.2.100 \
--pod-network-cidr=10.244.0.0/16

--image-repository: specifies the image registry to pull from

--kubernetes-version: specifies the k8s version; it must match the version installed above

--apiserver-advertise-address: specifies the master's advertise address (the interface the API server listens on)

--pod-network-cidr: specifies the Pod network range; the flannel network plugin is used here

On success, "/var/lib/kubelet/config.yaml" is created automatically.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

# save this line; it is needed later when a node joins the cluster
kubeadm join 192.168.2.100:6443 --token 8a1x7a.84gh8ghc9c3z7uak \
    --discovery-token-ca-cert-hash sha256:16ebeae9143006938c81126050f8fc8527d2a6b1c4991d07b9282f47cf4203d6 

2. Load the kubeconfig

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
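
When working as root, an alternative (non-persistent) way to point kubectl at the cluster is to export the kubeconfig path directly:

# alternative to copying admin.conf; only lasts for the current shell
export KUBECONFIG=/etc/kubernetes/admin.conf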

3. Check component status

[root@master1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
controller-manager   Healthy   ok  

4. Check the nodes

# the status is NotReady because the pod network has not been installed yet
[root@master1 ~]# kubectl get nodes
NAME      STATUS     ROLES    AGE    VERSION
master1   NotReady   master   105s   v1.17.2

5. Install the pod network

A pod network is required for the k8s cluster to work; without it, pods cannot communicate with each other. k8s supports several network plugins; flannel is used here.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

6. Check pod status; make sure every pod is Running

# list pods in all namespaces
[root@master1 ~]# kubectl get pod --all-namespaces -o wide


NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE    IP              NODE      NOMINATED NODE   READINESS GATES
kube-system   coredns-9d85f5447-qghnb           1/1     Running   1          63m    10.244.0.5      master1   <none>           <none>
kube-system   coredns-9d85f5447-xqsl2           1/1     Running   1          63m    10.244.0.4      master1   <none>           <none>
kube-system   etcd-master1                      1/1     Running   1          63m    192.168.2.100   master1   <none>           <none>
kube-system   kube-apiserver-master1            1/1     Running   1          63m    192.168.2.100   master1   <none>           <none>
kube-system   kube-controller-manager-master1   1/1     Running   1          63m    192.168.2.100   master1   <none>           <none>
kube-system   kube-flannel-ds-amd64-52n6m       1/1     Running   0          9m9s   192.168.2.100   master1   <none>           <none>
kube-system   kube-proxy-xk7gq                  1/1     Running   1          63m    192.168.2.100   master1   <none>           <none>
kube-system   kube-scheduler-master1            1/1     Running   1          63m    192.168.2.100   master1   <none>           <none>

7. Check the nodes; the status is now Ready

[root@master1 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   66m   v1.17.2

8. Master installation complete

III. Node Installation

Prepare the environment on the node exactly as described above, then install k8s.

hostnamectl set-hostname node2    # set the node hostname

1. Join the cluster

# run this on the node
kubeadm join 192.168.2.100:6443 --token j88bsx.o0ugzfnxqdl5s58e \
--discovery-token-ca-cert-hash sha256:16ebeae9143006938c81126050f8fc8527d2a6b1c4991d07b9282f47cf4203d6


# list tokens (run on the master)
kubeadm token list

# if the token has expired, regenerate it on the master
kubeadm token create --print-join-command

2. Check the node network

[root@node2 ~]# ifconfig | grep -A 6 flannel

# the flannel network is up; node2 has been assigned the subnet 10.244.1.0/24
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.1.0  netmask 255.255.255.255  broadcast 0.0.0.0
        ether e2:4a:3e:58:f7:80  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

IV. Check the Cluster

[root@master1 ~]# kubectl get nodes

# output
NAME      STATUS   ROLES    AGE    VERSION
master1   Ready    master   6d4h   v1.17.2
node2     Ready    <none>   80m    v1.17.2

Check the pods

The node's network has joined the cluster. One extra copy of each of the following pods appears for every node in the cluster:

kube-flannel-ds-amd64-

kube-proxy-

[root@master1 ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE    IP              NODE      NOMINATED NODE   READINESS GATES
default       nginx-5578584966-ch9x4            1/1     Running   0          16m    10.244.1.6      node2     <none>           <none>
kube-system   coredns-9d85f5447-qghnb           1/1     Running   36         6d4h   10.244.0.7      master1   <none>           <none>
kube-system   coredns-9d85f5447-xqsl2           1/1     Running   35         6d4h   10.244.0.6      master1   <none>           <none>
kube-system   etcd-master1                      1/1     Running   6          6d4h   192.168.2.100   master1   <none>           <none>
kube-system   kube-apiserver-master1            1/1     Running   7          6d4h   192.168.2.100   master1   <none>           <none>
kube-system   kube-controller-manager-master1   1/1     Running   6          6d4h   192.168.2.100   master1   <none>           <none>
kube-system   kube-flannel-ds-amd64-h2f4w       1/1     Running   3          6d1h   192.168.2.100   master1   <none>           <none>
kube-system   kube-flannel-ds-amd64-z57qk       1/1     Running   0          80m    192.168.2.102   node2     <none>           <none>
kube-system   kube-proxy-4j8pj                  1/1     Running   0          80m    192.168.2.102   node2     <none>           <none>
kube-system   kube-proxy-xk7gq                  1/1     Running   5          6d4h   192.168.2.100   master1   <none>           <none>
kube-system   kube-scheduler-master1            1/1     Running   7          6d4h   192.168.2.100   master1   <none>           <none>

How to remove a node

# delete the node2 node (run on the master)
kubectl delete node node2

# reset the node (run on node2)
kubeadm reset

# rejoin the cluster
kubeadm join
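
Optionally, before running kubectl delete node it is safer to drain the node first so its workloads are evicted gracefully; --ignore-daemonsets is needed because of the flannel and kube-proxy DaemonSet pods:

# evict pods from node2 before deleting it (run on the master)
kubectl drain node2 --ignore-daemonsets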

V. Deploy an Application

1. Deploy an nginx from the master

kubectl run nginx --image=nginx --port=80 --replicas=1

--replicas: number of replicas
nginx: the name
--image: the image to use (pulled from Docker Hub by default)
--port: the container port
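
Note that on v1.17 kubectl run with --replicas still works but prints a deprecation warning, since the Deployment generator is being phased out. An equivalent way to create the same Deployment, as a sketch:

# create the nginx Deployment without the deprecated run generator
kubectl create deployment nginx --image=nginx

# scale to the desired number of replicas
kubectl scale deployment nginx --replicas=1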

2. Check: nginx has been scheduled onto node2 with IP 10.244.1.6

[root@master1 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
nginx-5578584966-ch9x4   1/1     Running   0          14m   10.244.1.6   node2   <none>           <none>

3. Use a Service to provide a single entry point for a group of pods, along with load balancing and automatic service discovery

# expose the deployment via a Service of type NodePort
kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort

This step exposes the service; in effect it places a load balancer in front of the pods, since they may be spread across different nodes.
--port: the port exposed by the Service, reachable via the ClusterIP
--type=NodePort: also expose the service on every node at <node IP>:<node port>
--target-port: the container port

4. Check the Service

[root@master1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        6d5h
nginx        NodePort    10.102.220.172   <none>        80:31863/TCP   2s

5. Access it; success

http://192.168.2.102:31863
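
The same check can be done from the command line on any machine that can reach the node (the IP and NodePort come from the kubectl get svc output above):

# should return the nginx welcome page HTML
curl http://192.168.2.102:31863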

Reposted from blog.csdn.net/mshxuyi/article/details/108346500