
Deploying an Application on a Kubernetes HA Cluster on CentOS 7

I. Introduction to the Architecture

In the previous article we set up the Kubernetes cluster itself.

Today we walk through the introductory hello-world example: a guestbook web application with a two-tier distributed architecture built on PHP and Redis. The PHP front-end web site queries and adds guestbook messages by accessing the Redis back end, with read/write separation: the guestbook contents shown on the page are queried from Redis, and when a message is added and submitted on the home page it is written into Redis.

There are three front-end nodes (php-frontend); access to the site is load-balanced across them.

There are three Redis back-end nodes: one redis-master and two redis-slaves; the two redis-slaves replicate their data from the redis-master.

php-frontend separates reads from writes: writes go to the master, while reads are served from the slaves.

Clients simply access the corresponding front-end address.
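The read/write split described above typically works through Kubernetes service discovery: kubelet injects {SERVICE_NAME}_SERVICE_HOST/_PORT environment variables for every service into each container. A minimal sketch of the routing decision (the IPs below are the cluster IPs that appear later in this article; the `route` helper is illustrative, not part of the guestbook image):

```shell
# Simulated service env vars as kubelet would inject them for the
# redis-master and redis-slave services (values are examples).
REDIS_MASTER_SERVICE_HOST=10.254.167.16
REDIS_SLAVE_SERVICE_HOST=10.254.29.69

route() {          # route <get|set>: pick the backend for an operation
  case "$1" in
    set) echo "$REDIS_MASTER_SERVICE_HOST" ;;  # writes go to the master
    get) echo "$REDIS_SLAVE_SERVICE_HOST" ;;   # reads go to the slaves
  esac
}

route set
route get
```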

 

The overall architecture looks like this:

 

 

II. Creating the redis-master Pod and Service

1. First define an RC to create the pod, then define the service associated with it.

For the redis-master service, create a ReplicationController definition file named redis-master-controller.yaml with the following content (run on any one of the three masters):

 

[root@k8s ~]# mkdir /etc/k8s_yaml
[root@k8s ~]# vi /etc/k8s_yaml/redis-master-controller.yaml
apiVersion: v1    # API version
kind: ReplicationController    # type of resource to create: a ReplicationController here
metadata:    # resource metadata
  name: redis-master    # resource name
  labels:    # resource labels
    name: redis-master    # label value
spec:    # RC specification
  replicas: 1    # number of replicas: 1 here
  selector:    # the RC selects the pods it manages via spec.selector
    name: redis-master
  template:    # pod template
    metadata:    # pod metadata
      labels:    # pod labels
        name: redis-master
    spec:    # pod specification
      containers:    # containers in the pod
      - name: master    # container name
        image: kubeguide/redis-master    # image to use
        ports:    # port the container exposes: 6379 here
          - containerPort: 6379
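One thing worth checking in this manifest: spec.selector must match the labels in the pod template, or the RC will not manage the pods it creates. A minimal self-check, run here against an inline copy of the relevant fields (the awk extraction is a rough sketch that assumes this exact layout):

```shell
# Copy of the selector/template fields from the manifest above.
m=$(mktemp)
cat > "$m" <<'EOF'
spec:
  selector:
    name: redis-master
  template:
    metadata:
      labels:
        name: redis-master
EOF

# Pull the value on the line following "selector:" and "labels:".
sel=$(awk '/selector:/{getline; sub(/^ +name: /,""); print; exit}' "$m")
lbl=$(awk '/labels:/{getline; sub(/^ +name: /,""); print; exit}' "$m")

[ "$sel" = "$lbl" ] && echo "selector matches template labels"
```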

 

 

2. After creating the file, run the following command (on any one of the three masters):

[root@k8s ~]# kubectl create -f /etc/k8s_yaml/redis-master-controller.yaml
replicationcontroller "redis-master" created

 

 

3. Check the newly created RC (on any one of the three masters):

[root@k8s ~]# kubectl get rc
NAME           DESIRED   CURRENT   READY     AGE
redis-master   1         0         0         1m

 

 

4. Check the pods (on any one of the three masters):

We ran into a problem:

[root@k8s ~]# kubectl get pods
No resources found.

Fix (run on all masters):

  1. vi /etc/kubernetes/apiserver
  2. Find the line KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota", remove ServiceAccount from it, then save and exit.
  3. Restart the kube-apiserver service.
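The edit in step 2 can also be scripted. The sketch below runs against a temporary copy so it is safe to test anywhere; on a real master you would point sed at /etc/kubernetes/apiserver and then restart kube-apiserver:

```shell
# Work on a temp copy containing the relevant line.
conf=$(mktemp)
cat > "$conf" <<'EOF'
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
EOF

# Drop ServiceAccount from the admission-control list.
sed -i 's/,ServiceAccount//' "$conf"
grep KUBE_ADMISSION_CONTROL "$conf"
# On the real master, afterwards: systemctl restart kube-apiserver
```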

 

 

[root@k8s ~]# kubectl get pods
NAME                 READY     STATUS              RESTARTS   AGE
redis-master-hv76v   0/1       ContainerCreating   0          1m


5. Inspect redis-master-hv76v (on any one of the three masters):

[root@k8s ~]# kubectl describe pod redis-master-hv76v

 

Here we got an error:

Pulling the image registry.access.redhat.com/rhel7/pod-infrastructure:latest failed; the cause is that Docker tried to open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt, which does not exist.

Fix (run on all three master nodes):

[root@k8s ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://wghlmi3i.mirror.aliyuncs.com"]
}
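Since dockerd refuses to start when daemon.json is malformed, it is worth validating the JSON before restarting Docker. A quick check (python3 assumed available; on CentOS 7, plain `python -m json.tool` works the same way):

```shell
# Validate the daemon.json contents shown above; json.tool exits non-zero
# on invalid JSON, so a broken file is caught before dockerd restarts.
echo '{"registry-mirrors": ["https://wghlmi3i.mirror.aliyuncs.com"]}' | python3 -m json.tool
# On a real node:  python3 -m json.tool /etc/docker/daemon.json
```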

Approach 1: try pulling the image directly; it fails because /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt is missing.

 

 

Inspecting the missing file shows it is a symlink pointing into a missing dependency called rhsm; searching for that dependency shows the missing package is python-rhsm.

Install it with yum:
[root@k8s ~]# yum install python-rhsm-1.19.10-1.el7_4.x86_64

After installing, the problem remains; there is still no redhat-uep.pem file:

[root@k8s ~]# ll /etc/rhsm/ca/
total 0

Approach 2: following the steps from this reference: http://www.mamicode.com/info-detail-2310522.html

wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem

Now the file we wanted is there:

[root@k8s ~]# cat /etc/rhsm/ca/redhat-uep.pem
[root@k8s ~]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest


6. Create the Service associated with the redis-master pod; the file contents are as follows (on any one of the three masters):

[root@k8s ~]# vi /etc/k8s_yaml/redis-master-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  ports:
  - port: 6379    # port the service listens on
    targetPort: 6379    # port on the back-end pods to forward to, i.e. the port the container exposes
  selector:
    name: redis-master

 

 

7. Create the service (on any one of the three masters):

[root@k8s ~]# kubectl create -f /etc/k8s_yaml/redis-master-service.yaml
service "redis-master" created

 

 

8. Check the new service (on any one of the three masters):

[root@k8s ~]# kubectl get services
NAME           CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     10.254.0.1      <none>        443/TCP    17h
redis-master   10.254.167.16   <none>        6379/TCP   1m

 

 

III. Creating the redis-slave Pod and Service

1. For the redis-slave service, create a ReplicationController definition file named redis-slave-controller.yaml with the following content:

[root@k8s ~]# vi /etc/k8s_yaml/redis-slave-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  replicas: 2
  selector:
    name: redis-slave
  template:
    metadata:
      labels:
        name: redis-slave
    spec:
      containers:
      - name: slave
        image: kubeguide/guestbook-redis-slave
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 6379
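GET_HOSTS_FROM=env tells the slave image to find the master through the service environment variables rather than cluster DNS. A sketch of the lookup such a startup script performs (the variable values are simulated, and the final echo is illustrative of what the kubeguide image does, not its exact code):

```shell
# Simulated: kubelet injects these into every container for a service
# named redis-master (name uppercased, dashes become underscores).
REDIS_MASTER_SERVICE_HOST=10.254.167.16
REDIS_MASTER_SERVICE_PORT=6379
GET_HOSTS_FROM=env

# Choose the lookup method based on GET_HOSTS_FROM.
if [ "$GET_HOSTS_FROM" = "env" ]; then
  master_host="$REDIS_MASTER_SERVICE_HOST"   # service env vars
else
  master_host="redis-master"                 # cluster DNS name
fi
echo "replicating from $master_host:$REDIS_MASTER_SERVICE_PORT"
```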

 

 

2. After creating the file, run:

[root@k8s ~]# kubectl create -f /etc/k8s_yaml/redis-slave-controller.yaml
replicationcontroller "redis-slave" created

 

 

 

3. Check the newly created RC:

[root@k8s ~]# kubectl get rc redis-slave
NAME          DESIRED   CURRENT   READY     AGE
redis-slave   2         2         0         58s

 

4. The redis-slave-service.yaml file contents are as follows:

[root@k8s ~]# vi /etc/k8s_yaml/redis-slave-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  ports:
  - port: 6379
  selector:
    name: redis-slave

 

 

5. Create the service:

[root@k8s ~]# kubectl create -f /etc/k8s_yaml/redis-slave-service.yaml
service "redis-slave" created

 

6. Check the service:

[root@k8s ~]# kubectl get service
NAME           CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     10.254.0.1      <none>        443/TCP    17h
redis-master   10.254.167.16   <none>        6379/TCP   12m
redis-slave    10.254.29.69    <none>        6379/TCP   36s

 

IV. Creating the frontend Pod and Service

1. Create the RC for the frontend:

[root@k8s ~]# vi /etc/k8s_yaml/frontend-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  replicas: 3
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
      - name: frontend
        image: kubeguide/guestbook-php-frontend
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 80

 

2. Create the RC:

[root@k8s ~]# kubectl create -f /etc/k8s_yaml/frontend-controller.yaml
replicationcontroller "frontend" created

 

 

 

3. Create the frontend service. The front end needs to be reachable from outside the cluster, so configure it as follows:

[root@k8s ~]# vi /etc/k8s_yaml/frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  type: NodePort    # expose the service externally via a port on each node
  ports:
  - port: 80    # port the service listens on
    nodePort: 30001    # external port opened on each node
  selector:
    name: frontend
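Note that nodePort values must fall inside kube-apiserver's service node port range, which defaults to 30000-32767 (the --service-node-port-range flag); 30001 is fine, but an out-of-range value is rejected at creation time. A trivial pre-check:

```shell
# nodePort must be inside the apiserver's --service-node-port-range,
# which defaults to 30000-32767.
valid_nodeport() { [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]; }
valid_nodeport 30001 && echo "nodePort 30001 is in range"
```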

 

[root@k8s ~]# kubectl create -f /etc/k8s_yaml/frontend-service.yaml
service "frontend" created

 

 

4. With the RCs, pods and services created, check the pods:

[root@k8s ~]# kubectl get pods
NAME                 READY     STATUS    RESTARTS   AGE
frontend-97mmq       1/1       Running   8          18m
frontend-9nv65       1/1       Running   8          18m
frontend-nk2p6       1/1       Running   8          18m
redis-master-7whhp   1/1       Running   0          3h
redis-slave-dgzc3    1/1       Running   0          3h
redis-slave-dplq8    1/1       Running   0          3h

5. In a browser, enter the IP address of any node running a frontend pod plus the nodePort 30001 we defined (the nodePort is actually open on every node).

In my case that is 192.168.10.101:30001

hello world

 

 

That completes the Kubernetes hello-world example.


Summary:

1. After applying the fix for the ContainerCreating problem in Part II, steps 4-5, you need to delete the RCs and services and then re-create them, as follows:

Delete the previously created RCs.

Check:

[root@k8s ~]# kubectl get pods
NAME                 READY     STATUS              RESTARTS   AGE
frontend-41658       0/1       ContainerCreating   0          33m
frontend-gptkz       0/1       ContainerCreating   0          33m
frontend-r7fxg       0/1       ContainerCreating   0          33m
redis-master-4xkx9   0/1       ContainerCreating   0          32m
redis-slave-g1p4c    0/1       ContainerCreating   0          33m
redis-slave-l0rgg    0/1       ContainerCreating   0          33m

Delete:

[root@k8s ~]# kubectl delete -f /etc/k8s_yaml/redis-master-controller.yaml
replicationcontroller "redis-master" deleted
[root@k8s ~]# kubectl delete -f /etc/k8s_yaml/redis-master-service.yaml
service "redis-master" deleted
[root@k8s ~]# kubectl delete -f /etc/k8s_yaml/redis-slave-controller.yaml
replicationcontroller "redis-slave" deleted
[root@k8s ~]# kubectl delete -f /etc/k8s_yaml/redis-slave-service.yaml
service "redis-slave" deleted
[root@k8s ~]# kubectl delete -f /etc/k8s_yaml/frontend-controller.yaml
replicationcontroller "frontend" deleted
[root@k8s ~]# kubectl delete -f /etc/k8s_yaml/frontend-service.yaml
service "frontend" deleted

 

Check again (it takes a moment for the pods to disappear):

[root@k8s ~]# kubectl get pods
NAME                 READY     STATUS        RESTARTS   AGE
frontend-41658       0/1       Terminating   0          36m
frontend-gptkz       0/1       Terminating   0          36m
frontend-r7fxg       0/1       Terminating   0          36m
redis-master-4xkx9   0/1       Terminating   0          35m
redis-slave-g1p4c    0/1       Terminating   0          36m
redis-slave-l0rgg    0/1       Terminating   0          36m

Re-create:

[root@k8s ~]# kubectl create -f /etc/k8s_yaml/redis-master-controller.yaml
replicationcontroller "redis-master" created
[root@k8s ~]# kubectl create -f /etc/k8s_yaml/redis-master-service.yaml
service "redis-master" created
[root@k8s ~]# kubectl create -f /etc/k8s_yaml/redis-slave-controller.yaml
replicationcontroller "redis-slave" created
[root@k8s ~]# kubectl create -f /etc/k8s_yaml/redis-slave-service.yaml
service "redis-slave" created
[root@k8s ~]# kubectl create -f /etc/k8s_yaml/frontend-controller.yaml
replicationcontroller "frontend" created
[root@k8s ~]# kubectl create -f /etc/k8s_yaml/frontend-service.yaml
service "frontend" created
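The whole delete-and-recreate sequence can be scripted. The sketch below only prints the commands so it is safe to run anywhere; pipe its output to sh on a real master:

```shell
# Same manifests as created earlier in this article.
YAML_DIR=/etc/k8s_yaml
FILES="redis-master-controller.yaml redis-master-service.yaml redis-slave-controller.yaml redis-slave-service.yaml frontend-controller.yaml frontend-service.yaml"

# Print a delete pass followed by a create pass.
for verb in delete create; do
  for f in $FILES; do
    echo "kubectl $verb -f $YAML_DIR/$f"
  done
done
```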

 

 

2. Pods stuck in the CrashLoopBackOff state (no solution found at first):

[root@k8s ~]# kubectl get pods
NAME                 READY     STATUS             RESTARTS   AGE
frontend-3wbr4       0/1       CrashLoopBackOff   6          6m
frontend-4v16m       0/1       CrashLoopBackOff   6          6m
frontend-xzkf4       0/1       CrashLoopBackOff   6          6m

Check the logs:

[root@k8s ~]# kubectl log frontend-3wbr4
W0805 08:47:42.889259    3621 cmd.go:337] log is DEPRECATED and will be removed in a future version. Use logs instead.
AH00534: apache2: Configuration error:

Solution:

CrashLoopBackOff troubleshooting reference: https://blog.csdn.net/qq_21816375/article/details/79193011

1. "log is DEPRECATED and will be removed in a future version. Use logs instead": the log subcommand is deprecated; newer versions use logs.

2. "No MPM loaded": the same module was referenced in both httpd.conf and 00-mpm.conf, which causes the error. Comment out the MPM module in either httpd.conf or 00-mpm.conf.
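The duplicate-MPM fix can be scripted with sed. The sketch below runs against a temporary file with an illustrative LoadModule line (the module name and path are examples, not taken from the article's container):

```shell
# Temp copy standing in for 00-mpm.conf (contents are illustrative).
mpmconf=$(mktemp)
cat > "$mpmconf" <<'EOF'
LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
EOF

# Prefix any MPM LoadModule line with '#' to comment it out.
sed -i 's/^LoadModule mpm_/#&/' "$mpmconf"
cat "$mpmconf"
```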

 

 

3. Common commands:

kubectl get pods    # list all current pods
kubectl get rs    # list all current replica sets
kubectl get services    # list all current services
kubectl get deployment    # list all current deployments
kubectl describe po my-nginx    # detailed status of the my-nginx pod
kubectl describe rs my-nginx    # detailed status of the my-nginx replica set
kubectl describe deployment my-nginx    # detailed status of the my-nginx deployment


 

Reposted from blog.csdn.net/weixin_41515615/article/details/81436800