Deploying Kibana and Logstash on Kubernetes

As more and more applications get deployed on k8s, my four local virtual machines are starting to struggle, so this time the experiments run on a four-node K8S cluster installed with Ansible.

1. Environment

1.1. Hosts

IP               Hostname  Role             OS
192.168.115.210  master    k8s-master, NFS  Ubuntu 16.04.2 LTS
192.168.115.211  node1     k8s-node         Ubuntu 16.04.2 LTS
192.168.115.212  node2     k8s-node         Ubuntu 16.04.2 LTS
192.168.115.213  node3     k8s-node         Ubuntu 16.04.2 LTS

1.2. Images

  • docker.elastic.co/logstash/logstash:6.2.2
  • docker.elastic.co/kibana/kibana:6.2.2

For download instructions, see:

https://www.elastic.co/guide/en/kibana/current/docker.html

https://www.elastic.co/guide/en/logstash/current/docker.html

Unlike elasticsearch, both of these images ship with the x-pack plugin by default; the default images are used here.

A netdisk download link is attached at the end of this article; if you don't need x-pack, refer to the links above.

2. Deployment

2.1. Deploying Kibana

2.1.1. Write the YML file and deploy

---
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    elastic-app: kibana
  name: kibana
  namespace: ns-elasticsearch
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      elastic-app: kibana
  template:
    metadata:
      labels:
        elastic-app: kibana
    spec:
      containers:
        - name: kibana
          image: 192.168.112.21:5000/elastic/kibana:6.2.2
          ports:
            - containerPort: 5601
              protocol: TCP
          env:
            - name: "ELASTICSEARCH_URL"
              value: "http://elasticsearch-service:9200"
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---
kind: Service
apiVersion: v1
metadata:
  labels:
    elastic-app: kibana
  name: kibana-service
  namespace: ns-elasticsearch
spec:
  ports:
    - port: 5601
      targetPort: 5601
  selector:
    elastic-app: kibana
  type: NodePort

The image points to my private registry; change it to your own.

The ES cluster address is injected via an environment variable and points to the ES cluster service built in the previous article.

For compatibility with container orchestration systems, these environment variables are written in all capitals, with underscores as word separators. The helper translates these names to valid Kibana setting names.
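As a rough illustration of that translation (a simplified sketch, not Elastic's actual helper script, which additionally handles camelCase setting names such as server.basePath), the conversion amounts to lowercasing and turning underscores into dots:

```python
# Simplified sketch of the env-var-to-setting translation performed by the
# Docker image's helper (assumption: the real helper also handles camelCase
# setting names, which this one-liner does not).
def env_to_setting(name: str) -> str:
    return name.lower().replace("_", ".")

print(env_to_setting("ELASTICSEARCH_URL"))  # elasticsearch.url
```

This is why the Deployment above can pass ELASTICSEARCH_URL instead of editing kibana.yml inside the image.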

### Deploy the application ###
root@master:~/kubernetes/kibana/yml# kubectl apply -f kibana.yml 

root@master:~/kubernetes/kibana/yml# kubectl get pods --all-namespaces
NAMESPACE          NAME                                                  READY     STATUS    RESTARTS   AGE
kube-system        calico-node-5s598                                     1/1       Running   0          1d
kube-system        calico-node-bw4wm                                     1/1       Running   0          1d
kube-system        calico-node-qdzcg                                     1/1       Running   0          1d
kube-system        calico-node-vp6cr                                     1/1       Running   0          1d
kube-system        kube-apiserver-master                                 1/1       Running   0          1d
kube-system        kube-controller-manager-master                        1/1       Running   0          1d
kube-system        kube-dns-7d9c4d7876-tnzmc                             3/3       Running   0          1d
kube-system        kube-dns-7d9c4d7876-x94jm                             3/3       Running   0          1d
kube-system        kube-proxy-8s5d2                                      1/1       Running   0          1d
kube-system        kube-proxy-j5wvt                                      1/1       Running   0          1d
kube-system        kube-proxy-mvbbn                                      1/1       Running   0          1d
kube-system        kube-proxy-rl7mm                                      1/1       Running   0          1d
kube-system        kube-scheduler-master                                 1/1       Running   0          1d
kube-system        kubedns-autoscaler-564b455d77-lggwt                   1/1       Running   0          1d
kube-system        kubernetes-dashboard-767994d8b8-wch2v                 1/1       Running   0          1d
kube-system        nginx-proxy-node1                                     1/1       Running   0          1d
kube-system        nginx-proxy-node2                                     1/1       Running   0          1d
kube-system        nginx-proxy-node3                                     1/1       Running   0          1d
kube-system        tiller-deploy-798bc759cc-shx7z                        1/1       Running   0          1d
ns-elasticsearch   elasticsearch-data-7bdd87cc67-sxnjd                   1/1       Running   0          1d
ns-elasticsearch   elasticsearch-data-7bdd87cc67-xhdfd                   1/1       Running   0          1d
ns-elasticsearch   elasticsearch-master-59b7d775f8-w4tfb                 1/1       Running   0          1d
ns-elasticsearch   elasticsearch-master-59b7d775f8-wc67k                 1/1       Running   0          1d
ns-elasticsearch   elasticsearch-master-59b7d775f8-z9bjv                 1/1       Running   0          1d
ns-elasticsearch   kibana-7d9897fbc4-gkqqr                               1/1       Running   0          6h

root@master:~/kubernetes/kibana/yml# kubectl get svc --all-namespaces
NAMESPACE          NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
default            kubernetes                  ClusterIP   10.233.0.1      <none>        443/TCP          1d
kube-system        kube-dns                    ClusterIP   10.233.0.3      <none>        53/UDP,53/TCP    1d
kube-system        kubernetes-dashboard        ClusterIP   10.233.55.254   <none>        443/TCP          1d
kube-system        tiller-deploy               ClusterIP   10.233.45.163   <none>        44134/TCP        1d
ns-elasticsearch   elasticsearch-discovery     ClusterIP   10.233.6.191    <none>        9300/TCP         1d
ns-elasticsearch   elasticsearch-service       NodePort    10.233.46.107   <none>        9200:32366/TCP   1d
ns-elasticsearch   kibana-service              NodePort    10.233.20.112   <none>        5601:30730/TCP   6h

2.1.2. Test and verify the application

Open in a browser: http://192.168.115.210:30730/status (your port number will differ from mine)

(Screenshot: Kibana status page)

You can also check the ES cluster's various runtime metrics:

(Screenshot: ES cluster monitoring metrics)

By default you will see a prompt along the lines of "Your Basic license will expire on March 7, 2019.", i.e. it expires one month later. You can follow the link and apply for a free Basic License, valid for one year. Note, however, that the Basic License does not unlock all features!

2.2. Deploying Logstash

2.2.1. Install and set up the NFS server

root@master:~# apt install nfs-kernel-server

root@master:~# vi /etc/exports 
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#

/nfs *(rw,sync,no_root_squash)

root@master:~# systemctl restart nfs-kernel-server
root@master:~# exportfs -r
root@master:~# showmount -e
Export list for master:
/nfs *

Reference: https://help.ubuntu.com/lts/serverguide/network-file-system.html

For CentOS, see: http://blog.csdn.net/chenleiking/article/details/78577657#t29

root@node1:~# apt install nfs-common
root@node2:~# apt install nfs-common
root@node3:~# apt install nfs-common

nfs-common must be installed on every node, otherwise pods will fail to start. See: http://blog.csdn.net/wangtaoking1/article/details/50479390

2.2.2. Deploy Logstash

  • Write the logstash configuration file on the NFS share
root@master:~/kubernetes/logstash/yml# vi /nfs/logstash.conf
input {
        http {

        }
}

filter {
        json {
                source => "message"
        }
}

output {
        elasticsearch {
                hosts => ["elasticsearch-service:9200"]
        }
}
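The filter block can be read as follows (a Python sketch of the idea only, ignoring @timestamp, tags, and failure handling, not how Logstash is actually implemented): the json filter parses the text in the field named by source and merges the resulting keys into the event.

```python
import json

# Sketch of what `json { source => "message" }` does to an event:
# parse the source field as JSON and merge its keys into the event.
def apply_json_filter(event: dict, source: str = "message") -> dict:
    event.update(json.loads(event[source]))
    return event

event = {"message": '{"Say": "Hello world!"}'}
print(apply_json_filter(event)["Say"])  # Hello world!
```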
  • Write the YML file and deploy logstash
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "master-nfs-pv"
  labels:
    release: stable
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /nfs
    server: 192.168.115.210

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logstash-conf-pvc
  namespace: ns-elasticsearch
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      release: stable

---
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    elastic-app: logstash
  name: logstash
  namespace: ns-elasticsearch
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      elastic-app: logstash
  template:
    metadata:
      labels:
        elastic-app: logstash
    spec:
      containers:
        - name: logstash
          image: 192.168.112.21:5000/elastic/logstash:6.2.2
          volumeMounts:
            - mountPath: /usr/share/logstash/pipeline
              name: logstash-conf-volume
          ports:
            - containerPort: 8080
              protocol: TCP
          env:
            - name: "XPACK_MONITORING_ELASTICSEARCH_URL"
              value: "http://elasticsearch-service:9200"
          securityContext:
            privileged: true
      volumes:
        - name: logstash-conf-volume
          persistentVolumeClaim:
            claimName: logstash-conf-pvc
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---
kind: Service
apiVersion: v1
metadata:
  labels:
    elastic-app: logstash
  name: logstash-service
  namespace: ns-elasticsearch
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    elastic-app: logstash
  type: NodePort

The image points to my private registry; change it to your own.

The ES cluster address is injected via an environment variable and points to the ES cluster service built in the previous article.

For compatibility with container orchestration systems, these environment variables are written in all capitals, with underscores as word separators. The helper translates these names to valid Logstash setting names.

The Logstash configuration file is mounted from NFS into the container at /usr/share/logstash/pipeline.

Different input plugins may require different ports to be exposed.
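For example, if a tcp input listening on port 5000 were added to the pipeline (a hypothetical addition, not part of the deployment above), that port would also have to be declared on both the container and the Service:

```yaml
# Hypothetical extra port for a `tcp { port => 5000 }` input.
# In the Deployment's container spec:
ports:
  - containerPort: 8080   # http input (default port)
    protocol: TCP
  - containerPort: 5000   # added tcp input
    protocol: TCP
# And correspondingly in the Service spec:
#   - port: 5000
#     targetPort: 5000
```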

root@master:~/kubernetes/logstash/yml# kubectl apply -f logstash.yml

root@master:~/kubernetes/logstash/yml# kubectl get pods -n ns-elasticsearch
NAME                                    READY     STATUS    RESTARTS   AGE
elasticsearch-data-7bdd87cc67-sxnjd     1/1       Running   0          1d
elasticsearch-data-7bdd87cc67-xhdfd     1/1       Running   0          1d
elasticsearch-master-59b7d775f8-w4tfb   1/1       Running   0          1d
elasticsearch-master-59b7d775f8-wc67k   1/1       Running   0          1d
elasticsearch-master-59b7d775f8-z9bjv   1/1       Running   0          1d
kibana-7d9897fbc4-gkqqr                 1/1       Running   0          7h
logstash-559956d449-cww6v               1/1       Running   0          8m

root@master:~/kubernetes/logstash/yml# kubectl get svc -n ns-elasticsearch
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
elasticsearch-discovery   ClusterIP   10.233.6.191    <none>        9300/TCP         1d
elasticsearch-service     NodePort    10.233.46.107   <none>        9200:32366/TCP   1d
kibana-service            NodePort    10.233.20.112   <none>        5601:30730/TCP   7h
logstash-service          NodePort    10.233.9.191    <none>        8080:30040/TCP   9m

2.2.3. Test and verify the application

  • Open Kibana's Monitoring page; the Logstash instance information is shown

(Screenshot: Logstash shown in Kibana Monitoring)

  • Send a message to Logstash, then view the ES data in Kibana
root@master:~/kubernetes/logstash/yml# curl -XPOST 'http://192.168.115.210:30040' -H 'Content-Type: application/json' -d'
{
    "Say" : "Hello world!"
}
'

Note that your port number may differ.
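The same test message can also be sent from Python (a sketch only; the host and NodePort below are the ones from my cluster and will differ in yours, so the actual network call is left commented out):

```python
import json
from urllib import request

# Build the same POST request as the curl command above.
# 192.168.115.210:30040 is this cluster's NodePort -- replace with yours.
url = "http://192.168.115.210:30040"
body = json.dumps({"Say": "Hello world!"}).encode("utf-8")
req = request.Request(url, data=body,
                      headers={"Content-Type": "application/json"})
# request.urlopen(req)  # run this from a machine that can reach the node
```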

(Screenshots: the message indexed in ES, viewed in Kibana)

3. Notes

  • If Kibana and Logstash are not given fixed node names, every restart shows up as a new instance in Kibana's Monitoring; that is why my screenshot shows 2 Logstash nodes. It may also be an artifact of the index data recorded by monitoring.
  • When using NFS on Ubuntu, nfs-common must be installed on every K8S node, otherwise the pod will report a mount failure at startup.
  • Logstash's HTTP input plugin listens on port 8080 by default.

4. Downloads

Netdisk link: https://pan.baidu.com/s/1E7XFaj66VIcoG9ku3gogPA


Reposted from blog.csdn.net/chenleiking/article/details/79466158