Deploying Filebeat + ELK with Helm
System Architecture:
1) Multiple Filebeat instances, one per node, collect that node's logs and ship them to Logstash.
2) Multiple Logstash nodes run in parallel (load-balanced, not as a cluster), filter and process the log records, then forward them to the Elasticsearch cluster.
3) Multiple Elasticsearch nodes form a cluster that provides log indexing and storage.
4) Kibana retrieves and analyzes the log data stored in Elasticsearch.
1. Elasticsearch deployment
Official chart address: https://github.com/elastic/helm-charts/tree/master/elasticsearch
Create the logs namespace
kubectl create ns logs
Add the elastic helm charts repo
helm repo add elastic https://helm.elastic.co
Install it
helm install --name elasticsearch elastic/elasticsearch --namespace logs
Parameter Description:
image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "7.2.0"
imagePullPolicy: "IfNotPresent"
podAnnotations: {}
esJavaOpts: "-Xmx1g -Xms1g"
resources:
  requests:
    cpu: "100m"
    memory: "2Gi"
  limits:
    cpu: "1000m"
    memory: "2Gi"
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "nfs-client"
  resources:
    requests:
      storage: 50Gi
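With Helm 2 (the `--name` syntax used above), overrides like these normally go into a values file passed with `-f`. A minimal sketch, assuming cluster access; the file name `es-values.yaml` and the trimmed-down set of keys are illustrative:

```shell
# Write a subset of the overrides described above to a values file.
cat > es-values.yaml <<'EOF'
esJavaOpts: "-Xmx1g -Xms1g"
resources:
  requests:
    cpu: "100m"
    memory: "2Gi"
volumeClaimTemplate:
  storageClassName: "nfs-client"
  resources:
    requests:
      storage: 50Gi
EOF

# Install with the overrides; guarded so the snippet is a no-op without helm.
if command -v helm >/dev/null 2>&1; then
  helm install --name elasticsearch elastic/elasticsearch --namespace logs -f es-values.yaml
fi
```

Keeping the overrides in a file rather than on the command line makes upgrades reproducible (`helm upgrade elasticsearch elastic/elasticsearch -f es-values.yaml`).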
2. Filebeat deployment
Official chart address: https://github.com/elastic/helm-charts/tree/master/filebeat
Add the elastic helm charts repo
helm repo add elastic https://helm.elastic.co
Install it
helm install --name filebeat elastic/filebeat --namespace logs
Parameter Description:
image: "docker.elastic.co/beats/filebeat"
imageTag: "7.2.0"
imagePullPolicy: "IfNotPresent"
resources:
  requests:
    cpu: "100m"
    memory: "100Mi"
  limits:
    cpu: "1000m"
    memory: "200Mi"
Now a question: by default Filebeat collects container logs from the Docker host path /var/lib/docker/containers. If the Docker data directory has been moved elsewhere, how do we collect the logs? Simple: edit the hostPath parameter in the chart's DaemonSet template:
- name: varlibdockercontainers
  hostPath:
    path: /var/lib/docker/containers    # change to match the Docker installation path
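To find out what that path actually is on a node, Docker reports its data root directly (assuming the docker CLI is available on the node; the fallback below is the stock default):

```shell
# Query Docker for its data root; fall back to the default if docker is absent.
DOCKER_ROOT=$(docker info --format '{{.DockerRootDir}}' 2>/dev/null || echo /var/lib/docker)
echo "$DOCKER_ROOT"
# Point the DaemonSet hostPath above at "$DOCKER_ROOT/containers".
```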
Java programs emit multi-line exception stack traces; define a multiline regular expression so that those lines are merged into a single event:
filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
    - type: docker
      containers.ids:
      - '*'
      multiline.pattern: '^[0-9]'
      multiline.negate: true
      multiline.match: after
      processors:
      - add_kubernetes_metadata:
          in_cluster: true
    output.elasticsearch:
      hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
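The multiline settings above can be sanity-checked without a cluster: lines that do NOT start with a digit (`negate: true`) are appended to the previous event (`match: after`), so each line matching `^[0-9]` starts a new event. A quick illustration with a hypothetical Java log:

```shell
# Sample log: one ERROR line followed by a stack trace, then a new event.
cat > sample.log <<'EOF'
2019-07-01 10:00:00 ERROR something failed
java.lang.NullPointerException: boom
    at com.example.Foo.bar(Foo.java:42)
    at com.example.Main.main(Main.java:7)
2019-07-01 10:00:01 INFO recovered
EOF

# Each line matching ^[0-9] starts a new event: 5 raw lines become 2 events.
grep -c '^[0-9]' sample.log
```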
3. Kibana deployment
Official chart address: https://github.com/elastic/helm-charts/tree/master/kibana
Add the elastic helm charts repo
helm repo add elastic https://helm.elastic.co
Install it
helm install --name kibana elastic/kibana --namespace logs
Parameter Description:
elasticsearchHosts: "http://elasticsearch-master:9200"
replicas: 1
image: "docker.elastic.co/kibana/kibana"
imageTag: "7.2.0"
imagePullPolicy: "IfNotPresent"
resources:
  requests:
    cpu: "100m"
    memory: "500Mi"
  limits:
    cpu: "1000m"
    memory: "1Gi"
4. Logstash deployment
Official chart address: https://github.com/helm/charts/tree/master/stable/logstash
Install it
helm install --name logstash stable/logstash --namespace logs
Parameter Description:
image:
  repository: docker.elastic.co/logstash/logstash-oss
  tag: 7.2.0
  pullPolicy: IfNotPresent
persistence:
  enabled: true
  storageClass: "nfs-client"
  accessMode: ReadWriteOnce
  size: 2Gi
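Note that the Filebeat config shown earlier ships logs directly to Elasticsearch, while the architecture at the top routes them through Logstash. To follow the architecture, give Logstash a beats input and an Elasticsearch output. A sketch, assuming the stable/logstash chart's `inputs`/`outputs` value keys (verify against that chart's values.yaml); the index name is illustrative:

```yaml
inputs:
  main: |-
    input {
      beats {
        port => 5044
      }
    }
outputs:
  main: |-
    output {
      elasticsearch {
        hosts => ["http://elasticsearch-master:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
      }
    }
```

Filebeat would then use `output.logstash` with `hosts: ["logstash:5044"]` in place of the `output.elasticsearch` section shown above.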
5. Elastalert deployment
Official chart address: https://github.com/helm/charts/tree/master/stable/elastalert
Install it
helm install --name elastalert ./elastalert --namespace logs
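ElastAlert does nothing without at least one rule. A sketch of a frequency rule, assuming the chart exposes a `rules` value mapping rule names to rule YAML (check the chart's values.yaml); the index pattern, threshold, and query are illustrative:

```yaml
rules:
  error_spike: |-
    name: error-spike
    type: frequency
    index: filebeat-*
    num_events: 50
    timeframe:
      minutes: 5
    filter:
    - query:
        query_string:
          query: "message: ERROR"
    alert:
    - debug
```

The `debug` alerter only logs matches; in practice it would be swapped for `email`, `slack`, or similar once the rule is confirmed to fire as expected.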