Installing Elasticsearch
1. Find the installation package using helm
Prerequisite: create a namespace and create 5 PVs in advance (3 for master and 2 for data; each master PV must request at least 5Gi, each data PV at least 30Gi).
ES image: docker.elastic.co/elasticsearch/elasticsearch:6.7.0
kubectl create ns elk-logging
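The prerequisite above calls for pre-created PVs. A minimal sketch of one master PV manifest, assuming hostPath storage (the name and path are illustrative, not from this environment); repeat for the remaining master and data volumes with the sizes given above:

```shell
# Illustrative PV manifest; hostPath backend, name, and path are assumptions.
cat > es-master-pv0.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-master-pv0
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/es-master-0
EOF
# Apply with: kubectl apply -f es-master-pv0.yaml
grep 'storage:' es-master-pv0.yaml
```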
Find the installation package:
helm search elasticsearch
2. Download the installation package
cd ~/.helm/cache/archive/
helm fetch stable/elasticsearch
3. Modify the default values file
tar -zxvf elasticsearch-1.30.0.tgz
vim elasticsearch/values.yaml
Modify the values.yaml configuration:
Enable ES internal monitoring:
image.repository: docker.elastic.co/elasticsearch/elasticsearch
cluster.xpackEnable: true
cluster.env.XPACK_MONITORING_ENABLED: true
Note: for Kibana, image.repository should be docker.elastic.co/kibana/kibana rather than the OSS version.
Set the client service HTTP NodePort (if client.serviceType is not NodePort, httpNodePort has no effect):
client:
  name: client
  replicas: 1
  serviceType: NodePort
  ## If serviceType = "NodePort", this will set a specific nodePort for the client HTTP port
  httpNodePort: 30920
4. Install the application package
helm install stable/elasticsearch -n efk-es --namespace elk-logging -f elasticsearch/values.yaml
View in a browser: http://192.1.80.39:30920/
5. Test:
[root@k8s-master ~/.helm/cache/archive]# kubectl get svc -n elk-logging
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
efk-es-elasticsearch-client NodePort 10.102.193.144 <none> 9200:30920/TCP 3m30s
efk-es-elasticsearch-discovery ClusterIP None <none> 9300/TCP 3m29s
[root@k8s-master ~/.helm/cache/archive]# kubectl get pod -n elk-logging
NAME READY STATUS RESTARTS AGE
efk-es-elasticsearch-client-6cb7f4b864-57kx7 1/1 Running 0 33h
efk-es-elasticsearch-client-6cb7f4b864-svmtz 1/1 Running 0 33h
efk-es-elasticsearch-data-0 1/1 Running 0 33h
efk-es-elasticsearch-data-1 1/1 Running 0 11h
efk-es-elasticsearch-master-0 1/1 Running 0 33h
efk-es-elasticsearch-master-1 1/1 Running 0 33h
efk-es-elasticsearch-master-2 1/1 Running 0 29h
[root@k8s-master ~/.helm/cache/archive]#
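A quick way to confirm nothing is stuck is to count pods whose STATUS is not Running. The sketch below applies the check to a few sample lines copied from the listing above (in practice, pipe live `kubectl get pod -n elk-logging` output in instead):

```shell
# Count pods not in the Running state; NR > 1 skips the header line.
cat > pods.txt <<'EOF'
NAME READY STATUS RESTARTS AGE
efk-es-elasticsearch-client-6cb7f4b864-57kx7 1/1 Running 0 33h
efk-es-elasticsearch-data-0 1/1 Running 0 33h
efk-es-elasticsearch-master-0 1/1 Running 0 33h
EOF
awk 'NR > 1 && $3 != "Running" {bad++} END {print bad+0}' pods.txt
```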
Check whether the Elasticsearch cluster is working properly by calling its REST API:
[root@k8s-master ~/.helm/cache/archive]# kubectl port-forward efk-es-elasticsearch-master-0 9200:9200 --namespace=elk-logging
Forwarding from 127.0.0.1:9200 -> 9200
Forwarding from [::1]:9200 -> 9200
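With the port-forward above running, overall cluster health can be queried with `curl -s localhost:9200/_cluster/health?pretty`; a healthy cluster reports "status" : "green". The snippet below extracts the status field from a sample response (the values are illustrative, not captured from this cluster):

```shell
# Sample _cluster/health response (illustrative values, not real output).
cat > health.json <<'EOF'
{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "number_of_nodes" : 7,
  "unassigned_shards" : 0
}
EOF
# Extract the status field; for a live check, pipe curl output in instead.
sed -n 's/.*"status" : "\([a-z]*\)".*/\1/p' health.json
```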
After deployment, start a cirros container and try to access the ES service from inside the cluster to confirm the deployment succeeded:
[root@k8s-master ~/.helm/cache/archive]# kubectl run cirros-$RANDOM --rm -it --image=cirros -- /bin/sh
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
/ # nslookup efk-es-elasticsearch-client.elk-logging.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: efk-es-elasticsearch-client.elk-logging.svc.cluster.local
Address 1: 10.102.193.144 efk-es-elasticsearch-client.elk-logging.svc.cluster.local
/ # curl efk-es-elasticsearch-client.elk-logging.svc.cluster.local:9200
{
"name" : "efk-es-elasticsearch-client-b5694c87-n9kqx",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "JZf_DIIMTxan7KblnRmZEg",
"version" : {
"number" : "6.7.0",
"build_flavor" : "oss",
"build_type" : "docker",
"build_hash" : "8453f77",
"build_date" : "2019-03-21T15:32:29.844721Z",
"build_snapshot" : false,
"lucene_version" : "7.7.0",
"minimum_wire_compatibility_version": "5.6.0",
"minimum_index_compatibility_version": "5.0.0"
},
"the tagline": "by You Know, for Search"
}
/ # curl efk-es-elasticsearch-client.elk-logging.svc.cluster.local:9200/_cat
/_cat/allocation
/_cat/shards
/_cat/shards/{index}
/_cat/master
####### Check whether the ES service name resolves
nslookup efk-es-elasticsearch-client.efk.svc.cluster.local
## Check that the ES service is reachable
curl efk-es-elasticsearch-client.efk.svc.cluster.local:9200
## View the ES _cat catalog
curl efk-es-elasticsearch-client.efk.svc.cluster.local:9200/_cat
## View ES nodes
curl efk-es-elasticsearch-client.efk.svc.cluster.local:9200/_cat/nodes
## View indices in ES
curl efk-es-elasticsearch-client.efk.svc.cluster.local:9200/_cat/indices
Installing and deploying Fluentd
Note: test deployments with stable/fluentd-elasticsearch ran into environment/component issues that are not yet solved, so another source is used here.
Fluentd image: quay.io/fluentd_elasticsearch/fluentd:v2.6.0
Add the kiwigrid repo:
helm repo add kiwigrid https://kiwigrid.github.io
1. Find the installation package
helm search fluentd-elasticsearch
2. Download
cd ~/.helm/cache/archive
helm fetch kiwigrid/fluentd-elasticsearch
3. Modify the configuration file
tar -zxvf fluentd-elasticsearch-0.7.2.tgz
ls
vim fluentd-elasticsearch/values.yaml
Edit values.yaml and specify the location of the elasticsearch cluster:
elasticsearch:
host: 'efk-es-elasticsearch-client.elk-logging.svc.cluster.local'
port: 9200
If you are using Prometheus monitoring rules, enable prometheusRole and the scrape annotations:
podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "24231"
service:
type: ClusterIP
ports:
- name: "monitor-agent"
port: 24231
4. Deploy
helm install kiwigrid/fluentd-elasticsearch --name efk-flu --namespace elk-logging -f fluentd-elasticsearch/values.yaml
Verify
Check whether an index has been generated by accessing Elasticsearch directly through its RESTful API:
$ kubectl run cirros1 --rm -it --image=cirros -- /bin/sh
/ # curl efk-es-elasticsearch-client.elk-logging.svc.cluster.local:9200/_cat/indices
green open logstash-2019.05.10 a2b-GyKsSLOZPqGKbCpyJw 5 1 158 0 84.2kb 460b
green open logstash-2019.05.09 CwYylNhdRf-A5UELhrzHow 5 1 71418 0 34.3mb 17.4mb
green open logstash-2019.05.12 5qRFpV46RGG_bWC4xbsyVA 5 1 34496 0 26.1mb 13.2mb
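The index check above can be automated: count how many logstash-* indices report green health in the `_cat/indices` output. The sketch below runs against the sample rows shown above:

```shell
# Count green logstash-* indices from saved _cat/indices output
# (sample rows copied from the listing above).
cat > indices.txt <<'EOF'
green open logstash-2019.05.10 a2b-GyKsSLOZPqGKbCpyJw 5 1 158 0 84.2kb 460b
green open logstash-2019.05.09 CwYylNhdRf-A5UELhrzHow 5 1 71418 0 34.3mb 17.4mb
green open logstash-2019.05.12 5qRFpV46RGG_bWC4xbsyVA 5 1 34496 0 26.1mb 13.2mb
EOF
awk '$1 == "green" && $3 ~ /^logstash-/ {n++} END {print n+0}' indices.txt
```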
Fluentd plugin installation: https://github.com/nttcom/fluent-plugin-rabbitmq
Installing and deploying Kibana
Kibana image: docker.elastic.co/kibana/kibana:6.7.0
1. Download kibana
helm fetch stable/kibana
2. Modify the configuration file values.yaml
Edit values.yaml and point elasticsearch at the cluster address:
elasticsearch.hosts: http://efk-es-elasticsearch-client.elk-logging.svc.cluster.local:9200
Change the service type so Kibana can be accessed from outside the cluster:
service:
type: NodePort
nodePort: 30049
Note: to display Kibana in Chinese, set i18n.locale: "zh-CN"
files:
kibana.yml:
## Default Kibana configuration from kibana-docker.
server.name: kibana
server.host: "0"
## For kibana < 6.6, use elasticsearch.url instead
elasticsearch.hosts: http://efk-es-elasticsearch-client.elk-logging.svc.cluster.local:9200
i18n.locale: "zh-CN"
3. Deploy
helm install stable/kibana -n efk-kibana --namespace elk-logging -f kibana/values.yaml
4. Access the service port
[root@k8s-master ~/.helm/cache/archive]# kubectl get svc -n elk-logging
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
efk-es-elasticsearch-client NodePort 10.102.193.144 <none> 9200:30920/TCP 3h46m
efk-es-elasticsearch-discovery ClusterIP None <none> 9300/TCP 3h46m
efk-flu-fluentd-elasticsearch ClusterIP 10.110.89.85 <none> 24231/TCP 54m
kibana NodePort 10.101.94.164 <none> 443:30049/TCP 39m
5. Test
Since the service works in NodePort mode, it can be accessed from outside the cluster.
Installing and deploying Logstash
Logstash image: docker.elastic.co/logstash/logstash:6.7.0
Create a PV in advance, requesting at least 5Gi.
1. Download the logstash chart
helm fetch stable/logstash
2. Modify the configuration file values.yaml
Modify image:
image:
repository: docker.elastic.co/logstash/logstash
tag: 6.7.0
Set X-Pack monitoring in Logstash (under config:):
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: "http://efk-es-elasticsearch-client.elk-logging.svc.cluster.local:9200"
Set the elasticsearch output:
elasticsearch:
host: efk-es-elasticsearch-client.elk-logging.svc.cluster.local
port: 9200
Data input: receive data from filebeat
beats { port => 5044 }
Data filter settings:
filters:
Data output settings: output to ES here
elasticsearch { hosts => ["${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"] manage_template => false index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}" }
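Putting the input, filter, and output fragments above together, a minimal full pipeline might look like the following (a sketch only, not the chart's rendered configuration; the filter block is left empty):

```shell
# Write an illustrative logstash pipeline combining the fragments above.
cat > logstash.conf <<'EOF'
input {
  beats { port => 5044 }
}
filter {
}
output {
  elasticsearch {
    hosts => ["${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
EOF
grep -c 'port => 5044' logstash.conf
```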
3. Deploy
helm install stable/logstash -n logstash --namespace elk-logging -f logstash/values.yaml
4. Test
root@chengtai:~/.helm/cache/archive# kubectl get pods -n elk-logging
NAME READY STATUS RESTARTS AGE
efk-es-elasticsearch-client-7d6f8bf48f-h7zql 1/1 Running 0 3d
efk-es-elasticsearch-client-7d6f8bf48f-pmdf4 1/1 Running 0 3d
efk-es-elasticsearch-data-0 1/1 Running 0 3d
efk-es-elasticsearch-data-1 1/1 Running 0 3d
efk-es-elasticsearch-master-0 1/1 Running 0 3d
efk-es-elasticsearch-master-1 1/1 Running 0 3d
efk-es-elasticsearch-master-2 1/1 Running 0 3d
efk-flu-fluentd-elasticsearch-545vn 1/1 Running 0 3d
efk-kibana-5488995d-w7n7m 1/1 Running 0 2d6h
filebeat-6b97c4f688-kd2l9 1/1 Running 0 6h45m
logstash-0 1/1 Running 0 19m
Note: host: logstash.elk-logging.svc.cluster.local
port: 5044
Installing and deploying Filebeat
Note: image used: docker.elastic.co/beats/filebeat:6.7.0
1. Download the chart
helm fetch stable/filebeat
2. Modify the configuration file values.yaml
First unpack the downloaded helm chart:
filebeat.modules:
  - module: system
processors:
  - add_cloud_metadata:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log
      - /var/log/messages
      - /var/log/syslog
  - type: docker
    containers.ids:
      - "*"
    processors:
      - add_kubernetes_metadata:
          in_cluster: true
      - drop_event:
          when:
            equals:
              kubernetes.container.name: "filebeat"
xpack.monitoring.enabled: true
#xpack.monitoring.elasticsearch:
#  hosts: ["efk-es-elasticsearch-client.elk-logging.svc.cluster.local:9200"]
output.elasticsearch:
  hosts: ['efk-es-elasticsearch-client.elk-logging.svc.cluster.local:9200']
#output.logstash:
#  hosts: ['logstash.elk-logging.svc.cluster.local:5044']
output.file:
  enabled: false
logging.level: info
# When a key contains a period, use this format for setting values on the command line:
# --set config."http\.enabled"=true
http.enabled: true
http.port: 5066
Remarks:
filebeat.modules: use the default modules; see https://www.elastic.co/guide/en/beats/filebeat/6.7/filebeat-modules.html
filebeat.inputs: configure the input rules
xpack.monitoring.enabled: true: enable the monitoring configuration for kibana
output.elasticsearch.hosts: ['efk-es-elasticsearch-client.elk-logging.svc.cluster.local:9200']: configure output to ES
output.logstash.hosts: ['logstash.elk-logging.svc.cluster.local:5044']: configure output to logstash
output.file.enabled: false: turn off the default file output; when configuring another output, leaving it enabled causes an error
logging.level: info: log level
3. Deploy
helm install stable/filebeat -n filebeat --namespace elk-logging -f filebeat/values.yaml
4. Verify the deployment succeeded
root@chengtai:~/.helm/cache/archive# kubectl get pods -n elk-logging
NAME                                           READY   STATUS             RESTARTS   AGE
efk-es-elasticsearch-client-7d6f8bf48f-6l62s   1/1     Running            0          32d
efk-es-elasticsearch-client-7d6f8bf48f-qtfm7   1/1     Running            0          32d
efk-es-elasticsearch-data-0                    1/1     Running            0          32d
efk-es-elasticsearch-data-1                    1/1     Running            0          32d
efk-es-elasticsearch-master-0                  1/1     Running            0          32d
efk-es-elasticsearch-master-1                  1/1     Running            0          32d
efk-es-elasticsearch-master-2                  1/1     Running            0          32d
efk-kibana-b57fd4c6d-nvfms                     1/1     Running            0          29d
elastalert-6977858ccf-r68pz                    0/1     CrashLoopBackOff   1269       15d
elastalert-elastalert-7c7957c9c6-cdkfv         1/1     Running            0          10d
filebeat-z9njz                                 2/2     Running            0          16d
metricbeat-ststg                               1/1     Running            0          14d
root@chengtai:~/.helm/cache/archive# kubectl get svc -n elk-logging
NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
efk-es-elasticsearch-client      NodePort    10.100.114.154   <none>        9200:30920/TCP   32d
efk-es-elasticsearch-discovery   ClusterIP   None             <none>        9300/TCP         32d
efk-kibana                       NodePort    10.97.169.99     <none>        443:30049/TCP    29d
elastalert                       NodePort    10.108.55.119    <none>        3030:30078/TCP   15d
filebeat-metrics                 ClusterIP   10.109.202.198   <none>        9479/TCP         16d