Deploying Elasticsearch on Kubernetes with ECK

Introduction to ECK

Elastic Cloud on Kubernetes (ECK) is built on the Kubernetes Operator pattern and automates the deployment, management, and orchestration of Elasticsearch, Kibana, and APM Server in Kubernetes clusters.

ECK is not limited to simplifying the initial deployment of Elasticsearch and Kibana on Kubernetes; it also focuses on simplifying day-2 operations, such as:

  • Manage and monitor multiple clusters
  • Easily upgrade to new cluster versions
  • Expand or shrink cluster capacity
  • Change cluster configuration
  • Dynamically resize local storage (including Elastic Local Volume, a local storage driver)
  • Perform backups

All Elasticsearch clusters launched through ECK are secured by default: TLS encryption is enabled at creation time, and the built-in elastic user is given a strong, randomly generated password.

Official website: https://www.elastic.co/cn/elastic-cloud-kubernetes

Project address: https://github.com/elastic/cloud-on-k8s

Deploy ECK

References:

https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html

https://github.com/elastic/cloud-on-k8s/tree/master/config/recipes/beats

https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-volume-claim-templates.html

https://github.com/elastic/cloud-on-k8s/tree/master/config/samples

Environment information:
Three nodes are prepared, and the master node is also configured to schedule pods (see the taint command after the node listing below):

[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   7d    v1.18.2
node01     Ready    <none>   7d    v1.18.2
node02     Ready    <none>   7d    v1.18.2
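
By default, kubeadm taints the master so that it does not schedule workloads. A minimal sketch to allow scheduling on master01, assuming the cluster was set up with kubeadm and the master carries the default node-role.kubernetes.io/master:NoSchedule taint:

# remove the NoSchedule taint from the master node so pods can be scheduled there
kubectl taint node master01 node-role.kubernetes.io/master-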

ECK deployment version: v1.1.0

Prepare NFS storage

The Elasticsearch data needs to be persisted. For simple testing you can use an emptyDir temporary volume, or you can use persistent storage such as NFS or Rook. For this test, Docker is used to temporarily run an NFS server on the master01 node to provide the storage resources required by the PVCs.

docker run -d \
  --name nfs-server \
  --privileged \
  --restart always \
  -p 2049:2049 \
  -v /nfs-share:/nfs-share \
  -e SHARED_DIRECTORY=/nfs-share \
  itsthenetwork/nfs-server-alpine:latest

Deploy nfs-client-provisioner to dynamically provision NFS storage. 192.168.93.11 is the IP address of the master01 node; with NFSv4, nfs.path can be specified as /.
Here, Helm is used to install nfs-client-provisioner from the Alibaba Cloud Helm repository.

helm repo add apphub https://apphub.aliyuncs.com

helm install nfs-client-provisioner \
  --set nfs.server=192.168.93.11 \
  --set nfs.path=/ \
  apphub/nfs-client-provisioner

View the created storageClass. The default name is nfs-client. This name will be used when deploying elasticsearch below:

[root@master01 ~]# kubectl get sc
NAME         PROVISIONER                            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   cluster.local/nfs-client-provisioner   Delete          Immediate           true                   172m

Install the NFS client on all nodes and enable the rpcbind service:

yum install -y nfs-utils
systemctl enable --now rpcbind
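
Optionally, verify from a node that the export is reachable before moving on; a quick sanity check, assuming the server address used above and an NFSv4-only export:

# mount the root of the NFSv4 export, check it, then unmount
mkdir -p /mnt/nfs-test
mount -t nfs4 192.168.93.11:/ /mnt/nfs-test
df -h /mnt/nfs-test
umount /mnt/nfs-test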

Install the ECK operator

Deploy ECK version 1.1.0:

kubectl apply -f https://download.elastic.co/downloads/eck/1.1.0/all-in-one.yaml

View the created pods

[root@master01 ~]# kubectl -n elastic-system get pods
NAME                 READY   STATUS    RESTARTS   AGE
elastic-operator-0   1/1     Running   1          17m
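
You can also follow the operator logs to confirm it started cleanly:

kubectl -n elastic-system logs -f statefulset.apps/elastic-operator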

View the created CRDs. Three CRDs were created: apmservers, elasticsearches, and kibanas.

[root@master01 ~]# kubectl get crd | grep elastic
apmservers.apm.k8s.elastic.co                                2020-04-27T16:23:08Z
elasticsearches.elasticsearch.k8s.elastic.co                 2020-04-27T16:23:08Z
kibanas.kibana.k8s.elastic.co                                2020-04-27T16:23:08Z

Deploy Elasticsearch and Kibana

Download the sample YAML files from the GitHub release; version 1.1.0 is used here:

curl -L -o cloud-on-k8s-1.1.0.tar.gz https://github.com/elastic/cloud-on-k8s/archive/1.1.0.tar.gz
tar -zxf cloud-on-k8s-1.1.0.tar.gz
cd cloud-on-k8s-1.1.0/config/recipes/beats/

Create namespace

kubectl apply -f 0_ns.yaml

Deploy Elasticsearch and Kibana. count: 3 specifies three Elasticsearch nodes; you can also start with a single node and expand the capacity later (a scaling sketch follows the apply step below). The storageClassName is set to nfs-client, and the http section is added to set the service type to NodePort.

$ cat 1_monitor.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: monitor
  namespace: beats
spec:
  version: 7.6.2
  nodeSets:
  - name: mdi
    count: 3
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: nfs-client
  http:
    service:
      spec:
        type: NodePort
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: monitor
  namespace: beats
spec:
  version: 7.6.2
  count: 1
  elasticsearchRef:
    name: "monitor"
  http:
    service:
      spec:
        type: NodePort

Apply the YAML file to deploy Elasticsearch and Kibana:

kubectl apply -f 1_monitor.yaml
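
Because the operator reconciles the manifest, scaling later only requires changing count in the nodeSet and re-applying; a minimal sketch:

# edit count under the mdi nodeSet in 1_monitor.yaml (e.g. 3 -> 5), then:
kubectl apply -f 1_monitor.yaml
# watch the operator roll out the additional nodes
kubectl -n beats get elasticsearch monitor -w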

If the images cannot be pulled from docker.elastic.co, you can pull them from Docker Hub manually and re-tag them:

docker pull elastic/elasticsearch:7.6.2
docker pull elastic/kibana:7.6.2
docker tag elastic/elasticsearch:7.6.2 docker.elastic.co/elasticsearch/elasticsearch:7.6.2
docker tag elastic/kibana:7.6.2 docker.elastic.co/kibana/kibana:7.6.2

View the created Elasticsearch and Kibana resources, including health status, version, and number of nodes:

[root@master01 ~]# kubectl get elasticsearch
NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    3       7.6.2     Ready   77m

[root@master01 ~]# kubectl get kibana
NAME         HEALTH   NODES   VERSION   AGE
quickstart   green    1       7.6.2     137m

View the created pods:

[root@master01 ~]# kubectl -n beats get pods
NAME                          READY   STATUS    RESTARTS   AGE
monitor-es-mdi-0              1/1     Running   0          109s
monitor-es-mdi-1              1/1     Running   0          9m
monitor-es-mdi-2              1/1     Running   0          3m26s
monitor-kb-54cbdf6b8c-jklqm   1/1     Running   0          9m

View the created PVCs and PVs:

[root@master01 ~]# kubectl -n beats get pvc
NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
elasticsearch-data-monitor-es-mdi-0   Bound    pvc-882be3e2-b752-474b-abea-7827b492d83d   10Gi       RWO            nfs-client     3m33s
elasticsearch-data-monitor-es-mdi-1   Bound    pvc-8e6ed97e-7524-47f5-b02c-1ff0d2af33af   10Gi       RWO            nfs-client     3m33s
elasticsearch-data-monitor-es-mdi-2   Bound    pvc-31b5f80d-8fbd-4762-ab69-650eb6619a2e   10Gi       RWO            nfs-client     3m33s

[root@master01 ~]# kubectl -n beats get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS   REASON   AGE
pvc-31b5f80d-8fbd-4762-ab69-650eb6619a2e   50Gi       RWO            Delete           Bound    beats/elasticsearch-data-monitor-es-mdi-2   nfs-client              3m35s
pvc-882be3e2-b752-474b-abea-7827b492d83d   50Gi       RWO            Delete           Bound    beats/elasticsearch-data-monitor-es-mdi-0   nfs-client              3m35s
pvc-8e6ed97e-7524-47f5-b02c-1ff0d2af33af   50Gi       RWO            Delete           Bound    beats/elasticsearch-data-monitor-es-mdi-1   nfs-client              3m35s

The actual data is stored in the /nfs-share directory on the master01 node:

[root@master01 ~]# tree /nfs-share/ -L 2
/nfs-share/
├── beats-elasticsearch-data-monitor-es-mdi-0-pvc-250c8eef-4b7e-4230-bd4f-36b911a1d61b
│   └── nodes
├── beats-elasticsearch-data-monitor-es-mdi-1-pvc-c1a538df-92df-4a8e-9b7b-fceb7d395eab
│   └── nodes
└── beats-elasticsearch-data-monitor-es-mdi-2-pvc-dc21c1ba-4a17-4492-9890-df795c06213a
    └── nodes

Check the created services. During deployment, the Elasticsearch and Kibana service types were changed to NodePort to allow access from outside the cluster.

[root@master01 ~]# kubectl -n beats get svc
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
monitor-es-http   NodePort    10.96.82.186    <none>        9200:31575/TCP   9m36s
monitor-es-mdi    ClusterIP   None            <none>        <none>           9m34s
monitor-kb-http   NodePort    10.97.213.119   <none>        5601:30878/TCP   9m35s
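
If you need the NodePort values in a script, they can be extracted directly from the services, for example:

kubectl -n beats get svc monitor-es-http -o jsonpath='{.spec.ports[0].nodePort}'
kubectl -n beats get svc monitor-kb-http -o jsonpath='{.spec.ports[0].nodePort}'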

By default, elasticsearch has authentication enabled. Get the password of the elastic user:

PASSWORD=$(kubectl -n beats get secret monitor-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)                          

echo $PASSWORD

Access Elasticsearch

Access Elasticsearch from a browser:

https://192.168.93.11:31575/
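
The same endpoint can also be queried from the command line with the password retrieved above (31575 is the NodePort from the service listing; -k skips verification of the self-signed certificate):

curl -u "elastic:$PASSWORD" -k https://192.168.93.11:31575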

Or access the elasticsearch endpoint from within the Kubernetes cluster:

[root@master01 ~]# kubectl -n beats run -it --rm centos --image=centos -- sh
sh-4.4#
sh-4.4# PASSWORD=gf4mgr5fsbstwx76b8zl8m2g
sh-4.4# curl -u "elastic:$PASSWORD" -k "https://monitor-es-http:9200"
{
  "name" : "quickstart-es-default-2",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "mrDgyhp7QWa7iVuY8Hx6gA",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Access Kibana.
Access Kibana in the browser. The username and password are the same as for Elasticsearch (the elastic user). Choose "Explore on my own"; you can see that no index has been created yet.

https://192.168.93.11:30878/

Deploy filebeat

Use the Docker Hub image and change the version to 7.6.2:

sed -i 's#docker.elastic.co/beats/filebeat:7.6.0#elastic/filebeat:7.6.2#g' 2_filebeat-kubernetes.yaml

kubectl apply -f 2_filebeat-kubernetes.yaml

View created pods

[root@master01 beats]# kubectl -n beats get pods -l k8s-app=filebeat
NAME             READY   STATUS    RESTARTS   AGE
filebeat-dctrz   1/1     Running   0          9m32s
filebeat-rgldp   1/1     Running   0          9m32s
filebeat-srqf4   1/1     Running   0          9m32s
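
To confirm that Filebeat is shipping events, tail the logs of the DaemonSet pods (using the same label as above):

kubectl -n beats logs -l k8s-app=filebeat --tail=20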

If the image cannot be pulled, you can pull it manually:

docker pull elastic/filebeat:7.6.2
docker tag elastic/filebeat:7.6.2 docker.elastic.co/beats/filebeat:7.6.2

docker pull elastic/metricbeat:7.6.2
docker tag elastic/metricbeat:7.6.2 docker.elastic.co/beats/metricbeat:7.6.2

Visit Kibana. You can now find the filebeat index: fill in the index pattern, select @timestamp as the time field, and create the index pattern.
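
The new index can also be confirmed directly against the Elasticsearch API, reusing the NodePort and elastic password from earlier:

curl -u "elastic:$PASSWORD" -k "https://192.168.93.11:31575/_cat/indices/filebeat-*?v"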


View collected logs


Deploy metricbeat

sed -i 's#docker.elastic.co/beats/metricbeat:7.6.0#elastic/metricbeat:7.6.2#g' 3_metricbeat-kubernetes.yaml
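
Then apply the manifest, just as with Filebeat:

kubectl apply -f 3_metricbeat-kubernetes.yaml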

View created pods

[root@master01 beats]# kubectl -n beats get pods -l  k8s-app=metricbeat
NAME                          READY   STATUS    RESTARTS   AGE
metricbeat-6956d987bb-c96nq   1/1     Running   0          76s
metricbeat-6h42f              1/1     Running   0          76s
metricbeat-dzkxq              1/1     Running   0          76s
metricbeat-lffds              1/1     Running   0          76s

At this point, when you visit Kibana, you can see that a metricbeat index has been added as well.
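
As with Filebeat, the metricbeat index can also be verified via the API:

curl -u "elastic:$PASSWORD" -k "https://192.168.93.11:31575/_cat/indices/metricbeat-*?v"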
