"Cloud native" Elasticsearch + Kibana on k8s explanation and actual operation

1. Overview

Elasticsearch is a Lucene-based search engine. It provides a distributed, multi-tenant capable full-text search engine with an HTTP web interface and schema-free JSON documents. Elasticsearch is developed in Java and released as open source under the terms of the Apache license.

Official document: https://www.elastic.co/guide/en/elasticsearch/reference/master/getting-started.html
GitHub: https://github.com/elastic/elasticsearch

You can also refer to my article: Distributed real-time search and analysis engine - Elasticsearch

2. Elasticsearch Orchestration Deployment

Address: https://artifacthub.io/packages/helm/elastic/elasticsearch

1) Add the Helm repo and pull the chart package

helm repo add elastic https://helm.elastic.co
helm pull elastic/elasticsearch --version 7.17.3
tar -xf elasticsearch-7.17.3.tgz

2) Build the image

The download address of each version of Elasticsearch: https://www.elastic.co/cn/downloads/past-releases#elasticsearch
I will not rebuild the image here. If you don't know how to build an image, you can leave me a message or send a private message. Here we simply push the upstream image to our local Harbor to speed up image pulls.

docker pull docker.elastic.co/elasticsearch/elasticsearch:7.17.3

docker tag docker.elastic.co/elasticsearch/elasticsearch:7.17.3  myharbor.com/bigdata/elasticsearch:7.17.3

# Push the image
docker push myharbor.com/bigdata/elasticsearch:7.17.3

# Remove the local image
docker rmi myharbor.com/bigdata/elasticsearch:7.17.3
crictl rmi myharbor.com/bigdata/elasticsearch:7.17.3

3) Modify the YAML orchestration files

  • elasticsearch/values.yaml
image: "myharbor.com/bigdata/elasticsearch"

...

...
### Remove these lines
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 30Gi
....

persistence:
  enabled: true
  labels:
    # Add default labels for the volumeClaimTemplate of the StatefulSet
    enabled: false
  annotations: {}
  accessModes:
    - ReadWriteOnce
  size: 1Gi
  storageClass: "elasticsearch-local-storage"
  local:
  - name: elasticsearch-0
    host: "local-168-182-110"
    path: "/opt/bigdata/servers/elasticsearch/data/data1"
  - name: elasticsearch-1
    host: "local-168-182-111"
    path: "/opt/bigdata/servers/elasticsearch/data/data1"
  - name: elasticsearch-2
    host: "local-168-182-112"
    path: "/opt/bigdata/servers/elasticsearch/data/data1"

...

protocol: http
httpPort: 9200
transportPort: 9300
service:
  enabled: true
  type: NodePort
  nodePort: 30920
  httpPortName: http
  • elasticsearch/templates/storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: {{ .Values.persistence.storageClass }}
provisioner: kubernetes.io/no-provisioner
  • elasticsearch/templates/pv.yaml
{{- range .Values.persistence.local }}
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .name }}
  labels:
    name: {{ .name }}
spec:
  storageClassName: {{ $.Values.persistence.storageClass }}
  capacity:
    storage: {{ $.Values.persistence.size }}
  accessModes:
  {{- range $.Values.persistence.accessModes }}
    - {{ . | quote }}
  {{- end }}
  local:
    path: {{ .path }}
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - {{ .host }}
---
{{- end }}
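For reference, with the values.yaml above, each entry under persistence.local renders to one PersistentVolume pinned to a single node via nodeAffinity; the first entry (elasticsearch-0 on local-168-182-110) would render roughly as:

```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-0
  labels:
    name: elasticsearch-0
spec:
  storageClassName: elasticsearch-local-storage
  capacity:
    storage: 1Gi
  accessModes:
    - "ReadWriteOnce"
  local:
    path: /opt/bigdata/servers/elasticsearch/data/data1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - local-168-182-110
```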
  • elasticsearch/templates/statefulset.yaml
spec:
  volumeClaimTemplates:
    spec:
# Remove this line:
{{ toYaml .Values.volumeClaimTemplate | indent 6 }}

# Add the following:
      accessModes:
      {{- range .Values.persistence.accessModes }}
      - {{ . | quote }}
      {{- end }}
      resources:
        requests:
          storage: {{ .Values.persistence.size | quote }}
    {{- if .Values.persistence.storageClass }}
    {{- if (eq "-" .Values.persistence.storageClass) }}
      storageClassName: ""
    {{- else }}
      storageClassName: "{{ .Values.persistence.storageClass }}"
    {{- end }}
    {{- end }}
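Before installing, it may be worth rendering the edited chart offline to confirm the modified templates still produce valid YAML (a sketch assuming helm v3 and kubectl are on the PATH; the output file name is just an example):

```shell
# Render all templates locally; any Go-template syntax error aborts here
helm template my-elasticsearch ./elasticsearch > /tmp/es-rendered.yaml

# Validate the rendered manifests client-side without creating anything
kubectl apply --dry-run=client -f /tmp/es-rendered.yaml
```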

4) Start deployment

# Create the local storage directory first
mkdir -p /opt/bigdata/servers/elasticsearch/data/data1
chmod -R 777 /opt/bigdata/servers/elasticsearch/data/data1
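Note that pv.yaml pins one volume to each of the three nodes, so the data directory has to exist on every node, not only the one you are logged in to. A sketch, assuming passwordless ssh between the nodes (the same setup the uninstall step below relies on):

```shell
# Create the same data directory on the other two nodes
for node in local-168-182-111 local-168-182-112; do
  ssh "$node" "mkdir -p /opt/bigdata/servers/elasticsearch/data/data1 \
    && chmod -R 777 /opt/bigdata/servers/elasticsearch/data/data1"
done
```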

helm install my-elasticsearch ./elasticsearch -n elasticsearch --create-namespace
# Check the deployment
helm get notes my-elasticsearch -n elasticsearch
kubectl get pods,svc -n elasticsearch -owide

NOTES

NAME: my-elasticsearch
LAST DEPLOYED: Wed Oct 12 23:47:17 2022
NAMESPACE: elasticsearch
STATUS: deployed
REVISION: 1
NOTES:
1. Watch all cluster members come up.
  $ kubectl get pods --namespace=elasticsearch -l app=elasticsearch-master -w
2. Test cluster health using Helm test.
  $ helm --namespace=elasticsearch test my-elasticsearch

5) Test verification

curl http://192.168.182.110:30920/
curl http://192.168.182.110:30920/_cat/nodes
curl http://192.168.182.110:30920/_cat/health?pretty
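Beyond the health checks above, a quick way to confirm the cluster actually accepts data is to index and query a document through the same NodePort (the index name test-index is just an example):

```shell
# Index a document (the index is created on the fly)
curl -s -X POST -H 'Content-Type: application/json' \
  http://192.168.182.110:30920/test-index/_doc \
  -d '{"title": "hello elasticsearch"}'

# Refresh so the document is searchable immediately
curl -s -X POST http://192.168.182.110:30920/test-index/_refresh

# Full-text search for it
curl -s 'http://192.168.182.110:30920/test-index/_search?q=title:hello&pretty'
```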

6) elasticsearch-head

elasticsearch-head GitHub download address: https://github.com/mobz/elasticsearch-head
Chrome elasticsearch-head plug-in:

Link: https://pan.baidu.com/s/1kYcTjBDPmSWVzsku2hEW7w?pwd=67v4
Extraction code: 67v4

7) Uninstall

helm uninstall my-elasticsearch -n elasticsearch
kubectl delete ns elasticsearch --force

rm -fr /opt/bigdata/servers/elasticsearch/data/data1/*
ssh local-168-182-111 "rm -fr /opt/bigdata/servers/elasticsearch/data/data1/*"
ssh local-168-182-112 "rm -fr /opt/bigdata/servers/elasticsearch/data/data1/*"

docker rmi myharbor.com/bigdata/elasticsearch:7.17.3
crictl rmi myharbor.com/bigdata/elasticsearch:7.17.3
ssh local-168-182-111 "crictl rmi myharbor.com/bigdata/elasticsearch:7.17.3"
ssh local-168-182-112 "crictl rmi myharbor.com/bigdata/elasticsearch:7.17.3"

3. Kibana Orchestration Deployment

Address: https://artifacthub.io/packages/helm/bitnami/kibana?modal=install

1) Add the Helm repo and pull the chart package

helm repo add bitnami https://charts.bitnami.com/bitnami
helm pull bitnami/kibana --version 10.2.6
tar -xf kibana-10.2.6.tgz

2) Build the image

The image is not rebuilt here either; it is simply pushed to the local Harbor to speed up pulls. If you are not sure how to build the image, you can leave a message or send a private message. [Note] The Kibana version needs to match the ES version; if the versions differ, there may be incompatibilities.

docker pull docker.io/bitnami/kibana:7.17.3
docker tag docker.io/bitnami/kibana:7.17.3 myharbor.com/bigdata/kibana:7.17.3

# Push the image
docker push myharbor.com/bigdata/kibana:7.17.3

# Remove the local image
docker rmi myharbor.com/bigdata/kibana:7.17.3
crictl rmi myharbor.com/bigdata/kibana:7.17.3

3) Modify the YAML orchestration files

  • kibana/values.yaml
image:
  registry: myharbor.com
  repository: bigdata/kibana
  tag: 8.4.3-debian-11-r1

...

replicaCount: 1

...

persistence:
  enabled: true
  accessModes:
    - ReadWriteOnce
  size: 1Gi
  storageClass: "kibana-local-storage"
  local:
  - name: kibana-0
    host: "local-168-182-111"
    path: "/opt/bigdata/servers/kibana/data/data1"

...

service:
  ports:
    http: 5601
  type: NodePort
  nodePorts:
    http: "30601"

...

elasticsearch:
  hosts:
    - elasticsearch-master.elasticsearch
  port: "9200"
  • kibana/templates/storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: {{ .Values.persistence.storageClass }}
provisioner: kubernetes.io/no-provisioner
  • kibana/templates/pv.yaml
{{- range .Values.persistence.local }}
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .name }}
  labels:
    name: {{ .name }}
spec:
  storageClassName: {{ $.Values.persistence.storageClass }}
  capacity:
    storage: {{ $.Values.persistence.size }}
  accessModes:
  {{- range $.Values.persistence.accessModes }}
    - {{ . | quote }}
  {{- end }}
  local:
    path: {{ .path }}
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - {{ .host }}
---
{{- end }}

4) Start deployment

# Create the local storage directory first
mkdir -p /opt/bigdata/servers/kibana/data/data1
chmod -R 777 /opt/bigdata/servers/kibana/data/data1
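Since the single Kibana PV above is pinned to local-168-182-111, the directory has to exist on that node; if you are working from another machine, create it over ssh (assuming passwordless ssh, as used elsewhere in this article):

```shell
ssh local-168-182-111 "mkdir -p /opt/bigdata/servers/kibana/data/data1 \
  && chmod -R 777 /opt/bigdata/servers/kibana/data/data1"
```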

helm install my-kibana ./kibana -n kibana --create-namespace
# Check the deployment
helm get notes my-kibana -n kibana 
kubectl get pods,svc -n kibana -owide

NOTES

NAME: my-kibana
LAST DEPLOYED: Thu Oct 13 22:43:30 2022
NAMESPACE: kibana
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kibana
CHART VERSION: 10.2.6
APP VERSION: 8.4.3

** Please be patient while the chart is being deployed **

######################################################################################################
### ERROR: You did not provide the Elasticsearch external host or port in your 'helm install' call ###
######################################################################################################

Complete your Kibana deployment by running:

  helm upgrade --namespace kibana my-kibana my-repo/kibana \
    --set elasticsearch.hosts[0]=YOUR_ES_HOST,elasticsearch.port=YOUR_ES_PORT

Replacing "YOUR_ES_HOST" and "YOUR_ES_PORT" placeholders by the proper values of your Elasticsearch deployment.

5) Test verification

Web UI: http://192.168.182.111:30601
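If a browser check is inconvenient, Kibana's built-in status endpoint answers on the same NodePort:

```shell
# /api/status reports Kibana's own health as JSON; show just the start of it
curl -s http://192.168.182.111:30601/api/status | head -c 200
```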

6) Uninstall

helm uninstall my-kibana -n kibana
kubectl delete ns kibana --force

ssh local-168-182-111 "rm -fr /opt/bigdata/servers/kibana/data/data1/*"

docker rmi myharbor.com/bigdata/kibana:8.4.3-debian-11-r1
crictl rmi myharbor.com/bigdata/kibana:8.4.3-debian-11-r1

elasticsearch-on-k8s download address: https://gitee.com/hadoop-bigdata/elasticsearch-on-k8s
kibana-on-k8s download address: https://gitee.com/hadoop-bigdata/kibana-on-k8s

That wraps up the explanation and hands-on deployment of Elasticsearch + Kibana on k8s. Only simple queries are demonstrated here; for more operations, refer to the official documentation. If you have any questions, please leave me a message, and follow-up articles will continue to be updated.

Origin blog.csdn.net/dyuan134/article/details/130221889