Helm and the EFK logging system

helm: packages a set of resource manifests into a single chart; charts are stored in a chart repository (warehouse)

chart + helm --> Tiller --> API server --> Kubernetes cluster

chart--->release

 

Helm core terms

chart: a Helm package; it defines the list of deployment manifests, including the relationships between resources and which images to use, but it does not contain the images themselves

repository: a chart warehouse; stores charts and is served as an HTTP/HTTPS server

release: an instance of a chart deployed in the target cluster

chart + config (values.yaml) --> release

 

Client/server architecture:

helm: the client; manages charts stored locally and talks to the Tiller server to send charts and to install, query, and uninstall releases

Tiller: the server; listens for helm requests, receives the chart and config sent by helm, and combines them to generate a release

 

Deploying helm

https://github.com/helm/helm/releases/tag/v2.9.1

 mkdir helm && cd helm
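The release page above only lists the assets; the Linux amd64 tarball can be fetched directly, e.g. (URL as published for the v2.9.1 release):

wget https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz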

tar xf helm-v2.9.1-linux-amd64.tar.gz && cd linux-amd64/

mv helm /usr/bin/

helm --help

 

Deploying Tiller

Tiller is recommended to be deployed inside the Kubernetes cluster itself.

helm reads ~/.kube/config (the kubectl config file) to reach the cluster; check that it exists:  ls ~/.kube/config

Tiller needs a service account named tiller with broad permissions; bind it to the cluster-admin ClusterRole via a ClusterRoleBinding.

Clusters installed with kubeadm have RBAC enforced.

RBAC example: https://github.com/helm/helm/blob/master/docs/rbac.md

vim tiller-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

 

kubectl apply -f tiller-rbac.yaml

Initialize helm

export NO_PROXY="172.20.0.0/16,127.0.0.0/8"   # skip any proxy for cluster-internal addresses

helm init --service-account tiller --upgrade --tiller-image=registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.11.0

kubectl get pods -n kube-system   # confirm the tiller pod is running
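Once the tiller pod is ready, the client connection can also be checked; helm version prints both the client and the server (Tiller) versions:

helm version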

vim ~/.helm/repository/repositories.yaml

url: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts   # change the stable repo URL to a reachable mirror

 

Using helm

source <(helm completion bash)   # enable helm command autocompletion

helm repo update   # refresh the repository index

https://helm.sh   official website

https://hub.kubeapps.com   official list of available charts

helm repo list   # list the configured repositories

helm search   # list the available charts

helm inspect stable/mysql   # show the details of a chart (repo-name/chart-name)

 

Installing a chart

helm install --name mem1 stable/memcached   # install a release named mem1 (repo-name/chart-name)

 

Verification

kubectl get pods --namespace default -l "app=mem1-memcached" -o jsonpath="{.items[0].metadata.name}"

or simply: kubectl get pods

 

Uninstall a release

helm delete --help

helm delete mem1

 

View releases

helm list

helm list --all   # also show deleted and failed releases

 

Rolling updates

helm upgrade
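For example, to upgrade the mem1 release installed above with a modified values file (file name assumed):

helm upgrade -f values.yaml mem1 stable/memcached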

 

Rollback

helm rollback
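helm rollback takes a release name and a revision number; revisions can be listed first (mem1 used for illustration):

helm history mem1   # list the revisions of the release
helm rollback mem1 1   # roll back to revision 1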

 

Download charts

helm fetch stable/mysql

 

Release management: install, delete, upgrade/rollback, list, history, status

Chart management: fetch, create, inspect, verify, package

 

Each time a chart is installed, it is downloaded to the ~/.helm/cache/archive/ directory.

 

values.yaml defines the chart's default values; copy it, edit the values you want to change, and pass the file at install time for the changes to take effect:

cp values.yaml  ~

vim values.yaml

replicaCount: 2   # change the number of replicas to deploy

helm install --name mem3 stable/memcached -f values.yaml

For redis, for example:

persistence:
  enabled: false                   # disable persistent storage
  annotations:
    prometheus.io/scrape: "true"   # allow Prometheus to scrape metrics
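The edited file is then passed at install time, e.g. (release name redis1 is illustrative):

helm install --name redis1 stable/redis -f values.yaml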

  

Introduction to charts

https://helm.sh/docs/developing_charts/#charts

helm dep up foochart   # download the charts that foochart depends on

Templates and values

Template syntax: https://golang.org/pkg/text/template/

Template example:

apiVersion: v1
kind: ReplicationController
metadata:
  name: deis-database
  namespace: deis
  labels:
    app.kubernetes.io/managed-by: foothold
spec:
  replicas: 1
  selector:
    app.kubernetes.io/name: deis-database
  template:
    metadata:
      labels:
        app.kubernetes.io/name: deis-database
    spec:
      serviceAccount: deis-database
      containers:
        - name: deis-database
          image: {{.Values.imageRegistry}}/postgres:{{.Values.dockerTag}}   # .Values refers to fields defined in values.yaml
          imagePullPolicy: {{.Values.pullPolicy}}
          ports:
            - containerPort: 5432
          env:
            - name: DATABASE_STORAGE
              value: {{default "minio" .Values.storage}}   # if values.yaml does not provide the storage key, fall back to this default

 

The corresponding values.yaml:

imageRegistry: "quay.io/deis"
dockerTag: "latest"
pullPolicy: "Always"
storage: "s3"

                                       

Built-in variables

Predefined Values
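A few of the values Helm predefines for every template (standard built-ins):

{{ .Release.Name }}        # name of the release
{{ .Release.Namespace }}   # namespace the release is installed into
{{ .Chart.Name }}          # taken from Chart.yaml
{{ .Values }}              # values.yaml merged with user-supplied overrides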

 

You can also use a custom values file.

The wordpress chart, for example, is structured as follows:

wordpress/
  Chart.yaml
  requirements.yaml
  # ...
  charts/
    apache/
      Chart.yaml
      # ...
    mysql/
      Chart.yaml
      # ...

 

charts/   # holds the charts that wordpress depends on

helm install --values=myvals.yaml wordpress   # install the wordpress chart with a custom values file

Creating a custom chart

helm create myapp   # generates a myapp directory scaffolded with chart files

The myapp chart is structured as follows:

myapp/
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── ingress.yaml
│   ├── NOTES.txt
│   └── service.yaml
└── values.yaml

 

vim Chart.yaml

apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes myapp chart
name: myapp
version: 0.0.1
maintainers:
  - name: mageedu
    email: [email protected]
    url: https://www.baidu.com

 

Dependencies on other charts are declared in requirements.yaml:

vim requirements.yaml
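A minimal requirements.yaml, following the format in the Helm docs (chart names, versions, and repository URLs are illustrative):

dependencies:
  - name: apache
    version: 1.2.3
    repository: http://example.com/charts
  - name: mysql
    version: 3.2.1
    repository: http://another.example.com/charts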

 

cd templates/
NOTES.txt      # the information printed after a release is installed
_helpers.tpl   # helper template definitions
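For reference, helpers in _helpers.tpl are named templates along these lines (a sketch close to, but not necessarily identical to, what helm create generates):

{{- define "myapp.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}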

 

vim values.yaml

replicaCount: 2   # the fields below are the ones changed
image:
  repository: ikubernetes/myapp   # repository only; the tag is set separately
  tag: v1
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi

 

Check the template syntax

helm lint ../myapp

 

Package the chart

helm package --help

helm package myapp/   # packages the myapp directory; with no output path given, myapp-0.0.1.tgz is written to the current directory

 

Serve a chart repository from the directory where your charts are stored:

helm serve
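By default helm serve listens on 127.0.0.1:8879, and helm v2 preconfigures a repo named local that points there; the address and chart directory can be overridden (values illustrative):

helm serve --address 0.0.0.0:8879 --repo-path ./charts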

 

Open another terminal and test:

helm search myapp

 

Modify NOTES.txt so that the correct information is displayed after the release is installed.

 

Test installation with helm

helm install --name myapp local/myapp   # release name, then repo-name/chart-name

 

helm delete --purge myapp1   # removes the release's pods and the release record itself, so the release name can be reused

 

Adding repositories

https://hub.kubeapps.com/charts/incubator

helm repo  add --help

helm repo  add stable1 https://kubernetes-charts.storage.googleapis.com

helm repo  add incubator https://storage.googleapis.com/kubernetes-charts-incubator

helm repo remove incubator   # remove the incubator repository

 

Deploying the EFK logging system components

Deploy elasticsearch

elasticsearch architecture: data <--> master <-- client

cd helm/

docker pull docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.2   # pull the image in advance

helm repo add incubator https://storage.googleapis.com/kubernetes-charts-incubator

helm fetch incubator/elasticsearch

tar xf elasticsearch-1.10.2.tgz

cd elasticsearch/

vim values.yaml

appVersion: "6.4.2"
image:
  repository: "docker.elastic.co/elasticsearch/elasticsearch-oss"
  tag: "6.4.2"                  # keep the version consistent with kibana
cluster:
  name: "elasticsearch"
  config:
    MINIMUM_MASTER_NODES: "1"   # minimum number of master nodes
client:
  name: client
  replicas: 1                   # run 1 replica
master:
  name: master
  exposeHttp: false
  persistence:
    enabled: false
  replicas: 1
data:
  name: data
  exposeHttp: false
  persistence:
    enabled: false              # disable persistent storage
  replicas: 1                   # adjust the pod count to the nodes available

 

kubectl create namespace efk

helm install --name els6 --namespace=efk -f values.yaml incubator/elasticsearch

 

The cluster is reached through the els6-elasticsearch-client.efk.svc.cluster.local service.

 

Test

kubectl run cirros-$RANDOM --rm -it --image=cirros -- /bin/sh   # start a throwaway pod with an interactive shell

curl els6-elasticsearch-client.efk.svc.cluster.local:9200

curl els6-elasticsearch-client.efk.svc.cluster.local:9200/_cat/nodes   # see how many nodes are in the cluster

curl els6-elasticsearch-client.efk.svc.cluster.local:9200/_cat/indices   # see which indices have been generated

 

Deploy fluentd-elasticsearch (the log collection tool)

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/fluentd-elasticsearch:v2.3.2   # pull the image in advance

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/fluentd-elasticsearch:v2.3.2 gcr.io/google-containers/fluentd-elasticsearch:v2.3.2   # retag to the image name the chart expects

 

helm  fetch incubator/fluentd-elasticsearch

tar xf fluentd-elasticsearch-2.0.7.tgz && cd fluentd-elasticsearch/

vim values.yaml

elasticsearch:
  host: 'els6-elasticsearch-client.efk.svc.cluster.local'   # point at the elasticsearch client service
podAnnotations:                  # let Prometheus scrape the pods
  prometheus.io/scrape: "true"
  prometheus.io/port: "24231"
service:                         # expose the monitoring port
  type: ClusterIP
  ports:
    - name: "monitor-agent"
      port: 24231
tolerations:                     # tolerate the master taint so master-node logs are collected too
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule

helm install --name flu1 --namespace=efk -f values.yaml incubator/fluentd-elasticsearch

 

Deploy kibana

helm fetch stable/kibana

tar xf kibana-0.2.2.tgz

cd kibana/

vim values.yaml

image:
  repository: "docker.elastic.co/kibana/kibana-oss"
  tag: "6.4.2"   # keep the version consistent with elasticsearch
env:
  ELASTICSEARCH_URL: http://els6-elasticsearch-client.efk.svc.cluster.local:9200
service:
  type: NodePort

 

helm install --name kib1 --namespace=efk -f values.yaml stable/kibana   # release name, target namespace, custom values file, repo-name/chart-name

 

Test

kubectl get svc -n efk

kib1-kibana   NodePort  10.105.194.250   <none>  443:30240/TCP

http://192.168.81.10:30240

In Kibana, set the index pattern to logstash-*, choose @timestamp as the time filter field, and finally click Create index pattern.
