Kubernetes Helm and its functional components

Helm is a package management tool. Just as yum solves rpm package dependencies, Helm solves the problem of installing services on Kubernetes: you download the desired charts from a repository and install them, customizing the generated YAML by overriding chart properties.

One. Introduction to Helm

Before Helm, deploying an application to Kubernetes meant creating the Deployment, Service, and other resources one by one, which is tedious. As projects are split into microservices, deploying and managing complex containerized applications becomes even harder. Helm packages applications and supports release management, which greatly simplifies the deployment and management of Kubernetes applications.

The essence of Helm is to make the management of Kubernetes applications (Deployment, Service, etc.) configurable and dynamic: it generates the Kubernetes resource manifests (deployment.yaml, service.yaml) from templates and then calls kubectl to apply them to the cluster.


Helm is the official package manager for Kubernetes, similar to YUM: it packages the whole deployment process of an application into one unit. Helm has two important concepts: chart and release.

  • A chart is a collection of the information needed to create an application: configuration templates, parameter definitions, dependencies, and documentation for the various Kubernetes objects. A chart is a self-contained logical unit of application deployment; think of it as a software package in apt or yum (a typical chart layout is sketched just below this list).
  • A release is a running instance of a chart and represents a deployed application. Installing a chart into a Kubernetes cluster creates a release; the same chart can be installed into the same cluster multiple times, and each installation is a new release.
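For reference, this is roughly the layout that helm create scaffolds (shown only for illustration; the minimal chart built later in this post needs nothing more than Chart.yaml and templates/):

helm create mychart
# mychart/
# ├── Chart.yaml        # chart name, version, description
# ├── values.yaml       # default configuration values
# ├── charts/           # dependent sub-charts
# └── templates/        # templated Kubernetes manifests
#     ├── deployment.yaml
#     ├── service.yaml
#     └── _helpers.tpl  # shared template helpers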

Helm consists of two components: the Helm client and the Tiller server.
The Helm client is responsible for creating and managing charts and releases and for interacting with Tiller.

The Tiller server runs inside the Kubernetes cluster; it processes requests from the Helm client and interacts with the Kubernetes API server.

Two. Helm Deployment

1. Download the helm command line tool

Download the helm command-line tool to /usr/local/install-k8s on the master node (node1). Version 2.13.1 is used here:

ntpdate ntp1.aliyun.com   # sync the clock first
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz
tar -zxvf helm-v2.13.1-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/
chmod a+x /usr/local/bin/helm


2. Install the server-side Tiller

To install the server-side Tiller, the machine also needs the kubectl tool and a kubeconfig file, so that kubectl can reach the apiserver and work normally. On node1 here, kubectl is already configured.

Because the Kubernetes API server has RBAC access control enabled, you need to create a ServiceAccount named tiller for Tiller to use and bind a suitable role to it. For details, see Role-based Access Control (https://docs.helm.sh/using_helm/#role-based-access-control) in the Helm documentation. For simplicity, bind the built-in ClusterRole cluster-admin to it directly.

Create rbac-config.yaml file:
vim rbac-config.yaml

apiVersion: v1
kind: ServiceAccount # create the ServiceAccount
metadata:  
  name: tiller  
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding # bind a cluster role to the ServiceAccount
metadata:  
  name: tiller
roleRef:  
  apiGroup: rbac.authorization.k8s.io  
  kind: ClusterRole  
  name: cluster-admin # the built-in cluster administrator role
subjects:  
  - kind: ServiceAccount    
    name: tiller    
    namespace: kube-system
kubectl create -f rbac-config.yaml

After applying the YAML file, initialize Helm with the helm init --service-account tiller --skip-refresh command:

helm init --service-account tiller --skip-refresh


If the Tiller image cannot be pulled, download it manually and import it into Docker on all three nodes:

[root@k8s-master01 helm]# kubectl describe pod tiller-deploy-58565b5464-brcbb -n kube-system

gcr.io/kubernetes-helm/tiller:v2.13.1 cannot be pulled directly, so import it on all three nodes with docker load -i:

[root@k8s-master01 helm]# docker load -i helm-tiller.tar 
[root@k8s-master01 helm]# kubectl get pod -n kube-system
[root@k8s-master01 helm]# helm init --service-account tiller --skip-refresh
[root@k8s-master01 helm]# helm version


The Helm Hub (hub.helm.sh/charts) lists the available charts, each with installation instructions.
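As a sketch of the usual repository workflow in Helm 2 (the mysql chart and its values are used only as an example):

helm repo list                        # show the configured chart repositories
helm search mysql                     # search the repositories for a chart
helm inspect values stable/mysql      # show the configurable values of a chart
helm install stable/mysql --name mydb --set mysqlRootPassword=secret   # install it as a named release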

3. Helm custom template

# create a working directory
[root@k8s-master01 helm]# pwd
/usr/local/install-k8s/helm
[root@k8s-master01 helm]# mkdir test
[root@k8s-master01 helm]# cd test
# create the self-describing file Chart.yaml; it must define name and version
[root@k8s-master01 test]# cat <<'EOF' > ./Chart.yaml
name: hello-world
version: 1.0.0
EOF
# create the template files used to generate the Kubernetes resource manifests
[root@k8s-master01 test]# mkdir ./templates   # the directory name must be templates; every YAML file under it is rendered and applied
[root@k8s-master01 test]# cat <<'EOF' > ./templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: hub.atguigu.com/library/myapp:v1
          ports:
            - containerPort: 80
              protocol: TCP
EOF
[root@k8s-master01 test]# cat <<'EOF' > ./templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: hello-world
EOF
# create a release with: helm install RELATIVE_PATH_TO_CHART
$ helm install .


# list the deployed releases
helm ls

 

# upgrade the release
[root@k8s-master01 test]# pwd
/usr/local/install-k8s/helm/test
[root@k8s-master01 test]# ls
Chart.yaml  templates

[root@k8s-master01 test]# helm upgrade unrealized-rat .
[root@k8s-master01 test]# helm history unrealized-rat

# check the release status
helm status unrealized-rat  


Visit: http://10.0.100.10:31091
With Helm, the deployment plan of an application is written into a chart; the whole application is deployed from the chart and the corresponding replicas are created.

 
The goal: by changing the values file, you can change the image that the pods run.

# the configuration lives in values.yaml
cat << 'EOF' > ./values.yaml
image:
  repository: wangyanglinux/myapp
  tag: 'v2'
EOF

# values defined in this file are available in the templates through the .Values object
cat <<'EOF' > ./templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
            - containerPort: 80
              protocol: TCP
EOF

# upgrade to the new version
[root@k8s-master01 test]# cat values.yaml 
image:
  repository: wangyanglinux/myapp
  tag: 'v2'

# version v2
helm upgrade -f values.yaml unrealized-rat .

# or upgrade using the chart defaults:
# helm upgrade unrealized-rat .

kubectl get pod

Accessing the service in the browser now returns v2.

# values from values.yaml can be overridden at deploy time with --values YAML_FILE_PATH
# or with --set key1=value1,key2=value2
helm upgrade unrealized-rat --set image.tag='v3' .
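To check which values a release is actually running with, and to roll back if an upgrade goes wrong, Helm 2 provides the following (release name as above):

helm get values unrealized-rat        # show the user-supplied values of the release
helm history unrealized-rat           # list the revisions of the release
helm rollback unrealized-rat 1        # roll back to revision 1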


4. Additional commands

helm delete removes a release but keeps its record; use the --purge flag to delete it completely.
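A minimal sketch of the deletion workflow in Helm 2 (release name as above):

helm delete unrealized-rat            # delete the release; its record is kept
helm ls --deleted                     # deleted releases still show up here
helm delete --purge unrealized-rat    # remove the release and its record completely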

Three. Use Helm to Deploy the Dashboard

The dashboard is a browser/server (B/S) tool for managing the cluster.

kubernetes-dashboard.yaml :

image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
    - k8s.frognew.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
    - secretName: frognew-com-tls-secret
      hosts:
      - k8s.frognew.com
rbac:
  clusterAdminRole: true

Update the chart repositories:
# it is recommended to switch the Helm stable source to the Aliyun mirror to avoid update failures
helm repo remove stable
helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo update

helm fetch stable/kubernetes-dashboard --version 0.6.0   # requires unrestricted internet access
tar -zxvf kubernetes-dashboard-0.6.0.tgz
cd kubernetes-dashboard

helm install . \
  -n kubernetes-dashboard \
  --namespace kube-system \
  -f kubernetes-dashboard.yaml

kubectl get pod -n kube-system -o wide

 
Upload dashboard.tar and load it on every node:

docker load -i dashboard.tar
kubectl get pod -n kube-system -o wide

kubectl get svc -n kube-system
For browser access, change the service type to NodePort:
kubectl edit svc kubernetes-dashboard -n kube-system
kubectl get svc -n kube-system
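Alternatively, the service type can be switched without the interactive editor by patching it (same service and namespace as above):

kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'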
Visit https://10.0.100.10:30509.
Chrome requires trusting the cluster CA certificate (/etc/kubernetes/pki/ca.crt), while Firefox lets you accept the risk and continue.

kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubectl describe secret kubernetes-dashboard-token-wszpz -n kube-system

Copy the token into the web page and log in with the token method.
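A convenience one-liner for extracting the token (the secret name carries a random suffix, hence the awk lookups; treat this as a sketch):

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-token/{print $1}') | awk '/^token:/{print $2}'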

A quick usage test of the dashboard.

 

Four. Prometheus

1. Component description

1. MetricsServer: the aggregator of resource usage data in the Kubernetes cluster; it collects the data consumed by components such as kubectl top, the HPA and the scheduler.
2. Prometheus Operator: a system monitoring and alerting toolkit that manages the storage of monitoring data.
3. NodeExporter: exposes the key metrics of each node.
4. KubeStateMetrics: collects data about the resource objects in the Kubernetes cluster and lets alerting rules be defined on them.
5. Prometheus: pulls data from the apiserver, scheduler, controller-manager and kubelet components and exposes it over HTTP.
6. Grafana: a platform for visualizing statistics and monitoring data.

2. Create

[root@k8s-master01 ~]# cd /usr/local/install-k8s/plugin/
[root@k8s-master01 plugin]# pwd
/usr/local/install-k8s/plugin
[root@k8s-master01 plugin]# mkdir prometheus
[root@k8s-master01 plugin]# cd prometheus/
[root@k8s-master01 prometheus]# ls
[root@k8s-master01 prometheus]# git clone https://github.com/coreos/kube-prometheus.git

3. Modify

[root@k8s-master01 prometheus]# cd kube-prometheus/manifests

(1) Modify the grafana-service.yaml file to use NodePort for accessing grafana:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  type: NodePort # added
  ports:
  - name: http
    port: 3000
    targetPort: http
    nodePort: 30100 # added
  selector:
    app: grafana

(2) Modify prometheus-service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort # added
  ports:
  - name: web
    port: 9090
    targetPort: web
    nodePort: 30200 # added
  selector:
    app: prometheus
    prometheus: k8s
  sessionAffinity: ClientIP

(3) Modify alertmanager-service.yaml to use NodePort

apiVersion: v1
kind: Service
metadata:
  labels:
    alertmanager: main
  name: alertmanager-main
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9093
    targetPort: web
    nodePort: 30300
  selector:
    alertmanager: main
    app: alertmanager
  sessionAffinity: ClientIP

4. Import the image

Copy the following three files into /usr/local/install-k8s/plugin/prometheus/kube-prometheus:

[root@k8s-master01 prometheus]# tar -zxvf prometheus.tar.gz 
[root@k8s-master01 prometheus]# cat load-images.sh
[root@k8s-master01 prometheus]# mv prometheus load-images.sh /root/
[root@k8s-master01 prometheus]# cd
[root@k8s-master01 ~]# chmod a+x load-images.sh
[root@k8s-master01 ~]# ./load-images.sh
[root@k8s-master01 ~]# scp -r prometheus/ load-images.sh root@k8s-node01:/root/


[root@k8s-master01 ~]# scp -r prometheus/ load-images.sh root@k8s-node02:/root/

[root@k8s-node01 ~]# ./load-images.sh
[root@k8s-node02 ~]# ./load-images.sh


[root@k8s-master01 manifests]# pwd
/usr/local/install-k8s/plugin/prometheus/kube-prometheus/manifests
[root@k8s-master01 manifests]# kubectl apply -f ../manifests/
# run it a few times, because the resources reference each other and some will fail until their dependencies exist

If it keeps failing, apply the manifests in setup/ first:
[root@k8s-master01 manifests]# cd setup/
[root@k8s-master01 setup]# pwd
/usr/local/install-k8s/plugin/prometheus/kube-prometheus/manifests/setup
[root@k8s-master01 setup]# kubectl apply -f .


[root@k8s-master01 manifests]# kubectl get pod
[root@k8s-master01 manifests]# kubectl get pod -n monitoring

# check that metrics are now available
[root@k8s-master01 manifests]# kubectl top node
[root@k8s-master01 manifests]# kubectl top pod

# check the exposed service ports
[root@k8s-master01 manifests]# kubectl get svc --all-namespaces


My own deployment kept failing even after many attempts; the results below are taken from others' successful runs:


The NodePort assigned to Prometheus is 30200, so visit http://MasterIP:30200.
Under Status > Targets (http://MasterIP:30200/targets) you can see that Prometheus has connected to the Kubernetes apiserver and the nodes, and that all targets are healthy.

The Prometheus web UI supports basic queries, for example the CPU usage of every pod in the cluster:

sum by (pod_name) (rate(container_cpu_usage_seconds_total{image!="", pod_name!=""}[1m]))

The fact that the query returns data shows that the exporters are writing metrics into Prometheus normally.
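A similar query for memory, given only as another example of the same metric naming scheme:

sum by (pod_name) (container_memory_working_set_bytes{image!="", pod_name!=""})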

 
Visit Grafana. First check the port exposed by the grafana service:

kubectl get svc -n monitoring | grep grafana
grafana         NodePort    10.107.56.143    <none>        3000:30100/TCP    

Open http://MasterIP:30100 in the browser; the default username and password are admin/admin.

View the data of the Kubernetes API server

5. Stress test HPA

Horizontal Pod Autoscaling automatically scales the number of Pods in a ReplicationController, Deployment or ReplicaSet based on CPU utilization.


Import hpa-example.tar, a small PHP application from Google that burns CPU on every request and is used here to generate load.

On all three nodes:
docker load -i hpa-example.tar

--requests=cpu=200m sets the container's CPU request; constraining resources also keeps the OOM killer from taking down important processes.
--cpu-percent=50 targets 50% of that request, i.e. 100m, and the autoscaler will not grow beyond 10 replicas.

Scaling up is fast, while scaling back down is deliberately slow: this prevents pods from being removed just because a brief network fluctuation temporarily lowered the load, which would be unreasonable.
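The classic walkthrough for this stress test looks roughly like the following; the image name is an assumption about what hpa-example.tar contains in this environment:

# run the PHP demo app with a CPU request of 200m and expose it as a service
kubectl run php-apache --image=gcr.io/google_containers/hpa-example --requests=cpu=200m --expose --port=80

# create the HPA: target 50% of the CPU request (i.e. 100m), scale between 1 and 10 replicas
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

# generate load from a throwaway busybox pod, then watch the autoscaler react
kubectl run -i --tty load-generator --image=busybox /bin/sh
#   while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
kubectl get hpa -w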

 

6. Resource limits: pod

requests can be understood as a soft limit (what the pod is guaranteed), while limits is the hard ceiling.
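A minimal sketch of pod-level resource settings (the container name and the numbers are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: wangyanglinux/myapp:v2
    resources:
      requests:        # soft guarantee, used by the scheduler
        cpu: 250m
        memory: 250Mi
      limits:          # hard ceiling; exceeding the memory limit gets the container OOM-killed
        cpu: 500m
        memory: 500Mi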

7. Resource limits: namespace

1. Compute resource quota
2. Quota on the number of objects
3. CPU and memory LimitRange

A combined sketch of all three follows.
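A hedged sketch of what these three objects typically look like; the namespace and the numbers are illustrative, not taken from the original screenshots:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: spark-cluster
spec:
  hard:                              # total compute the namespace may request/use
    requests.cpu: "20"
    requests.memory: 100Gi
    limits.cpu: "40"
    limits.memory: 200Gi
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: spark-cluster
spec:
  hard:                              # caps on the number of objects
    pods: "20"
    configmaps: "10"
    services: "10"
    services.nodeports: "2"
    persistentvolumeclaims: "4"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: spark-cluster
spec:
  limits:
  - type: Container
    default:                         # default limits when a container declares none
      cpu: "1"
      memory: 1Gi
    defaultRequest:                  # default requests when a container declares none
      cpu: 500m
      memory: 512Mi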

 

Five. EFK Logging

Fluentd collects the log files of the pods running on each node from /var/log/containers/.

1. Add the Google incubator repository

helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator


2. Deploy Elasticsearch

[root@k8s-master01 elasticsearch]# kubectl create namespace efk
[root@k8s-master01 elasticsearch]# helm fetch incubator/elasticsearch
[root@k8s-master01 elasticsearch]# tar -zxvf elasticsearch-1.10.2.tgz
[root@k8s-master01 elasticsearch]# vim values.yaml

MINIMUM_MASTER_NODES: "1"
# with only 16 GB of memory the host cannot run 2 master nodes, so this was reduced from 2 to 1
# there is no extra persistent volume, so persistence is set to false
master:
  name: master
  exposeHttp: false
  replicas: 1
  heapSize: "512m"
  persistence:
    enabled: false
data:
  name: data
  exposeHttp: false
  replicas: 1
  heapSize: "1536m"
  persistence:
    enabled: false

[root@k8s-master01 elasticsearch]# helm install --name els1 --namespace=efk -f values.yaml incubator/elasticsearch
[root@k8s-master01 elasticsearch]# kubectl get pod -n efk
# wait until all pods are Running before executing the next command


[root@k8s-master01 elasticsearch]# kubectl run cirror-$RANDOM --rm -it --image=cirros -- /bin/sh
[root@k8s-master01 elasticsearch]# curl Elasticsearch:Port/_cat/nodes

[root@k8s-master01 elasticsearch]# kubectl describe pod -n efk
Error: 0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient memory.
Fix: kubectl taint nodes --all node-role.kubernetes.io/master-

3. Deploy Fluentd

[root@k8s-master01 efk]# pwd
/usr/local/install-k8s/efk
[root@k8s-master01 efk]# helm fetch stable/fluentd-elasticsearch
[root@k8s-master01 efk]# tar -zxvf fluentd-elasticsearch-2.0.7.tgz
[root@k8s-master01 efk]# cd fluentd-elasticsearch
[root@k8s-master01 fluentd-elasticsearch]# vim values.yaml
# change the Elasticsearch host to the cluster IP of the client service
elasticsearch:
 host: '10.102.94.81'
# get the client IP with: kubectl get svc -n efk

[root@k8s-master01 fluentd-elasticsearch]# helm install --name flu1 --namespace=efk -f values.yaml .
[root@k8s-master01 fluentd-elasticsearch]# kubectl get pod -n efk

4. Deploy Kibana for data visualization
The Elasticsearch and Kibana versions must match.

On the master node:
# helm fetch stable/kibana --version 0.14.8
# vim values.yaml
files:
  kibana.yml:
    elasticsearch.url: http://10.102.94.81:9200

On all three nodes:
# values.yaml shows that the kibana-oss image is also required
# docker pull docker.elastic.co/kibana/kibana-oss:6.4.2
# docker save -o kibana.tar docker.elastic.co/kibana/kibana-oss:6.4.2
# docker load -i kibana.tar   # on the other nodes

# helm install --name kib1 --namespace=efk -f values.yaml stable/kibana --version 0.14.8

# kubectl get svc -n efk
# kubectl edit svc kib1-kibana -n efk
# the default type is ClusterIP; change it to:
type: NodePort

Access Kibana in the browser.

This part did not go smoothly for me; some environment problems kept it from completing fully.

Origin blog.csdn.net/qq_39578545/article/details/108943538