Container Resource Requests, Resource Limits, and Heapster

Copyright notice: knowledge exists to be shared! https://blog.csdn.net/weixin_36171533/article/details/82771934

Container resource requests and limits
In Kubernetes you can define both a floor and a ceiling for a container's resources:
limits: the hard cap, the maximum a container may use
requests: what the container needs, the guaranteed minimum

CPU unit: one logical CPU
1 CPU = 1000 millicores (1000m)
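The unit conversion above can be sketched with a small helper that parses Kubernetes-style quantity strings. This is a hypothetical illustration, not the actual apimachinery parser, and it handles only the common suffixes:

```python
# Hypothetical sketch of Kubernetes-style quantity parsing; the real
# implementation lives in k8s.io/apimachinery and supports more forms.

def parse_cpu(q: str) -> float:
    """Return CPU cores as a float: '1' -> 1.0, '500m' -> 0.5."""
    if q.endswith("m"):                 # millicores: 1000m == 1 core
        return int(q[:-1]) / 1000.0
    return float(q)

def parse_memory(q: str) -> int:
    """Return bytes: '128Mi' -> 128 * 1024**2."""
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3,
             "K": 1000, "M": 1000**2, "G": 1000**3}
    for suffix, factor in units.items():  # binary suffixes checked first
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)

print(parse_cpu("500m"))      # 0.5
print(parse_memory("128Mi"))  # 134217728
```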

View the built-in help:
kubectl explain pods.spec.containers.resources

Reference for both limits and requests:
https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
[root@master free]# cat pod-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/stress-ng   # stress-testing image
    command: ["/usr/bin/stress-ng", "-m", "1", "-c", "1", "--metrics-brief"]  # each flag and value is a separate element
    resources:
      requests:
        cpu: "200m"         # guaranteed minimum of 200 millicores
        memory: "128Mi"     # guaranteed minimum of 128Mi of memory
      limits:
        cpu: "500m"         # hard cap of 500 millicores
        memory: "200Mi"     # hard cap of 200Mi of memory
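Note that in an exec-form `command` list every element becomes exactly one argv entry, so a flag and its value written as one string (e.g. "-m 1") reach the program as a single token. A quick way to see this in plain Python (not Kubernetes-specific):

```python
import subprocess
import sys

# Each list element passed to exec becomes exactly one argv entry,
# just like the elements of a Kubernetes exec-form `command` list.
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1:])", "-m", "1"],
    capture_output=True, text=True,
).stdout.strip()
print(out)  # ['-m', '1']  -- two separate tokens, as stress-ng expects
```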

kubectl apply -f pod-demo.yaml
kubectl describe pods pod-demo

 Limits:
      cpu:     500m
      memory:  200Mi
    Requests:
      cpu:        200m
      memory:     128Mi
    Environment:  <none>
    Mounts:

####################################################################
QoS classes:
    Guaranteed: every container in the pod
        sets both CPU and memory requests and limits, and
        cpu.limits == cpu.requests
        memory.limits == memory.requests
    Burstable:
        at least one container sets a CPU or memory requests value
    BestEffort:
        no container sets any requests or limits; lowest priority, evicted first under resource pressure
####################################################################
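The three rules above can be sketched as a small classifier. This is a hypothetical illustration of the decision logic, not the actual kubelet code:

```python
# Hypothetical sketch of the QoS decision rules described above; each
# container is a dict like {"requests": {...}, "limits": {...}}.

def qos_class(containers):
    resources = ("cpu", "memory")
    # Guaranteed: every container sets requests and limits for both
    # CPU and memory, and requests equal limits.
    if all(
        c.get("requests", {}).get(r) is not None
        and c.get("requests", {}).get(r) == c.get("limits", {}).get(r)
        for c in containers for r in resources
    ):
        return "Guaranteed"
    # Burstable: at least one container sets some requests or limits.
    if any(c.get("requests") or c.get("limits") for c in containers):
        return "Burstable"
    # BestEffort: nothing set anywhere; lowest priority.
    return "BestEffort"

# The two pod specs used in this article:
first_pod = [{"requests": {"cpu": "200m", "memory": "128Mi"},
              "limits":   {"cpu": "500m", "memory": "200Mi"}}]
second_pod = [{"requests": {"cpu": "500m", "memory": "200Mi"},
               "limits":   {"cpu": "500m", "memory": "200Mi"}}]
print(qos_class(first_pod))   # Burstable (requests != limits)
print(qos_class(second_pod))  # Guaranteed (requests == limits)
print(qos_class([{}]))        # BestEffort (nothing set)
```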

Guaranteed example (set requests equal to limits):
[root@master free]# kubectl get pods
NAME       READY     STATUS    RESTARTS   AGE
pod-demo   1/1       Running   0          57s

[root@master free]# cat pod-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/stress-ng
    command: ["/usr/bin/stress-ng", "-m 1", "-c 1", "--metrics-brief" ]
    resources:
      requests:
        cpu: "500m"      # same as the limit
        memory: "200Mi"  # same as the limit
      limits:
        cpu: "500m"
        memory: "200Mi"

Describe the pod to confirm the QoS class:
kubectl describe pods pod-demo
QoS Class:       Guaranteed

kubectl top reads its metrics from this pipeline
Heapster
cAdvisor (built into each kubelet) collects usage data for every pod
Heapster aggregates that data and can store it in an InfluxDB database
InfluxDB can then be plugged into Grafana as a data source
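cAdvisor exposes CPU consumption as a cumulative counter (total nanoseconds of CPU time used), so Heapster-style tooling derives a usage rate by sampling the counter twice. A minimal sketch of that calculation, with hypothetical parameter names:

```python
# Hypothetical sketch: turning two samples of a cumulative CPU counter
# (nanoseconds of CPU time, as cAdvisor reports it) into a usage rate.

def cpu_millicores(prev_ns, cur_ns, interval_seconds):
    """Average CPU usage over the sampling interval, in millicores."""
    used_seconds = (cur_ns - prev_ns) / 1e9   # ns -> seconds of CPU time
    cores = used_seconds / interval_seconds   # fraction of one core used
    return cores * 1000                       # 1 core == 1000m

# A container that burned 15s of CPU time in a 30s window averaged 500m:
print(cpu_millicores(prev_ns=0, cur_ns=15_000_000_000, interval_seconds=30))  # 500.0
```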


Pod monitoring metrics fall into roughly three categories:
1. System metrics
2. Container metrics: CPU, memory
3. Application metrics: business logic, processes


Now install the three components: Grafana, Heapster, and InfluxDB.
Heapster scrapes each node's kubelet, which listens on port 10250.

##########################
Deploy InfluxDB
##########################

https://github.com/kubernetes/heapster/tree/master/deploy/kube-config/influxdb

wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
Edit it as follows:
[root@master influxdb]# cat influxdb.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  selector:             # added
    matchLabels:        # added
      task: monitoring  # added
      k8s-app: influxdb # added
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: k8s.gcr.io/heapster-influxdb-amd64:v1.5.2
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb

Create it:
kubectl apply -f influxdb.yaml

[root@master influxdb]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   14d
kubernetes-dashboard   NodePort    10.108.38.237   <none>        443:31619/TCP   4d
monitoring-influxdb    ClusterIP   10.110.217.24   <none>        8086/TCP        2m

[root@master influxdb]# kubectl get pods -n kube-system
NAME                                   READY     STATUS    RESTARTS   AGE
canal-98mcn                            3/3       Running   0          1d
canal-gnp5r                            3/3       Running   3          1d
coredns-78fcdf6894-27npt               1/1       Running   1          14d
coredns-78fcdf6894-mbg8n               1/1       Running   1          14d
etcd-master                            1/1       Running   1          14d
kube-apiserver-master                  1/1       Running   1          14d
kube-controller-manager-master         1/1       Running   1          14d
kube-flannel-ds-amd64-6ws6q            1/1       Running   1          1d
kube-flannel-ds-amd64-mg9sm            1/1       Running   0          1d
kube-flannel-ds-amd64-sq9wj            1/1       Running   0          1d
kube-proxy-g9n4d                       1/1       Running   1          14d
kube-proxy-wrqt8                       1/1       Running   2          14d
kube-proxy-x7vc2                       1/1       Running   1          14d
kube-scheduler-master                  1/1       Running   1          14d
kubernetes-dashboard-767dc7d4d-cj75v   1/1       Running   0          1h
monitoring-influxdb-848b9b66f6-fh8gm   1/1       Running   0          3m   # newly created


Check the pod's logs:
kubectl logs monitoring-influxdb-848b9b66f6-fh8gm -n kube-system

#####################
Deploy Heapster
#####################

RBAC manifest:
https://github.com/kubernetes/heapster/tree/master/deploy/kube-config/rbac

wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
kubectl apply -f heapster-rbac.yaml

Heapster manifest:
https://github.com/kubernetes/heapster/blob/master/deploy/kube-config/influxdb/heapster.yaml
kubectl apply -f heapster.yaml

[root@master influxdb]# cat heapster.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:        # added
      task: monitoring  # added
      k8s-app: heapster # added
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: k8s.gcr.io/heapster-amd64:v1.5.4
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  type: NodePort  # added
  selector:
    k8s-app: heapster



[root@master influxdb]# kubectl apply -f heapster.yaml 
serviceaccount/heapster created
deployment.apps/heapster created
service/heapster created

[root@master influxdb]# cat heapster-rbac.yaml 
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
Verify the created objects:
[root@master influxdb]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
heapster               NodePort    10.96.12.252    <none>        80:30667/TCP    1m
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   14d
kubernetes-dashboard   NodePort    10.108.38.237   <none>        443:31619/TCP   4d
monitoring-influxdb    ClusterIP   10.110.217.24   <none>        8086/TCP        28m
[root@master influxdb]# kubectl get pods -n kube-system
NAME                                   READY     STATUS    RESTARTS   AGE
canal-98mcn                            3/3       Running   0          1d
canal-gnp5r                            3/3       Running   3          1d
coredns-78fcdf6894-27npt               1/1       Running   1          14d
coredns-78fcdf6894-mbg8n               1/1       Running   1          14d
etcd-master                            1/1       Running   1          14d
heapster-84c9bc48c4-z6vhf              1/1       Running   0          1m
kube-apiserver-master                  1/1       Running   1          14d
kube-controller-manager-master         1/1       Running   1          14d
kube-flannel-ds-amd64-6ws6q            1/1       Running   1          1d
kube-flannel-ds-amd64-mg9sm            1/1       Running   0          1d
kube-flannel-ds-amd64-sq9wj            1/1       Running   0          1d
kube-proxy-g9n4d                       1/1       Running   1          14d
kube-proxy-wrqt8                       1/1       Running   2          14d
kube-proxy-x7vc2                       1/1       Running   1          14d
kube-scheduler-master                  1/1       Running   1          14d
kubernetes-dashboard-767dc7d4d-cj75v   1/1       Running   0          1h
monitoring-influxdb-848b9b66f6-fh8gm   1/1       Running   0          28m


Try accessing it:
http://192.168.68.20:30667/
A 404 response here is normal; Heapster serves only API endpoints, not a root page.

Check the logs:
kubectl logs heapster-84c9bc48c4-k6kz8 -n kube-system

##############
Deploy Grafana
##############

https://github.com/kubernetes/heapster/blob/master/deploy/kube-config/influxdb/grafana.yaml

[root@master influxdb]# cat grafana.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:        # added
      task: monitoring  # added
      k8s-app: grafana  # added
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort  # added


[root@master influxdb]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
heapster               NodePort    10.111.40.154    <none>        80:30535/TCP    15m
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   14d
kubernetes-dashboard   NodePort    10.108.38.237    <none>        443:31619/TCP   4d
monitoring-grafana     NodePort    10.101.152.236   <none>        80:30634/TCP    19s
monitoring-influxdb    ClusterIP   10.106.17.74     <none>        8086/TCP        1h


[root@master influxdb]# kubectl get pods -n kube-system
NAME                                   READY     STATUS    RESTARTS   AGE
canal-98mcn                            3/3       Running   0          1d
canal-gnp5r                            3/3       Running   3          1d
coredns-78fcdf6894-27npt               1/1       Running   1          14d
coredns-78fcdf6894-mbg8n               1/1       Running   1          14d
etcd-master                            1/1       Running   1          14d
heapster-84c9bc48c4-k6kz8              1/1       Running   0          18m
kube-apiserver-master                  1/1       Running   1          14d
kube-controller-manager-master         1/1       Running   1          14d
kube-flannel-ds-amd64-6ws6q            1/1       Running   1          1d
kube-flannel-ds-amd64-mg9sm            1/1       Running   0          1d
kube-flannel-ds-amd64-sq9wj            1/1       Running   0          1d
kube-proxy-g9n4d                       1/1       Running   1          14d
kube-proxy-wrqt8                       1/1       Running   2          14d
kube-proxy-x7vc2                       1/1       Running   1          14d
kube-scheduler-master                  1/1       Running   1          14d
kubernetes-dashboard-767dc7d4d-cj75v   1/1       Running   0          3h
monitoring-grafana-555545f477-jkc9s    1/1       Running   0          3m
monitoring-influxdb-848b9b66f6-s9mb6   1/1       Running   0          1h


The Grafana UI is now reachable:
http://192.168.68.30:30634/


Note, however, that Heapster has been deprecated since Kubernetes 1.11 in favor of metrics-server and the Metrics API.

To import the "Kubernetes Node Statistics" dashboard, enter dashboard ID 3646 in Grafana's Import dialog.

Reference:

https://blog.csdn.net/liukuan73/article/details/78704395
