Setting up Prometheus + Grafana on Kubernetes

Installing Prometheus + Grafana on k8s:
1. Create a namespace in the Kubernetes cluster:
[root@master k8s-promethus]# cat namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns-monitor
  labels:
    name: ns-monitor

[root@master k8s-promethus]# kubectl apply -f namespace.yaml
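To confirm the namespace exists before continuing, a quick check (output omitted here):

[root@master k8s-promethus]# kubectl get ns ns-monitor

The namespace should show STATUS Active.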




2. Install node-exporter:
[root@master k8s-promethus]# cat node-exporter.yaml
kind: DaemonSet
apiVersion: apps/v1
metadata:
  labels:
    app: node-exporter
  name: node-exporter
  namespace: ns-monitor
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
        - name: node-exporter
          image: prom/node-exporter:v0.16.0
          ports:
            - containerPort: 9100
              protocol: TCP
              name: http
      hostNetwork: true
      hostPID: true
      tolerations:
        - effect: NoSchedule
          operator: Exists

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: node-exporter
  name: node-exporter-service
  namespace: ns-monitor
spec:
  ports:
    - name: http
      port: 9100
      protocol: TCP
  type: NodePort
  selector:
    app: node-exporter

[root@master k8s-promethus]# kubectl apply -f node-exporter.yaml 

[root@master k8s-promethus]# kubectl get pod -n ns-monitor -o wide
NAME                  READY   STATUS    RESTARTS   AGE     IP                NODE     NOMINATED NODE   READINESS GATES
node-exporter-cl6d5   1/1     Running   15         3h15m   192.168.100.64    test     <none>           <none>
node-exporter-gwwnj   1/1     Running   0          3h15m   192.168.100.201   node1    <none>           <none>
node-exporter-hbglm   1/1     Running   0          3h15m   192.168.100.200   master   <none>           <none>
node-exporter-kwsfv   1/1     Running   0          3h15m   192.168.100.202   node2    <none>           <none>

[root@master k8s-promethus]# kubectl get svc -n ns-monitor
NAME                    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
node-exporter-service   NodePort   10.99.128.173   <none>        9100:30372/TCP   3h17m

Access test: browse to http://<node-ip>:30372 (the node-exporter NodePort).


Note: node-exporter is now running successfully.
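Because the DaemonSet runs with hostNetwork: true, node-exporter also listens directly on port 9100 of every node, so a quick spot check is possible from any machine that can reach the nodes (node IP taken from the pod listing above):

[root@master k8s-promethus]# curl -s http://192.168.100.200:9100/metrics | head

This should print the first few Prometheus-format metric lines (e.g. go_gc_duration_seconds).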
 

 
 
3. Deploy Prometheus:
Note: prometheus.yaml contains the RBAC resources, ConfigMaps, and so on.
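The full prometheus.yaml is not reproduced here. As a rough sketch only, the scrape configuration held in the prometheus-conf ConfigMap (named in the apply output below) might look like this; the job name and static targets are illustrative assumptions, not the file's actual contents:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-conf
  namespace: ns-monitor
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s        # how often to scrape targets (assumed value)
    scrape_configs:
      - job_name: 'node-exporter' # illustrative; the real file may use kubernetes_sd_configs
        static_configs:
          - targets:              # node IPs from the node-exporter listing above
              - '192.168.100.200:9100'
              - '192.168.100.201:9100'
              - '192.168.100.202:9100'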
[root@master k8s-promethus]# kubectl apply -f prometheus.yaml
clusterrole.rbac.authorization.k8s.io/prometheus unchanged
serviceaccount/prometheus unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus unchanged
configmap/prometheus-conf unchanged
configmap/prometheus-rules unchanged
persistentvolume/prometheus-data-pv unchanged
persistentvolumeclaim/prometheus-data-pvc unchanged
deployment.apps/prometheus unchanged
service/prometheus-service unchanged

Note: This Prometheus deployment uses a PV and PVC for its storage, so an NFS server must already be in place; deploying NFS is omitted here.
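Since prometheus.yaml is not shown, here is a minimal sketch of what the NFS-backed prometheus-data-pv could look like, modeled on the grafana-data-pv defined in step 4. The export path is an assumption; the capacity, access mode, and reclaim policy match the kubectl output below:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-data-pv
  labels:
    name: prometheus-data-pv
spec:
  capacity:
    storage: 5Gi                  # matches the 5Gi shown by kubectl get pv
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/volumes/v1        # assumed NFS export; adjust to your server's layout
    server: 192.168.100.64        # same NFS server the Grafana PV uses in step 4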
[root@master k8s-promethus]# kubectl get pv -n ns-monitor
NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON   AGE
prometheus-data-pv   5Gi        RWO            Recycle          Bound    ns-monitor/prometheus-data-pvc                           157m

[root@master k8s-promethus]# kubectl get pvc -n ns-monitor
NAME                  STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
prometheus-data-pvc   Bound    prometheus-data-pv   5Gi        RWO                           158m

Note: The PVC is now successfully bound to the PV.

[root@master k8s-promethus]# kubectl get pod -n ns-monitor
NAME                         READY   STATUS    RESTARTS   AGE
node-exporter-cl6d5          1/1     Running   15         3h27m
node-exporter-gwwnj          1/1     Running   0          3h27m
node-exporter-hbglm          1/1     Running   0          3h27m
node-exporter-kwsfv          1/1     Running   0          3h27m
prometheus-dd69c4889-qwmww   1/1     Running   0          161m

[root@master k8s-promethus]# kubectl get svc -n ns-monitor
NAME                    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
node-exporter-service   NodePort   10.99.128.173   <none>        9100:30372/TCP   3h29m
prometheus-service      NodePort   10.96.119.235   <none>        9090:31555/TCP   162m

Access test: browse to http://<node-ip>:31555 (the Prometheus NodePort).

Note: Prometheus is now running successfully.
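Beyond the web UI, the scrape targets can be spot-checked through the Prometheus HTTP API via the NodePort (31555, per the service listing above); every node-exporter target should report up == 1:

[root@master k8s-promethus]# curl -s 'http://192.168.100.200:31555/api/v1/query?query=up'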
 
 
4. Deploy Grafana in the Kubernetes cluster:
[root@master k8s-promethus]# cat grafana.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "grafana-data-pv"
  labels:
    name: grafana-data-pv
    release: stable
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/volumes/v2
    server: 192.168.100.64
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-data-pvc
  namespace: ns-monitor
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      name: grafana-data-pv
      release: stable
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: ns-monitor
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      securityContext:
        runAsUser: 0
      containers:
        - name: grafana
          image: grafana/grafana:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: GF_AUTH_BASIC_ENABLED
              value: "true"
            - name: GF_AUTH_ANONYMOUS_ENABLED
              value: "false"
          readinessProbe:
            httpGet:
              path: /login
              port: 3000
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: grafana-data-volume
          ports:
            - containerPort: 3000
              protocol: TCP
      volumes:
        - name: grafana-data-volume
          persistentVolumeClaim:
            claimName: grafana-data-pvc
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: grafana
  name: grafana-service
  namespace: ns-monitor
spec:
  ports:
    - port: 3000
      targetPort: 3000
  selector:
    app: grafana
  type: NodePort

[root@master k8s-promethus]# kubectl apply -f grafana.yaml 
persistentvolume/grafana-data-pv unchanged
persistentvolumeclaim/grafana-data-pvc unchanged
deployment.apps/grafana unchanged
service/grafana-service unchanged
Note: Grafana also needs its PVC at this point.

[root@master k8s-promethus]# kubectl get pv,pvc -n ns-monitor 
NAME                                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON   AGE
persistentvolume/grafana-data-pv      5Gi        RWO            Recycle          Bound    ns-monitor/grafana-data-pvc                              164m
persistentvolume/prometheus-data-pv   5Gi        RWO            Recycle          Bound    ns-monitor/prometheus-data-pvc                           168m

NAME                                        STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/grafana-data-pvc      Bound    grafana-data-pv      5Gi        RWO                           164m
persistentvolumeclaim/prometheus-data-pvc   Bound    prometheus-data-pv   5Gi        RWO                           168m

[root@master k8s-promethus]# kubectl get pod -n ns-monitor 
NAME                         READY   STATUS    RESTARTS   AGE
grafana-576db894c6-4qtcl     1/1     Running   0          166m
node-exporter-cl6d5          1/1     Running   15         3h36m
node-exporter-gwwnj          1/1     Running   0          3h36m
node-exporter-hbglm          1/1     Running   0          3h36m
node-exporter-kwsfv          1/1     Running   0          3h36m
prometheus-dd69c4889-qwmww   1/1     Running   0          169m


[root@master k8s-promethus]# kubectl get svc -n ns-monitor 
NAME                    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
grafana-service         NodePort   10.104.174.223   <none>        3000:30771/TCP   166m
node-exporter-service   NodePort   10.99.128.173    <none>        9100:30372/TCP   3h37m
prometheus-service      NodePort   10.96.119.235    <none>        9090:31555/TCP   170m


Access test: browse to http://<node-ip>:30771 (the Grafana NodePort).


Note: Grafana is now deployed. Default username/password: admin/admin


Add a data source: URL = http://prometheus-service.ns-monitor.svc.cluster.local:9090
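The data source can be added in the Grafana UI (Configuration -> Data Sources -> Add data source -> Prometheus), or scripted against Grafana's HTTP API. A sketch using the NodePort and default credentials from above (Grafana proxies the in-cluster service URL, so the cluster-local DNS name works):

[root@master k8s-promethus]# curl -s -u admin:admin -H 'Content-Type: application/json' \
    -X POST http://192.168.100.200:30771/api/datasources \
    -d '{"name":"Prometheus","type":"prometheus","access":"proxy","url":"http://prometheus-service.ns-monitor.svc.cluster.local:9090"}'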



Import a dashboard template:
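A commonly used community dashboard for node-exporter metrics is "Node Exporter Full" (Grafana.com dashboard ID 1860): in Grafana choose Dashboards -> Import, enter the ID, and select the Prometheus data source added above.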

 

Source: www.cnblogs.com/ccbyk-90/p/12004287.html