One, Prometheus supports multiple service discovery mechanisms (the common ones follow)
- static_configs: static service discovery
- file_sd_configs: file-based service discovery
- dns_sd_configs: DNS service discovery
- kubernetes_sd_configs: Kubernetes service discovery
- consul_sd_configs: Consul service discovery
Two, The main Prometheus configuration in prometheus.yml: scrape_configs with consul_sd_configs, as follows
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'consul'
    consul_sd_configs:
      - server: 'localhost:8500'
    relabel_configs:
      - source_labels: [__meta_consul_tags]
        regex: .*,prome,.*
        action: keep
      - source_labels: [__meta_consul_service]
        target_label: job
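The `keep` rule above only retains Consul services tagged `prome`: `__meta_consul_tags` is the service's tag list joined by the tag separator (a comma by default) with a leading and trailing separator, which is why the regex is written `.*,prome,.*` rather than just `prome`. A quick sanity check of that pattern (the tag values here are made-up examples):

```shell
# __meta_consul_tags for a service tagged ["prome", "app"] looks like ",prome,app,"
echo ",prome,app," | grep -E '.*,prome,.*'
# a service without the tag does not match and is dropped
echo ",other," | grep -E '.*,prome,.*' || echo "no match: service dropped"
```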
Reload after adding the configuration to prometheus.yml. Start Prometheus with:

./prometheus --config.file="prometheus.yml" --web.listen-address="0.0.0.0:9090" --web.enable-admin-api --web.enable-lifecycle &

--web.enable-admin-api: enables administering Prometheus through the web API (e.g. the data-deletion operation)
--web.enable-lifecycle: enables reloading the Prometheus configuration through the web (the reload operation)
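With --web.enable-lifecycle set, a running Prometheus can be told to re-read prometheus.yml over HTTP instead of being restarted (this assumes Prometheus is listening on localhost:9090 as started above):

```shell
# Trigger a configuration reload on a running Prometheus (requires --web.enable-lifecycle)
curl -X POST http://localhost:9090/-/reload
```

If the new configuration is invalid, Prometheus keeps the old one and the reload endpoint returns an error.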
Three, Implementation in a k8s environment
Mount the configuration into Prometheus as a Secret
1, Write prometheus-additional-consul.yaml
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'consul'
    consul_sd_configs:
      - server: 'localhost:8500'
    relabel_configs:
      - source_labels: [__meta_consul_tags]
        regex: .*,prome,.*
        action: keep
      - source_labels: [__meta_consul_service]
        target_label: job
2, Create the Secret
kubectl create secret generic additional-consul-configs --from-file=prometheus-additional-consul.yaml -n monitoring
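The key inside the Secret (the file name, prometheus-additional-consul.yaml) must later match additionalScrapeConfigs.key in the Prometheus resource. To confirm the Secret was created with the expected key (requires access to the cluster):

```shell
# The data section should contain the key "prometheus-additional-consul.yaml"
kubectl get secret additional-consul-configs -n monitoring -o yaml
```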
3, Add it to the Prometheus pod configuration: modify prometheus-prometheus.yaml and update the resource
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    prometheus: k8s
  name: k8s
  namespace: monitoring
spec:
  alerting:
    alertmanagers:
    - name: alertmanager-main
      namespace: monitoring
      port: web
  baseImage: quay.io/prometheus/prometheus
  nodeSelector:
    kubernetes.io/os: linux
  podMonitorNamespaceSelector: {}
  podMonitorSelector: {}
  replicas: 2
  resources:
    requests:
      memory: 400Mi
  ruleSelector:
    matchLabels:
      prometheus: k8s
      role: alert-rules
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: v2.11.0
  additionalScrapeConfigs:
    name: additional-consul-configs
    key: prometheus-additional-consul.yaml
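A sketch of applying the modified resource and watching the Operator roll the change out (the pod names follow the usual kube-prometheus naming for a Prometheus resource called k8s with 2 replicas):

```shell
kubectl apply -f prometheus-prometheus.yaml
# Wait for prometheus-k8s-0 and prometheus-k8s-1 to restart with the new config
kubectl get pods -n monitoring -w
```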
4, Note: insufficient privileges (the configuration is updated, but no corresponding monitoring targets are generated)
Modify the ClusterRole named prometheus-k8s and update the resource
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-k8s
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - services
  - endpoints
  - pods
  - nodes/proxy
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  - nodes/metrics
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get
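Assuming the rules above are saved back into the kube-prometheus manifest (the file name here is an assumption; adjust it to your manifest layout), update the ClusterRole and then verify the new targets:

```shell
kubectl apply -f prometheus-clusterRole.yaml
# Then check Status -> Targets in the Prometheus web UI: the 'consul' job
# should list the Consul services tagged 'prome'
```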