Prometheus: configuring scrape targets

1. Prometheus periodically scrapes (pulls) metrics over HTTP(S) according to its configured jobs.
2. Each job scrapes metrics from one or more targets.
3. Targets can be specified statically or discovered automatically. Prometheus stores the scraped metrics in local or remote storage.
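The pull model described above can be illustrated with a minimal prometheus.yml. The target address is hypothetical (a node_exporter assumed to be listening on localhost:9100):

```yaml
global:
  scrape_interval: 15s        # how often Prometheus scrapes, unless a job overrides it

scrape_configs:
  - job_name: node            # becomes the "job" label on every scraped metric
    static_configs:
      - targets: ["localhost:9100"]   # hypothetical node_exporter endpoint
```

With this configuration, Prometheus pulls http://localhost:9100/metrics every 15 seconds and stores the result in its local TSDB.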

Defining scrape targets with scrape_configs

scrape_configs defines a set of targets and the parameters describing how to scrape them. In general, one scrape_config corresponds to a single job.
Targets can be configured statically in a scrape_config, or discovered dynamically through a service discovery mechanism.
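As a sketch of the dynamic-discovery path, file-based service discovery has Prometheus watch a JSON (or YAML) file and pick up target changes without a restart. The file path, addresses, and labels below are hypothetical:

```yaml
# scrape_configs entry using file-based service discovery
- job_name: discovered
  file_sd_configs:
    - files:
        - /etc/prometheus/targets/*.json   # watched for changes; no restart needed
```

A matching target file (hypothetical contents):

```json
[
  {
    "targets": ["10.0.0.5:9100", "10.0.0.6:9100"],
    "labels": { "env": "staging" }
  }
]
```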

# Job name; added as the "job" label on scraped metrics
job_name: <job_name>
# scrape interval
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]
# per-scrape timeout
[ scrape_timeout: <duration> | default = <global_config.scrape_timeout> ]
# URL path on the target to scrape metrics from
[ metrics_path: <path> | default = /metrics ]
# when a scraped label collides with a server-added label of the same name,
# whether to keep the original label instead of overwriting it
[ honor_labels: <boolean> | default = false ]
# scrape protocol
[ scheme: <scheme> | default = http ]
# HTTP request parameters
params:
  [ <string>: [<string>, ...] ]
# authentication information
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]
# Authorization request header value
[ bearer_token: <secret> ]
# read the Authorization request header from a file
[ bearer_token_file: /path/to/bearer/token/file ]
# TLS configuration
tls_config:
  [ <tls_config> ]
# proxy configuration
[ proxy_url: <string> ]
# DNS service discovery configuration
dns_sd_configs:
  [ - <dns_sd_config> ... ]
# file service discovery configuration
file_sd_configs:
  [ - <file_sd_config> ... ]
# Kubernetes service discovery configuration
kubernetes_sd_configs:
  [ - <kubernetes_sd_config> ... ]
# statically configured target list for this job
static_configs:
  [ - <static_config> ... ]
# target relabeling configuration
relabel_configs:
  [ - <relabel_config> ... ]
# metric relabeling configuration
metric_relabel_configs:
  [ - <relabel_config> ... ]
# maximum number of samples allowed per scrape; if the number of samples still
# exceeds this limit after metric relabeling, the whole scrape is treated as failed
# 0 means no limit
[ sample_limit: <int> | default = 0 ]
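Putting several of the fields above together, here is a sketch of one job scraping an HTTPS endpoint behind basic auth; all hostnames, credentials, and metric names are hypothetical:

```yaml
- job_name: app
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: https
  basic_auth:
    username: prom
    password_file: /etc/prometheus/app.pass   # hypothetical secret file
  static_configs:
    - targets: ["app1.example.com:8443", "app2.example.com:8443"]
  relabel_configs:
    - source_labels: [__address__]   # copy the host part of the address into a "host" label
      regex: "([^:]+):.*"
      target_label: host
      replacement: "$1"
  metric_relabel_configs:
    - source_labels: [__name__]      # drop noisy debug metrics after the scrape
      regex: "app_debug_.*"
      action: drop
  sample_limit: 50000                # fail the scrape if more samples remain after relabeling
```

Note the difference between the two relabeling stages: relabel_configs runs before the scrape and manipulates target labels, while metric_relabel_configs runs after the scrape and can drop or rewrite individual series before they are stored.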

 

Origin www.cnblogs.com/xiangsikai/p/11288858.html