In Prometheus terminology, an endpoint that you can scrape is called an instance, and usually corresponds to a single process. A collection of instances with the same purpose, such as a process replicated for scalability or reliability, is called a job.
For example, an API server job with four replicated instances:

- job: api-server
  - instance 1: 1.2.3.4:5670
  - instance 2: 1.2.3.4:5671
  - instance 3: 5.6.7.8:5670
  - instance 4: 5.6.7.8:5671
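A job like this could be declared in the Prometheus configuration file with a static target list. The sketch below is illustrative, reusing the four example addresses above; the job name and addresses are only placeholders:

```yaml
scrape_configs:
  - job_name: "api-server"    # becomes the job label on every scraped series
    static_configs:
      - targets:              # each target becomes one instance
          - "1.2.3.4:5670"
          - "1.2.3.4:5671"
          - "5.6.7.8:5670"
          - "5.6.7.8:5671"
```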
Automatically generated labels and time series
When Prometheus scrapes a target, it automatically attaches some labels to the scraped time series to identify the scraped target:
- job: The configured job name that the target belongs to.
- instance: The <host>:<port> part of the target's URL that was scraped.
If either of these labels is already present in the scraped data, the behavior depends on the honor_labels configuration option. See the scrape configuration documentation for more information.
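As an illustration, the fragment below sketches how honor_labels might be set on a scrape job; the job name and target address are placeholders:

```yaml
scrape_configs:
  - job_name: "federate"
    honor_labels: true   # keep job/instance labels already present in the scraped data
    static_configs:
      - targets: ["1.2.3.4:5670"]
```

With honor_labels left at its default of false, conflicting labels in the scraped data are renamed rather than kept.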
For each instance scrape, Prometheus stores a sample in the following time series:
- up{job="<job-name>", instance="<instance-id>"}: 1 if the instance is healthy, i.e. reachable, or 0 if the scrape failed.
- scrape_duration_seconds{job="<job-name>", instance="<instance-id>"}: duration of the scrape.
- scrape_samples_post_metric_relabeling{job="<job-name>", instance="<instance-id>"}: the number of samples remaining after metric relabeling was applied.
- scrape_samples_scraped{job="<job-name>", instance="<instance-id>"}: the number of samples the target exposed.
- scrape_series_added{job="<job-name>", instance="<instance-id>"}: the approximate number of new series in this scrape. (New in v2.10.)
The up time series is useful for instance availability monitoring.
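In practice, availability monitoring on up often takes the form of an alerting rule. The rule file below is a sketch: the group name, alert name, duration, and labels are arbitrary choices, while the expression up == 0 matches any instance whose last scrape failed:

```yaml
groups:
  - name: availability
    rules:
      - alert: InstanceDown
        expr: up == 0       # the target failed its most recent scrape
        for: 5m             # only fire if it stays down for 5 minutes
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} of job {{ $labels.job }} is down"
```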