prometheus scrape_configs: complete template and parameter details

  • scrape_configs complete template
  • Parameter details
  • Reference links

The scrape_config section defines how Prometheus scrapes (pulls) metrics data from its targets.
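As a minimal sketch of how this looks in practice (the job names, host address, and port below are hypothetical and not from the original article), a scrape_configs block inside prometheus.yml might be:

scrape_configs:
  # Scrape Prometheus itself on the default /metrics path.
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  # Scrape a hypothetical node_exporter every 15 seconds.
  - job_name: 'node'
    scrape_interval: 15s
    static_configs:
      - targets: ['192.168.1.10:9100']   # example host, adjust to your environment

The full template below lists every field such a job may carry.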

scrape_configs complete template

# The job name assigned to scraped metrics by default.
job_name: <job_name>

# How frequently to scrape targets from this job.
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]
# Per-scrape timeout when scraping this job.
[ scrape_timeout: <duration> | default = <global_config.scrape_timeout> ]
# The HTTP resource path on which to fetch metrics from targets.
[ metrics_path: <path> | default = /metrics ]

# honor_labels controls how Prometheus handles conflicts between labels that are
# already present in scraped data and labels that Prometheus would attach
# server-side ("job" and "instance" labels, manually configured target
# labels, and labels generated by service discovery implementations).
#
# If honor_labels is set to "true", label conflicts are resolved by keeping label
# values from the scraped data and ignoring the conflicting server-side labels.
#
# If honor_labels is set to "false", label conflicts are resolved by renaming
# conflicting labels in the scraped data to "exported_<original-label>" (for
# example "exported_instance", "exported_job") and then attaching server-side
# labels. This is useful for use cases such as federation, where all labels
# specified in the target should be preserved.
#
# Note that any globally configured "external_labels" are unaffected by this
# setting. In communication with external systems, they are always applied only
# when a time series does not have a given label yet and are ignored otherwise.
[ honor_labels: <boolean> | default = false ]

# Configures the protocol scheme used for requests.
[ scheme: <scheme> | default = http ]

# Optional HTTP URL parameters.
params:
  [ <string>: [<string>, ...] ]

# Sets the `Authorization` header on every scrape request with the
# configured username and password.
basic_auth:
  [ username: <string> ]
  [ password: <string> ]

# Sets the `Authorization` header on every scrape request with
# the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <string> ]
# Sets the `Authorization` header on every scrape request with the bearer token
# read from the configured file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: /path/to/bearer/token/file ]

# Configures the scrape request's TLS settings.
tls_config:
  [ <tls_config> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# List of Azure service discovery configurations.
azure_sd_configs:
  [ - <azure_sd_config> ... ]
# List of Consul service discovery configurations.
consul_sd_configs:
  [ - <consul_sd_config> ... ]
# List of DNS service discovery configurations.
dns_sd_configs:
  [ - <dns_sd_config> ... ]
# List of EC2 service discovery configurations.
ec2_sd_configs:
  [ - <ec2_sd_config> ... ]
# List of OpenStack service discovery configurations.
openstack_sd_configs:
  [ - <openstack_sd_config> ... ]
# List of file service discovery configurations.
file_sd_configs:
  [ - <file_sd_config> ... ]
# List of GCE service discovery configurations.
gce_sd_configs:
  [ - <gce_sd_config> ... ]
# List of Kubernetes service discovery configurations.
kubernetes_sd_configs:
  [ - <kubernetes_sd_config> ... ]
# List of Marathon service discovery configurations.
marathon_sd_configs:
  [ - <marathon_sd_config> ... ]
# List of AirBnB's Nerve service discovery configurations.
nerve_sd_configs:
  [ - <nerve_sd_config> ... ]
# List of Zookeeper Serverset service discovery configurations.
serverset_sd_configs:
  [ - <serverset_sd_config> ... ]
# List of Triton service discovery configurations.
triton_sd_configs:
  [ - <triton_sd_config> ... ]

# List of labeled statically configured targets for this job.
static_configs:
  [ - <static_config> ... ]

# List of target relabel configurations.
relabel_configs:
  [ - <relabel_config> ... ]
# List of metric relabel configurations.
metric_relabel_configs:
  [ - <relabel_config> ... ]

# Per-scrape limit on number of scraped samples that will be accepted.
# If more than this number of samples are present after metric relabelling
# the entire scrape will be treated as failed. 0 means no limit.
[ sample_limit: <int> | default = 0 ]
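The honor_labels comment above matters mostly for federation-style jobs. A minimal sketch of such a job (the downstream Prometheus address and the match[] selector are hypothetical, not from the original article):

scrape_configs:
  - job_name: 'federate'
    honor_labels: true           # keep the original job/instance labels of the federated series
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="node"}'         # example selector, adjust to the series you want to pull
    static_configs:
      - targets: ['prometheus-slave:9090']   # hypothetical downstream Prometheus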

Parameter details

  • relabel_configs (see the example sketch below)
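A minimal relabel_configs sketch; the label names, regexes, and port below are illustrative assumptions, not part of the original article. relabel_configs rewrites target labels before a scrape happens, while metric_relabel_configs applies the same kind of rules to samples after the scrape.

relabel_configs:
  # Copy the discovered address into a custom label.
  - source_labels: [__address__]
    target_label: node_address
  # Drop any target whose "env" label equals "staging".
  - source_labels: [env]
    regex: staging
    action: drop
  # Rewrite the scrape port of whatever address was discovered to 9100.
  - source_labels: [__address__]
    regex: '([^:]+)(?::\d+)?'
    replacement: '${1}:9100'
    target_label: __address__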

Reference links
