Some applications charge per ingested metric time series, so you may need to filter out metrics you are not actively using. For example, when metrics are scraped from Prometheus endpoints and ingested into Splunk via the OpenTelemetry Collector, you can exclude metrics that carry a particular label or label combination.
The solution is to use metric_relabel_configs to drop the unwanted series. Metric relabeling is applied to scraped samples as the last step before ingestion.
Filtering based on a label
Given the following metrics, the relabel config below prevents series with the label group="ServiceD" from being ingested.
# TYPE node_supervisord_up gauge
node_supervisord_up{group="serviceA",name="serviceA"} 1
node_supervisord_up{group="serviceB",name="serviceB"} 1
node_supervisord_up{group="serviceC",name="serviceC"} 1
node_supervisord_up{group="ServiceD",name="serviceD"} 0
node_supervisord_up{group="ServiceE",name="serviceE"} 1
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: "otel-collector"
          scrape_interval: 30s
          static_configs:
            - targets: ["localhost:9001"]
          metric_relabel_configs:
            - source_labels: [ group ]
              regex: '^ServiceD$'
              action: drop
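The rule above matches a single label. To filter on a label combination, as mentioned in the introduction, list several labels in source_labels; Prometheus joins their values with the separator (";" by default) before applying the regex. A minimal sketch of such a rule, reusing the series above:

metric_relabel_configs:
  - source_labels: [ group, name ]
    # values are joined as "<group>;<name>" before the regex is applied
    regex: '^ServiceD;serviceD$'
    action: drop

This drops a series only when group="ServiceD" and name="serviceD" occur together; series matching only one of the two labels are kept.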
Filtering using pod annotations
In this case, the metric time series are filtered based on a pod annotation.
prometheus:
  config:
    scrape_configs:
      - job_name: opentelemetry-collector
        scrape_interval: 10s
        static_configs:
          - targets:
              - ${MY_POD_IP}:8888
      - job_name: k8s
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            regex: "true"
            action: keep
When both pods have the annotation prometheus.io/scrape: 'true', both time series are ingested.
When the second pod has the annotation prometheus.io/scrape: 'false', only one time series is ingested.
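For reference, the annotation lives in the pod metadata. A minimal pod manifest sketch (the pod name and image are illustrative, not from the original setup) that the keep rule above would match:

apiVersion: v1
kind: Pod
metadata:
  name: sample-app                  # illustrative name
  annotations:
    prometheus.io/scrape: "true"    # change to "false" to stop scraping this pod
spec:
  containers:
    - name: app
      image: sample-app:latest      # illustrative image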