Why Prometheus Failed to Scrape Monitors' Metrics
Prometheus failed to scrape metrics due to missing RBAC permissions. This guide explains the symptoms, diagnosis, and fix using Role and RoleBinding, with YAML examples and logs for reference.

Monitoring Kubernetes applications with Prometheus should be straightforward using the Prometheus Operator and its custom resources like ServiceMonitor[1]. However, misconfigurations can lead to a frustrating scenario: your app’s /metrics endpoint is up and reachable, yet Prometheus shows no trace of it in the targets list.
This guide walks through a real troubleshooting journey of such an issue – specifically, a Prometheus instance failing to scrape an application's metrics – and how we diagnosed and fixed the problem. We’ll cover the symptoms (what went wrong), delve into the Prometheus and ServiceMonitor configuration, identify the root cause (an RBAC permissions gap), and demonstrate the solution with example YAML manifests. (Throughout this guide, placeholder names like my-app and metrics-namespace are used instead of real cluster names.)
Symptoms: No Active Targets and Missing Metrics
The first clue that something was wrong came from Prometheus’s own interface. In the Prometheus Targets UI (the /targets page), the expected scrape target for my-app was nowhere to be found – effectively “no active targets” were listed for that job[2]. This means Prometheus wasn’t actively scraping our application’s metrics at all. Consequently, Grafana dashboards and PromQL queries for my-app metrics were blank.
Some telltale signs of this issue include:
- Prometheus Targets UI Empty: On the Status → Targets page in Prometheus, the job associated with my-app is missing or shows 0/0 targets up (no targets discovered). In our case, Prometheus’s UI indicated no active targets for the application job[2:1].
- No Metrics in Queries: Queries for my_app_metric (an example metric) returned no data because Prometheus wasn’t scraping anything from my-app.
- No Alerts Triggered: If you expected alerts on my-app metrics, they remained quiet – simply because no data was being ingested.
At this point, we confirmed the application itself was fine (the /metrics endpoint responded when accessed directly), so the problem lay in the monitoring pipeline.
Investigation: ServiceMonitor and Configuration Checks
Knowing that we’re using the Prometheus Operator, a logical first step was to inspect the ServiceMonitor configuration for my-app. The Prometheus Operator uses ServiceMonitor CRDs to discover and scrape Kubernetes services[1:1]. In our setup, we had created a ServiceMonitor (let’s call it my-app-metrics) intended to scrape the metrics from my-app’s Service. We double-checked the YAML to ensure it was configured correctly. Key fields in a ServiceMonitor include:
- selector – which labels it looks for on the target Service.
- namespaceSelector – which namespace(s) to search for that Service.
- endpoints – the port and path to scrape on the Service.
For example, a ServiceMonitor might look like:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-metrics
  namespace: metrics-namespace
spec:
  selector:
    matchLabels:
      app: "my-app"
  namespaceSelector:
    matchNames:
    - "metrics-namespace"
  endpoints:
  - port: "http"
    path: "/metrics"
    interval: 15s
```
In parallel, we inspected the Kubernetes Service for my-app (in metrics-namespace). It’s important that the Service’s labels and port name align with what the ServiceMonitor expects. Common pitfalls at this stage include mismatched labels or missing port names. In our case, we verified the following:
- The Service had the label app: my-app (matching the ServiceMonitor’s selector). A ServiceMonitor finds targets by referencing a Service’s labels, not the Deployment or Pod labels[3]. (If the Service’s labels don’t match, Prometheus won’t even register the target.)
- The Service had a port name (e.g. http or metrics) that matched the endpoints.port field in the ServiceMonitor. The port name in the Service definition is required for the ServiceMonitor to work[3:1]. (Using the port name is correct; using a numeric port in the ServiceMonitor YAML would not work[4].)
- The ServiceMonitor’s namespaceSelector included the metrics-namespace (so that Prometheus knows to look in that namespace for the Service endpoints).
These configuration points are crucial. If any of them are wrong, Prometheus might ignore the ServiceMonitor or fail to find any targets. In fact, the official troubleshooting docs note that a ServiceMonitor whose selector doesn’t match any Service won’t show up on the targets page at all[5]. Our configs, however, all looked correct on paper – the Service existed with the right labels and port, and the ServiceMonitor was selecting it properly.
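For reference, a Service that satisfies these requirements might look like the following sketch (using our placeholder names; the port number 8080 is an assumption for illustration, not taken from our cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: metrics-namespace
  labels:
    app: "my-app"        # must match the ServiceMonitor's selector.matchLabels
spec:
  selector:
    app: "my-app"        # selects the application's Pods
  ports:
  - name: http           # this port *name* is what endpoints.port refers to
    port: 8080           # assumed value; use whatever your app actually exposes
    targetPort: 8080
```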
Prometheus Targets Still Missing: Looking at Logs
With the configs seemingly in order, we moved on to check Prometheus’s logs for clues. Since we used the Prometheus Operator, the Prometheus server runs as a pod (managed via a Prometheus CR). We fetched the logs from that Prometheus pod to see if it was encountering errors related to our my-app target.
Almost immediately, the logs revealed the culprit. There were repeated errors about missing permissions when Prometheus attempted to discover targets in the metrics-namespace:
```
level=error component=k8s_client_runtime msg="... Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints is forbidden: User \"system:serviceaccount:monitoring:prometheus-k8s\" cannot list resource \"endpoints\" in API group \"\" in the namespace \"dummyapp\""
level=error component=k8s_client_runtime msg="... Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:serviceaccount:monitoring:prometheus-k8s\" cannot list resource \"services\" ... in the namespace \"metrics-namespace\""
```
(The log above is an example based on a similar scenario; it shows a Prometheus service account lacking permissions on a target namespace. In our case, the namespace was metrics-namespace and the Prometheus service account was prometheus-k8s in the monitoring namespace.)
These errors made the root cause clear: Prometheus was failing to scrape the target due to an RBAC permission issue. The Prometheus server’s Kubernetes service account did not have the rights to discover the my-app Service or its Endpoints in the metrics-namespace. As a result, Prometheus couldn’t find any targets to scrape, even though the ServiceMonitor was properly configured. This is a common reason for missing targets: the Prometheus ServiceAccount lacks permission to get/list the necessary resources in that namespace[6].
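If you want to confirm a permission gap like this directly against the API server, one option is to submit a SubjectAccessReview for the Prometheus service account – a minimal sketch, assuming our placeholder names (submit it with kubectl create -f sar-check.yaml -o yaml and inspect status.allowed in the response):

```yaml
# sar-check.yaml – ask the API server whether Prometheus's service
# account is allowed to list Endpoints in the application namespace.
# (Hypothetical file name; the account and namespace names below are
# our placeholders.)
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: system:serviceaccount:monitoring-namespace:prometheus-k8s
  resourceAttributes:
    group: ""          # core API group
    resource: endpoints
    verb: list
    namespace: metrics-namespace
```

The same check can be done from the command line with kubectl auth can-i list endpoints --as=system:serviceaccount:monitoring-namespace:prometheus-k8s -n metrics-namespace.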
Understanding the Root Cause: RBAC and Cross-Namespace Scraping
Why did this happen? Kubernetes RBAC (Role-Based Access Control) was blocking Prometheus from accessing the Service and Endpoints of my-app. In our cluster, Prometheus runs in a central monitoring namespace (monitoring-namespace) under a dedicated service account (e.g. prometheus-k8s). By default, the Prometheus Operator’s setup might grant it permissions to monitor certain namespaces (for example, via a ClusterRole). But if your application’s namespace isn’t covered by those permissions, Prometheus will be forbidden from discovering targets there – exactly the error we saw in the logs.
Illustration: How Prometheus Operator discovers targets via ServiceMonitors. The Prometheus instance (yellow, in a monitoring namespace) selects ServiceMonitor objects (blue) according to its configuration. Each ServiceMonitor specifies a target Service (green) by label selectors and a port name[3:2]. The Service in turn selects the app’s Pods (red) via labels. Crucially, the Prometheus ServiceAccount needs permission to list/watch the Service, Endpoints, and Pods in the target namespace – otherwise, the scrape target will not be discovered[6:1].
In our case, the Prometheus custom resource was configured to pick up ServiceMonitors from all namespaces (or specifically from metrics-namespace), so it knew about the my-app-metrics ServiceMonitor. The operator added the scrape configuration for my-app into Prometheus. However, Prometheus itself couldn’t execute that scrape config because Kubernetes denied it access to the necessary endpoints. This mismatch can be tricky: everything looks correct (no errors when applying the ServiceMonitor, etc.), but behind the scenes Prometheus’s requests to the K8s API are being forbidden by RBAC.
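For context, the relevant portion of such a Prometheus custom resource might look roughly like this (a sketch of a typical kube-prometheus-style setup with our placeholder names, not our exact manifest):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring-namespace
spec:
  serviceAccountName: prometheus-k8s
  # Empty selectors mean "all ServiceMonitors in all namespaces":
  # the operator generates scrape configs for them, but Prometheus
  # still needs RBAC access to each target namespace at scrape time.
  serviceMonitorSelector: {}
  serviceMonitorNamespaceSelector: {}
```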
To summarize the root cause: the Prometheus service account did not have the rights to list or get the Service, Endpoints, and Pod objects in my-app’s namespace. Prometheus therefore had no knowledge of the endpoints to scrape, leading it to show zero active targets for that job.
The Fix: Granting Prometheus Access to the Namespace
The solution is to update our RBAC settings so that Prometheus is allowed to discover targets in metrics-namespace. There are two typical approaches to achieve this:
- Grant namespace-specific Role and RoleBinding – Create a Role in metrics-namespace that permits reading the necessary resources, and bind it to Prometheus’s service account. This is a scoped approach, granting minimal privileges only for that namespace.
- Expand the ClusterRole – If using a cluster-wide monitoring setup, ensure the Prometheus ClusterRole includes the rules to list/watch Services, Endpoints, and Pods (and that the service account is bound to that ClusterRole). This gives Prometheus access to monitor all namespaces (or a broad set of them).
In our scenario, we chose the first approach to limit scope. We created a Role and RoleBinding for the monitoring-namespace:prometheus-k8s service account in the metrics-namespace. Below are example manifests (using our placeholder names):
```yaml
# metrics-namespace-prometheus-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prometheus-k8s
  namespace: metrics-namespace
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
```

```yaml
# metrics-namespace-prometheus-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-k8s
  namespace: metrics-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-k8s
subjects:
- kind: ServiceAccount
  name: prometheus-k8s
  namespace: monitoring-namespace
```
These YAMLs grant the Prometheus service account (monitoring-namespace:prometheus-k8s) permission to get, list, and watch the Services, Endpoints, and Pods in metrics-namespace. This aligns with the recommended fix from Red Hat’s knowledge base, which provides the same Role/RoleBinding pattern for allowing Prometheus to scrape user-defined namespaces[7].
Alternatively, if you prefer a cluster-wide solution (or if you installed Prometheus via a Helm chart that uses cluster roles), ensure that the Prometheus ClusterRole includes these same permissions. For example, one user found that adding pods, services, and endpoints with list/watch verbs to the cluster role fixed missing targets across multiple namespaces[8]. Be cautious with cluster roles – they are convenient but grant Prometheus access to all namespaces, so use them only where that is consistent with your security requirements.
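If you do go the cluster-wide route, the ClusterRole and ClusterRoleBinding would look roughly like the following sketch (placeholder names again; adapt them to your installation rather than copying verbatim):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-k8s
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-k8s
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-k8s
subjects:
- kind: ServiceAccount
  name: prometheus-k8s
  namespace: monitoring-namespace
```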
With the new Role and RoleBinding in place, we restarted the Prometheus pod (to expedite reloading of its configuration and credentials). Almost immediately, the logs became clean – no more “forbidden” errors. In the Prometheus UI, our my-app target appeared in the Active Targets list and was marked UP. Metrics from my-app began flowing in, and our Grafana dashboards lit up with data.
Solution Summary and Lessons Learned
Root Cause
Prometheus was not scraping my-app because it lacked RBAC permission to discover the Service and Endpoints in the metrics-namespace. The ServiceMonitor and Service were configured correctly, but Kubernetes security prevented Prometheus from seeing the target. This is a classic case of an overlooked RBAC rule causing metrics to “disappear.”
Solution
We resolved the issue by updating RBAC so that Prometheus’s service account can list/watch the necessary resources in the application’s namespace. In practice, this meant adding a Role & RoleBinding for the metrics-namespace (as shown above) to grant access to Services, Endpoints, and Pods. After applying these, Prometheus successfully discovered and scraped the my-app metrics.
Key Takeaways for Troubleshooting:
- Check the Prometheus UI and Config: If metrics are missing, look at Prometheus’s Targets (/targets) or Service Discovery page. If a job is not listed or shows 0 targets, that’s a red flag. Ensure your ServiceMonitor is detected by Prometheus (the Prometheus CR’s serviceMonitorSelector must match, etc.). Also verify the Service exists with correct labels and port naming.
- Verify ServiceMonitor–Service Alignment: Make sure the ServiceMonitor’s selector and namespace match your Service. A common mistake is mismatched labels – the ServiceMonitor selects Services by labels, so both the Service and ServiceMonitor need a unique label (e.g. app: my-app) in common. The port name in the Service should match the ServiceMonitor’s endpoints.port field.
- Inspect Prometheus Logs for RBAC Errors: The presence of “cannot list resource … is forbidden” errors in Prometheus logs is a strong indicator of missing RBAC permissions. This tells you Prometheus is blocked from scraping a namespace.
- Apply/Adjust RBAC Permissions: Grant Prometheus the needed read access. Either scope it narrowly (Role & RoleBinding per namespace) or update the ClusterRole used by Prometheus. The rules must include at least get, list, watch on endpoints, services, and pods in the target namespaces (since those are what Prometheus needs to discover and scrape targets).
By methodically checking each of these points, you can zero in on why Prometheus isn’t scraping a given target. In our case, once the RBAC was corrected, Prometheus could do what it was meant to – scrape all the metrics from my-app. The “no active targets” problem disappeared, and our monitoring returned to normal.
Finally, this scenario is a good reminder that when something is “off” in your Prometheus monitoring, it’s often a mix of configuration and permissions. It pays to double-check both: the YAML definitions and the RBAC policies. With the right access in place, Prometheus will faithfully scrape your application metrics as expected. Enjoy your fully-instrumented Kubernetes monitoring, and happy debugging!
References:
- https://www.linkedin.com/posts/naineshparekh_troubleshootingthursday-devops-observability-activity-7354398417646772224-eebE/
- Using textual port number instead of port name – Prometheus Operator Troubleshooting
- It is in the configuration but not on the Service Discovery page – Prometheus Operator Troubleshooting
- Overview of ServiceMonitor tagging and related elements – Prometheus Operator Troubleshooting
- Prometheus serviceaccount missing permissions to monitor services in user-defined namespaces
- https://stackoverflow.com/questions/66216133/cant-see-nginx-ingress-metrics-in-prometheus