How to configure pod scraped annotations #611
Replies: 2 comments
- Tracked by issue #613, which has been resolved.
- Hi Doyle, glad to hear you worked out the issue! Your suggestions for improving the documentation sound helpful. If others face a similar problem, your detailed steps and updates may provide valuable insights. Feel free to share any additional findings or questions you may have. Cheers!
-
Update: I worked out my issue; see my suggestions and >> updates inline below. Hope this helps others.
Hello,
We are using the managed Prometheus service in our AKS cluster. I would like to add annotations to my pods to configure Prometheus scraping. I have questions on how to do that, plus some follow-ups on how to troubleshoot. I've read over the default/custom/troubleshooting topics.
My goal is to have the minimal ingestion targets enabled (cadvisor, kubelet, etc.) and, in addition, use annotations to indicate which pods in selected namespaces should also be scraped.
Here are the steps I'm following:
I have some nginx pods in the cluster that already have the prometheus.io annotations on them (prometheus.io/scrape: "true" and prometheus.io/port: "10254"). They are in the namespace ingress-nginx. My deployed pods are in the namespace "dev" and do not have annotations (yet).
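For concreteness, annotations of this shape are what I mean (a sketch only; the pod name and image are placeholders, not my actual workload):

```shell
# Sketch: a pod carrying the prometheus.io annotations for annotation-based
# scraping. Name and image are placeholders.
kubectl apply -n dev -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo            # placeholder name
  annotations:
    prometheus.io/scrape: "true"   # opt this pod in to annotation-based scraping
    prometheus.io/port: "10254"    # port where the metrics endpoint listens
spec:
  containers:
    - name: app
      image: nginx                 # placeholder image
      ports:
        - containerPort: 10254
EOF
```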
I copied the ama-metrics-settings-configmap.yaml from this repo and edited these three fields:
prometheus-collector-settings: |-
  cluster_alias = "DevCluster"
  ...
pod-annotation-based-scraping: |-
  podannotationnamespaceregex = "|dev|ingress-nginx"
  ...
debug-mode: |-
  enabled = true
I have ensured I do not have a configmap named ama-metrics-prometheus-config (see Q2), and there are no other configmaps with the ama-metrics prefix. I am assuming these settings are all I need, and that I would only use the Prometheus config for static scrape-config targets.
I upload the settings as follows:
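(The exact command got lost when I pasted this in; it was along these lines, assuming the edited file is saved locally as ama-metrics-settings-configmap.yaml:)

```shell
# Sketch: apply the edited settings configmap into kube-system,
# then confirm it landed. File name assumed from the repo.
kubectl apply -f ama-metrics-settings-configmap.yaml -n kube-system
kubectl get configmap ama-metrics-settings-configmap -n kube-system
```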
(original content with follow ups):
I don't see a pod restart, which seems odd. I don't see errors in the logs, but I also don't see obvious new activity on merging the config (as I would if I loaded a Prometheus configmap). If I port-forward 9090 to view Prometheus on my ama-metrics and ama-metrics-node pods, I see "Configuration reload unsuccessful", which doesn't seem good. If I delete the configmap, the expected list of minimal ingestion targets does not reappear in Prometheus as scrape targets. I only see kube-state-metrics on ama-metrics, and the four other default kube-system targets on ama-metrics-node.
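For reference, this is roughly how I've been port-forwarding (a sketch; the pod name suffix is generated per cluster, so substitute your own, and the rsName label is my assumption from the troubleshooting docs):

```shell
# Sketch: find the ama-metrics replica pod, then forward the Prometheus UI.
# The label selector and container layout are assumptions, not verified.
kubectl get pods -n kube-system -l rsName=ama-metrics
kubectl port-forward -n kube-system <ama-metrics-pod-name> 9090:9090
# Then browse http://localhost:9090/targets and http://localhost:9090/config locally.
```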
Q1: Are there any obvious issues with my configmap name, settings, or commands above?
Q2: Do I need to additionally add a job in the Prometheus configmap for service discovery? I wasn't clear whether setting podannotationnamespaceregex was enough (as long as annotations are present on pods in the namespace).
kubectl create configmap ama-metrics-prometheus-config --from-file=prometheus-config -n kube-system
Contents of prometheus-config:
Q3: Do you have a simple "hello world" example of pod based annotation scraping in managed prometheus?
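To be concrete, the kind of "hello world" I'm after would look something like this (my sketch of what I expect to work, not something confirmed; names, image, and port are placeholders):

```shell
# Sketch: a deployment in the "dev" namespace whose pods carry the
# prometheus.io annotations, which I'd expect the pod-annotation-based-scraping
# regex above to pick up. Names and image are placeholders.
kubectl apply -n dev -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-metrics                  # placeholder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-metrics
  template:
    metadata:
      labels:
        app: hello-metrics
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"     # assumed metrics port
    spec:
      containers:
        - name: app
          image: my-metrics-app:latest # placeholder image exposing metrics on 8080
          ports:
            - containerPort: 8080
EOF
```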
Q4: Can I expect to be able to combine pod annotation scraping with the scraping of the minimal ingestion kube-system targets? What configmaps are needed for that to work as expected?
Troubleshooting questions:
Is there a way to validate the ama-metrics-settings-configmap after editing, similar to the promconfigvalidator process?
Should I see a pod restart when I create or delete the ama-metrics settings configmap? (I do not see any restarts in kube-system, which seems odd.) Should I be forcing nodes to restart after creating or deleting settings (and if so, which nodes)?
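For context, this is roughly what I've been running to look for restarts and config activity (a sketch; the container name prometheus-collector is my assumption about the addon pod layout):

```shell
# Sketch: check ama-metrics pods for restarts and inspect collector logs.
# Pod name is a placeholder; container name is an assumption.
kubectl get pods -n kube-system | grep ama-metrics
kubectl describe pod -n kube-system <ama-metrics-pod-name> | grep -i restart
kubectl logs -n kube-system <ama-metrics-pod-name> -c prometheus-collector
```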
Thanks in advance,
Doyle