OCPBUGS-60954

VPA Operator/kube-state-metrics error - got nil while resolving path

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Normal
    • Affects Version: 4.16
    • Component: Monitoring
    • Severity: Moderate

      Description of problem:

      VPA Operator / kube-state-metrics error - got nil while resolving path    

      Version-Release number of selected component (if applicable):

      VerticalPodAutoscaler Operator v4.16.0-202507081835 on OpenShift 4.16.43

      How reproducible:

      Yes    

      Steps to Reproduce:

      [1] Installed the VerticalPodAutoscaler Operator `v4.16.0-202507081835` provided by Red Hat in our OpenShift cluster `v4.16.43`.
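
      For reference, a hypothetical OLM Subscription for this install is sketched below; the package, channel, and namespace names are assumptions and may differ from what was actually used:
      ~~~
      # Illustrative OLM Subscription for the VerticalPodAutoscaler Operator.
      # Package name, channel, and namespace are assumed, not taken from the cluster.
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: vertical-pod-autoscaler
        namespace: openshift-vertical-pod-autoscaler
      spec:
        channel: stable
        name: vertical-pod-autoscaler        # OLM package name (assumed)
        source: redhat-operators
        sourceNamespace: openshift-marketplace
        installPlanApproval: Automatic
      ~~~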
      
      [2] Configured some VPA resources that do not set `resourcePolicy`, for example:
      ~~~
      apiVersion: autoscaling.k8s.io/v1
      kind: VerticalPodAutoscaler
      metadata:
        name: abwicklungsort-vpa
        namespace: t41-sl
      spec:
        targetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: abwicklungsort
        updatePolicy:
          updateMode: 'Off'
      status:
        conditions:
          - lastTransitionTime: '2025-07-21T14:15:11Z'
            status: 'True'
            type: RecommendationProvided
        recommendation:
          containerRecommendations:
            - containerName: abwicklungsort
              lowerBound:
                cpu: 25m
                memory: '671472660'
              target:
                cpu: 25m
                memory: '764046746'
              uncappedTarget:
                cpu: 25m
                memory: '764046746'
              upperBound:
                cpu: 25m
                memory: '853387169'
      ~~~
      
      [3] Check the `kube-state-metrics` logs; there are a lot of errors regarding the `minAllowed` and `maxAllowed` parameters (the full error lines are under Additional info). According to the VPA CRD, `minAllowed` and `maxAllowed` are optional.
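
      The metric names in the errors (`kube_customresource_verticalpodautoscaler_spec_resourcepolicy_...`) suggest they come from the kube-state-metrics CustomResourceState feature, which resolves a configured field path on every VerticalPodAutoscaler object. A minimal sketch of such a configuration entry, written here only to illustrate where the `[spec,resourcePolicy,containerPolicies]` path in the errors comes from (the configuration actually shipped with the operator may differ):
      ~~~
      # Illustrative kube-state-metrics CustomResourceState entry; not the exact
      # configuration deployed on the cluster.
      kind: CustomResourceStateMetrics
      spec:
        resources:
          - groupVersionKind:
              group: autoscaling.k8s.io
              version: v1
              kind: VerticalPodAutoscaler
            metrics:
              - name: verticalpodautoscaler_spec_resourcepolicy_container_policies_minallowed_cpu
                help: Minimum CPU the VPA may recommend for a container
                each:
                  type: Gauge
                  gauge:
                    # kube-state-metrics resolves this path on every VPA object; when
                    # spec.resourcePolicy is unset (it is optional in the CRD), the lookup
                    # hits nil and registry_factory.go logs "got nil while resolving path".
                    path: [spec, resourcePolicy, containerPolicies]
                    labelsFromPath:
                      container: [containerName]
                    valueFrom: [minAllowed, cpu]
      ~~~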
      
          

      Actual results:

      - Errors appear in the Prometheus logs in `openshift-monitoring`, and `PrometheusDuplicateTimestamps` alerts are firing.
      ~~~ 
      ts=2025-07-30T08:23:10.365Z caller=scrape.go:1738 level=warn component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://100.75.4.x:8443/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=3360   
      ~~~ 

      Expected results:

      The `PrometheusDuplicateTimestamps` alert should not fire, and kube-state-metrics should not log errors for VPA objects that omit the optional `minAllowed`/`maxAllowed` fields.

      Additional info:

      Checking the kube-state-metrics logs, there are a lot of errors regarding the `minAllowed` and `maxAllowed` parameters:

      ~~~
      E0730 06:30:55.062178 1 registry_factory.go:685] "kube_customresource_verticalpodautoscaler_spec_resourcepolicy_container_policies_minallowed_cpu" err="[spec,resourcePolicy,containerPolicies]: got nil while resolving path"
      E0730 06:30:55.062223 1 registry_factory.go:685] "kube_customresource_verticalpodautoscaler_spec_resourcepolicy_container_policies_minallowed_memory" err="[spec,resourcePolicy,containerPolicies]: got nil while resolving path"
      E0730 06:30:55.062243 1 registry_factory.go:685] "kube_customresource_verticalpodautoscaler_spec_resourcepolicy_container_policies_maxallowed_cpu" err="[spec,resourcePolicy,containerPolicies]: got nil while resolving path"
      E0730 06:30:55.062260 1 registry_factory.go:685] "kube_customresource_verticalpodautoscaler_spec_resourcepolicy_container_policies_maxallowed_memory" err="[spec,resourcePolicy,containerPolicies]: got nil while resolving path"
      E0730 06:30:55.062270 1 registry_factory.go:685] "kube_customresource_verticalpodautoscaler_spec_resourcepolicy_container_policies_minallowed_cpu" err="[spec,resourcePolicy,containerPolicies]: got nil while resolving path"
      E0730 06:30:55.062319 1 registry_factory.go:685] "kube_customresource_verticalpodautoscaler_spec_resourcepolicy_container_policies_minallowed_memory" err="[spec,resourcePolicy,containerPolicies]: got nil while resolving path"
      ~~~

      According to the VPA CRD, those minAllowed and maxAllowed are optional.
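
      Since the failing lookups are for `spec.resourcePolicy.containerPolicies`, explicitly declaring that optional section should make the path resolvable. The following VPA is only an illustration of that idea with made-up bounds, not a verified workaround:
      ~~~
      apiVersion: autoscaling.k8s.io/v1
      kind: VerticalPodAutoscaler
      metadata:
        name: abwicklungsort-vpa
        namespace: t41-sl
      spec:
        targetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: abwicklungsort
        updatePolicy:
          updateMode: 'Off'
        # Explicitly declaring the otherwise-optional resourcePolicy so that
        # spec.resourcePolicy.containerPolicies is non-nil. Bounds are example values.
        resourcePolicy:
          containerPolicies:
            - containerName: abwicklungsort
              minAllowed:
                cpu: 25m
                memory: 256Mi
              maxAllowed:
                cpu: '1'
                memory: 2Gi
      ~~~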
