OpenShift Bugs / OCPBUGS-62496

NodePool gets stuck in updating state and complains about more than a single configmap status


    • Bug
    • Resolution: Unresolved
    • Priority: Normal
    • Affects Version: 4.22
    • Component: Node Tuning Operator
    • Quality / Stability / Reliability
    • Severity: Important
    • Stage
    • Status: In Progress
    • Release Note Type: Bug Fix
    • Release Note Text: Trying to replace a PerformanceProfile ConfigMap for a NodePool would cause the controller to get stuck in a reconciliation loop, unable to apply the new profile. This patch fixes the issue and allows smooth PerformanceProfile replacement.

      Description of problem:

      When updating the tuningConfig of a NodePool to point to a different ConfigMap, the NodePool becomes stuck in the Updating state. The error indicates that more than one ConfigMap status is detected, even though the tuningConfig list references only a single ConfigMap.
      Additionally, stale status ConfigMaps remain for each NodePool that was updated. The issue resolves only after manually deleting the old ConfigMap.
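
      The manual workaround is to delete the old user-provided ConfigMap so that only one tuningConfig ConfigMap remains associated with the NodePool. A minimal sketch, assuming the old ConfigMap is named performance and lives in the clusters namespace next to the NodePool (the name is an assumption for illustration, not a value taken from this report):

          # Remove the previously referenced PerformanceProfile ConfigMap (assumed name).
          oc -n clusters delete configmap performance

          # The stale mirrored copies in the hosted control plane namespace can be listed with:
          oc -n clusters-europa get cm | grep performance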

      Version-Release number of selected component (if applicable):

      MCE version: 2.9
      Hypershift operator Image: registry.redhat.io/multicluster-engine/hypershift-rhel9-operator@sha256:f8c3898e29f8c0a20c3be92bc4ee5a9443b9fc8218db95ba541fe3e57a89c40d
      

      How reproducible:

      always    

      Steps to Reproduce:

          1. Create a NodePool and reference a PerformanceProfile ConfigMap in its tuningConfig.
          2. Update the NodePool to reference a different ConfigMap instead (see the sketch below).
          3. Observe the NodePool status.
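
      A minimal sketch of steps 1-2, assuming the old and new PerformanceProfile ConfigMaps are named performance and performance-2 and sit in the clusters namespace alongside the NodePool (the europa NodePool and clusters namespace come from the logs below; the ConfigMap names are assumptions for illustration):

          # Step 1: reference the original PerformanceProfile ConfigMap in spec.tuningConfig.
          oc -n clusters patch nodepool europa --type=merge \
            -p '{"spec":{"tuningConfig":[{"name":"performance"}]}}'

          # Step 2: swap the reference to the new ConfigMap; spec.tuningConfig is a list
          # of ConfigMap references, so the old entry is replaced rather than appended.
          oc -n clusters patch nodepool europa --type=merge \
            -p '{"spec":{"tuningConfig":[{"name":"performance-2"}]}}'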

      Actual results:

      [root@helix33 ~]#  oc describe np/europa | tail -n 5  
      Events:
        Type     Reason          Age                    From                 Message
        ----     ------          ----                   ----                 -------
        Warning  ReconcileError  29m (x3 over 29m)      nodepool-controller  failed to reconcile NTO: failed to mirror configs: failed to validate mirrored configs: more than a single KubeletConfig ConfigMap is associated with NodePool europa. please delete the redundant configs: NTO generated KubeletConfigs [kubeletconfig-performance-europa kubeletconfig-performance-2-europa] user provided KubeletConfigs []
      
      [root@helix33 ~]# oc logs pod/operator-7c7964ff7b-mql8s -n hypershift
      
      {"level":"error","ts":"2025-09-30T17:20:06Z","msg":"Failed to reconcile NodePool","controller":"nodepool","controllerGroup":"hypershift.openshift.io","controllerKind":"NodePool","NodePool":
      
      {"name":"europa","namespace":"clusters"},"namespace":"clusters","name":"europa","reconcileID":"fcc1244a-56cd-410a-9f3d-70ac9fe2eee5","error":"failed to reconcile NTO: failed to mirror configs: failed to validate mirrored configs: more than a single KubeletConfig ConfigMap is associated with NodePool europa. please delete the redundant configs: NTO generated KubeletConfigs [kubeletconfig-performance-2-europa kubeletconfig-performance-europa] user provided KubeletConfigs []","stacktrace":"github.com/openshift/hypershift/hypershift-operator/controllers/nodepool.(*NodePoolReconciler).Reconcile\n\t/hypershift/hypershift-operator/controllers/nodepool/nodepool_controller.go:213\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile\n\t/hypershift/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:116\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler\n\t/hypershift/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:303\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem\n\t/hypershift/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:263\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2\n\t/hypershift/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:224"} 
      [root@helix33 ~]# oc get cm -A | grep performance
      clusters-europa                                    kubeletconfig-performance-2-europa                                1      27m
      clusters-europa                                    kubeletconfig-performance-europa                                  1      27h
      clusters-europa                                    machineconfig-performance-2-europa                                1      27m
      clusters-europa                                    machineconfig-performance-europa                                  1      27h
      clusters-europa                                    performance-2-europa                                              1      27m
      clusters-europa                                    performance-europa                                                1      27h
      clusters-europa                                    status-performance-2-europa                                       1      27m
      clusters-europa                                    status-performance-europa                                         1      27h
      clusters-europa                                    tuned-performance-2-europa                                        1      27m
      clusters-europa                                    tuned-performance-europa                                          1      27h 

      Expected results:

          The NodePool should update, and the new ConfigMap configuration should be applied to the hosted cluster (see the sketch below).
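
      One way to confirm the new configuration rolled out is to inspect the NodePool conditions; a minimal sketch reusing the europa NodePool from this report (the jsonpath simply lists every condition rather than assuming a specific condition name):

          # List all NodePool conditions, their status, and messages.
          oc -n clusters get nodepool europa \
            -o jsonpath='{range .status.conditions[*]}{.type}={.status}{": "}{.message}{"\n"}{end}'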

      Additional info:

          
