OpenShift Bugs / OCPBUGS-18566

NTO fails to apply inplace updates for Hypershift/Kubevirt platform nodepool


      Description of problem:

      NTO fails to fully apply its configuration in place on the HyperShift/KubeVirt platform. When a NodePool has 2 replicas, only one of the replicas receives the update.

      Version-Release number of selected component (if applicable):

      4.14

      How reproducible:

      100%

      Steps to Reproduce:

      1. Run the hypershift repo's "NTOMachineConfigRolloutTest" e2e test with the KubeVirt platform.
      

      Actual results:

      Only one KubeVirt worker node receives the NTO config.

      Expected results:

      The NTO config is applied to all KubeVirt worker nodes.

      Additional info:

      This NTO PR merged around the time the failure began showing up in our HyperShift/KubeVirt CI: https://github.com/openshift/cluster-node-tuning-operator/pull/775

      Also, the Node Tuning Operator log shows this error for the node that does not receive the NTO config:

      E0905 16:34:39.911280       1 controller.go:197] unable to sync(profile/openshift-cluster-node-tuning-operator/example-stpg2-test-ntomachineconfig-inplace-c3dfa3c6-trzpb) requeued (6): failed to sync Profile example-stpg2-test-ntomachineconfig-inplace-c3dfa3c6-trzpb: failed to update Profile example-stpg2-test-ntomachineconfig-inplace-c3dfa3c6-trzpb: not all 2 Nodes in NodePool example-stpg2-test-ntomachineconfig-inplace agree on bootcmdline: hugepagesz=2M hugepages=4

       

              jmencak Jiri Mencak
              rhn-engineering-dvossel David Vossel (Inactive)
              Liquan Cui Liquan Cui
