OpenShift Bugs / OCPBUGS-29193

Nondeterministic application of kubeletconfigs


    • Type: Bug
    • Resolution: Duplicate
    • Priority: Normal
    • Fix Version: None
    • Affects Version: 4.13
    • Component: Node / Kubelet

      This is a clone of issue OCPBUGS-26557. The following is the description of the original issue:

      Description of problem:

      ARO supplies a platform kubeletconfig to enable certain features; currently we use it to enable automatic node sizing (autoSizingReserved). Customers want the ability to customize podPidsLimit, and we have directed them to configure a second kubeletconfig.

      When these kubeletconfigs are rendered into machineconfigs, the order of their application is nondeterministic: the MCs are suffixed with an increasing serial number based on the order in which the kubeletconfigs were created, and the MC rendered last takes precedence for overlapping fields. This makes it impossible for the customer to ensure their PIDs limit is applied while still allowing ARO to maintain our platform defaults.
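
      For illustration, the generated MachineConfig names on a worker pool look roughly like this (a sketch based on the names described in this report; the exact suffixes depend on creation order):

      # List the kubeletconfig-generated MachineConfigs for the worker pool.
      # The numeric suffix only reflects creation order; the MC with the highest
      # suffix is rendered last and wins for overlapping fields.
      oc get machineconfig | grep generated-kubelet
      #   99-worker-generated-kubelet       <- first KubeletConfig created (no suffix)
      #   99-worker-generated-kubelet-1     <- second KubeletConfig created
      #   99-worker-generated-kubelet-2     <- third KubeletConfig created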

      We need a way of supplying platform defaults while still allowing the customer to make supported modifications in a way that does not risk being reverted during upgrades or other maintenance.

      This issue has manifested in two different ways: 

      During an upgrade from 4.11.31 to 4.12.40, a cluster had the order of its kubeletconfig-rendered machine configs reversed. We think that in older versions the initial kubeletconfig did not get an mc-name-suffix annotation applied, but rendered to "99-worker-generated-kubelet" (no suffix). The customer-provided kubeletconfig rendered with the suffix "-1". During the upgrade, the MCO saw the initial kubeletconfig as a new one and assigned it the suffix "-2", effectively reversing their order. See the RCS document https://docs.google.com/document/d/19LuhieQhCGgKclerkeO1UOIdprOx367eCSuinIPaqXA
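
      A quick way to see which suffix the MCO has recorded for each kubeletconfig (the full annotation key below is our assumption for the mc-name-suffix annotation mentioned above; verify it on the affected cluster):

      # Dump each KubeletConfig's annotations and look for
      # machineconfiguration.openshift.io/mc-name-suffix (assumed key), which
      # records the suffix used for its generated MachineConfig.
      for kc in $(oc get kubeletconfig -o name); do
        echo "== $kc"
        oc get "$kc" -o jsonpath='{.metadata.annotations}{"\n"}'
      done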

      ARO wants to make updates to the platform defaults. We are changing from a kubeletconfig "aro-limits" to a kubeletconfig "dynamic-node". We want to be able to do this while still treating it as a default: if the customer has created their own kubeletconfig, the customer's should still take precedence. What we see instead is that creating a new kubeletconfig, regardless of its source, overrides all other kubeletconfigs, causing the customer to lose their customization.
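
      A stopgap that follows from the behavior described above (a sketch only, not a supported fix, and likely to trigger a rollout of the worker pool): the customer can re-create their kubeletconfig after the platform one so that it is once again the newest and its rendered MC takes precedence.

      # Hypothetical stopgap: re-create the customer KubeletConfig from its original
      # manifest (the "default-pod-pids-limit" YAML shown below) so that it becomes
      # the most recently created kubeletconfig and therefore wins again.
      oc delete kubeletconfig default-pod-pids-limit
      oc apply -f default-pod-pids-limit.yaml   # assumed filename for the manifest shown below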

      Version-Release number of selected component (if applicable):

      4.12.40+

      ARO's older kubeletconfig "aro-limits":

      apiVersion: machineconfiguration.openshift.io/v1
      kind: KubeletConfig
      metadata:
        labels:
          aro.openshift.io/limits: ""
        name: aro-limits
      spec:
        kubeletConfig:
          evictionHard:
            imagefs.available: 15%
            memory.available: 500Mi
            nodefs.available: 10%
            nodefs.inodesFree: 5%
          systemReserved:
            memory: 2000Mi
        machineConfigPoolSelector:
          matchLabels:
            aro.openshift.io/limits: ""
      

      ARO's newer kubeletconfig "dynamic-node":

      apiVersion: machineconfiguration.openshift.io/v1
      kind: KubeletConfig
      metadata:
        name: dynamic-node
      spec:
        autoSizingReserved: true
        machineConfigPoolSelector:
          matchExpressions:
          - key: machineconfiguration.openshift.io/mco-built-in
            operator: Exists

       

      Customer's desired kubeletconfig:

      apiVersion: machineconfiguration.openshift.io/v1
      kind: KubeletConfig
      metadata:
        labels:
          arogcd.arogproj.io/instance: cluster-config
        name: default-pod-pids-limit
      spec:
        kubeletConfig:
          podPidsLimit: 2000000
        machineConfigPoolSelector:
          matchExpressions:
          - key: pools.operator.machineconfiguration.io/worker
            operator: Exists
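
      To confirm which value actually reached a node, one way is to inspect the kubelet configuration on a worker (the path below is the usual RHCOS location; verify for your release):

      # Check the podPidsLimit the kubelet is actually running with on a worker node.
      # /etc/kubernetes/kubelet.conf is assumed to be where the MCO writes the
      # KubeletConfiguration on RHCOS nodes.
      oc debug node/<worker-node> -- chroot /host grep -i podPidsLimit /etc/kubernetes/kubelet.conf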

       

              Assignee: Qi Wang
              Reporter: OpenShift Prow Bot
              QA Contact: Sunil Choudhary
