OpenShift Bugs / OCPBUGS-2209

Hard eviction thresholds differ from the k8s defaults when PAO is enabled


    • Sprint: CNF Compute Sprint 226, CNF Compute Sprint 227, CNF Compute Sprint 229, CNF Compute Sprint 230, CNF Compute Sprint 231, CNF Compute Sprint 232
    • Release Note Text:
      Issue:

      Kubelet hard eviction thresholds are different from k8s defaults when a Performance profile is applied to the node.

      Fix:

      The defaults were updated to match the expected kubernetes defaults.

      Workaround (until the defaults fix is released):

      Please follow the "Additional Kubelet Arguments" section of https://access.redhat.com/solutions/5532341 and configure the desired eviction thresholds manually via the Performance profile (a sketch is included under Additional info below).
    • Release Note Type: Bug Fix

      Description of problem:

      Hard eviction thresholds are different from the k8s defaults when PAO is enabled.
      
      According to https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#hard-eviction-thresholds, the default hard eviction thresholds in k8s on Linux nodes are:

      memory.available<100Mi
      nodefs.available<10%
      imagefs.available<15%
      nodefs.inodesFree<5%

      However, when PAO is enabled, the default is changed to only memory.available<100Mi: https://github.com/openshift/cluster-node-tuning-operator/blob/master/pkg/performanceprofile/controller/performanceprofile/components/kubeletconfig/kubeletconfig.go#L59-L64
      
      No documentation mentions this difference. Because 'nodefs.available<10%' affects the calculation of the node's allocatable ephemeral-storage, the customer sees different allocatable ephemeral-storage with and without PAO enabled, which caused confusion.
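
      As a rough illustration (numbers assumed here; no system or kube reservations): allocatable ephemeral-storage is roughly

          allocatable ephemeral-storage = nodefs capacity - hard eviction threshold - reserved

      so on a node with a 100Gi root filesystem, the upstream nodefs.available<10% default leaves about 90Gi allocatable, while a node with the PAO-generated config (no nodefs threshold) reports close to the full 100Gi.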
      
      Considering there was no special reason to override the default settings other than that the kubelet config needs a non-nil default value, and that there is no documentation describing that PAO changes the default behavior, this should be addressed.

      Version-Release number of selected component (if applicable):

       

      How reproducible:

      Always

      Steps to Reproduce:

      1. Prepare an OCP cluster
      2. Install the PAO operator and create a PerformanceProfile
      3. Check the rendered kubelet configuration on the node (see the sketch below)
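
      One way to inspect the rendered config (a sketch; the node name is a placeholder and the paths assume a standard RHCOS worker):

          oc debug node/<node-name>
          # inside the debug shell
          chroot /host
          grep -A 6 evictionHard /etc/kubernetes/kubelet.conf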
      

      Actual results:

      The kubelet config contains only the setting "evictionHard": { "memory.available": "100Mi" }

      Expected results:

      The kubelet config should not change the k8s default evictionHard settings unless there is a reason
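
      If the defaults matched the upstream Linux defaults, the rendered evictionHard section would look roughly like this (illustrative JSON rendering):

          "evictionHard": {
            "memory.available": "100Mi",
            "nodefs.available": "10%",
            "imagefs.available": "15%",
            "nodefs.inodesFree": "5%"
          }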

      Additional info:
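
      For reference, a rough sketch of the workaround mentioned in the release note above. The kubeletconfig.experimental annotation and the spec values shown are assumptions based on the linked article (https://access.redhat.com/solutions/5532341); follow the article for the exact mechanism:

          apiVersion: performance.openshift.io/v2
          kind: PerformanceProfile
          metadata:
            name: example-performanceprofile
            annotations:
              # assumed mechanism: this JSON is merged into the generated kubelet config
              kubeletconfig.experimental: |
                {
                  "evictionHard": {
                    "memory.available": "100Mi",
                    "nodefs.available": "10%",
                    "imagefs.available": "15%",
                    "nodefs.inodesFree": "5%"
                  }
                }
          spec:
            cpu:
              isolated: "2-7"
              reserved: "0-1"
            nodeSelector:
              node-role.kubernetes.io/worker-cnf: ""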

       

              Yanir Quinn (yquinn@redhat.com)
              XIAOBO ZHAI (bzhai@redhat.com)
              Liquan Cui