OpenShift Bugs / OCPBUGS-31735

"openshift-node-performance-XXXXXX have the same priority" warning keeps printing

    • Moderate
    • CNF Compute Sprint 251, CNF Compute Sprint 252
      *Cause*: As part of the profile selection process, NTO checked whether any profiles shared the same priority, regardless of their associated node.
      *Consequence*: Deploying two performance profiles on two different nodes resulted in false priority warnings being dumped to the logs.
      *Fix*: The selection process previously collected all profiles, then checked for priority conflicts, and only then filtered for the associated node. The fix reverses the last two steps: first filter for the associated node, then check for priority conflicts.
      *Result*: The false priority warnings are no longer emitted.
    • Bug Fix
    • In Progress

      This is a clone of issue OCPBUGS-29756. The following is the description of the original issue:

      This is a clone of issue OCPBUGS-24636. The following is the description of the original issue:

      Description of problem:

      A false priority warning is raised during the NTO profile selection process.
      Deploying two performance profiles on two different nodes results in false priority warnings being dumped to the logs. As part of the profile selection process, NTO checks whether any profiles share the same priority, regardless of their associated node.

      Version-Release number of selected component (if applicable):

          4.16

      Steps to Reproduce:

          1. Deploy a cluster with 2 nodes.
          2. Label each of the nodes with a unique label.
          3. Create an MCP for each node.
          4. Deploy a performance profile for each node.
          5. oc logs -f <nto-pod> -n openshift-cluster-node-tuning-operator
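Steps 2–4 above can be sketched roughly as follows. This is a hypothetical sketch, not taken from the bug report: node names, the CPU sets, and the minimal MachineConfigPool/PerformanceProfile specs are illustrative, and the second node/pool/profile (worker-cnf1, pp-worker-cnf1) must be created analogously.

```shell
# Label one node (repeat with a different label for the second node).
oc label node worker-0 node-role.kubernetes.io/worker-cnf=""

# One MachineConfigPool per labeled node.
cat <<'EOF' | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-cnf
spec:
  machineConfigSelector:
    matchExpressions:
      - key: machineconfiguration.openshift.io/role
        operator: In
        values: [worker, worker-cnf]
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker-cnf: ""
EOF

# One PerformanceProfile per pool. The Tuned profiles that NTO generates
# from these share the same priority (20 in the logs below), which is
# what triggers the false warning.
cat <<'EOF' | oc apply -f -
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: pp-worker-cnf
spec:
  cpu:
    isolated: "2-7"   # illustrative CPU sets; cluster-specific in practice
    reserved: "0-1"
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""
EOF
```

After repeating both objects for the second node, tail the operator logs as in step 5 to observe the warnings.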

      Actual results:

          W1213 15:04:10.679528       1 profilecalculator.go:575] profiles openshift-node-performance-pp-worker-cnf/openshift-node-performance-pp-worker-cnf1 have the same priority 20, please use a different priority for your custom profiles!
          I1213 15:04:10.680623       1 status.go:303] 2/6 Profiles failed to be applied
          W1213 15:04:15.746610       1 profilecalculator.go:575] profiles openshift-node-performance-pp-worker-cnf/openshift-node-performance-pp-worker-cnf1 have the same priority 20, please use a different priority for your custom profiles!
          W1213 15:04:15.765530       1 profilecalculator.go:575] profiles openshift-node-performance-pp-worker-cnf/openshift-node-performance-pp-worker-cnf1 have the same priority 20, please use a different priority for your custom profiles!
          I1213 15:04:15.768434       1 status.go:303] 2/6 Profiles failed to be applied

      Expected results:

          No priority warnings are logged: the two profiles target different nodes, so their equal priorities do not conflict.

      Proposed solution:

          The current selection process for a profile is as follows: first, all profiles are collected, then priority conflicts are checked, and finally the profiles are filtered for the associated node. A proper fix is to reverse the last two steps: first filter for the associated node, then check for priority conflicts.
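The reordering can be illustrated with a minimal sketch. The `Profile` struct and both helper functions below are hypothetical stand-ins for the real logic in profilecalculator.go, reduced to the two fields the bug hinges on (priority and associated node):

```go
package main

import "fmt"

// Profile is a hypothetical stand-in for an NTO Tuned profile, keeping
// only the fields relevant to this bug.
type Profile struct {
	Name     string
	Priority int
	Node     string
}

// conflicts reports one message per priority value shared by more than
// one profile.
func conflicts(profiles []Profile) []string {
	byPriority := map[int][]string{}
	for _, p := range profiles {
		byPriority[p.Priority] = append(byPriority[p.Priority], p.Name)
	}
	var out []string
	for prio, names := range byPriority {
		if len(names) > 1 {
			out = append(out, fmt.Sprintf("%v share priority %d", names, prio))
		}
	}
	return out
}

// filterByNode keeps only the profiles associated with the given node.
func filterByNode(profiles []Profile, node string) []Profile {
	var out []Profile
	for _, p := range profiles {
		if p.Node == node {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	all := []Profile{
		{"openshift-node-performance-pp-worker-cnf", 20, "worker-cnf"},
		{"openshift-node-performance-pp-worker-cnf1", 20, "worker-cnf1"},
	}
	// Buggy order: conflict check before node filtering -> false warning,
	// even though each profile targets a different node.
	fmt.Println("before fix:", len(conflicts(all)), "warning(s)") // before fix: 1 warning(s)
	// Fixed order: filter for the node first, then check conflicts.
	forNode := filterByNode(all, "worker-cnf")
	fmt.Println("after fix:", len(conflicts(forNode)), "warning(s)") // after fix: 0 warning(s)
}
```

Because filtering leaves at most the profiles targeting one node, equal priorities across different nodes can no longer be flagged.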

              rh-ee-rbaturov Ronny Baturov
              openshift-crt-jira-prow OpenShift Prow Bot
              Liquan Cui Liquan Cui
              Votes: 0
              Watchers: 4
