Openshift sandboxed containers / KATA-1017

1.2: KataConfig status not updated during installation when installed on non-worker nodes


Details

    • [copied from a comment from Jens]

      Progress of the runtime installation is shown in the status section of the KataConfig CR. However, in one scenario this does not happen. The problem occurs when

      1. the cluster has a machine config pool 'worker' that has no members (machineCount=0), and
      2. no kataConfigPoolSelector is specified to select nodes for installation.

      In this case the installation happens on the master nodes, because the operator assumes it is a converged cluster whose nodes have both master and worker roles, and the status section of the KataConfig CR is not updated during the installation.
    • Known Issue
    • Done
    • Kata Sprint #213, Kata Sprint #214, Kata Sprint #218, Kata Sprint #243

    Description

      Impact

      Status information in the KataConfig CR is not updated during installation when the cluster's 'worker' machine config pool has no members (all nodes carry only the master role).
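
      A minimal sketch of how to check whether a cluster is in the affected configuration, i.e. the 'worker' machine config pool is empty. It assumes the kubernetes Python client, cluster-admin credentials and the machineconfiguration.openshift.io/v1 API; it is an illustration, not part of the operator.

      # Hedged sketch: check whether the 'worker' MachineConfigPool has no
      # members, which is the configuration that triggers this issue.
      from kubernetes import client, config

      config.load_kube_config()
      api = client.CustomObjectsApi()

      worker_mcp = api.get_cluster_custom_object(
          group="machineconfiguration.openshift.io",
          version="v1",
          plural="machineconfigpools",
          name="worker",
      )
      machine_count = worker_mcp.get("status", {}).get("machineCount", 0)
      print(f"worker MCP machineCount: {machine_count}")
      if machine_count == 0:
          # Per the description above, the status is only missed when no
          # kataConfigPoolSelector is specified in the KataConfig.
          print("Affected configuration: KataConfig status may not be updated.")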

      Description

      When the cluster has nodes in the master machine config pool but none in the worker pool, the operator does not update the status information in the KataConfig CR. This is caused by a bug in the node update logic, which hard-codes the worker node role (node-role.kubernetes.io/worker) as the node selector.
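
      For illustration only (this is not the operator's code), a minimal sketch of why a hard-coded worker-role selector matches nothing on a cluster whose nodes carry only the master role; the label names are the standard OpenShift node-role labels.

      # Illustrative sketch: a hard-coded selector for the worker role finds
      # no nodes when the worker machine config pool is empty, so there are
      # no nodes to report per-node status for.
      from kubernetes import client, config

      config.load_kube_config()
      v1 = client.CoreV1Api()

      # Hard-coded worker-role selector (the buggy behaviour described above).
      workers = v1.list_node(label_selector="node-role.kubernetes.io/worker").items
      print(f"nodes matched by hard-coded worker selector: {len(workers)}")  # 0 here

      # On such a cluster the nodes only carry the master role, so a selector
      # derived from the pool that actually owns the nodes would still match.
      masters = v1.list_node(label_selector="node-role.kubernetes.io/master").items
      print(f"nodes matched by master selector: {len(masters)}")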

      Env

      This was seen by a customer on a 4.8.10 cluster.

      Example helpful info:

      Cluster version: 4.8.10
      Operator version: 1.0.0 (more recent versions have the same bug)
      Container type:
      Polarion test case ID:

      Test steps

      This is not trivial to reproduce; you have to change the machine config pools.

      Create a cluster and install the operator, then:

      1. Take all nodes out of the worker machine config pool and remove the worker label from the nodes.
      2. Create a KataConfig; don't select a custom node selector.
      3. Watch the status in the KataConfig during the installation: it is not updated, although the installation still goes through and the runtime is installed on the master nodes (see the sketch after this list).
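
      A hedged helper for step 3, assuming the kataconfiguration.openshift.io/v1 API group and a KataConfig named example-kataconfig (adjust both to your cluster); it simply polls the CR and prints its status section so the missing updates are easy to spot.

      # Poll the KataConfig CR during installation and print its status.
      # On an affected cluster the status stays empty even though the runtime
      # is being installed on the master nodes. Group/version/plural and the
      # CR name are assumptions; adjust them for your environment.
      import time
      from kubernetes import client, config

      config.load_kube_config()
      api = client.CustomObjectsApi()

      for _ in range(60):  # roughly 10 minutes at 10-second intervals
          kc = api.get_cluster_custom_object(
              group="kataconfiguration.openshift.io",
              version="v1",
              plural="kataconfigs",
              name="example-kataconfig",
          )
          print(kc.get("status", {}))
          time.sleep(10)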

      Expected result

      The status should be updated as usual, showing nodes in progress, completed, failed, and so on.

      Actual result

      No status updates can be seen.

          People

            pmores Pavel Mores
            jfreiman Jens Freimann
            Jens Freimann
            Tom Buskey Tom Buskey
