OCPBUGS-13300: Converge the masters to use only one ServerGroup


Details

    • ShiftStack Sprint 236
    • 1
    • No
    • False
      * For clusters that run on {rh-openstack}, are upgraded to 4.14, and have root volume availability zones, you must converge control plane machines onto one server group before you can enable control plane machine sets. To make the required change, follow the instructions in link:https://access.redhat.com/solutions/7013893[OpenShift on OpenStack with Availability Zones: Invalid Compute ServerGroup setup during OpenShift deployment]. (link:https://issues.redhat.com/browse/OCPBUGS-13300[*OCPBUGS-13300*])
    • Known Issue
    • Done

    Description

      Description of problem:

      Currently, only one ServerGroup is created in OpenStack when 3 masters are deployed across 3 AZs, while 3 should have been created (one per AZ). With the work on CPMS, we made the decision to create only one ServerGroup for the masters. However, this requires a change in the installer to reflect that decision.
      Indeed, when AZs were specified, each master machine referenced its own ServerGroup, while only one actually existed in OpenStack. This was a mistake, but instead of fixing that bug, we will change the behaviour so that all masters share a single ServerGroup.

      Version-Release number of selected component (if applicable):

      latest (4.14)

      How reproducible:

      Deploy a control plane with 3 failure domains:

      controlPlane:
        name: master
        platform:
          openstack:
            type: m1.xlarge
            failureDomains:
            - computeAvailabilityZone: az0
            - computeAvailabilityZone: az1
            - computeAvailabilityZone: az2
      

      Steps to Reproduce:

      1. Deploy the control plane in 3 AZ
      2. List OpenStack Compute Server Groups
      
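Step 2 can be performed with the OpenStack CLI. A minimal sketch, assuming the `openstack` client is installed and `OS_CLOUD` names a valid `clouds.yaml` entry (the cloud name `openstack` here is an assumption, not from this report):

```shell
# List Compute server groups for the cloud hosting the cluster.
# OS_CLOUD selects a clouds.yaml entry; adjust to your environment.
export OS_CLOUD=openstack
openstack server group list -c ID -c Name -c Policy
```

With the bug present, the output shows only one master group (suffixed with the first AZ) instead of one group per AZ.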

      Actual results:

      +--------------------------------------+--------------------------+--------------------+
      | ID                                   | Name                     | Policy             |
      +--------------------------------------+--------------------------+--------------------+
      | 0750c579-d2cf-41b3-9e88-003dcbcad0c5 | refarch-jkn8g-master-az0 | soft-anti-affinity |
      | 05715c08-ac2b-439d-9bd5-5803ac40c322 | refarch-jkn8g-worker     | soft-anti-affinity |
      +--------------------------------------+--------------------------+--------------------+

      Expected results without our work on CPMS:

      refarch-jkn8g-master-az1 and refarch-jkn8g-master-az2 should have been created.

      This expectation is documented for reference only; QE should ignore it.

       

      Expected results with our work on CPMS (which QE should take into account when testing CPMS):

      refarch-jkn8g-master-az0 should not exist, and the ServerGroup should be named refarch-jkn8g-master.
      All the masters should use that ServerGroup in both the Nova instance properties and in the MachineSpec once the machines are enrolled by CCPMSO.
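The naming convergence can be summarized with a small shell sketch; the infrastructure ID `refarch-jkn8g` is taken from the output above, and the `<infraID>-master` derivation mirrors the expected behaviour described in this report:

```shell
# With the CPMS change, a single server group named <infraID>-master
# replaces the per-AZ groups (<infraID>-master-az0, -az1, -az2).
infra_id="refarch-jkn8g"
converged_group="${infra_id}-master"
echo "${converged_group}"
```

This prints `refarch-jkn8g-master`, the single group all masters should reference.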

      Attachments

        Activity

          People

            Emilien Macchi (emacchi@redhat.com)
            Emilien Macchi (emacchi@redhat.com)
            Ramón Lobillo
            Votes: 0
            Watchers: 8
