OpenShift Hive / HIVE-2327

Incorrect machinepool for newly created vsphere OCP cluster


    • Type: Bug
    • Resolution: Duplicate
    • Priority: Major
    • Affects Version/s: openshift-4.14

      • When we deploy an OCP 4.14 cluster with ACM, the following default MachinePool is created:


      apiVersion: hive.openshift.io/v1
      kind: MachinePool
      metadata:
        name: ocp4ipi06-worker
        namespace: 'ocp4ipi06'
      spec:
        clusterDeploymentRef:
          name: 'ocp4ipi06'
        name: worker
        platform:
          vsphere:
            cpus: 4
            coresPerSocket: 2
            memoryMB: 16384
            osDisk:
              diskSizeGB: 120
        replicas: 3
      --- 

      • The OCP installer also creates another MachineSet named 'clustername-worker-0':

      $ cat openshift/99_openshift-cluster-api_worker-machineset-0.yaml
        apiVersion: machine.openshift.io/v1beta1
        kind: MachineSet
        metadata:
          creationTimestamp: null
          labels:
            machine.openshift.io/cluster-api-cluster: test-jh724
          name: test-jh724-worker-0
          namespace: openshift-machine-api
        spec:
          replicas: 3
          selector:
            matchLabels:
              machine.openshift.io/cluster-api-cluster: test-jh724
              machine.openshift.io/cluster-api-machineset: test-jh724-worker-0
          template:
            metadata:
              labels:
                machine.openshift.io/cluster-api-cluster: test-jh724
                machine.openshift.io/cluster-api-machine-role: worker
                machine.openshift.io/cluster-api-machine-type: worker
                machine.openshift.io/cluster-api-machineset: test-jh724-worker-0

       

      This results in two MachineSets: one created by the installer named 'clustername-worker-0', and another created from the MachinePool named 'clustername-worker'.

       

      So the cluster ends up with a total of 6 worker nodes instead of the 3 requested in the install-config.yaml file.
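
      On an affected cluster, the duplication can be confirmed with oc (a minimal sketch; the names below follow the 'test-jh724' example above, and the MachinePool-generated MachineSet name is taken from the description):

        # List MachineSets in the machine-api namespace; both the installer's
        # 'test-jh724-worker-0' and the MachinePool's 'test-jh724-worker' appear.
        oc get machinesets -n openshift-machine-api

        # Count worker nodes; 6 are listed instead of the expected 3.
        oc get nodes -l node-role.kubernetes.io/worker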


            Assignee: leah_leshchinsky Leah Leshchinsky
            Reporter: rhn-support-asadawar Abhijeet Sadawarte
