Type: Bug
Resolution: Unresolved
Priority: Major
Affects Version: MCE 2.5.1
Severity: Moderate
Description of problem:
When claiming a cluster from an AWS or Azure cluster pool, the MachinePool shows the wrong instance type.
For example, on AWS, when deploying with instance type c7g.xlarge, m6i.xlarge shows up in the MachinePool CR:
apiVersion: hive.openshift.io/v1
kind: MachinePool
metadata:
  creationTimestamp: "2024-02-01T03:14:39Z"
  finalizers:
  - hive.openshift.io/remotemachineset
  generation: 2
  name: clc-arm-cp-kp56m-worker
  namespace: clc-arm-cp-kp56m
  resourceVersion: "1390949"
  uid: 81990180-b602-437a-92bf-7f92e1f6c80a
spec:
  clusterDeploymentRef:
    name: clc-arm-cp-kp56m
  name: worker
  platform:
    aws:
      rootVolume:
        size: 120
        type: gp3
      type: m6i.xlarge
  replicas: 4
When I check the cluster, I can see two additional MachineSets beyond the expected ones (I expect 4 machines in total):
$ oc get machinesets -A
NAMESPACE               NAME                                       DESIRED   CURRENT   READY   AVAILABLE   AGE
openshift-machine-api   clc-arm-cp-kp56m-m6952-worker-us-east-1a   1         1         1       1           15h
openshift-machine-api   clc-arm-cp-kp56m-m6952-worker-us-east-1b   1         1         1       1           15h
openshift-machine-api   clc-arm-cp-kp56m-m6952-worker-us-east-1c   1         1         1       1           15h
openshift-machine-api   clc-arm-cp-kp56m-m6952-worker-us-east-1d   1         1         1       1           15h
openshift-machine-api   clc-arm-cp-kp56m-m6952-worker-us-east-1e   0         0                             15h
openshift-machine-api   clc-arm-cp-kp56m-m6952-worker-us-east-1f   0         0                             15h
When checking the MachineSets with 0 replicas, we can see the instance type is m6i.xlarge (the MachineSets with replicas correctly use c7g.xlarge):
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  annotations:
    capacity.cluster-autoscaler.kubernetes.io/labels: kubernetes.io/arch=amd64
    machine.openshift.io/GPU: "0"
    machine.openshift.io/memoryMb: "16384"
    machine.openshift.io/vCPU: "4"
  creationTimestamp: "2024-02-01T03:47:31Z"
  generation: 1
  labels:
    hive.openshift.io/machine-pool: worker
    hive.openshift.io/managed: "true"
    machine.openshift.io/cluster-api-cluster: clc-arm-cp-kp56m-m6952
  name: clc-arm-cp-kp56m-m6952-worker-us-east-1e
  namespace: openshift-machine-api
  resourceVersion: "31861"
  uid: 2ac81da8-bf7e-462a-a94b-a09a3e7209e1
spec:
  replicas: 0
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: clc-arm-cp-kp56m-m6952
      machine.openshift.io/cluster-api-machineset: clc-arm-cp-kp56m-m6952-worker-us-east-1e
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: clc-arm-cp-kp56m-m6952
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: clc-arm-cp-kp56m-m6952-worker-us-east-1e
    spec:
      lifecycleHooks: {}
      metadata: {}
      providerSpec:
        value:
          ami:
            id: ami-0ec0f42eb805ad268
          apiVersion: machine.openshift.io/v1beta1
          blockDevices:
          - ebs:
              encrypted: true
              iops: 0
              kmsKey:
                arn: ""
              volumeSize: 120
              volumeType: gp3
          credentialsSecret:
            name: aws-cloud-credentials
          deviceIndex: 0
          iamInstanceProfile:
            id: clc-arm-cp-kp56m-m6952-worker-profile
          instanceType: m6i.xlarge
          kind: AWSMachineProviderConfig
          metadata:
            creationTimestamp: null
          metadataServiceOptions: {}
          placement:
            availabilityZone: us-east-1e
            region: us-east-1
          securityGroups:
          - filters:
            - name: tag:Name
              values:
              - clc-arm-cp-kp56m-m6952-worker-sg
          subnet:
            filters:
            - name: tag:Name
              values:
              - clc-arm-cp-kp56m-m6952-private-us-east-1e
          tags:
          - name: kubernetes.io/cluster/clc-arm-cp-kp56m-m6952
            value: owned
          userDataSecret:
            name: worker-user-data
status:
  observedGeneration: 1
  replicas: 0
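The mismatch can be checked mechanically. Below is a minimal sketch (the helper name `mismatched_machinesets` is hypothetical, and plain dicts stand in for the parsed manifests) that flags any MachineSet whose providerSpec instance type differs from the type requested in the claim:

```python
def mismatched_machinesets(expected_type, machinesets):
    """Return names of MachineSets whose providerSpec instance type
    differs from the instance type requested for the machine pool."""
    bad = []
    for ms in machinesets:
        actual = (ms["spec"]["template"]["spec"]
                    ["providerSpec"]["value"]["instanceType"])
        if actual != expected_type:
            bad.append(ms["metadata"]["name"])
    return bad

# Dicts mirror the manifests above, trimmed to the relevant fields.
machinesets = [
    {"metadata": {"name": "clc-arm-cp-kp56m-m6952-worker-us-east-1a"},
     "spec": {"template": {"spec": {"providerSpec": {
         "value": {"instanceType": "c7g.xlarge"}}}}}},
    {"metadata": {"name": "clc-arm-cp-kp56m-m6952-worker-us-east-1e"},
     "spec": {"template": {"spec": {"providerSpec": {
         "value": {"instanceType": "m6i.xlarge"}}}}}},
]
print(mismatched_machinesets("c7g.xlarge", machinesets))
# → ['clc-arm-cp-kp56m-m6952-worker-us-east-1e']
```

Only the us-east-1e (and, per the listing above, us-east-1f) MachineSet fails the check.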
creationTimestamp for incorrect machineset:
2024-02-01T03:47:31Z
creationTimestamp for two of the correct machinesets:
2024-02-01T03:24:54Z
2024-02-01T03:24:56Z
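For reference, the gap between the correct and incorrect MachineSets' creation can be computed directly from the timestamps above:

```python
from datetime import datetime

fmt = "%Y-%m-%dT%H:%M:%SZ"
correct = datetime.strptime("2024-02-01T03:24:54Z", fmt)
incorrect = datetime.strptime("2024-02-01T03:47:31Z", fmt)
print(incorrect - correct)
# → 0:22:37
```

The incorrect MachineSet was created roughly 23 minutes after the correct ones, i.e. in a later reconcile rather than at initial provisioning.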
This only happens with cluster pool claims; cluster deployments created outside a cluster pool do not exhibit this issue.
Version-Release number of selected component (if applicable):
2.10.0-DOWNSTREAM-2024-01-31-16-46-28
How reproducible:
Always
Steps to Reproduce:
- Create a cluster pool and claim a cluster from it
- Observe the resulting MachinePool and check the instance type
- ...