Bug
Resolution: Unresolved
Affects Version: 4.22
Severity: Moderate
Description of problem:
A CAPI machine reaches Running with AWS gp3 throughput < 125, while MAPI does not allow such a value, which causes the CAPI-to-MAPI migration to fail.
Version-Release number of selected component (if applicable):
4.22.0-0-2026-01-28-010555-test-ci-ln-x4vxr92-latest
How reproducible:
always
Steps to Reproduce:
1. Create an AWSMachineTemplate with throughput < 125
liuhuali@Lius-MacBook-Pro huali-test % oc create -f ms3.yaml
awsmachinetemplate.infrastructure.cluster.x-k8s.io/huliu-aws0128a-dfhqk-worker-us-east-2aa created
liuhuali@Lius-MacBook-Pro huali-test % cat ms3.yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachineTemplate
metadata:
  name: huliu-aws0128a-dfhqk-worker-us-east-2aa
  namespace: openshift-cluster-api
spec:
  template:
    metadata: {}
    spec:
      additionalSecurityGroups:
      - filters:
        - name: tag:Name
          values:
          - huliu-aws0128a-dfhqk-node
      - filters:
        - name: tag:Name
          values:
          - huliu-aws0128a-dfhqk-lb
      additionalTags:
        kubernetes.io/cluster/huliu-aws0128a-dfhqk: owned
      ami:
        id: ami-0bc8dda494f111572
      cloudInit: {}
      hostAffinity: host
      iamInstanceProfile: huliu-aws0128a-dfhqk-worker-profile
      ignition:
        storageType: UnencryptedUserData
      instanceMetadataOptions:
        httpEndpoint: enabled
        httpPutResponseHopLimit: 1
        httpTokens: optional
        instanceMetadataTags: disabled
      instanceType: m6i.xlarge
      rootVolume:
        encrypted: true
        size: 120
        type: gp3
        throughput: 124
        iops: 4000
      subnet:
        filters:
        - name: tag:Name
          values:
          - huliu-aws0128a-dfhqk-subnet-private-us-east-2a
2. Create a CAPI MachineSet referencing the template
liuhuali@Lius-MacBook-Pro huali-test % oc create -f ms4.yaml
machineset.cluster.x-k8s.io/huliu-aws0128a-dfhqk-worker-us-east-2aa created
liuhuali@Lius-MacBook-Pro huali-test % cat ms4.yaml
apiVersion: cluster.x-k8s.io/v1beta2
kind: MachineSet
metadata:
  name: huliu-aws0128a-dfhqk-worker-us-east-2aa
  namespace: openshift-cluster-api
spec:
  clusterName: huliu-aws0128a-dfhqk
  deletion:
    order: Random
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: huliu-aws0128a-dfhqk
      machine.openshift.io/cluster-api-machineset: huliu-aws0128a-dfhqk-worker-us-east-2aa
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: huliu-aws0128a-dfhqk
        machine.openshift.io/cluster-api-cluster: huliu-aws0128a-dfhqk
        machine.openshift.io/cluster-api-machineset: huliu-aws0128a-dfhqk-worker-us-east-2aa
        node-role.kubernetes.io/worker: ""
    spec:
      bootstrap:
        dataSecretName: worker-user-data
      clusterName: huliu-aws0128a-dfhqk
      deletion:
        nodeDeletionTimeoutSeconds: 10
      failureDomain: us-east-2a
      infrastructureRef:
        apiGroup: infrastructure.cluster.x-k8s.io
        kind: AWSMachineTemplate
        name: huliu-aws0128a-dfhqk-worker-us-east-2aa
The machine reaches Running:
liuhuali@Lius-MacBook-Pro huali-test % oc get machine.c
NAME CLUSTER NODE NAME READY AVAILABLE UP-TO-DATE PHASE AGE VERSION
huliu-aws0128a-dfhqk-worker-us-east-2a-pqs4d huliu-aws0128a-dfhqk ip-10-0-21-227.us-east-2.compute.internal True True Running 5h30m
huliu-aws0128a-dfhqk-worker-us-east-2aa-c5vp5 huliu-aws0128a-dfhqk ip-10-0-4-92.us-east-2.compute.internal False False Running 3m38s
huliu-aws0128a-dfhqk-worker-us-east-2b-98txh huliu-aws0128a-dfhqk ip-10-0-55-202.us-east-2.compute.internal True True Running 5h30m
huliu-aws0128a-dfhqk-worker-us-east-2c-p84km huliu-aws0128a-dfhqk ip-10-0-64-197.us-east-2.compute.internal True True Running 5h27m
liuhuali@Lius-MacBook-Pro huali-test % oc get awsmachine
NAME CLUSTER STATE READY INSTANCEID MACHINE
huliu-aws0128a-dfhqk-worker-us-east-2a-pqs4d huliu-aws0128a-dfhqk running true aws:///us-east-2a/i-0e19a96fd930ebdf3 huliu-aws0128a-dfhqk-worker-us-east-2a-pqs4d
huliu-aws0128a-dfhqk-worker-us-east-2aa-c5vp5 huliu-aws0128a-dfhqk running true aws:///us-east-2a/i-07754e896717e6998 huliu-aws0128a-dfhqk-worker-us-east-2aa-c5vp5
huliu-aws0128a-dfhqk-worker-us-east-2b-98txh huliu-aws0128a-dfhqk running true aws:///us-east-2b/i-0c63e5e69007ecb21 huliu-aws0128a-dfhqk-worker-us-east-2b-98txh
huliu-aws0128a-dfhqk-worker-us-east-2c-p84km huliu-aws0128a-dfhqk running true aws:///us-east-2c/i-086d50c1210abac56 huliu-aws0128a-dfhqk-worker-us-east-2c-p84km
liuhuali@Lius-MacBook-Pro huali-test % oc get awsmachine huliu-aws0128a-dfhqk-worker-us-east-2aa-c5vp5 -oyaml
...
  rootVolume:
    encrypted: true
    iops: 4000
    size: 120
    throughput: 124
    type: gp3
Checking the AWS console shows the throughput is defaulted to 125.
3. Create a MAPI MachineSet with the same name and authoritativeAPI: ClusterAPI; the MachineSet reports a Synchronized error
liuhuali@Lius-MacBook-Pro huali-test % oc create -f ms5.yaml
machineset.machine.openshift.io/huliu-aws0128a-dfhqk-worker-us-east-2aa created
liuhuali@Lius-MacBook-Pro huali-test % oc get machineset -n openshift-machine-api huliu-aws0128a-dfhqk-worker-us-east-2aa -oyaml
...
  - lastTransitionTime: "2026-01-28T07:39:28Z"
    message: 'failed to update MAPI machine set: admission webhook "validation.machineset.machine.openshift.io"
      denied the request: providerSpec.blockDevices[0].ebs.throughputMib: Invalid
      value: 124: must be a value between 125 and 2000'
    reason: FailedToUpdateMAPIMachineSet
    severity: Error
    status: "False"
    type: Synchronized
...
liuhuali@Lius-MacBook-Pro huali-test % cat ms5.yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: huliu-aws0128a-dfhqk-worker-us-east-2aa
  namespace: openshift-machine-api
spec:
  authoritativeAPI: ClusterAPI
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: huliu-aws0128a-dfhqk
      machine.openshift.io/cluster-api-machineset: huliu-aws0128a-dfhqk-worker-us-east-2aa
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: huliu-aws0128a-dfhqk
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: huliu-aws0128a-dfhqk-worker-us-east-2aa
    spec:
      authoritativeAPI: ClusterAPI
      lifecycleHooks: {}
      metadata: {}
      providerSpec:
        value:
          ami:
            id: ami-0bc8dda494f111572
          apiVersion: machine.openshift.io/v1beta1
          blockDevices:
          - ebs:
              encrypted: true
              iops: 0
              kmsKey:
                arn: ""
              volumeSize: 120
              volumeType: gp3
          capacityReservationId: ""
          credentialsSecret:
            name: aws-cloud-credentials
          deviceIndex: 0
          iamInstanceProfile:
            id: huliu-aws0128a-dfhqk-worker-profile
          instanceType: m6i.xlarge
          kind: AWSMachineProviderConfig
          metadata: {}
          metadataServiceOptions: {}
          placement:
            availabilityZone: us-east-2a
            region: us-east-2
          securityGroups:
          - filters:
            - name: tag:Name
              values:
              - huliu-aws0128a-dfhqk-node
          - filters:
            - name: tag:Name
              values:
              - huliu-aws0128a-dfhqk-lb
          subnet:
            filters:
            - name: tag:Name
              values:
              - huliu-aws0128a-dfhqk-subnet-private-us-east-2a
          tags:
          - name: kubernetes.io/cluster/huliu-aws0128a-dfhqk
            value: owned
          userDataSecret:
            name: worker-user-data
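For reference, the range check reported by the MAPI admission webhook above can be sketched as follows. This is a minimal illustration assuming only the 125-2000 bounds quoted in the error message; the function name and error wording are illustrative, not the actual webhook code.

```go
package main

import "fmt"

// validateGP3Throughput mirrors the 125-2000 bound reported by the
// "validation.machineset.machine.openshift.io" webhook in this bug.
// Illustrative sketch only; not the real machine-api-operator code.
func validateGP3Throughput(throughputMiB int64) error {
	if throughputMiB < 125 || throughputMiB > 2000 {
		return fmt.Errorf("throughputMib: Invalid value: %d: must be a value between 125 and 2000", throughputMiB)
	}
	return nil
}

func main() {
	// 124 (the value in ms3.yaml) fails the MAPI check, while CAPI
	// accepts it and AWS silently defaults it to 125.
	fmt.Println(validateGP3Throughput(124))
	fmt.Println(validateGP3Throughput(125))
}
```

This illustrates the parity gap: the same throughput value passes CAPI's admission path but is rejected on the MAPI side during synchronization.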
Actual results:
The CAPI machine reaches Running with AWS gp3 throughput < 125, while MAPI does not allow such a value, which causes the CAPI-to-MAPI migration to fail.
Expected results:
MAPI and CAPI should have feature parity, and the migration should succeed.
Additional info:
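One possible direction for parity, offered here only as a hypothetical sketch and not as the actual cluster-capi-operator fix: since AWS itself defaults sub-minimum gp3 throughput to 125 (as observed on the console in step 2), the CAPI-to-MAPI conversion could apply the same floor before the MAPI webhook runs.

```go
package main

import "fmt"

// normalizeGP3Throughput is a hypothetical sketch of one way the
// CAPI-to-MAPI conversion could reach parity: floor sub-minimum gp3
// throughput to 125, matching what AWS does with the value anyway.
// Not the actual cluster-capi-operator implementation.
func normalizeGP3Throughput(throughputMiB int64) int64 {
	const minGP3ThroughputMiB = 125
	if throughputMiB < minGP3ThroughputMiB {
		return minGP3ThroughputMiB
	}
	return throughputMiB
}

func main() {
	fmt.Println(normalizeGP3Throughput(124)) // floored to the AWS-applied default
	fmt.Println(normalizeGP3Throughput(300)) // in-range values pass through
}
```

An alternative direction would be for CAPI/CAPA admission to reject sub-minimum values up front, matching the MAPI webhook instead.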