Bug
Resolution: Unresolved
Minor
4.18
Quality / Stability / Reliability
Low
Description of problem:
When providing a custom KubeletConfig for a Hosted Control Plane (HCP) cluster via the standard NodePool process (i.e., embedding the KubeletConfig YAML within a ConfigMap), the validation mechanism fails to reject the spec.machineConfigPoolSelector field.
In a standard (non-HCP) OpenShift cluster, machineConfigPoolSelector is used to target a specific MachineConfigPool (MCP). In the HCP architecture, however, MCPs do not exist; node configuration is managed directly by the NodePool resource. Allowing this field to be specified is misleading and incorrect: it implies functionality that does not exist, and users may wrongly assume their configuration is being targeted when the field is, at best, silently ignored. The validation should explicitly reject this field to enforce the correct configuration schema for HCP.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Create the following ConfigMap in the namespace on the management cluster where the HostedCluster CR is present. This ConfigMap embeds a KubeletConfig that incorrectly includes the machineConfigPoolSelector field.
apiVersion: v1
kind: ConfigMap
metadata:
  name: maxpods-autoreserve-with-mcp
  namespace: <hc-namespace>
data:
  config: |
    apiVersion: machineconfiguration.openshift.io/v1
    kind: KubeletConfig
    metadata:
      name: maxpods-autoreserve
    spec:
      kubeletConfig:
        maxPods: 500
      autoSizingReserved: true
      # THIS FIELD SHOULD BE REJECTED IN AN HCP CONTEXT
      machineConfigPoolSelector:
        matchLabels:
          pools.operator.machineconfiguration.openshift.io/worker: ""
2. Apply the ConfigMap to the management cluster:
oc apply -f configmap.yaml
3. Update the NodePool resource to reference this ConfigMap:
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: <your-nodepool-name>
  namespace: <hc-namespace>   # same namespace as the HostedCluster and the ConfigMap
spec:
  # ... other nodepool specs ...
  config:
    - name: maxpods-autoreserve-with-mcp # Reference the bad ConfigMap
  # ... other nodepool specs ...
Actual results:
The ConfigMap is successfully created. The NodePool resource is successfully created or updated, and it enters a Ready state (assuming other parameters are correct). No validation errors, warnings, or events are generated regarding the presence of the unknown/invalid machineConfigPoolSelector field. The rest of the KubeletConfig (e.g., maxPods: 500) is likely applied to the nodes, but the machineConfigPoolSelector is silently ignored.
Expected results:
The NodePool controller or its associated validation webhook should reject the configuration. When the NodePool reconciles and parses the KubeletConfig from the referenced ConfigMap, it should identify spec.machineConfigPoolSelector as an invalid field in the HCP context. The NodePool should fail to apply the configuration and report an error in its status or as an event, similar to:
Failed to apply configuration: spec.machineConfigPoolSelector is not a valid field for a KubeletConfig provided via a NodePool. In a Hosted Control Plane cluster, configuration is applied directly to the NodePool and does not use Machine Config Pools.
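For illustration only, a minimal Go sketch of the kind of check being asked for is below. This is not HyperShift's actual code: the function name is hypothetical, the struct declares only the fields needed for the check, and the error text mirrors the example message above. It assumes the ConfigMap's config payload is handed in as raw YAML bytes.

package validation

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// validateEmbeddedKubeletConfig is a hypothetical check: it parses the
// KubeletConfig YAML carried in the ConfigMap's "config" key and rejects it
// when spec.machineConfigPoolSelector is set, because Hosted Control Plane
// clusters have no MachineConfigPools.
func validateEmbeddedKubeletConfig(configYAML []byte) error {
	// Reduced, illustration-only view of the object; only the fields needed
	// for this check are declared.
	var obj struct {
		Kind string `json:"kind"`
		Spec struct {
			MachineConfigPoolSelector map[string]interface{} `json:"machineConfigPoolSelector"`
		} `json:"spec"`
	}
	if err := yaml.Unmarshal(configYAML, &obj); err != nil {
		return fmt.Errorf("failed to parse embedded config: %w", err)
	}
	if obj.Kind == "KubeletConfig" && obj.Spec.MachineConfigPoolSelector != nil {
		return fmt.Errorf("spec.machineConfigPoolSelector is not a valid field for a KubeletConfig " +
			"provided via a NodePool: Hosted Control Plane clusters do not use MachineConfigPools")
	}
	return nil
}

Run against the ConfigMap payload from step 1, such a check would fail with the machineConfigPoolSelector error; with the field removed it would pass.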
Additional info:
This behavior suggests that schema validation for the embedded KubeletConfig is either missing or incorrectly reuses the standard (non-HCP) KubeletConfig schema, which does include this field. Enforcing a stricter, context-aware schema would prevent user confusion and misconfiguration.
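As a sketch of what a stricter, context-aware schema could look like in practice (again hypothetical, not the HyperShift implementation), strictly decoding against a reduced type that simply omits the MCP-only selector turns it into an unknown-field error instead of a silently dropped key. Only the fields from the reproducer above are declared, and sigs.k8s.io/yaml's UnmarshalStrict is used purely to illustrate the approach.

package validation

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// hcpKubeletConfig is a hypothetical, HCP-aware schema: it deliberately omits
// machineConfigPoolSelector, so strict decoding reports it as an unknown field
// instead of silently dropping it. Only the fields used in the reproducer are
// declared here.
type hcpKubeletConfig struct {
	APIVersion string                 `json:"apiVersion"`
	Kind       string                 `json:"kind"`
	Metadata   map[string]interface{} `json:"metadata,omitempty"`
	Spec       struct {
		AutoSizingReserved *bool                  `json:"autoSizingReserved,omitempty"`
		KubeletConfig      map[string]interface{} `json:"kubeletConfig,omitempty"`
	} `json:"spec"`
}

// decodeForNodePool fails on any field outside the HCP-aware schema, e.g.
// spec.machineConfigPoolSelector in the reproducer above.
func decodeForNodePool(configYAML []byte) (*hcpKubeletConfig, error) {
	var kc hcpKubeletConfig
	if err := yaml.UnmarshalStrict(configYAML, &kc); err != nil {
		return nil, fmt.Errorf("invalid KubeletConfig for a hosted cluster NodePool: %w", err)
	}
	return &kc, nil
}

Either approach (an explicit field check or strict decoding against an HCP-specific schema) would surface the problem at NodePool reconciliation time rather than silently ignoring the selector.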