Bug
Resolution: Unresolved
Affects Version: 4.19
Quality / Stability / Reliability
Severity: Moderate
Description of problem:
While verifying OCL for GA: after a MachineOSConfig (MOSC) is applied and the image build completes, the worker MCP starts updating. Once that update finishes, one of the worker nodes is rebooted again and the worker MCP starts a second update.
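A quick way to confirm the extra reboot from the node side is sketched below. The node name is taken from the output in "Actual results"; the commands are generic oc/rpm-ostree/systemd tooling, not anything specific to this bug, and the journal check assumes the node keeps a persistent journal:

    # Show the booted deployment / layered image on the affected worker
    oc debug node/ip-10-0-23-124.us-east-2.compute.internal -- chroot /host rpm-ostree status
    # List the boots recorded in the journal to count how many reboots actually happened
    oc debug node/ip-10-0-23-124.us-east-2.compute.internal -- chroot /host journalctl --list-boots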
Version-Release number of selected component (if applicable):
4.19
How reproducible:
Reproducible most of the time when a new MOSC is applied
Steps to Reproduce:
1. On a cluster with OCL GA (4.19).
2. Apply a new MOSC:

oc create -f - << EOF
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineOSConfig
metadata:
  name: worker
spec:
  machineConfigPool:
    name: worker
  imageBuilder:
    imageBuilderType: Job
  baseImagePullSecret:
    name: $(oc get secret -n openshift-config pull-secret -o json | jq "del(.metadata.namespace, .metadata.creationTimestamp, .metadata.resourceVersion, .metadata.uid, .metadata.name)" | jq '.metadata.name="pull-copy"' | oc -n openshift-machine-config-operator create -f - &> /dev/null; echo -n "pull-copy")
  renderedImagePushSecret:
    name: $(oc get -n openshift-machine-config-operator sa builder -ojsonpath='{.secrets[0].name}')
  renderedImagePushSpec: "image-registry.openshift-image-registry.svc:5000/openshift-machine-config-operator/ocb-image:latest"
EOF

3. Wait for the MOSB to complete (see the sketch after this list).
4. Monitor the MCP status.
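For step 3, a minimal way to watch the build is sketched below; the "Succeeded" condition name is an assumption and may differ between releases:

    # Watch the MachineOSBuild created for the MOSC
    oc get machineosbuild -w
    # Or block until the build reports success (condition name assumed)
    oc wait machineosbuild --all --for=condition=Succeeded --timeout=30m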
Actual results:
MCP status after the MOSC was applied:

oc get mcp -w
NAME     CONFIG                                              UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
6h47m    8h
worker   rendered-worker-5cdf608f61d4c7b6fec856a366e42aac   False     True       False      3              1                   1                     0                      8h
worker   rendered-worker-5cdf608f61d4c7b6fec856a366e42aac   False     True       False      3              2                   2                     0                      8h
worker   rendered-worker-5cdf608f61d4c7b6fec856a366e42aac   False     True       False      3              2                   2                     0                      8h
worker   rendered-worker-5cdf608f61d4c7b6fec856a366e42aac   True      False      False      3              3                   3                     0                      8h
worker   rendered-worker-5cdf608f61d4c7b6fec856a366e42aac   False     True       False      3              2                   3                     0                      8h
worker   rendered-worker-5cdf608f61d4c7b6fec856a366e42aac   True      False      False      2              2                   2                     0                      8h
worker   rendered-worker-5cdf608f61d4c7b6fec856a366e42aac   False     True       False      3              2                   2                     0                      8h
worker   rendered-worker-5cdf608f61d4c7b6fec856a366e42aac   False     True       False      3              2                   2                     0                      8h
worker   rendered-worker-5cdf608f61d4c7b6fec856a366e42aac   False     True       False      3              2                   2                     0                      8h
worker   rendered-worker-5cdf608f61d4c7b6fec856a366e42aac   True      False      False      3              3                   3                     0                      8h

oc get node -w
NAME                                         STATUS     ROLES                  AGE   VERSION
ip-10-0-23-124.us-east-2.compute.internal    NotReady   worker                 4s    v1.32.1
ip-10-0-25-246.us-east-2.compute.internal    Ready      control-plane,master   8h    v1.32.1
ip-10-0-37-141.us-east-2.compute.internal    Ready      worker                 8h    v1.32.1
ip-10-0-61-148.us-east-2.compute.internal    Ready      control-plane,master   8h    v1.32.1
ip-10-0-70-113.us-east-2.compute.internal    Ready      worker                 42m   v1.32.1
ip-10-0-70-254.us-east-2.compute.internal    Ready      control-plane,master   8h    v1.32.1
ip-10-0-23-124.us-east-2.compute.internal    NotReady   worker                 5s    v1.32.1
ip-10-0-23-124.us-east-2.compute.internal    NotReady   worker                 5s    v1.32.1
ip-10-0-70-113.us-east-2.compute.internal    Ready      worker                 42m   v1.32.1
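To help triage why the second update starts, it may be useful to compare the MCO annotations on the rebooted node against what the pool expects and to pull the machine-config-daemon log from that node. The annotation prefix below is the standard machineconfiguration.openshift.io one; the rest is plain oc/jq and is only a sketch:

    NODE=ip-10-0-23-124.us-east-2.compute.internal
    # Desired vs current config/image annotations on the rebooted node
    oc get node $NODE -o json | jq '.metadata.annotations | with_entries(select(.key | startswith("machineconfiguration.openshift.io")))'
    # Find the machine-config-daemon pod running on that node and inspect its log around the reboot
    MCD_POD=$(oc get pods -n openshift-machine-config-operator -l k8s-app=machine-config-daemon --field-selector spec.nodeName=$NODE -o name)
    oc logs -n openshift-machine-config-operator $MCD_POD -c machine-config-daemon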
Expected results:
The node should not be rebooted again after the worker MCP update completes, and the pool should not start a second update.
Additional info: