Bug
Resolution: Unresolved
Normal
4.17.z, 4.18.z, 4.19.z, 4.20.z
False
Rejected
A new ReplicaSet is created, but it is immediately scaled back down and the old ReplicaSet takes over instead. For example:
% oc rollout restart deployment openshift-pipelines-operator -n openshift-operators
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  openshift-pipelines-operator-6b84bd4dbd (0/0 replicas created)
NewReplicaSet:   openshift-pipelines-operator-75d5dddc64 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  114s  deployment-controller  Scaled up replica set openshift-pipelines-operator-6b84bd4dbd from 0 to 1
  Normal  ScalingReplicaSet  113s  deployment-controller  Scaled down replica set openshift-pipelines-operator-6b84bd4dbd from 1 to 0
What did you expect to see?
A new ReplicaSet with a new set of pods comes up and replaces the existing ReplicaSet.
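The behavior can be observed with a few standard `oc` commands (a sketch, assuming access to a cluster exhibiting the issue; the deployment name and namespace are taken from the report above):

```shell
# Trigger a rolling restart of the operator deployment.
oc rollout restart deployment openshift-pipelines-operator -n openshift-operators

# Watch ReplicaSet scaling as the rollout proceeds; on an affected cluster
# the freshly created ReplicaSet is scaled back down to 0 shortly after creation.
oc get rs -n openshift-operators -w

# Inspect the deployment's conditions and events afterwards, as shown in the
# output above.
oc describe deployment openshift-pipelines-operator -n openshift-operators
```

Because OLM manages the operator's Deployment, it may reconcile the object back to its desired state and undo the restart, which would match the scale-up/scale-down pattern in the events.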
Upstream bug:
https://github.com/operator-framework/operator-lifecycle-manager/issues/3392