Bug | Resolution: Unresolved | Normal | 4.22.0 | Yanma Sprint 285
A new ReplicaSet (RS) is created, but it is immediately scaled down and the old RS takes over instead. For example:
% oc rollout restart deployment openshift-pipelines-operator -n openshift-operators
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  openshift-pipelines-operator-6b84bd4dbd (0/0 replicas created)
NewReplicaSet:   openshift-pipelines-operator-75d5dddc64 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  114s  deployment-controller  Scaled up replica set openshift-pipelines-operator-6b84bd4dbd from 0 to 1
  Normal  ScalingReplicaSet  113s  deployment-controller  Scaled down replica set openshift-pipelines-operator-6b84bd4dbd from 1 to 0
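For reference, oc rollout restart triggers the new ReplicaSet by stamping the pod template with the kubectl.kubernetes.io/restartedAt annotation. A minimal way to check whether that annotation is still present after the rollback (deployment name taken from this report; OLM reverting the template to its CSV-defined spec is only an assumed root cause, not confirmed here):
% oc get deployment openshift-pipelines-operator -n openshift-operators \
    -o jsonpath='{.spec.template.metadata.annotations.kubectl\.kubernetes\.io/restartedAt}'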
What did you expect to see?
A new ReplicaSet with a new set of pods comes up, replacing the existing RS.
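In other words, after the restart the deployment's NewReplicaSet should carry a fresh pod-template hash while the previous RS is scaled down to 0. A sketch of how that could be verified (same deployment name as above):
% oc rollout status deployment openshift-pipelines-operator -n openshift-operators
% oc get rs -n openshift-operators --sort-by=.metadata.creationTimestamp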
Upstream bug:
https://github.com/operator-framework/operator-lifecycle-manager/issues/3392
clones:
- OCPBUGS-76297 Rollout restart of a deployment managed by OLM doesn't work as expected (New)
is blocked by:
- OPRUN-4496 Analyze OCPBUGS-76297 Scenario in OLMv1 via New Tests (In Progress)
- OPRUN-4495 Fix SA1019: server-side apply required, needs generated apply configurations (Closed)