Issue Type: Bug
Resolution: Unresolved
Affects Versions: 4.18.z, 4.19.z, 4.20.z
Quality / Stability / Reliability
Description of problem:
With the new maxUnavailable option introduced in logging v6.4, setting maxUnavailable in the ClusterLogForwarder (CLF) causes `oc rollout restart ds/collector` to restart only the percentage of collector pods specified by maxUnavailable, instead of rolling out all of them.
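For context, maxUnavailable in the CLF collector spec is presumably rendered by the operator into the collector DaemonSet's rolling-update strategy. A quick way to inspect what the operator actually wrote there (`.spec.updateStrategy` is the standard DaemonSet API field; the DaemonSet name and namespace match the CLF below):
```
# Show the rolling-update settings the operator rendered into the DaemonSet.
oc get ds collector -n openshift-logging \
  -o jsonpath='{.spec.updateStrategy}{"\n"}'
```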
Version-Release number of selected component (if applicable):
OpenShift 4.18+ with logging v6.4
How reproducible:
Easily reproducible
Steps to Reproduce:
1. Install logging v6.4 on any v4.18 cluster.
2. Create a CLF (the CLF used is attached in a comment and reproduced under Additional info below).
3. Once the CLF is ready, run `oc rollout restart ds/collector -n openshift-logging`. (I am using `collector` as the CLF name; change it to whatever you have in your cluster.)
4. Watch the pods restart: only a fraction of the pods is restarted, rather than all pods being rolled out in batches bounded by maxUnavailable. Verification commands are sketched after this list.
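A minimal way to observe the behavior (the label selector below is an assumption about how the operator labels collector pods; adjust it to match your cluster):
```
# Trigger the restart, then watch which pods actually get recreated.
oc rollout restart ds/collector -n openshift-logging
oc rollout status ds/collector -n openshift-logging --timeout=10m
# Assumption: collector pods carry this component label; adjust if needed.
oc get pods -n openshift-logging -l app.kubernetes.io/component=collector \
  -o wide --watch
```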
Actual results:
Only a subset of the collector pods (roughly the maxUnavailable percentage) is restarted; the remaining pods keep running the old revision.
Expected results:
All collector pods should be rolled out, in batches bounded by maxUnavailable, until every pod is running the new revision.
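One way to confirm whether every pod picked up the new pod template after the restart is to compare the controller-revision-hash labels that the DaemonSet controller stamps on its pods (same label-selector assumption as above):
```
# After the rollout, every pod should carry the same (new) revision hash;
# controller-revision-hash is set by the DaemonSet controller itself.
oc get pods -n openshift-logging -l app.kubernetes.io/component=collector \
  -o custom-columns=NAME:.metadata.name,REVISION:.metadata.labels.controller-revision-hash
```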
Additional info:
Use this CLF as the test case:
```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  collector:
    maxUnavailable: 30%
    resources:
      limits:
        memory: 2Gi
  managementState: Managed
  outputs:
    - lokiStack:
        authentication:
          token:
            from: serviceAccount
        target:
          name: logging-loki
          namespace: openshift-logging
      name: default-lokistack
      type: lokiStack
  pipelines:
    - inputRefs:
        - application
        - infrastructure
      name: default-logstore
      outputRefs:
        - default-lokistack
  serviceAccount:
    name: collector
```