Type: Bug
Resolution: Cannot Reproduce
Versions: 4.12.z, 4.12
Impact: Quality / Stability / Reliability
Sprint: MON Sprint 244
Description of problem:
After the following change was applied to the user-workload-monitoring-config ConfigMap by a ROSA customer (the original volume size was 300Gi and the original storageClassName was "rosa-efs-storage"):

    apiVersion: v1
    data:
      config.yaml: |
        prometheus:
          retention: "15d"
          volumeClaimTemplate:
            spec:
              storageClassName: "rosa-efs-storage"
              resources:
                requests:
                  storage: "20Gi"
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring

the Prometheus pods became stuck in the Init state because they were unable to mount the db volumes. Manual intervention was required to bring the Prometheus pods back: deleting the old 300Gi PVCs and deleting the Prometheus pods to speed up PVC termination. Please note that rosa-efs-storage PVCs cannot be resized by changing the spec.resources.requests.storage parameter.
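For reference, the manual recovery described above can be sketched with standard oc commands. This is only an illustration; the PVC and pod names below assume the operator's usual prometheus-user-workload naming and should be verified against what the cluster actually reports:

    # Inspect the PVCs in the UWM namespace; the old 300Gi claims will still be Bound
    oc -n openshift-user-workload-monitoring get pvc

    # Delete the stale 300Gi PVCs (names assumed; verify with the command above)
    oc -n openshift-user-workload-monitoring delete pvc \
      prometheus-user-workload-db-prometheus-user-workload-0 \
      prometheus-user-workload-db-prometheus-user-workload-1

    # Delete the stuck pods so the StatefulSet recreates them and new PVCs are provisioned
    oc -n openshift-user-workload-monitoring delete pod \
      prometheus-user-workload-0 prometheus-user-workload-1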
Version-Release number of selected component (if applicable):
4.12.34
How reproducible:
Steps to Reproduce:
1. Create a ROSA cluster.
2. Edit the volume spec in the user-workload-monitoring-config ConfigMap as described above (a sketch of the edit command follows this list).
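A minimal sketch of step 2, assuming the user-workload-monitoring-config ConfigMap already exists with the 300Gi volumeClaimTemplate:

    # Open the ConfigMap for editing and change storage from "300Gi" to "20Gi"
    # under prometheus.volumeClaimTemplate.spec.resources.requests
    oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config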
Actual results:
UWM Prometheus down
Expected results:
UWM Prometheus PVCs and pods are recreated automatically by the operator.
Additional info: