Type: Bug
Resolution: Unresolved
Target Version: 4.14.z
Quality / Stability / Reliability
Severity: Important
Description of problem:
When a consumer container/pod is deleted or scaled down, the GCP disk is not detached cleanly from the node where it was mounted. Leftover files remain inside the staging globalmount directory, and because of them the NodeUnstage operation fails.
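For diagnosis, a minimal sketch (Python, run on the affected node, e.g. from a debug shell) that lists non-empty globalmount directories like the ones described above. The base path follows the usual kubelet CSI staging layout, and the driver directory name pd.csi.storage.gke.io is an assumption to verify on the node:

#!/usr/bin/env python3
"""Sketch: list leftover entries under CSI staging globalmount dirs."""
import os

# Assumed staging path for the GCP PD CSI driver; verify on the node.
BASE = "/var/lib/kubelet/plugins/kubernetes.io/csi/pd.csi.storage.gke.io"

def leftover_globalmounts(base=BASE):
    found = []
    if not os.path.isdir(base):
        return found
    for vol_hash in os.listdir(base):
        gm = os.path.join(base, vol_hash, "globalmount")
        if os.path.isdir(gm) and os.listdir(gm):
            # A non-empty globalmount after the consumer pod is gone is
            # the condition that makes NodeUnstage fail in this bug.
            found.append((gm, os.listdir(gm)))
    return found

if __name__ == "__main__":
    for path, entries in leftover_globalmounts():
        print(path, "->", entries)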
Version-Release number of selected component (if applicable):
OCP 4.14.35
How reproducible:
Sporadically, on different nodes in the customer cluster.
Steps to Reproduce:
1. Run a consumer pod of the GCP disk with the CSI driver.
2. Stop the pod (delete it or scale its workload down).
3. Start the pod on a different node (a scripted sketch of these steps follows below).
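A scripted sketch of steps 2 and 3 using the Kubernetes Python client, assuming the consumer runs as a Deployment; the workload name, namespace, and node name below are hypothetical, and cordoning the original node is just one way to force the pod onto a different node:

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()
core = client.CoreV1Api()

NAME, NAMESPACE = "pd-consumer", "default"   # hypothetical workload
ORIGINAL_NODE = "worker-0"                   # node that held the disk

def scale(replicas):
    apps.patch_namespaced_deployment_scale(
        NAME, NAMESPACE, {"spec": {"replicas": replicas}})

# Step 2: stop the consumer pod (wait for it to terminate fully).
scale(0)
# Cordon the original node so the replacement pod lands elsewhere.
core.patch_node(ORIGINAL_NODE, {"spec": {"unschedulable": True}})
# Step 3: start the pod again on a different node.
scale(1)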
Actual results:
The pod cannot start on another node in the cluster because its GCP disk is still attached to the previous node.
Expected results:
The GCP disk should be detached cleanly from the node once the pod no longer runs there.
Additional info:
As a workaround, deleting the leftover content inside the staging globalmount directory allows the disk to be detached correctly.
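A sketch of that workaround in Python, meant to run on the affected node; the /proc/mounts safety check and the example staging path in the comment are assumptions to validate before deleting anything:

#!/usr/bin/env python3
"""Sketch of the workaround: remove leftover files from a stale
globalmount directory so NodeUnstage can complete and the disk detaches.
Only deletes when nothing is actually mounted at the path."""
import os
import shutil
import sys

def is_mounted(path):
    # Treat the path as busy if it appears as a mount point in /proc/mounts.
    with open("/proc/mounts") as f:
        return any(line.split()[1] == path for line in f)

def clean_globalmount(gm_path):
    if is_mounted(gm_path):
        sys.exit(f"{gm_path} is still mounted; not touching it")
    for entry in os.listdir(gm_path):
        full = os.path.join(gm_path, entry)
        # Leftover files/dirs here are what block NodeUnstage in this bug.
        if os.path.isdir(full):
            shutil.rmtree(full)
        else:
            os.remove(full)

if __name__ == "__main__":
    # Example (hypothetical volume hash):
    # clean_globalmount("/var/lib/kubelet/plugins/kubernetes.io/csi/"
    #                   "pd.csi.storage.gke.io/<hash>/globalmount")
    clean_globalmount(sys.argv[1])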