OCPBUGS-19329

Failure to clean EFS volumes causes pods to be stuck in terminating state



      Description of problem:

      Application pods are stuck in the terminating state. The kubelet log shows the following failure while trying to clean up the corresponding EFS volume (some details modified):

      Sep 14 09:15:56.150816 node1 hyperkube[1672]: E0914 09:15:56.150794    1672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/efs.csi.aws.com^fs-d9a123::fsap-078e4b4612345 podName:80c51171-30d3-4da4-916e-a0e2ae2ddd27 nodeName:}" failed. No retries permitted until 2023-09-14 09:17:58.150774727 +0000 UTC m=+284111.731582583 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "efs-pvc" (UniqueName: "kubernetes.io/csi/efs.csi.aws.com^fs-d9a17299::fsap-078e4b460e26f0775") pod "80c51171-30d3-4da4-916e-a0e2ae2ddd27" (UID: "80c51171-30d3-4da4-916e-a0e2ae2ddd27") : kubernetes.io/csi: Unmounter.TearDownAt failed to clean mount dir [/var/lib/kubelet/pods/80c51171-30d3-4da4-916e-a0e2ae2ddd27/volumes/kubernetes.io~csi/volumename/mount]: kubernetes.io/csi: failed to remove dir [/var/lib/kubelet/pods/80c51171-30d3-4da4-916e-a0e2ae2ddd27/volumes/kubernetes.io~csi/volumename/mount]: remove /var/lib/kubelet/pods/80c51171-30d3-4da4-916e-a0e2ae2ddd27/volumes/kubernetes.io~csi/volumename/mount: directory not empty
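
      The "directory not empty" error comes straight from the operating system: a directory can only be removed once it is empty. The following minimal Go sketch (illustrative only, not the kubelet or EFS CSI driver source; the paths and file names are made up) reproduces the same failure mode, where leftover content in the mount directory makes the final remove fail:

      package main

      import (
          "fmt"
          "os"
          "path/filepath"
      )

      func main() {
          // Stand-in for /var/lib/kubelet/pods/<uid>/volumes/kubernetes.io~csi/<vol>/mount
          mountDir, err := os.MkdirTemp("", "csi-mount-")
          if err != nil {
              panic(err)
          }

          // Simulate leftover content in the mount dir, e.g. entries still
          // visible because the EFS (NFS) unmount did not fully complete.
          if err := os.WriteFile(filepath.Join(mountDir, "leftover"), nil, 0o644); err != nil {
              panic(err)
          }

          // Mirrors the failing cleanup step in the kubelet log above:
          // removing a non-empty directory fails with ENOTEMPTY.
          if err := os.Remove(mountDir); err != nil {
              fmt.Println("cleanup failed:", err) // prints "... directory not empty"
          }
      }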

      Version-Release number of selected component (if applicable):

      4.11.43

      How reproducible:

      Not reproduced reliably yet

      Expected results:

      The EFS volume is unmounted and its mount directory removed cleanly, allowing the pod to finish terminating

            Jan Safranek (rhn-engineering-jsafrane)
            Ravi Trivedi (travi.openshift)
            Rohit Patil
