OpenShift API for Data Protection / OADP-2029

Data mover, Restore - CephFS PV creation time is long or PV not created


      Description of problem:

      While running the Data Mover over CephFS with different PV sizes and usage

      (NS1: 100 PVs of 6G size with 2G usage; NS2: 1 PV of 1000G size with 500G usage), the backups completed OK.
      During restore, the PVCs and pods were stuck in 'Pending' status.
      NS1 - 100 PVs: it took 30-45 min until all PVs were created and the statuses changed to 'Bound' & 'Running'.
      NS2 - 1 PV (1T): the PVC and pod stayed 'Pending' and the PV was not created (even after 10 hrs).
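
      For reference, a minimal sketch of how such a stuck claim can be inspected (the NS1 namespace below is taken from the outputs in Additional info; the NS2 namespace and PVC names are placeholders, not taken from the run):

      # List claims and their phase in the affected namespace
      oc get pvc -n perf-busy-data-cephfs-100pods

      # Show the provisioning events of a single Pending claim (namespace/PVC names are hypothetical)
      oc describe pvc pvc-busy-data-fs-1000g -n perf-busy-data-cephfs-1pod

      # Check whether a PV was ever provisioned for that claim
      oc get pv | grep perf-busy-data-cephfs-1pod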

      Version-Release number of selected component (if applicable):

      OCP 4.12.9

      ODF 4.12.3
      OADP 1.2.0-78
      Using CephFS
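
      These versions can be confirmed directly on the cluster, for example (a sketch; it assumes the default operator namespaces openshift-storage and openshift-adp):

      oc get clusterversion                              # OCP version
      oc get csv -n openshift-storage                    # ODF operator version
      oc get csv -n openshift-adp                        # OADP operator version
      oc get storageclass ocs-storagecluster-cephfs      # CephFS storage class used by the test PVCs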

      How reproducible:

      Steps to Reproduce:
      1. Create NS1 - 100 pods, PV size 6G, 2G usage
      2. Create NS2 - 1 pod, PV size 1000G, 500G usage
      3. Backup NS1
      4. Backup NS2
      (sequential backups)
      5. Delete NS1 & NS2, then clean up the leftover vs, vsc & vsb resources
      6. Restore NS1
      7. Monitor restore of NS1 (check pods, PVCs, PVs, vs, vsc)
      8. Restore NS2
      9. Monitor restore of NS2 (check pods, PVCs, PVs, vs, vsc)
      (sequential restores; a command-line sketch of these steps is shown below)
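
      The sketch below is one way to drive steps 3-9 from the command line. It assumes the Velero CLI is pointed at the openshift-adp namespace and that the DPA already has the Data Mover enabled; the backup/restore names and the NS2 namespace are illustrative placeholders.

      # 3-4. Sequential backups
      velero backup create ns1-backup -n openshift-adp --include-namespaces perf-busy-data-cephfs-100pods --wait
      velero backup create ns2-backup -n openshift-adp --include-namespaces perf-busy-data-cephfs-1pod --wait

      # 5. Delete the namespaces and clean up leftover snapshot resources (vs, vsc, vsb)
      oc delete ns perf-busy-data-cephfs-100pods perf-busy-data-cephfs-1pod
      oc delete volumesnapshot --all -A
      oc delete volumesnapshotcontent --all
      oc delete volumesnapshotbackup --all -A      # OADP 1.2 Data Mover CR

      # 6-7. Restore NS1 and monitor pods / PVCs / PVs
      velero restore create ns1-restore -n openshift-adp --from-backup ns1-backup
      oc get pods,pvc -n perf-busy-data-cephfs-100pods
      oc get pv | grep perf-busy-data-cephfs-100pods

      # 8-9. Restore NS2 and monitor the same resources
      velero restore create ns2-restore -n openshift-adp --from-backup ns2-backup
      oc get pods,pvc -n perf-busy-data-cephfs-1pod
      oc get pv | grep perf-busy-data-cephfs-1pod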

      Actual results:

      NS1 - All PVs were created after 30-40 min

      NS2 - The PV was not created (even after 10 hrs)

       

      Expected results:

      All PVs should be created within a few minutes.

      For the same restore cases using CephRBD, the PVs were created within 2 min.

      Additional info:

      Pods status
      [root@f01-h07-000-r640 20230525_143028]# grep 100pods-72 dm_restore-fs-100pods-status.txt
      busy-data-fs-100pods-72-cc9b95d55-crvr2    1/1     Running   0          33m

      PVC status

      [root@f01-h07-000-r640 20230525_143028]# grep pvc-busy-data-fs-100pods-72 dm_restore-fs-100pods-pvc-time.txt
      perf-busy-data-cephfs-100pods         pvc-busy-data-fs-100pods-72              Bound     pvc-85e96622-a4cf-4bea-8caf-a34fb00eed6c   6Gi        RWO            ocs-storagecluster-cephfs     32m

      PVs status

      [root@f01-h07-000-r640 20230525_143028]# grep pvc-busy-data-fs-100pods-72 dm_restore-fs-100pods-pv-time.txt
      pvc-85e96622-a4cf-4bea-8caf-a34fb00eed6c   6Gi        RWO            Delete           Bound    perf-busy-data-cephfs-100pods/pvc-busy-data-fs-100pods-72           ocs-storagecluster-cephfs              2m18s
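
      The status/timing files quoted above can be produced with periodic snapshots like the following (a sketch; the file names simply mirror the ones quoted above):

      oc get pods -n perf-busy-data-cephfs-100pods   > dm_restore-fs-100pods-status.txt
      oc get pvc  -n perf-busy-data-cephfs-100pods   > dm_restore-fs-100pods-pvc-time.txt
      oc get pv | grep perf-busy-data-cephfs-100pods > dm_restore-fs-100pods-pv-time.txt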

              Assignee: A Arnold (rhn-support-anarnold)
              Reporter: David Vaanunu (dvaanunu@redhat.com)