OpenShift API for Data Protection
OADP-5552

[DOC - RN] CSI / DM Restore fails when using Cinder Csi driver on Openstack



      Description of problem:

      When using the Cinder CSI driver on the OpenStack platform, the backup passes (and only after scaling down the application), but the restore fails.

      Version-Release number of selected component (if applicable):

      OCP 4.17, OSP

      How reproducible:

      Always

      Steps to Reproduce:
      1. Deploy a stateful application that uses a Cinder StorageClass.
      2. Scale down the application (Cinder does not allow snapshotting attached volumes).
      3. Perform a CSI / DM backup.
      4. Remove the application.
      5. Restore the application from the same backup.
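
      The steps above can be sketched with the `oc` and `velero` CLIs. The namespace, backup, and restore names (`mysql-dmactual`, `dmactual`, `restore4`) are taken from the restore output below; the deployment name `mysql` is an assumption:

      ```shell
      # Sketch of the reproduction; "mysql" as the deployment name is a guess.

      # 2. Scale down so Cinder can snapshot the now-detached volumes.
      oc scale deployment mysql --replicas=0 -n mysql-dmactual

      # 3. CSI backup with Data Mover (snapshot data movement).
      velero backup create dmactual \
        --include-namespaces mysql-dmactual \
        --snapshot-move-data \
        -n openshift-adp

      # 4. Remove the application.
      oc delete namespace mysql-dmactual

      # 5. Restore from the same backup.
      velero restore create restore4 --from-backup dmactual -n openshift-adp
      ```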

      Actual results:

      Restore fails with the following error:

       $ velero describe restore restore4 -n openshift-adp --details
      Name:         restore4
      Namespace:    openshift-adp
      Labels:       <none>
      Annotations:  <none>
      
      Phase:                       WaitingForPluginOperationsPartiallyFailed
      Total items to be restored:  36
      Items restored:              36
      
      Started:    2025-01-21 21:14:15 +0530 IST
      Completed:  <n/a>
      
      Warnings:
        Velero:     <none>
        Cluster:  could not restore, CustomResourceDefinition "volumesnapshots.snapshot.storage.k8s.io" already exists. Warning: the in-cluster version is different than the backed-up version
                  could not restore, VolumeSnapshotContent "snapcontent-0d0aa38e-a8b0-4b8f-81f7-66d6850d0640" already exists. Warning: the in-cluster version is different than the backed-up version
                  could not restore, VolumeSnapshotContent "snapcontent-5a1d408a-141f-428d-bb57-4941a821de3f" already exists. Warning: the in-cluster version is different than the backed-up version
        Namespaces:
          mysql-dmactual:  could not restore, ConfigMap "kube-root-ca.crt" already exists. Warning: the in-cluster version is different than the backed-up version
                           could not restore, ConfigMap "openshift-service-ca.crt" already exists. Warning: the in-cluster version is different than the backed-up version
                           could not restore, RoleBinding "admin" already exists. Warning: the in-cluster version is different than the backed-up version
                           could not restore, RoleBinding "system:deployers" already exists. Warning: the in-cluster version is different than the backed-up version
                           could not restore, RoleBinding "system:image-builders" already exists. Warning: the in-cluster version is different than the backed-up version
                           could not restore, RoleBinding "system:image-pullers" already exists. Warning: the in-cluster version is different than the backed-up version
      
      Errors:
        Velero:     <none>
        Cluster:    <none>
        Namespaces:
          mysql-dmactual:  error preparing volumesnapshots.snapshot.storage.k8s.io/mysql-dmactual/velero-mysql-data-294vj: rpc error: code = Unknown desc = VolumeSnapshot mysql/velero-mysql-data-294vj does not have a velero.io/csi-volumesnapshot-handle annotation
      
      Backup:  dmactual
      
      Namespaces:
        Included:  all namespaces found in the backup
        Excluded:  <none>
      
      Resources:
      

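      The error in the output above says the restored VolumeSnapshot is missing the `velero.io/csi-volumesnapshot-handle` annotation, which the CSI restore path requires. A minimal local sketch of that check follows; the inline manifest is a hypothetical stand-in for the output of `oc get volumesnapshot velero-mysql-data-294vj -n mysql-dmactual -o json`:

      ```shell
      # Hypothetical VolumeSnapshot manifest (no velero.io annotations),
      # standing in for the real object fetched from the cluster.
      cat > /tmp/vs.json <<'EOF'
      {
        "metadata": {
          "name": "velero-mysql-data-294vj",
          "namespace": "mysql-dmactual",
          "annotations": {}
        }
      }
      EOF

      # The CSI restore fails when this annotation is absent.
      if grep -q 'velero.io/csi-volumesnapshot-handle' /tmp/vs.json; then
        echo "annotation present"
      else
        echo "annotation missing"   # → annotation missing
      fi
      ```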
      Expected results:

      Restore should be successful.

      Additional info:

              rhn-support-vaashiro Valentina Ashirova
              rhn-support-ssingla Sachin Singla