OpenShift API for Data Protection
OADP-4457

Restore skips a resource with "Skipping restore of resource because it cannot be resolved via discovery"



      Description of problem:

      We are taking a backup of a Cloud Pak for Data system and have verified that a custom resource called ZenService is in the backup:

      [root@api.autonomy.cp.fyre.ibm.com inestigation]# $EXEC backup download tenant-offline-b1-2024-07-02
      Backup tenant-offline-b1-2024-07-02 has been successfully downloaded to /root/inestigation/tenant-offline-b1-2024-07-02-data.tar.gz
      
      [root@api.autonomy.cp.fyre.ibm.com zen]# pwd
      /root/inestigation/b1/resources/zenservices.zen.cpd.ibm.com/namespaces/zen
      [root@api.autonomy.cp.fyre.ibm.com zen]# cat lite-cr.json ; echo
      {"apiVersion":"zen.cpd.ibm.com/v1","kind":"ZenService","metadata":...
      

      However, on restore the resource is somehow skipped, with the following log entry:

      cat x.log | grep -i zenservices
      time="2024-07-03T07:28:53Z" level=info msg="Skipping restore of resource because it cannot be resolved via discovery" logSource="/remote-source/velero/app/pkg/restore/restore.go:2188" resource=zenservices.zen.cpd.ibm.com restore=oadp-operator/tenant-offline-r4-2024-07-02-same
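For logs with many entries, the skipped resources can be pulled out programmatically. The following is a small sketch (not part of Velero or OADP) that scans a restore log for the discovery-skip message and collects the affected resource names from the `resource=` field:

```python
import re

# Velero emits this message once per resource it cannot resolve via discovery.
SKIP_MSG = "Skipping restore of resource because it cannot be resolved via discovery"

def skipped_resources(log_text: str) -> list[str]:
    """Return the sorted, de-duplicated resource names skipped during restore."""
    skipped = set()
    for line in log_text.splitlines():
        if SKIP_MSG in line:
            # Structured log fields look like: resource=zenservices.zen.cpd.ibm.com
            match = re.search(r'resource=(\S+)', line)
            if match:
                skipped.add(match.group(1))
    return sorted(skipped)
```

Run over the log above, this returns `["zenservices.zen.cpd.ibm.com"]`.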
      

      This appears to be the same class of issue as:
      https://github.com/vmware-tanzu/velero/issues/2632
      https://github.com/vmware-tanzu/velero/issues/2948

      We are using OADP 1.3.2 on this OpenShift version:

      Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
      Server Version: 4.14.24
      Kubernetes Version: v1.27.13+fd36fb9
      

      We have a sequence of restores, r1 through r7, which we run sequentially. In r1 we restored the CRD definitions with this restore spec:

      spec:
        backupName: tenant-offline-b2-2024-07-02
        excludedResources:
          - nodes
          - events
          - events.events.k8s.io
          - backups.velero.io
          - restores.velero.io
          - resticrepositories.velero.io
          - csinodes.storage.k8s.io
          - volumeattachments.storage.k8s.io
          - backuprepositories.velero.io
        hooks: {}
        includeClusterResources: true
        includedNamespaces:
          - '*'
        includedResources:
          - namespaces
          - operatorgroups
          - roles
          - rolebindings
          - serviceaccounts
          - customresourcedefinitions.apiextensions.k8s.io
          - securitycontextconstraints.security.openshift.io
        itemOperationTimeout: 4h0m0s
      status:
        completionTimestamp: '2024-07-05T09:57:02Z'
        phase: Completed
        progress:
          itemsRestored: 226
          totalItems: 226
        startTimestamp: '2024-07-05T09:55:53Z'
        warnings: 10
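One thing worth ruling out between r1 and the later restores is whether the CRDs restored in r1 were actually established (and therefore visible to API discovery) before r4 started. A sketch of that check, assuming the CRD JSON shape returned by `oc get crd <name> -o json`:

```python
def crd_established(crd: dict) -> bool:
    """True if the CRD reports an Established=True condition in its status."""
    for cond in crd.get("status", {}).get("conditions", []):
        if cond.get("type") == "Established":
            return cond.get("status") == "True"
    return False
```

On the cluster, the same check is available from the CLI as `oc wait --for=condition=Established crd/zenservices.zen.cpd.ibm.com`.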
      

      Then we restored the ZenService CR in r4, with this restore spec:

      spec:
        backupName: tenant-offline-b1-2024-07-02
        excludedResources:
          - nodes
          - events
          - events.events.k8s.io
          - backups.velero.io
          - restores.velero.io
          - resticrepositories.velero.io
          - csinodes.storage.k8s.io
          - volumeattachments.storage.k8s.io
          - backuprepositories.velero.io
        hooks: {}
        includedNamespaces:
          - '*'
        includedResources:
          - namespaces
          - secrets
          - configmaps
          - certificates.cert-manager.io
          - certificates.certmanager.k8s.io
          - issuers.cert-manager.io
          - issuers.certmanager.k8s.io
          - zenservices
        itemOperationTimeout: 4h0m0s
      status:
        completionTimestamp: '2024-07-05T10:18:09Z'
        phase: Completed
        progress:
          itemsRestored: 289
          totalItems: 289
        startTimestamp: '2024-07-05T10:16:07Z'
        warnings: 33
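The linked Velero issues suggest the mechanism: Velero resolves each entry in `includedResources` against its discovery client's cached list of API resources, so a resource whose CRD was created after that cache was populated may not resolve until discovery is refreshed. A toy model of that matching (an illustration, not Velero's actual code), where discovery is represented as a set of fully-qualified resource names:

```python
def unresolvable(included: list[str], discovered: set[str]) -> list[str]:
    """Entries of includedResources that match nothing in the discovery cache.

    An entry matches either the fully-qualified name
    ("zenservices.zen.cpd.ibm.com") or the bare plural ("zenservices").
    This mirrors the idea only, not Velero's exact resolution logic.
    """
    missing = []
    for entry in included:
        if not any(name == entry or name.split(".", 1)[0] == entry
                   for name in discovered):
            missing.append(entry)
    return missing
```

With a cache that predates r1's CRD restore, "zenservices" lands in the unresolved list and the restore logs the skip message above; once discovery sees zenservices.zen.cpd.ibm.com, the same entry resolves.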
      

      sseago (Scott Seago)
      arie.pratama.s@ibm.com (Arie Sutiono, inactive)
      Amos Mastbaum