
          [OADP-2790] OADP support for datamover and block volumes

          Errata Tool added a comment -

          Since the problem described in this issue should be resolved in a recent advisory, it has been closed.

          For information on the advisory (Important: OpenShift API for Data Protection (OADP) 1.3.0 security update), and where to find the updated files, follow the link below:
          https://access.redhat.com/errata/RHSA-2023:7555

          If the solution does not work for you, open a new bug report.


          rhn-engineering-mpryc
          Perfect! Thank you. We will use it in QE e2e testing and run it against providers.


          amastbau

          I have a slightly different approach: an init container formats the block device, which is then used for MongoDB.

          Attached is the deployment file. Simply `oc apply -f <path to file>`.

          mongo-persistent-block.yaml
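
          For reference, a minimal sketch of the same idea (this is not the attached mongo-persistent-block.yaml; the pod name, images, device path, and PVC name are all illustrative): an init container formats the raw device if it has no filesystem yet, and the main container mounts it before starting mongod.

          apiVersion: v1
          kind: Pod
          metadata:
            name: mongo-block                  # hypothetical name
          spec:
            initContainers:
              - name: format-device
                image: registry.access.redhat.com/ubi9/ubi   # any image providing blkid/mkfs.ext4
                securityContext:
                  privileged: true             # simplest way to guarantee write access to the raw device
                # create an ext4 filesystem only if the device has none yet
                command: ["sh", "-c", "blkid /dev/mongo-data || mkfs.ext4 /dev/mongo-data"]
                volumeDevices:
                  - name: mongo-data
                    devicePath: /dev/mongo-data
            containers:
              - name: mongo
                image: docker.io/library/mongo:4.4           # illustrative image
                securityContext:
                  privileged: true             # mount(2) inside a container requires privilege
                # mount the formatted device, then hand off to mongod
                command: ["sh", "-c", "mount /dev/mongo-data /data/db && exec mongod --dbpath /data/db"]
                volumeDevices:
                  - name: mongo-data
                    devicePath: /dev/mongo-data
            volumes:
              - name: mongo-data
                persistentVolumeClaim:
                  claimName: mongo-block-pvc   # a PVC created with volumeMode: Block (hypothetical)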


          rhn-engineering-mpryc

          1. I realized our kubevirt test was faulty: it did not really validate the data. I have now tested it a little differently and will update the kubevirt test (it passed, and the data was validated).
          2. It looks like /var/lib/mysql is being overwritten. I have not managed to verify with a non-kubevirt deployment yet; I thought it would simply be a matter of mounting /var/lib/mysql as a volumeDevice instead of a volumeMount in the todolist app (see the sketch below).
          Michal, do you have a todolist app PR you can share?
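
          For illustration, the difference between the two stanzas in a container spec (a hedged sketch; the volume name and device path are hypothetical, and the backing PVC must also switch to volumeMode: Block for the second form). Note that MySQL cannot use a raw device directly, so swapping the stanza alone is not enough; something still has to format and mount the device.

          # Filesystem mode: the kubelet mounts the volume into the container.
          containers:
            - name: mysql
              volumeMounts:
                - name: mysql-data
                  mountPath: /var/lib/mysql

          # Block mode: the container receives the raw device; nothing is mounted for it.
          containers:
            - name: mysql
              volumeDevices:
                - name: mysql-data
                  devicePath: /dev/mysql-data   # hypothetical device path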


          Amos Mastbaum added a comment - edited

          Please review the test cases:
          https://polarion.engineering.redhat.com/polarion/#/project/OADP/workitem?id=OADP-415
          https://polarion.engineering.redhat.com/polarion/#/project/OADP/workitem?id=OADP-458

          wnstb  cc akarol@redhat.com

          In addition, we will try to run at scale to see if we can break something and how it handles it.

          One test is kubevirt; the other is the todolist app, which we can run on all providers.
          For 1.3, CNV plans to have more kubevirt coverage for native data mover, which is mostly block mode.


          Amos Mastbaum added a comment - edited

          After this PR, node-agent pods can run in privileged mode (--privileged-node-agent), and OADP deploys privileged node agents by default.
          wnstb, correct?

          resources:
            requests:
              cpu: 100m
              memory: 64Mi
          securityContext:
            privileged: true
          terminationMessagePath: /dev/termination-log

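          For context, a hedged sketch of where that fragment sits in the node-agent DaemonSet container spec; the image placeholder and the command/args follow upstream Velero's node-agent defaults and are shown for orientation, not copied from a live cluster.

          containers:
            - name: node-agent
              image: <velero image as deployed by the operator>
              command:
                - /velero
              args:
                - node-agent
                - server
              resources:
                requests:
                  cpu: 100m
                  memory: 64Mi
              securityContext:
                privileged: true       # lets the uploader read raw block devices on the host
              terminationMessagePath: /dev/termination-log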

          Verified 1.3.0-117 with a running VM.

          Covered by
          https://polarion.engineering.redhat.com/polarion/#/project/OADP/workitem?id=OADP-415

          Results

          Node agent mounts:

          - name: host-pods
            mountPath: /host_pods
            mountPropagation: HostToContainer
          - name: host-plugins
            mountPath: /var/lib/kubelet/plugins
            mountPropagation: HostToContainer
          - name: scratch
            mountPath: /scratch
          - name: certs
            mountPath: /etc/ssl/certs
          - name: kube-api-access-qw92t
            readOnly: true
            mountPath: /var/run/secrets/kubernetes.io/serviceaccount

          dpa

          apiVersion: oadp.openshift.io/v1alpha1
          kind: DataProtectionApplication
          metadata:
            creationTimestamp: '2023-10-12T04:30:01Z'
            generation: 1
          ...
            name: ts-dpa
            namespace: openshift-adp
            resourceVersion: '801415'
            uid: 016b2695-6fe1-49fa-8633-8ddff760895a
          spec:
            backupLocations:
              - velero:
                  config:
                    insecureSkipTLSVerify: 'true'
                    profile: default
                    region: minio
                    s3ForcePathStyle: 'true'
                    s3Url: 'http://10.0.188.30:9000'
                  credential:
                    key: cloud
                    name: cloud-credentials
                  default: true
                  objectStorage:
                    bucket: amos-28aug2023
                    prefix: velero-e2e-018de116-68b8-11ee-910a-0c9a3c9340c2
                  provider: aws
            configuration:
              nodeAgent:
                enable: true
                podConfig:
                  resourceAllocations:
                    requests:
                      cpu: 100m
                      memory: 64Mi
                uploaderType: kopia
              velero:
                defaultPlugins:
                  - openshift
                  - aws
                  - kubevirt
                  - csi
                podConfig:
                  resourceAllocations:
                    requests:
                      cpu: 100m
                      memory: 64Mi
            podDnsConfig: {}
            snapshotLocations: []
          status:
            conditions:
              - lastTransitionTime: '2023-10-12T04:30:01Z'
                message: Reconcile complete
                reason: Complete
                status: 'True'
                type: Reconciled
          

          backup

          apiVersion: velero.io/v1
          kind: Backup
          metadata:
          ...
            namespace: openshift-adp
            labels:
              velero.io/storage-location: ts-dpa-1
          spec:
            csiSnapshotTimeout: 10m0s
            defaultVolumesToFsBackup: false
            includedNamespaces:
              - default
            itemOperationTimeout: 4h0m0s
            snapshotMoveData: true
            storageLocation: ts-dpa-1
            ttl: 720h0m0s
          status:
            formatVersion: 1.1.0
            backupItemOperationsCompleted: 1
            backupItemOperationsAttempted: 1
            progress:
              itemsBackedUp: 128
              totalItems: 128
            expiration: '2023-11-11T05:02:46Z'
            startTimestamp: '2023-10-12T05:02:46Z'
            version: 1
            completionTimestamp: '2023-10-12T05:03:59Z'
            phase: Completed
          

          restore

          apiVersion: velero.io/v1
          kind: Restore
          metadata:
            name: restore
            namespace: openshift-adp
            resourceVersion: '858559'
            uid: 3e0cc460-86de-4e90-b364-c8ea0d688451
          spec:
            backupName: backup
            excludedResources:
              - nodes
              - events
              - events.events.k8s.io
              - backups.velero.io
              - restores.velero.io
              - resticrepositories.velero.io
              - csinodes.storage.k8s.io
              - volumeattachments.storage.k8s.io
              - backuprepositories.velero.io
            itemOperationTimeout: 4h0m0s
          status:
            completionTimestamp: '2023-10-12T05:21:29Z'
            phase: Completed
            progress:
              itemsRestored: 66
              totalItems: 66
            restoreItemOperationsAttempted: 1
            restoreItemOperationsCompleted: 1
            startTimestamp: '2023-10-12T05:19:31Z'
            warnings: 17
          

          PVC after restore

          kind: PersistentVolumeClaim
          apiVersion: v1
          metadata:
            ...
            name: centos7-visible-crab
            namespace: default
            ownerReferences:
              - apiVersion: cdi.kubevirt.io/v1beta1
                kind: DataVolume
                name: centos7-visible-crab
                uid: f98bc037-6988-41a2-bd18-8b8762f76031
                controller: true
                blockOwnerDeletion: true
            finalizers:
              - kubernetes.io/pvc-protection
            labels:
              app: containerized-data-importer
              velero.io/volume-snapshot-name: velero-centos7-visible-crab-4xk8q
              app.kubernetes.io/part-of: hyperconverged-cluster
              app.kubernetes.io/version: 4.14.0
              velero.io/restore-name: restore
              app.kubernetes.io/component: storage
              app.kubernetes.io/managed-by: cdi-controller
              velero.io/backup-name: backup
              kubevirt.io/created-by: 23789748-208a-4032-95cb-c7a0e9af0a48
          spec:
            accessModes:
              - ReadWriteMany
            selector:
              matchLabels:
                velero.io/dynamic-pv-restore: default.centos7-visible-crab.bkfr9
            resources:
              requests:
                storage: '10737418240'
            volumeName: pvc-0ccba12e-25b2-4c91-8674-822c71f7136d
            storageClassName: ocs-storagecluster-ceph-rbd
            volumeMode: Block   # <-- restored in Block mode
          status:
            phase: Bound
            accessModes:
              - ReadWriteMany
            capacity:
              storage: 10Gi
          


          Amos Mastbaum added a comment - edited

          Verified with 1.3.0-117.


          Amos Mastbaum added a comment - edited

          Yes, it does.

          wnstb

          TC: https://polarion.engineering.redhat.com/polarion/#/project/OADP/workitem?id=OADP-415 (WIP, but can be reviewed)
          I think we will close this once we are happy with the coverage.


          Wes Hayutin added a comment -

          doh, amastbau. Hope it works.

