OpenShift Virtualization / CNV-18306

[2086825] VM restore PVC uses exact source PVC request size


    Sprint: Storage Core Sprint 220, Storage Core Sprint 221
    Priority: Urgent

      Description of problem:
      VM restore creates the restored PVC with exactly the source PVC's requested size (spec.resources.requests.storage), ignoring that status.capacity.storage (and therefore volumesnapshot.status.restoreSize) may be larger after provisioner round-up.
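In other words, the restore controller should request at least the snapshot's restoreSize rather than the raw spec value. A minimal sketch of that selection logic (hypothetical helper, not the actual KubeVirt code):

```python
def restore_request_size(source_spec_request, restore_size):
    """Size (in bytes) to request for the restored PVC.

    Never request less than the snapshot's restoreSize, otherwise a CSI
    driver (e.g. Ceph RBD) may refuse to provision the clone.
    restore_size may be None when the driver does not report it.
    """
    if restore_size is None:
        return source_spec_request
    return max(source_spec_request, restore_size)

# With the sizes from this report (7.5Gi request, 8Gi restoreSize):
print(restore_request_size(8053063680, 8589934592))  # 8589934592
```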

      Version-Release number of selected component (if applicable):
      CNV 4.11.0

      How reproducible:
      100%

      Steps to Reproduce:
      1. Attempt to restore a VM with a PVC whose spec.resources.requests.storage != status.capacity.storage (manifests below)

      Actual results:
      The Ceph CSI driver rejects a clone whose requested size is smaller than the source snapshot's size, so the restored PVC stays Pending:
      Warning ProvisioningFailed 116s (x11 over 9m24s) openshift-storage.rbd.csi.ceph.com_csi-rbdplugin-provisioner-7d96f8b4d5-s6lzj_dc0077ff-a0f2-4034-ae60-c6e15766ee54 failed to provision volume with StorageClass "ocs-storagecluster-ceph-rbd": error getting handle for DataSource Type VolumeSnapshot by Name vmsnapshot-74690574-f6a4-457c-8682-c830f74d5e0b-volume-dv-disk: requested volume size 8053063680 is less than the size 8589934592 for the source snapshot vmsnapshot-74690574-f6a4-457c-8682-c830f74d5e0b-volume-dv-disk
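The two sizes in the event line correspond to the manifests below: the 7.5Gi request is 8053063680 bytes, while the provisioner rounds the actual volume (and hence the snapshot's restoreSize) up to 8Gi, i.e. 8589934592 bytes. Quick arithmetic check (illustrative only, not part of the report):

```python
GI = 1024 ** 3  # 1 GiB in bytes

requested = int(7.5 * GI)  # PVC spec.resources.requests.storage: 7.5Gi
rounded = 8 * GI           # status.capacity / snapshot restoreSize: 8Gi

print(requested, rounded)  # 8053063680 8589934592
```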

      Expected results:
      Success

      Additional info:
      Reproduced on OCS with a 7.5Gi request, which the provisioner rounds up; in the PVC below, spec shows the normalized request (7680Mi) while status.capacity shows the rounded-up 8Gi:
      [akalenyu@localhost manifests]$ oc get pvc simple-dv -o yaml | grep storage:
      storage: 7680Mi
      storage: 8Gi

      [akalenyu@localhost manifests]$ cat dv.yaml
      apiVersion: cdi.kubevirt.io/v1beta1
      kind: DataVolume
      metadata:
        name: simple-dv
        namespace: akalenyu
      spec:
        source:
          http:
            url: "http://.../Fedora-Cloud-Base-34-1.2.x86_64.qcow2"
        pvc:
          storageClassName: ocs-storagecluster-ceph-rbd
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 7.5Gi
      [akalenyu@localhost manifests]$ cat vm.yaml
      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
        name: simple-vm
      spec:
        running: true
        template:
          metadata:
            labels: {kubevirt.io/domain: simple-vm, kubevirt.io/vm: simple-vm}
          spec:
            domain:
              devices:
                disks:
                  - disk: {bus: virtio}
                    name: dv-disk
                  - disk: {bus: virtio}
                    name: cloudinitdisk
              resources:
                requests: {memory: 2048M}
            volumes:
              - dataVolume: {name: simple-dv}
                name: dv-disk
              - cloudInitNoCloud:
                  userData: |
                    #cloud-config
                    password: fedora
                    chpasswd: { expire: False }
                name: cloudinitdisk
      [akalenyu@localhost manifests]$ cat snapshot.yaml
      apiVersion: snapshot.kubevirt.io/v1alpha1
      kind: VirtualMachineSnapshot
      metadata:
        name: snap-simple-vm
      spec:
        source:
          apiGroup: kubevirt.io
          kind: VirtualMachine
          name: simple-vm
      [akalenyu@localhost manifests]$ cat restore.yaml
      apiVersion: snapshot.kubevirt.io/v1alpha1
      kind: VirtualMachineRestore
      metadata:
        name: restore-simple-vm
      spec:
        target:
          apiGroup: kubevirt.io
          kind: VirtualMachine
          name: simple-vm
        virtualMachineSnapshotName: snap-simple-vm

            skagan@redhat.com Shelly Kagan
            akalenyu Alex Kalenyuk
            Yan Du