OpenShift Virtualization / CNV-49353

Filesystem overhead is not considered while creating scratch PVC


    • Type: Bug
    • Resolution: Done-Errata
    • Priority: Major
    • Fix Version: CNV v4.15.7
    • Component: Storage Platform
    • Fixed in Build: CNV v4.99.0.rhel9-1121, CNV v4.16.2.rhel9-150, CNV v4.15.6.rhel9-108
    • Sprint: Storage Core Sprint 262
    • Severity: High

      Description of problem:

      While migrating a VM from one OCP cluster to another using MTV, the scratch space is created without taking filesystem overhead into account. When the source VM disk is on block storage, the importer downloads the entire raw device (every block, not just the allocated data), so the scratch filesystem fills up and the migration fails with "no space left on device".

      The source disk is on a 30 GiB block-mode PVC:

       

      rhel9-increased-bison                                   Bound         pvc-11f75641-4edb-48b2-a939-093f5ef322b3   30Gi       RWX            ocs-external-storagecluster-ceph-rbd   6d23h

       

      MTV created a DataVolume with the spec below:

       

      # oc get dv bison-a54871db-4ebe-40c5-b4a6-f4066a4ff618-5crkc -o yaml|yq '.spec'
      source:
        http:
          certConfigMap: bison-a54871db-4ebe-40c5-b4a6-f4066a4ff618-wddvq
          secretExtraHeaders:
            - bison-a54871db-4ebe-40c5-b4a6-f4066a4ff618-mpn2l
          url: https://virt-exportproxy-openshift-cnv.apps.aries.tt.testing/api/export.kubevirt.io/v1alpha1/namespaces/nijin-cnv/virtualmachineexports/rhel9-increased-bison/volumes/rhel9-increased-bison/disk.img.gz
      storage:
        resources:
          requests:
            storage: "32212254720"
        storageClassName: ocs-external-storagecluster-ceph-rbd
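
      For comparison, CDI's documented default filesystem overhead is 0.055 (5.5%). A request sized conservatively to leave room for that overhead would need roughly 32212254720 / (1 - 0.055) ≈ 34087042032 bytes (~31.75 GiB); a hypothetical overhead-aware spec would look like:

      storage:
        resources:
          requests:
            storage: "34087042032"   # 32212254720 / (1 - 0.055); illustrative value, not what MTV created
        storageClassName: ocs-external-storagecluster-ceph-rbd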

       

      The scratch PVC on the destination was created with the same 30 Gi, with no overhead added:

       

      prime-0944b362-555f-47da-b823-accd2b0e5a32-scratch   Bound     pvc-8edadf63-0183-4ab0-ba6e-8594d3fb96c5   30Gi       RWO            ocs-external-storagecluster-ceph-rbd   52m

       

      Because of the ext4 filesystem overhead, the usable capacity of the scratch volume is only around 29.36 GiB:

       

      Filesystem     1K-blocks     Used Available Use% Mounted on
      /dev/rbd1       30787492 21242400   9528708  70% /scratch
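
      For reference, the capacity math from the df output and the DV spec above:

      30787492 1K-blocks × 1024 = 31526391808 bytes ≈ 29.36 GiB total capacity
      image size 32212254720 bytes = 31457280 KiB > 30787492 KiB total

      So even a completely empty scratch filesystem cannot hold the full image.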
      

      So the image download fails at the final stage with "no space left on device":

       

      I0724 11:56:03.022607       1 importer.go:103] Starting importer
      I0724 11:56:03.023885       1 importer.go:176] begin import process
      I0724 11:56:03.036407       1 http-datasource.go:240] Attempting to get certs from /certs/ca.pem
      E0724 11:56:03.055880       1 http-datasource.go:406] http: expected status code 200, got 400
      I0724 11:56:03.065280       1 data-processor.go:360] Calculating available size
      I0724 11:56:03.066364       1 data-processor.go:368] Checking out block volume size.
      I0724 11:56:03.066377       1 data-processor.go:380] Request image size not empty.
      I0724 11:56:03.066384       1 data-processor.go:385] Target size 32212254720.
      I0724 11:56:03.066732       1 data-processor.go:259] New phase: TransferScratch
      I0724 11:56:03.066902       1 util.go:194] Writing data...
      E0724 11:59:38.418447       1 util.go:196] Unable to write file from dataReader: write /scratch/tmpimage: no space left on device
      E0724 11:59:38.815443       1 data-processor.go:255] write /scratch/tmpimage: no space left on device
      unable to write to file
      

      Version-Release number of selected component (if applicable):

      OpenShift Virtualization   4.15.3

      How reproducible:

      100%

      Steps to Reproduce:

      1. Create a VM on the source OCP cluster with its disk on a block-mode PVC.
      2. Using MTV, start a migration of this VM to another cluster, making sure the destination storage class's default volume mode is Block.
      3. Follow the created importer pod with oc logs -f; it fails with "no space left on device". The importer pod keeps restarting itself, so the logs are easy to miss; keep watching with oc logs -f <pod>, as in the example below.
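
      For example (the pod name is illustrative; importer pods for prime PVCs follow the importer-prime-<uid> pattern seen above):

      # oc logs -f importer-prime-0944b362-555f-47da-b823-accd2b0e5a32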

      Actual results:

      Not able to migrate a VM from one OCP cluster to another when the source and target disks are block-mode PVCs.

      Expected results:

      The scratch PVC is sized with the filesystem overhead taken into account and the migration succeeds.
       

      Additional info:
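
      For reference, the overhead factor CDI applies when sizing filesystem-mode PVCs is configurable per cluster or per storage class in the CDI resource (the values below are the documented default); the bug is that scratch PVC creation ignored this factor entirely:

      apiVersion: cdi.kubevirt.io/v1beta1
      kind: CDI
      metadata:
        name: cdi
      spec:
        config:
          filesystemOverhead:
            global: "0.055"
            storageClass:
              ocs-external-storagecluster-ceph-rbd: "0.055"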

       

              Assignee: Alexander Wels (rhn-support-awels)
              Reporter: Nijin Ashok (rhn-support-nashok)
              QA Contact: Kevin Alon Goldblatt
              Votes: 0
              Watchers: 5
