OpenShift Virtualization / CNV-19163

[2097436] Online disk expansion ignores filesystem overhead change


    • Sprint: Storage Core Sprint 223, Storage Core Sprint 224
    • Priority: Medium

      Description of problem:
      After tuning the filesystem overhead for the nfs storage class to 20% in the HCO CR, the PVC created as part of a VM with a template clone is sized correctly, including the overhead. However, online disk expansion ignores the change and uses the default filesystem overhead of 5.5%.
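
      (For reference: 2Gi = 2147483648 bytes, and 2147483648 / (1 - 0.2) = 2684354560 bytes, which is exactly the PVC request shown below. This suggests the import path sizes the PVC as requested size / (1 - overhead); with the default 0.055 the same formula would give roughly 2272469469 bytes.)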

      Version-Release number of selected component (if applicable):
      The error occurred in the following environment:
      --------------------------------------------------------
      oc version
      Client Version: 4.11.0-202206090038.p0.g194e99e.assembly.stream-194e99e
      Kustomize Version: v4.5.4
      Server Version: 4.11.0-fc.0
      Kubernetes Version: v1.24.0+beaaed6
      [cnv-qe-jenkins@stg10-kevin-6v8qf-executor ~]$ oc get csv -n openshift-cnv
      NAME                                       DISPLAY                    VERSION   REPLACES                                   PHASE
      kubevirt-hyperconverged-operator.v4.11.0   OpenShift Virtualization   4.11.0    kubevirt-hyperconverged-operator.v4.10.1   Succeeded

      How reproducible:
      100%

      Steps to Reproduce:
      1. Edit the HCO CR with 'oc edit hco -n openshift-cnv' and set the filesystem overhead for the nfs storage class to 20%:
      filesystemOverhead:
        storageClass:
          nfs: "0.2"
      2. Create a VM requesting a volume size of 2Gi (a reconstructed manifest sketch follows these steps)
      3. Check the storage request with 'oc get pvc cirros-dv4 -oyaml':
      resources:
        requests:
          storage: "2684354560"
      storageClassName: nfs

      The PVC request was correctly sized to include the 20% filesystem overhead
      4. Check the online expansion requested by the VMI with 'oc get vmi vm-cirros-datavolume4 -oyaml':
      filesystemOverhead: "0.055" >>>>>>> THE DEFAULT FILESYSTEM OVERHEAD WAS USED
      requests:
        storage: "2684354560"
      volumeMode: Filesystem
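
      For reference, a VM manifest consistent with the PVC and VMI dumps in Additional info would look roughly like the sketch below. This is a reconstruction, not the exact YAML used; the DataVolume source URL is taken from the PVC annotations:

      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
        name: vm-cirros-datavolume4
        namespace: default
      spec:
        running: true
        dataVolumeTemplates:
        - metadata:
            name: cirros-dv4
          spec:
            source:
              http:
                url: http://cnv-qe-server.rhevdev.lab.eng.rdu2.redhat.com/files/cnv-tests/cirros-images/cirros-0.5.1-x86_64-disk.img
            storage:
              resources:
                requests:
                  storage: 2Gi
              storageClassName: nfs
        template:
          metadata:
            labels:
              kubevirt.io/vm: vm-cirros-datavolume4
          spec:
            domain:
              devices:
                disks:
                - disk:
                    bus: virtio
                  name: datavolumedisk4
              resources:
                requests:
                  memory: 128Mi
            terminationGracePeriodSeconds: 0
            volumes:
            - dataVolume:
                name: cirros-dv4
              name: datavolumedisk4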

      Actual results:
      The default filesystem overhead of 0.055 (5.5%) was used, ignoring the updated value of 20%

      Expected results:
      The updated filesystem overhead of 20% should have been used for the online expansion
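
      (If the expansion path derives the usable disk size as PVC request * (1 - overhead), an assumption inferred from the import-side arithmetic above, the configured value would give 2684354560 * 0.8 = 2147483648 bytes, i.e. the intended 2Gi, while the reported 0.055 gives 2684354560 * 0.945 = approximately 2536715059 bytes.)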

      Additional info:
      HCO----------------------
      oc edit hco -n openshift-cnv
      filesystemOverhead:
        storageClass:
          nfs: "0.2"
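
      For context, this stanza lives under spec in the HyperConverged CR. A fuller, hand-written view for illustration (the global line showing the 0.055 default is an assumption, not taken from this cluster):
      spec:
        filesystemOverhead:
          global: "0.055"
          storageClass:
            nfs: "0.2"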

      PVC---------------------
      oc get pvc cirros-dv4 -oyaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        annotations:
          cdi.kubevirt.io/storage.condition.running: "false"
          cdi.kubevirt.io/storage.condition.running.message: Import Complete
          cdi.kubevirt.io/storage.condition.running.reason: Completed
          cdi.kubevirt.io/storage.contentType: kubevirt
          cdi.kubevirt.io/storage.import.endpoint: http://cnv-qe-server.rhevdev.lab.eng.rdu2.redhat.com/files/cnv-tests/cirros-images/cirros-0.5.1-x86_64-disk.img
          cdi.kubevirt.io/storage.import.importPodName: importer-cirros-dv4
          cdi.kubevirt.io/storage.import.source: http
          cdi.kubevirt.io/storage.pod.phase: Succeeded
          cdi.kubevirt.io/storage.pod.restarts: "0"
          cdi.kubevirt.io/storage.preallocation.requested: "false"
          pv.kubernetes.io/bind-completed: "yes"
          pv.kubernetes.io/bound-by-controller: "yes"
        creationTimestamp: "2022-06-15T15:45:00Z"
        finalizers:
        - kubernetes.io/pvc-protection
        labels:
          alerts.k8s.io/KubePersistentVolumeFillingUp: disabled
          app: containerized-data-importer
          app.kubernetes.io/component: storage
          app.kubernetes.io/managed-by: cdi-controller
          app.kubernetes.io/part-of: hyperconverged-cluster
          app.kubernetes.io/version: 4.11.0
        name: cirros-dv4
        namespace: default
        ownerReferences:
        - apiVersion: cdi.kubevirt.io/v1beta1
          blockOwnerDeletion: true
          controller: true
          kind: DataVolume
          name: cirros-dv4
          uid: c2f5ea59-ff9d-426b-a271-b49a007168d9
        resourceVersion: "4188763"
        uid: 644a9c0b-616d-4450-ad5b-7c399c9bd37c
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: "2684354560"
        storageClassName: nfs
        volumeMode: Filesystem
        volumeName: nfs-pv-08
      status:
        accessModes:
        - ReadWriteMany
        - ReadWriteOnce
        capacity:
          storage: 5Gi
        phase: Bound
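
      As a cross-check (command added for convenience, not part of the original capture), the overhead values CDI resolved from the HCO CR should be visible in the CDIConfig status:
      oc get cdiconfig -oyaml
      and inspecting status.filesystemOverhead.storageClass there should show nfs: "0.2".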

      VMI--------------------------------
      oc get vmi vm-cirros-datavolume4 -oyaml
      apiVersion: kubevirt.io/v1
      kind: VirtualMachineInstance
      metadata:
        annotations:
          kubevirt.io/latest-observed-api-version: v1
          kubevirt.io/storage-observed-api-version: v1alpha3
        creationTimestamp: "2022-06-15T15:56:49Z"
        finalizers:
        - kubevirt.io/virtualMachineControllerFinalize
        - foregroundDeleteVirtualMachine
        generation: 8
        labels:
          kubevirt.io/nodeName: stg10-kevin-6v8qf-worker-0-dkg2q
          kubevirt.io/vm: vm-cirros-datavolume4
        name: vm-cirros-datavolume4
        namespace: default
        ownerReferences:
        - apiVersion: kubevirt.io/v1
          blockOwnerDeletion: true
          controller: true
          kind: VirtualMachine
          name: vm-cirros-datavolume4
          uid: c7f5f992-590e-42bd-b234-0aaa95e414f8
        resourceVersion: "4203188"
        uid: 2ea9cb38-fff6-4ec4-ac95-4b17e2301319
      spec:
        domain:
          cpu:
            cores: 1
            model: host-model
            sockets: 1
            threads: 1
          devices:
            disks:
            - disk:
                bus: virtio
              name: datavolumedisk4
            interfaces:
            - masquerade: {}
              name: default
          features:
            acpi:
              enabled: true
          firmware:
            uuid: 5e29c93e-7ab4-5c76-b86a-7a8b58f279af
          machine:
            type: pc-q35-rhel8.4.0
          resources:
            requests:
              memory: 128Mi
        networks:
        - name: default
          pod: {}
        terminationGracePeriodSeconds: 0
        volumes:
        - dataVolume:
            name: cirros-dv4
          name: datavolumedisk4
      status:
        activePods:
          a82f1f4c-f0d3-4722-a7af-8f42a0b0b534: stg10-kevin-6v8qf-worker-0-dkg2q
        conditions:
        - lastProbeTime: null
          lastTransitionTime: "2022-06-15T15:56:55Z"
          status: "True"
          type: Ready
        - lastProbeTime: null
          lastTransitionTime: null
          message: 'cannot migrate VMI: PVC cirros-dv4 is not shared, live migration
            requires that all PVCs must be shared (using ReadWriteMany access mode)'
          reason: DisksNotLiveMigratable
          status: "False"
          type: LiveMigratable
        guestOSInfo: {}
        interfaces:
        - infoSource: domain
          ipAddress: 10.128.2.62
          ipAddresses:
          - 10.128.2.62
          mac: 52:54:00:38:d1:3f
          name: default
        launcherContainerImageVersion: registry.redhat.io/container-native-virtualization/virt-launcher@sha256:a2e887eb37fc7573a4aaba855f1d6ba64aa6c14f8a2c01b1e8bfd51526c51e99
        migrationMethod: BlockMigration
        migrationTransport: Unix
        nodeName: stg10-kevin-6v8qf-worker-0-dkg2q
        phase: Running
        phaseTransitionTimestamps:
        - phase: Pending
          phaseTransitionTimestamp: "2022-06-15T15:56:49Z"
        - phase: Scheduling
          phaseTransitionTimestamp: "2022-06-15T15:56:50Z"
        - phase: Scheduled
          phaseTransitionTimestamp: "2022-06-15T15:56:56Z"
        - phase: Running
          phaseTransitionTimestamp: "2022-06-15T15:57:03Z"
        qosClass: Burstable
        runtimeUser: 107
        virtualMachineRevisionName: revision-start-vm-c7f5f992-590e-42bd-b234-0aaa95e414f8-2
        volumeStatus:
        - name: datavolumedisk4
          persistentVolumeClaimInfo:
            accessModes:
            - ReadWriteOnce
            capacity:
              storage: 5Gi
            filesystemOverhead: "0.055"
            requests:
              storage: "2684354560"
            volumeMode: Filesystem
          target: vda
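
      A one-liner to pull just the overhead the VMI reports for this volume (added for convenience; assumes the volume sits at index 0 of volumeStatus, as in the dump above):
      oc get vmi vm-cirros-datavolume4 -o jsonpath='{.status.volumeStatus[0].persistentVolumeClaimInfo.filesystemOverhead}'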

            Assignee: rh-ee-alromero Alvaro Romero
            Reporter: kgoldbla Kevin Alon Goldblatt
