Red Hat OpenStack Services on OpenShift
OSPRH-25723

[rhos17.1] Cinder fails to resize if NFS volume has snapshots


    • Type: Bug
    • Resolution: Done
    • Priority: Undefined
    • Fix Version: rhos-17.1.z
    • Component: openstack-cinder
    • Severity: Important

      Steps to reproduce:
      1) Create a new NFS volume with a size of 1 GB. `qemu-img info` reports a virtual size matching the size in Cinder:

      $ qemu-img info -f raw volume-87790836-8b53-416d-8a04-b5ca5bef53ad
      image: volume-87790836-8b53-416d-8a04-b5ca5bef53ad
      file format: raw
      virtual size: 1 GiB (1073741824 bytes)
      disk size: 21.5 MiB

      2) Create a snapshot of the NFS volume.
      3) Resize the NFS volume to 2 GB.
      4) The volume list shows the new size of 2 GB, but the active NFS image still has a virtual size of 1 GiB (it should have been updated to 2 GiB as well):

      $ qemu-img info -f qcow2 volume-87790836-8b53-416d-8a04-b5ca5bef53ad.55514be7-e1b0-408e-b747-a612c22d2700
      image: volume-87790836-8b53-416d-8a04-b5ca5bef53ad.55514be7-e1b0-408e-b747-a612c22d2700
      file format: qcow2
      virtual size: 1 GiB (1073741824 bytes)
      disk size: 196 KiB
      cluster_size: 65536
      backing file: volume-87790836-8b53-416d-8a04-b5ca5bef53ad
      backing file format: raw
      Format specific information:
          compat: 1.1
          compression type: zlib
          lazy refcounts: false
          refcount bits: 16
          corrupt: false
          extended l2: false
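      To check programmatically whether the active image's header was actually updated after a resize, the `virtual size` line of the `qemu-img info` output can be parsed. A minimal sketch (the helper function is illustrative only, not part of Cinder; the sample text is taken from the output above):

```python
import re

# Trimmed `qemu-img info` output for the active qcow2 file shown above.
INFO = """\
file format: qcow2
virtual size: 1 GiB (1073741824 bytes)
disk size: 196 KiB
"""

def virtual_size_bytes(info_text):
    """Extract the virtual size in bytes from `qemu-img info` output."""
    m = re.search(r"virtual size:.*\((\d+) bytes\)", info_text)
    if m is None:
        raise ValueError("no virtual size found in qemu-img info output")
    return int(m.group(1))

# After resizing the volume to 2 GB this should report 2147483648;
# here it still reports 1 GiB, demonstrating the bug.
print(virtual_size_bytes(INFO))  # → 1073741824
```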

      The Cinder logs show that the operation completed successfully, though the NFS image was not properly updated:

      Mar 20 19:39:07 fesilva-devstack1 cinder-volume[261353]: INFO cinder.volume.drivers.nfs [req-b21ae6df-3c4f-4497-9649-ae49d2bd4e36 req-a8acb903-1a77-446c-aa81-e671a64515fb admin None] Resizing file to 2G...
      Mar 20 19:39:07 fesilva-devstack1 cinder-volume[261353]: DEBUG oslo_concurrency.processutils [req-b21ae6df-3c4f-4497-9649-ae49d2bd4e36 req-a8acb903-1a77-446c-aa81-e671a64515fb admin None] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf qemu-img resize -f raw /opt/stack/data/cinder/mnt/896fb15da6036b68a917322e72ebfe57/volume-87790836-8b53-416d-8a04-b5ca5bef53ad.55514be7-e1b0-408e-b747-a612c22d2700 2G (pid=261353) execute /opt/stack/data/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:349
      Mar 20 19:39:07 fesilva-devstack1 sudo[271028]: stack : PWD=/ ; USER=root ; COMMAND=/opt/stack/data/venv/bin/cinder-rootwrap /etc/cinder/rootwrap.conf qemu-img resize -f raw /opt/stack/data/cinder/mnt/896fb15da6036b68a917322e72ebfe57/volume-87790836-8b53-416d-8a04-b5ca5bef53ad.55514be7-e1b0-408e-b747-a612c22d2700 2G
      Mar 20 19:39:07 fesilva-devstack1 sudo[271028]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1001)
      Mar 20 19:39:07 fesilva-devstack1 sudo[271028]: pam_unix(sudo:session): session closed for user root
      Mar 20 19:39:07 fesilva-devstack1 cinder-volume[261353]: DEBUG oslo_concurrency.processutils [req-b21ae6df-3c4f-4497-9649-ae49d2bd4e36 req-a8acb903-1a77-446c-aa81-e671a64515fb admin None] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf qemu-img resize -f raw /opt/stack/data/cinder/mnt/896fb15da6036b68a917322e72ebfe57/volume-87790836-8b53-416d-8a04-b5ca5bef53ad.55514be7-e1b0-408e-b747-a612c22d2700 2G" returned: 0 in 0.282s (pid=261353) execute /opt/stack/data/venv/lib/python3.10/site-packages/oslo_concurrency/processutils.py:372
      Mar 20 19:39:07 fesilva-devstack1 cinder-volume[261353]: INFO cinder.volume.manager [req-b21ae6df-3c4f-4497-9649-ae49d2bd4e36 req-a8acb903-1a77-446c-aa81-e671a64515fb admin None] Extend volume completed successfully.
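      Note that in the log above, `qemu-img resize` is invoked with `-f raw` even though the file being resized (the `.55514be7-…` snapshot overlay) is qcow2, which would explain why the overlay's virtual size is never rewritten. A hedged sketch of what the corrected command construction could look like (the function and parameter names are hypothetical, not the actual Cinder NFS driver code):

```python
def build_resize_cmd(active_file, active_format, new_size_gb):
    """Build a qemu-img resize command, passing the format of the
    ACTIVE image (qcow2 when snapshots exist) rather than the base
    volume's raw format, so the overlay's virtual size is updated."""
    return ["qemu-img", "resize", "-f", active_format,
            active_file, "%dG" % new_size_gb]

# With snapshots present, the active file is the qcow2 overlay:
cmd = build_resize_cmd(
    "volume-87790836-8b53-416d-8a04-b5ca5bef53ad."
    "55514be7-e1b0-408e-b747-a612c22d2700",
    "qcow2", 2)
print(" ".join(cmd))
```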

      Expected result:
      The active image should have a virtual size of 2 GB, corresponding to the size in Cinder's database.

              Assignee: rh-ee-fesilva Fernando Silva
              Reporter: rh-ee-fesilva Fernando Silva
              Group: rhos-storage-cinder