RHEL-8280: [RFE] Enable discard support for the pvmove command when "issue_discards = 1" is set in /etc/lvm/lvm.conf

    • rhel-sst-logical-storage
    • ssg_filesystems_storage_and_HA

      Description of problem:

      Discard I/O should be sent to the source PV (if the device supports discards) to reclaim space after a successful pvmove, provided "issue_discards = 1" is set in /etc/lvm/lvm.conf.
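
      For reference, the setting in question lives in the devices section of /etc/lvm/lvm.conf; the excerpt below is a sketch of the relevant stanza:

          devices {
              # Send discards to a PV when LVM releases space on it, e.g. via
              # lvremove or lvreduce. This RFE asks pvmove to honor the same
              # setting for the extents it frees on the source PV.
              issue_discards = 1
          }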

      Version-Release number of selected component (if applicable):

      RHEL-8
      lvm2

      How reproducible:

      Always

      Steps to Reproduce:

      1. Run pvmove from one PV to another PV backed by a thinly provisioned device, such as a VMDK file (see the sketch below).
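
      A minimal reproduction sketch; the device names (/dev/sdb as source, /dev/sdc as destination) and the VG/LV names are hypothetical, and the destination PV is assumed to sit on a thinly provisioned virtual disk:

          # Hypothetical layout: /dev/sdb = source PV, /dev/sdc = destination
          # PV backed by a thinly provisioned VMDK.
          vgcreate vg0 /dev/sdb /dev/sdc
          lvcreate -n lv0 -L 10G vg0 /dev/sdb   # allocate the LV on the source PV
          mkfs.xfs /dev/vg0/lv0                 # put some real data on the LV
          pvmove /dev/sdb /dev/sdc              # evacuate the source PV
          # Expected with issue_discards = 1: the extents freed on /dev/sdb are
          # discarded. Observed: no discard is sent, so the thin-provisioned
          # backing store stays fully allocated.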

      Actual results:

      Unused space is not reclaimed on the source device.

      Expected results:

      Discard I/O is issued and space is reclaimed on the source device.

      Additional info:

      Asked the customer to try the blkdiscard command on the source PV to reclaim space, but it seemed to remove all the data from the PV.

      Reply in the customer's own words:

      "blkdiscard does reclaim the space, but also removes all data from the PV. This doesn't really help because if the PV gets erased then we can just remove the lun from VMware and get the space back anyway. It would of been advantageous if pvmove issued a discard, just like lvremove does when moving specific LVS and not evacuating the whole PV. My main query is what LVM commands do support issuing discards. Do LVM discards only apply to the logical volume commands, such as lvremove and lvreduce?"

