OSPRH-3721: TRAC Blocker: BZ#2031741 Implement Glance support for clone v2 deferred deletion in RBD driver

    • Targeted
    • OSPRH-5024 - Ceph RBD clone optimisations
    • Storage; Glance

      What is the probability and severity of the issue, i.e. the overall risk?

      This issue prevents users from deleting images when Ceph is used as a backend. Users are quite likely to hit it, since it arises whenever an image they try to delete still has leftover copy-on-write (COW) clones (see the sketch below).

      This behavior is turned on by default.
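
      As a minimal sketch of the problem and the feature (the pool name "images" and the image UUID are illustrative placeholders, not values from this issue), deletion of the backing RBD image is blocked while clones exist, and clone v2 deferred deletion moves the image to the RBD trash instead:

          # Glance stores an image as images/<image-uuid> with a snapshot
          # named "snap" that Cinder and Nova create COW clones from.
          rbd snap unprotect images/<image-uuid>@snap   # refused while child clones exist
          rbd rm images/<image-uuid>                    # likewise refused

          # With clone v2 (all clients at Ceph Mimic or later), the image
          # can instead be moved to the trash and reclaimed once its clones
          # are flattened or removed:
          rbd trash mv images/<image-uuid>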

      Does this affect specific configurations, hardware, environmental factors, etc.?

      This only affects deployments that run Ceph.

      Are any partners relying on this functionality in order to ship an ecosystem product?

      Not sure.

      What proportion of our customers could hit this issue?

      Any user on a Ceph backend who tries to delete an image that has COW clones will run into this issue.

      Does this happen for only a specific use case?

      This only happens when deleting an image that still has COW clones, created by either Cinder or Nova (see the example below).
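
      For illustration, the blocking clones can be listed from the image's snapshot; assuming the default RBD pool and naming conventions (not taken from this issue), Cinder volumes and Nova instance disks appear as children:

          rbd children images/<image-uuid>@snap
          # volumes/volume-<volume-uuid>    <- clone created by Cinder
          # vms/<instance-uuid>_disk        <- clone created by Nova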

      What proportion of our CI infrastructure, automation, and test cases does this issue impact?

      Not sure.

      Is this a regression in supported functionality from a previous release?

      No.

      Is there a clear workaround?

      No.

      Is there potential doc impact?

      Yes: documentation is needed for configuring trash purge scheduling on the Ceph backend and for setting the minimum client compatibility version, since clone v2 requires Mimic or later (see the example below).
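
      The configuration in question would look roughly like this (the pool name "images" and the one-day purge interval are illustrative, not prescribed values):

          # Clone v2 requires every client to run Ceph Mimic or later:
          ceph osd set-require-min-compat-client mimic

          # Periodically purge deferred-deleted images from the RBD trash
          # of the Glance pool:
          rbd trash purge schedule add --pool images 1d
          rbd trash purge schedule ls --pool images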

      If this is a UI issue:
      Is the UI still fit for its purpose/goal?

      N/A.

      Does the bug compromise the overall trustworthiness of the UI?

      N/A.

      Overall context and effort – is the overall impact bigger/worse than the bug in isolation? For example, 1 workaround might seem ok, 5 is getting ugly, 20 might be unacceptable (rough numbers).

      Not sure.

            Assignee: Unassigned
            Reporter: Jason Joyce (jjoyce@redhat.com)
            rhos-dfg-storage-squad-glance