OCPBUGS-9081: Destroying an OCP cluster takes a long time when the Swift container is large

    • Severity: Moderate
    • Sprint: ShiftStack Sprint 233, ShiftStack Sprint 234, ShiftStack Sprint 235, ShiftStack Sprint 236
    • Story Points: 4
    • Release Note Text: Previously, for clusters that run on {rh-openstack}, the installer deleted Object storage containers sequentially during the deprovisioning phase of installation. This behavior caused slow and inefficient deletion of objects, especially with large containers. This problem occurred in part because image streams that use Swift containers accumulated objects over time. Now, bulk object deletion occurs concurrently with up to 3 calls to the {rh-openstack} API, improving efficiency by handling a higher object count per call. This optimization speeds up resource cleanup during deprovisioning. (link:https://issues.redhat.com/browse/OCPBUGS-9081[*OCPBUGS-9081*]) A command-line sketch of this bulk-deletion pattern follows this list.
    • Release Note Type: Bug Fix
    • Release Note Status: Done
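
      The fix itself lives in the installer, but the pattern the release note describes, batched bulk deletes issued concurrently, can be sketched directly against the Swift API. The following is a minimal illustration, not the installer's actual code: it assumes the cloud enables Swift's bulk middleware (the ?bulk-delete query parameter) and that OS_STORAGE_URL and OS_AUTH_TOKEN are exported, for example from the output of swift auth; the container name is the one reported below.

      # Illustrative sketch only, not the installer's implementation.
      CONTAINER=vlan609-26jxm-image-registry-nseyclolgfgxoaiysrlejlhvoklcawbxt

      # List every object, prefix each name with its container, and split the
      # listing into batches of 10000 paths (Swift's default
      # max_deletes_per_request). Object names containing special characters
      # would need URL-encoding first.
      swift list "$CONTAINER" | sed "s|^|/$CONTAINER/|" | split -l 10000 - batch-

      # Issue up to 3 bulk-delete requests at a time; each request removes a
      # whole batch rather than a single object, which is the gain the release
      # note describes.
      ls batch-* | xargs -P 3 -I {} curl -s -X DELETE \
          "${OS_STORAGE_URL}?bulk-delete" \
          -H "X-Auth-Token: ${OS_AUTH_TOKEN}" \
          -H "Content-Type: text/plain" \
          --data-binary @{}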

      Version:

      $ openshift-install version

      ./openshift-install 4.9.11
      built from commit 4ee186bb88bf6aeef8ccffd0b5d4e98e9ddd895f
      release image quay.io/openshift-release-dev/ocp-release@sha256:0f72e150329db15279a1aeda1286c9495258a4892bc5bf1bf5bb89942cd432de
      release architecture amd64

      Platform: OpenStack

      Install type: IPI

      What happened?

      Image streams use the Swift container to store their images. After running many image streams, the Swift container holds a huge number of objects, and destroying the cluster at that point takes a very long time, proportional to the size of the container.

      What did you expect to happen?

      The destroy command should clean up the resources in a reasonable amount of time.

      How to reproduce it (as minimally and precisely as possible)?

      Deploy OCP, run a workload that creates a lot of image streams (sketched below), then destroy the cluster. The destroy command takes a long time to complete.
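
      A hypothetical load generator, not part of the original report (the repository URL, name prefix, and loop count are placeholders): each build pushes its output image to the internal registry, whose storage backend on OpenStack is the cluster's Swift container, so objects accumulate with every build.

      # Hypothetical workload: each completed build pushes fresh layers
      # into the internal registry, which stores them in Swift.
      for i in $(seq 1 100); do
          oc new-build https://github.com/sclorg/django-ex --name "stress-${i}"
      done

      # Then time the teardown once the builds have finished:
      time ./openshift-install destroy cluster --dir <installation_directory>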

      Anything else we need to know?

      Here is the output of the swift stat command and the time the destroy job took to complete:

      $ swift stat vlan609-26jxm-image-registry-nseyclolgfgxoaiysrlejlhvoklcawbxt
      Account: AUTH_2b4d979a2a9e4cf88b2509e9c5e0e232
      Container: vlan609-26jxm-image-registry-nseyclolgfgxoaiysrlejlhvoklcawbxt
      Objects: 723756
      Bytes: 652448740473
      Read ACL:
      Write ACL:
      Sync To:
      Sync Key:
      Meta Name: vlan609-26jxm-image-registry-nseyclolgfgxoaiysrlejlhvoklcawbxt
      Meta Openshiftclusterid: vlan609-26jxm
      Content-Type: application/json; charset=utf-8
      X-Timestamp: 1640248399.77606
      Last-Modified: Thu, 23 Dec 2021 08:34:48 GMT
      Accept-Ranges: bytes
      X-Storage-Policy: Policy-0
      X-Trans-Id: txb0717d5198e344a5a095d-0061c93b70
      X-Openstack-Request-Id: txb0717d5198e344a5a095d-0061c93b70

      Time taken to complete the destroy: 6455.42s
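
      For scale: 723,756 objects removed in 6,455 seconds is roughly 112 objects per second, which is consistent with one sequential DELETE round-trip per object; the container held about 652 GB of registry data at the time.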

              Assignee: Pierre Prinetti (pprinett@redhat.com)
              Reporter: Kaliyamoorthy Masco (mkaliyam@redhat.com)
              QA Contact: Itshak Brown