OpenShift Bugs / OCPBUGS-9081

Destroy OCP takes a huge amount of time with a bigger Swift container


Details

    • -
    • Moderate
    • ShiftStack Sprint 233, ShiftStack Sprint 234, ShiftStack Sprint 235, ShiftStack Sprint 236
    • 4
    • Unspecified
    • In the deprovision phase, the Installer deletes all Object Storage containers linked to the OCP cluster. In order to delete the containers, all objects in them must first be deleted. Before this patch, bulk object deletion was initiated sequentially, in batches of 50 objects. With this change, bulk object deletion is executed with a maximum of 3 concurrent calls to the OpenStack API, each deleting an unbounded number of objects based on the default pagination of the object listing endpoint (which amounts to 10,000 objects under default OpenStack settings). A Go sketch of this strategy follows this list.
    • Bug Fix
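
      For illustration, here is a minimal Go sketch of the strategy described in that release note. It is not the Installer's actual code: the helper name emptyContainer and the pre-authenticated *gophercloud.ServiceClient are assumptions for the example. It lists object names one page at a time (a page holds up to the server's default limit, 10,000 objects on stock OpenStack) and issues bulk deletes through gophercloud's Object Storage bindings with at most 3 concurrent API calls.

      package swiftclean

      import (
          "sync"

          "github.com/gophercloud/gophercloud"
          "github.com/gophercloud/gophercloud/openstack/objectstorage/v1/objects"
          "github.com/gophercloud/gophercloud/pagination"
      )

      const maxConcurrentDeletes = 3

      // emptyContainer deletes every object in the container. One listing page
      // (up to the server's default page size) becomes one bulk-delete call,
      // and at most maxConcurrentDeletes calls run in parallel.
      func emptyContainer(client *gophercloud.ServiceClient, container string) error {
          sem := make(chan struct{}, maxConcurrentDeletes) // concurrency limiter
          var wg sync.WaitGroup

          err := objects.List(client, container, nil).EachPage(
              func(page pagination.Page) (bool, error) {
                  names, err := objects.ExtractNames(page)
                  if err != nil {
                      return false, err
                  }
                  if len(names) == 0 {
                      return false, nil // container is empty; stop paging
                  }
                  wg.Add(1)
                  sem <- struct{}{} // blocks while 3 deletes are in flight
                  go func(names []string) {
                      defer wg.Done()
                      defer func() { <-sem }()
                      // The sketch drops errors; real code must collect them.
                      _, _ = objects.BulkDelete(client, container, names).Extract()
                  }(names)
                  return true, nil
              })
          wg.Wait()
          return err
      }

      Because objects are being deleted while the listing is still paging through markers, a production implementation would loop until a fresh listing comes back empty rather than trusting a single pass.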

    Description

      Version:

      $ openshift-install version

      ./openshift-install 4.9.11
      built from commit 4ee186bb88bf6aeef8ccffd0b5d4e98e9ddd895f
      release image quay.io/openshift-release-dev/ocp-release@sha256:0f72e150329db15279a1aeda1286c9495258a4892bc5bf1bf5bb89942cd432de
      release architecture amd64

      Platform: OpenStack

      install type: IPI

      What happened?

      Image streams use the Swift container to store their images. After running many image streams, the container accumulates a huge number of objects; if the cluster is destroyed at that point, the destroy takes a very long time, proportional to the size of the Swift container.

      What did you expect to happen?

      The destroy command should clean up the cluster's resources in a reasonable amount of time.

      How to reproduce it (as minimally and precisely as possible)?

      Deploy OCP, run a workload that creates a lot of image streams (for example, repeated builds or image imports, whose layers the internal registry stores as objects in the Swift container), then destroy the cluster. The destroy command will take a very long time to complete.

      Anything else we need to know?

      Here is the output of the swift stat command and the time it took to complete the destroy job:

      $ swift stat vlan609-26jxm-image-registry-nseyclolgfgxoaiysrlejlhvoklcawbxt
      Account: AUTH_2b4d979a2a9e4cf88b2509e9c5e0e232
      Container: vlan609-26jxm-image-registry-nseyclolgfgxoaiysrlejlhvoklcawbxt
      Objects: 723756
      Bytes: 652448740473
      Read ACL:
      Write ACL:
      Sync To:
      Sync Key:
      Meta Name: vlan609-26jxm-image-registry-nseyclolgfgxoaiysrlejlhvoklcawbxt
      Meta Openshiftclusterid: vlan609-26jxm
      Content-Type: application/json; charset=utf-8
      X-Timestamp: 1640248399.77606
      Last-Modified: Thu, 23 Dec 2021 08:34:48 GMT
      Accept-Ranges: bytes
      X-Storage-Policy: Policy-0
      X-Trans-Id: txb0717d5198e344a5a095d-0061c93b70
      X-Openstack-Request-Id: txb0717d5198e344a5a095d-0061c93b70

      Time taken to complete the destroy: 6455.42s
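
      As a back-of-the-envelope check (assuming the old sequential behaviour): 723756 objects at 50 objects per bulk-delete call means roughly 14476 sequential API calls, so the observed 6455 s works out to about 0.45 s per call; the runtime is dominated by sequential round trips. With listing pages of about 10000 objects, the same container needs only around 73 bulk-delete calls, issued up to 3 at a time.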

      Attachments

        Activity

          People

            pprinett@redhat.com Pierre Prinetti
            mkaliyam@redhat.com Kaliyamoorthy Masco
            Itshak Brown Itshak Brown
            Votes: 0
            Watchers: 10

            Dates

              Created:
              Updated: