Bug
Resolution: Done-Errata
Minor
4.9
Moderate
None
ShiftStack Sprint 233, ShiftStack Sprint 234, ShiftStack Sprint 235, ShiftStack Sprint 236
4
Unspecified
Bug Fix
Done
Version:
$ openshift-install version
./openshift-install 4.9.11
built from commit 4ee186bb88bf6aeef8ccffd0b5d4e98e9ddd895f
release image quay.io/openshift-release-dev/ocp-release@sha256:0f72e150329db15279a1aeda1286c9495258a4892bc5bf1bf5bb89942cd432de
release architecture amd64
Platform: OpenStack
Install type: IPI
What happened?
The image registry uses a Swift container to store images. After running many image streams, the Swift container accumulates a huge number of objects. If I destroy the cluster at that point, the destroy takes a very long time, proportional to the size of the Swift container.
What did you expect to happen?
The destroy command should clean up the resources in a reasonable amount of time.
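For comparison, the Swift container can be drained manually with client-side concurrency before running the destroy. This is a hypothetical workaround sketch using python-swiftclient's bulk delete (not part of the installer's own cleanup code); the container name is taken from the stat output further down:

```shell
# Hypothetical manual cleanup with python-swiftclient.
# --object-threads parallelizes object deletions on the client side.
swift delete vlan609-26jxm-image-registry-nseyclolgfgxoaiysrlejlhvoklcawbxt \
    --object-threads 20
```

This requires valid OpenStack credentials in the environment and only empties the registry container; the cluster destroy still has to be run afterwards.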
How to reproduce it (as minimally and precisely as possible)?
Deploy OCP, run a workload that creates a lot of image streams, then destroy the cluster; the destroy command will take a long time to complete.
Anything else we need to know?
Here is the output of the swift stat command and the time it took to complete the destroy job:
$ swift stat vlan609-26jxm-image-registry-nseyclolgfgxoaiysrlejlhvoklcawbxt
Account: AUTH_2b4d979a2a9e4cf88b2509e9c5e0e232
Container: vlan609-26jxm-image-registry-nseyclolgfgxoaiysrlejlhvoklcawbxt
Objects: 723756
Bytes: 652448740473
Read ACL:
Write ACL:
Sync To:
Sync Key:
Meta Name: vlan609-26jxm-image-registry-nseyclolgfgxoaiysrlejlhvoklcawbxt
Meta Openshiftclusterid: vlan609-26jxm
Content-Type: application/json; charset=utf-8
X-Timestamp: 1640248399.77606
Last-Modified: Thu, 23 Dec 2021 08:34:48 GMT
Accept-Ranges: bytes
X-Storage-Policy: Policy-0
X-Trans-Id: txb0717d5198e344a5a095d-0061c93b70
X-Openstack-Request-Id: txb0717d5198e344a5a095d-0061c93b70
Time taken to complete the destroy: 6455.42s
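From the stat output and the destroy time above, a rough effective deletion rate follows. This is a back-of-the-envelope calculation from the reported numbers, not a measurement of the installer itself:

```python
# Effective deletion rate derived from the reported figures.
objects = 723_756        # object count from the swift stat output
duration_s = 6455.42     # total destroy time in seconds
rate = objects / duration_s
print(f"~{rate:.0f} objects deleted per second")  # → ~112 objects deleted per second
```

At roughly 112 objects per second, a container several times this size would take many hours to drain.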
Links to:
RHEA-2023:5006 rpm