Issue Type: Bug
Resolution: Done-Errata
Priority: Critical
Severity: Critical
Affects Versions: Logging 5.5.z, Logging 5.6.z
Release Note Type: Bug Fix
Release Note Status: Proposed
Sprints: Log Collection - Sprint 235, Log Collection - Sprint 238, Log Collection - Sprint 239, Log Collection - Sprint 240, Log Collection - Sprint 241, Log Collection - Sprint 242, Log Collection - Sprint 243
Description of problem:
Disk usage was consistently filling up, and the problem followed one application pod around the environment. du did not account for the consumed space, but lsof showed a large number of deleted files still held open by Vector:
vector 3430171 root 163r REG 8,4 105040954 1040189142 /var/log/pods/example-dev_example-cmd-linux-2_a9a87c45-ecad-49af-bdb7-3877273e5b95/example-cmd-linux-pod/0.log.20230403-205041 (deleted)
Deleting the collector pod (or killing the vector process) releases the file handles, the deleted files are fully removed, and the space is reclaimed.
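For anyone hitting the same symptom, a minimal diagnostic/workaround sketch (not part of the original report) is below. It assumes shell access on the affected node plus the oc CLI; the openshift-logging namespace and the component=collector label are the usual defaults for the cluster-logging collector DaemonSet but may differ in a given deployment, and <node> is a placeholder.

# List deleted files (link count 0) still held open by the vector process
lsof -nP +L1 -c vector | grep '/var/log/pods'
# Sum the bytes still pinned by those deleted files (column 7 is SIZE/OFF)
lsof -nP +L1 -c vector | awk '/\/var\/log\/pods/ {sum += $7} END {print sum}'
# Workaround: delete the collector pod on that node so the DaemonSet recreates it and the handles are dropped
oc delete pod -n openshift-logging -l component=collector --field-selector spec.nodeName=<node>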
Version-Release number of selected component (if applicable):
cluster-logging.5.5.4
How reproducible:
Not reproduced so far. The application that caused the issue is no longer running, so it is not currently possible to gather data from the original cluster while the issue is active.
Expected results:
Vector should release its handles to deleted log files so the space can be reclaimed.
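One hedged way to verify this behaviour after a fix, under the same assumptions as the sketch above, is to confirm that vector does not keep accumulating handles to deleted pod logs across rotations:

# Count of deleted pod log files still held open by vector; should stay at (or quickly return to) zero
lsof -nP +L1 -c vector | grep -c '/var/log/pods'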
Additional info:
- clones: LOG-3949 Vector not releasing deleted file handles (Closed)
- relates to: LOG-4241 Fluentd not releasing deleted file handles (Closed)
- links to: RHBA-2023:5530 Logging Subsystem 5.7.7 - Red Hat OpenShift
- links to: RHBA-2023:119497 Logging Subsystem 5.7.6 - Red Hat OpenShift