Type: Bug
Resolution: Not a Bug
Priority: Normal
Version: 4.19
Description of problem:
My customer is seeing a very full /var/lib filesystem on bare-metal nodes. They would like to understand why this is happening, whether it will cause problems for the node, and why the node does not clear down or prune this space. For example, on master01:

717G /var/lib

and on the same node:

743G /sysroot

Also of note: device /dev/sdb4 exists on both nodes. master01 had xfs_repair run on it (https://access.redhat.com/solutions/5350721) and is now at 6% usage, whereas master02, which did not have the repair, is at 83% (which master01 was also at before the repair).

master01:

Filesystem  Size  Used  Avail  Use%  Mounted on
composefs   5.8M  5.8M  0      100%  /
/dev/sdb4   894G  47G   848G   6%    /etc    <<<--- 6% usage
efivarfs    496K  239K  253K   49%   /sys/firmware/efi/efivars
devtmpfs    4.0M  0     4.0M   0%    /dev
tmpfs       252G  84K   252G   1%    /dev/shm
tmpfs       101G  96M   101G   1%    /run
shm         64M   0     64M    0%    /run/containers/storage/overlay-containers/08a4e07c84285d46e925c70b43d6ac8567fd9b4fc1702b2880c023c086e44479/userdata/shm
shm         64M   0     64M    0%    /run/containers/storage/overlay-containers/3e5ccc40f0d201ea307ac6ba1c869df370c7f9c1e79b7a5bfdd6a894b49a3cbc/userdata/shm
shm         64M   0     64M    0%    /run/containers/storage/overlay-containers/073e0ad7f604c578e17c59d1fa03236249444f8d2957087f0787977ff37afc84/userdata/shm
shm         64M   4.0K  64M    1%    /run/containers/storage/overlay-containers/f0e2ced4202d9e1378d107648210cc4b3e8f3d7385040f6914047e984c5e1c71/userdata/shm
shm         64M   0     64M    0%    /run/containers/storage/overlay-containers/549549ac5475087f79c217df5fec989a7521e241d13e696db1e66bf54f1a6549/userdata/shm
shm         64M   0     64M    0%    /run/containers/storage/overlay-containers/f3c5b41642baa63673182482ddc0de7ea4daf155723409f21aacf95437f4c15d/userdata/shm

Compare this to any other node. master02:

Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/sdb4   894G  742G  153G   83%   /    <<<---- 83% usage
efivarfs    496K  313K  179K   64%   /sys/firmware/efi/efivars
devtmpfs    4.0M  0     4.0M   0%    /dev
tmpfs       252G  84K   252G   1%    /dev/shm
tmpfs       101G  107M  101G   1%    /run
shm         64M   0     64M    0%    /run/containers/storage/overla
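As an illustrative next step (not taken from the report itself), a depth-limited du is a quick way to see which subdirectory is actually consuming /var/lib before deciding whether pruning is needed. The sketch below demonstrates the pattern on a scratch directory so it is self-contained; the directory names and sizes are hypothetical, and on a real node the same pattern would be run as root against /var/lib, adding -x to stay on one filesystem:

```shell
# Hypothetical sketch: find the largest subdirectories of a tree. On the
# affected node the equivalent would be:
#   du -xh --max-depth=1 /var/lib | sort -rh | head
# Here we build a scratch tree so the script runs anywhere.
scratch=$(mktemp -d)
mkdir -p "$scratch/containers/storage" "$scratch/kubelet"
# Fake a large container-storage directory and a small kubelet directory.
dd if=/dev/zero of="$scratch/containers/storage/layer" bs=1024 count=4096 status=none
dd if=/dev/zero of="$scratch/kubelet/pods" bs=1024 count=64 status=none
# -k reports kilobytes, --max-depth=1 stops at the first level,
# sort -rn puts the biggest consumer first.
du -k --max-depth=1 "$scratch" | sort -rn
rm -rf "$scratch"
```

In this sketch the containers subtree sorts to the top, which mirrors the common case on OpenShift nodes where /var/lib/containers dominates; whether that space is safely reclaimable (e.g. unused images) is exactly the question the customer is asking.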
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info: