Story
Resolution: Unresolved
Normal
rhel-8.4.0
rhel-sst-filesystems
ssg_filesystems_storage_and_HA
Description of problem:
Users sometimes report performance issues when using posix locks with e.g. gfs2. gfs2 redirects all posix lock requests to dlm, and dlm handles them via the corosync protocol in dlm_controld.
Many layers are involved. Some of those performance issues, e.g. lock acquisition taking too long, can be normal behavior when there is lock contention; in that case everything works as intended.
With posix locking, many processes can be involved, each acquiring different lock ranges on a file. dlm_controld keeps a local copy of a cluster-wide posix lock database that tracks the current lock mode held by each lock-acquiring process across the cluster.
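To make the lock type concrete: the locks in question are posix byte-range locks, which on gfs2 end up in dlm_controld. A minimal sketch of how a process acquires such range locks, using Python's standard fcntl wrapper (the helper name lock_range is mine, not part of any dlm tooling):

```python
import fcntl
import os
import tempfile

def lock_range(fd, start, length, exclusive=True):
    """Acquire a posix byte-range lock on [start, start + length)."""
    mode = fcntl.LOCK_EX if exclusive else fcntl.LOCK_SH
    # lockf(fd, cmd, len, start, whence) wraps fcntl(F_SETLKW);
    # OR-ing fcntl.LOCK_NB into mode would make it non-blocking.
    fcntl.lockf(fd, mode, length, start, os.SEEK_SET)

with tempfile.NamedTemporaryFile() as f:
    f.write(b"\0" * 100)
    f.flush()
    # Disjoint ranges never conflict, so both exclusive locks succeed.
    lock_range(f.fileno(), 0, 10)
    lock_range(f.fileno(), 50, 10)
    print("acquired two disjoint exclusive range locks")
```

On gfs2 each such request travels through the kernel to dlm_controld, which is why a single file can accumulate many per-range lock states from different processes and nodes.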
To give a general overview and show whether or not there was contention, we can visualize the posix lock modes per file in a plot diagram. The user can then see where contention comes from and which process on which cluster node held the lock at a specific time.
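The core computation behind such a plot could look as follows: given per-lock records (owner, byte range, hold interval), contention shows up wherever two records from different owners overlap in both range and time. This is only a sketch; the LockRecord fields are my assumption about what dlm_controld's plock database could export, not an existing format:

```python
from dataclasses import dataclass

@dataclass
class LockRecord:
    node: str       # cluster node that held the lock
    pid: int        # process on that node
    start: int      # first byte of the locked range
    end: int        # last byte of the locked range (inclusive)
    t_from: float   # time the lock was granted
    t_to: float     # time the lock was released

def overlaps(a, b):
    """Two records contend if both byte ranges and hold times overlap."""
    ranges = a.start <= b.end and b.start <= a.end
    times = a.t_from < b.t_to and b.t_from < a.t_to
    return ranges and times

def contention_pairs(records):
    """All pairs of records from different owners that overlap."""
    pairs = []
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            if (a.node, a.pid) != (b.node, b.pid) and overlaps(a, b):
                pairs.append((a, b))
    return pairs

recs = [
    LockRecord("node1", 100, 0, 99, 0.0, 2.0),
    LockRecord("node2", 200, 50, 149, 1.0, 3.0),   # range and time overlap: contention
    LockRecord("node2", 200, 200, 299, 1.0, 3.0),  # disjoint byte range: no contention
]
print(len(contention_pairs(recs)))  # → 1
```

Plotting each record as a horizontal bar (byte range or owner on the y-axis, time on the x-axis) and highlighting the pairs returned here would give exactly the per-file overview described above.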
Note:
Many layers and communication paths are involved, e.g. kernel<->user and corosync. Each dlm_controld instance stores its own lock database, and these should be compared with each other to see how much overhead is involved there. However, I think it should be enough to show only the contention/lock states in the plot, so that the user can figure out which process held a specific lock in a specific time range. Other communication, e.g. corosync/kernel, will show up as "wider" gaps between a possible contention state and the lock state.
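Comparing the per-instance databases could be as simple as diffing them range by range. A sketch under the assumption that each instance's plock database can be exported as a dict mapping a byte range to its owner and mode (this export format is hypothetical):

```python
def db_diff(db_a, db_b):
    """Return the ranges whose owner or mode differ between two instances.

    Each database is assumed to be a dict: (start, end) -> (node, pid, mode).
    A non-empty diff means the instances have not yet converged, which hints
    at propagation overhead between them.
    """
    keys = set(db_a) | set(db_b)
    return {k: (db_a.get(k), db_b.get(k))
            for k in keys if db_a.get(k) != db_b.get(k)}

# node2's instance has not yet seen the second lock:
node1_db = {(0, 99): ("node1", 100, "WR"), (100, 199): ("node2", 200, "RD")}
node2_db = {(0, 99): ("node1", 100, "WR")}
print(db_diff(node1_db, node2_db))  # → {(100, 199): (('node2', 200, 'RD'), None)}
```

In the plot, such divergence between instances would appear as the "wider" gaps mentioned above, since the lock state becomes visible on different nodes at different times.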