- Bug
- Resolution: Unresolved
- Critical
- odf-4.13
- None
Description of problem (please be as detailed as possible and provide log snippets):
ODF v4.13.0-186 shows poor read performance at 128K and 4096K block sizes with on-wire encryption enabled vs. disabled in FIO tests. We see this for both RBD and CephFS storage classes. We are measuring IOPS.
I am using FIO with 50 servers and numjobs=4; the degradation shows at the 128K and 4096K block sizes. The spreadsheet showing this degradation is here:
https://docs.google.com/spreadsheets/d/101e3upvYuOG2lYxIstKjnIR4HvrDYEk6l8LD9Wf1-vQ/edit#gid=0
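For reference, a minimal sketch of the read workload described above; only the block sizes (128K/4096K) and numjobs=4 come from this report, while the I/O pattern, ioengine, iodepth, size, runtime, and target directory are illustrative assumptions:

# Run once per block size against a PVC mounted from the RBD or CephFS
# storage class under test. Only --bs and --numjobs are from this report;
# randread, libaio, iodepth, size, and runtime are assumptions.
for bs in 128k 4096k; do
  fio --name=odf-read-${bs} --directory=/mnt/odf-test \
      --rw=randread --bs=${bs} --numjobs=4 \
      --ioengine=libaio --direct=1 --iodepth=32 \
      --size=10G --runtime=300 --time_based \
      --group_reporting
done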
Dell 740xd systems
12 OSDs spread over 3 workers in a 6-node cluster
NVMe disks are 1.5 TB
Systems have 192 GB of memory
Version of all relevant components (if applicable):
OCP v4.13.0-rc6
ODF v4.13.0-186
local storage 4.12.0-202304190215
Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?
no
Is there any workaround available to the best of your knowledge?
no
Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
3
Is this issue reproducible?
yes
Can this issue be reproduced from the UI?
If this is a regression, please provide more details to justify this:
Steps to Reproduce:
1. Configure the ODF StorageCluster with on-wire encryption disabled (see the sketch after this list).
2. Run FIO tests at 128K and 4096K block sizes as described above.
3. Capture IOPS and other info in the perf dashboards.
4. Repeat the same three steps with on-wire encryption enabled.
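A minimal sketch of toggling on-wire (in-transit) encryption for steps 1 and 4, assuming the default StorageCluster name and namespace and that the ODF operator exposes the setting under spec.network.connections.encryption; the exact field path and the rook-ceph-tools check are assumptions, not taken from this report:

# Enable on-wire encryption (use "enabled": false for the disabled run).
# Assumes the default StorageCluster name/namespace created by ODF.
oc patch storagecluster ocs-storagecluster -n openshift-storage \
  --type merge \
  -p '{"spec":{"network":{"connections":{"encryption":{"enabled":true}}}}}'

# Optional sanity check from the rook-ceph tools pod (if deployed) that the
# Ceph messenger modes switched to "secure":
oc -n openshift-storage rsh deploy/rook-ceph-tools ceph config dump | grep ms_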
Actual results:
Read IOPS at 128K and 4096K block sizes are noticeably lower with on-wire encryption enabled than with it disabled, for both RBD and CephFS storage classes (see the spreadsheet linked above).
Expected results:
Enabling on-wire encryption should not significantly degrade read IOPS at these block sizes.
Additional info: