Task
-
Resolution: Unresolved
The objective is to run an automated, comprehensive FIO benchmark across several environments, in this case using IBM storage as the backing store.
The first part of this task describes the test cases and scenarios; the second part describes the automation requirements.
Conduct a comparative performance benchmark of IBM Storage volumes, measuring IOPS and throughput across three distinct compute environments. The goal is to characterize I/O behavior in each environment and allow direct comparison.
Scope
We need to test both Fibre Channel (FC) and iSCSI paths in the following environments:
- Bare Metal / CoreOS worker (baseline), with a directly mounted LUN
- Standard VM (VMware with PVSCSI/VirtIO drivers), with its disk on a datastore backed by IBM storage
- OpenShift KubeVirt (CNV/Virtualization on OpenShift)
Methodology
- Tool: fio (Flexible I/O Tester).
- Engine: libaio (Linux Asynchronous I/O).
- Test Cases:
- Max IOPS: random read/write (4k block size, QD=64).
- Max Throughput: sequential read/write (1M block size, QD=16).
Steps
1. Environment Verification
Before running tests, verify the active storage path (FC vs iSCSI) to ensure no failover is occurring.
- Command: ls -l /dev/disk/by-path/
- Verification: Look for fc- (Fibre Channel) or ip- (iSCSI) in the device path.
- Note: For KubeVirt, this must be verified on the OpenShift Worker Node (via oc debug node), not inside the guest VM.
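The path check above can be scripted so it runs unattended before each benchmark. This is a minimal sketch; the `classify_transport` helper is ours, and it only assumes the standard `fc-`/`ip-` naming of /dev/disk/by-path entries:

```shell
# Map a /dev/disk/by-path entry name to its transport type.
classify_transport() {
  case "$1" in
    *fc-*) echo "FC" ;;
    *ip-*) echo "iSCSI" ;;
    *)     echo "unknown" ;;
  esac
}

# Example against a live system (run inside `oc debug node/<node>` for KubeVirt workers):
# for p in /dev/disk/by-path/*; do
#   echo "$(basename "$p") -> $(classify_transport "$(basename "$p")")"
# done
```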
2. FIO Configuration (bench_job.fio)
Create the following job file on all target systems to ensure consistency:
[global]
ioengine=libaio
direct=1
buffered=0
size=10G
runtime=60
time_based=1
group_reporting
filename=/data/testfile

# --- IOPS TESTS ---
[random-read-iops]
rw=randread
bs=4k
iodepth=64
numjobs=4
stonewall

[random-write-iops]
rw=randwrite
bs=4k
iodepth=64
numjobs=4
stonewall

# --- THROUGHPUT TESTS ---
[seq-read-bandwidth]
rw=read
bs=1M
iodepth=16
numjobs=1
stonewall

[seq-write-bandwidth]
rw=write
bs=1M
iodepth=16
numjobs=1
stonewall
3. Execution
Run the benchmark in each environment: fio bench_job.fio --output=results_<environment>_<protocol>.txt
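To keep result names consistent and make the later Elasticsearch upload easier, fio can also emit machine-readable JSON via `--output-format=json`. A small sketch (the `result_file` helper name is ours, not from the task):

```shell
# Build a consistent result filename from environment and protocol.
result_file() { printf 'results_%s_%s' "$1" "$2"; }

# Example invocation (fio must be installed; bench_job.fio is from step 2):
# fio bench_job.fio --output-format=json --output="$(result_file baremetal fc).json"
```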
Testing Acceptance Criteria
[ ] Benchmark completed on Bare Metal (RHEL) mounted lun (IBM)
[ ] Benchmark completed on Standard VM in VMware (IBM-backed datastore)
[ ] Benchmark completed on OpenShift KubeVirt VM. (lun from IBM)
[ ] Benchmark completed on OpenShift KubeVirt Worker. (lun from IBM)
[ ] Active paths verified (FC vs iSCSI) prior to testing.
Results compiled into a comparison table in Elasticsearch showing IOPS, bandwidth, and latency for all tested environments.
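If the runs are captured with fio's JSON output, the headline numbers for the comparison table can be pulled out with jq (assumed to be installed on the lab hosts). The field paths follow fio's JSON schema: per-job `read`/`write` objects with `iops`, `bw` (KiB/s), and `clat_ns.mean`:

```shell
# Pull per-job IOPS, bandwidth (KiB/s), and mean completion latency (ns)
# from a fio JSON result file.
extract_summary() {
  jq -r '.jobs[] |
    [.jobname,
     (.read.iops + .write.iops),
     (.read.bw + .write.bw),
     ((.read.clat_ns.mean + .write.clat_ns.mean) / 2)] | @tsv' "$1"
}

# Usage: extract_summary results_baremetal_fc.json
```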
Automation Requirements
The automation should install fio, run the benchmark, and collect the results.
The script should also collect basic metadata about each run: environment information, fio runtime flags, hostname, LUN vendor details, and any other relevant information about the workload. This metadata should be stored as a JSON summary report and sent to Elasticsearch. For this testing we will need to create an Elasticsearch index called fio.
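The metadata step might look like the sketch below. The `build_summary` helper, its field set, and `ES_URL` are assumptions; the index name `fio` comes from this task, and a real script would also pull LUN vendor details (e.g. from /sys/block/<dev>/device/vendor) and the fio flags used:

```shell
# Assemble a minimal JSON summary for one benchmark run.
build_summary() {
  # $1 = path to the fio result file for this run
  printf '{"hostname":"%s","kernel":"%s","result_file":"%s","timestamp":"%s"}' \
    "$(hostname)" "$(uname -r)" "$1" "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
}

# Index the summary into the `fio` index (ES_URL is a placeholder):
# curl -s -X POST "${ES_URL:-http://localhost:9200}/fio/_doc" \
#      -H 'Content-Type: application/json' \
#      -d "$(build_summary results_baremetal_fc.json)"
```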
*IMPORTANT* Before starting this task, please speak with Elvir, David V, and Guy C to make sure this effort aligns with similar workloads being done for GPFS validation.
This automation is intended to give us quick IOPS checks in a variety of environments in our lab, to ensure performance matches our expectations.