Issue Type: Epic
Resolution: Done
Priority: Major
Fix Version: None
Epic Name: storage-checks-for-cnv
Color: Green
Progress: 0% To Do, 0% In Progress, 100% Done
Labels: dev-ready, doc-ready, po-ready, qe-ready, ux-ready
Goal
Lately we have seen an increase in PoC environments with basic misconfigurations and red flags that prevent OpenShift Virtualization from working correctly or optimally. We should automate checks for these conditions, consolidate the results, and surface them to the administrator so that action can be taken.
User Stories
- As a CNV administrator, I want to understand whether there are any known storage-related issues with my cluster so that I can take any required actions to ensure an optimal experience with CNV.
Non-Requirements
- List of things not included in this epic, to alleviate any doubt raised during the grooming process.
Notes
We will use the Kiagnose diagnostic framework, which enables validation of cluster functionality and is already used by the CNV network team for several checkups (kubevirt-vm-latency, kubevirt-dpdk-checkup, kubevirt-rt-checkup).
OpenShift documentation for the cluster checkup framework is here.
The storage checkup repo is here.
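For a feel of the framework's shape, here is a minimal Go sketch of launching a checkup with client-go: the framework reads its input from a ConfigMap and writes results back to it, while the checkup itself runs as a Job. The namespace, ServiceAccount, image reference, and ConfigMap keys below are illustrative assumptions rather than the storage checkup's actual interface; the CONFIGMAP_NAMESPACE/CONFIGMAP_NAME env vars follow the pattern documented for the network checkups.

```go
package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	ns := "storage-checkup-demo" // assumed namespace

	// Input ConfigMap: the checkup reads "spec.*" keys and reports back
	// under "status.*" keys in the same ConfigMap.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "storage-checkup-config", Namespace: ns},
		Data:       map[string]string{"spec.timeout": "10m"}, // param keys vary per checkup
	}
	if _, err := client.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Job running the checkup image; the ConfigMap location is handed to the
	// checkup via environment variables.
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "storage-checkup", Namespace: ns},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					ServiceAccountName: "storage-checkup-sa", // assumed; needs RBAC for the checks
					RestartPolicy:      corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:  "storage-checkup",
						Image: "example.com/kubevirt-storage-checkup:latest", // assumed image ref
						Env: []corev1.EnvVar{
							{Name: "CONFIGMAP_NAMESPACE", Value: ns},
							{Name: "CONFIGMAP_NAME", Value: cm.Name},
						},
					}},
				},
			},
		},
	}
	if _, err := client.BatchV1().Jobs(ns).Create(ctx, job, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

Once the Job completes, the results would be read back from the status.* keys of the same ConfigMap.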
Here is a partial list of the checks that we could perform (a few of them are sketched in Go after the list):
- Does the cluster have a default storage class defined?
- Do any StorageProfiles have empty claimPropertySets (unknown provisioners)?
- Have any StorageProfiles been overridden using the spec field?
- Are there missing VolumeSnapshotClass CRs for StorageClasses that would use a snapshot-based clone?
- Is there any storage that can support an RWX access mode?
- Is the default storage class limited to copy clone strategy (no smart clone)?
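To make a few of these concrete, here is a hedged Go sketch covering the default-class, empty-claimPropertySets, and spec-override checks, using client-go and a dynamic client for CDI's StorageProfile CRD. The default-class annotation and the cdi.kubevirt.io/v1beta1 GVR follow upstream Kubernetes/CDI conventions, but this is a sketch of the idea, not the checkup's actual implementation.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

const defaultSCAnnotation = "storageclass.kubernetes.io/is-default-class"

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	dyn := dynamic.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Check: does the cluster have a default storage class?
	scs, err := client.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	hasDefault := false
	for _, sc := range scs.Items {
		if sc.Annotations[defaultSCAnnotation] == "true" {
			hasDefault = true
		}
	}
	if !hasDefault {
		fmt.Println("WARN: no default storage class defined")
	}

	// Check: StorageProfiles with empty status.claimPropertySets indicate a
	// provisioner CDI does not recognize; a non-empty spec means an override.
	gvr := schema.GroupVersionResource{Group: "cdi.kubevirt.io", Version: "v1beta1", Resource: "storageprofiles"}
	profiles, err := dyn.Resource(gvr).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range profiles.Items {
		sets, found, _ := unstructured.NestedSlice(p.Object, "status", "claimPropertySets")
		if !found || len(sets) == 0 {
			fmt.Printf("WARN: StorageProfile %s has empty claimPropertySets (unknown provisioner)\n", p.GetName())
		}
		if override, found, _ := unstructured.NestedMap(p.Object, "spec"); found && len(override) > 0 {
			fmt.Printf("INFO: StorageProfile %s is overridden via spec\n", p.GetName())
		}
	}
}
```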
Storage backend-specific checks (the ODF check is sketched below):
- ODF: Are VMs using the plain RBD StorageClass when the virtualization StorageClass exists?
- EFS: Are there VMs using an EFS StorageClass where fs_gid and fs_uid are not set in the StorageClass?
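The ODF check could look roughly like the following sketch, which flags PVCs bound to the plain RBD StorageClass whenever a virtualization-tuned class exists. Both StorageClass names below are the usual ODF defaults but should be treated as assumptions here, and a real check would also verify that the flagged PVCs actually back VM disks.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

const (
	plainRBD = "ocs-storagecluster-ceph-rbd"                // assumed name
	virtRBD  = "ocs-storagecluster-ceph-rbd-virtualization" // assumed name
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Only warn if the virtualization-tuned StorageClass actually exists.
	if _, err := client.StorageV1().StorageClasses().Get(ctx, virtRBD, metav1.GetOptions{}); err != nil {
		return
	}

	pvcs, err := client.CoreV1().PersistentVolumeClaims(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pvc := range pvcs.Items {
		if pvc.Spec.StorageClassName != nil && *pvc.Spec.StorageClassName == plainRBD {
			fmt.Printf("WARN: PVC %s/%s uses %s; consider %s for VM disks\n",
				pvc.Namespace, pvc.Name, plainRBD, virtRBD)
		}
	}
}
```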
is documented by:
- CNV-28092 DOC: Document CNV cluster checks (Closed)