- Story
- Resolution: Unresolved
- Major
- ACM-11816 - Request for advanced configuration option in cluster creation wizard for KubeVirt on ACM console
- ACM Console Sprint 262
Value Statement
Defining StorageClass and VolumeSnapshotClass mappings allows users to expose storage available on their hosting infrastructure to HCP guest clusters running in VMs. Supporting this functionality in the UI means users do not have to manually edit YAML to access these API features.
Definition of Done for Engineering Story Owner (Checklist)
- New storage mapping step is added to the wizard
- User can optionally add as many StorageClass and VolumeSnapshotClass mappings as they like (start with no mappings by default)
- The group field is optional; it links a StorageClass mapping to a VolumeSnapshotClass mapping for taking and restoring snapshots
- Each field should have a helper tooltip explaining it in more detail
- Fields have appropriate validation
- StorageClasses and VolumeSnapshotClasses are pre-populated from the local cluster, but the user can enter a custom value in case they are using external infrastructure
Development Complete
- The code is complete.
- Functionality is working.
- Any required downstream Dockerfile changes are made.
Tests Automated
- [ ] Unit/function tests have been automated and incorporated into the build.
- [ ] 100% automated unit/function test coverage for new or changed APIs.
Secure Design
- [ ] Security has been assessed and incorporated into your threat model.
Multidisciplinary Teams Readiness
- [ ] Create an informative documentation issue using the Customer Portal Doc template, which you can access from The Playbook, and ensure the doc acceptance criteria are met.
- [ ] Link the development issue to the doc issue.
Support Readiness
- [ ] The must-gather script has been updated.
More information
From https://docs.google.com/document/d/1L74VHlQyeVuvU1tGvwzdtt-ryS7GgnkP-GMNTB0un2g/edit?tab=t.0
StorageClass and VolumeSnapshotClass groupings explained
KubeVirt CSI works by mapping StorageClasses and VolumeSnapshotClasses from the underlying infra cluster (the cluster the KubeVirt VMs are running on) into the HCP guest cluster (the hosted cluster).
For example, if the underlying infra cluster has StorageClasses “A” and “B”, we can pass those StorageClasses to the hosted cluster so that the hosted cluster can reuse the same storage as the infra cluster. Within the hosted cluster, we could expose the infra StorageClasses by mapping “A” and “B” to any naming convention we want.
This infra cluster storage reuse is helpful because it means that hosted clusters don’t need their own dedicated storage; instead, they can leverage the storage already available on the underlying infra cluster.
Configuring StorageClass and VolumeSnapshotClass mappings
The HostedCluster object exposes mechanisms for mapping infra StorageClasses and infra VolumeSnapshotClasses to the guest cluster. The two primary fields involved with this are…
hostedCluster.spec.platform.kubevirt.storageDriver.manual.storageClassMapping
hostedCluster.spec.platform.kubevirt.storageDriver.manual.volumeSnapshotClassMapping
These fields list infra StorageClasses and VolumeSnapshotClasses as strings, and for each one we choose what the corresponding guest StorageClass or VolumeSnapshotClass should be called. This covers the infra-to-guest mapping.
The optional “group” field within both of these structs is a convention that tells us which storageClassMappings and volumeSnapshotClassMappings are compatible with one another. A storageClassMapping and a volumeSnapshotClassMapping with the same “group” name tell kubevirt-csi that, within the guest, snapshots of PVCs of that StorageClass can be taken using that VolumeSnapshotClass, and that such a snapshot can be restored to a PVC of that StorageClass.
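For illustration, here is a minimal sketch of the relevant portion of a HostedCluster spec with one group-linked mapping. It assumes the field names used by the HyperShift manual storageDriver API (type: Manual, infraStorageClassName, guestStorageClassName, infraVolumeSnapshotClassName, guestVolumeSnapshotClassName, group); the class and group names are made-up examples, and the structure should be verified against the HostedCluster CRD for the targeted release.

```yaml
spec:
  platform:
    kubevirt:
      storageDriver:
        type: Manual
        manual:
          storageClassMapping:
            # Expose the infra cluster's StorageClass "A" to the guest
            # cluster under the name "guest-storage-a".
            - infraStorageClassName: A
              guestStorageClassName: guest-storage-a
              group: group-a
          volumeSnapshotClassMapping:
            # Same "group-a" group, so kubevirt-csi knows this snapshot
            # class can snapshot and restore PVCs of "guest-storage-a".
            - infraVolumeSnapshotClassName: A-snapshots
              guestVolumeSnapshotClassName: guest-snapshots-a
              group: group-a
```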
UI Wizard fields
To keep this simple, we could expose the StorageClass and VolumeSnapshotClass struct fields exactly as they are in the HostedCluster API. We’d just need to make sure this is treated as a list that people can expand, similar to the NodePool concept; see the sketch below.
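Below is a non-authoritative sketch of what such an expandable list might translate to, assuming the same field names as above; all class and group names are placeholders, and the group field is left off one entry to show that it is optional.

```yaml
storageDriver:
  type: Manual
  manual:
    storageClassMapping:
      # One wizard list entry per mapping; users can add or remove entries.
      - infraStorageClassName: infra-fast
        guestStorageClassName: fast
        group: fast-group
      - infraStorageClassName: infra-bulk
        guestStorageClassName: bulk   # no group: no snapshot class linked
    volumeSnapshotClassMapping:
      - infraVolumeSnapshotClassName: infra-fast-snap
        guestVolumeSnapshotClassName: fast-snap
        group: fast-group
```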