Raising this Jira for the use case of having multiple Ceph clusters as backends for Cinder and Nova.
In Cinder we can define multiple secret UUIDs/FSIDs, one per backend, as in [1], but for Nova there is only the possibility of a single [libvirt] section, as in [2].
More discussion with the rhos-storage folks here: https://redhat-internal.slack.com/archives/C04GLFJE57Y/p1764334516545009
This RFE is filed as per the discussion with the experts in that Slack channel.
Business justification - Multiple Ceph cluster backends are already supported on the Cinder side, so we need a way to consume them from the Nova point of view.
As of now any instance creation against the second cluster is failing with:
Secret not found: no secret with matching uuid '5384ee52-xxx-xxx-xxx-xxxxxx'
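For context, the UUID in that error refers to a libvirt secret object on the compute host: Nova hands rbd_secret_uuid to libvirt, which must already hold a secret with that UUID carrying the Ceph client key. A hedged sketch of what such a secret definition looks like (the UUID and client name below are placeholders, not the real values from this case):

```xml
<!-- Hypothetical libvirt secret definition for the second Ceph cluster.
     The uuid must match the rbd_secret_uuid configured for that cluster. -->
<secret ephemeral='no' private='no'>
  <uuid>REPLACE-WITH-rbd_secret_uuid</uuid>
  <usage type='ceph'>
    <name>client.cinder-volumes secret</name>
  </usage>
</secret>
```

On a compute host such a secret is registered with virsh secret-define and virsh secret-set-value; the "Secret not found" error above indicates that no secret with the second cluster's UUID was ever defined on the compute node, because Nova/libvirt configuration only carries the first cluster's secret.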
[1]
cinderVolumes:
  ceph:
    customServiceConfig: |
      [DEFAULT]
      enabled_backends=ceph
      [ceph]
      volume_backend_name=ceph
      volume_driver=cinder.volume.drivers.rbd.RBDDriver
      rbd_ceph_conf=/etc/ceph/ceph.conf
      rbd_user=openstack
      rbd_pool=volumes
      rbd_flatten_volume_from_snapshot=False
      rbd_secret_uuid=4c2e425e-xxx-xxxx-xxxxx
    networkAttachments:
    - storage
    replicas: 1
    resources: {}
  ceph-external:
    customServiceConfig: |
      [ceph-hdd-pool]
      volume_backend_name=ceph-hdd-pool
      volume_driver=cinder.volume.drivers.rbd.RBDDriver
      rbd_ceph_conf=/etc/ceph-hdd-pool/ceph.conf
      rbd_user=cinder-volumes
      rbd_pool=cinder-volumes
      rbd_flatten_volume_from_snapshot=False
      rbd_secret_uuid=5384ee52-xxxx-xxx-xxxx-xxxxx
[2]
apiVersion: v1
data:
  03-ceph-nova.conf: |
    [libvirt]
    images_type=rbd
    images_rbd_pool=vms
    images_rbd_ceph_conf=/etc/ceph/ceph.conf
    images_rbd_glance_store_name=default_backend
    images_rbd_glance_copy_poll_interval=15
    images_rbd_glance_copy_timeout=600
    rbd_user=openstack
    rbd_secret_uuid=4c2e425e-xxxx-xxx-xxxx-xxxxxx <<<<<<<<<<< only one secret UUID can be set here
kind: ConfigMap
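The root of the limitation is that nova.conf is an INI-style file with a single [libvirt] section, so there is nowhere to put a second rbd_secret_uuid. A minimal sketch with Python's stdlib configparser (not oslo.config, which Nova actually uses, but the same INI model; the UUID values are placeholders) shows why a second [libvirt] block cannot help:

```python
# Sketch: nova.conf is INI-style, and [libvirt] is one section, so only
# one rbd_secret_uuid can exist per compute node. UUIDs are placeholders.
import configparser

TWO_CLUSTERS = """
[libvirt]
rbd_user=openstack
rbd_secret_uuid=UUID-OF-CLUSTER-A

[libvirt]
rbd_user=cinder-volumes
rbd_secret_uuid=UUID-OF-CLUSTER-B
"""

# Strict INI parsing rejects a duplicate [libvirt] section outright...
strict = configparser.ConfigParser()
try:
    strict.read_string(TWO_CLUSTERS)
except configparser.DuplicateSectionError as exc:
    print("strict parse:", exc.__class__.__name__)

# ...and lenient parsing silently keeps only the last value, so the
# first cluster's secret UUID is lost either way.
lenient = configparser.ConfigParser(strict=False)
lenient.read_string(TWO_CLUSTERS)
print("surviving uuid:", lenient["libvirt"]["rbd_secret_uuid"])
```

Either way only one secret UUID survives per compute, which is why the instance boot against the second cluster fails with "Secret not found".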
Functional requirements (mandatory)
Since multiple Ceph backends are something we support, we need a way to get this working in RHOSO. We expect more customers to use this topology.
Describe the customer impact
A support case is linked for the customer who is impacted.