Ceph has supported multiple CephFS filesystems since the Pacific release. This functionality allows configuring separate filesystems, with full data separation, on separate pools. Alongside data separation, each filesystem has its own metadata pool(s) as well as its own metadata server(s). OpenStack Manila users benefit from this level of isolation by mapping an isolated filesystem to a separate backend. This means that multiple backends can be configured in Manila that provision onto the same Ceph cluster, each within its own CephFS filesystem.
The CephFS driver permits this via a configuration option: ``cephfs_filesystem_name``.
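As a rough illustration (the backend section names and filesystem names below are made up; the option names are the driver's existing ones), two Manila backends pointing at the same Ceph cluster but at different CephFS filesystems could be configured like this in ``manila.conf``::

    [DEFAULT]
    enabled_share_backends = cephfs_alpha,cephfs_beta

    [cephfs_alpha]
    share_backend_name = cephfs_alpha
    share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
    driver_handles_share_servers = False
    cephfs_conf_path = /etc/ceph/ceph.conf
    cephfs_auth_id = manila
    cephfs_filesystem_name = fs-alpha

    [cephfs_beta]
    share_backend_name = cephfs_beta
    share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
    driver_handles_share_servers = False
    cephfs_conf_path = /etc/ceph/ceph.conf
    cephfs_auth_id = manila
    cephfs_filesystem_name = fs-beta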
There are a couple of deficiencies in this approach:
(1) If a request doesn't specify any backend-specific characteristic, the scheduler may end up favoring one backend (the "first" backend in the list compiled within the scheduler) all the time. This is because nothing differentiates the backends as far as the basic characteristics go: the capacity information each one reports is for the whole cluster. It would be nice if each backend started reporting ``allocated_capacity_gb``, and if the scheduler took note of that (see the sketch after this list).
(2) When the filesystem isn't called "cephfs", users/automation software need to specify the filesystem name when mounting subvolumes (Manila shares); a mount example is sketched further below.
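For the first deficiency, here is a minimal sketch (Python, not the actual driver code) of the kind of per-backend reporting that could help: the cluster-wide figures stay the same for every backend, while ``allocated_capacity_gb`` is computed from the backend's own filesystem. All names and numbers below are illustrative::

    # Hypothetical sketch, not the current CephFS driver implementation.
    def build_backend_stats(filesystem_name, cluster_total_gb,
                            cluster_free_gb, subvolume_sizes_gb):
        """Stats one CephFS-backed Manila backend could report."""
        return {
            'storage_protocol': 'CEPHFS',
            'cephfs_filesystem_name': filesystem_name,
            # Cluster-wide figures are identical for every backend on
            # the same Ceph cluster...
            'total_capacity_gb': cluster_total_gb,
            'free_capacity_gb': cluster_free_gb,
            # ...so per-filesystem allocation is what would let the
            # scheduler tell the backends apart.
            'allocated_capacity_gb': sum(subvolume_sizes_gb),
        }

    # Two backends on the same cluster, backed by different filesystems:
    print(build_backend_stats('fs-alpha', 1000, 800, [100, 50]))
    print(build_backend_stats('fs-beta', 1000, 800, [10]))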
This issue was also discussed at the October 2023 (Caracal) PTG. One proposed solution was for the driver to set the filesystem name in the share's metadata so that users and automation tools can view this information.
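A rough sketch of that idea, assuming a made-up metadata key and the kernel client's ``fs=`` mount option (older clients use ``mds_namespace=``); none of these names are settled::

    # Hypothetical sketch of the PTG idea; the metadata key is illustrative.
    def filesystem_metadata(filesystem_name):
        """Metadata the driver could attach to every share it provisions."""
        return {'cephfs_filesystem_name': filesystem_name}

    def kernel_mount_command(export_path, cephx_id, filesystem_name):
        """Mount line users/automation could build from that metadata."""
        # 'fs=' selects the CephFS filesystem to mount; older kernel
        # clients use 'mds_namespace=' instead.
        return ('mount -t ceph {export} /mnt/share -o name={user},fs={fs}'
                .format(export=export_path, user=cephx_id, fs=filesystem_name))

    meta = filesystem_metadata('fs-alpha')
    print(meta)
    print(kernel_mount_command('192.0.2.10:6789:/volumes/_nogroup/share-1234',
                               'manila-user', meta['cephfs_filesystem_name']))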