- Epic
- Resolution: Done
- Normal
- None
- None
- 2024Q1
- Approved
Problem:
There are DataPlaneServices that need to distribute information across multiple NodeSets. For example, today the edpm_ssh_known_hosts role can only generate a proper ssh known_hosts file for the EDPM nodes within a given NodeSet. This means that if a Deployment contains multiple NodeSets, an ssh connection between nodes in different NodeSets will not be secure, as the host keys are not known across NodeSets.
Limitation:
The current per-NodeSet known_hosts file limits VM move operations (live migration, resize, cold migration). If a single nova cell consists of multiple sets of differently configured EDPM compute nodes (e.g. with different cpu_shared_set and cpu_dedicated_set values), those sets of computes are deployed as separate NodeSets, and therefore VM move operations between them will not be possible.
Solution:
Operators will be able to specify services that they want deployed across all NodeSets simultaneously. This will be accomplished by supplying all of the NodeSet-specific inventories to the ansibleEE operator, while at the same time overriding the default play target to `all`.
Further context:
Possible solution (quoting jslagle@redhat.com):
if ansible merges the inventories as we expect, then we could have a separate service that uses a playbook just to configure known_hosts that uses hosts: all in the playbook. Then we would have to change all the playbooks to use hosts: <some other group> to mean just the one nodeset for this execution
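A minimal sketch of what such a known_hosts service playbook might look like under this approach (the playbook structure is an assumption for illustration; only the edpm_ssh_known_hosts role name comes from this ticket):

```yaml
# Hypothetical dataplane service playbook: runs against every host in
# the merged inventory rather than a single NodeSet.
- name: Distribute ssh host keys across all NodeSets
  hosts: all            # target all hosts from all supplied inventories
  gather_facts: false
  tasks:
    - name: Configure known_hosts with the host keys of every node
      ansible.builtin.import_role:
        name: edpm_ssh_known_hosts
```

Ansible merges inventories when several are passed on the command line, e.g. `ansible-playbook -i nodeset1-inventory.yaml -i nodeset2-inventory.yaml playbook.yaml`, so `hosts: all` would cover every NodeSet, while per-NodeSet service playbooks would instead target a group naming just their own NodeSet.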
Original slack discussion: https://redhat-internal.slack.com/archives/CQXJFGMK6/p1696419778959799
Revisited discussion: https://redhat-internal.slack.com/archives/CQXJFGMK6/p1702387896179729
- is related to:
  - OSPRH-4767 Some services have to be deployed only after other services reach specific state (Backlog)
  - OSPRH-5296 Distribute ssh host keys across NodeSets (Closed)
  - OSPRH-5362 Add ssh-known-hosts service to dataplane samples and remove from configure_os playbook (Closed)