Red Hat OpenStack Services on OpenShift
OSPRH-17065

[FFU] HostnameMap doesn't have a record for a node that was moved to new role: old role hostname is there


    • Severity: Important

      To Reproduce
      Steps to reproduce the behavior:
      This problem was originally reported as a problem with inventory generation. During investigation, a broken HostnameMap appeared to be the cause:

      2025-05-27 12:14:04.417233 | 566f0855-0029-11e0-4bd0-000000000009 |      FATAL | Generate ansible inventory | localhost | error={"changed": false, "error": "No IPs found for Rhel9Compute role on ctlplane network", "msg": "Error generating inventory for overcloud: No IPs found for Rhel9Compute role on ctlplane network", "success": false}
      

      After applying the workaround from https://access.redhat.com/solutions/7120161 (adding "provisioned: false" to the source role and aligning the role counts) during FFU, the generated baremetal-deployment.yaml has an invalid HostnameMap: the mapping for the destination role's hostname is missing, and a record is added for the original role's instance instead (see the sketch below).
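
      For illustration only, the workaround roughly amounts to a node-definition change like the one below. Node and hostname names (compute-0, overcloud-novacompute-0, overcloud-rhel9compute-0) are hypothetical, and the exact layout of the file generated in this FFU flow may differ:

      - name: Compute                          # source role
        count: 1                               # count aligned after moving one node away
        instances:
        - name: compute-1
          hostname: overcloud-novacompute-1
        - name: compute-0                      # node being moved to the new role
          hostname: overcloud-novacompute-0
          provisioned: false                   # workaround from the KCS article
      - name: Rhel9Compute                     # destination role
        count: 1
        instances:
        - name: compute-0
          hostname: overcloud-rhel9compute-0   # hostname expected for the new role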

      This looks similar to the problem described and solved in the tripleo-ansible patch attached to https://issues.redhat.com/browse/OSPRH-16632, but I am not 100% sure. Please help us understand the path forward.

      Expected behavior
      A valid baremetal-deployment.yaml is generated after the node is moved to the Rhel9Compute role, and inventory generation is not broken (see the sketch below for an illustrative HostnameMap entry).
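
      As a sketch only (hostnames hypothetical), the generated file currently carries a record keyed by the original Compute role's hostname, while a valid file would be expected to carry an entry for the destination role's hostname instead, along these lines:

      # Observed (invalid): record for the original role's instance
      HostnameMap:
        overcloud-novacompute-0: overcloud-novacompute-0

      # Expected: mapping for the destination Rhel9Compute role's hostname,
      # preserving the node's existing hostname
      HostnameMap:
        overcloud-rhel9compute-0: overcloud-novacompute-0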

      Bug impact
      FFU process is blocked

      Known workaround
      None

              Assignee: Unassigned
              Reporter: rhn-support-astupnik (Alex Stupnikov)
              Team: rhos-dfg-df
