## remove faulty node using this guide https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/installing_and_managing_red_hat_openstack_platform_with_director/assembly_replacing-controller-nodes

Check Galera health on the surviving controllers:

```
(totp17)[cloud-admin@openstackclient: ~]$ for i in 163.162.31.86 163.162.31.87 ; do echo "*** $i ***" ; ssh cloud-admin@$i "sudo podman exec \$(sudo podman ps -f name=galera-bundle -q) mysql -e \"SHOW STATUS LIKE 'wsrep_local_state_comment'; SHOW STATUS LIKE 'wsrep_cluster_size';\"" ; done
*** 163.162.31.86 ***
Variable_name              Value
wsrep_local_state_comment  Synced
Variable_name              Value
wsrep_cluster_size         2
*** 163.162.31.87 ***
Variable_name              Value
wsrep_local_state_comment  Synced
Variable_name              Value
wsrep_cluster_size         2
```

Check RabbitMQ cluster status from both surviving controllers:

```
(totp17)[cloud-admin@openstackclient: ~]$ ssh cloud-admin@163.162.31.86 "sudo podman exec \$(sudo podman ps -f name=rabbitmq-bundle -q) rabbitmqctl cluster_status" ; ssh cloud-admin@163.162.31.87 "sudo podman exec \$(sudo podman ps -f name=rabbitmq-bundle -q) rabbitmqctl cluster_status"
Cluster status of node rabbit@totp-ctr-1.internalapi.nfv.cselt.it ...
Basics
Cluster name: rabbit@totp-ctr-1.nfv.cselt.it
Disk Nodes
rabbit@totp-ctr-1.internalapi.nfv.cselt.it
rabbit@totp-ctr-2.internalapi.nfv.cselt.it
Running Nodes
rabbit@totp-ctr-1.internalapi.nfv.cselt.it
rabbit@totp-ctr-2.internalapi.nfv.cselt.it
Versions
rabbit@totp-ctr-1.internalapi.nfv.cselt.it: RabbitMQ 3.9.10 on Erlang 24.3.4.2
rabbit@totp-ctr-2.internalapi.nfv.cselt.it: RabbitMQ 3.9.10 on Erlang 24.3.4.2
Maintenance status
Node: rabbit@totp-ctr-1.internalapi.nfv.cselt.it, status: not under maintenance
Node: rabbit@totp-ctr-2.internalapi.nfv.cselt.it, status: not under maintenance
Alarms
(none)
Network Partitions
(none)
Listeners
Node: rabbit@totp-ctr-1.internalapi.nfv.cselt.it, interface: [::], port: 15672, protocol: http, purpose: HTTP API
Node: rabbit@totp-ctr-1.internalapi.nfv.cselt.it, interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication
Node: rabbit@totp-ctr-1.internalapi.nfv.cselt.it, interface: 192.168.158.13, port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0
Node: rabbit@totp-ctr-2.internalapi.nfv.cselt.it, interface: [::], port: 15672, protocol: http, purpose: HTTP API
Node: rabbit@totp-ctr-2.internalapi.nfv.cselt.it, interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication
Node: rabbit@totp-ctr-2.internalapi.nfv.cselt.it, interface: 192.168.158.14, port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0
Feature flags
Flag: drop_unroutable_metric, state: enabled
Flag: empty_basic_get_metric, state: enabled
Flag: implicit_default_bindings, state: enabled
Flag: maintenance_mode_status, state: enabled
Flag: quorum_queue, state: enabled
Flag: stream_queue, state: enabled
Flag: user_limits, state: enabled
Flag: virtual_host_metadata, state: enabled
```

The second command returns the same cluster view from totp-ctr-2 (same disk/running nodes, versions, listeners, and feature flags); the only difference is the reported cluster name, rabbit@totp-ctr-2.nfv.cselt.it.

Confirm fencing is disabled before changing cluster membership:

```
ssh cloud-admin@163.162.31.87 "sudo pcs property show stonith-enabled"
Deprecation Warning: This command is deprecated and will be removed. Please use 'pcs property config' instead.
Cluster Properties:
 stonith-enabled: false
```

Stop the tripleo services on the faulty node (163.162.31.85 / totp-ctr-0), then stop the cluster on it:

```
ssh cloud-admin@163.162.31.85 "sudo systemctl stop tripleo_*"
Last login: Fri Jul 18 15:20:49 2025 from 163.162.31.84
[cloud-admin@totp-ctr-0 ~]$ sudo su
[root@totp-ctr-0 cloud-admin]# pcs cluster stop
Stopping Cluster (pacemaker)...
Stopping Cluster (corosync)...
[root@totp-ctr-0 cloud-admin]# pcs cluster status
Error: cluster is not currently running on this node
```

Remove the node from the cluster, working from a surviving controller:

```
(totp17)[cloud-admin@openstackclient: ~]$ ssh cloud-admin@163.162.31.86 "sudo pcs cluster node remove totp-ctr-0 --skip-offline --force"
Warning: Host 'totp-ctr-0' is not known to pcs, try to authenticate the host using 'pcs host auth totp-ctr-0' command
Warning: Unable to determine whether this action will cause a loss of the quorum
Warning: Removed node 'totp-ctr-0' could not be reached and subsequently deconfigured. Run 'pcs cluster destroy' on the unreachable node.
Sending updated corosync.conf to nodes...
```
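The per-node Galera check above can be wrapped in a small helper that fails when a node is not Synced or the cluster is smaller than expected. A minimal sketch, assuming the output of the `mysql -e` command is fed on stdin (`check_wsrep` is a hypothetical name, not part of the tooling used above):

```shell
#!/bin/sh
# check_wsrep: read "SHOW STATUS LIKE 'wsrep_...'" output on stdin and
# succeed only if the node reports Synced and the expected cluster size.
# Hypothetical helper: pipe the mysql output from the galera check into it.
check_wsrep() {
    expected="$1"
    out=$(cat)
    state=$(printf '%s\n' "$out" | awk '$1 == "wsrep_local_state_comment" { print $2 }')
    size=$(printf '%s\n' "$out" | awk '$1 == "wsrep_cluster_size" { print $2 }')
    [ "$state" = "Synced" ] && [ "$size" = "$expected" ]
}
```

With only two of the three controllers alive, `check_wsrep 3` fails and `check_wsrep 2` succeeds, matching the output above.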
```
totp-ctr-1: Succeeded
totp-ctr-2: Succeeded
totp-ctr-1: Corosync configuration reloaded
```

Verify the cluster state:

```
ssh cloud-admin@163.162.31.86 "sudo pcs status"
Cluster name: tripleo_cluster
Status of pacemakerd: 'Pacemaker is running' (last updated 2025-07-18 15:42:54 +02:00)
Cluster Summary:
  * Stack: corosync
  * Current DC: totp-ctr-2 (version 2.1.5-9.el9_2.4-a3f44794f94) - partition with quorum
  * Last updated: Fri Jul 18 15:42:55 2025
  * Last change: Fri Jul 18 15:41:53 2025 by hacluster via crm_node on totp-ctr-1
  * 10 nodes configured
  * 33 resource instances configured

Node List:
  * Online: [ totp-ctr-1 totp-ctr-2 ]
  * RemoteOnline: [ totp-dpdk6-0 totp-dpdk6-1 ]
  * GuestOnline: [ galera-bundle-0 galera-bundle-2 rabbitmq-bundle-0 rabbitmq-bundle-1 ]

Full List of Resources:
  * totp-dpdk6-0 (ocf:pacemaker:remote): Started totp-ctr-1
  * totp-dpdk6-1 (ocf:pacemaker:remote): Started totp-ctr-2
  * ip-163.162.31.94 (ocf:heartbeat:IPaddr2): Started totp-ctr-2
  * stonith-fence_kubevirt-020ab000001e (stonith:fence_kubevirt): Started totp-ctr-2
  * stonith-fence_kubevirt-020ab0000012 (stonith:fence_kubevirt): Started totp-ctr-1
  * ip-163.162.31.51 (ocf:heartbeat:IPaddr2): Started totp-ctr-2
  * ip-192.168.158.10 (ocf:heartbeat:IPaddr2): Started totp-ctr-1
  * ip-172.16.4.10 (ocf:heartbeat:IPaddr2): Started totp-ctr-2
  * Container bundle set: haproxy-bundle [cluster.common.tag/haproxy:pcmklatest]:
    * haproxy-bundle-podman-0 (ocf:heartbeat:podman): Started totp-ctr-2
    * haproxy-bundle-podman-1 (ocf:heartbeat:podman): Stopped
    * haproxy-bundle-podman-2 (ocf:heartbeat:podman): Started totp-ctr-1
  * Container bundle set: galera-bundle [cluster.common.tag/mariadb:pcmklatest]:
    * galera-bundle-0 (ocf:heartbeat:galera): Promoted totp-ctr-2
    * galera-bundle-1 (ocf:heartbeat:galera): Stopped
    * galera-bundle-2 (ocf:heartbeat:galera): Promoted totp-ctr-1
  * Container bundle set: rabbitmq-bundle [cluster.common.tag/rabbitmq:pcmklatest]:
    * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started totp-ctr-2
    * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started totp-ctr-1
    * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Stopped
  * stonith-fence_kubevirt-020ab0000018 (stonith:fence_kubevirt): Started totp-ctr-2
  * stonith-fence_ipmilan-d4f5ef1b5648 (stonith:fence_ipmilan): Started totp-ctr-1
  * stonith-fence_ipmilan-d4f5ef1c2058 (stonith:fence_ipmilan): Started totp-ctr-1
  * Container bundle: openstack-cinder-volume [cluster.common.tag/cinder-volume:pcmklatest]:
    * openstack-cinder-volume-podman-0 (ocf:heartbeat:podman): Started totp-ctr-2

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
```

Deauthenticate the removed host (already gone):

```
ssh cloud-admin@163.162.31.86 "sudo pcs host deauth totp-ctr-0"
Error: Following hosts were not found: 'totp-ctr-0'
```

Unmanage the Galera bundle while the node is replaced:

```
ssh cloud-admin@163.162.31.86 "sudo pcs resource unmanage galera-bundle"
```

`pcs status` (15:49:05, last change 15:48:58 by root via cibadmin on totp-ctr-1) now shows the Galera guests in maintenance and the bundle unmanaged; the rest of the output is unchanged:

```
Node List:
  * GuestNode galera-bundle-0: maintenance
  * GuestNode galera-bundle-2: maintenance
  * Online: [ totp-ctr-1 totp-ctr-2 ]
  * RemoteOnline: [ totp-dpdk6-0 totp-dpdk6-1 ]
  * GuestOnline: [ rabbitmq-bundle-0 rabbitmq-bundle-1 ]
...
  * Container bundle set: galera-bundle [cluster.common.tag/mariadb:pcmklatest] (unmanaged):
    * galera-bundle-0 (ocf:heartbeat:galera): Promoted totp-ctr-2 (unmanaged)
    * galera-bundle-1 (ocf:heartbeat:galera): Stopped (unmanaged)
    * galera-bundle-2 (ocf:heartbeat:galera): Promoted totp-ctr-1 (unmanaged)
```

Remove the dead member from the OVN Northbound and Southbound raft clusters. The stale member is recognizable by `match_index=0` and a very old "last msg" age (here the servers at 192.168.158.12, i.e. totp-ctr-0):

```
ssh cloud-admin@163.162.31.86 sudo podman exec ovn_cluster_north_db_server ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound 2>/dev/null | grep -A4 Servers:
Servers:
2b56 (2b56 at tcp:192.168.158.12:6643) next_index=904 match_index=0 last msg 82203980 ms ago
5c00 (5c00 at tcp:192.168.158.14:6643) next_index=904 match_index=903 last msg 1095 ms ago
ac0a (ac0a at tcp:192.168.158.13:6643) (self) next_index=903 match_index=903
[root@totp-ctr-1 cloud-admin]# podman exec ovn_cluster_north_db_server ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/kick OVN_Northbound ac0a
[root@totp-ctr-1 cloud-admin]#
[root@totp-ctr-1 cloud-admin]# sudo podman exec ovn_cluster_south_db_server ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound 2>/dev/null | grep -A4 Servers:
Servers:
b247 (b247 at tcp:192.168.158.14:6644) next_index=1073 match_index=1072 last msg 1238 ms ago
ea19 (ea19 at tcp:192.168.158.13:6644) (self) next_index=1072 match_index=1072
a580 (a580 at tcp:192.168.158.12:6644) next_index=1073 match_index=0 last msg 18656809 ms ago
[root@totp-ctr-1 cloud-admin]# sudo podman exec ovn_cluster_south_db_server ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/kick OVN_Southbound b247
started removal
[root@totp-ctr-1 cloud-admin]# podman exec ovn_cluster_north_db_server ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/kick OVN_Northbound 2b56
sent removal request to leader
```

Disable the OVN DB services on the faulty node and wipe its local databases:

```
ssh cloud-admin@163.162.31.85 sudo systemctl disable --now tripleo_ovn_cluster_south_db_server.service tripleo_ovn_cluster_north_db_server.service
ssh cloud-admin@163.162.31.85 sudo rm -rfv /var/lib/openvswitch/ovn/.ovn* /var/lib/openvswitch/ovn/ovn*.db
```

Check which controller is the bootstrap node:

```
(totp17)[cloud-admin@openstackclient: ~]$ ssh cloud-admin@163.162.31.86 "sudo hiera -c /etc/puppet/hiera.yaml pacemaker_short_bootstrap_node_name"
totp-ctr-0
```

## The controller VM that needs to be replaced is the bootstrap node — https://access.redhat.com/solutions/5662621

Per that solution, repoint the bootstrap roles at a surviving controller:

```
+ pacemaker_short_bootstrap_node_name: totp-ctr-1
+ mysql_short_bootstrap_node_name: totp-ctr-1
+
+ AllNodesExtraMapData:
+   ovn_dbs_bootstrap_node_ip: 192.168.158.13
+   ovn_dbs_short_bootstrap_node_name: totp-ctr-1
```

#### VM deletion

```
(totp17)[aborgarello@dev-bastion-10]:~$ oc get pods; oc get vm; oc get osvms; oc get osips; oc get pvc
NAME                                          READY   STATUS    RESTARTS   AGE
deploy-openstack-default-bll7w                0/1     Error     0          4h22m
fileserver-6f665c578d-v7l4g                   1/1     Running   0          65d
openstack-provision-server-745bbb45fd-qrldl   2/2     Running   0          65d
openstackclient                               1/1     Running   0          3d4h
```
```
osp-director-operator-controller-manager-5d6cfb54ff-pbpcj   2/2   Running   0             94d
postgresql-repository-f7f8cdddc-zzvkr                       1/1   Running   0             94d
repository-8484f7486-8xjtt                                  1/1   Running   2 (94d ago)   94d
virt-launcher-totp-ctr-0-mfhpq                              1/1   Running   0             5h5m
virt-launcher-totp-ctr-1-zkkjb                              1/1   Running   0             65d
virt-launcher-totp-ctr-2-mq2h8                              1/1   Running   0             65d

NAME         AGE     STATUS    READY
totp-ctr-0   5h31m   Running   True
totp-ctr-1   65d     Running   True
totp-ctr-2   65d     Running   True

NAME       CORES   RAM   ROOTDISK   DESIRED   READY   STATUS        REASON
totp-ctr   12      40    100        3         3       Provisioned   All requested VirtualMachines have been provisioned

NAME              DESIRED   RESERVED   NETWORKS   STATUS        REASON
controlplane      1         1          4          Provisioned   All requested IPs have been reserved
openstackclient   1         1          3          Provisioned   All requested IPs have been reserved
totp-ctr          3         3          5          Provisioned   All requested IPs have been reserved
totp-dpdk6        2         2          4          Provisioned   All requested IPs have been reserved
totp-novanohp     2         2          4          Provisioned   All requested IPs have been reserved

NAME                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                                 AGE
fileserver-pvc                Bound    pvc-fcbcea6b-bedc-4965-b5ab-c2112370bac7   2Gi        RWO            ocs-storagecluster-ceph-rbd                  65d
openstack-base-img-9.2        Bound    pvc-05de0193-87eb-43df-b330-9c7afc5e78b5   50Gi       RWX            ocs-storagecluster-ceph-rbd                  65d
openstackclient-cloud-admin   Bound    pvc-23a0704b-5df3-48d1-bfd5-b98b76227a9d   4Gi        RWO            ocs-storagecluster-cephfs                    65d
openstackclient-hosts         Bound    pvc-3bd02991-99f1-4c17-aee0-eeac2850d2f7   956Mi      RWO            ocs-storagecluster-cephfs                    65d
openstackclient-kolla-src     Bound    pvc-ad4ab31b-bd0f-498d-bfda-5abadd5ff83a   956Mi      RWO            ocs-storagecluster-cephfs                    65d
postgresql-repository-pvc     Bound    pvc-ed22e41c-a7c1-463f-bf31-edc9caf935e4   4Gi        RWO            ocs-storagecluster-ceph-rbd                  134d
repository-pvc                Bound    pvc-503df6ae-8dbd-427f-aed8-e6260f069b8f   4Gi        RWO            ocs-storagecluster-ceph-rbd                  134d
totp-ctr-0-896f               Bound    pvc-033ef294-70bc-49c3-b9a6-e1152d7897f6   100Gi      RWX            ocs-storagecluster-ceph-rbd-virtualization   5h31m
totp-ctr-1-896f               Bound    pvc-879f1e6e-1ce0-4838-8a73-ab5af365aaa8   100Gi      RWX            ocs-storagecluster-ceph-rbd-virtualization   65d
totp-ctr-2-896f               Bound    pvc-e0422453-21e5-496e-8357-57277bc1c5ea   100Gi      RWX            ocs-storagecluster-ceph-rbd-virtualization   65d
```

Delete the faulty controller VM:

```
(totp17)[aborgarello@dev-bastion-10]:~$ oc delete vm totp-ctr-0
```

## Controller node (content host) deletion on Satellite

```
(totp17)[aborgarello@dev-bastion-10]:~$ hammer host list --search "name ~ ${SITE}-" | grep ctr
4726 | totp-ctr-0.nfv.cselt.it | RedHat 9.2 | | | | Warning | OCP-OSP17 | ToTP
4728 | totp-ctr-1.nfv.cselt.it | RedHat 9.2 | | | | Warning | OCP-OSP17 | ToTP
4727 | totp-ctr-2.nfv.cselt.it | RedHat 9.2 | | | | Warning | OCP-OSP17 | ToTP
(totp17)[aborgarello@dev-bastion-10]:~$ hammer host delete --id 4726
Host deleted.
(totp17)[aborgarello@dev-bastion-10]:~$ hammer host list --search "name ~ ${SITE}-" | grep ctr
4728 | totp-ctr-1.nfv.cselt.it | RedHat 9.2 | | | | Warning | OCP-OSP17 | ToTP
4727 | totp-ctr-2.nfv.cselt.it | RedHat 9.2 | | | | Warning | OCP-OSP17 | ToTP
```

The operator immediately starts reprovisioning the VM; a clone of the base image is in progress:

```
(totp17)[aborgarello@dev-bastion-10]:~$ oc get pods; oc get vm; oc get osvms; oc get osips; oc get pvc
NAME                                                      READY   STATUS    RESTARTS   AGE
969b1e01-c997-4ddc-91c7-1434fe5f6b6d-source-pod           1/1     Running   0          2m20s
cdi-upload-tmp-pvc-317eaf15-e941-41b1-906a-60d77bca3c21   1/1     Running   0          2m27s
deploy-openstack-default-bll7w                            0/1     Error     0          4h25m
fileserver-6f665c578d-v7l4g                               1/1     Running   0          65d
openstack-provision-server-745bbb45fd-qrldl               2/2     Running   0          65d
openstackclient                                           1/1     Running   0          3d4h
osp-director-operator-controller-manager-5d6cfb54ff-pbpcj 2/2     Running   0          94d
postgresql-repository-f7f8cdddc-zzvkr                     1/1     Running   0          94d
repository-8484f7486-8xjtt                                1/1     Running   2 (94d ago) 94d
virt-launcher-totp-ctr-1-zkkjb                            1/1     Running   0          65d
virt-launcher-totp-ctr-2-mq2h8                            1/1     Running   0          65d

NAME         AGE     STATUS         READY
totp-ctr-0   2m27s   Provisioning   False
totp-ctr-1   65d     Running        True
totp-ctr-2   65d     Running        True

NAME       CORES   RAM   ROOTDISK   DESIRED   READY   STATUS         REASON
totp-ctr   12      40    100        3         2       Provisioning   Provisioning of VirtualMachines in progress
```

`oc get osips` is unchanged; in `oc get pvc`, a clone target PVC has appeared and the new root disk is still Pending:

```
tmp-pvc-317eaf15-e941-41b1-906a-60d77bca3c21   Bound     pvc-969b1e01-c997-4ddc-91c7-1434fe5f6b6d   100Gi   RWX   ocs-storagecluster-ceph-rbd-virtualization   2m27s
totp-ctr-0-896f                                Pending                                                            ocs-storagecluster-ceph-rbd-virtualization   2m27s
```

About half an hour later the VM is back up:

```
oc get pods; oc get vm; oc get osvms; oc get osips; oc get pvc
dev-bastion-10: Fri Jul 18 16:53:06 2025
NAME                                                      READY   STATUS    RESTARTS   AGE
deploy-openstack-default-bll7w                            0/1     Error     0          4h49m
fileserver-6f665c578d-v7l4g                               1/1     Running   0          65d
openstack-provision-server-745bbb45fd-qrldl               2/2     Running   0          65d
openstackclient                                           1/1     Running   0          3d4h
osp-director-operator-controller-manager-5d6cfb54ff-pbpcj 2/2     Running   0          94d
postgresql-repository-f7f8cdddc-zzvkr                     1/1     Running   0          94d
repository-8484f7486-8xjtt                                1/1     Running   2 (94d ago) 94d
virt-launcher-totp-ctr-0-7hpzt                            1/1     Running   0          61s
virt-launcher-totp-ctr-1-zkkjb                            1/1     Running   0          65d
virt-launcher-totp-ctr-2-mq2h8                            1/1     Running   0          65d

NAME         AGE   STATUS    READY
totp-ctr-0   26m   Running   True
totp-ctr-1   65d   Running   True
totp-ctr-2   65d   Running   True

NAME       CORES   RAM   ROOTDISK   DESIRED   READY   STATUS        REASON
totp-ctr   12      40    100        3         3       Provisioned   All requested VirtualMachines have been provisioned
```

`oc get osips` is again unchanged; in `oc get pvc`, the clone PVC is gone and the new root disk is bound to the cloned volume:

```
totp-ctr-0-896f   Bound   pvc-969b1e01-c997-4ddc-91c7-1434fe5f6b6d   100Gi   RWX   ocs-storagecluster-ceph-rbd-virtualization   26m
```

## deploy overcloud

## post-deploy

```
sudo pcs resource refresh galera-bundle
sudo pcs resource manage galera-bundle
```

-> cluster up & running:

```
Cluster name: tripleo_cluster
Status of pacemakerd: 'Pacemaker is running' (last updated 2025-07-18 23:06:47 +02:00)
Cluster Summary:
  * Stack: corosync
  * Current DC: totp-ctr-2 (version 2.1.5-9.el9_2.4-a3f44794f94) - partition with quorum
  * Last updated: Fri Jul 18 23:06:48 2025
  * Last change: Fri Jul 18 23:03:39 2025 by root via cibadmin on totp-ctr-2
  * 11 nodes configured
  * 34 resource instances configured

Node List:
  * Online: [ totp-ctr-0 totp-ctr-1 totp-ctr-2 ]
  * RemoteOnline: [ totp-dpdk6-0 totp-dpdk6-1 ]
  * GuestOnline: [ galera-bundle-0 galera-bundle-1 galera-bundle-2 rabbitmq-bundle-0 rabbitmq-bundle-1 rabbitmq-bundle-2 ]

Full List of Resources:
  * totp-dpdk6-0 (ocf:pacemaker:remote): Started totp-ctr-1
  * totp-dpdk6-1 (ocf:pacemaker:remote): Started totp-ctr-2
  * ip-163.162.31.94 (ocf:heartbeat:IPaddr2): Started totp-ctr-0
  * stonith-fence_kubevirt-020ab000001e (stonith:fence_kubevirt): Started totp-ctr-2
  * stonith-fence_kubevirt-020ab0000012 (stonith:fence_kubevirt): Started totp-ctr-1
  * ip-163.162.31.51 (ocf:heartbeat:IPaddr2): Started totp-ctr-0
  * ip-192.168.158.10 (ocf:heartbeat:IPaddr2): Started totp-ctr-0
  * ip-172.16.4.10 (ocf:heartbeat:IPaddr2): Started totp-ctr-1
  * Container bundle set: haproxy-bundle [cluster.common.tag/haproxy:pcmklatest]:
    * haproxy-bundle-podman-0 (ocf:heartbeat:podman): Started totp-ctr-0
    * haproxy-bundle-podman-1 (ocf:heartbeat:podman): Started totp-ctr-1
    * haproxy-bundle-podman-2 (ocf:heartbeat:podman): Started totp-ctr-2
  * Container bundle set: galera-bundle [cluster.common.tag/mariadb:pcmklatest]:
    * galera-bundle-0 (ocf:heartbeat:galera): Promoted totp-ctr-2
    * galera-bundle-1 (ocf:heartbeat:galera): Promoted totp-ctr-0
    * galera-bundle-2 (ocf:heartbeat:galera): Promoted totp-ctr-1
  * Container bundle set: rabbitmq-bundle [cluster.common.tag/rabbitmq:pcmklatest]:
    * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started totp-ctr-0
    * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started totp-ctr-2
    * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started totp-ctr-1
  * stonith-fence_kubevirt-020ab0000018 (stonith:fence_kubevirt): Started totp-ctr-2
  * stonith-fence_ipmilan-d4f5ef1b5648 (stonith:fence_ipmilan): Started totp-ctr-1
  * stonith-fence_ipmilan-d4f5ef1c2058 (stonith:fence_ipmilan): Started totp-ctr-1
  * Container bundle: openstack-cinder-volume [cluster.common.tag/cinder-volume:pcmklatest]:
    * openstack-cinder-volume-podman-0 (ocf:heartbeat:podman): Started totp-ctr-2
  * stonith-fence_kubevirt-020ab000002a (stonith:fence_kubevirt): Started totp-ctr-2

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
```

## to-do

Clean up after the replacement: https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/installing_and_managing_red_hat_openstack_platform_with_director/assembly_replacing-controller-nodes#proc_cleaning-up-after-controller-node-replacement_replacing-controller-nodes
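The post-deploy `pcs status` above can also be sanity-checked mechanically. A minimal sketch (`assert_no_stopped` is a hypothetical helper name) that fails if any resource instance is still reported Stopped, as haproxy-bundle-podman-1, galera-bundle-1, and rabbitmq-bundle-2 were before the redeploy:

```shell
#!/bin/sh
# assert_no_stopped: read `pcs status` output on stdin; exit non-zero and
# print the offending lines to stderr if any resource is still Stopped.
# Hypothetical post-deploy verification helper.
assert_no_stopped() {
    stopped=$(grep 'Stopped' || true)
    if [ -n "$stopped" ]; then
        printf 'resources still Stopped:\n%s\n' "$stopped" >&2
        return 1
    fi
}
```

Example: `ssh cloud-admin@163.162.31.86 "sudo pcs status" | assert_no_stopped` succeeds on the healthy post-deploy output above and fails on the earlier two-controller state.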