Bug
Resolution: Unresolved
rhel-10.0
rhel-ha-pacemaker
What were you trying to do that didn't work?
I was trying to create 2 clones whose instances run on 4 of the 5 nodes in the cluster.
What is the impact of this issue to you?
Please provide the package NVR for which the bug is seen:
pacemaker-3.0.0-5.el10.x86_64
How reproducible is this bug?:
Always, easily.
Steps to reproduce
- Set up a cluster with 5 nodes.
- Create 2 clones with clone-max=2, so each clone runs instances on 2 of the 5 nodes (4 nodes in total).
- Check pcs status
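The steps above can be sketched as a shell session (a sketch only: it assumes an already-running 5-node cluster with fencing configured and pcs installed; the commands mirror the ones shown later in this report):

```shell
# Sketch: run on any cluster node of a live 5-node cluster.
# Create two Stateful resources, then clone each with clone-max=2,
# so each clone set should place 2 instances across the 5 nodes.
pcs resource create resource-1 ocf:pacemaker:Stateful
pcs resource create resource-2 ocf:pacemaker:Stateful
pcs resource clone resource-1 meta clone-max=2
pcs resource clone resource-2 meta clone-max=2

# Inspect placement; the bug manifests as an extra "ORPHANED Stopped"
# instance in one of the clone sets.
pcs status --full
```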
Expected results
I expect to see 2 clones, each with 2 running instances.
Actual results
Set up a cluster with 5 nodes:
[root@hvirt-274 ~]# pcs status
Cluster name: STSRHTS2632
Cluster Summary:
* Stack: corosync (Pacemaker is running)
* Current DC: hvirt-356 (version 3.0.0-5.el10-5b53b7e) - partition with quorum
* Last updated: Thu Feb 19 07:40:09 2026 on hvirt-274
* Last change: Thu Feb 19 07:05:26 2026 by root via root on hvirt-351
* 5 nodes configured
* 5 resource instances configured
Node List:
* Online: [ hvirt-274 hvirt-283 hvirt-339 hvirt-351 hvirt-356 ]
Full List of Resources:
* fence-hvirt-351 (stonith:fence_virt): Started hvirt-274
* fence-hvirt-339 (stonith:fence_virt): Started hvirt-283
* fence-hvirt-283 (stonith:fence_virt): Started hvirt-339
* fence-hvirt-356 (stonith:fence_virt): Started hvirt-351
* fence-hvirt-274 (stonith:fence_virt): Started hvirt-356
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
Create 2 clones with clone-max=2 on the 5-node cluster:
[root@hvirt-274 ~]# pcs resource create resource-1 ocf:pacemaker:Stateful
[root@hvirt-274 ~]# pcs resource create resource-2 ocf:pacemaker:Stateful
[root@hvirt-274 ~]# pcs resource clone resource-1 meta clone-max=2
[root@hvirt-274 ~]# pcs resource clone resource-2 meta clone-max=2
Check pcs status:
[root@hvirt-274 ~]# pcs status --full
Cluster name: STSRHTS2632
Cluster Summary:
* Stack: corosync (Pacemaker is running)
* Current DC: hvirt-356 (4) (version 3.0.0-5.el10-5b53b7e) - partition with quorum
* Last updated: Thu Feb 19 07:41:09 2026 on hvirt-274
* Last change: Thu Feb 19 07:40:58 2026 by root via root on hvirt-274
* 5 nodes configured
* 9 resource instances configured
Node List:
* Node hvirt-274 (5): online, feature set 3.20.0
* Node hvirt-283 (3): online, feature set 3.20.0
* Node hvirt-339 (2): online, feature set 3.20.0
* Node hvirt-351 (1): online, feature set 3.20.0
* Node hvirt-356 (4): online, feature set 3.20.0
Full List of Resources:
* fence-hvirt-351 (stonith:fence_virt): Started hvirt-274
* fence-hvirt-339 (stonith:fence_virt): Started hvirt-283
* fence-hvirt-283 (stonith:fence_virt): Started hvirt-339
* fence-hvirt-356 (stonith:fence_virt): Started hvirt-351
* fence-hvirt-274 (stonith:fence_virt): Started hvirt-356
* Clone Set: resource-1-clone [resource-1]:
* resource-1 (ocf:pacemaker:Stateful): Started hvirt-274
* resource-1 (ocf:pacemaker:Stateful): Started hvirt-339
* Clone Set: resource-2-clone [resource-2]:
* resource-2 (ocf:pacemaker:Stateful): Started hvirt-283
* resource-2 (ocf:pacemaker:Stateful): Started hvirt-351
* resource-2 (ocf:pacemaker:Stateful): ORPHANED Stopped
Node Attributes:
* Node: hvirt-274 (5):
* master-resource-1 : 5
* Node: hvirt-283 (3):
* master-resource-2 : 5
* Node: hvirt-339 (2):
* master-resource-1 : 5
* Node: hvirt-351 (1):
* master-resource-2 : 5
Migration Summary:
Tickets:
PCSD Status:
hvirt-274: Online
hvirt-283: Online
hvirt-339: Online
hvirt-351: Online
hvirt-356: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
There is an unexpected third instance of resource-2, reported as "ORPHANED Stopped", even though clone-max=2.
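For checking this in bulk, a minimal sketch of a text filter over the pcs status output above (an assumption of mine for illustration, not a pcs API: it just counts instance lines per clone set and flags ORPHANED ones):

```python
import re

def clone_instances(status_text):
    """Return {clone_set: (instance_count, orphan_count)} parsed from
    the 'Full List of Resources' portion of pcs status text output."""
    counts = {}
    current = None
    for line in status_text.splitlines():
        m = re.match(r"\s*\* Clone Set: (\S+)", line)
        if m:
            # Start of a clone set, e.g. "* Clone Set: resource-1-clone [resource-1]:"
            current = m.group(1)
            counts[current] = [0, 0]
            continue
        if current and re.match(r"\s*\* \S+ \(ocf:", line):
            # An instance line under the current clone set
            counts[current][0] += 1
            if "ORPHANED" in line:
                counts[current][1] += 1
        elif not line.lstrip().startswith("*"):
            current = None  # left the resources section
    return {k: tuple(v) for k, v in counts.items()}

# Excerpt from the report above: resource-2-clone shows 3 instances, 1 orphaned.
status = """\
* Clone Set: resource-1-clone [resource-1]:
  * resource-1 (ocf:pacemaker:Stateful): Started hvirt-274
  * resource-1 (ocf:pacemaker:Stateful): Started hvirt-339
* Clone Set: resource-2-clone [resource-2]:
  * resource-2 (ocf:pacemaker:Stateful): Started hvirt-283
  * resource-2 (ocf:pacemaker:Stateful): Started hvirt-351
  * resource-2 (ocf:pacemaker:Stateful): ORPHANED Stopped
"""
print(clone_instances(status))
# → {'resource-1-clone': (2, 0), 'resource-2-clone': (3, 1)}
```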