Bug
Resolution: Unresolved
rhel-10.0
rhel-ha
HA-infra Sprint #1: Oct 6 2025
x86_64
What were you trying to do that didn't work?
I was trying to set up and start an ocf:heartbeat:nginx resource in a two-node cluster.
What is the impact of this issue to you?
Our tests fail because of this issue.
Please provide the package NVR for which the bug is seen:
resource-agents-4.16.0-22.el10.x86_64
pacemaker-3.0.1-1.el10.x86_64
How reproducible is this bug?:
Easily; it fails every time.
Steps to reproduce
Create and start resource ocf:heartbeat:nginx in a two node cluster
[root@hvirt-312 ~]# pcs resource create webserver ocf:heartbeat:nginx configfile=/etc/nginx/nginx.conf status10url=http://localhost:8080/status httpd=/usr/sbin/nginx op monitor interval=30s
Warning: Validating resource options using the resource agent itself is enabled by default and produces warnings. In a future version, this might be changed to errors. Specify --agent-validation to switch to the future behavior.
Warning: Validation result from agent:
  Aug 21 08:04:52 INFO: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
  nginx: configuration file /etc/nginx/nginx.conf test is successful
Expected results
Resource is started.
Actual results
The resource fails on one node and then starts on the other node:
[root@hvirt-312 ~]# crm_mon -r -f -m -1
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: hvirt-312 (version 3.0.1-1.el10-89c3d7d) - partition with quorum
  * Last updated: Thu Aug 21 08:05:23 2025 on hvirt-312
  * Last change: Thu Aug 21 08:04:52 2025 by root via root on hvirt-312
  * 2 nodes configured
  * 3 resource instances configured

Node List:
  * Online: [ hvirt-312 hvirt-313 ]

Full List of Resources:
  * fence-hvirt-312 (stonith:fence_virt): Started hvirt-312
  * fence-hvirt-313 (stonith:fence_virt): Started hvirt-313
  * webserver (ocf:heartbeat:nginx): Started hvirt-313

Migration Summary:
  * Node: hvirt-312:
    * webserver: migration-threshold=1000000 fail-count=1000000 last-failure='Thu Aug 21 08:04:52 2025'

Failed Resource Actions:
  * webserver start on hvirt-312 returned 'Not installed' at Thu Aug 21 08:04:52 2025 after 104ms
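Note that the fail-count of 1000000 will keep Pacemaker from retrying the resource on hvirt-312 even after the underlying problem is fixed. A typical way to clear it (assuming the resource name webserver shown above; requires a running cluster):

```shell
# Clear the recorded failure history for the webserver resource,
# resetting fail-count and removing the Failed Resource Actions entry
# so Pacemaker is again allowed to start it on hvirt-312.
pcs resource cleanup webserver
```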
pacemaker log:
[root@hvirt-312 ~]# grep "nginx(webserver)" /var/log/pacemaker/pacemaker.log
Aug 20 08:17:39 nginx(webserver)[333257]: INFO: nginx not running
Aug 20 08:17:39 nginx(webserver)[333305]: ERROR: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: [emerg] open() "/run/nginx.pid" failed (13: Permission denied) nginx: configuration file /etc/nginx/nginx.conf test failed
Aug 20 08:17:39 nginx(webserver)[333346]: INFO: nginx is not running.
Aug 21 08:04:52 nginx(webserver)[384330]: INFO: nginx not running
Aug 21 08:04:52 nginx(webserver)[384378]: ERROR: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: [emerg] open() "/run/nginx.pid" failed (13: Permission denied) nginx: configuration file /etc/nginx/nginx.conf test failed
Aug 21 08:04:53 nginx(webserver)[384419]: INFO: nginx is not running.
avc log:
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      33
selinux-policy-42.1.5-1.el10.noarch
----
time->Thu Aug 21 08:53:46 2025
type=PROCTITLE msg=audit(1755780826.516:835): proctitle=2F7573722F7362696E2F6E67696E78002D74002D63002F6574632F6E67696E782F6E67696E782E636F6E66
type=SYSCALL msg=audit(1755780826.516:835): arch=c000003e syscall=257 success=no exit=-13 a0=ffffff9c a1=55e842da6e6e a2=42 a3=1a4 items=0 ppid=16121 pid=16122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="nginx" exe="/usr/sbin/nginx" subj=system_u:system_r:httpd_t:s0 key=(null)
type=AVC msg=audit(1755780826.516:835): avc: denied { read write } for pid=16122 comm="nginx" name="nginx.pid" dev="tmpfs" ino=2376 scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file permissive=0
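The AVC record is the key piece of evidence: nginx runs in the httpd_t domain but the PID file is labeled with the generic var_run_t type, which httpd_t may not read or write. A small sketch that pulls the two contexts out of the record above to make the mismatch explicit (on a live system the record would typically come from ausearch -m avc instead of a pasted string):

```shell
# The AVC line captured in the report, stored verbatim for parsing.
avc='type=AVC msg=audit(1755780826.516:835): avc: denied { read write } for pid=16122 comm="nginx" name="nginx.pid" dev="tmpfs" ino=2376 scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file permissive=0'

# Extract the source (process) and target (file) security contexts.
echo "$avc" | grep -o 'scontext=[^ ]*'
echo "$avc" | grep -o 'tcontext=[^ ]*'
```

The source context ends in httpd_t:s0 while the target ends in var_run_t:s0, which is exactly the domain/type pair the targeted policy denies here.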
SELinux context of the PID file on each node:
[root@hvirt-312 ~]# ls -lZ /run/nginx.pid
-rw-r-----. 1 root root system_u:object_r:var_run_t:s0 0 Aug 20 08:17 /run/nginx.pid
[root@hvirt-313 ~]# ls -lZ /run/nginx.pid
-rw-r-----. 1 root root system_u:object_r:httpd_var_run_t:s0 7 Aug 21 08:04 /run/nginx.pid
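The listings show the mismatch directly: on hvirt-313 the PID file carries the expected httpd_var_run_t type, while on hvirt-312 a stale empty file is left with the generic var_run_t type, so nginx (httpd_t) is denied access there. A possible way to confirm what the policy expects and to relabel the stray file on hvirt-312 (a diagnostic sketch, assuming the targeted policy from the report; it does not address why the wrongly labeled file was created in the first place):

```shell
# Print the label the loaded policy expects for this path
# (should report httpd_var_run_t under the targeted policy).
matchpathcon /run/nginx.pid

# Relabel the file to match the policy's expectation, printing the change.
restorecon -v /run/nginx.pid
```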