Issue Type: Bug
Resolution: Unresolved
Priority: Critical
Affects Version: rhel-9.6
Fixed in Build: httpd-2.4.62-9.el9
Severity: Critical
Keywords: ZStream
Pool Team: rhel-stacks-web-servers
Labels: _WS-Refined_, Regression Exception
Preliminary Testing: Pass
What were you trying to do that didn't work?
Healthchecks from mod_proxy_hcheck can stop after the original child process running the singleton is released.
What is the impact of this issue to you?
Healthchecks can completely stop until httpd is restarted.
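Until a fixed build is installed, the only recovery described above is a restart of the service, for example:

$ sudo systemctl restart httpd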
Please provide the package NVR for which the bug is seen:
httpd-2.4.62-4.el9.x86_64
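To confirm whether an installed build matches the affected NVR, query rpm, e.g.:

$ rpm -q httpd
httpd-2.4.62-4.el9.x86_64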
How reproducible is this bug?:
Very
Steps to reproduce
- Install httpd and tomcat for an easy local proxy destination:
$ dnf install httpd tomcat
- Configure httpd as below, with some members proxying to the Tomcat 8080 port, and enable trace logs to follow the hcheck activity easily (a log-following sketch is given after this list). The 8080 hchecks get 404s as expected while the others get connection refused. The low MaxConnectionsPerChild makes it easy to trigger a process reclamation for testing:

LogLevel debug proxy_hcheck:trace8 watchdog:trace8
ProxyTimeout 15
<Proxy balancer://mycluster>
    BalancerMember http://127.0.0.1:8080 route=node1 hcmethod=HEAD hcuri=/test hcinterval=1 hcfails=1 hcpasses=1
    BalancerMember http://127.0.0.1:8180 route=node2 hcmethod=HEAD hcuri=/test hcinterval=1 hcfails=1 hcpasses=1
    BalancerMember http://127.0.0.1:8280 route=node3 hcmethod=HEAD hcuri=/test hcinterval=1 hcfails=1 hcpasses=1
    BalancerMember http://127.0.0.1:8380 route=node4 hcmethod=HEAD hcuri=/test hcinterval=1 hcfails=1 hcpasses=1
    BalancerMember http://127.0.0.1:8480 route=node5 hcmethod=HEAD hcuri=/test hcinterval=1 hcfails=1 hcpasses=1
    BalancerMember http://127.0.0.1:8580 route=node6 hcmethod=HEAD hcuri=/test hcinterval=1 hcfails=1 hcpasses=1
    BalancerMember http://127.0.0.1:8680 route=node7 hcmethod=HEAD hcuri=/test hcinterval=1 hcfails=1 hcpasses=1
    BalancerMember http://127.0.0.1:8780 route=node8 hcmethod=HEAD hcuri=/test hcinterval=1 hcfails=1 hcpasses=1
    BalancerMember http://127.0.0.1:8080 route=node8 hcmethod=HEAD hcuri=/test hcinterval=1 hcfails=1 hcpasses=1
    ProxySet timeout=5
    ProxySet lbmethod=byrequests
    ProxySet nofailover=Off
    ProxySet stickysession=JSESSIONID|jsessionid
</Proxy>
ProxyPass /test balancer://mycluster/test
ThreadLimit 25
ServerLimit 2
StartServers 2
MinSpareThreads 50
MaxSpareThreads 50
AsyncRequestWorkerFactor 2
ThreadsPerChild 25
MaxRequestWorkers 50
MaxConnectionsPerChild 1
- Start Tomcat and httpd, then after a moment place Tomcat in a paused state so that the 8080 hchecks run long during the next steps (a sketch for verifying the paused state follows this list):

$ sudo systemctl start httpd
$ sudo systemctl start tomcat
$ sudo killall -STOP java
- Follow the httpd log to see when 8080 hchecks are starting and the process they are on:

[Thu Jul 24 15:56:42.644534 2025] [proxy_hcheck:trace3] [pid 10177:tid 10178] mod_proxy_hcheck.c(1049): Checking balancer://mycluster worker: http://127.0.0.1:8080 [3] (5576b0b63938)
[Thu Jul 24 15:56:42.644670 2025] [proxy_hcheck:debug] [pid 10177:tid 10313] mod_proxy_hcheck.c(930): AH03256: Threaded Health checking http://127.0.0.1:8080
[Thu Jul 24 15:56:42.644702 2025] [proxy:debug] [pid 10177:tid 10313] proxy_util.c(2797): AH00942: HCOH: has acquired connection for (127.0.0.1:8080)
- After such 8080 hchecks start, send httpd a simple request that will trigger the MaxConnectionsPerChild recycling. Confirm from the logs that the child running the hcheck singleton was the one serving this request and being recycled (a sketch for confirming the recycling follows this list):

$ curl -v localhost/foo
- Optionally resume the Tomcat process and keep following the httpd logs: hchecks for the other members continue, but the 8080 members no longer repeat their healthchecks:

$ sudo killall -CONT java
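The log-following sketch referenced in the configuration step above; the error log path is an assumption based on the default RHEL location:

$ sudo tail -f /var/log/httpd/error_log | grep -E 'proxy_hcheck|watchdog'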
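A minimal sketch for verifying the Tomcat pause from the start step: a paused JVM shows T (stopped) in the STAT column.

$ ps -o pid,stat,comm -C java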
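A sketch for confirming the recycle and the stalled 8080 checks; the pid comes from the hcheck log lines (10177 in the excerpt above, shown here only as an illustration), and the log path again assumes the RHEL default:

# The pid that was running the hcheck singleton should be gone after the curl:
$ pgrep -a httpd

# The 8080 member checks stop repeating while the other members continue:
$ sudo grep 'worker: http://127.0.0.1:8080' /var/log/httpd/error_log | tail -5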
Expected results
Healthchecks always continue after the release of the child process running the singleton task.
Actual results
Healthchecks can stop across the release of child processes running the singleton.
Links to:
- RHBA-2025:156677 (httpd update)