Type: Bug
Resolution: Not a Bug
Priority: Critical
I think there is no added value in reporting the status of the server before the server is fully up.
14:05:59,250 WARN [org.wildfly.extension.microprofile.health.smallrye] (management I/O-2) WFLYMPHEALTH0003: Reporting health down status: {"status":"DOWN","checks":[{"name":"boot-errors","status":"DOWN"},{"name":"server-state","status":"DOWN"},{"name":"deployments-status","status":"DOWN"},{"name":"suspend-state","status":"DOWN"},{"name":"empty-readiness-checks","status":"UP"}]}
14:06:00,627 INFO [org.jboss.as.clustering.jgroups] (ServerService Thread Pool -- 82) WFLYCLJG0032: Connecting 'ee' channel. 'hsc-1-tl5jv' joining cluster 'ee' via /10.128.5.109:7600
14:06:00,630 INFO [org.jgroups.JChannel] (ServerService Thread Pool -- 82) local_addr: d74a68be-64ff-40d1-919d-a605abd80960, name: hsc-1-tl5jv
14:06:00,751 INFO [org.jgroups.protocols.FD_SOCK2] (ServerService Thread Pool -- 82) server listening on /10.128.5.109:57600
14:06:02,827 INFO [org.jgroups.protocols.pbcast.GMS] (ServerService Thread Pool -- 82) hsc-1-tl5jv: no members discovered after 2049 ms: creating cluster as coordinator
14:06:02,933 INFO [org.jboss.as.clustering.jgroups] (ServerService Thread Pool -- 82) WFLYCLJG0033: Connected 'ee' channel. 'hsc-1-tl5jv' joined cluster 'ee' with view: [hsc-1-tl5jv|0] (1) [hsc-1-tl5jv]
14:06:05,998 WARN [org.wildfly.extension.microprofile.health.smallrye] (management I/O-2) WFLYMPHEALTH0003: Reporting health down status: {"status":"DOWN","checks":[{"name":"boot-errors","status":"DOWN"},{"name":"server-state","status":"DOWN"},{"name":"deployments-status","status":"DOWN"},{"name":"suspend-state","status":"DOWN"},{"name":"empty-readiness-checks","status":"UP"}]}
14:06:08,640 INFO [org.infinispan.CONTAINER] (ServerService Thread Pool -- 75) ISPN000556: Starting user marshaller 'org.wildfly.clustering.cache.infinispan.marshalling.UserMarshaller'
14:06:08,658 INFO [org.infinispan.CONTAINER] (ServerService Thread Pool -- 81) ISPN000556: Starting user marshaller 'org.wildfly.clustering.cache.infinispan.marshalling.UserMarshaller'
14:06:09,245 WARN [org.wildfly.extension.microprofile.health.smallrye] (management I/O-2) WFLYMPHEALTH0003: Reporting health down status: {"status":"DOWN","checks":[{"name":"boot-errors","status":"DOWN"},{"name":"server-state","status":"DOWN"},{"name":"deployments-status","status":"DOWN"},{"name":"suspend-state","status":"DOWN"},{"name":"empty-readiness-checks","status":"UP"}]}
14:06:09,645 INFO [org.infinispan.CONTAINER] (ServerService Thread Pool -- 75) ISPN000390: Persisted state, version=15.0.14.Final-redhat-00002 timestamp=2025-05-20T14:06:09.634733102Z
14:06:09,645 INFO [org.infinispan.CONTAINER] (ServerService Thread Pool -- 81) ISPN000390: Persisted state, version=15.0.14.Final-redhat-00002 timestamp=2025-05-20T14:06:09.634735005Z
14:06:14,276 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 81) Started hibernate cache container
14:06:14,279 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 75) Started web cache container
14:06:14,707 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 81) WFLYCLINF0002: Started ROOT.war cache from web container
14:06:14,707 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 75) WFLYCLINF0002: Started default-server cache from web container
14:06:18,619 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 81) WFLYUT0021: Registered web context: '/' for server 'default-server'
14:06:18,890 WARN [org.wildfly.extension.metrics] (Controller Boot Thread) WFLYMETRICS0006: Additional metrics systems discovered while configuring WildFly Metrics: OpenTelemetry Metrics. Please see the administration guide for more information.
14:06:19,257 WARN [org.wildfly.extension.microprofile.health.smallrye] (management I/O-2) WFLYMPHEALTH0003: Reporting health down status: {"status":"DOWN","checks":[{"name":"boot-errors","status":"DOWN"},{"name":"server-state","status":"DOWN"},{"name":"deployments-status","status":"DOWN"},{"name":"suspend-state","status":"DOWN"},{"name":"empty-readiness-checks","status":"UP"}]}
14:06:19,983 INFO [org.jboss.as.server] (ServerService Thread Pool -- 41) WFLYSRV0010: Deployed "ROOT.war" (runtime-name : "ROOT.war")
14:06:20,175 INFO [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0212: Resuming server
14:06:20,188 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://0.0.0.0:9990/management
14:06:20,190 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0054: Admin console is not enabled
14:06:20,202 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: JBoss EAP 8.1 (WildFly Core 27.1.0.Final-redhat-00001) started in 35638ms - Started 408 of 609 services (357 services are lazy, passive or on-demand) - Server configuration file in use: standalone.xml
What is happening is that Kubernetes is calling the readiness probe (as you can see in the events in the attachment): Readiness probe failed: Get "http://10.128.5.103:9990/health/ready"
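If the goal is only to stop these expected boot-time DOWN responses from surfacing as probe failures, one option is to gate the readinessProbe behind a startupProbe against the same /health/ready endpoint. This is a minimal sketch, not the actual deployment: the pod name, container name, image, and thresholds below are illustrative; only the port (9990) and the /health/ready path come from the log and events above.

apiVersion: v1
kind: Pod
metadata:
  name: hsc-1-example                          # illustrative, not the reporter's pod spec
spec:
  containers:
    - name: eap
      image: registry.example.com/eap81:latest # placeholder image
      ports:
        - containerPort: 9990                  # management interface, per WFLYSRV0060 above
      # Hold readiness checking until boot finishes; the log shows ~35 s to WFLYSRV0025.
      startupProbe:
        httpGet:
          path: /health/ready
          port: 9990
        periodSeconds: 5
        failureThreshold: 24                   # tolerates up to 120 s of boot
      # Normal readiness checking once the startupProbe has succeeded.
      readinessProbe:
        httpGet:
          path: /health/ready
          port: 9990
        periodSeconds: 10
        failureThreshold: 3

Simply raising initialDelaySeconds and failureThreshold on the readinessProbe alone would cover the same ~35 s boot window; the startupProbe variant avoids delaying readiness on pods that happen to boot faster.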
I can't reproduce this on the base server, so this is different behaviour compared to the base server.