Type: Enhancement
Resolution: Duplicate
Priority: Major
Affects Version: 20.0.1.Final
Fix Version: None
Component: Undefined
I have a general conceptual question about the MicroProfile Health API.
In my application I implemented a detailed health status, which results in the following output with HTTP code 200:
{ "status": "UP", "checks": [ { "name": "imixs-workflow", "status": "UP", "data": { "engine.version": "5.2.9-SNAPSHOT", "model.groups": 1, "model.versions": 1, "index.status": "ok", "database.status": "ok" } }, { "name": "ready-deployment.imixs-office-workflow.war", "status": "UP" } ] }
I am running this on WildFly 20.0.1.Final.
My Kubernetes health check looks like this:
spec:
  containers:
    ...
    livenessProbe:
      httpGet:
        path: /health
        port: 9990
      initialDelaySeconds: 120
      periodSeconds: 10
      failureThreshold: 3
    ...
So I am using the default Kubernetes behaviour here, which treats HTTP response code 200 as OK.
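As a side note, WildFly also exposes the /health/live and /health/ready sub-resources on the same management port, so a readinessProbe could be configured in the same way (a sketch, mirroring the settings above):

spec:
  containers:
    ...
    readinessProbe:
      httpGet:
        path: /health/ready
        port: 9990
      initialDelaySeconds: 120
      periodSeconds: 10
      failureThreshold: 3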
But I run into situations where something goes wrong and my application does not start correctly and is not deployed at all. In that case the server reports a health status like this:
{ "status": "UP", }
So again the HTTP response code is 200 OK, and Kubernetes thinks everything is fine, which is not the case.
Is there any chance in Eclipse MicroProfile Health or in WildFly to force the status DOWN in case a specific deployment is missing?
For example, WildFly could return HTTP status 204 or 206 if a deployment failed.
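From inside a successfully deployed application I can of course force the status DOWN with a custom check (a single DOWN check turns the aggregated status to DOWN, and WildFly then answers with HTTP 503 instead of 200), roughly like the sketch below with a hypothetical pingDatabase() helper. But that does not help when the whole deployment is missing, because then no application checks are registered at all:

import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Readiness;

@Readiness
@ApplicationScoped
public class DatabaseReadyCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // state(true) reports UP, state(false) reports DOWN for this check
        boolean available = pingDatabase();
        return HealthCheckResponse.named("database-ready")
                .state(available)
                .build();
    }

    private boolean pingDatabase() {
        // placeholder for a real connectivity test
        return true;
    }
}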
I also asked the same question in the microprofile-health project on GitHub:
https://github.com/eclipse/microprofile-health/issues/288
Martin Stefanko suggested asking it here.
- duplicates: WFLY-12342 Integrate server probes in MP Health readiness check (Closed)