- Bug
- Resolution: Done
- Undefined
- None
- None
- 2
- False
- None
- False
- GRC Sprint 2024-07
- Moderate
- No
When the Gatekeeper installation status changes (i.e., from not installed to installed, or the reverse), the governance-policy-framework pod becomes unhealthy and gets restarted by Kubernetes so that the gatekeeper-constraint-status-sync controller can be started on startup.
This makes Gatekeeper-related tests slower, less reliable, and hard to debug, since the pod logs are lost on restart. Instead, we should have a goroutine that monitors the Gatekeeper installation state and starts/stops a controller-runtime manager dedicated solely to the gatekeeper-constraint-status-sync controller; see the sketch below.
Note that the health endpoint proxy in the "startHealthProxy" function needs to account for this new, dynamically created endpoint.
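A minimal sketch of the proposed approach, assuming a polling-based check. The helper names watchGatekeeper, gatekeeperInstalled, and setupConstraintStatusSync, the 30-second poll interval, the ":8085" health probe address, and the use of the templates.gatekeeper.sh API group as the installation signal are all illustrative, not the actual implementation:

```go
package gatekeepersync

import (
	"context"
	"time"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

// gatekeeperInstalled reports whether a Gatekeeper API group is currently served.
// Using templates.gatekeeper.sh as the signal is an assumption for this sketch.
func gatekeeperInstalled(dc discovery.DiscoveryInterface) bool {
	groups, err := dc.ServerGroups()
	if err != nil {
		return false
	}

	for _, group := range groups.Groups {
		if group.Name == "templates.gatekeeper.sh" {
			return true
		}
	}

	return false
}

// watchGatekeeper polls the API server and starts a dedicated controller-runtime
// manager when Gatekeeper is installed, cancelling it when Gatekeeper is removed,
// so the pod itself never needs to restart.
func watchGatekeeper(ctx context.Context, cfg *rest.Config) {
	dc := discovery.NewDiscoveryClientForConfigOrDie(cfg)

	var stopMgr context.CancelFunc

	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			if stopMgr != nil {
				stopMgr()
			}

			return
		case <-ticker.C:
		}

		installed := gatekeeperInstalled(dc)

		switch {
		case installed && stopMgr == nil:
			// Gatekeeper just appeared: start a manager dedicated to the
			// gatekeeper-constraint-status-sync controller.
			mgrCtx, cancel := context.WithCancel(ctx)

			mgr, err := ctrl.NewManager(cfg, manager.Options{
				// A separate probe address so the proxy in startHealthProxy
				// can forward to this dynamically created endpoint.
				HealthProbeBindAddress: ":8085",
			})
			if err != nil {
				cancel()

				continue
			}

			// Hypothetical helper that would register the constraint status
			// sync controller with this manager.
			// setupConstraintStatusSync(mgr)

			stopMgr = cancel

			go func() {
				// Start blocks until mgrCtx is cancelled.
				_ = mgr.Start(mgrCtx)
			}()
		case !installed && stopMgr != nil:
			// Gatekeeper was uninstalled: stop the dedicated manager.
			stopMgr()
			stopMgr = nil
		}
	}
}
```

Cancelling the dedicated manager's context is the supported way to stop a controller-runtime manager, so no pod restart is needed when Gatekeeper is uninstalled, and the main manager's logs are preserved.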
- links to: RHBA-2024:130865 Red Hat Advanced Cluster Management 2.10.3 bug fixes and container updates