Type: Bug
Resolution: Cannot Reproduce
Priority: Major
Severity: Important
Affects Version: 4.13
Category: Quality / Stability / Reliability
Description of problem:
Many CAPI provider pods are stuck in Init status on management clusters, with logs like the following:
{"level":"error","ts":"2023-12-20T01:25:47Z","msg":"Request failed, retrying...","sleepTime":"1s","error":"Get \"https://kube-apiserver:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)","stacktrace":"github.com/openshift/hypershift/availability-prober.NewStartCommand.func2\n\t/hypershift/availability-prober/availability_prober.go:99\ngithub.com/spf13/cobra.(*Command).execute\n\t/hypershift/vendor/github.com/spf13/cobra/command.go:920\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/hypershift/vendor/github.com/spf13/cobra/command.go:1044\ngithub.com/spf13/cobra.(*Command).Execute\n\t/hypershift/vendor/github.com/spf13/cobra/command.go:968\nmain.main\n\t/hypershift/control-plane-operator/main.go:66\nruntime.main\n\t/usr/lib/golang/src/runtime/proc.go:250"}
The kube-apiserver Service in the HCP namespace exposes port 443, but the health check targets port 6443.
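For illustration only, here is a minimal Go sketch (not the actual availability-prober code) of the kind of readiness check that fails here; the URL and port come from the log above, everything else is assumed:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Bounded timeout, similar in spirit to the prober's retry loop.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Certificate verification skipped purely for illustration.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	// The check targets port 6443; if the kube-apiserver Service only exposes
	// port 443, this request never connects and the client timeout fires,
	// producing the "context deadline exceeded" error seen in the log.
	resp, err := client.Get("https://kube-apiserver:6443/readyz")
	if err != nil {
		fmt.Printf("readiness check failed: %v\n", err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("readiness check status: %s\n", resp.Status)
}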
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info: