Bug
Resolution: Won't Do
Major
None
4.15.z, 4.16
Moderate
No
Rejected
False
Release Note Not Required
In Progress
Description of problem:
After configuring external OIDC, the cluster shows the "v1.oauth.openshift.io" and "v1.user.openshift.io" apiservices as "False (FailedDiscoveryCheck)", which can lead to user confusion.
Version-Release number of selected component (if applicable):
4.16.0-0.nightly-2024-02-26-155043
How reproducible:
Always
Steps to Reproduce:
1. Install a fresh HCP env and configure external OIDC per steps 1 ~ 4 of https://issues.redhat.com/browse/OCPBUGS-29154 (referenced as-is to avoid repeating those steps here).
2. Verify the "two issues" described in https://issues.redhat.com/browse/OCPBUGS-29154 are indeed gone now.
3. Check `oc get apiservices`; the oauth/user apiservices become False:

$ oc get apiservices | grep False
v1.oauth.openshift.io   default/openshift-oauth-apiserver   False (FailedDiscoveryCheck)   171m
v1.user.openshift.io    default/openshift-oauth-apiserver   False (FailedDiscoveryCheck)   171m

$ oc get apiservices v1.oauth.openshift.io -o yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  creationTimestamp: "2024-02-28T06:50:12Z"
...
status:
  conditions:
  - lastTransitionTime: "2024-02-28T09:20:52Z"
    message: 'failing or missing response from https://172.30.18.228:443/apis/oauth.openshift.io/v1:
      Get "https://172.30.18.228:443/apis/oauth.openshift.io/v1": context deadline
      exceeded (Client.Timeout exceeded while awaiting headers)'
    reason: FailedDiscoveryCheck
    status: "False"
    type: Available

$ oc get apiservices v1.user.openshift.io -o yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  creationTimestamp: "2024-02-28T06:50:12Z"
...
status:
  conditions:
  - lastTransitionTime: "2024-02-28T09:20:52Z"
    message: 'failing or missing response from https://172.30.18.228:443/apis/user.openshift.io/v1:
      Get "https://172.30.18.228:443/apis/user.openshift.io/v1": context deadline
      exceeded (Client.Timeout exceeded while awaiting headers)'
    reason: FailedDiscoveryCheck
    status: "False"
    type: Available

4. The backend service and pods are reachable from kube-apiserver, though:

$ oc get po -n clusters-$HC_NAME --kubeconfig $MGMT_KUBECONFIG -o wide | grep oauth-apiserver
openshift-oauth-apiserver-654477d69c-22q88   2/2   Running   0   4h36m   10.129.2.20   ...
openshift-oauth-apiserver-654477d69c-26r77   2/2   Running   0   4h36m   10.128.2.20   ...
openshift-oauth-apiserver-654477d69c-m7llg   2/2   Running   0   4h36m   10.131.0.41   ...

$ oc exec -it -n clusters-$HC_NAME --kubeconfig $MGMT_KUBECONFIG -c kube-apiserver kube-apiserver-596dcb97f-n5nqn -- bash
bash-5.1$ curl -k -I https://172.30.18.228:443/apis/user.openshift.io/v1
HTTP/2 403
...
bash-5.1$ curl -k -I https://10.129.2.20:8443/healthz
HTTP/2 200
...
bash-5.1$ curl -k -I https://10.128.2.20:8443/healthz
HTTP/2 200
...
bash-5.1$ curl -k -I https://10.131.0.41:8443/healthz
HTTP/2 200
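The step 3 check can be automated: the third column of `oc get apiservices` tabular output is the AVAILABLE status, so a simple awk filter flags unavailable apiservices. A minimal sketch; the sample output below is embedded from this report (plus one hypothetical healthy `v1.apps` row for contrast), while on a live cluster you would pipe `oc get apiservices` into the same filter:

```shell
# Sample `oc get apiservices` output, embedded for illustration only;
# on a live cluster, replace this with the real command's output.
sample_output=$(cat <<'EOF'
NAME                    SERVICE                             AVAILABLE                      AGE
v1.apps                 Local                               True                           171m
v1.oauth.openshift.io   default/openshift-oauth-apiserver   False (FailedDiscoveryCheck)   171m
v1.user.openshift.io    default/openshift-oauth-apiserver   False (FailedDiscoveryCheck)   171m
EOF
)
# Column 3 is the AVAILABLE status; print only the rows where it is False.
printf '%s\n' "$sample_output" | awk '$3 == "False"'
```

On a live cluster: `oc get apiservices | awk '$3 == "False"'`.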
Actual results:
Step 3: the above apiservices show "False (FailedDiscoveryCheck)".
Expected results:
We know this may be expected, given the cluster is already configured with external OIDC. However, displaying "False (FailedDiscoveryCheck)" can lead to user confusion. We'd better handle this in a way that avoids confusing users.
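Whatever the eventual fix, surfacing the condition's reason directly (rather than the bare "False" column) would at least make the state less confusing. A minimal sketch that pulls the Available condition's reason out of an APIService manifest with awk; the sample here is trimmed from the `oc get apiservices ... -o yaml` output above, and on a live cluster the manifest would come from that command:

```shell
# APIService status fragment, trimmed from the report's `-o yaml` output;
# on a live cluster: oc get apiservices v1.oauth.openshift.io -o yaml
manifest=$(cat <<'EOF'
status:
  conditions:
  - lastTransitionTime: "2024-02-28T09:20:52Z"
    reason: FailedDiscoveryCheck
    status: "False"
    type: Available
EOF
)
# Print the value of the `reason:` field from the condition.
printf '%s\n' "$manifest" | awk '$1 == "reason:" { print $2 }'
```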
Additional info:
is related to:
- OCPSTRAT-933 Hypershift guest cluster can use external OIDC token issuer (Closed)
relates to:
- OCPBUGS-35335 Failed to pull/push blob from/to image registry on external OIDC cluster (Closed)
- OCPBUGS-30424 Unable to delete namespaces in a Hypershift hosted cluster using external OIDC (Closed)