Bug
Resolution: Unresolved
Priority: Major
Affects Version: 4.16.0
Proposed
Description of problem:
The inactivityTimeoutSeconds parameter is intermittently not set during oauthaccesstoken creation, which leaves the parameter missing from the created token.
Version-Release number of selected component (if applicable):
4.16.0-0.nightly-2024-04-22-023835
How reproducible:
Intermittent
Steps to Reproduce:
1. Update the OAuth cluster configuration to set accessTokenInactivityTimeout: 300s and wait for the cluster to finish rolling out the change.
2. Log in as a normal user and get a user token.
3. Check whether the token includes inactivityTimeoutSeconds: 300. The field is missing intermittently, in 1 or 2 out of 10 token creations.
Further commands and logs are in the Additional Info section.
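The check in step 3 can be scripted; a minimal sketch, assuming the token is fetched with `-o json` instead of `-o yaml` (the `token_has_timeout` helper and the sample documents are hypothetical, modeled on the failing token in the logs below):

```python
import json

def token_has_timeout(token_json: str, expected: int = 300) -> bool:
    """Return True if an OAuthAccessToken JSON document carries the
    expected inactivityTimeoutSeconds value (hypothetical helper)."""
    return json.loads(token_json).get("inactivityTimeoutSeconds") == expected

# Stand-in for a token hit by this bug: the field is absent entirely.
buggy = '{"kind": "OAuthAccessToken", "expiresIn": 86400}'
good = '{"kind": "OAuthAccessToken", "expiresIn": 86400, "inactivityTimeoutSeconds": 300}'
print(token_has_timeout(buggy), token_has_timeout(good))  # False True
```

Running this in a reproduction loop (e.g. ten logins) would surface the intermittent failures without the cucumber framework.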
Actual results:
The inactivityTimeoutSeconds: 300 parameter is missing from the created user token after the timeout is enabled.
Expected results:
The user token should include the inactivityTimeoutSeconds: 300 parameter.
Additional Info:
An "APIServer panic'd: net/http: abort Handler" error was observed in the audit failure logs at the same timestamps.
Audit logs: attached
Must-gather logs: attached
CI failure logs: attached
Token creation audit log; the created token does not have inactivityTimeoutSeconds:
# grep -hir 'sha256~YW0..<snip>..AnZo3jSbM' . | jq
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "RequestResponse",
  "auditID": "99b8a965-f9d1-474c-a55d-d383d7229131",
  "stage": "ResponseComplete",
  "requestURI": "/apis/oauth.openshift.io/v1/oauthaccesstokens",
  "verb": "create",
  "user": {
    "username": "system:serviceaccount:openshift-authentication:oauth-openshift",
    "groups": ["system:serviceaccounts", "system:serviceaccounts:openshift-authentication", "system:authenticated"],
    "extra": {
      "authentication.kubernetes.io/node-name": ["ip-10-0-63-74.ec2.internal"],
      "authentication.kubernetes.io/node-uid": ["b30ae22f-a47d-4e1c-90e5-e6a752e9152a"],
      "authentication.kubernetes.io/pod-name": ["oauth-openshift-5dfdb8498b-2mwnp"],
      "authentication.kubernetes.io/pod-uid": ["7ec7803a-7172-4fff-9a47-f634b9d638ab"]
    }
  },
  "sourceIPs": ["10.0.63.74", "10.130.0.2"],
  "userAgent": "oauth-server/v0.0.0 (linux/amd64) kubernetes/$Format",
  "objectRef": {
    "resource": "oauthaccesstokens",
    "name": "sha256~YW0..<snip>..AnZo3jSbM",
    "apiGroup": "oauth.openshift.io",
    "apiVersion": "v1"
  },
  "responseStatus": { "metadata": {}, "code": 201 },
  "requestObject": {
    "kind": "OAuthAccessToken",
    "apiVersion": "oauth.openshift.io/v1",
    "metadata": { "name": "sha256~YW0..<snip>..AnZo3jSbM", "creationTimestamp": null },
    "clientName": "openshift-challenging-client",
    "expiresIn": 86400,
    "scopes": ["user:full"],
    "redirectURI": "https://oauth-openshift.apps.ci-op-2r89prs7-68d86.qe.devcluster.openshift.com/oauth/token/implicit",
    "userName": "testuser-1",
    "userUID": "496de345-40c3-4f22-afcb-223c8ea9cfe0",
    "authorizeToken": "sha256~d7LPmIBv-BhG6D76PDTYDY3LF76pFcJUzv29emH3F3A"
  },
  "responseObject": {
    "kind": "OAuthAccessToken",
    "apiVersion": "oauth.openshift.io/v1",
    "metadata": { "name": "sha256~YW0..<snip>..AnZo3jSbM", "uid": "484cc097-0434-4d4f-a356-3c2bb0902116", "resourceVersion": "739908", "creationTimestamp": "2024-04-08T16:33:16Z" },
    "clientName": "openshift-challenging-client",
    "expiresIn": 86400,
    "scopes": ["user:full"],
    "redirectURI": "https://oauth-openshift.apps.ci-op-2r89prs7-68d86.qe.devcluster.openshift.com/oauth/token/implicit",
    "userName": "testuser-1",
    "userUID": "496de345-40c3-4f22-afcb-223c8ea9cfe0",
    "authorizeToken": "sha256~d7LPmIBv-BhG6D76PDTYDY3LF76pFcJUzv29emH3F3A"
  },
  "requestReceivedTimestamp": "2024-04-08T16:33:16.973025Z",
  "stageTimestamp": "2024-04-08T16:33:16.985994Z",
  "annotations": {
    "authorization.k8s.io/decision": "allow",
    "authorization.k8s.io/reason": "RBAC: allowed by ClusterRoleBinding \"system:openshift:openshift-authentication\" of ClusterRole \"cluster-admin\" to ServiceAccount \"oauth-openshift/openshift-authentication\""
  }
}
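The audit entries are keyed by the hashed object name rather than the raw token. Assuming OpenShift's token naming scheme (base64url-encoded SHA-256 of the token's secret part, unpadded, re-prefixed with sha256~ — treat this as an assumption and verify against your cluster), the name to grep for can be recomputed from a raw token:

```python
import base64
import hashlib

def token_object_name(raw_token: str) -> str:
    # Assumed scheme: hash the secret part (after "sha256~") with SHA-256,
    # base64url-encode the digest without padding, and re-apply the prefix.
    secret = raw_token.removeprefix("sha256~")
    digest = hashlib.sha256(secret.encode()).digest()
    return "sha256~" + base64.urlsafe_b64encode(digest).decode().rstrip("=")

# A 32-byte digest always encodes to 43 unpadded characters,
# so every derived name is 7 + 43 = 50 characters long.
name = token_object_name("sha256~example-secret")
print(name.startswith("sha256~"), len(name))  # True 50
```

This is how the `oc whoami -t` output maps to the `sha256~YW0..` name seen in the log above.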
useroauthaccesstokens API server panic logs at the same timestamps:
# grep -hir '2024-04-08T16:33' . | egrep -i 'fail|error' | grep oauthaccesstokens | jq
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "Metadata",
  "auditID": "05bfb2a0-41fa-470a-a052-f5d6d1d537ba",
  "stage": "Panic",
  "requestURI": "/apis/oauth.openshift.io/v1/useroauthaccesstokens?allowWatchBookmarks=true&resourceVersion=705601&timeout=8m19s&timeoutSeconds=499&watch=true",
  "verb": "watch",
  "user": { "username": "system:kube-controller-manager", "groups": ["system:authenticated"] },
  "sourceIPs": ["10.0.69.141"],
  "userAgent": "kube-controller-manager/v1.29.3+e994e5d (linux/amd64) kubernetes/9ebebe1/kube-controller-manager",
  "objectRef": { "resource": "useroauthaccesstokens", "apiGroup": "oauth.openshift.io", "apiVersion": "v1" },
  "responseStatus": { "metadata": {}, "status": "Failure", "message": "APIServer panic'd: net/http: abort Handler", "reason": "InternalError", "code": 500 },
  "requestReceivedTimestamp": "2024-04-08T16:30:30.435578Z",
  "stageTimestamp": "2024-04-08T16:33:14.327166Z",
  "annotations": {
    "authorization.k8s.io/decision": "allow",
    "authorization.k8s.io/reason": "RBAC: allowed by ClusterRoleBinding \"system:kube-controller-manager\" of ClusterRole \"system:kube-controller-manager\" to User \"system:kube-controller-manager\""
  }
}
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "Metadata",
  "auditID": "6b9a0d14-3605-4f70-9e3f-b87d89428bc2",
  "stage": "Panic",
  "requestURI": "/apis/oauth.openshift.io/v1/useroauthaccesstokens?allowWatchBookmarks=true&resourceVersion=739837&timeout=8m59s&timeoutSeconds=539&watch=true",
  "verb": "watch",
  "user": { "username": "system:kube-controller-manager", "groups": ["system:authenticated"] },
  "sourceIPs": ["10.0.69.141"],
  "userAgent": "kube-controller-manager/v1.29.3+e994e5d (linux/amd64) kubernetes/9ebebe1/kube-controller-manager",
  "objectRef": { "resource": "useroauthaccesstokens", "apiGroup": "oauth.openshift.io", "apiVersion": "v1" },
  "responseStatus": { "metadata": {}, "status": "Failure", "message": "APIServer panic'd: net/http: abort Handler", "reason": "InternalError", "code": 500 },
  "requestReceivedTimestamp": "2024-04-08T16:33:15.661370Z",
  "stageTimestamp": "2024-04-08T16:34:26.580751Z",
  "annotations": {
    "authorization.k8s.io/decision": "allow",
    "authorization.k8s.io/reason": "RBAC: allowed by ClusterRoleBinding \"system:kube-controller-manager\" of ClusterRole \"system:kube-controller-manager\" to User \"system:kube-controller-manager\""
  }
}
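The grep/egrep pipeline above can also be expressed over parsed audit events, which avoids false matches on unrelated text; a minimal sketch (the two sample log lines are trimmed stand-ins for the full entries above):

```python
import json

def panic_events(lines, resource="useroauthaccesstokens"):
    """Yield audit events whose handler panicked for the given resource,
    mirroring: grep <timestamp> | egrep -i 'fail|error' | grep <resource>."""
    for line in lines:
        ev = json.loads(line)
        if (ev.get("stage") == "Panic"
                and ev.get("objectRef", {}).get("resource") == resource):
            yield ev

# Trimmed stand-ins for the entries above:
log = [
    '{"stage": "Panic", "objectRef": {"resource": "useroauthaccesstokens"}, '
    '"responseStatus": {"code": 500, "message": "APIServer panic\'d: net/http: abort Handler"}}',
    '{"stage": "ResponseComplete", "objectRef": {"resource": "oauthaccesstokens"}, '
    '"responseStatus": {"code": 201}}',
]
hits = list(panic_events(log))
print(len(hits))  # 1: only the panicking watch on useroauthaccesstokens
```

The same filter over the attached audit logs would let the panic timestamps be correlated mechanically with the failing token creations.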
Executed commands:
[16:31:57] INFO> Shell Commands: oc new-project 59bni --kubeconfig=/alabama/workdir/aws-ipi-disc-priv-tp-amd-f9-destructive-cucushift-ex/ocp4_testuser-1.kubeconfig
Now using project "59bni" on server "https://api.ci-op-2r89prs7-68d86.qe.devcluster.openshift.com:6443".
You can add applications to this project with the 'new-app' command. For example, try:
    oc new-app rails-postgresql-example
to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:
    kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.43 -- /agnhost serve-hostname
[16:31:58] INFO> Exit Status: 0
[16:32:06] INFO> Shell Commands: oc get oauth cluster -o yaml --kubeconfig=/alabama/workdir/aws-ipi-disc-priv-tp-amd-f9-destructive-cucushift-ex/ocp4_admin.kubeconfig
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  annotations:
    include.release.openshift.io/ibm-cloud-managed: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
    release.openshift.io/create-only: "true"
  creationTimestamp: "2024-04-08T03:16:27Z"
  generation: 12
  name: cluster
  ownerReferences:
  - apiVersion: config.openshift.io/v1
    kind: ClusterVersion
    name: version
    uid: dc192b4d-95aa-492d-a823-112e839bab11
  resourceVersion: "736369"
  uid: 149155f1-69b4-45ac-99a6-6518ebbd9836
spec:
  identityProviders:
  - htpasswd:
      fileData:
        name: cucushift-htpass-secret
    mappingMethod: claim
    name: cucushift-htpasswd-provider
    type: HTPasswd
[16:32:07] INFO> Shell Commands: oc patch oauth.config cluster -p \{\"spec\":\{\"tokenConfig\":\{\"accessTokenInactivityTimeout\":\ \"300s\"\}\}\} --type=merge --kubeconfig=/alabama/workdir/aws-ipi-disc-priv-tp-amd-f9-destructive-cucushift-ex/ocp4_admin.kubeconfig
oauth.config.openshift.io/cluster patched
[16:33:15] INFO> #### Operator kube-apiserver Expected conditions: {"Available"=>"True", "Progressing"=>"False", "Degraded"=>"False"}
[16:33:15] INFO> #### After 1.004526632001216 seconds and 1 iterations operator kube-apiserver becomes: {"Available"=>"True", "Progressing"=>"False", "Degraded"=>"False"}
And I wait up to 180 seconds for the steps to pass: # features/step_definitions/meta_steps.rb:33
"""
When I run the :get admin command with:
  | resource | pod                      |
  | l        | app=oauth-openshift      |
  | n        | openshift-authentication |
Then the step should succeed
And the output should not contain "Terminating"
"""
[16:33:15] INFO> Shell Commands: oc get pod -l app\=oauth-openshift --kubeconfig=/alabama/workdir/aws-ipi-disc-priv-tp-amd-f9-destructive-cucushift-ex/ocp4_admin.kubeconfig -n openshift-authentication
NAME                               READY   STATUS    RESTARTS   AGE
oauth-openshift-5dfdb8498b-2k428   1/1     Running   0          3m7s
oauth-openshift-5dfdb8498b-2mwnp   1/1     Running   0          2m11s
oauth-openshift-5dfdb8498b-bz96z   1/1     Running   0          2m39s
[16:33:16] INFO> Exit Status: 0
When I run the :login client command with: # features/step_definitions/cli.rb:13
  | server          | <%= env.api_endpoint_url %> |
  | username        | <%= user.name %>            |
  | password        | <%= user.password %>        |
  | config          | test.kubeconfig             |
  | skip_tls_verify | true                        |
WARNING: Using insecure TLS client config. Setting this option is not supported!
Login successful.
You have one project on this server: "59bni"
Using project "59bni".
[16:33:17] INFO> Exit Status: 0
Then the step should succeed # features/step_definitions/common.rb:4
When I run the :whoami client command with: # features/step_definitions/cli.rb:13
  | t      |                 |
  | config | test.kubeconfig |
Then the step should succeed # features/step_definitions/common.rb:4
And evaluation of `@result[:stdout].chomp` is stored in the :tokenval clipboard # features/step_definitions/common.rb:128
When I run the :get admin command with: # features/step_definitions/cli.rb:37
  | resource      | oauthaccesstoken                         |
  | resource_name | <%= get_oauthaccesstoken(cb.tokenval) %> |
  | o             | yaml                                     |
[16:33:18] INFO> Shell Commands: oc get oauthaccesstoken sha256\~YW0oKa..<snip>..o3jSbM -o yaml --kubeconfig=/alabama/workdir/aws-ipi-disc-priv-tp-amd-f9-destructive-cucushift-ex/ocp4_admin.kubeconfig
apiVersion: oauth.openshift.io/v1
authorizeToken: sha256~d7LPmIBv-BhG6D76PDTYDY3LF76pFcJUzv29emH3F3A
clientName: openshift-challenging-client
expiresIn: 86400
kind: OAuthAccessToken
metadata:
  creationTimestamp: "2024-04-08T16:33:16Z"
  name: sha256~YW0oKa..<snip>..o3jSbM
  resourceVersion: "739908"
  uid: 484cc097-0434-4d4f-a356-3c2bb0902116
redirectURI: https://oauth-openshift.apps.ci-op-2r89prs7-68d86.qe.devcluster.openshift.com/oauth/token/implicit
scopes:
- user:full
userName: testuser-1
userUID: 496de345-40c3-4f22-afcb-223c8ea9cfe0
[16:33:19] INFO> Exit Status: 0
Then the output should contain: # features/step_definitions/common.rb:33
  | inactivityTimeoutSeconds: 300 |
pattern not found: inactivityTimeoutSeconds: 300 (RuntimeError)
/verification-tests/features/step_definitions/common.rb:103:in `block (3 levels) in <top (required)>'
/verification-tests/features/step_definitions/common.rb:97:in `each'
/verification-tests/features/step_definitions/common.rb:97:in `block (2 levels) in <top (required)>'
/verification-tests/features/step_definitions/common.rb:56:in `each'
/verification-tests/features/step_definitions/common.rb:56:in `/^(the|all)? outputs?( by order)? should( not)? (contain|match)(?: ([0-9]+|<%=.+?%>) times)?:$/'
features/tierN/apiserver/auth/token.feature:103:in `the output should contain:'
Given 60 seconds have passed # features/step_definitions/common.rb:124
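For reference, the heavily escaped `-p` argument in the oc patch command above is a plain JSON merge patch; a sketch that rebuilds the same payload programmatically (this only reconstructs the transcript's patch body, it is not an API client):

```python
import json

# Merge-patch body equivalent to the transcript's escaped `oc patch -p` argument.
patch = {"spec": {"tokenConfig": {"accessTokenInactivityTimeout": "300s"}}}
payload = json.dumps(patch)
print(payload)  # {"spec": {"tokenConfig": {"accessTokenInactivityTimeout": "300s"}}}
```

Generating the payload this way avoids the shell-escaping noise seen in the CI log.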