Description of problem
Build02, a years-old cluster currently running 4.15.0-ec.2 with TechPreviewNoUpgrade, has had its monitoring ClusterOperator reporting Available=False for days:
$ oc get -o json clusteroperator monitoring | jq '.status.conditions[] | select(.type == "Available")'
{
  "lastTransitionTime": "2024-01-14T04:09:52Z",
  "message": "UpdatingMetricsServer: reconciling MetricsServer Deployment failed: updating Deployment object failed: waiting for DeploymentRollout of openshift-monitoring/metrics-server: context deadline exceeded",
  "reason": "UpdatingMetricsServerFailed",
  "status": "False",
  "type": "Available"
}
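For context, the Deployment rollout the operator is waiting on can be checked directly; this is a generic command, not output captured from the affected cluster:

$ oc -n openshift-monitoring rollout status deployment/metrics-server --timeout=60s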
Both pods had been having CA trust issues. We deleted one pod, and its replacement is happy:
$ oc -n openshift-monitoring get -l app.kubernetes.io/component=metrics-server pods
NAME                             READY   STATUS    RESTARTS   AGE
metrics-server-9cc8bfd56-dd5tx   1/1     Running   0          136m
metrics-server-9cc8bfd56-k2lpv   0/1     Running   0          36d
The young, happy pod logs occasional noise from removed nodes, which is expected in this cluster given its high level of compute-node autoscaling:
$ oc -n openshift-monitoring logs --tail 3 metrics-server-9cc8bfd56-dd5tx
E0117 17:16:13.492646 1 scraper.go:140] "Failed to scrape node" err="Get \"https://10.0.32.33:10250/metrics/resource\": dial tcp 10.0.32.33:10250: connect: connection refused" node="build0-gstfj-ci-builds-worker-b-srjk5"
E0117 17:16:28.611052 1 scraper.go:140] "Failed to scrape node" err="Get \"https://10.0.32.33:10250/metrics/resource\": dial tcp 10.0.32.33:10250: connect: connection refused" node="build0-gstfj-ci-builds-worker-b-srjk5"
E0117 17:16:56.898453 1 scraper.go:140] "Failed to scrape node" err="Get \"https://10.0.32.33:10250/metrics/resource\": context deadline exceeded" node="build0-gstfj-ci-builds-worker-b-srjk5"
While the old, sad pod complains about unknown certificate authorities:
$ oc -n openshift-monitoring logs --tail 3 metrics-server-9cc8bfd56-k2lpv
E0117 17:19:09.612161 1 scraper.go:140] "Failed to scrape node" err="Get \"https://10.0.0.3:10250/metrics/resource\": tls: failed to verify certificate: x509: certificate signed by unknown authority" node="build0-gstfj-m-2.c.openshift-ci-build-farm.internal"
E0117 17:19:09.620872 1 scraper.go:140] "Failed to scrape node" err="Get \"https://10.0.32.90:10250/metrics/resource\": tls: failed to verify certificate: x509: certificate signed by unknown authority" node="build0-gstfj-ci-prowjobs-worker-b-cg7qd"
I0117 17:19:14.538837 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
More details are in the Additional details section, but the timeline seems to have been something like:
- 2023-12-11, metrics-server-* pods come up, and are running happily, scraping kubelets with a CA trust store descended from openshift-config-managed's kubelet-serving-ca ConfigMap.
- 2024-01-02, a new openshift-kube-controller-manager-operator_csr-signer-signer@1704206554 is created.
- 2024-01-04, kubelets rotate their serving certificates. It's not entirely clear how this works outside of bootstrapping, but at least for bootstrapping it uses a CertificateSigningRequest, approved by cluster-machine-approver and signed by the kubernetes.io/kubelet-serving signer in the kube-controller-manager-* pods in the openshift-kube-controller-manager namespace.
- 2024-01-04, the csr-signer Secret in openshift-kube-controller-manager has the new openshift-kube-controller-manager-operator_csr-signer-signer@1704206554 issuing a certificate for kube-csr-signer_@1704338196 (see the inspection sketch after this list).
- The kubelet-serving-ca ConfigMap gets updated to include a CA for the new kube-csr-signer_@1704338196, signed by the new openshift-kube-controller-manager-operator_csr-signer-signer@1704206554.
- The local /etc/tls/kubelet-serving-ca-bundle/ca-bundle.crt is updated in the metrics-server-* containers.
- But metrics-server-* pods fail to notice the file change and reload /etc/tls/kubelet-serving-ca-bundle/ca-bundle.crt, so the existing pods do not trust the new kubelet server certs.
- A mysterious time delay follows. Perhaps the monitoring operator does not notice unhealthy metrics-server-* pods except via changes that trigger a DeploymentRollout check?
- 2024-01-14, monitoring ClusterOperator goes Available=False on UpdatingMetricsServerFailed.
- 2024-01-17, deleting one metrics-server-* pod triggers replacement-pod creation, and the replacement pod comes up fine.
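The csr-signer hand-off mentioned above can be inspected with something like the following sketch (it assumes the Secret uses the usual tls.crt key; adjust as needed):

$ oc -n openshift-kube-controller-manager get -o json secret csr-signer \
    | jq -r '.data["tls.crt"] | @base64d' \
    | openssl x509 -noout -issuer -subject -dates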
So fixing the change detection for /etc/tls/kubelet-serving-ca-bundle/ca-bundle.crt in metrics-server should resolve this use case. Triggering a container or pod restart would be an aggressive but sufficient mechanism, although reloading the new data without restarting the process would be less invasive.
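As an illustration of the aggressive option only (a hypothetical external stopgap, not the proposed fix), a watcher could restart the workload whenever the published bundle changes:

# Hypothetical stopgap: poll the published CA bundle and restart metrics-server
# when it changes, forcing the pods to re-read the mounted file.
old=""
while sleep 300; do
    new="$(oc -n openshift-monitoring get -o json configmap kubelet-serving-ca-bundle | jq -r '.data["ca-bundle.crt"]' | sha1sum)"
    if [ -n "$old" ] && [ "$new" != "$old" ]; then
        oc -n openshift-monitoring rollout restart deployment/metrics-server
    fi
    old="$new"
done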
Version-Release number of selected component (if applicable)
4.15.0-ec.3, which has fast CA rotation; see the discussion in API-1687.
How reproducible
Unclear.
Steps to Reproduce
Unclear.
Actual results
metrics-server pods hit CA trust failures when attempting to scrape nodes after the kubelet serving certificates rotate.
Expected results
metrics-server pods continue to trust kubelets when scraping nodes, including across CA rotations.
Additional details
The monitoring operator sets up the metrics server with --kubelet-certificate-authority=/etc/tls/kubelet-serving-ca-bundle/ca-bundle.crt, which is the "Path to the CA to use to validate the Kubelet's serving certificates" and is mounted from the kubelet-serving-ca-bundle ConfigMap. But that mount point only contains certificates issued by the openshift-kube-controller-manager-operator_csr-signer-signer@... signers:
$ oc --as system:admin -n openshift-monitoring debug pod/metrics-server-9cc8bfd56-k2lpv -- cat /etc/tls/kubelet-serving-ca-bundle/ca-bundle.crt | while openssl x509 -noout -text; do :; done | grep '^Certificate:\|Issuer\|Subject:\|Not '
Starting pod/metrics-server-9cc8bfd56-k2lpv-debug-gtctn ...
Removing debug pod ...
Certificate:
    Issuer: CN = openshift-kube-controller-manager-operator_csr-signer-signer@1701614554
        Not Before: Dec 3 14:42:33 2023 GMT
        Not After : Feb 1 14:42:34 2024 GMT
    Subject: CN = openshift-kube-controller-manager-operator_csr-signer-signer@1701614554
Certificate:
    Issuer: CN = openshift-kube-controller-manager-operator_csr-signer-signer@1701614554
        Not Before: Dec 20 03:16:35 2023 GMT
        Not After : Jan 19 03:16:36 2024 GMT
    Subject: CN = kube-csr-signer_@1703042196
Certificate:
    Issuer: CN = openshift-kube-controller-manager-operator_csr-signer-signer@1704206554
        Not Before: Jan 4 03:16:35 2024 GMT
        Not After : Feb 3 03:16:36 2024 GMT
    Subject: CN = kube-csr-signer_@1704338196
Certificate:
    Issuer: CN = openshift-kube-controller-manager-operator_csr-signer-signer@1704206554
        Not Before: Jan 2 14:42:34 2024 GMT
        Not After : Mar 2 14:42:35 2024 GMT
    Subject: CN = openshift-kube-controller-manager-operator_csr-signer-signer@1704206554
unable to load certificate
137730753918272:error:0909006C:PEM routines:get_name:no start line:../crypto/pem/pem_lib.c:745:Expecting: TRUSTED CERTIFICATE
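For reference, the --kubelet-certificate-authority flag itself can be confirmed on the Deployment (a generic cross-check, not output from this cluster; the exact argument layout is an assumption):

$ oc -n openshift-monitoring get -o json deployment metrics-server \
    | jq -r '.spec.template.spec.containers[].args[]?' \
    | grep kubelet-certificate-authority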
While actual kubelets seem to be using certs signed by kube-csr-signer_@1704338196 (which is one of the Subjects in /etc/tls/kubelet-serving-ca-bundle/ca-bundle.crt):
$ oc get -o wide -l node-role.kubernetes.io/master= nodes NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME build0-gstfj-m-0.c.openshift-ci-build-farm.internal Ready master 3y240d v1.28.3+20a5764 10.0.0.4 <none> Red Hat Enterprise Linux CoreOS 415.92.202311271112-0 (Plow) 5.14.0-284.41.1.el9_2.x86_64 cri-o://1.28.2-2.rhaos4.15.gite7be4e1.el9 build0-gstfj-m-1.c.openshift-ci-build-farm.internal Ready master 3y240d v1.28.3+20a5764 10.0.0.5 <none> Red Hat Enterprise Linux CoreOS 415.92.202311271112-0 (Plow) 5.14.0-284.41.1.el9_2.x86_64 cri-o://1.28.2-2.rhaos4.15.gite7be4e1.el9 build0-gstfj-m-2.c.openshift-ci-build-farm.internal Ready master 3y240d v1.28.3+20a5764 10.0.0.3 <none> Red Hat Enterprise Linux CoreOS 415.92.202311271112-0 (Plow) 5.14.0-284.41.1.el9_2.x86_64 cri-o://1.28.2-2.rhaos4.15.gite7be4e1.el9 $ oc --as system:admin -n openshift-monitoring debug pod/metrics-server-9cc8bfd56-k2lpv -- openssl s_client -connect 10.0.0.3:10250 -showcerts </dev/null Starting pod/metrics-server-9cc8bfd56-k2lpv-debug-ksl2k ... Can't use SSL_get_servername depth=0 O = system:nodes, CN = system:node:build0-gstfj-m-2.c.openshift-ci-build-farm.internal verify error:num=20:unable to get local issuer certificate verify return:1 depth=0 O = system:nodes, CN = system:node:build0-gstfj-m-2.c.openshift-ci-build-farm.internal verify error:num=21:unable to verify the first certificate verify return:1 depth=0 O = system:nodes, CN = system:node:build0-gstfj-m-2.c.openshift-ci-build-farm.internal verify return:1 CONNECTED(00000003) --- Certificate chain 0 s:O = system:nodes, CN = system:node:build0-gstfj-m-2.c.openshift-ci-build-farm.internal i:CN = kube-csr-signer_@1704338196 -----BEGIN CERTIFICATE----- MIIC5DCCAcygAwIBAgIQAbKVl+GS6s2H20EHAWl4WzANBgkqhkiG9w0BAQsFADAm MSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MDQzMzgxOTYwHhcNMjQwMTE3 MDMxNDMwWhcNMjQwMjAzMDMxNjM2WjBhMRUwEwYDVQQKEwxzeXN0ZW06bm9kZXMx SDBGBgNVBAMTP3N5c3RlbTpub2RlOmJ1aWxkMC1nc3Rmai1tLTIuYy5vcGVuc2hp ZnQtY2ktYnVpbGQtZmFybS5pbnRlcm5hbDBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABFqT+UgohFAxJrGYQUeYsEhNB+ufFo14xYDedKBCeNzMhaC+5/I4UN1e1u2X PH7J4ncmH+M/LXI7v+YfEIG7cH+jgZ0wgZowDgYDVR0PAQH/BAQDAgeAMBMGA1Ud JQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHwYDVR0jBBgwFoAU394ABuS2 9i0qss9AKk/mQ9lhJ88wRAYDVR0RBD0wO4IzYnVpbGQwLWdzdGZqLW0tMi5jLm9w ZW5zaGlmdC1jaS1idWlsZC1mYXJtLmludGVybmFshwQKAAADMA0GCSqGSIb3DQEB CwUAA4IBAQCiKelqlgK0OHFqDPdIR+RRdjXoCfFDa0JGCG0z60LYJV6Of5EPv0F/ vGZdM/TyGnPT80lnLCh2JGUvneWlzQEZ7LEOgXX8OrAobijiFqDZFlvVwvkwWNON rfucLQWDFLHUf/yY0EfB0ZlM8Sz4XE8PYB6BXYvgmUIXS1qkV9eGWa6RPLsOnkkb q/dTLE/tg8cz24IooDC8lmMt/wCBPgsq9AnORgNdZUdjCdh9DpDWCw0E4csSxlx2 H1qlH5TpTGKS8Ox9JAfdAU05p/mEhY9PEPSMfdvBZep1xazrZyQIN9ckR2+11Syw JlbEJmapdSjIzuuKBakqHkDgoq4XN0KM -----END CERTIFICATE----- --- Server certificate subject=O = system:nodes, CN = system:node:build0-gstfj-m-2.c.openshift-ci-build-farm.internal issuer=CN = kube-csr-signer_@1704338196 --- Acceptable client certificate CA names OU = openshift, CN = admin-kubeconfig-signer CN = openshift-kube-controller-manager-operator_csr-signer-signer@1699022534 CN = kube-csr-signer_@1700450189 CN = kube-csr-signer_@1701746196 CN = openshift-kube-controller-manager-operator_csr-signer-signer@1701614554 CN = openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1691004449 CN = openshift-kube-apiserver-operator_kube-control-plane-signer@1702234292 CN = openshift-kube-apiserver-operator_kube-control-plane-signer@1699642292 OU = openshift, CN = kubelet-bootstrap-kubeconfig-signer CN = 
openshift-kube-apiserver-operator_node-system-admin-signer@1678905372 Requested Signature Algorithms: RSA-PSS+SHA256:ECDSA+SHA256:Ed25519:RSA-PSS+SHA384:RSA-PSS+SHA512:RSA+SHA256:RSA+SHA384:RSA+SHA512:ECDSA+SHA384:ECDSA+SHA512:RSA+SHA1:ECDSA+SHA1 Shared Requested Signature Algorithms: RSA-PSS+SHA256:ECDSA+SHA256:Ed25519:RSA-PSS+SHA384:RSA-PSS+SHA512:RSA+SHA256:RSA+SHA384:RSA+SHA512:ECDSA+SHA384:ECDSA+SHA512 Peer signing digest: SHA256 Peer signature type: ECDSA Server Temp Key: X25519, 253 bits --- SSL handshake has read 1902 bytes and written 383 bytes Verification error: unable to verify the first certificate --- New, TLSv1.3, Cipher is TLS_AES_128_GCM_SHA256 Server public key is 256 bit Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE No ALPN negotiated Early data was not sent Verify return code: 21 (unable to verify the first certificate) --- DONE Removing debug pod ... $ openssl x509 -noout -text <<EOF 2>/dev/null > -----BEGIN CERTIFICATE----- MIIC5DCCAcygAwIBAgIQAbKVl+GS6s2H20EHAWl4WzANBgkqhkiG9w0BAQsFADAm MSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MDQzMzgxOTYwHhcNMjQwMTE3 MDMxNDMwWhcNMjQwMjAzMDMxNjM2WjBhMRUwEwYDVQQKEwxzeXN0ZW06bm9kZXMx SDBGBgNVBAMTP3N5c3RlbTpub2RlOmJ1aWxkMC1nc3Rmai1tLTIuYy5vcGVuc2hp ZnQtY2ktYnVpbGQtZmFybS5pbnRlcm5hbDBZMBMGByqGSM49AgEGCCqGSM49AwEH A0IABFqT+UgohFAxJrGYQUeYsEhNB+ufFo14xYDedKBCeNzMhaC+5/I4UN1e1u2X PH7J4ncmH+M/LXI7v+YfEIG7cH+jgZ0wgZowDgYDVR0PAQH/BAQDAgeAMBMGA1Ud JQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHwYDVR0jBBgwFoAU394ABuS2 9i0qss9AKk/mQ9lhJ88wRAYDVR0RBD0wO4IzYnVpbGQwLWdzdGZqLW0tMi5jLm9w ZW5zaGlmdC1jaS1idWlsZC1mYXJtLmludGVybmFshwQKAAADMA0GCSqGSIb3DQEB CwUAA4IBAQCiKelqlgK0OHFqDPdIR+RRdjXoCfFDa0JGCG0z60LYJV6Of5EPv0F/ vGZdM/TyGnPT80lnLCh2JGUvneWlzQEZ7LEOgXX8OrAobijiFqDZFlvVwvkwWNON rfucLQWDFLHUf/yY0EfB0ZlM8Sz4XE8PYB6BXYvgmUIXS1qkV9eGWa6RPLsOnkkb q/dTLE/tg8cz24IooDC8lmMt/wCBPgsq9AnORgNdZUdjCdh9DpDWCw0E4csSxlx2 H1qlH5TpTGKS8Ox9JAfdAU05p/mEhY9PEPSMfdvBZep1xazrZyQIN9ckR2+11Syw JlbEJmapdSjIzuuKBakqHkDgoq4XN0KM -----END CERTIFICATE----- > EOF ... Issuer: CN = kube-csr-signer_@1704338196 Validity Not Before: Jan 17 03:14:30 2024 GMT Not After : Feb 3 03:16:36 2024 GMT Subject: O = system:nodes, CN = system:node:build0-gstfj-m-2.c.openshift-ci-build-farm.internal ...
The monitoring operator populates the openshift-monitoring kubelet-serving-ca-bundle ConfigMap using data from the openshift-config-managed kubelet-serving-ca ConfigMap, and that propagation is working, but the bundle does not contain the kube-csr-signer_ CA:
$ oc -n openshift-config-managed get -o json configmap kubelet-serving-ca | jq -r '.data["ca-bundle.crt"]' | while openssl x509 -noout -text; do :; done | grep '^Certificate:\|Issuer\|Subject:\|Not '
Certificate:
    Issuer: CN = openshift-kube-controller-manager-operator_csr-signer-signer@1701614554
        Not Before: Dec 3 14:42:33 2023 GMT
        Not After : Feb 1 14:42:34 2024 GMT
    Subject: CN = openshift-kube-controller-manager-operator_csr-signer-signer@1701614554
Certificate:
    Issuer: CN = openshift-kube-controller-manager-operator_csr-signer-signer@1701614554
        Not Before: Dec 20 03:16:35 2023 GMT
        Not After : Jan 19 03:16:36 2024 GMT
    Subject: CN = kube-csr-signer_@1703042196
Certificate:
    Issuer: CN = openshift-kube-controller-manager-operator_csr-signer-signer@1704206554
        Not Before: Jan 4 03:16:35 2024 GMT
        Not After : Feb 3 03:16:36 2024 GMT
    Subject: CN = kube-csr-signer_@1704338196
Certificate:
    Issuer: CN = openshift-kube-controller-manager-operator_csr-signer-signer@1704206554
        Not Before: Jan 2 14:42:34 2024 GMT
        Not After : Mar 2 14:42:35 2024 GMT
    Subject: CN = openshift-kube-controller-manager-operator_csr-signer-signer@1704206554
unable to load certificate
140531510617408:error:0909006C:PEM routines:get_name:no start line:../crypto/pem/pem_lib.c:745:Expecting: TRUSTED CERTIFICATE
$ oc -n openshift-config-managed get -o json configmap kubelet-serving-ca | jq -r '.data["ca-bundle.crt"]' | sha1sum
a32ab44dff8030c548087d70fea599b0d3fab8af -
$ oc -n openshift-monitoring get -o json configmap kubelet-serving-ca-bundle | jq -r '.data["ca-bundle.crt"]' | sha1sum
a32ab44dff8030c548087d70fea599b0d3fab8af -
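To separate ConfigMap-volume propagation from in-process caching, the file actually mounted in the sad pod can be checksummed against the ConfigMap (a suggested check, output not captured here; it assumes the default exec container is the one with the mount):

$ oc -n openshift-monitoring exec metrics-server-9cc8bfd56-k2lpv -- \
    cat /etc/tls/kubelet-serving-ca-bundle/ca-bundle.crt | sha1sum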
Flipping over to the kubelet side, nothing in the machine-config operator's templates jumps out at me as a key/cert pair for serving on 10250. The kubelet seems to set up serving certs via serverTLSBootstrap: true. But we don't seem to set the beta RotateKubeletServerCertificate feature gate, so I'm not clear on how these are supposed to rotate on the kubelet side. There are, however, CSRs from kubelets requesting serving certs:
$ oc get certificatesigningrequests | grep 'NAME\|kubelet-serving' NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION csr-8stgd 51m kubernetes.io/kubelet-serving system:node:build0-gstfj-ci-builds-worker-b-xkdw2 <none> Approved,Issued csr-blbjx 9m1s kubernetes.io/kubelet-serving system:node:build0-gstfj-ci-longtests-worker-b-5w9dz <none> Approved,Issued csr-ghxh5 64m kubernetes.io/kubelet-serving system:node:build0-gstfj-ci-builds-worker-b-sdwdn <none> Approved,Issued csr-hng85 33m kubernetes.io/kubelet-serving system:node:build0-gstfj-ci-longtests-worker-d-7d7h2 <none> Approved,Issued csr-hvqxz 24m kubernetes.io/kubelet-serving system:node:build0-gstfj-ci-builds-worker-b-fp6wb <none> Approved,Issued csr-vc52m 50m kubernetes.io/kubelet-serving system:node:build0-gstfj-ci-builds-worker-b-xlmt6 <none> Approved,Issued csr-vflcm 40m kubernetes.io/kubelet-serving system:node:build0-gstfj-ci-builds-worker-b-djpgq <none> Approved,Issued csr-xfr7d 51m kubernetes.io/kubelet-serving system:node:build0-gstfj-ci-builds-worker-b-8v4vk <none> Approved,Issued csr-zhzbs 51m kubernetes.io/kubelet-serving system:node:build0-gstfj-ci-builds-worker-b-rqr68 <none> Approved,Issued $ oc get -o json certificatesigningrequests csr-blbjx { "apiVersion": "certificates.k8s.io/v1", "kind": "CertificateSigningRequest", "metadata": { "creationTimestamp": "2024-01-17T19:20:43Z", "generateName": "csr-", "name": "csr-blbjx", "resourceVersion": "4719586144", "uid": "5f12d236-3472-485f-8037-3896f51a809c" }, "spec": { "groups": [ "system:nodes", "system:authenticated" ], "request": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQlh6Q0NBUVFDQVFBd1ZqRVZNQk1HQTFVRUNoTU1jM2x6ZEdWdE9tNXZaR1Z6TVQwd093WURWUVFERXpSegplWE4wWlcwNmJtOWtaVHBpZFdsc1pEQXRaM04wWm1vdFkya3RiRzl1WjNSbGMzUnpMWGR2Y210bGNpMWlMVFYzCk9XUjZNRmt3RXdZSEtvWkl6ajBDQVFZSUtvWkl6ajBEQVFjRFFnQUV5Y0dhSDMvZ3F4ZHNZWkdmQXovTEpoZVgKd1o0Z1VRbjB6TlZUenJncHpvd1VPOGR6NTN4UUZTOTRibm40NldlZFg3Q2xidUpVSUpUN2pCblV1WEdnZktCTQpNRW9HQ1NxR1NJYjNEUUVKRGpFOU1Ec3dPUVlEVlIwUkJESXdNSUlvWW5WcGJHUXdMV2R6ZEdacUxXTnBMV3h2CmJtZDBaWE4wY3kxM2IzSnJaWEl0WWkwMWR6bGtlb2NFQ2dBZ0F6QUtCZ2dxaGtqT1BRUURBZ05KQURCR0FpRUEKMHlRVzZQOGtkeWw5ZEEzM3ppQTJjYXVJdlhidTVhczNXcUZLYWN2bi9NSUNJUURycEQyVEtScHJOU1I5dExKTQpjZ0ZpajN1dVNieVJBcEJ5NEE1QldEZm02UT09Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=", "signerName": "kubernetes.io/kubelet-serving", "usages": [ "digital signature", "server auth" ], "username": "system:node:build0-gstfj-ci-longtests-worker-b-5w9dz" }, "status": { "certificate": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN6ekNDQWJlZ0F3SUJBZ0lSQUlGZ1NUd0ovVUJLaE1hWlE4V01KcEl3RFFZSktvWklodmNOQVFFTEJRQXcKSmpFa01DSUdBMVVFQXd3YmEzVmlaUzFqYzNJdGMybG5ibVZ5WDBBeE56QTBNek00TVRrMk1CNFhEVEkwTURFeApOekU1TVRVME0xb1hEVEkwTURJd016QXpNVFl6Tmxvd1ZqRVZNQk1HQTFVRUNoTU1jM2x6ZEdWdE9tNXZaR1Z6Ck1UMHdPd1lEVlFRREV6UnplWE4wWlcwNmJtOWtaVHBpZFdsc1pEQXRaM04wWm1vdFkya3RiRzl1WjNSbGMzUnoKTFhkdmNtdGxjaTFpTFRWM09XUjZNRmt3RXdZSEtvWkl6ajBDQVFZSUtvWkl6ajBEQVFjRFFnQUV5Y0dhSDMvZwpxeGRzWVpHZkF6L0xKaGVYd1o0Z1VRbjB6TlZUenJncHpvd1VPOGR6NTN4UUZTOTRibm40NldlZFg3Q2xidUpVCklKVDdqQm5VdVhHZ2ZLT0JrakNCanpBT0JnTlZIUThCQWY4RUJBTUNCNEF3RXdZRFZSMGxCQXd3Q2dZSUt3WUIKQlFVSEF3RXdEQVlEVlIwVEFRSC9CQUl3QURBZkJnTlZIU01FR0RBV2dCVGYzZ0FHNUxiMkxTcXl6MEFxVCtaRAoyV0VuenpBNUJnTlZIUkVFTWpBd2dpaGlkV2xzWkRBdFozTjBabW90WTJrdGJHOXVaM1JsYzNSekxYZHZjbXRsCmNpMWlMVFYzT1dSNmh3UUtBQ0FETUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFBRE5ad0pMdkp4WWNta2RHV08KUm5ocC9rc3V6akJHQnVHbC9VTmF0RjZScml3eW9mdmpVNW5Kb0RFbGlLeHlDQ2wyL1d5VXl5a2hMSElBK1drOQoxZjRWajIrYmZFd0IwaGpuTndxQThudFFabS90TDhwalZ5ZzFXM0VwR2FvRjNsZzRybDA1cXBwcjVuM2l4WURJClFFY2ZuNmhQUnlKN056dlFCS0RwQ09lbU8yTFllcGhqbWZGY2h5VGRZVGU0aE9IOW9TWTNMdDdwQURIM2kzYzYKK3hpMDhhV09LZmhvT3IybTVBSFBVN0FkTjhpVUV0M0dsYzI0SGRTLzlLT05tT2E5RDBSSk9DMC8zWk5sKzcvNAoyZDlZbnYwaTZNaWI3OGxhNk5scFB0L2hmOWo5TlNnMDN4OFZYRVFtV21zN29xY1FWTHMxRHMvWVJ4VERqZFphCnEwMnIKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "conditions": [ { "lastTransitionTime": "2024-01-17T19:20:43Z", "lastUpdateTime": "2024-01-17T19:20:43Z", "message": "This CSR was approved by the Node CSR Approver (cluster-machine-approver)", "reason": "NodeCSRApprove", "status": "True", "type": "Approved" } ] } } $ oc get -o json certificatesigningrequests csr-blbjx | jq -r '.status.certificate | @base64d' | openssl x509 -noout -text | grep '^Certificate:\|Issuer\|Subject:\|Not ' Certificate: Issuer: CN = kube-csr-signer_@1704338196 Not Before: Jan 17 19:15:43 2024 GMT Not After : Feb 3 03:16:36 2024 GMT Subject: O = system:nodes, CN = system:node:build0-gstfj-ci-longtests-worker-b-5w9dz
So that CSR is approved by cluster-machine-approver, but signerName kubernetes.io/kubelet-serving is an upstream Kubernetes signer (documented in the upstream certificate-signing-request docs), and the signer is implemented by kube-controller-manager.
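The kubelet-side setting can be spot-checked on a node (paths are assumed from the standard RHCOS layout, so treat this as a sketch rather than verified output):

$ oc debug node/build0-gstfj-m-2.c.openshift-ci-build-farm.internal -- \
    chroot /host grep serverTLSBootstrap /etc/kubernetes/kubelet.conf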
- clones: OCPBUGS-27289 metrics-server should handle kubelet server CA rotation (Closed)
- is blocked by: OCPBUGS-27289 metrics-server should handle kubelet server CA rotation (Closed)