Bug
Resolution: Not a Bug
Undefined
4.14.0
Quality / Stability / Reliability
OCP version: 4.14.0-rc.0
oc get pod -n openshift-kube-controller-manager kube-controller-manager-hp-e910-01.kni-qe-65.lab.eng.rdu2.redhat.com
NAME READY STATUS RESTARTS AGE
kube-controller-manager-hp-e910-01.kni-qe-65.lab.eng.rdu2.redhat.com 4/4 Running 28 (177m ago) 4d8h
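The pod-level RESTARTS column above sums restarts across all four containers of the static pod. When triaging, it helps to break that number out per container. The small awk filter below is a sketch (the function name and the piped invocation are illustrative, and it assumes the standard two-space indentation of `oc describe pod` output); it pairs each container name with its Restart Count:

```shell
# Pair each container name with its "Restart Count" from `oc describe pod`
# output. Illustrative usage (requires cluster access):
#   oc describe pod -n openshift-kube-controller-manager <pod> | describe_restarts
describe_restarts() {
  awk '
    /^Containers:/             { in_c = 1; next }     # start of container list
    /^[A-Za-z]/                { in_c = 0 }           # next top-level section ends it
    in_c && /^  [a-z0-9.-]+:$/ { name = $1; sub(/:$/, "", name) }  # container name
    in_c && /Restart Count:/   { print name, $NF }    # name + count
  '
}
```

Run against the describe output below, this would show which of the four containers is accumulating the restarts.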
[kni@r640-u01 ~]$ oc describe pod -n openshift-kube-controller-manager kube-controller-manager-hp-e910-01.kni-qe-65.lab.eng.rdu2.redhat.com
Name: kube-controller-manager-hp-e910-01.kni-qe-65.lab.eng.rdu2.redhat.com
Namespace: openshift-kube-controller-manager
Priority: 2000001000
Priority Class Name: system-node-critical
Node: hp-e910-01.kni-qe-65.lab.eng.rdu2.redhat.com/10.1.229.17
Start Time: Fri, 22 Sep 2023 10:09:46 -0400
Labels: app=kube-controller-manager
kube-controller-manager=true
revision=6
Annotations: kubectl.kubernetes.io/default-container: kube-controller-manager
kubernetes.io/config.hash: fbb5f6cc8d26a66f2ab189739d66245e
kubernetes.io/config.mirror: fbb5f6cc8d26a66f2ab189739d66245e
kubernetes.io/config.seen: 2023-09-21T04:24:17.773672544Z
kubernetes.io/config.source: file
resources.workload.openshift.io/cluster-policy-controller:
resources.workload.openshift.io/kube-controller-manager: {"cpushares":61}
resources.workload.openshift.io/kube-controller-manager-cert-syncer: {"cpushares":5}
resources.workload.openshift.io/kube-controller-manager-recovery-controller: {"cpushares":5}
target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"}
Status: Running
IP: 10.1.229.17
IPs:
IP: 10.1.229.17
Controlled By: Node/hp-e910-01.kni-qe-65.lab.eng.rdu2.redhat.com
Containers:
kube-controller-manager:
Container ID: cri-o://e647e87faca8c2148343970c5163d7b312667075fc26640cbb1937de0db039ed
Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8c96bb83632c17455d3b1c61a34a95f351b2385141cd6dbebb5204a6627c4216
Image ID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8c96bb83632c17455d3b1c61a34a95f351b2385141cd6dbebb5204a6627c4216
Port: 10257/TCP
Host Port: 10257/TCP
Command:
/bin/bash
-euxo
pipefail
-c
Args:
timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10257 \))" ]; do sleep 1; done'
if [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then
echo "Copying system trust bundle"
cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
fi
if [ -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ]; then
echo "Setting custom CA bundle for cloud provider"
export AWS_CA_BUNDLE=/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem
fi
exec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml \
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
--authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
--client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt \
--requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.128.0.0/14 --cluster-name=kni-qe-65-wdj8v --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=720h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key --configure-cloud-routes=false --controllers=* --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=false --feature-gates=AdmissionWebhookMatchConditions=false --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=false --feature-gates=BuildCSIVolumes=false --feature-gates=CSIDriverSharedResource=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EventedPLEG=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=InsightsConfigAPI=false --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=NodeSwap=false --feature-gates=OpenShiftPodSecurityAdmission=true --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RetroactiveDefaultStorageClass=false --feature-gates=RouteExternalCertificate=false --feature-gates=SigstoreImageVerification=false --feature-gates=VSphereStaticIPs=false --feature-gates=ValidatingAdmissionPolicy=false 
--flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=172.30.0.0/16 --use-service-account-credentials=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12
State: Running
Started: Fri, 22 Sep 2023 10:09:47 -0400
Ready: True
Restart Count: 9
Limits:
management.workload.openshift.io/cores: 60
Requests:
management.workload.openshift.io/cores: 60
memory: 200Mi
Liveness: http-get https://:10257/healthz delay=45s timeout=10s period=10s #success=1 #failure=3
Readiness: http-get https://:10257/healthz delay=10s timeout=10s period=10s #success=1 #failure=3
Startup: http-get https://:10257/healthz delay=0s timeout=3s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/kubernetes/static-pod-certs from cert-dir (rw)
/etc/kubernetes/static-pod-resources from resource-dir (rw)
cluster-policy-controller:
Container ID: cri-o://5c2a6f9290ee9cbbc06c9de726aa724e4afa2802104b99f03985dd2ef74b63a1
Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a1e6397b45d79db17749ba0120586cc4839146569651c65a2e53a5cdfe3941a
Image ID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a1e6397b45d79db17749ba0120586cc4839146569651c65a2e53a5cdfe3941a
Port: 10357/TCP
Host Port: 10357/TCP
Command:
/bin/bash
-euxo
pipefail
-c
Args:
timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10357 \))" ]; do sleep 1; done'
exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml \
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \
--namespace=${POD_NAMESPACE} -v=2
State: Running
Started: Mon, 25 Sep 2023 06:19:45 -0400
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 24 Sep 2023 18:19:29 -0400
Finished: Mon, 25 Sep 2023 06:19:43 -0400
Ready: True
Restart Count: 13
Limits:
management.workload.openshift.io/cores: 10
Requests:
management.workload.openshift.io/cores: 10
memory: 200Mi
Liveness: http-get https://:10357/healthz delay=45s timeout=10s period=10s #success=1 #failure=3
Readiness: http-get https://:10357/healthz delay=10s timeout=10s period=10s #success=1 #failure=3
Startup: http-get https://:10357/healthz delay=0s timeout=3s period=10s #success=1 #failure=3
Environment:
POD_NAME: kube-controller-manager-hp-e910-01.kni-qe-65.lab.eng.rdu2.redhat.com (v1:metadata.name)
POD_NAMESPACE: openshift-kube-controller-manager (v1:metadata.namespace)
Mounts:
/etc/kubernetes/static-pod-certs from cert-dir (rw)
/etc/kubernetes/static-pod-resources from resource-dir (rw)
kube-controller-manager-cert-syncer:
Container ID: cri-o://5930cf00a079cbbc0c482c6b3700f5add2026b564eaf7445fa75f431f6995258
Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3dfc08467c054f6f225457b9ed8ad63b93bd81af619ae47ed47c5bfd73ee5bbb
Image ID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3dfc08467c054f6f225457b9ed8ad63b93bd81af619ae47ed47c5bfd73ee5bbb
Port: <none>
Host Port: <none>
Command:
cluster-kube-controller-manager-operator
cert-syncer
Args:
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig
--namespace=$(POD_NAMESPACE)
--destination-dir=/etc/kubernetes/static-pod-certs
State: Running
Started: Fri, 22 Sep 2023 10:09:47 -0400
Ready: True
Restart Count: 3
Limits:
management.workload.openshift.io/cores: 5
Requests:
management.workload.openshift.io/cores: 5
memory: 50Mi
Environment:
POD_NAME: kube-controller-manager-hp-e910-01.kni-qe-65.lab.eng.rdu2.redhat.com (v1:metadata.name)
POD_NAMESPACE: openshift-kube-controller-manager (v1:metadata.namespace)
Mounts:
/etc/kubernetes/static-pod-certs from cert-dir (rw)
/etc/kubernetes/static-pod-resources from resource-dir (rw)
kube-controller-manager-recovery-controller:
Container ID: cri-o://98d97eb6bf766d5668b5ca84bc46a29dc908eed22d9c2c001ede05a107646778
Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3dfc08467c054f6f225457b9ed8ad63b93bd81af619ae47ed47c5bfd73ee5bbb
Image ID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3dfc08467c054f6f225457b9ed8ad63b93bd81af619ae47ed47c5bfd73ee5bbb
Port: <none>
Host Port: <none>
Command:
/bin/bash
-euxo
pipefail
-c
Args:
timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 9443 \))" ]; do sleep 1; done'
exec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=2
State: Running
Started: Fri, 22 Sep 2023 10:09:48 -0400
Ready: True
Restart Count: 3
Limits:
management.workload.openshift.io/cores: 5
Requests:
management.workload.openshift.io/cores: 5
memory: 50Mi
Environment:
POD_NAMESPACE: openshift-kube-controller-manager (v1:metadata.namespace)
Mounts:
/etc/kubernetes/static-pod-certs from cert-dir (rw)
/etc/kubernetes/static-pod-resources from resource-dir (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
resource-dir:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-6
HostPathType:
cert-dir:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/static-pod-resources/kube-controller-manager-certs
HostPathType:
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 178m (x7 over 2d23h) kubelet Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a1e6397b45d79db17749ba0120586cc4839146569651c65a2e53a5cdfe3941a" already present on machine
Normal Created 178m (x7 over 2d23h) kubelet Created container cluster-policy-controller
Normal Started 178m (x7 over 2d23h) kubelet Started container cluster-policy-controller
Grepping the logs for errors shows many lines like:

I0925 13:15:18.067040       1 deployment_controller.go:503] "Error syncing deployment" deployment="openshift-controller-manager/controller-manager" err="Operation cannot be fulfilled on deployments.apps \"controller-manager\": the object has been modified; please apply your changes to the latest version and try again"
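That message is the API server's standard optimistic-concurrency conflict (HTTP 409): the deployment controller read the Deployment at one resourceVersion, another writer updated it first, so the controller's update was rejected and requeued for retry. Note the `I` prefix — at `-v=2` these lines are informational, and occasional conflicts are expected when several controllers touch the same object, which is consistent with the Not a Bug resolution. To gauge how noisy they are, a small grep wrapper (a sketch; the function name is illustrative) counts them:

```shell
# Count optimistic-concurrency conflict lines in controller logs.
# Illustrative usage (requires cluster access):
#   oc logs -n openshift-kube-controller-manager <pod> \
#     -c kube-controller-manager | conflict_count
conflict_count() {
  # "the object has been modified" is the fixed text of the 409 Conflict error
  grep -c 'the object has been modified'
}
```

A steadily growing count under load is normal; it would only point at a real problem if the same object conflicts continuously and the controller never converges.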