Name:                 dns-default-z4v2p
Namespace:            openshift-dns
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      dns
Node:                 localhost.localdomain/192.168.122.17
Start Time:           Mon, 13 Feb 2023 01:25:46 -0500
Labels:               controller-revision-hash=75fd7bd79f
                      dns.operator.openshift.io/daemonset-dns=default
                      pod-template-generation=1
Annotations:          k8s.ovn.org/pod-networks:
                        {"default":{"ip_addresses":["10.42.0.7/24"],"mac_address":"0a:58:0a:2a:00:07","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.7/24","gat...
                      target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"}
Status:               Running
IP:                   10.42.0.7
IPs:
  IP:           10.42.0.7
Controlled By:  DaemonSet/dns-default
Containers:
  dns:
    Container ID:  cri-o://267ae42eaabc1bc1261dd4096786f45295b7ece5abbd9819b0e5001f7203ca3e
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:31b8a600e8388e9e87197639e73a49da0f06f75fe67e702c8681ac83538919ad
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:31b8a600e8388e9e87197639e73a49da0f06f75fe67e702c8681ac83538919ad
    Ports:         5353/UDP, 5353/TCP
    Host Ports:    0/UDP, 0/TCP
    Command:
      coredns
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 13 Feb 2023 05:18:04 -0500
      Finished:     Mon, 13 Feb 2023 05:20:14 -0500
    Ready:          False
    Restart Count:  21
    Requests:
      cpu:        50m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=10s timeout=3s period=3s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qgssb (ro)
  kube-rbac-proxy:
    Container ID:  cri-o://4ca318be69a33e822d971a22851c7d889959a3b61319d77fe6aad3962f7eee44
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec61024e2c37e5aba93593220b12ba91a3ba4f547b2562a36539ff1d9df74a23
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec61024e2c37e5aba93593220b12ba91a3ba4f547b2562a36539ff1d9df74a23
    Port:          9154/TCP
    Host Port:     0/TCP
    Args:
      --logtostderr
      --secure-listen-address=:9154
      --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
      --upstream=http://127.0.0.1:9153/
      --tls-cert-file=/etc/tls/private/tls.crt
      --tls-private-key-file=/etc/tls/private/tls.key
    State:          Running
      Started:      Mon, 13 Feb 2023 04:05:52 -0500
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 13 Feb 2023 01:49:15 -0500
      Finished:     Mon, 13 Feb 2023 04:03:54 -0500
    Ready:          True
    Restart Count:  3
    Requests:
      cpu:        10m
      memory:     40Mi
    Environment:  <none>
    Mounts:
      /etc/tls/private from metrics-tls (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qgssb (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dns-default
    Optional:  false
  metrics-tls:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  dns-default-metrics-tls
    Optional:    false
  kube-api-access-qgssb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason                  Age                     From               Message
  ----     ------                  ----                    ----               -------
  Normal   Scheduled               3h56m                   default-scheduler  Successfully assigned openshift-dns/dns-default-z4v2p to localhost.localdomain
  Warning  FailedMount             3h56m (x5 over 3h56m)   kubelet            MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found
  Normal   Pulling                 3h56m                   kubelet            Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:31b8a600e8388e9e87197639e73a49da0f06f75fe67e702c8681ac83538919ad"
  Normal   Pulled                  3h56m                   kubelet            Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:31b8a600e8388e9e87197639e73a49da0f06f75fe67e702c8681ac83538919ad" in 8.084339154s (8.084359653s including waiting)
  Normal   Created                 3h56m                   kubelet            Created container dns
  Normal   Started                 3h56m                   kubelet            Started container dns
  Normal   Pulling                 3h56m                   kubelet            Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec61024e2c37e5aba93593220b12ba91a3ba4f547b2562a36539ff1d9df74a23"
  Normal   Pulled                  3h56m                   kubelet            Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec61024e2c37e5aba93593220b12ba91a3ba4f547b2562a36539ff1d9df74a23" in 5.624304546s (5.624316697s including waiting)
  Normal   Created                 3h56m                   kubelet            Created container kube-rbac-proxy
  Normal   Started                 3h56m                   kubelet            Started container kube-rbac-proxy
  Warning  FailedCreatePodSandBox  3h43m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-z4v2p_openshift-dns_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7_0(f4c8335a880bdfa237136ae918d59e24dad233f94f8facc610fb37fc49ff5280): error adding pod openshift-dns_dns-default-z4v2p to CNI network "ovn-kubernetes": plugin type="ovn-k8s-cni-overlay" name="ovn-kubernetes" failed (add): failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory
  Normal   Started                 3h43m                   kubelet            Started container kube-rbac-proxy
  Normal   Created                 3h43m                   kubelet            Created container dns
  Normal   Started                 3h43m                   kubelet            Started container dns
  Normal   Pulled                  3h43m                   kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec61024e2c37e5aba93593220b12ba91a3ba4f547b2562a36539ff1d9df74a23" already present on machine
  Normal   Created                 3h43m                   kubelet            Created container kube-rbac-proxy
  Normal   Pulled                  3h43m                   kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:31b8a600e8388e9e87197639e73a49da0f06f75fe67e702c8681ac83538919ad" already present on machine
  Warning  FailedCreatePodSandBox  3h33m                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-z4v2p_openshift-dns_d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7_0(a9828221971a239692c64c52231c73446fbe6150205a8ba2c91542b56b83b228): error adding pod openshift-dns_dns-default-z4v2p to CNI network "ovn-kubernetes": plugin type="ovn-k8s-cni-overlay" name="ovn-kubernetes" failed (add): failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory
  Normal   Created                 3h33m                   kubelet            Created container kube-rbac-proxy
  Normal   Created                 3h33m                   kubelet            Created container dns
  Normal   Started                 3h33m                   kubelet            Started container dns
  Normal   Pulled                  3h33m                   kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec61024e2c37e5aba93593220b12ba91a3ba4f547b2562a36539ff1d9df74a23" already present on machine
  Normal   Pulled                  3h33m                   kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:31b8a600e8388e9e87197639e73a49da0f06f75fe67e702c8681ac83538919ad" already present on machine
  Normal   Started                 3h33m                   kubelet            Started container kube-rbac-proxy
  Normal   Pulled                  76m                     kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:31b8a600e8388e9e87197639e73a49da0f06f75fe67e702c8681ac83538919ad" already present on machine
  Normal   Created                 76m                     kubelet            Created container dns
  Normal   Pulled                  76m                     kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec61024e2c37e5aba93593220b12ba91a3ba4f547b2562a36539ff1d9df74a23" already present on machine
  Normal   Started                 76m                     kubelet            Started container dns
  Normal   Created                 76m                     kubelet            Created container kube-rbac-proxy
  Normal   Started                 76m                     kubelet            Started container kube-rbac-proxy
  Warning  ProbeError              76m                     kubelet            Readiness probe error: Get "http://10.42.0.7:8181/ready": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers) body:
  Warning  Unhealthy               76m                     kubelet            Readiness probe failed: Get "http://10.42.0.7:8181/ready": dial tcp 10.42.0.7:8181: i/o timeout (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy               75m (x8 over 76m)       kubelet            Readiness probe failed: Get "http://10.42.0.7:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  ProbeError              6m24s (x650 over 76m)   kubelet            Readiness probe error: Get "http://10.42.0.7:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body:
  Warning  BackOff                 93s (x166 over 65m)     kubelet            Back-off restarting failed container dns in pod dns-default-z4v2p_openshift-dns(d5abdcaf-5a6a-4845-8ad6-12ad1caadfa7)
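
Note: the dns container above is in CrashLoopBackOff (Restart Count: 21) and every readiness probe to 10.42.0.7:8181 times out. A minimal first check, using the pod name and IP from this capture, is to pull the previous container's log and hit the readiness endpoint directly from the node:

  $ oc logs -n openshift-dns dns-default-z4v2p -c dns --previous
  $ curl -v --max-time 3 http://10.42.0.7:8181/ready
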
Name:                 node-resolver-sgsm4
Namespace:            openshift-dns
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      node-resolver
Node:                 localhost.localdomain/192.168.122.17
Start Time:           Mon, 13 Feb 2023 01:25:18 -0500
Labels:               controller-revision-hash=5665bbdcff
                      dns.operator.openshift.io/daemonset-node-resolver=
                      pod-template-generation=1
Annotations:          target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"}
Status:               Running
IP:                   192.168.122.17
IPs:
  IP:           192.168.122.17
Controlled By:  DaemonSet/node-resolver
Containers:
  dns-node-resolver:
    Container ID:  cri-o://1cd2ca1449e5a669c8ef6af3c36102bd6527afc1dd60556b7e61c90589523315
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23a6b4c13b2610e9bed8205d59acea8a30e8503948633d7a02cc6a45e431b7d3
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23a6b4c13b2610e9bed8205d59acea8a30e8503948633d7a02cc6a45e431b7d3
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/bash
      -c
      #!/bin/bash
      set -uo pipefail

      trap 'jobs -p | xargs kill || true; wait; exit 0' TERM

      NAMESERVER=${DNS_DEFAULT_SERVICE_HOST}
      OPENSHIFT_MARKER="openshift-generated-node-resolver"
      HOSTS_FILE="/etc/hosts"
      TEMP_FILE="/etc/hosts.tmp"

      IFS=', ' read -r -a services <<< "${SERVICES}"

      # Make a temporary file with the old hosts file's attributes.
      if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then
        echo "Failed to preserve hosts file. Exiting."
        exit 1
      fi

      while true; do
        declare -A svc_ips
        for svc in "${services[@]}"; do
          # Fetch service IP from cluster dns if present. We make several tries
          # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones
          # are for deployments with Kuryr on older OpenStack (OSP13) - those do not
          # support UDP loadbalancers and require reaching DNS through TCP.
          cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
                'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
                'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
                'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"')
          for i in ${!cmds[*]}
          do
            ips=($(eval "${cmds[i]}"))
            if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then
              svc_ips["${svc}"]="${ips[@]}"
              break
            fi
          done
        done

        # Update /etc/hosts only if we get valid service IPs
        # We will not update /etc/hosts when there is coredns service outage or api unavailability
        # Stale entries could exist in /etc/hosts if the service is deleted
        if [[ -n "${svc_ips[*]-}" ]]; then
          # Build a new hosts file from /etc/hosts with our custom entries filtered out
          if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then
            # Only continue rebuilding the hosts entries if its original content is preserved
            sleep 60 & wait
            continue
          fi

          # Append resolver entries for services
          for svc in "${!svc_ips[@]}"; do
            for ip in ${svc_ips[${svc}]}; do
              echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}"
            done
          done

          # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior
          # Replace /etc/hosts with our modified version if needed
          cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}"
          # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn
        fi
        sleep 60 & wait
        unset svc_ips
      done
    State:          Running
      Started:      Mon, 13 Feb 2023 04:05:51 -0500
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 13 Feb 2023 01:49:03 -0500
      Finished:     Mon, 13 Feb 2023 04:03:54 -0500
    Ready:          True
    Restart Count:  3
    Requests:
      cpu:     5m
      memory:  21Mi
    Environment:
      SERVICES:        image-registry.openshift-image-registry.svc
      NAMESERVER:      172.30.0.10
      CLUSTER_DOMAIN:  cluster.local
    Mounts:
      /etc/hosts from hosts-file (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4gs8j (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  hosts-file:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/hosts
    HostPathType:  File
  kube-api-access-4gs8j:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  3h57m  default-scheduler  Successfully assigned openshift-dns/node-resolver-sgsm4 to localhost.localdomain
  Normal  Pulling    3h56m  kubelet            Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23a6b4c13b2610e9bed8205d59acea8a30e8503948633d7a02cc6a45e431b7d3"
  Normal  Pulled     3h56m  kubelet            Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23a6b4c13b2610e9bed8205d59acea8a30e8503948633d7a02cc6a45e431b7d3" in 11.145190014s (11.145208815s including waiting)
  Normal  Created    3h56m  kubelet            Created container dns-node-resolver
  Normal  Started    3h56m  kubelet            Started container dns-node-resolver
  Normal  Pulled     3h43m  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23a6b4c13b2610e9bed8205d59acea8a30e8503948633d7a02cc6a45e431b7d3" already present on machine
  Normal  Created    3h43m  kubelet            Created container dns-node-resolver
  Normal  Started    3h43m  kubelet            Started container dns-node-resolver
  Normal  Pulled     3h33m  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23a6b4c13b2610e9bed8205d59acea8a30e8503948633d7a02cc6a45e431b7d3" already present on machine
  Normal  Created    3h33m  kubelet            Created container dns-node-resolver
  Normal  Started    3h33m  kubelet            Started container dns-node-resolver
  Normal  Pulled     76m    kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23a6b4c13b2610e9bed8205d59acea8a30e8503948633d7a02cc6a45e431b7d3" already present on machine
  Normal  Created    76m    kubelet            Created container dns-node-resolver
  Normal  Started    76m    kubelet            Started container dns-node-resolver
Name:                 router-default-85d64c4987-bbdnr
Namespace:            openshift-ingress
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      router
Node:                 localhost.localdomain/192.168.122.17
Start Time:           Mon, 13 Feb 2023 01:25:46 -0500
Labels:               ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default
                      pod-template-hash=85d64c4987
Annotations:          k8s.ovn.org/pod-networks:
                        {"default":{"ip_addresses":["10.42.0.4/24"],"mac_address":"0a:58:0a:2a:00:04","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.4/24","gat...
                      openshift.io/scc: hostnetwork
                      target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"}
Status:               Running
IP:                   10.42.0.4
IPs:
  IP:           10.42.0.4
Controlled By:  ReplicaSet/router-default-85d64c4987
Containers:
  router:
    Container ID:   cri-o://e5f3457666c38518369411a2705160dbd818b3ad18473d1ee90d666928dbd8d5
    Image:          quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ed079be59e5686a18e2a86e3f0e093a63f2ba0493c62e651e742130200bf887
    Image ID:       quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ed079be59e5686a18e2a86e3f0e093a63f2ba0493c62e651e742130200bf887
    Ports:          80/TCP, 443/TCP, 1936/TCP
    Host Ports:     80/TCP, 443/TCP, 0/TCP
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 13 Feb 2023 01:53:12 -0500
      Finished:     Mon, 13 Feb 2023 04:04:40 -0500
    Ready:          False
    Restart Count:  2
    Requests:
      cpu:      100m
      memory:   256Mi
    Liveness:   http-get http://:1936/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:1936/healthz/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Startup:    http-get http://:1936/healthz/ready delay=0s timeout=1s period=1s #success=1 #failure=120
    Environment:
      ROUTER_SERVICE_NAMESPACE:                  openshift-ingress
      DEFAULT_CERTIFICATE_DIR:                   /etc/pki/tls/private
      DEFAULT_DESTINATION_CA_PATH:               /var/run/configmaps/service-ca/service-ca.crt
      STATS_PORT:                                1936
      RELOAD_INTERVAL:                           5s
      ROUTER_ALLOW_WILDCARD_ROUTES:              false
      ROUTER_CANONICAL_HOSTNAME:                 router-default.apps.example.com
      ROUTER_CIPHERS:                            ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
      ROUTER_CIPHERSUITES:                       TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
      ROUTER_DISABLE_HTTP2:                      true
      ROUTER_DISABLE_NAMESPACE_OWNERSHIP_CHECK:  false
      ROUTER_LOAD_BALANCE_ALGORITHM:             random
      ROUTER_METRICS_TYPE:                       haproxy
      ROUTER_SERVICE_NAME:                       default
      ROUTER_SET_FORWARDED_HEADERS:              append
      ROUTER_TCP_BALANCE_SCHEME:                 source
      ROUTER_THREADS:                            4
      SSL_MIN_VERSION:                           TLSv1.2
      ROUTER_USE_PROXY_PROTOCOL:                 false
      GRACEFUL_SHUTDOWN_DELAY:                   1s
      ROUTER_DOMAIN:                             apps.example.com
    Mounts:
      /etc/pki/tls/private from default-certificate (ro)
      /var/run/configmaps/service-ca from service-ca-bundle (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5gtpr (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-certificate:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  router-certs-default
    Optional:    false
  service-ca-bundle:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      service-ca-bundle
    Optional:  false
  kube-api-access-5gtpr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
                             node-role.kubernetes.io/worker=
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  3h57m                 default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
  Normal   Scheduled         3h56m                 default-scheduler  Successfully assigned openshift-ingress/router-default-85d64c4987-bbdnr to localhost.localdomain
  Warning  FailedMount       3h56m (x5 over 3h56m) kubelet            MountVolume.SetUp failed for volume "service-ca-bundle" : configmap references non-existent config key: service-ca.crt
  Normal   Pulling           3h56m                 kubelet            Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ed079be59e5686a18e2a86e3f0e093a63f2ba0493c62e651e742130200bf887"
  Normal   Pulled            3h56m                 kubelet            Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ed079be59e5686a18e2a86e3f0e093a63f2ba0493c62e651e742130200bf887" in 8.289195008s (8.289212529s including waiting)
  Normal   Created           3h56m                 kubelet            Created container router
  Normal   Started           3h56m                 kubelet            Started container router
  Warning  FailedMount       3h41m                 kubelet            Unable to attach or mount volumes: unmounted volumes=[service-ca-bundle], unattached volumes=[service-ca-bundle kube-api-access-5gtpr default-certificate]: timed out waiting for the condition
  Warning  FailedMount       3h41m (x9 over 3h43m) kubelet            MountVolume.SetUp failed for volume "service-ca-bundle" : configmap references non-existent config key: service-ca.crt
  Normal   Created           3h39m                 kubelet            Created container router
  Normal   Pulled            3h39m                 kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ed079be59e5686a18e2a86e3f0e093a63f2ba0493c62e651e742130200bf887" already present on machine
  Normal   Started           3h39m                 kubelet            Started container router
  Warning  FailedMount       3h31m                 kubelet            Unable to attach or mount volumes: unmounted volumes=[service-ca-bundle], unattached volumes=[service-ca-bundle kube-api-access-5gtpr default-certificate]: timed out waiting for the condition
  Warning  FailedMount       3h31m (x9 over 3h33m) kubelet            MountVolume.SetUp failed for volume "service-ca-bundle" : configmap references non-existent config key: service-ca.crt
  Normal   Pulled            3h29m                 kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ed079be59e5686a18e2a86e3f0e093a63f2ba0493c62e651e742130200bf887" already present on machine
  Normal   Created           3h29m                 kubelet            Created container router
  Normal   Started           3h29m                 kubelet            Started container router
  Warning  FailedMount       58m                   kubelet            Unable to attach or mount volumes: unmounted volumes=[service-ca-bundle], unattached volumes=[service-ca-bundle kube-api-access-5gtpr default-certificate]: timed out waiting for the condition
  Warning  FailedMount       56m                   kubelet            Unable to attach or mount volumes: unmounted volumes=[service-ca-bundle], unattached volumes=[kube-api-access-5gtpr default-certificate service-ca-bundle]: timed out waiting for the condition
  Warning  FailedMount       31m (x15 over 74m)    kubelet            Unable to attach or mount volumes: unmounted volumes=[service-ca-bundle], unattached volumes=[default-certificate service-ca-bundle kube-api-access-5gtpr]: timed out waiting for the condition
  Warning  FailedMount       75s (x45 over 76m)    kubelet            MountVolume.SetUp failed for volume "service-ca-bundle" : configmap references non-existent config key: service-ca.crt
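
Note: every router restart stalls on the same mount error: the service-ca-bundle ConfigMap exists but has no service-ca.crt key, which the service-ca controller (itself crash-looping, see its pod below) is expected to inject. A quick way to confirm the missing key and the state of the controller:

  $ oc get configmap service-ca-bundle -n openshift-ingress -o yaml
  $ oc get pods -n openshift-service-ca
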
Name:                 ovnkube-master-86mcc
Namespace:            openshift-ovn-kubernetes
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      ovn-kubernetes-controller
Node:                 localhost.localdomain/192.168.122.17
Start Time:           Mon, 13 Feb 2023 01:25:18 -0500
Labels:               app=ovnkube-master
                      component=network
                      controller-revision-hash=596cdc6fbc
                      kubernetes.io/os=linux
                      openshift.io/component=network
                      ovn-db-pod=true
                      pod-template-generation=1
                      type=infra
Annotations:          target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"}
Status:               Running
IP:                   192.168.122.17
IPs:
  IP:           192.168.122.17
Controlled By:  DaemonSet/ovnkube-master
Containers:
  northd:
    Container ID:  cri-o://db9edeb383fd96562cef6b3283b6c05ab6078715ff61323866ef2a2a2998adf5
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/bash
      -c
      set -xem
      if [[ -f /env/_master ]]; then
        set -o allexport
        source /env/_master
        set +o allexport
      fi

      quit() {
        echo "$(date -Iseconds) - stopping ovn-northd"
        OVN_MANAGE_OVSDB=no /usr/share/ovn/scripts/ovn-ctl stop_northd
        echo "$(date -Iseconds) - ovn-northd stopped"
        rm -f /var/run/ovn/ovn-northd.pid
        exit 0
      }
      # end of quit
      trap quit TERM INT

      echo "$(date -Iseconds) - starting ovn-northd"
      exec ovn-northd \
        --no-chdir "-vconsole:${OVN_LOG_LEVEL}" -vfile:off "-vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m" \
        --pidfile /var/run/ovn/ovn-northd.pid &
      wait $!
    State:          Running
      Started:      Mon, 13 Feb 2023 04:05:51 -0500
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 13 Feb 2023 01:49:03 -0500
      Finished:     Mon, 13 Feb 2023 04:03:54 -0500
    Ready:          True
    Restart Count:  3
    Requests:
      cpu:     10m
      memory:  10Mi
    Environment:
      OVN_LOG_LEVEL:  info
    Mounts:
      /env from env-overrides (rw)
      /run/openvswitch/ from run-openvswitch (rw)
      /run/ovn/ from run-ovn (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n5x8k (ro)
  nbdb:
    Container ID:  cri-o://f8e2769490426fa34754d283a7fd0de536e0eb47dd23862daf8f6dd004d99078
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/bash
      -c
      set -xem
      if [[ -f /env/_master ]]; then
        set -o allexport
        source /env/_master
        set +o allexport
      fi

      quit() {
        echo "$(date -Iseconds) - stopping nbdb"
        /usr/share/ovn/scripts/ovn-ctl stop_nb_ovsdb
        echo "$(date -Iseconds) - nbdb stopped"
        rm -f /var/run/ovn/ovnnb_db.pid
        exit 0
      }
      # end of quit
      trap quit TERM INT

      bracketify() { case "$1" in *:*) echo "[$1]" ;; *) echo "$1" ;; esac }

      compact() {
        sleep 15
        while true; do
          /usr/bin/ovn-appctl -t /var/run/ovn/ovn${1}_db.ctl --timeout=5 ovsdb-server/compact 2>/dev/null || true
          sleep 600
        done
      }

      # initialize variables
      db="nb"
      ovn_db_file="/etc/ovn/ovn${db}_db.db"
      OVN_ARGS="--db-nb-cluster-local-port=9643 --no-monitor"

      echo "$(date -Iseconds) - starting nbdb"
      exec /usr/share/ovn/scripts/ovn-ctl \
        ${OVN_ARGS} \
        --ovn-nb-log="-vconsole:${OVN_LOG_LEVEL} -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m" \
        run_nb_ovsdb &
      db_pid=$!
      compact $db &
      wait $db_pid
    State:          Running
      Started:      Mon, 13 Feb 2023 04:05:52 -0500
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 13 Feb 2023 01:49:03 -0500
      Finished:     Mon, 13 Feb 2023 04:03:54 -0500
    Ready:          True
    Restart Count:  4
    Requests:
      cpu:        10m
      memory:     10Mi
    Readiness:    exec [/bin/bash -c set -xeo pipefail /usr/bin/ovn-appctl -t /var/run/ovn/ovnnb_db.ctl --timeout=5 ovsdb-server/memory-trim-on-compaction on 2>/dev/null ] delay=0s timeout=5s period=10s #success=1 #failure=3
    Environment:
      OVN_LOG_LEVEL:              info
      OVN_NORTHD_PROBE_INTERVAL:  5000
    Mounts:
      /env from env-overrides (rw)
      /run/openvswitch/ from run-openvswitch (rw)
      /run/ovn/ from run-ovn (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n5x8k (ro)
  sbdb:
    Container ID:  cri-o://f177be15e0288672ea2dbeb13fb95d9084f8df64dc932da9015a3f8ed276a58d
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/bash
      -c
      set -xem
      if [[ -f /env/_master ]]; then
        set -o allexport
        source /env/_master
        set +o allexport
      fi

      quit() {
        echo "$(date -Iseconds) - stopping sbdb"
        /usr/share/ovn/scripts/ovn-ctl stop_sb_ovsdb
        echo "$(date -Iseconds) - sbdb stopped"
        rm -f /var/run/ovn/ovnsb_db.pid
        exit 0
      }
      # end of quit
      trap quit TERM INT

      bracketify() { case "$1" in *:*) echo "[$1]" ;; *) echo "$1" ;; esac }

      compact() {
        sleep 15
        while true; do
          /usr/bin/ovn-appctl -t /var/run/ovn/ovn${1}_db.ctl --timeout=5 ovsdb-server/compact 2>/dev/null || true
          sleep 600
        done
      }

      # initialize variables
      db="sb"
      ovn_db_file="/etc/ovn/ovn${db}_db.db"
      OVN_ARGS="--db-sb-cluster-local-port=9644 --no-monitor"

      echo "$(date -Iseconds) - starting sbdb"
      exec /usr/share/ovn/scripts/ovn-ctl \
        ${OVN_ARGS} \
        --ovn-sb-log="-vconsole:${OVN_LOG_LEVEL} -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m" \
        run_sb_ovsdb &
      db_pid=$!
      compact $db &
      wait $db_pid
    State:          Running
      Started:      Mon, 13 Feb 2023 04:05:52 -0500
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 13 Feb 2023 01:49:03 -0500
      Finished:     Mon, 13 Feb 2023 04:03:54 -0500
    Ready:          True
    Restart Count:  3
    Requests:
      cpu:        10m
      memory:     10Mi
    Readiness:    exec [/bin/bash -c set -xeo pipefail /usr/bin/ovn-appctl -t /var/run/ovn/ovnsb_db.ctl --timeout=5 ovsdb-server/memory-trim-on-compaction on 2>/dev/null ] delay=0s timeout=5s period=10s #success=1 #failure=3
    Environment:
      OVN_LOG_LEVEL:  info
    Mounts:
      /env from env-overrides (rw)
      /run/openvswitch/ from run-openvswitch (rw)
      /run/ovn/ from run-ovn (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n5x8k (ro)
  ovnkube-master:
    Container ID:  cri-o://f41a47867216419d533581465db81b38683e4bf9c10e1619892f47c3f82619b5
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/bash
      -c
      set -xe
      if [[ -f "/env/_master" ]]; then
        set -o allexport
        source "/env/_master"
        set +o allexport
      fi

      # K8S_NODE_IP triggers reconcilation of this daemon when node IP changes
      echo "$(date -Iseconds) - starting ovnkube-master, Node: ${K8S_NODE} IP: ${K8S_NODE_IP}"

      echo "I$(date "+%m%d %H:%M:%S.%N") - copy ovn-k8s-cni-overlay"
      cp -f /usr/libexec/cni/ovn-k8s-cni-overlay /cni-bin-dir/

      echo "I$(date "+%m%d %H:%M:%S.%N") - disable conntrack on geneve port"
      iptables -t raw -A PREROUTING -p udp --dport 6081 -j NOTRACK
      iptables -t raw -A OUTPUT -p udp --dport 6081 -j NOTRACK
      ip6tables -t raw -A PREROUTING -p udp --dport 6081 -j NOTRACK
      ip6tables -t raw -A OUTPUT -p udp --dport 6081 -j NOTRACK

      echo "I$(date "+%m%d %H:%M:%S.%N") - starting ovnkube-node"

      gateway_mode_flags="--gateway-mode local --gateway-interface br-ex"

      sysctl net.ipv4.ip_forward=1

      gw_interface_flag=
      # if br-ex1 is configured on the node, we want to use it for external gateway traffic
      if [ -d /sys/class/net/br-ex1 ]; then
        gw_interface_flag="--exgw-interface=br-ex1"
        # the functionality depends on ip_forwarding being enabled
      fi

      echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-master - start ovnkube --init-master ${K8S_NODE} --init-node ${K8S_NODE}"
      exec /usr/bin/ovnkube \
        --init-master "${K8S_NODE}" \
        --init-node "${K8S_NODE}" \
        --config-file=/run/ovnkube-config/ovnkube.conf \
        --loglevel "${OVN_KUBE_LOG_LEVEL}" \
        ${gateway_mode_flags} \
        ${gw_interface_flag} \
        --inactivity-probe="180000" \
        --nb-address "" \
        --sb-address "" \
        --enable-multicast \
        --disable-snat-multiple-gws \
        --acl-logging-rate-limit "20"
    State:          Running
      Started:      Mon, 13 Feb 2023 04:05:52 -0500
    Last State:     Terminated
      Reason:       Error
      Message:      intSlice total 0 items received
I0213 09:03:54.604038    3234 ovnkube.go:126] Received signal terminated. Shutting down
I0213 09:03:54.605605    3234 services_controller.go:180] Shutting down controller ovn-lb-controller
I0213 09:03:54.608736    3234 egress_services_controller.go:222] Shutting down Egress Services controller
E0213 09:03:54.626213    3234 leaderelection.go:306] Failed to release lock: Put "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 127.0.0.1:6443: connect: connection refused
I0213 09:03:54.626256    3234 network_controller_manager.go:238] No longer leader; exiting
I0213 09:03:54.626266    3234 network_controller_manager.go:258] Stopped leader election
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x68 pc=0x193c7ee]

goroutine 1 [running]:
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/network-controller-manager.(*netAttachDefinitionController).GetAllNetworkControllers(0x0)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/network-controller-manager/network_attach_def_controller.go:307 +0x2e
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/network-controller-manager.(*networkControllerManager).Stop(0xc0000e5760)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/network-controller-manager/network_controller_manager.go:421 +0x5c
main.runOvnKube(0xc0002f8580, 0xc0002f8580?)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/cmd/ovnkube/ovnkube.go:314 +0xbc5
main.main.func1(0xc00027a800?)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/cmd/ovnkube/ovnkube.go:108 +0x1d
github.com/urfave/cli/v2.(*App).RunContext(0xc0003ac180, {0x2148e68?, 0xc00041d700}, {0xc000052160, 0x15, 0x16})
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/github.com/urfave/cli/v2/app.go:315 +0x9f5
main.main()
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/cmd/ovnkube/ovnkube.go:132 +0xba9
      Exit Code:    2
      Started:      Mon, 13 Feb 2023 01:49:03 -0500
      Finished:     Mon, 13 Feb 2023 04:03:54 -0500
    Ready:          True
    Restart Count:  3
    Requests:
      cpu:        10m
      memory:     60Mi
    Readiness:    exec [test -f /etc/cni/net.d/10-ovn-kubernetes.conf] delay=5s timeout=1s period=5s #success=1 #failure=3
    Environment:
      OVN_KUBE_LOG_LEVEL:  4
      K8S_NODE:             (v1:spec.nodeName)
      K8S_NODE_IP:          (v1:status.hostIP)
    Mounts:
      /cni-bin-dir from host-cni-bin (rw)
      /dev/log from log-socket (rw)
      /env from env-overrides (rw)
      /etc/cni/net.d from host-cni-netd (rw)
      /etc/openvswitch from etc-openvswitch-node (rw)
      /etc/ovn/ from etc-openvswitch-node (rw)
      /etc/systemd/system from systemd-units (ro)
      /host from host-slash (ro)
      /run/netns from host-run-netns (ro)
      /run/openvswitch/ from run-openvswitch (rw)
      /run/ovn-kubernetes/ from host-run-ovn-kubernetes (rw)
      /run/ovn/ from run-ovn (rw)
      /run/ovnkube-config/ from ovnkube-config (rw)
      /var/lib/microshift/resources/kubeadmin from kubeconfig (rw)
      /var/log/ovn from node-log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n5x8k (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  systemd-units:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/systemd/system
    HostPathType:
  run-openvswitch:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/openvswitch
    HostPathType:
  run-ovn:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/ovn
    HostPathType:
  host-slash:
    Type:          HostPath (bare host directory volume)
    Path:          /
    HostPathType:
  host-run-netns:
    Type:          HostPath (bare host directory volume)
    Path:          /run/netns
    HostPathType:
  etc-openvswitch-node:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/openvswitch
    HostPathType:
  node-log:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/ovn
    HostPathType:
  log-socket:
    Type:          HostPath (bare host directory volume)
    Path:          /dev/log
    HostPathType:
  host-run-ovn-kubernetes:
    Type:          HostPath (bare host directory volume)
    Path:          /run/ovn-kubernetes
    HostPathType:
  host-cni-netd:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:
  host-cni-bin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/microshift/resources/kubeadmin
    HostPathType:
  ovnkube-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      ovnkube-config
    Optional:  false
  env-overrides:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      env-overrides
    Optional:  true
  kube-api-access-n5x8k:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  3h57m  default-scheduler  Successfully assigned openshift-ovn-kubernetes/ovnkube-master-86mcc to localhost.localdomain
  Normal  Pulling    3h56m  kubelet            Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0"
  Normal  Pulled     3h56m  kubelet            Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" in 11.502027287s (11.50204319s including waiting)
  Normal  Created    3h56m  kubelet            Created container northd
  Normal  Started    3h56m  kubelet            Started container northd
  Normal  Pulled     3h56m  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" already present on machine
  Normal  Created    3h56m  kubelet            Created container nbdb
  Normal  Started    3h56m  kubelet            Started container nbdb
  Normal  Pulled     3h56m  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" already present on machine
  Normal  Created    3h56m  kubelet            Created container sbdb
  Normal  Started    3h56m  kubelet            Started container sbdb
  Normal  Pulled     3h56m  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" already present on machine
  Normal  Created    3h56m  kubelet            Created container ovnkube-master
  Normal  Started    3h56m  kubelet            Started container ovnkube-master
  Normal  Pulled     3h43m  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" already present on machine
  Normal  Created    3h43m  kubelet            Created container northd
  Normal  Started    3h43m  kubelet            Started container northd
  Normal  Pulled     3h43m  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" already present on machine
  Normal  Created    3h43m  kubelet            Created container nbdb
  Normal  Started    3h43m  kubelet            Started container nbdb
  Normal  Pulled     3h43m  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" already present on machine
  Normal  Created    3h43m  kubelet            Created container sbdb
  Normal  Started    3h43m  kubelet            Started container sbdb
  Normal  Pulled     3h43m  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" already present on machine
  Normal  Created    3h43m  kubelet            Created container ovnkube-master
  Normal  Started    3h43m  kubelet            Started container ovnkube-master
  Normal  Pulled     3h33m  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" already present on machine
  Normal  Created    3h33m  kubelet            Created container northd
  Normal  Started    3h33m  kubelet            Started container northd
  Normal  Pulled     3h33m  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" already present on machine
  Normal  Created    3h33m  kubelet            Created container nbdb
  Normal  Started    3h33m  kubelet            Started container nbdb
  Normal  Pulled     3h33m  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" already present on machine
  Normal  Created    3h33m  kubelet            Created container sbdb
  Normal  Started    3h33m  kubelet            Started container sbdb
  Normal  Pulled     3h33m  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" already present on machine
  Normal  Created    3h33m  kubelet            Created container ovnkube-master
  Normal  Started    3h33m  kubelet            Started container ovnkube-master
  Normal  Pulled     76m    kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" already present on machine
  Normal  Created    76m    kubelet            Created container northd
  Normal  Started    76m    kubelet            Started container northd
  Normal  Pulled     76m    kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" already present on machine
  Normal  Created    76m    kubelet            Created container nbdb
  Normal  Started    76m    kubelet            Started container nbdb
  Normal  Pulled     76m    kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" already present on machine
  Normal  Created    76m    kubelet            Created container sbdb
  Normal  Started    76m    kubelet            Started container sbdb
  Normal  Pulled     76m    kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" already present on machine
  Normal  Created    76m    kubelet            Created container ovnkube-master
  Normal  Started    76m    kubelet            Started container ovnkube-master
Name:                 ovnkube-node-6gpbh
Namespace:            openshift-ovn-kubernetes
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      ovn-kubernetes-node
Node:                 localhost.localdomain/192.168.122.17
Start Time:           Mon, 13 Feb 2023 01:25:18 -0500
Labels:               app=ovnkube-node
                      component=network
                      controller-revision-hash=85c9d69867
                      kubernetes.io/os=linux
                      openshift.io/component=network
                      pod-template-generation=1
                      type=infra
Annotations:          target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"}
Status:               Running
IP:                   192.168.122.17
IPs:
  IP:           192.168.122.17
Controlled By:  DaemonSet/ovnkube-node
Containers:
  ovn-controller:
    Container ID:  cri-o://c34e60fff312b4c0635e761e5635804aee540f86b29de738e28b57e449a87950
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/bash
      -c
      set -e
      if [[ -f "/env/${K8S_NODE}" ]]; then
        set -o allexport
        source "/env/${K8S_NODE}"
        set +o allexport
      fi

      # K8S_NODE_IP triggers reconcilation of this daemon when node IP changes
      echo "$(date -Iseconds) - starting ovn-controller, Node: ${K8S_NODE} IP: ${K8S_NODE_IP}"

      exec ovn-controller unix:/var/run/openvswitch/db.sock -vfile:off \
        --no-chdir --pidfile=/var/run/ovn/ovn-controller.pid \
        --syslog-method="null" \
        --log-file=/var/log/ovn/acl-audit-log.log \
        -vFACILITY:"local0" \
        -vconsole:"${OVN_LOG_LEVEL}" -vconsole:"acl_log:off" \
        -vPATTERN:console:"%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m" \
        -vsyslog:"acl_log:info" \
        -vfile:"acl_log:info"
    State:          Running
      Started:      Mon, 13 Feb 2023 04:05:51 -0500
    Last State:     Terminated
      Reason:       Error
      Message:      lport openshift-storage_topolvm-node-9bnp5 for this chassis.
2023-02-13T06:49:15.111Z|00041|binding|INFO|openshift-storage_topolvm-node-9bnp5: Claiming 0a:58:0a:2a:00:05 10.42.0.5
2023-02-13T06:49:15.117Z|00042|binding|INFO|Setting lport openshift-storage_topolvm-node-9bnp5 ovn-installed in OVS
2023-02-13T06:49:15.117Z|00043|binding|INFO|Setting lport openshift-storage_topolvm-node-9bnp5 up in Southbound
2023-02-13T06:49:17.032Z|00044|binding|INFO|Claiming lport openshift-service-ca_service-ca-7bd9547b57-vhmkf for this chassis.
2023-02-13T06:49:17.032Z|00045|binding|INFO|openshift-service-ca_service-ca-7bd9547b57-vhmkf: Claiming 0a:58:0a:2a:00:03 10.42.0.3
2023-02-13T06:49:17.053Z|00046|binding|INFO|Setting lport openshift-service-ca_service-ca-7bd9547b57-vhmkf ovn-installed in OVS
2023-02-13T06:49:17.053Z|00047|binding|INFO|Setting lport openshift-service-ca_service-ca-7bd9547b57-vhmkf up in Southbound
2023-02-13T06:49:18.109Z|00048|binding|INFO|Claiming lport openshift-storage_topolvm-controller-78cbfc4867-qdfs4 for this chassis.
2023-02-13T06:49:18.109Z|00049|binding|INFO|openshift-storage_topolvm-controller-78cbfc4867-qdfs4: Claiming 0a:58:0a:2a:00:06 10.42.0.6
2023-02-13T06:49:18.126Z|00050|binding|INFO|Setting lport openshift-storage_topolvm-controller-78cbfc4867-qdfs4 ovn-installed in OVS
2023-02-13T06:49:18.126Z|00051|binding|INFO|Setting lport openshift-storage_topolvm-controller-78cbfc4867-qdfs4 up in Southbound
2023-02-13T06:53:12.234Z|00052|binding|INFO|Claiming lport openshift-ingress_router-default-85d64c4987-bbdnr for this chassis.
2023-02-13T06:53:12.234Z|00053|binding|INFO|openshift-ingress_router-default-85d64c4987-bbdnr: Claiming 0a:58:0a:2a:00:04 10.42.0.4
2023-02-13T06:53:12.251Z|00054|binding|INFO|Setting lport openshift-ingress_router-default-85d64c4987-bbdnr ovn-installed in OVS
2023-02-13T06:53:12.251Z|00055|binding|INFO|Setting lport openshift-ingress_router-default-85d64c4987-bbdnr up in Southbound
2023-02-13T09:03:54.700Z|00056|fatal_signal|WARN|terminating with signal 15 (Terminated)
      Exit Code:    143
      Started:      Mon, 13 Feb 2023 01:49:03 -0500
      Finished:     Mon, 13 Feb 2023 04:03:54 -0500
    Ready:          True
    Restart Count:  3
    Requests:
      cpu:     10m
      memory:  10Mi
    Environment:
      OVN_LOG_LEVEL:  info
      K8S_NODE:        (v1:spec.nodeName)
      K8S_NODE_IP:     (v1:status.hostIP)
    Mounts:
      /dev/log from log-socket (rw)
      /env from env-overrides (rw)
      /etc/openvswitch from etc-openvswitch (rw)
      /etc/ovn/ from etc-openvswitch (rw)
      /run/openvswitch from run-openvswitch (rw)
      /run/ovn/ from run-ovn (rw)
      /var/lib/openvswitch from var-lib-openvswitch (rw)
      /var/log/ovn from node-log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q9d8p (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  var-lib-openvswitch:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/openvswitch/data
    HostPathType:
  etc-openvswitch:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/openvswitch
    HostPathType:
  run-openvswitch:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/openvswitch
    HostPathType:
  run-ovn:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/ovn
    HostPathType:
  node-log:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/ovn
    HostPathType:
  log-socket:
    Type:          HostPath (bare host directory volume)
    Path:          /dev/log
    HostPathType:
  env-overrides:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      env-overrides
    Optional:  true
  kube-api-access-q9d8p:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  3h57m  default-scheduler  Successfully assigned openshift-ovn-kubernetes/ovnkube-node-6gpbh to localhost.localdomain
  Normal  Pulling    3h56m  kubelet            Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0"
  Normal  Pulled     3h56m  kubelet            Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" in 11.490819045s (11.490833548s including waiting)
  Normal  Created    3h56m  kubelet            Created container ovn-controller
  Normal  Started    3h56m  kubelet            Started container ovn-controller
  Normal  Pulled     3h43m  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" already present on machine
  Normal  Created    3h43m  kubelet            Created container ovn-controller
  Normal  Started    3h43m  kubelet            Started container ovn-controller
  Normal  Pulled     3h33m  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" already present on machine
  Normal  Created    3h33m  kubelet            Created container ovn-controller
  Normal  Started    3h33m  kubelet            Started container ovn-controller
  Normal  Pulled     76m    kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e21569ba6b2da124a92e516e50982d42c0942007dbe217570ee84ffcbca0cef0" already present on machine
  Normal  Created    76m    kubelet            Created container ovn-controller
  Normal  Started    76m    kubelet            Started container ovn-controller
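
Note: the FailedCreatePodSandBox events on the dns-default, service-ca, and topolvm-controller pods all report the same cause: the CNI plugin could not reach /var/run/ovn-kubernetes/cni/ovn-cni-server.sock. When this recurs, a reasonable first step is to check on the node that the socket exists and that the ovnkube pods are Ready:

  $ ls -l /var/run/ovn-kubernetes/cni/
  $ oc get pods -n openshift-ovn-kubernetes -o wide
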
Name:                 service-ca-7bd9547b57-vhmkf
Namespace:            openshift-service-ca
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      service-ca
Node:                 localhost.localdomain/192.168.122.17
Start Time:           Mon, 13 Feb 2023 01:25:46 -0500
Labels:               app=service-ca
                      pod-template-hash=7bd9547b57
                      service-ca=true
Annotations:          k8s.ovn.org/pod-networks:
                        {"default":{"ip_addresses":["10.42.0.3/24"],"mac_address":"0a:58:0a:2a:00:03","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.3/24","gat...
                      openshift.io/scc: restricted-v2
                      seccomp.security.alpha.kubernetes.io/pod: runtime/default
                      target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"}
Status:               Running
IP:                   10.42.0.3
IPs:
  IP:           10.42.0.3
Controlled By:  ReplicaSet/service-ca-7bd9547b57
Containers:
  service-ca-controller:
    Container ID:  cri-o://00348234f7d3901bee150fecbf8c5d4504a8eee545b9a62948529b8a0774adcf
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c14f0342eebce36041f0904e2a4bcccd933d51c036d9319caeb19e656cf0a350
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c14f0342eebce36041f0904e2a4bcccd933d51c036d9319caeb19e656cf0a350
    Port:          8443/TCP
    Host Port:     0/TCP
    Command:
      service-ca-operator
      controller
    Args:
      -v=2
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Mon, 13 Feb 2023 05:17:14 -0500
      Finished:     Mon, 13 Feb 2023 05:17:48 -0500
    Ready:          False
    Restart Count:  19
    Requests:
      cpu:        10m
      memory:     120Mi
    Environment:  <none>
    Mounts:
      /var/run/configmaps/signing-cabundle from signing-cabundle (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ghk5h (ro)
      /var/run/secrets/signing-key from signing-key (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  signing-key:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  signing-key
    Optional:    false
  signing-cabundle:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      signing-cabundle
    Optional:  false
  kube-api-access-ghk5h:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   Burstable
Node-Selectors:              node-role.kubernetes.io/master=
Tolerations:                 node-role.kubernetes.io/master:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 120s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 120s
Events:
  Type     Reason                  Age                   From               Message
  ----     ------                  ----                  ----               -------
  Warning  FailedScheduling        3h57m                 default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
  Normal   Scheduled               3h56m                 default-scheduler  Successfully assigned openshift-service-ca/service-ca-7bd9547b57-vhmkf to localhost.localdomain
  Normal   Pulling                 3h56m                 kubelet            Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c14f0342eebce36041f0904e2a4bcccd933d51c036d9319caeb19e656cf0a350"
  Normal   Pulled                  3h56m                 kubelet            Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c14f0342eebce36041f0904e2a4bcccd933d51c036d9319caeb19e656cf0a350" in 5.911582005s (5.911600588s including waiting)
  Normal   Created                 3h56m                 kubelet            Created container service-ca-controller
  Normal   Started                 3h56m                 kubelet            Started container service-ca-controller
  Warning  FailedCreatePodSandBox  3h43m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_service-ca-7bd9547b57-vhmkf_openshift-service-ca_2e7bce65-b199-4d8a-bc2f-c63494419251_0(7d4d3757bd8ba5388ae3b938cd228d27f3fa18d93bc5c2e6e58e3b2101fb00c0): error adding pod openshift-service-ca_service-ca-7bd9547b57-vhmkf to CNI network "ovn-kubernetes": plugin type="ovn-k8s-cni-overlay" name="ovn-kubernetes" failed (add): failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory
  Normal   Created                 3h43m                 kubelet            Created container service-ca-controller
  Normal   Pulled                  3h43m                 kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c14f0342eebce36041f0904e2a4bcccd933d51c036d9319caeb19e656cf0a350" already present on machine
  Normal   Started                 3h43m                 kubelet            Started container service-ca-controller
  Warning  FailedCreatePodSandBox  3h33m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_service-ca-7bd9547b57-vhmkf_openshift-service-ca_2e7bce65-b199-4d8a-bc2f-c63494419251_0(452db5bf8646143142e09df67026a84bbf1f261f7d7c5a48fa43b72ffd13e9e4): error adding pod openshift-service-ca_service-ca-7bd9547b57-vhmkf to CNI network "ovn-kubernetes": plugin type="ovn-k8s-cni-overlay" name="ovn-kubernetes" failed (add): failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory
  Normal   Pulled                  3h33m                 kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c14f0342eebce36041f0904e2a4bcccd933d51c036d9319caeb19e656cf0a350" already present on machine
  Normal   Created                 3h33m                 kubelet            Created container service-ca-controller
  Normal   Started                 3h33m                 kubelet            Started container service-ca-controller
  Normal   Created                 73m (x4 over 76m)     kubelet            Created container service-ca-controller
  Normal   Started                 73m (x4 over 76m)     kubelet            Started container service-ca-controller
  Normal   Pulled                  16m (x15 over 76m)    kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c14f0342eebce36041f0904e2a4bcccd933d51c036d9319caeb19e656cf0a350" already present on machine
  Warning  BackOff                 95s (x299 over 75m)   kubelet            Back-off restarting failed container service-ca-controller in pod service-ca-7bd9547b57-vhmkf_openshift-service-ca(2e7bce65-b199-4d8a-bc2f-c63494419251)
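
Note: service-ca-controller exits with code 255 within seconds of each start (Restart Count: 19), which in turn leaves the service-ca.crt key missing for consumers such as the router's service-ca-bundle. Its previous log and the signing material it mounts are the obvious things to inspect:

  $ oc logs -n openshift-service-ca service-ca-7bd9547b57-vhmkf --previous
  $ oc get secret signing-key -n openshift-service-ca
  $ oc get configmap signing-cabundle -n openshift-service-ca
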
Name:             topolvm-controller-78cbfc4867-qdfs4
Namespace:        openshift-storage
Priority:         0
Service Account:  topolvm-controller
Node:             localhost.localdomain/192.168.122.17
Start Time:       Mon, 13 Feb 2023 01:25:46 -0500
Labels:           app.kubernetes.io/component=topolvm-controller
                  app.kubernetes.io/managed-by=lvms-operator
                  app.kubernetes.io/name=topolvm-csi-driver
                  app.kubernetes.io/part-of=lvms-provisioner
                  pod-template-hash=78cbfc4867
Annotations:      k8s.ovn.org/pod-networks:
                    {"default":{"ip_addresses":["10.42.0.6/24"],"mac_address":"0a:58:0a:2a:00:06","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.6/24","gat...
Status:           Running
IP:               10.42.0.6
IPs:
  IP:           10.42.0.6
Controlled By:  ReplicaSet/topolvm-controller-78cbfc4867
Init Containers:
  self-signed-cert-generator:
    Container ID:  cri-o://97a2a926204ad7c917872849ebc0e449918a10cd5c1bfc93f4f29381a484171e
    Image:         registry.access.redhat.com/ubi8/openssl@sha256:9e743d947be073808f7f1750a791a3dbd81e694e37161e8c6c6057c2c342d671
    Image ID:      registry.access.redhat.com/ubi8/openssl@sha256:9e743d947be073808f7f1750a791a3dbd81e694e37161e8c6c6057c2c342d671
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/bin/bash
      -c
      openssl req -nodes -x509 -newkey rsa:4096 -subj '/DC=self_signed_certificate' -keyout /certs/tls.key -out /certs/tls.crt -days 3650
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 13 Feb 2023 01:49:18 -0500
      Finished:     Mon, 13 Feb 2023 01:49:18 -0500
    Ready:          True
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /certs from certs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gbckr (ro)
Containers:
  topolvm-controller:
    Container ID:  cri-o://f27b7b3690808abc7cbd488628ee8334abc9252b32c9e8881cecbbb6c9db613a
    Image:         registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1
    Image ID:      registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1
    Port:          9808/TCP
    Host Port:     0/TCP
    Command:
      /topolvm-controller
      --cert-dir=/certs
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 13 Feb 2023 05:19:23 -0500
      Finished:     Mon, 13 Feb 2023 05:19:26 -0500
    Ready:          False
    Restart Count:  21
    Requests:
      cpu:        2m
      memory:     31Mi
    Liveness:     http-get http://:healthz/healthz delay=10s timeout=3s period=60s #success=1 #failure=3
    Readiness:    http-get http://:8080/metrics delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /certs from certs (rw)
      /run/topolvm from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gbckr (ro)
  csi-provisioner:
    Container ID:  cri-o://f09b8082c7856eab6dbfe430b919355faadcd2b89b57ee546dd043b8cb91dd61
    Image:         registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:199eac2ba4c8390daa511b040315e415cfbcfa80aa7af978a33624445b96c17c
    Image ID:      registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:199eac2ba4c8390daa511b040315e415cfbcfa80aa7af978a33624445b96c17c
    Port:          <none>
    Host Port:     <none>
    Args:
      --csi-address=/run/topolvm/csi-topolvm.sock
      --enable-capacity
      --capacity-ownerref-level=2
      --capacity-poll-interval=30s
      --feature-gates=Topology=true
    State:          Running
      Started:      Mon, 13 Feb 2023 04:05:52 -0500
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Mon, 13 Feb 2023 01:49:20 -0500
      Finished:     Mon, 13 Feb 2023 04:03:54 -0500
    Ready:          True
    Restart Count:  3
    Requests:
      cpu:     2m
      memory:  35Mi
    Environment:
      POD_NAME:   topolvm-controller-78cbfc4867-qdfs4 (v1:metadata.name)
      NAMESPACE:  openshift-storage (v1:metadata.namespace)
    Mounts:
      /run/topolvm from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gbckr (ro)
  csi-resizer:
    Container ID:  cri-o://a3fc844f08a83248a30b2c8a8beb9703f1a6a2d1eb9bf6fb94a946be9c510bda
    Image:         registry.redhat.io/openshift4/ose-csi-external-resizer@sha256:9d486daffd348664c00d8b80bd0da973b902f3650acdef37e1b813278ed6c107
    Image ID:      registry.redhat.io/openshift4/ose-csi-external-resizer@sha256:732822902f4db2dfda555d020b62e2b117ab775b2fab0cb9a00bcd96b639c47a
    Port:          <none>
    Host Port:     <none>
    Args:
      --csi-address=/run/topolvm/csi-topolvm.sock
    State:          Running
      Started:      Mon, 13 Feb 2023 04:05:52 -0500
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Mon, 13 Feb 2023 01:49:20 -0500
      Finished:     Mon, 13 Feb 2023 04:03:54 -0500
    Ready:          True
    Restart Count:  3
    Requests:
      cpu:        1m
      memory:     23Mi
    Environment:  <none>
    Mounts:
      /run/topolvm from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gbckr (ro)
  liveness-probe:
    Container ID:  cri-o://d6dfaf9d8719a2143467af320617c346aef0074a719f9c9221511c832e47f262
    Image:         registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:9df24be671271f5ea9414bfd08e58bc2fa3dc4bc68075002f3db0fd020b58be0
    Image ID:      registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:6976cb9aa79c0ab33783342f2b5a6066cda4c177d0205c73b8f63e6a0641338e
    Port:          <none>
    Host Port:     <none>
    Args:
      --csi-address=/run/topolvm/csi-topolvm.sock
    State:          Running
      Started:      Mon, 13 Feb 2023 04:05:53 -0500
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Mon, 13 Feb 2023 01:49:20 -0500
      Finished:     Mon, 13 Feb 2023 04:03:54 -0500
    Ready:          True
    Restart Count:  3
    Requests:
      cpu:        1m
      memory:     9Mi
    Environment:  <none>
    Mounts:
      /run/topolvm from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gbckr (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  socket-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  certs:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-gbckr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age    From               Message
  ----     ------                  ----   ----               -------
  Warning  FailedScheduling        3h57m  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
  Normal   Scheduled               3h56m  default-scheduler  Successfully assigned openshift-storage/topolvm-controller-78cbfc4867-qdfs4 to localhost.localdomain
  Normal   Pulling                 3h56m  kubelet            Pulling image "registry.access.redhat.com/ubi8/openssl@sha256:9e743d947be073808f7f1750a791a3dbd81e694e37161e8c6c6057c2c342d671"
  Normal   Pulled                  3h56m  kubelet            Successfully pulled image "registry.access.redhat.com/ubi8/openssl@sha256:9e743d947be073808f7f1750a791a3dbd81e694e37161e8c6c6057c2c342d671" in 6.651550266s (6.651565782s including waiting)
  Normal   Created                 3h56m  kubelet            Created container self-signed-cert-generator
  Normal   Started                 3h56m  kubelet            Started container self-signed-cert-generator
  Normal   Pulling                 3h56m  kubelet            Pulling image "registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1"
  Normal   Pulled                  3h56m  kubelet            Successfully pulled image "registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1" in 15.314273598s (15.314287345s including waiting)
  Normal   Created                 3h56m  kubelet            Created container topolvm-controller
  Normal   Started                 3h56m  kubelet            Started container topolvm-controller
  Normal   Pulling                 3h56m  kubelet            Pulling image "registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:199eac2ba4c8390daa511b040315e415cfbcfa80aa7af978a33624445b96c17c"
  Normal   Pulled                  3h56m  kubelet            Successfully pulled image "registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:199eac2ba4c8390daa511b040315e415cfbcfa80aa7af978a33624445b96c17c" in 9.605049968s (9.605080919s including waiting)
  Normal   Created                 3h56m  kubelet            Created container csi-provisioner
  Normal   Started                 3h56m  kubelet            Started container csi-provisioner
  Normal   Pulling                 3h56m  kubelet            Pulling image "registry.redhat.io/openshift4/ose-csi-external-resizer@sha256:9d486daffd348664c00d8b80bd0da973b902f3650acdef37e1b813278ed6c107"
  Normal   Pulled                  3h55m  kubelet            Successfully pulled image "registry.redhat.io/openshift4/ose-csi-external-resizer@sha256:9d486daffd348664c00d8b80bd0da973b902f3650acdef37e1b813278ed6c107" in 8.117500145s (8.117511574s including waiting)
  Normal   Created                 3h55m  kubelet            Created container csi-resizer
  Normal   Started                 3h55m  kubelet            Started container csi-resizer
  Normal   Pulled                  3h55m  kubelet            Container image "registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:9df24be671271f5ea9414bfd08e58bc2fa3dc4bc68075002f3db0fd020b58be0" already present on machine
  Normal   Created                 3h55m  kubelet            Created container liveness-probe
  Normal   Started                 3h55m  kubelet            Started container liveness-probe
  Warning  FailedCreatePodSandBox  3h43m  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_topolvm-controller-78cbfc4867-qdfs4_openshift-storage_9744aca6-9463-42d2-a05e-f1e3af7b175e_0(354126aa035d6bf7b6b6d8267fc644ce5397c3d69946e0fac3bb302dabfd380b): error adding pod openshift-storage_topolvm-controller-78cbfc4867-qdfs4 to CNI network "ovn-kubernetes": plugin type="ovn-k8s-cni-overlay" name="ovn-kubernetes" failed (add): failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory
  Normal   Pulled                  3h43m  kubelet            Container image "registry.access.redhat.com/ubi8/openssl@sha256:9e743d947be073808f7f1750a791a3dbd81e694e37161e8c6c6057c2c342d671" already present on machine
  Normal   Created                 3h43m  kubelet            Created container self-signed-cert-generator
  Normal   Started                 3h43m  kubelet            Started container self-signed-cert-generator
  Normal   Pulled                  3h43m  kubelet            Container
image "registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1" already present on machine Normal Created 3h43m kubelet Created container topolvm-controller Normal Started 3h43m kubelet Started container topolvm-controller Normal Pulled 3h43m kubelet Container image "registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:199eac2ba4c8390daa511b040315e415cfbcfa80aa7af978a33624445b96c17c" already present on machine Normal Created 3h43m kubelet Created container csi-provisioner Normal Started 3h43m kubelet Started container csi-provisioner Normal Pulled 3h43m kubelet Container image "registry.redhat.io/openshift4/ose-csi-external-resizer@sha256:9d486daffd348664c00d8b80bd0da973b902f3650acdef37e1b813278ed6c107" already present on machine Normal Created 3h43m kubelet Created container csi-resizer Normal Started 3h43m kubelet Started container csi-resizer Normal Pulled 3h43m kubelet Container image "registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:9df24be671271f5ea9414bfd08e58bc2fa3dc4bc68075002f3db0fd020b58be0" already present on machine Normal Created 3h43m kubelet Created container liveness-probe Normal Started 3h43m kubelet Started container liveness-probe Warning FailedCreatePodSandBox 3h33m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_topolvm-controller-78cbfc4867-qdfs4_openshift-storage_9744aca6-9463-42d2-a05e-f1e3af7b175e_0(415fe639bfd9f361703f0b8a658c74d2c07dbe9b0630a0e06f07be1e85ffb84a): error adding pod openshift-storage_topolvm-controller-78cbfc4867-qdfs4 to CNI network "ovn-kubernetes": plugin type="ovn-k8s-cni-overlay" name="ovn-kubernetes" failed (add): failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory Normal Pulled 3h33m kubelet Container image "registry.access.redhat.com/ubi8/openssl@sha256:9e743d947be073808f7f1750a791a3dbd81e694e37161e8c6c6057c2c342d671" already present on machine Normal Created 3h33m kubelet Created container self-signed-cert-generator Normal Started 3h33m kubelet Started container self-signed-cert-generator Normal Pulled 3h33m kubelet Container image "registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1" already present on machine Normal Created 3h33m kubelet Created container topolvm-controller Normal Started 3h33m kubelet Started container topolvm-controller Normal Pulled 3h33m kubelet Container image "registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:199eac2ba4c8390daa511b040315e415cfbcfa80aa7af978a33624445b96c17c" already present on machine Normal Created 3h33m kubelet Created container csi-provisioner Normal Started 3h33m kubelet Started container csi-provisioner Normal Pulled 3h33m kubelet Container image "registry.redhat.io/openshift4/ose-csi-external-resizer@sha256:9d486daffd348664c00d8b80bd0da973b902f3650acdef37e1b813278ed6c107" already present on machine Normal Created 3h33m kubelet Created container csi-resizer Normal Started 3h33m kubelet Started container csi-resizer Normal Pulled 3h33m kubelet Container image "registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:9df24be671271f5ea9414bfd08e58bc2fa3dc4bc68075002f3db0fd020b58be0" already present on machine Normal Created 3h33m kubelet Created container liveness-probe Normal Started 3h33m kubelet Started container liveness-probe Normal Pulled 76m kubelet Container image 
"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:199eac2ba4c8390daa511b040315e415cfbcfa80aa7af978a33624445b96c17c" already present on machine Normal Created 76m kubelet Created container csi-provisioner Normal Started 76m kubelet Started container csi-provisioner Normal Pulled 76m kubelet Container image "registry.redhat.io/openshift4/ose-csi-external-resizer@sha256:9d486daffd348664c00d8b80bd0da973b902f3650acdef37e1b813278ed6c107" already present on machine Normal Created 76m kubelet Created container csi-resizer Normal Started 76m kubelet Started container csi-resizer Normal Pulled 76m kubelet Container image "registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:9df24be671271f5ea9414bfd08e58bc2fa3dc4bc68075002f3db0fd020b58be0" already present on machine Normal Created 76m kubelet Created container liveness-probe Normal Started 76m kubelet Started container liveness-probe Normal Pulled 76m (x2 over 76m) kubelet Container image "registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1" already present on machine Normal Created 76m (x2 over 76m) kubelet Created container topolvm-controller Normal Started 76m (x2 over 76m) kubelet Started container topolvm-controller Warning Unhealthy 76m (x4 over 76m) kubelet Readiness probe failed: Get "http://10.42.0.6:8080/metrics": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Warning ProbeError 76m (x5 over 76m) kubelet Readiness probe error: Get "http://10.42.0.6:8080/metrics": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body: Warning BackOff 94s (x349 over 76m) kubelet Back-off restarting failed container topolvm-controller in pod topolvm-controller-78cbfc4867-qdfs4_openshift-storage(9744aca6-9463-42d2-a05e-f1e3af7b175e) Name: topolvm-node-9bnp5 Namespace: openshift-storage Priority: 0 Service Account: topolvm-node Node: localhost.localdomain/192.168.122.17 Start Time: Mon, 13 Feb 2023 01:25:46 -0500 Labels: app.kubernetes.io/component=topolvm-node app.kubernetes.io/managed-by=lvms-operator app.kubernetes.io/name=topolvm-csi-driver app.kubernetes.io/part-of=lvms-provisioner controller-revision-hash=9858944c6 pod-template-generation=1 Annotations: k8s.ovn.org/pod-networks: {"default":{"ip_addresses":["10.42.0.5/24"],"mac_address":"0a:58:0a:2a:00:05","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.5/24","gat... 
Name:                 topolvm-node-9bnp5
Namespace:            openshift-storage
Priority:             0
Service Account:      topolvm-node
Node:                 localhost.localdomain/192.168.122.17
Start Time:           Mon, 13 Feb 2023 01:25:46 -0500
Labels:               app.kubernetes.io/component=topolvm-node
                      app.kubernetes.io/managed-by=lvms-operator
                      app.kubernetes.io/name=topolvm-csi-driver
                      app.kubernetes.io/part-of=lvms-provisioner
                      controller-revision-hash=9858944c6
                      pod-template-generation=1
Annotations:          k8s.ovn.org/pod-networks: {"default":{"ip_addresses":["10.42.0.5/24"],"mac_address":"0a:58:0a:2a:00:05","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.5/24","gat...
                      lvms.microshift.io/lvmd_config_sha256sum: f19e3b3431195ba269cf869ce546154185337e94f02d9b13ced428cca55a77c3
Status:               Running
IP:                   10.42.0.5
IPs:
  IP:  10.42.0.5
Controlled By:  DaemonSet/topolvm-node
Init Containers:
  file-checker:
    Container ID:  cri-o://9ea8319697af0764b9e431751b819b8e571bd2a58f7e7c706249c359ac413df9
    Image:         registry.access.redhat.com/ubi8/openssl@sha256:9e743d947be073808f7f1750a791a3dbd81e694e37161e8c6c6057c2c342d671
    Image ID:      registry.access.redhat.com/ubi8/openssl@sha256:9e743d947be073808f7f1750a791a3dbd81e694e37161e8c6c6057c2c342d671
    Port:
    Host Port:
    Command:
      /usr/bin/bash
      -c
      until [ -f /etc/topolvm/lvmd.yaml ]; do echo waiting for lvmd config file; sleep 5; done
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 13 Feb 2023 01:49:15 -0500
      Finished:     Mon, 13 Feb 2023 01:49:15 -0500
    Ready:          True
    Restart Count:  2
    Environment:
    Mounts:
      /etc/topolvm from lvmd-config-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sjk85 (ro)
Containers:
  lvmd:
    Container ID:  cri-o://eb94e774151d72676fa45a79264e41379083829bd0b547474e9a96138dc14075
    Image:         registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1
    Image ID:      registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1
    Port:
    Host Port:
    Command:
      /lvmd
      --config=/etc/topolvm/lvmd.yaml
      --container=true
    State:          Running
      Started:      Mon, 13 Feb 2023 04:05:51 -0500
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 13 Feb 2023 01:49:16 -0500
      Finished:     Mon, 13 Feb 2023 04:03:59 -0500
    Ready:          True
    Restart Count:  3
    Requests:
      cpu:     1m
      memory:  22Mi
    Environment:
    Mounts:
      /etc/topolvm from lvmd-config-dir (rw)
      /run/lvmd from lvmd-socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sjk85 (ro)
  topolvm-node:
    Container ID:  cri-o://285852c3ef1d109da6645efcc516923c4e1eca1a984a66d0daaa00014e5c4b49
    Image:         registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1
    Image ID:      registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1
    Port:          9808/TCP
    Host Port:     0/TCP
    Command:
      /topolvm-node
      --lvmd-socket=/run/lvmd/lvmd.socket
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 13 Feb 2023 05:19:05 -0500
      Finished:     Mon, 13 Feb 2023 05:19:08 -0500
    Ready:          False
    Restart Count:  21
    Requests:
      cpu:     1m
      memory:  16Mi
    Liveness:  http-get http://:healthz/healthz delay=10s timeout=3s period=60s #success=1 #failure=3
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /run/lvmd from lvmd-socket-dir (rw)
      /run/topolvm from node-plugin-dir (rw)
      /var/lib/kubelet/plugins/kubernetes.io/csi from csi-plugin-dir (rw)
      /var/lib/kubelet/pods from pod-volumes-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sjk85 (ro)
  csi-registrar:
    Container ID:  cri-o://3eb2dcf5a4fb3bed48b9dc8e16d843a09fa7df662a2133343af2f3463a613c9d
    Image:         registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:a4319ff7c736ca9fe20500dc3e5862d6bb446f2428ea2eadfb5f042195f4f860
    Image ID:      registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:4f430448d09b0fbb6b90291fa808dbf00e081eb36196b23a459eab69fbeca761
    Port:
    Host Port:
    Args:
      --csi-address=/run/topolvm/csi-topolvm.sock
      --kubelet-registration-path=/var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock
    State:          Running
      Started:      Mon, 13 Feb 2023 04:05:52 -0500
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 13 Feb 2023 01:49:16 -0500
      Finished:     Mon, 13 Feb 2023 04:03:54 -0500
    Ready:          True
    Restart Count:  3
    Requests:
      cpu:     1m
      memory:  2Mi
    Environment:
    Mounts:
      /registration from registration-dir (rw)
      /run/topolvm from node-plugin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sjk85 (ro)
  liveness-probe:
    Container ID:  cri-o://cdd44d18fe75ab4ad06463eea45b283eeec4cd372acd2aae7aa1dd0fb6b95762
    Image:         registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:9df24be671271f5ea9414bfd08e58bc2fa3dc4bc68075002f3db0fd020b58be0
    Image ID:      registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:6976cb9aa79c0ab33783342f2b5a6066cda4c177d0205c73b8f63e6a0641338e
    Port:
    Host Port:
    Args:
      --csi-address=/run/topolvm/csi-topolvm.sock
    State:          Running
      Started:      Mon, 13 Feb 2023 04:05:52 -0500
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Mon, 13 Feb 2023 01:49:16 -0500
      Finished:     Mon, 13 Feb 2023 04:03:54 -0500
    Ready:          True
    Restart Count:  3
    Requests:
      cpu:     1m
      memory:  7Mi
    Environment:
    Mounts:
      /run/topolvm from node-plugin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sjk85 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry/
    HostPathType:  Directory
  node-plugin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/topolvm.io/node
    HostPathType:  DirectoryOrCreate
  csi-plugin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/kubernetes.io/csi
    HostPathType:  DirectoryOrCreate
  pod-volumes-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods/
    HostPathType:  DirectoryOrCreate
  lvmd-config-dir:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      lvmd
    Optional:  false
  lvmd-socket-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:
  kube-api-access-sjk85:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:
QoS Class:       Burstable
Node-Selectors:
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason                  Age                  From               Message
  ----     ------                  ----                 ----               -------
  Normal   Scheduled               3h56m                default-scheduler  Successfully assigned openshift-storage/topolvm-node-9bnp5 to localhost.localdomain
  Normal   Pulling                 3h56m                kubelet            Pulling image "registry.access.redhat.com/ubi8/openssl@sha256:9e743d947be073808f7f1750a791a3dbd81e694e37161e8c6c6057c2c342d671"
  Normal   Pulled                  3h56m                kubelet            Successfully pulled image "registry.access.redhat.com/ubi8/openssl@sha256:9e743d947be073808f7f1750a791a3dbd81e694e37161e8c6c6057c2c342d671" in 6.780216346s (6.780229369s including waiting)
  Normal   Created                 3h56m                kubelet            Created container file-checker
  Normal   Started                 3h56m                kubelet            Started container file-checker
  Normal   Pulling                 3h56m                kubelet            Pulling image "registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1"
  Normal   Pulled                  3h56m                kubelet            Successfully pulled image "registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1" in 12.699084892s (12.699105616s including waiting)
  Normal   Created                 3h56m                kubelet            Created container lvmd
  Normal   Started                 3h56m                kubelet            Started container lvmd
  Normal   Pulled                  3h56m                kubelet            Container image "registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1" already present on machine
  Normal   Created                 3h56m                kubelet            Created container topolvm-node
  Normal   Started                 3h56m                kubelet            Started container topolvm-node
  Normal   Pulling                 3h56m                kubelet            Pulling image "registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:a4319ff7c736ca9fe20500dc3e5862d6bb446f2428ea2eadfb5f042195f4f860"
  Normal   Pulled                  3h56m                kubelet            Successfully pulled image "registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:a4319ff7c736ca9fe20500dc3e5862d6bb446f2428ea2eadfb5f042195f4f860" in 10.260563241s (10.260580058s including waiting)
  Normal   Created                 3h56m                kubelet            Created container csi-registrar
  Normal   Started                 3h56m                kubelet            Started container csi-registrar
  Normal   Pulling                 3h56m                kubelet            Pulling image "registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:9df24be671271f5ea9414bfd08e58bc2fa3dc4bc68075002f3db0fd020b58be0"
  Normal   Pulled                  3h55m                kubelet            Successfully pulled image "registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:9df24be671271f5ea9414bfd08e58bc2fa3dc4bc68075002f3db0fd020b58be0" in 9.469221636s (9.469236014s including waiting)
  Normal   Created                 3h55m                kubelet            Created container liveness-probe
  Normal   Started                 3h55m                kubelet            Started container liveness-probe
  Warning  FailedCreatePodSandBox  3h43m                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_topolvm-node-9bnp5_openshift-storage_763e920a-b594-4485-bf77-dfed5dddbf03_0(cd4f590be8555fcf36a4b7bbf9b2679181eb452473ef038d46654e3465e46592): error adding pod openshift-storage_topolvm-node-9bnp5 to CNI network "ovn-kubernetes": plugin type="ovn-k8s-cni-overlay" name="ovn-kubernetes" failed (add): failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory
  Normal   Pulled                  3h43m                kubelet            Container image "registry.access.redhat.com/ubi8/openssl@sha256:9e743d947be073808f7f1750a791a3dbd81e694e37161e8c6c6057c2c342d671" already present on machine
  Normal   Created                 3h43m                kubelet            Created container file-checker
  Normal   Started                 3h43m                kubelet            Started container file-checker
  Normal   Pulled                  3h43m                kubelet            Container image "registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1" already present on machine
  Normal   Created                 3h43m                kubelet            Created container lvmd
  Normal   Started                 3h43m                kubelet            Started container lvmd
  Normal   Pulled                  3h43m                kubelet            Container image "registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1" already present on machine
  Normal   Created                 3h43m                kubelet            Created container topolvm-node
  Normal   Started                 3h43m                kubelet            Started container topolvm-node
  Normal   Pulled                  3h43m                kubelet            Container image "registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:a4319ff7c736ca9fe20500dc3e5862d6bb446f2428ea2eadfb5f042195f4f860" already present on machine
  Normal   Created                 3h43m                kubelet            Created container csi-registrar
  Normal   Started                 3h43m                kubelet            Started container csi-registrar
  Normal   Pulled                  3h43m                kubelet            Container image "registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:9df24be671271f5ea9414bfd08e58bc2fa3dc4bc68075002f3db0fd020b58be0" already present on machine
  Normal   Created                 3h43m                kubelet            Created container liveness-probe
  Normal   Started                 3h43m                kubelet            Started container liveness-probe
  Warning  FailedCreatePodSandBox  3h33m                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_topolvm-node-9bnp5_openshift-storage_763e920a-b594-4485-bf77-dfed5dddbf03_0(ba648bf9c7cd61cde69ddb18e8893b631edf426b310f2612a09149038348718b): error adding pod openshift-storage_topolvm-node-9bnp5 to CNI network "ovn-kubernetes": plugin type="ovn-k8s-cni-overlay" name="ovn-kubernetes" failed (add): failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: no such file or directory
  Normal   Pulled                  3h33m                kubelet            Container image "registry.access.redhat.com/ubi8/openssl@sha256:9e743d947be073808f7f1750a791a3dbd81e694e37161e8c6c6057c2c342d671" already present on machine
  Normal   Created                 3h33m                kubelet            Created container file-checker
  Normal   Started                 3h33m                kubelet            Started container file-checker
  Normal   Pulled                  3h33m                kubelet            Container image "registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1" already present on machine
  Normal   Created                 3h33m                kubelet            Created container lvmd
  Normal   Started                 3h33m                kubelet            Started container lvmd
  Normal   Pulled                  3h33m                kubelet            Container image "registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1" already present on machine
  Normal   Created                 3h33m                kubelet            Created container topolvm-node
  Normal   Started                 3h33m                kubelet            Started container topolvm-node
  Normal   Pulled                  3h33m                kubelet            Container image "registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:a4319ff7c736ca9fe20500dc3e5862d6bb446f2428ea2eadfb5f042195f4f860" already present on machine
  Normal   Created                 3h33m                kubelet            Created container csi-registrar
  Normal   Started                 3h33m                kubelet            Started container csi-registrar
  Normal   Pulled                  3h33m                kubelet            Container image "registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:9df24be671271f5ea9414bfd08e58bc2fa3dc4bc68075002f3db0fd020b58be0" already present on machine
  Normal   Created                 3h33m                kubelet            Created container liveness-probe
  Normal   Started                 3h33m                kubelet            Started container liveness-probe
  Normal   Pulled                  76m                  kubelet            Container image "registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1" already present on machine
  Normal   Created                 76m                  kubelet            Created container lvmd
  Normal   Started                 76m                  kubelet            Started container lvmd
  Normal   Pulled                  76m                  kubelet            Container image "registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:a4319ff7c736ca9fe20500dc3e5862d6bb446f2428ea2eadfb5f042195f4f860" already present on machine
  Normal   Created                 76m                  kubelet            Created container csi-registrar
  Normal   Started                 76m                  kubelet            Started container csi-registrar
  Normal   Pulled                  76m                  kubelet            Container image "registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:9df24be671271f5ea9414bfd08e58bc2fa3dc4bc68075002f3db0fd020b58be0" already present on machine
  Normal   Created                 76m                  kubelet            Created container liveness-probe
  Normal   Started                 76m                  kubelet            Started container liveness-probe
  Normal   Pulled                  75m (x3 over 76m)    kubelet            Container image "registry.redhat.io/lvms4/topolvm-rhel8@sha256:10bffded5317da9de6c45ba74f0bb10e0a08ddb2bfef23b11ac61287a37f10a1" already present on machine
  Normal   Created                 75m (x3 over 76m)    kubelet            Created container topolvm-node
  Normal   Started                 75m (x3 over 76m)    kubelet            Started container topolvm-node
  Warning  BackOff                 91s (x352 over 76m)  kubelet            Back-off restarting failed container topolvm-node in pod topolvm-node-9bnp5_openshift-storage(763e920a-b594-4485-bf77-dfed5dddbf03)
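topolvm-node shows the same shape of failure: lvmd, csi-registrar, and liveness-probe recover after each restart cycle, while the topolvm-node container exits with code 1 about three seconds after start and lands in CrashLoopBackOff. Both pods also hit FailedCreatePodSandBox twice because /var/run/ovn-kubernetes/cni/ovn-cni-server.sock did not exist at the time, which points at the OVN-Kubernetes CNI server not yet running rather than at the pods themselves. A follow-up sketch in bash under the same assumptions (`oc` access plus a shell on the MicroShift host; none of this is captured output):

  # Exit reason of the crashing container; /topolvm-node dials lvmd over
  # /run/lvmd/lvmd.socket, so a dial error here would implicate lvmd instead.
  oc logs -n openshift-storage topolvm-node-9bnp5 -c topolvm-node --previous
  oc logs -n openshift-storage topolvm-node-9bnp5 -c lvmd

  # The CNI server socket whose absence caused both FailedCreatePodSandBox
  # events; if it is still missing, sandbox creation will keep failing.
  ls -l /var/run/ovn-kubernetes/cni/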