OpenShift Bugs / OCPBUGS-48277

Unresponsive API with saturated ovs-vswitchd and kubelet

      Description of problem:

      Severe performance regression when running a simple scalability test.
      The same test ran fine when it was first run in 2021.
      Today it fails even without Kata: the API becomes unresponsive after scaling to only 15 containers on a host with 64 CPUs and 384G of memory.

      Version-Release number of selected component (if applicable):

      OpenShift 4.17 stable
      Installed using kcli version: 99.0 commit: 356e26d 2024/12/10

      How reproducible:

      Every time

      Steps to Reproduce:

          1. Using a host with 64 CPUs and 384G of memory
          2. Create a cluster with the parameter files shown below
          3. Run a script that attempts to scale a simple workload that consumes a calibrated amount of CPU/memory
      
      Parameter file:
      
      cat << EOF > kcli-ocp417.yaml
      
      cluster: kata417
      domain: kata417.com
      # RHCOS image name in the libvirt storage pool
      #image: rhcos-410.84.202201450144-0-qemu.x86_64.qcow2
      imagecontentsources: []
      mdns: True
      # Libvirt network name eg. 192.168.10.0/24
      network: openshift-417
      # Libvirt storage pool
      pool: openshift
      api_ip: 192.168.17.254
      # Copy the pull secret and store it the following file
      pull_secret: openshift_pull.json
      # Release version number: 4.7/4.8/4.9
      tag: 4.17
      # Build type: nightly/stable. The latest nightly or stable build will be automatically downloaded
      # If specific version is required then download openshift-install from
      # https://mirror.openshift.com/pub/openshift-v4/clients and
      # place it in /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/root/bin to use the same for install
      version: stable
      disk_size: 60
      bootstrap_numcpus: 4
      bootstrap_memory: 16384
      ctlplanes: 3
      ctlplane_numcpus: 4
      ctlplane_memory: 28672
      ctlplane_macs: []
      workers: 6
      worker_numcpus: 8
      worker_memory: 49152
      worker_macs: []
      EOF
      
      Script:
      
      cat > scale.sh << 'EOF'
      #!/bin/bash
      mv -f data.csv data.csv.old
      CLUSTER=kata417
      WORKERS=6
      echo "Iteration,Elapsed,Running,Creating,Terminating,Total Active Memory,Total Free Memory, Active,0,1,2,3,4,5,Free,0,1,2,3,4,5" > data.csv
      for ((I = 0; I < 1000; I++))
      do
          START=$SECONDS
          echo "Iteration $I starting at $START, $(date)"
          oc login -u kubeadmin -p $(cat ~/.kcli/clusters/$CLUSTER/auth/kubeadmin-password)
          oc scale --replicas=$I -f workload.yaml
          while ( oc get pods | grep -q ContainerCreating ); do
              echo -n .
          done
          ELAPSED=$(($SECONDS - $START))
          RUNNING=$(oc get pods | grep Running | wc -l)
          CREATING=$(oc get pods | grep ContainerCreating | wc -l)
          TERMINATING=$(oc get pods | grep Terminating | wc -l)
          echo "  Containers started at $(date) in $ELAPSED seconds"
          echo "  Running $RUNNING containers, Creating $CREATING containers, $TERMINATING terminating"
          ALL_ACTIVE=""
          ALL_FREE=""
          TOTAL_ACTIVE=0
          TOTAL_FREE=0
          for ((W=0; W<$WORKERS; W++))
          do
              ACTIVE=$(kcli ssh ${CLUSTER}-worker-$W cat /proc/meminfo | grep Active: | awk '{ print $2 }')
              FREE=$(kcli ssh ${CLUSTER}-worker-$W cat /proc/meminfo | grep MemFree: | awk '{ print $2 }')
              ALL_ACTIVE="$ALL_ACTIVE,$ACTIVE"
              ALL_FREE="$ALL_FREE,$FREE"
              TOTAL_ACTIVE=$(($TOTAL_ACTIVE + $ACTIVE))
              TOTAL_FREE=$(($TOTAL_FREE + $FREE))
          done
          echo "$I,$ELAPSED,$RUNNING,$CREATING,$TERMINATING,$TOTAL_ACTIVE,$TOTAL_FREE,$ALL_ACTIVE,$ALL_FREE" >> data.csv
      done
      EOF
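
      The workload.yaml that scale.sh scales is not attached to this report. As a hypothetical stand-in (the image, command, and resource figures below are illustrative assumptions, not the original file), a minimal Deployment that consumes a calibrated CPU/memory slice per replica could look like:

```yaml
# Hypothetical stand-in for the workload.yaml referenced by scale.sh;
# the original file is not included in this report.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workload-deployment   # matches the name seen in the "oc scale" output
spec:
  replicas: 0                 # scale.sh drives the replica count
  selector:
    matchLabels:
      app: workload
  template:
    metadata:
      labels:
        app: workload
    spec:
      containers:
      - name: workload
        image: registry.access.redhat.com/ubi9/ubi-minimal  # illustrative image
        command: ["sh", "-c", "while true; do :; done"]     # busy loop to burn CPU
        resources:
          requests:
            cpu: "1"          # illustrative calibration values
            memory: 2Gi
```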

      Actual results:

      After only a few iterations, I start getting errors simply trying to log in:
      
      Iteration 7 starting at 130, Fri Jan 10 08:31:00 UTC 2025
      Login successful.
      You have access to 71 projects, the list has been suppressed. You can list all projects with 'oc projects'
      Using project "default".
      deployment.apps/workload-deployment scaled
        Containers started at Fri Jan 10 08:31:03 UTC 2025 in 2 seconds
        Running 6 containers, Creating 1 containers, 0 terminating
      Iteration 8 starting at 152, Fri Jan 10 08:31:22 UTC 2025
      Login successful.
      You have access to 71 projects, the list has been suppressed. You can list all projects with 'oc projects'
      Using project "default".
      deployment.apps/workload-deployment scaled
        Containers started at Fri Jan 10 08:31:24 UTC 2025 in 2 seconds
        Running 7 containers, Creating 1 containers, 0 terminating
      Iteration 9 starting at 183, Fri Jan 10 08:31:53 UTC 2025
      Login successful.
      You have access to 71 projects, the list has been suppressed. You can list all projects with 'oc projects'
      Using project "default".
      deployment.apps/workload-deployment scaled
        Containers started at Fri Jan 10 08:31:57 UTC 2025 in 2 seconds
        Running 8 containers, Creating 1 containers, 0 terminating
      Iteration 10 starting at 238, Fri Jan 10 08:32:48 UTC 2025
      error: net/http: TLS handshake timeout
      Unable to connect to the server: net/http: TLS handshake timeout
        Containers started at Fri Jan 10 08:34:20 UTC 2025 in 30 seconds
        Running 9 containers, Creating 0 containers, 0 terminating
      Iteration 11 starting at 385, Fri Jan 10 08:35:15 UTC 2025
      Unable to connect to the server: net/http: TLS handshake timeout
      deployment.apps/workload-deployment scaled
        Containers started at Fri Jan 10 08:36:15 UTC 2025 in 44 seconds
        Running 9 containers, Creating 0 containers, 0 terminating
      Iteration 12 starting at 492, Fri Jan 10 08:37:02 UTC 2025
      Login successful.
      You have access to 71 projects, the list has been suppressed. You can list all projects with 'oc projects'
      Using project "default".
      deployment.apps/workload-deployment scaled
        Containers started at Fri Jan 10 08:37:21 UTC 2025 in 16 seconds
        Running 9 containers, Creating 0 containers, 0 terminating
      Iteration 13 starting at 568, Fri Jan 10 08:38:18 UTC 2025
      Unable to connect to the server: read tcp 192.168.17.1:45900->192.168.17.254:443: read: connection reset by peer
      error: You must be logged in to the server (Unauthorized)
      error: You must be logged in to the server (Unauthorized)
        Containers started at Fri Jan 10 08:42:24 UTC 2025 in 180 seconds
        Running 12 containers, Creating 0 containers, 0 terminating
      Iteration 14 starting at 894, Fri Jan 10 08:43:44 UTC 2025
      Error from server (InternalError): Internal error occurred: unexpected response: 400
      Error from server: rpc error: code = DeadlineExceeded desc = context deadline exceeded
      E0110 08:48:31.274053 2398703 request.go:1027] Unexpected error when reading response body: context deadline exceeded
      error: You must be logged in to the server (Unauthorized)
      Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
        Containers started at Fri Jan 10 08:52:26 UTC 2025 in 229 seconds
        Running 0 containers, Creating 0 containers, 0 terminating
      Iteration 15 starting at 1500, Fri Jan 10 08:53:50 UTC 2025
      Error from server (InternalError): Internal error occurred: unexpected response: 400
      deployment.apps/workload-deployment scaled
        Containers started at Fri Jan 10 08:55:26 UTC 2025 in 88 seconds
        Running 12 containers, Creating 0 containers, 0 terminating
      Iteration 16 starting at 1659, Fri Jan 10 08:56:29 UTC 2025
      error: net/http: TLS handshake timeout
      deployment.apps/workload-deployment scaled
        Containers started at Fri Jan 10 08:56:47 UTC 2025 in 14 seconds
        Running 12 containers, Creating 0 containers, 0 terminating
      Iteration 17 starting at 1741, Fri Jan 10 08:57:51 UTC 2025
      Login successful.
          

      Expected results:

      The script ran successfully in 2021 with hundreds of containers, even with Kata as the runtime, which consumes considerable extra memory and CPU. This is documented, for example, in https://issues.redhat.com/browse/KATA-673 (see the screenshot taken on March 31, 2021 in that issue).

      Additional info:

      When the problem occurs, I can log in to one of the control plane nodes. The ovs-vswitchd and kubelet processes appear pegged, with CPU utilization in the 70% range. journalctl --follow reports many internal errors.
      
      Here is a small sample:
      Jan 10 11:26:18 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: E0110 11:26:18.913066    2691 server.go:319] "Authorization error" err="Post \"https://api-int.kata417.kata417.com:6443/apis/authorization.k8s.io/v1/subjectaccessreviews\": context canceled" user="system:serviceaccount:openshift-monitoring:prometheus-k8s" verb="get" resource="nodes" subresource="metrics"
      Jan 10 11:26:18 kata417-ctlplane-0.kata417.com ovs-vswitchd[1100]: ovs|05521|timeval(revalidator16)|WARN|Unreasonably long 1262ms poll interval (602ms user, 23ms system)
      Jan 10 11:26:18 kata417-ctlplane-0.kata417.com ovs-vswitchd[1100]: ovs|05522|timeval(revalidator16)|WARN|context switches: 1 voluntary, 458 involuntary
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: E0110 11:26:19.027537    2691 secret.go:194] Couldn't get secret openshift-cluster-storage-operator/csi-snapshot-webhook-secret: failed to sync secret cache: timed out waiting for the condition                                                                                                               
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: E0110 11:26:08.911862    2691 server.go:319] "Authorization error" err="Post \"https://api-int.kata417.kata417.com:6443/apis/authorization.k8s.io/v1/subjectaccessreviews\": context canceled" user="system:serviceaccount:openshift-monitoring:prometheus-k8s" verb="get" resource="nodes" subresource="metrics"
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: W0110 11:26:19.075833    2691 reflector.go:547] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&resourceVersion=339055": dial tcp 192.168.17.254:6443: connect: connection refused
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: I0110 11:26:19.077334    2691 trace.go:236] Trace[1729285992]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"image-import-ca" (10-Jan-2025 11:25:54.811) (total time: 24266ms):                                                                                                                   
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: Trace[1729285992]: ---"Objects listed" error:Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&resourceVersion=339055": dial tcp 192.168.17.254:6443: connect: connection refused 24264ms (11:26:19.075)             
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: Trace[1729285992]: [24.266007699s] [24.266007699s] END
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: E0110 11:26:19.077435    2691 reflector.go:150] object-"openshift-apiserver"/"image-import-ca": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&resourceVersion=339055": dial tcp 192.168.17.254:6443: connect: connection refused
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: E0110 11:26:14.116673    2691 server.go:319] "Authorization error" err="Post \"https://api-int.kata417.kata417.com:6443/apis/authorization.k8s.io/v1/subjectaccessreviews\": context canceled" user="system:serviceaccount:openshift-monitoring:prometheus-k8s" verb="get" resource="nodes" subresource="metrics"
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: W0110 11:26:15.253356    2691 reflector.go:547] object-"openshift-authentication"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=339055": dial tcp 192.168.17.254:6443: connect: connection refused
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: I0110 11:26:19.087541    2691 trace.go:236] Trace[1177888169]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"kube-root-ca.crt" (10-Jan-2025 11:25:45.974) (total time: 33112ms):                                                                                                             
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: Trace[1177888169]: ---"Objects listed" error:Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=339055": dial tcp 192.168.17.254:6443: connect: connection refused 27914ms (11:26:13.888)       
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: Trace[1177888169]: [33.112970549s] [33.112970549s] END
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: E0110 11:26:19.087902    2691 reflector.go:150] object-"openshift-authentication"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=339055": dial tcp 192.168.17.254:6443: connect: connection refused
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: I0110 11:25:50.921605    2691 status_manager.go:853] "Failed to get status for pod" podUID="be36fe60-bcea-4650-8424-fc9a517cad0d" pod="openshift-operator-lifecycle-manager/catalog-operator-78c9b7bbcb-s96nn" err="Get \"https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-78c9b7bbcb-s96nn\": dial tcp: lookup api-int.kata417.kata417.com: i/o timeout"
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: W0110 11:26:19.003679    2691 reflector.go:547] object-"openshift-insights"/"openshift-insights-serving-cert": failed to list *v1.Secret: Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-insights/secrets?fieldSelector=metadata.name%3Dopenshift-insights-serving-cert&resourceVersion=339055": dial tcp 192.168.17.254:6443: connect: connection refused
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: I0110 11:26:19.090681    2691 trace.go:236] Trace[513334414]: "Reflector ListAndWatch" name:object-"openshift-insights"/"openshift-insights-serving-cert" (10-Jan-2025 11:25:40.069) (total time: 39020ms):                                                                                                     
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: Trace[513334414]: ---"Objects listed" error:Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-insights/secrets?fieldSelector=metadata.name%3Dopenshift-insights-serving-cert&resourceVersion=339055": dial tcp 192.168.17.254:6443: connect: connection refused 38934ms (11:26:19.003)  
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: Trace[513334414]: [39.020939216s] [39.020939216s] END
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: I0110 11:25:49.707219    2691 kubelet_getters.go:218] "Pod status updated" pod="kcli-infra/coredns-kata417-ctlplane-0.kata417.com" status="Running"                                                                                                                                                             
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: I0110 11:26:19.095097    2691 kubelet_getters.go:218] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-kata417-ctlplane-0.kata417.com" status="Running"                                                                                                                                        
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: I0110 11:26:19.095391    2691 kubelet_getters.go:218] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-kata417-ctlplane-0.kata417.com" status="Running"                                                                                                                      
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: I0110 11:26:19.095564    2691 kubelet_getters.go:218] "Pod status updated" pod="openshift-etcd/etcd-kata417-ctlplane-0.kata417.com" status="Running"                                                                                                                                                            
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: I0110 11:26:19.095755    2691 kubelet_getters.go:218] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-kata417-ctlplane-0.kata417.com" status="Running"                                                                                                                         
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: I0110 11:26:19.095850    2691 kubelet_getters.go:218] "Pod status updated" pod="kcli-infra/haproxy-kata417-ctlplane-0.kata417.com" status="Running"                                                                                                                                                             
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: I0110 11:26:19.096041    2691 kubelet_getters.go:218] "Pod status updated" pod="kcli-infra/keepalived-kata417-ctlplane-0.kata417.com" status="Running"                                                                                                                                                          
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: I0110 11:26:19.096093    2691 kubelet_getters.go:218] "Pod status updated" pod="kcli-infra/mdns-kata417-ctlplane-0.kata417.com" status="Running"                                                                                                                                                                
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: I0110 11:26:19.096297    2691 kubelet_getters.go:218] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-kata417-ctlplane-0.kata417.com" status="Running"                                                                                                                              
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: W0110 11:26:19.096750    2691 reflector.go:547] object-"openshift-ingress-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=339055": dial tcp 192.168.17.254:6443: connect: connection refused
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: I0110 11:26:19.097400    2691 trace.go:236] Trace[2062411173]: "Reflector ListAndWatch" name:object-"openshift-ingress-operator"/"metrics-tls" (10-Jan-2025 11:25:42.559) (total time: 36538ms):                                                                                                                
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: Trace[2062411173]: ---"Objects listed" error:Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=339055": dial tcp 192.168.17.254:6443: connect: connection refused 36537ms (11:26:19.096)             
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: Trace[2062411173]: [36.538066841s] [36.538066841s] END
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: E0110 11:26:19.097437    2691 reflector.go:150] object-"openshift-ingress-operator"/"metrics-tls": Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=339055": dial tcp 192.168.17.254:6443: connect: connection refused
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: W0110 11:26:19.097791    2691 reflector.go:547] object-"openshift-monitoring"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=339055": dial tcp 192.168.17.254:6443: i/o timeout
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: I0110 11:26:19.097959    2691 trace.go:236] Trace[185185695]: "Reflector ListAndWatch" name:object-"openshift-monitoring"/"openshift-service-ca.crt" (10-Jan-2025 11:25:35.995) (total time: 43102ms):                                                                                                          
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: Trace[185185695]: ---"Objects listed" error:Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=339055": dial tcp 192.168.17.254:6443: i/o timeout 43102ms (11:26:19.097)                    
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: Trace[185185695]: [43.102618681s] [43.102618681s] END
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: E0110 11:26:19.097998    2691 reflector.go:150] object-"openshift-monitoring"/"openshift-service-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=339055": dial tcp 192.168.17.254:6443: i/o timeout
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: W0110 11:26:19.101934    2691 reflector.go:547] object-"openshift-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=339055": dial tcp 192.168.17.254:6443: connect: connection refused
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: I0110 11:26:19.102745    2691 trace.go:236] Trace[856843885]: "Reflector ListAndWatch" name:object-"openshift-config-operator"/"kube-root-ca.crt" (10-Jan-2025 11:25:47.759) (total time: 31342ms):                                                                                                             
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: Trace[856843885]: ---"Objects listed" error:Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=339055": dial tcp 192.168.17.254:6443: connect: connection refused 31340ms (11:26:19.099)       
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: Trace[856843885]: [31.342761048s] [31.342761048s] END
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: E0110 11:26:19.103169    2691 reflector.go:150] object-"openshift-config-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=339055": dial tcp 192.168.17.254:6443: connect: connection refused
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: W0110 11:26:19.103846    2691 reflector.go:547] object-"openshift-network-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=339055": dial tcp 192.168.17.254:6443: connect: connection refused
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: I0110 11:26:19.104339    2691 trace.go:236] Trace[76664510]: "Reflector ListAndWatch" name:object-"openshift-network-operator"/"kube-root-ca.crt" (10-Jan-2025 11:25:41.644) (total time: 37459ms):                                                                                                             
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: Trace[76664510]: ---"Objects listed" error:Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=339055": dial tcp 192.168.17.254:6443: connect: connection refused 37459ms (11:26:19.103)       
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: Trace[76664510]: [37.45982091s] [37.45982091s] END
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: E0110 11:26:19.104384    2691 reflector.go:150] object-"openshift-network-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=339055": dial tcp 192.168.17.254:6443: connect: connection refused
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: W0110 11:26:19.111987    2691 reflector.go:547] object-"openshift-authentication"/"v4-0-config-system-serving-cert": failed to list *v1.Secret: Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-serving-cert&resourceVersion=339055": dial tcp 192.168.17.254:6443: connect: connection refused
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: I0110 11:26:19.112538    2691 trace.go:236] Trace[962197571]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-system-serving-cert" (10-Jan-2025 11:25:43.871) (total time: 35241ms):                                                                                               
      Jan 10 11:26:19 kata417-ctlplane-0.kata417.com kubenswrapper[2691]: Trace[962197571]: ---"Objects listed" error:Get "https://api-int.kata417.kata417.com:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-serving-cert&resourceVersion=339055": dial tcp 192.168.17.254:6443: connect: connection refused 35240ms (11:26:19.111)
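
      A quick way to triage captures like the sample above is to tally the recurring failure signatures. This is a hedged sketch, not part of the report; the capture command and host name are assumptions:

```shell
#!/bin/sh
# Hedged sketch: tally recurring API-server failure signatures in a journal
# capture, read on stdin. A capture could be produced with something like
# (hypothetical host name):
#   kcli ssh kata417-ctlplane-0 "journalctl --no-pager --since '-10 min'" > journal.txt
summarize_errors() {
  grep -oE 'connection refused|i/o timeout|context canceled|TLS handshake timeout' \
    | sort | uniq -c | sort -rn
}
# usage: summarize_errors < journal.txt
```

The highest count points at the dominant failure mode (here, mostly "connection refused" against the api-int VIP on port 6443).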
         

       

              rh-ee-kehannon Kevin Hannon
              rh-ee-cdupontd Christophe de Dinechin
              Cameron Meadors Cameron Meadors