OpenShift Bugs / OCPBUGS-61968

[backport release-4.18] Pod Stuck on an Unschedulable Node Due to Broad Toleration (`operator: Exists`)


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Affects Version/s: 4.17.z, 4.16.z, 4.18.z, 4.19.z, 4.20.0
    • Component: oc
    • Quality / Stability / Reliability
    • Severity: Important

      Description of problem:

          The must-gather pod is being scheduled on an unavailable (NotReady) node, so the log collection fails.
      
      The must-gather pod spec:
      [root@INBACRNRDL0102 ~]# oc get pods -n openshift-must-gather-rtp72 must-gather-ss79t -o yaml
      apiVersion: v1
      kind: Pod
      metadata:
        annotations:
          k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.133.1.241/23"],"mac_address":"0a:58:0a:85:01:f1","gateway_ips":["10.133.0.1"],"routes":[{"dest":"10.132.0.0/14","nextHop":"10.133.0.1"},{"dest":"172.30.0.0/16","nextHop":"10.133.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.133.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.133.0.1"}],"ip_address":"10.133.1.241/23","gateway_ip":"10.133.0.1","role":"primary"}}'
          k8s.v1.cni.cncf.io/network-status: |-
            [{
                "name": "ovn-kubernetes",
                "interface": "eth0",
                "ips": [
                    "10.133.1.241"
                ],
                "mac": "0a:58:0a:85:01:f1",
                "default": true,
                "dns": {}
            }]
        creationTimestamp: "2025-02-18T11:59:15Z"
        generateName: must-gather-
        labels:
          app: must-gather
        name: must-gather-ss79t
        namespace: openshift-must-gather-rtp72
        resourceVersion: "6784513"
        uid: 6c80119c-2124-44d2-afb1-7b49afbae73f
      spec:
        containers:
        - command:
          - /bin/bash
          - -c
          - "\necho \"volume percentage checker started.....\"\nwhile true; do\ndisk_usage=$(du
            -s \"/must-gather\" | awk '{print $1}')\ndisk_space=$(df -P \"/must-gather\"
            | awk 'NR==2 {print $2}')\nusage_percentage=$(( (disk_usage * 100) / disk_space
            ))\necho \"volume usage percentage $usage_percentage\"\nif [ \"$usage_percentage\"
            -gt \"30\" ]; then\n\techo \"Disk usage exceeds the volume percentage of 30
            for mounted directory. Exiting...\"\n\t# kill gathering process in gather container
            to prevent disk to use more.\n\tpkill --signal SIGKILL -f /usr/bin/gather\n\texit
            1\nfi\nsleep 5\ndone & /usr/bin/gather; sync"
          env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: spec.nodeName
          - name: POD_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.name
          image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aaac3feab704eb100776366ccbed8eaf9c7c0b9dea0bce597495fce1225d592f
          imagePullPolicy: IfNotPresent
          name: gather
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: FallbackToLogsOnError
          volumeMounts:
          - mountPath: /must-gather
            name: must-gather-output
          - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
            name: kube-api-access-54qc6
            readOnly: true
        - command:
          - /bin/bash
          - -c
          - 'trap : TERM INT; sleep infinity & wait'
          image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aaac3feab704eb100776366ccbed8eaf9c7c0b9dea0bce597495fce1225d592f
          imagePullPolicy: IfNotPresent
          name: copy
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: FallbackToLogsOnError
          volumeMounts:
          - mountPath: /must-gather
            name: must-gather-output
          - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
            name: kube-api-access-54qc6
            readOnly: true
        dnsPolicy: ClusterFirst
        enableServiceLinks: true
        imagePullSecrets:
        - name: default-dockercfg-sk2cg
        nodeName: hub-ctlplane-1.5g-deployment.lab
        nodeSelector:
          kubernetes.io/os: linux
          node-role.kubernetes.io/master: ""
        preemptionPolicy: PreemptLowerPriority
        priority: 2000000000
        priorityClassName: system-cluster-critical
        restartPolicy: Never
        schedulerName: default-scheduler
        securityContext: {}
        serviceAccount: default
        serviceAccountName: default
        terminationGracePeriodSeconds: 0
        tolerations:
        - operator: Exists
        volumes:
        - emptyDir: {}
          name: must-gather-output
        - name: kube-api-access-54qc6
          projected:
            defaultMode: 420
            sources:
            - serviceAccountToken:
                expirationSeconds: 3607
                path: token
            - configMap:
                items:
                - key: ca.crt
                  path: ca.crt
                name: kube-root-ca.crt
            - downwardAPI:
                items:
                - fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
                  path: namespace
            - configMap:
                items:
                - key: service-ca.crt
                  path: service-ca.crt
                name: openshift-service-ca.crt
      
      The pod.spec.tolerations is set to `operator: Exists` [1], which means the pod tolerates any taint on a node, regardless of the taint's key, value, or effect.

      [1] : https://github.com/openshift/oc/blob/7c38ff702605a62d3218fed748c22062476d2c1c/pkg/cli/admin/mustgather/mustgather.go#L1024 
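      For illustration, here is a minimal, self-contained Go sketch of the documented taint/toleration matching rules (the types and the tolerates helper below are illustrative stand-ins, not the scheduler's actual code). It shows that a toleration with an empty key and `operator: Exists` matches both node.kubernetes.io/unreachable taints reported on the NotReady node, so the scheduler does not filter that node out:

      package main

      import "fmt"

      // Illustrative stand-ins for the corresponding Kubernetes API types.
      type Taint struct {
          Key    string
          Value  string
          Effect string // NoSchedule, PreferNoSchedule, NoExecute
      }

      type Toleration struct {
          Key      string
          Operator string // "Exists" or "Equal"
          Value    string
          Effect   string // empty means "matches all effects"
      }

      // tolerates mirrors the documented matching rule: with operator Exists,
      // an empty key matches every taint key and an empty effect matches every effect.
      func tolerates(tol Toleration, t Taint) bool {
          if tol.Operator == "Exists" {
              keyMatch := tol.Key == "" || tol.Key == t.Key
              effectMatch := tol.Effect == "" || tol.Effect == t.Effect
              return keyMatch && effectMatch
          }
          // Operator "Equal" (the default) also requires the value to match.
          return tol.Key == t.Key && tol.Value == t.Value &&
              (tol.Effect == "" || tol.Effect == t.Effect)
      }

      func main() {
          // The single toleration from the generated must-gather pod spec.
          blanket := Toleration{Operator: "Exists"}

          // The taints reported on the NotReady node in this bug.
          taints := []Taint{
              {Key: "node.kubernetes.io/unreachable", Effect: "NoExecute"},
              {Key: "node.kubernetes.io/unreachable", Effect: "NoSchedule"},
          }
          for _, t := range taints {
              fmt.Printf("tolerates %s:%s = %v\n", t.Key, t.Effect, tolerates(blanket, t))
          }
      }

      Both checks print true, so nothing in the toleration logic prevents the default scheduler from binding the pod to the unreachable node.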

      Version-Release number of selected component (if applicable):

          any

      How reproducible:

          100%

      Steps to Reproduce:

          1. Check the node status:
      [root@INBACRNRDL0102 ~]# oc get nodes
      NAME                               STATUS     ROLES                         AGE     VERSION
      hub-ctlplane-0.5g-deployment.lab   NotReady   control-plane,master,worker   5d16h   v1.30.7
      hub-ctlplane-1.5g-deployment.lab   Ready      control-plane,master,worker   5d16h   v1.30.7
      hub-ctlplane-2.5g-deployment.lab   Ready      control-plane,master,worker   5d16h   v1.30.7
     2. Check the node taints:
      [root@INBACRNRDL0102 ~]# oc describe nodes | grep -A 5 -i taint
      Taints:             node.kubernetes.io/unreachable:NoExecute
                          node.kubernetes.io/unreachable:NoSchedule
      Unschedulable:      false
      Lease:
        HolderIdentity:  hub-ctlplane-0.5g-deployment.lab
        AcquireTime:     <unset>
      --
      Taints:             <none>
      Unschedulable:      false
      Lease:
        HolderIdentity:  hub-ctlplane-1.5g-deployment.lab
        AcquireTime:     <unset>
        RenewTime:       Tue, 18 Feb 2025 06:11:31 -0700
      --
      Taints:             <none>
      Unschedulable:      false
      Lease:
        HolderIdentity:  hub-ctlplane-2.5g-deployment.lab
        AcquireTime:     <unset>
        RenewTime:       Tue, 18 Feb 2025 06:11:37 -0700
     3. Run must-gather:
      $ oc adm must-gather
           4. Check the pod status:
      [root@INBACRNRDL0102 ~]# oc get pods -A -o wide | grep gather
      openshift-must-gather-h759d                        must-gather-tx5v9                                                 0/2     Pending            0               31s     <none>         hub-ctlplane-0.5g-deployment.lab   <none>           <none>

      Actual results:

          [root@INBACRNRDL0102 ~]# oc adm must-gather --node-name='hub-ctlplane-0.5g-deployment.lab'[must-gather      ] OUT 2025-02-18T12:04:58.603291802Z Using must-gather plug-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aaac3feab704eb100776366ccbed8eaf9c7c0b9dea0bce597495fce1225d592fWhen opening a support case, bugzilla, or issue please include the following summary data along with any other requested information:ClusterID: ba61e3b0-bfcd-44a2-a638-192f8ab5d775ClientVersion: 4.16.16ClusterVersion: Stable at "4.17.15"ClusterOperators:	clusteroperator/authentication is degraded because APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()	clusteroperator/dns is progressing: DNS "default" reports Progressing=True: "Have 2 available node-resolver pods, want 3."	clusteroperator/etcd is degraded because EtcdCertSignerControllerDegraded: EtcdCertSignerController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:2568014202552214589 name:"hub-ctlplane-2.5g-deployment.lab" peerURLs:"https://172.16.30.22:2380" clientURLs:"https://172.16.30.22:2379"  Healthy:true Took:1.110164ms Error:<nil>} {Member:ID:6712901891382413704 name:"hub-ctlplane-1.5g-deployment.lab" peerURLs:"https://172.16.30.21:2380" clientURLs:"https://172.16.30.21:2379"  Healthy:true Took:1.419229ms Error:<nil>} {Member:ID:10378403045053594737 name:"hub-ctlplane-0.5g-deployment.lab" peerURLs:"https://172.16.30.20:2380" clientURLs:"https://172.16.30.20:2379"  Healthy:false Took:29.997074964s Error:health check failed: context deadline exceeded}]EtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:2568014202552214589 name:"hub-ctlplane-2.5g-deployment.lab" peerURLs:"https://172.16.30.22:2380" clientURLs:"https://172.16.30.22:2379"  Healthy:true Took:1.110164ms Error:<nil>} {Member:ID:6712901891382413704 name:"hub-ctlplane-1.5g-deployment.lab" peerURLs:"https://172.16.30.21:2380" clientURLs:"https://172.16.30.21:2379"  Healthy:true Took:1.419229ms Error:<nil>} {Member:ID:10378403045053594737 name:"hub-ctlplane-0.5g-deployment.lab" peerURLs:"https://172.16.30.20:2380" clientURLs:"https://172.16.30.20:2379"  Healthy:false Took:29.997074964s Error:health check failed: context deadline exceeded}]EtcdMembersDegraded: 2 of 3 members are available, hub-ctlplane-0.5g-deployment.lab is unhealthyNodeControllerDegraded: The master nodes not ready: node "hub-ctlplane-0.5g-deployment.lab" not ready since 2025-02-17 10:07:20 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)TargetConfigControllerDegraded: TargetConfigController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:2568014202552214589 name:"hub-ctlplane-2.5g-deployment.lab" peerURLs:"https://172.16.30.22:2380" clientURLs:"https://172.16.30.22:2379"  Healthy:true Took:1.110164ms Error:<nil>} {Member:ID:6712901891382413704 name:"hub-ctlplane-1.5g-deployment.lab" peerURLs:"https://172.16.30.21:2380" clientURLs:"https://172.16.30.21:2379"  Healthy:true Took:1.419229ms Error:<nil>} {Member:ID:10378403045053594737 name:"hub-ctlplane-0.5g-deployment.lab" peerURLs:"https://172.16.30.20:2380" 
clientURLs:"https://172.16.30.20:2379"  Healthy:false Took:29.997074964s Error:health check failed: context deadline exceeded}]	clusteroperator/image-registry is progressing: NodeCADaemonProgressing: The daemon set node-ca is deploying node podsProgressing: The registry is ready	clusteroperator/kube-apiserver is degraded because NodeControllerDegraded: The master nodes not ready: node "hub-ctlplane-0.5g-deployment.lab" not ready since 2025-02-17 10:07:20 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)	clusteroperator/kube-controller-manager is degraded because NodeControllerDegraded: The master nodes not ready: node "hub-ctlplane-0.5g-deployment.lab" not ready since 2025-02-17 10:07:20 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)	clusteroperator/kube-scheduler is degraded because NodeControllerDegraded: The master nodes not ready: node "hub-ctlplane-0.5g-deployment.lab" not ready since 2025-02-17 10:07:20 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)	clusteroperator/machine-config is degraded because Failed to resync 4.17.15 because: error during waitForDaemonsetRollout: [context deadline exceeded, daemonset machine-config-daemon is not ready. status: (desired: 3, updated: 3, ready: 2, unavailable: 1)]	clusteroperator/monitoring is not available (UpdatingUserWorkloadThanosRuler: waiting for ThanosRuler object changes failed: waiting for Thanos Ruler openshift-user-workload-monitoring/user-workload: context deadline exceeded: expected 2 replicas, got 1 available replicas) because UpdatingUserWorkloadPrometheus: Prometheus "openshift-user-workload-monitoring/user-workload": SomePodsNotReady: , UpdatingUserWorkloadThanosRuler: waiting for ThanosRuler object changes failed: waiting for Thanos Ruler openshift-user-workload-monitoring/user-workload: context deadline exceeded: expected 2 replicas, got 1 available replicas, UpdatingPrometheus: Prometheus "openshift-monitoring/k8s": SomePodsNotReady: 	clusteroperator/network is progressing: DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)DaemonSet "/openshift-multus/network-metrics-daemon" is not available (awaiting 1 nodes)Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)	clusteroperator/node-tuning is progressing: Working towards "4.17.15"	clusteroperator/openshift-apiserver is degraded because APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()
      
      [must-gather      ] OUT 2025-02-18T12:04:58.634110209Z namespace/openshift-must-gather-h759d created
      [must-gather      ] OUT 2025-02-18T12:04:58.639960659Z clusterrolebinding.rbac.authorization.k8s.io/must-gather-57mbh created
      [must-gather      ] OUT 2025-02-18T12:04:58.672638018Z pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aaac3feab704eb100776366ccbed8eaf9c7c0b9dea0bce597495fce1225d592f created
      
      
      [must-gather-tx5v9] OUT 2025-02-18T12:14:58.687669763Z gather did not start: timed out waiting for the condition
      [must-gather      ] OUT 2025-02-18T12:14:58.694007158Z namespace/openshift-must-gather-h759d deleted
      [must-gather      ] OUT 2025-02-18T12:14:58.698888475Z clusterrolebinding.rbac.authorization.k8s.io/must-gather-57mbh deleted
      
      Error running must-gather collection:    gather did not start for pod must-gather-tx5v9: timed out waiting for the condition
      Falling back to `oc adm inspect clusteroperators.v1.config.openshift.io` to collect basic cluster information.[must-gather      ] OUT 2025-02-18T12:14:58.977379528Z Gathering data for ns/openshift-config...Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+[must-gather      ] OUT 2025-02-18T12:14:59.278935738Z Gathering data for ns/openshift-config-managed...[must-gather      ] OUT 2025-02-18T12:14:59.752860765Z Gathering data for ns/openshift-authentication...[must-gather      ] OUT 2025-02-18T12:15:06.281623674Z Gathering data for ns/openshift-authentication-operator...[must-gather      ] OUT 2025-02-18T12:15:06.810068194Z Gathering data for ns/openshift-ingress...[must-gather      ] OUT 2025-02-18T12:15:07.171835733Z Gathering data for ns/openshift-oauth-apiserver...[must-gather      ] OUT 2025-02-18T12:15:19.703639475Z Gathering data for ns/openshift-machine-api...[must-gather      ] OUT 2025-02-18T12:15:27.360893399Z Gathering data for ns/openshift-cloud-controller-manager-operator...[must-gather      ] OUT 2025-02-18T12:15:27.972438434Z Gathering data for ns/openshift-cloud-controller-manager...[must-gather      ] OUT 2025-02-18T12:15:28.457943592Z Gathering data for ns/openshift-cloud-credential-operator...[must-gather      ] OUT 2025-02-18T12:15:29.692420026Z Gathering data for ns/openshift-config-operator...[must-gather      ] OUT 2025-02-18T12:15:30.142127416Z Gathering data for ns/openshift-console-operator...[must-gather      ] OUT 2025-02-18T12:15:30.609004667Z Gathering data for ns/openshift-console...[must-gather      ] OUT 2025-02-18T12:15:33.891424308Z Gathering data for ns/openshift-cluster-storage-operator...[must-gather      ] OUT 2025-02-18T12:15:34.314055821Z Gathering data for ns/openshift-dns-operator...[must-gather      ] OUT 2025-02-18T12:15:34.665730657Z Gathering data for ns/openshift-dns...[must-gather      ] OUT 2025-02-18T12:15:52.308466453Z Gathering data for ns/openshift-etcd-operator...[must-gather      ] OUT 2025-02-18T12:15:52.867716835Z Gathering data for ns/openshift-etcd...[must-gather      ] OUT 2025-02-18T12:17:12.342689343Z Gathering data for ns/openshift-image-registry...[must-gather      ] OUT 2025-02-18T12:17:19.34384229Z Gathering data for ns/openshift-ingress-operator...[must-gather      ] OUT 2025-02-18T12:17:19.747686607Z Gathering data for ns/openshift-ingress-canary...[must-gather      ] OUT 2025-02-18T12:17:24.41035667Z Gathering data for ns/openshift-insights...[must-gather      ] OUT 2025-02-18T12:17:26.740671969Z Gathering data for ns/openshift-monitoring...[must-gather      ] OUT 2025-02-18T12:18:40.444086398Z Gathering data for ns/openshift-operators...[must-gather      ] OUT 2025-02-18T12:18:41.068303754Z Gathering data for ns/hypershift...[must-gather      ] OUT 2025-02-18T12:18:42.163013425Z Gathering data for ns/open-cluster-management...[must-gather      ] OUT 2025-02-18T12:18:55.10243469Z Gathering data for ns/openshift-cluster-node-tuning-operator...[must-gather      ] OUT 2025-02-18T12:18:59.915403741Z Gathering data for ns/openshift-kube-apiserver-operator...[must-gather      ] OUT 2025-02-18T12:19:00.740073626Z Gathering data for ns/openshift-kube-apiserver...[must-gather      ] OUT 2025-02-18T12:20:47.391074872Z Gathering data for ns/default...[must-gather      ] OUT 2025-02-18T12:20:47.72530868Z Gathering data for ns/open-cluster-management-hub...[must-gather      ] OUT 2025-02-18T12:20:48.437383687Z Gathering data for ns/multicluster-engine...[must-gather      ] OUT 
2025-02-18T12:21:06.113166988Z Gathering data for ns/openshift-multus...[must-gather      ] OUT 2025-02-18T12:22:07.3269937Z Gathering data for ns/openshift-storage...[must-gather      ] OUT 2025-02-18T12:22:16.285988713Z Gathering data for ns/openshift-kmm-hub...[must-gather      ] OUT 2025-02-18T12:22:20.847855606Z Gathering data for ns/openshift-kube-controller-manager...[must-gather      ] OUT 2025-02-18T12:23:15.203944691Z Gathering data for ns/openshift-kube-controller-manager-operator...[must-gather      ] OUT 2025-02-18T12:23:15.608115574Z Gathering data for ns/kube-system...[must-gather      ] OUT 2025-02-18T12:23:15.971196648Z Gathering data for ns/openshift-kube-scheduler...[must-gather      ] OUT 2025-02-18T12:24:03.85257403Z Gathering data for ns/openshift-kube-scheduler-operator...[must-gather      ] OUT 2025-02-18T12:24:04.268390979Z Gathering data for ns/openshift-kube-storage-version-migrator...[must-gather      ] OUT 2025-02-18T12:24:04.56254537Z Gathering data for ns/openshift-kube-storage-version-migrator-operator...[must-gather      ] OUT 2025-02-18T12:24:04.978517692Z Gathering data for ns/openshift-cluster-machine-approver...[must-gather      ] OUT 2025-02-18T12:24:05.309063411Z Gathering data for ns/openshift-machine-config-operator...[must-gather      ] OUT 2025-02-18T12:24:34.768440865Z Gathering data for ns/openshift-kni-infra...[must-gather      ] OUT 2025-02-18T12:24:35.061138755Z Gathering data for ns/openshift-openstack-infra...[must-gather      ] OUT 2025-02-18T12:24:35.366345103Z Gathering data for ns/openshift-ovirt-infra...[must-gather      ] OUT 2025-02-18T12:24:35.660725113Z Gathering data for ns/openshift-vsphere-infra...[must-gather      ] OUT 2025-02-18T12:24:35.96459097Z Gathering data for ns/openshift-nutanix-infra...[must-gather      ] OUT 2025-02-18T12:24:36.242991708Z Gathering data for ns/openshift-cloud-platform-infra...[must-gather      ] OUT 2025-02-18T12:24:36.533397562Z Gathering data for ns/openshift-marketplace...[must-gather      ] OUT 2025-02-18T12:24:37.596107431Z Gathering data for ns/openshift-user-workload-monitoring...[must-gather      ] OUT 2025-02-18T12:25:47.081956012Z Gathering data for ns/openshift-ovn-kubernetes...[must-gather      ] OUT 2025-02-18T12:27:03.756010718Z Gathering data for ns/openshift-host-network...[must-gather      ] OUT 2025-02-18T12:27:04.198629238Z Gathering data for ns/openshift-network-diagnostics...[must-gather      ] OUT 2025-02-18T12:27:08.28083085Z Gathering data for ns/openshift-network-node-identity...[must-gather      ] OUT 2025-02-18T12:27:21.084883468Z Gathering data for ns/openshift-network-console...[must-gather      ] OUT 2025-02-18T12:27:21.427350291Z Gathering data for ns/openshift-network-operator...[must-gather      ] OUT 2025-02-18T12:27:29.857078243Z Gathering data for ns/openshift-cloud-network-config-controller...[must-gather      ] OUT 2025-02-18T12:27:30.250116766Z Gathering data for ns/openshift-apiserver-operator...[must-gather      ] OUT 2025-02-18T12:27:30.571761464Z Gathering data for ns/openshift-apiserver...[must-gather      ] OUT 2025-02-18T12:27:48.22504458Z Gathering data for ns/openshift-controller-manager-operator...[must-gather      ] OUT 2025-02-18T12:27:48.512187074Z Gathering data for ns/openshift-controller-manager...[must-gather      ] OUT 2025-02-18T12:27:54.214375258Z Gathering data for ns/openshift-route-controller-manager...[must-gather      ] OUT 2025-02-18T12:28:00.348573134Z Gathering data for ns/openshift-cluster-samples-operator...[must-gather      ] OUT 
2025-02-18T12:28:01.963674224Z Gathering data for ns/openshift-operator-lifecycle-manager...[must-gather      ] OUT 2025-02-18T12:28:06.298061531Z Gathering data for ns/openshift-service-ca-operator...[must-gather      ] OUT 2025-02-18T12:28:06.583425448Z Gathering data for ns/openshift-service-ca...[must-gather      ] OUT 2025-02-18T12:28:06.889590424Z Gathering data for ns/openshift-cluster-csi-drivers...[must-gather      ] OUT 2025-02-18T12:28:07.276378619Z Wrote inspect data to must-gather.local.5390849910137909608/inspect.local.1558164137628117313.error running backup collection: inspection completed with the errors occurred while gathering data:    [skipping gathering namespaces/openshift-authentication due to error: one or more errors occurred while gathering pod-specific data for namespace: openshift-authentication
          one or more errors occurred while gathering container data for pod oauth-openshift-6d798f766c-wl4w7:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-authentication/oauth-openshift-6d798f766c-wl4w7/oauth-openshift?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-authentication/oauth-openshift-6d798f766c-wl4w7/oauth-openshift?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], skipping gathering namespaces/openshift-oauth-apiserver due to error: one or more errors occurred while gathering pod-specific data for namespace: openshift-oauth-apiserver
          one or more errors occurred while gathering container data for pod apiserver-65676db74b-d7zfr:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-oauth-apiserver/apiserver-65676db74b-d7zfr/oauth-apiserver?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-oauth-apiserver/apiserver-65676db74b-d7zfr/oauth-apiserver?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-oauth-apiserver/apiserver-65676db74b-d7zfr/fix-audit-permissions?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-oauth-apiserver/apiserver-65676db74b-d7zfr/fix-audit-permissions?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], skipping gathering namespaces/openshift-machine-api due to error: one or more errors occurred while gathering pod-specific data for namespace: openshift-machine-api
          one or more errors occurred while gathering container data for pod ironic-proxy-r2ssq:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-machine-api/ironic-proxy-r2ssq/ironic-proxy?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-machine-api/ironic-proxy-r2ssq/ironic-proxy?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], skipping gathering namespaces/openshift-dns due to error: one or more errors occurred while gathering pod-specific data for namespace: openshift-dns
          [one or more errors occurred while gathering container data for pod dns-default-74tff:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-dns/dns-default-74tff/dns?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-dns/dns-default-74tff/dns?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-dns/dns-default-74tff/kube-rbac-proxy?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-dns/dns-default-74tff/kube-rbac-proxy?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod node-resolver-v4hjb:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-dns/node-resolver-v4hjb/dns-node-resolver?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-dns/node-resolver-v4hjb/dns-node-resolver?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host]], skipping gathering namespaces/openshift-etcd due to error: one or more errors occurred while gathering pod-specific data for namespace: openshift-etcd
          [one or more errors occurred while gathering container data for pod etcd-guard-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/etcd-guard-hub-ctlplane-0.5g-deployment.lab/guard?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/etcd-guard-hub-ctlplane-0.5g-deployment.lab/guard?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod etcd-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/etcd-hub-ctlplane-0.5g-deployment.lab/etcdctl?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/etcd-hub-ctlplane-0.5g-deployment.lab/etcdctl?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/etcd-hub-ctlplane-0.5g-deployment.lab/etcd?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/etcd-hub-ctlplane-0.5g-deployment.lab/etcd?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/etcd-hub-ctlplane-0.5g-deployment.lab/etcd-metrics?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/etcd-hub-ctlplane-0.5g-deployment.lab/etcd-metrics?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/etcd-hub-ctlplane-0.5g-deployment.lab/etcd-readyz?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/etcd-hub-ctlplane-0.5g-deployment.lab/etcd-readyz?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/etcd-hub-ctlplane-0.5g-deployment.lab/setup?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/etcd-hub-ctlplane-0.5g-deployment.lab/setup?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/etcd-hub-ctlplane-0.5g-deployment.lab/etcd-ensure-env-vars?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/etcd-hub-ctlplane-0.5g-deployment.lab/etcd-ensure-env-vars?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/etcd-hub-ctlplane-0.5g-deployment.lab/etcd-resources-copy?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/etcd-hub-ctlplane-0.5g-deployment.lab/etcd-resources-copy?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod installer-5-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/installer-5-hub-ctlplane-0.5g-deployment.lab/installer?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/installer-5-hub-ctlplane-0.5g-deployment.lab/installer?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod installer-7-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/installer-7-hub-ctlplane-0.5g-deployment.lab/installer?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/installer-7-hub-ctlplane-0.5g-deployment.lab/installer?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod installer-8-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/installer-8-hub-ctlplane-0.5g-deployment.lab/installer?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/installer-8-hub-ctlplane-0.5g-deployment.lab/installer?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod revision-pruner-7-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/revision-pruner-7-hub-ctlplane-0.5g-deployment.lab/pruner?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/revision-pruner-7-hub-ctlplane-0.5g-deployment.lab/pruner?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod revision-pruner-8-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/revision-pruner-8-hub-ctlplane-0.5g-deployment.lab/pruner?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-etcd/revision-pruner-8-hub-ctlplane-0.5g-deployment.lab/pruner?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host]], skipping gathering namespaces/openshift-image-registry due to error: one or more errors occurred while gathering pod-specific data for namespace: openshift-image-registry
          one or more errors occurred while gathering container data for pod node-ca-gnpnf:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-image-registry/node-ca-gnpnf/node-ca?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-image-registry/node-ca-gnpnf/node-ca?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], skipping gathering namespaces/openshift-ingress-canary due to error: one or more errors occurred while gathering pod-specific data for namespace: openshift-ingress-canary
          one or more errors occurred while gathering container data for pod ingress-canary-w85b8:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-ingress-canary/ingress-canary-w85b8/serve-healthcheck-canary?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-ingress-canary/ingress-canary-w85b8/serve-healthcheck-canary?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], skipping gathering secrets/support due to error: secrets "support" not found, skipping gathering customresourcedefinitions.apiextensions.k8s.io due to error: skipping gathering namespaces/openshift-monitoring due to error: one or more errors occurred while gathering pod-specific data for namespace: openshift-monitoring
          [one or more errors occurred while gathering container data for pod node-exporter-bxzkw:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/node-exporter-bxzkw/node-exporter?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/node-exporter-bxzkw/node-exporter?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/node-exporter-bxzkw/kube-rbac-proxy?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/node-exporter-bxzkw/kube-rbac-proxy?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/node-exporter-bxzkw/init-textfile?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/node-exporter-bxzkw/init-textfile?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod prometheus-k8s-1:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/prometheus-k8s-1/prometheus?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/prometheus-k8s-1/prometheus?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/prometheus-k8s-1/config-reloader?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/prometheus-k8s-1/config-reloader?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/prometheus-k8s-1/thanos-sidecar?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/prometheus-k8s-1/thanos-sidecar?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/prometheus-k8s-1/kube-rbac-proxy-web?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/prometheus-k8s-1/kube-rbac-proxy-web?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/prometheus-k8s-1/kube-rbac-proxy?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/prometheus-k8s-1/kube-rbac-proxy?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/prometheus-k8s-1/kube-rbac-proxy-thanos?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/prometheus-k8s-1/kube-rbac-proxy-thanos?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/prometheus-k8s-1/init-config-reloader?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-monitoring/prometheus-k8s-1/init-config-reloader?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host]], skipping gathering customresourcedefinitions.apiextensions.k8s.io due to error: skipping gathering namespaces/openshift-cluster-node-tuning-operator due to error: one or more errors occurred while gathering pod-specific data for namespace: openshift-cluster-node-tuning-operator
          one or more errors occurred while gathering container data for pod tuned-wqv2c:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-cluster-node-tuning-operator/tuned-wqv2c/tuned?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-cluster-node-tuning-operator/tuned-wqv2c/tuned?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], skipping gathering namespaces/openshift-kube-apiserver due to error: one or more errors occurred while gathering pod-specific data for namespace: openshift-kube-apiserver
          [one or more errors occurred while gathering container data for pod installer-10-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/installer-10-hub-ctlplane-0.5g-deployment.lab/installer?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/installer-10-hub-ctlplane-0.5g-deployment.lab/installer?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod installer-12-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/installer-12-hub-ctlplane-0.5g-deployment.lab/installer?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/installer-12-hub-ctlplane-0.5g-deployment.lab/installer?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod installer-13-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/installer-13-hub-ctlplane-0.5g-deployment.lab/installer?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/installer-13-hub-ctlplane-0.5g-deployment.lab/installer?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod installer-14-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/installer-14-hub-ctlplane-0.5g-deployment.lab/installer?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/installer-14-hub-ctlplane-0.5g-deployment.lab/installer?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod kube-apiserver-guard-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/kube-apiserver-guard-hub-ctlplane-0.5g-deployment.lab/guard?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/kube-apiserver-guard-hub-ctlplane-0.5g-deployment.lab/guard?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod kube-apiserver-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/kube-apiserver-hub-ctlplane-0.5g-deployment.lab/kube-apiserver?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/kube-apiserver-hub-ctlplane-0.5g-deployment.lab/kube-apiserver?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/kube-apiserver-hub-ctlplane-0.5g-deployment.lab/kube-apiserver-cert-syncer?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/kube-apiserver-hub-ctlplane-0.5g-deployment.lab/kube-apiserver-cert-syncer?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/kube-apiserver-hub-ctlplane-0.5g-deployment.lab/kube-apiserver-cert-regeneration-controller?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/kube-apiserver-hub-ctlplane-0.5g-deployment.lab/kube-apiserver-cert-regeneration-controller?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/kube-apiserver-hub-ctlplane-0.5g-deployment.lab/kube-apiserver-insecure-readyz?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/kube-apiserver-hub-ctlplane-0.5g-deployment.lab/kube-apiserver-insecure-readyz?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/kube-apiserver-hub-ctlplane-0.5g-deployment.lab/kube-apiserver-check-endpoints?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/kube-apiserver-hub-ctlplane-0.5g-deployment.lab/kube-apiserver-check-endpoints?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/kube-apiserver-hub-ctlplane-0.5g-deployment.lab/setup?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/kube-apiserver-hub-ctlplane-0.5g-deployment.lab/setup?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod revision-pruner-10-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/revision-pruner-10-hub-ctlplane-0.5g-deployment.lab/pruner?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/revision-pruner-10-hub-ctlplane-0.5g-deployment.lab/pruner?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod revision-pruner-11-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/revision-pruner-11-hub-ctlplane-0.5g-deployment.lab/pruner?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/revision-pruner-11-hub-ctlplane-0.5g-deployment.lab/pruner?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod revision-pruner-12-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/revision-pruner-12-hub-ctlplane-0.5g-deployment.lab/pruner?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/revision-pruner-12-hub-ctlplane-0.5g-deployment.lab/pruner?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod revision-pruner-13-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/revision-pruner-13-hub-ctlplane-0.5g-deployment.lab/pruner?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/revision-pruner-13-hub-ctlplane-0.5g-deployment.lab/pruner?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod revision-pruner-14-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/revision-pruner-14-hub-ctlplane-0.5g-deployment.lab/pruner?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-apiserver/revision-pruner-14-hub-ctlplane-0.5g-deployment.lab/pruner?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host]], skipping gathering mutatingwebhookconfigurations.admissionregistration.k8s.io due to error: skipping gathering namespaces/multicluster-engine due to error: one or more errors occurred while gathering pod-specific data for namespace: multicluster-engine
          one or more errors occurred while gathering container data for pod assisted-service-5bff7d6554-x4stb:
          [Get "https://172.16.30.20:10250/containerLogs/multicluster-engine/assisted-service-5bff7d6554-x4stb/assisted-service?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/multicluster-engine/assisted-service-5bff7d6554-x4stb/assisted-service?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/multicluster-engine/assisted-service-5bff7d6554-x4stb/postgres?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/multicluster-engine/assisted-service-5bff7d6554-x4stb/postgres?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], skipping gathering validatingwebhookconfigurations.admissionregistration.k8s.io due to error: skipping gathering namespaces/openshift-multus due to error: one or more errors occurred while gathering pod-specific data for namespace: openshift-multus
          [one or more errors occurred while gathering container data for pod multus-additional-cni-plugins-j5txh:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-multus/multus-additional-cni-plugins-j5txh/kube-multus-additional-cni-plugins?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-multus/multus-additional-cni-plugins-j5txh/kube-multus-additional-cni-plugins?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-multus/multus-additional-cni-plugins-j5txh/egress-router-binary-copy?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-multus/multus-additional-cni-plugins-j5txh/egress-router-binary-copy?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-multus/multus-additional-cni-plugins-j5txh/cni-plugins?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-multus/multus-additional-cni-plugins-j5txh/cni-plugins?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-multus/multus-additional-cni-plugins-j5txh/bond-cni-plugin?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-multus/multus-additional-cni-plugins-j5txh/bond-cni-plugin?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-multus/multus-additional-cni-plugins-j5txh/routeoverride-cni?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-multus/multus-additional-cni-plugins-j5txh/routeoverride-cni?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-multus/multus-additional-cni-plugins-j5txh/whereabouts-cni-bincopy?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-multus/multus-additional-cni-plugins-j5txh/whereabouts-cni-bincopy?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-multus/multus-additional-cni-plugins-j5txh/whereabouts-cni?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-multus/multus-additional-cni-plugins-j5txh/whereabouts-cni?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod multus-c7dfk:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-multus/multus-c7dfk/kube-multus?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-multus/multus-c7dfk/kube-multus?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod network-metrics-daemon-gjbpn:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-multus/network-metrics-daemon-gjbpn/network-metrics-daemon?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-multus/network-metrics-daemon-gjbpn/network-metrics-daemon?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-multus/network-metrics-daemon-gjbpn/kube-rbac-proxy?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-multus/network-metrics-daemon-gjbpn/kube-rbac-proxy?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host]], skipping gathering validatingwebhookconfigurations.admissionregistration.k8s.io due to error: skipping gathering namespaces/openshift-storage due to error: one or more errors occurred while gathering pod-specific data for namespace: openshift-storage
          one or more errors occurred while gathering container data for pod vg-manager-ftczx:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-storage/vg-manager-ftczx/vg-manager?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-storage/vg-manager-ftczx/vg-manager?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], skipping gathering namespaces/openshift-kube-controller-manager due to error: one or more errors occurred while gathering pod-specific data for namespace: openshift-kube-controller-manager
          [one or more errors occurred while gathering container data for pod installer-3-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-kube-controller-manager/installer-3-hub-ctlplane-0.5g-deployment.lab/installer?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-controller-manager/installer-3-hub-ctlplane-0.5g-deployment.lab/installer?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod installer-4-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-kube-controller-manager/installer-4-hub-ctlplane-0.5g-deployment.lab/installer?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-controller-manager/installer-4-hub-ctlplane-0.5g-deployment.lab/installer?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod installer-4-retry-1-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-kube-controller-manager/installer-4-retry-1-hub-ctlplane-0.5g-deployment.lab/installer?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-controller-manager/installer-4-retry-1-hub-ctlplane-0.5g-deployment.lab/installer?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod installer-5-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-kube-controller-manager/installer-5-hub-ctlplane-0.5g-deployment.lab/installer?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-controller-manager/installer-5-hub-ctlplane-0.5g-deployment.lab/installer?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod kube-controller-manager-guard-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-kube-controller-manager/kube-controller-manager-guard-hub-ctlplane-0.5g-deployment.lab/guard?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-controller-manager/kube-controller-manager-guard-hub-ctlplane-0.5g-deployment.lab/guard?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod kube-controller-manager-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-kube-controller-manager/kube-controller-manager-hub-ctlplane-0.5g-deployment.lab/kube-controller-manager?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-controller-manager/kube-controller-manager-hub-ctlplane-0.5g-deployment.lab/kube-controller-manager?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-controller-manager/kube-controller-manager-hub-ctlplane-0.5g-deployment.lab/cluster-policy-controller?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-controller-manager/kube-controller-manager-hub-ctlplane-0.5g-deployment.lab/cluster-policy-controller?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-controller-manager/kube-controller-manager-hub-ctlplane-0.5g-deployment.lab/kube-controller-manager-cert-syncer?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-controller-manager/kube-controller-manager-hub-ctlplane-0.5g-deployment.lab/kube-controller-manager-cert-syncer?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-controller-manager/kube-controller-manager-hub-ctlplane-0.5g-deployment.lab/kube-controller-manager-recovery-controller?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-controller-manager/kube-controller-manager-hub-ctlplane-0.5g-deployment.lab/kube-controller-manager-recovery-controller?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host]], skipping gathering namespaces/openshift-kube-scheduler due to error: one or more errors occurred while gathering pod-specific data for namespace: openshift-kube-scheduler
          [one or more errors occurred while gathering container data for pod installer-5-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-kube-scheduler/installer-5-hub-ctlplane-0.5g-deployment.lab/installer?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-scheduler/installer-5-hub-ctlplane-0.5g-deployment.lab/installer?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod installer-6-hub-ctlplane-0.5g-deployment.lab:
          [Get "https://172.16.30.20:10250/containerLogs/openshift-kube-scheduler/installer-6-hub-ctlplane-0.5g-deployment.lab/installer?previous=true&timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host, Get "https://172.16.30.20:10250/containerLogs/openshift-kube-scheduler/installer-6-hub-ctlplane-0.5g-deployment.lab/installer?timestamps=true": dial tcp 172.16.30.20:10250: connect: no route to host], one or more errors occurred while gathering container data for pod openshift-kube-scheduler-guard-hub-ctlplane-0.5g-deployment.lab:
          one or more errors occurred while gathering container data for pod route-controller-manager-59759f8564-hpt8r:
      
      
      error: gather did not start for pod must-gather-tx5v9: timed out waiting for the condition
      

      Expected results:

          The must-gather pod should not be scheduled on the tainted, NotReady node.
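      As an illustration only (not the actual fix), the generated pod could carry a narrower toleration set: keep the control-plane NoSchedule tolerations it needs to land on master nodes, and drop the blanket `operator: Exists` entry so that taints such as node.kubernetes.io/unreachable and node.kubernetes.io/not-ready are no longer tolerated. A sketch using k8s.io/api/core/v1 types (the chosen keys are assumptions about what must-gather needs):

      package main

      import (
          "fmt"

          corev1 "k8s.io/api/core/v1"
      )

      // narrowTolerations is a hypothetical replacement for the single
      // {operator: Exists} toleration: it tolerates only the control-plane
      // NoSchedule taints, so unreachable/not-ready taints keep the pod off
      // NotReady nodes.
      func narrowTolerations() []corev1.Toleration {
          return []corev1.Toleration{
              {
                  Key:      "node-role.kubernetes.io/master",
                  Operator: corev1.TolerationOpExists,
                  Effect:   corev1.TaintEffectNoSchedule,
              },
              {
                  Key:      "node-role.kubernetes.io/control-plane",
                  Operator: corev1.TolerationOpExists,
                  Effect:   corev1.TaintEffectNoSchedule,
              },
          }
      }

      func main() {
          for _, t := range narrowTolerations() {
              fmt.Printf("tolerate key=%s effect=%s\n", t.Key, t.Effect)
          }
      }

      With tolerations like these, the default scheduler would filter out the node carrying the node.kubernetes.io/unreachable taints instead of binding the must-gather pod to it.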

      Additional info:

          

              Assignee: aos-workloads-staff (Workloads Team Bot Account)
              Reporter: Mihai IDU (midu@redhat.com)
              QA Contact: Ying Zhou