OpenShift Bugs / OCPBUGS-59680

gcp-ovn-techpreview presubmit failing on 4.18 branch sync


    • Impact: Quality / Stability / Reliability
    • Sprint: CORENET Sprint 274, CORENET Sprint 275
    • Status: In Progress
    • Release Note Type: Release Note Not Required

      The 4.19 -> 4.18 branch sync PR for ovnk is permafailing for the same reason (operators not installing).

      Job history here.
      Example failure here.

      Debug output (copied from this GitHub comment):

      ❯ oc get pods --all-namespaces | rg Pending
      
      openshift-image-registry                           image-registry-677847d5b7-js5xh                                      0/1     Pending     0            8h
      openshift-image-registry                           image-registry-677847d5b7-pxmcz                                      0/1     Pending     0            8h
      openshift-image-registry                           image-registry-7f8c69455f-56dlm                                      0/1     Pending     0            8h
      openshift-ingress                                  router-default-6d856889dc-lb8vl                                      0/1     Pending     0            8h
      openshift-ingress                                  router-default-6d856889dc-mg8sc                                      0/1     Pending     0            8h
      openshift-insights                                 periodic-gathering-88k68-grgvf                                       0/1     Pending     0            8h
      openshift-monitoring                               prometheus-operator-admission-webhook-df545d445-4wjmk                0/1     Pending     0            8h
      openshift-monitoring                               prometheus-operator-admission-webhook-df545d445-hjs5m                0/1     Pending     0            8h
      openshift-network-console                          networking-console-plugin-768869997f-tjkkt                           0/1     Pending     0            8h
      openshift-network-console                          networking-console-plugin-768869997f-vgrkn                           0/1     Pending     0            8h
      openshift-network-diagnostics                      network-check-source-58f6955785-7wz9j                                0/1     Pending     0            8h
      openshift-operator-lifecycle-manager               collect-profiles-29213955-8hnh6                                      0/1     Pending     0            7h36m
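      # An equivalent check without ripgrep (not run in this session) is to
      # filter on the pod phase directly, which avoids matching "Pending"
      # elsewhere in the line:
      #   oc get pods --all-namespaces --field-selector=status.phase=Pending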
      ❯ oc get clusteroperator
      NAME                                       VERSION                                                    AVAILABLE   PROGRESSING   DEGRADED   SINCE
      authentication                             4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   False       True          True       8h
      baremetal                                  4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      cloud-controller-manager                   4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      cloud-credential                           4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      cluster-api                                4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      cluster-autoscaler                         4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      config-operator                            4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      console                                    4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   False       False         True       8h
      control-plane-machine-set                  4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      csi-snapshot-controller                    4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      dns                                        4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      etcd                                       4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      image-registry                                                                                        False       True          True       8h
      ingress                                                                                               False       True          True       8h
      insights                                   4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      kube-apiserver                             4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      kube-controller-manager                    4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      kube-scheduler                             4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      kube-storage-version-migrator              4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      machine-api                                4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      machine-approver                           4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      machine-config                             4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      marketplace                                4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      monitoring                                                                                            False       True          True       8h
      network                                    4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        True          False      8h
      node-tuning                                4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      olm                                        4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      openshift-apiserver                        4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      openshift-controller-manager               4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      openshift-samples                          4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      operator-lifecycle-manager                 4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      service-ca                                 4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      storage                                    4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      operator-lifecycle-manager-catalog         4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
      operator-lifecycle-manager-packageserver   4.18.0-0.ci-2025-07-18-090940-test-ci-op-zx9kq02i-latest   True        False         False      8h
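      # To isolate just the unhealthy operators from the table above (not run
      # in this session; assumes jq is available):
      #   oc get clusteroperators -o json \
      #     | jq -r '.items[] | select(any(.status.conditions[]; .type=="Available" and .status=="False")) | .metadata.name'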
      ❯ oc describe pods -n openshift-network-diagnostics network-check-source-58f6955785-7wz9j
      Name:                 network-check-source-58f6955785-7wz9j
      Namespace:            openshift-network-diagnostics
      Priority:             1000000000
      Priority Class Name:  openshift-user-critical
      Service Account:      network-diagnostics
      Node:                 <none>
      Labels:               app=network-check-source
                            kubernetes.io/os=linux
                            pod-template-hash=58f6955785
      Annotations:          openshift.io/required-scc: restricted-v2
                            openshift.io/scc: restricted-v2
                            seccomp.security.alpha.kubernetes.io/pod: runtime/default
      Status:               Pending
      IP:                   
      IPs:                  <none>
      Controlled By:        ReplicaSet/network-check-source-58f6955785
      Containers:
        check-endpoints:
          Image:      registry.build04.ci.openshift.org/ci-op-zx9kq02i/stable@sha256:fc64ae5bb2957bfde020853a6f39638f20cd408a27bf5258089ef1810461d33a
          Port:       17698/TCP
          Host Port:  0/TCP
          Command:
            cluster-network-check-endpoints
          Args:
            --listen
            0.0.0.0:17698
            --namespace
            $(POD_NAMESPACE)
          Requests:
            cpu:     10m
            memory:  40Mi
          Environment:
            POD_NAME:       network-check-source-58f6955785-7wz9j (v1:metadata.name)
            POD_NAMESPACE:  openshift-network-diagnostics (v1:metadata.namespace)
          Mounts:
            /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d5ktc (ro)
      Conditions:
        Type           Status
        PodScheduled   False 
      Volumes:
        kube-api-access-d5ktc:
          Type:                    Projected (a volume that contains injected data from multiple sources)
          TokenExpirationSeconds:  3607
          ConfigMapName:           kube-root-ca.crt
          ConfigMapOptional:       <nil>
          DownwardAPI:             true
          ConfigMapName:           openshift-service-ca.crt
          ConfigMapOptional:       <nil>
      QoS Class:                   Burstable
      Node-Selectors:              kubernetes.io/os=linux
      Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                                   node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
      Events:
        Type     Reason            Age                 From               Message
        ----     ------            ----                ----               -------
        Warning  FailedScheduling  8h                  default-scheduler  0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
        Warning  FailedScheduling  8h                  default-scheduler  0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
        Warning  FailedScheduling  8h                  default-scheduler  0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. no new claims to deallocate, preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
        Warning  FailedScheduling  8h                  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. no new claims to deallocate, preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.
        Warning  FailedScheduling  8h                  default-scheduler  0/6 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 3 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }. no new claims to deallocate, preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
        Warning  FailedScheduling  7h40m (x5 over 8h)  default-scheduler  0/6 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 3 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }. no new claims to deallocate, preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
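
      The FailedScheduling events above show the three workers stuck behind an
      untolerated node.kubernetes.io/network-unavailable taint, which lines up
      with the network operator still reporting Progressing. A follow-up check
      (a sketch, not part of the copied debug output) to confirm which nodes
      still carry that taint:

      ❯ oc get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'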
      

              Assignee: Periyasamy Palanisamy (pepalani@redhat.com)
              Reporter: Jamo Luhrsen (jluhrsen)
              QA Contact: Anurag Saxena
              Votes: 0
              Watchers: 5
