OpenShift Bugs / OCPBUGS-35308

router deployment fails on y-stream upgrade 4.13->4.14


    • Type: Bug
    • Resolution: Done-Errata
    • Priority: Normal
    • Fix Version: 4.16.0
    • Affects Version: 4.14
    • Component: HyperShift
    • Severity: Critical
    • Sprint: Hypershift Sprint 255
    • Release Note: Release Note Not Required
    • Status: In Progress

      This is a clone of issue OCPBUGS-25758. The following is the description of the original issue:

      Description of problem:

      router pod is in CrashLoopBackOff after y-stream upgrade from 4.13 to 4.14

      Version-Release number of selected component (if applicable):

          

      How reproducible:

      always    

      Steps to Reproduce:

          1. create a cluster with 4.13
          2. upgrade HC to 4.14

      Actual results:

          router pod in CrashLoopBackOff

      Expected results:

          router pod is running after upgrading the HC from 4.13 to 4.14

      Additional info:

      images:
      ======
      HO image: 4.15
      upgrade HC from 4.13.0-0.nightly-2023-12-19-114348 to 4.14.0-0.nightly-2023-12-19-120138
      
      router pod log:
      ==============
      jiezhao-mac:hypershift jiezhao$ oc get pods router-9cfd8b89-plvtc -n clusters-jie-test
      NAME          READY  STATUS       RESTARTS    AGE
      router-9cfd8b89-plvtc  0/1   CrashLoopBackOff  11 (45s ago)  32m
      jiezhao-mac:hypershift jiezhao$
      
      Events:
       Type   Reason              Age          From        Message
       ----   ------              ----          ----        -------
       Normal  Scheduled            27m          default-scheduler Successfully assigned clusters-jie-test/router-9cfd8b89-plvtc to ip-10-0-42-36.us-east-2.compute.internal
       Normal  AddedInterface          27m          multus       Add eth0 [10.129.2.82/23] from ovn-kubernetes
       Normal  Pulling             27m          kubelet      Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d2acba15f69ea3648b3c789111db34ff06d9230a4371c5949ebe3c6218e6ea3"
       Normal  Pulled              27m          kubelet      Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d2acba15f69ea3648b3c789111db34ff06d9230a4371c5949ebe3c6218e6ea3" in 14.309s (14.309s including waiting)
       Normal  Created             26m (x3 over 27m)   kubelet      Created container private-router
       Normal  Started             26m (x3 over 27m)   kubelet      Started container private-router
       Warning BackOff             26m (x5 over 27m)   kubelet      Back-off restarting failed container private-router in pod router-9cfd8b89-plvtc_clusters-jie-test(e6cf40ad-32cd-438c-8298-62d565cf6c6a)
       Normal  Pulled              26m (x3 over 27m)   kubelet      Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d2acba15f69ea3648b3c789111db34ff06d9230a4371c5949ebe3c6218e6ea3" already present on machine
       Warning FailedToRetrieveImagePullSecret 2m38s (x131 over 27m) kubelet      Unable to retrieve some image pull secrets (router-dockercfg-q768b); attempting to pull the image may not succeed.
      jiezhao-mac:hypershift jiezhao$
      
      jiezhao-mac:hypershift jiezhao$ oc logs router-9cfd8b89-plvtc -n clusters-jie-test
      [NOTICE]  (1) : haproxy version is 2.6.13-234aa6d
      [NOTICE]  (1) : path to executable is /usr/sbin/haproxy
      [ALERT]  (1) : config : [/usr/local/etc/haproxy/haproxy.cfg:52] : 'server ovnkube_sbdb/ovnkube_sbdb' : could not resolve address 'None'.
      [ALERT]  (1) : config : Failed to initialize server(s) addr.
      jiezhao-mac:hypershift jiezhao$
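      The ALERT above suggests the router's haproxy.cfg was rendered from a template whose
      ovnkube_sbdb backend address was unset, so the literal string 'None' leaked into the
      `server` line and haproxy refused to start. A minimal, hypothetical sketch of that
      failure mode and a guard against it (the template, `render_backend`, and the port
      value are illustrative assumptions, not the actual HyperShift code):

      ```python
      # Hypothetical illustration: rendering a backend stanza with a missing
      # (None) address emits the literal string "None", which haproxy cannot
      # resolve at startup -- matching the ALERT in the router pod log.

      BACKEND_TEMPLATE = """backend ovnkube_sbdb
        server ovnkube_sbdb {address}:{port}"""

      def render_backend(address, port=9642):
          """Render the ovnkube_sbdb backend stanza, rejecting an unset
          address instead of letting "None" leak into haproxy.cfg."""
          if not address or address == "None":
              raise ValueError("ovnkube_sbdb backend address is unset; "
                               "refusing to render an unresolvable server line")
          return BACKEND_TEMPLATE.format(address=address, port=port)

      # Unguarded rendering reproduces the broken config line from the log:
      broken = BACKEND_TEMPLATE.format(address=None, port=9642)
      assert "server ovnkube_sbdb None:9642" in broken
      ```

      With the guard in place, the controller would fail the reconcile loudly instead of
      shipping a config that crash-loops the router pod.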
      
      notes:
      =====
      not sure if it has the same root cause as https://issues.redhat.com/browse/OCPBUGS-24627

            Assignee: Seth Jennings (sjenning)
            Reporter: OpenShift Prow Bot (openshift-crt-jira-prow)
            QA Contact: Jie Zhao
