OpenShift Bugs / OCPBUGS-29901

multus pod stuck in CrashLoopBackOff state during the upgrade


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Affects Version/s: 4.12.z
    • Component/s: Networking / multus
    • Quality / Stability / Reliability
    • Severity: Critical

      Description of problem:

      While upgrading the cluster to 4.12.37, the network cluster operator reports:

      ~~~
      network 4.11.32 True True False 535d DaemonSet "/openshift-multus/multus" update is rolling out (73 out of 89 updated)
      ~~~
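
      (For reference, the status above matches what a command along these lines would show; the exact invocation is an assumption, not part of the original report:)
      ~~~
      oc get clusteroperator network
      ~~~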

      The multus pods are in CrashLoopBackOff (CLBO):
      ~~~
      multus-bvgqz 0/1 CrashLoopBackOff 8 (4m14s ago) 21m 10.xx.xx.xx shk-xxxx.xxx.xxx-xsn6d <none> <none>
      multus-cz4d4 0/1 CrashLoopBackOff 12 (64s ago) 37m 10.xx.xx.xx shk-xxxx.xxx.xxx-znq6n <none> <none>
      multus-d9tl8 0/1 CrashLoopBackOff 8 (4m19s ago) 21m 10.xx.xx.xx shk-xxxx.xxx.xxx-work
      ~~~
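
      A listing like the one above can be reproduced with something like the following (the grep pattern is an assumption):
      ~~~
      oc get pods -n openshift-multus -o wide | grep -i crashloop
      ~~~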

      Events from the oc describe output:
      ~~~
      Events:
        Type     Reason     Age                  From               Message
        ----     ------     ----                 ----               -------
        Normal   Scheduled  20m                  default-scheduler  Successfully assigned openshift-multus/multus-bvgqz to shk-xxxx.xxx.xxx-xsn6d
        Normal   Pulled     19m (x5 over 20m)    kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20578ca1053282b10acef52310866c28828d1327b8e7bfd34b0a273fbbc87ab4" already present on machine
        Normal   Created    20m (x5 over 21m)    kubelet            Created container kube-multus
        Normal   Started    20m (x5 over 21m)    kubelet            Started container kube-multus
        Warning  BackOff    102s (x93 over 21m)  kubelet            Back-off restarting failed container
      ~~~
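
      Because the container keeps restarting, the previous attempt's logs may hold the actual failure; a sketch, reusing the pod name from the events above:
      ~~~
      oc logs -n openshift-multus multus-bvgqz --previous
      ~~~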

      Pod logs:
      ~~~
      oc logs multus-wglk9
      2024-02-25T05:46:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel8/bin/ to /host/opt/cni/bin/upgrade_c40b9097-de60-422b-99e7-5d6af8d4a764
      2024-02-25T05:46:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c40b9097-de60-422b-99e7-5d6af8d4a764 to /host/opt/cni/bin/
      2024-02-25T05:46:37+00:00 WARN: {unknown parameter "-"}
      2024-02-25T05:46:37+00:00 Entrypoint skipped copying Multus binary.
      2024-02-25T05:46:37+00:00 Generating Multus configuration file using files in /host/var/run/multus/cni/net.d...
      2024-02-25T05:46:37+00:00 Using MASTER_PLUGIN: 80-openshift-network.conf

      ~~~
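
      The entrypoint reads /host/var/run/multus/cni/net.d when generating the config, so listing that directory on an affected node may help; a sketch via oc debug, with the node name redacted as elsewhere in this report:
      ~~~
      oc debug node/shk-xxxx.xxx.xxx-xsn6d -- chroot /host ls -l /var/run/multus/cni/net.d /etc/cni/net.d
      ~~~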

      crictl output from the node:
      ~~~
      crictl ps | grep multus
      aab50c1cc455c quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20578ca1053282b10acef52310866c28828d1327b8e7bfd34b0a273fbbc87ab4 5 hours ago Running kube-multus-additional-cni-plugins 0 ef8d32d214a79 multus-additional-cni-plugins-bnrgl
      ~~~
      ~~~
      crictl ps -a | grep multus
      2276e9b4afecf 9b8da56f1fa444fc606d343d5f27026c335856fd2f8004348809daf474c61708 4 minutes ago Exited kube-multus 13 0919ea20c8077 multus-nhf8s
      aab50c1cc455c quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20578ca1053282b10acef52310866c28828d1327b8e7bfd34b0a273fbbc87ab4 5 hours ago Running kube-multus-additional-cni-plugins 0 ef8d32d214a79 multus-additional-cni-plugins-bnrgl
      ~~~

      ~~~
      crictl logs 06313cc3c22cb
      2024-02-25T05:55:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel8/bin/ to /host/opt/cni/bin/upgrade_de773bb6-2309-45de-a52f-c1f83b8bc3b9
      2024-02-25T05:55:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_de773bb6-2309-45de-a52f-c1f83b8bc3b9 to /host/opt/cni/bin/
      2024-02-25T05:55:21+00:00 WARN: {unknown parameter "-"}
      2024-02-25T05:55:21+00:00 Entrypoint skipped copying Multus binary.
      2024-02-25T05:55:21+00:00 Generating Multus configuration file using files in /host/var/run/multus/cni/net.d...
      2024-02-25T05:55:21+00:00 Using MASTER_PLUGIN: 80-openshift-network.conf
      ~~~
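
      The log ends without an explicit error, so the exit status of the crashed container may be more telling; a sketch using the exited container ID from the crictl ps -a output above:
      ~~~
      crictl inspect 2276e9b4afecf | grep -i -A2 exitcode
      ~~~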

      Workaround: reboot all the nodes one by one; nothing else we tried resolved the issue.
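
      A minimal sketch of that workaround, assuming cluster-admin access; the drain flags, sleep, and timeout are assumptions to tune per environment:
      ~~~
      for node in $(oc get nodes -o name); do
        oc adm cordon "$node"
        oc adm drain "$node" --ignore-daemonsets --delete-emptydir-data --force
        oc debug "$node" -- chroot /host systemctl reboot || true
        sleep 120   # give the node time to go NotReady before waiting on Ready
        oc wait "$node" --for=condition=Ready --timeout=30m
        oc adm uncordon "$node"
      done
      ~~~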

              Assignee: Douglas Smith (dosmith)
              Reporter: Vimal Solanki (rhn-support-vsolanki)
              QA Contact: Weibin Liang
              Votes: 0
              Watchers: 4