OpenShift Bugs / OCPBUGS-57501

After the cluster upgrade to 4.16.38, some of the iptables-alerter pods are in CreateContainerError state.


    • Type: Bug
    • Resolution: Obsolete
    • Priority: Major
    • Affects Version/s: 4.16.z
    • Component/s: Node / CRI-O
    • Quality / Stability / Reliability
    • Severity: Important

      Description of problem:

       --> After the cluster upgrade to 4.16.38, some of the iptables-alerter pods are stuck in CreateContainerError state, as shown:
      pod/iptables-alerter-98mdm              0/1     CreateContainerError   0          23h
      pod/iptables-alerter-bgzl7              1/1     Running                1          11d
      pod/iptables-alerter-mpswn              0/1     CreateContainerError   0          5d
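      (For reference, a minimal sketch of how the listing above can be gathered; the openshift-network-operator namespace is an assumption and should be adjusted to wherever the iptables-alerter daemonset runs in the cluster:)
      # List the iptables-alerter daemonset pods and the nodes they are scheduled on
      oc get pods -n openshift-network-operator -o wide | grep iptables-alerter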
      
      --> Looking at the pod YAML, the affected pods only report "context deadline exceeded" in the container status, as shown:
      containerStatuses:
        - image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c3d0b212909bc8049aa6fb9006c0990b664dd0ca5089da224e08ba909ce0a0b
          imageID: ""
          lastState: {}
          name: iptables-alerter
          ready: false
          restartCount: 0
          started: false
          state:
            waiting:
              message: context deadline exceeded
              reason: CreateContainerError
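      (The status block above can be pulled with standard oc commands; a sketch, reusing one of the affected pod names and assuming the same namespace as above:)
      # Dump the full spec/status of an affected pod
      oc get pod iptables-alerter-98mdm -n openshift-network-operator -o yaml
      # Condensed view of the same pod, including recent events
      oc describe pod iptables-alerter-98mdm -n openshift-network-operator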
      
      --> A similar error can be seen in the events as well:
      
      LAST SEEN   TYPE      REASON   OBJECT                       MESSAGE
      5m          Warning   Failed   pod/iptables-alerter-98mdm   Error: context deadline exceeded
      3m7s        Warning   Failed   pod/iptables-alerter-mpswn   Error: context deadline exceeded
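      (A sketch of how the events above can be collected, again assuming the openshift-network-operator namespace:)
      # Show namespace events, newest last, filtered to the alerter pods
      oc get events -n openshift-network-operator --sort-by=.lastTimestamp | grep iptables-alerter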
      
      --> Manually deleted the iptables-alerter pods, but no luck.
      --> Manually restarted the kubelet and crio services on the affected nodes, but no luck.
      --> Pruned exited containers on the nodes, but no luck.
      --> Performed hard and soft reboots of the worker nodes from the VMware console, but no luck.
      --> The issue also persists after wiping the CRI-O storage on the node (node-level checks are sketched below).
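      (A sketch of the node-level checks that go with the steps above; <affected-node> is a placeholder, and the commands assume oc debug access to the node:)
      # Open a debug shell on an affected node and switch to the host namespace
      oc debug node/<affected-node>
      chroot /host
      # Check kubelet and CRI-O service state and recent deadline errors
      systemctl status kubelet crio
      journalctl -u crio -u kubelet --since "1 hour ago" | grep -i "deadline"
      # List containers known to CRI-O, including exited ones
      crictl ps -a | grep iptables-alerter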

      Version-Release number of selected component (if applicable):

      4.16.38    

      How reproducible:

          

      Steps to Reproduce:

          1.
          2.
          3.
          

      Actual results:

          

      Expected results:

          

      Additional info:

          

              Assignee: Kevin Hannon (rh-ee-kehannon)
              Reporter: Shivam Upadhyay (rhn-support-shupadhy)
              QA Contact: Anurag Saxena