OpenShift Logging / LOG-6292

Pods in CrashLoopBackOff state after CLO update from v5.8.5 to v6.0.0


    • Critical

      Description of problem:

      After updating the Cluster Logging Operator (CLO) from 5.8.5 to 6.0.0, all instance* pods end up in the CrashLoopBackOff state:
          
      oc get po -n openshift-logging
      NAME                                        READY   STATUS             RESTARTS        AGE
      cluster-logging-operator-6b59ffc748-ls778   1/1     Running            0               36m
      instance-58ssh                              0/1     CrashLoopBackOff   7 (2m52s ago)   17m
      instance-bdgvw                              0/1     CrashLoopBackOff   7 (2m49s ago)   17m
      instance-kjf6q                              0/1     CrashLoopBackOff   7 (3m2s ago)    17m
      instance-mxs2r                              0/1     CrashLoopBackOff   7 (2m38s ago)   17m
      instance-nhd5v                              0/1     CrashLoopBackOff   7 (2m53s ago)   17m
      instance-plssd                              0/1     CrashLoopBackOff   7 (2m57s ago)   17m
      instance-ztdvs                              0/1     CrashLoopBackOff   7 (3m18s ago)   17m
      

      Version-Release number of selected component (if applicable):

      cluster-logging.v6.0.0
      OCP-4.17.2
          

      How reproducible:

      Reproduced on the first CLO migration attempt so far.
          

      Steps to Reproduce:

          1. Install CLO from the "stable" channel (which currently points to cluster-logging.v5.8.5)
          2. Migrate to CLO 6.0
          3. Update the ClusterLogForwarder to the new format
          4. Remove the 5.x instances and CRDs
          5. Check the pods
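For reference, in Logging 6.0 the ClusterLogForwarder moved to the observability.openshift.io/v1 API and requires an explicit service account. A minimal sketch of a migrated resource follows; the service account, output, and LokiStack names here are illustrative assumptions, not taken from this report:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance                    # example name, matching the pod prefix above
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector                 # assumption: an SA bound to the collect-* cluster roles
  outputs:
    - name: default-lokistack
      type: lokiStack
      lokiStack:
        target:
          name: logging-loki        # illustrative LokiStack name
          namespace: openshift-logging
  pipelines:
    - name: all-logs
      inputRefs:
        - application
        - infrastructure
      outputRefs:
        - default-lokistack
```

If a field is missing or invalid after migration (for example, the service account lacks the required collection roles), the generated collector pods can fail at startup, which would match the CrashLoopBackOff symptom described here.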
          

      Actual results:

      All instance* pods are in the CrashLoopBackOff state.
          

      Expected results:

      All pods are running without issues.
          

            Assignee: Unassigned
            Reporter: Yurii Prokulevych (yprokule@redhat.com)
            QA Contact: Anping Li