OpenShift Bugs / OCPBUGS-39108

image-registry pod keep restarting due to panic


    • Moderate
      * Previously, the image registry would, in some cases, panic when attempting to purge failed uploads from S3-compatible storage providers. This was caused by the image registry's S3 driver mishandling empty directory paths. With this update, the image registry properly handles empty directory paths, fixing the panic. (link:https://issues.redhat.com/browse/OCPBUGS-39108[*OCPBUGS-39108*])
    • Bug Fix
    • Done

      Description of problem:

          A customer is running a 4.16.1 OCP cluster in which the image-registry pod keeps restarting with the panic below:

      message: |
        /image-registry/vendor/github.com/aws/aws-sdk-go/service/s3/api.go:7629 +0x1d0
        github.com/distribution/distribution/v3/registry/storage/driver/s3-aws.(*driver).doWalk(0xc000a3c120, {0x28924c0, 0xc0001f5b20}, 0xc00083bab8, {0xc00125b7d1, 0x20}, {0x2866860, 0x1}, 0xc00120a8d0)
        	/go/src/github.com/openshift/image-registry/vendor/github.com/distribution/distribution/v3/registry/storage/driver/s3-aws/s3.go:1135 +0x348
        github.com/distribution/distribution/v3/registry/storage/driver/s3-aws.(*driver).Walk(0xc000675ec0?, {0x28924c0, 0xc0001f5b20}, {0xc000675ec0, 0x20}, 0xc00083bc10?)
        	/go/src/github.com/openshift/image-registry/vendor/github.com/distribution/distribution/v3/registry/storage/driver/s3-aws/s3.go:1095 +0x148
        github.com/distribution/distribution/v3/registry/storage/driver/base.(*Base).Walk(0xc000519480, {0x2892778?, 0xc00012cf00?}, {0xc000675ec0, 0x20}, 0x1?)
        	/go/src/github.com/openshift/image-registry/vendor/github.com/distribution/distribution/v3/registry/storage/driver/base/base.go:237 +0x237
        github.com/distribution/distribution/v3/registry/storage.getOutstandingUploads({0x2892778, 0xc00012cf00}, {0x289d728?, 0xc000519480})
        	/go/src/github.com/openshift/image-registry/vendor/github.com/distribution/distribution/v3/registry/storage/purgeuploads.go:70 +0x1f9
        github.com/distribution/distribution/v3/registry/storage.PurgeUploads({0x2892778, 0xc00012cf00}, {0x289d728?, 0xc000519480?}, {0xc1a937efcf6aec96, 0xfffddc8e973b8a89, 0x3a94520}, 0x1)
        	/go/src/github.com/openshift/image-registry/vendor/github.com/distribution/distribution/v3/registry/storage/purgeuploads.go:34 +0x12d
        github.com/distribution/distribution/v3/registry/handlers.startUploadPurger.func1()
        	/go/src/github.com/openshift/image-registry/vendor/github.com/distribution/distribution/v3/registry/handlers/app.go:1139 +0x33f
        created by github.com/distribution/distribution/v3/registry/handlers.startUploadPurger in goroutine 1
        	/go/src/github.com/openshift/image-registry/vendor/github.com/distribution/distribution/v3/registry/handlers/app.go:1127 +0x329
      reason: Error
      startedAt: "2024-08-27T09:08:14Z"
      name: registry
      ready: true
      restartCount: 250
      started: true
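      The trace ends inside the S3 driver's doWalk while the upload purger walks storage keys, and the release note attributes the panic to the driver mishandling empty directory paths. A minimal Go sketch of that failure class and its guard; walkPrefix is a hypothetical helper for illustration, not the driver's actual code:

      ```go
      package main

      import (
      	"fmt"
      	"strings"
      )

      // walkPrefix illustrates the kind of path normalization a storage walk
      // performs: trimming the trailing "/" from a directory path before
      // listing its children. Without the emptiness guard, the expression
      // path[len(path)-1] on "" panics with "index out of range [-1]" --
      // the same class of failure as in the report above.
      func walkPrefix(path string) string {
      	if path == "" { // guard: an empty directory path has nothing to trim
      		return path
      	}
      	if path[len(path)-1] == '/' {
      		return strings.TrimSuffix(path, "/")
      	}
      	return path
      }

      func main() {
      	fmt.Println(walkPrefix("docker/registry/v2/")) // prints "docker/registry/v2"
      	fmt.Println(walkPrefix(""))                    // no panic: prints an empty line
      }
      ```

      Because the purger runs in a goroutine started at registry startup (startUploadPurger), an unrecovered panic there takes down the whole process, which is why the pod restarts repeatedly rather than merely logging an error.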

      Version-Release number of selected component (if applicable):

          4.16.1

      How reproducible:

          

      Steps to Reproduce:

          1.
          2.
          3.
          

      Actual results:

         All of the image registry pods are restarting.

      Expected results:

          The image registry pods should not restart.

      Additional info:

      https://redhat-internal.slack.com/archives/C013VBYBJQH/p1724761756273879    
      upstream report: https://github.com/distribution/distribution/issues/4358

              fmissi Flavian Missi
              rhn-support-psingour Poornima Singour
              Wen Wang Wen Wang