  Satellite / SAT-22510

OSP Authenticated Pull fails from Satellite with error 422 Client Error: Unprocessable Content for url


      Description of problem: We are building a support matrix between OSP and Satellite for Satellite versions 6.11 through 6.15. For all of those Satellite versions, when OSP 16.2 is deployed with authenticated pull, the pull fails. The same test with unauthenticated pull succeeds.

      The error seen on OSP is: 422 Client Error: Unprocessable Content for url

      When tailing the Satellite log /var/log/foreman/production.log, the error seen is (there are many occurrences of it):

      2024-01-12T11:15:40 [I|app|02aee461] Started GET "/v2/token?account=admin&scope=repository%3Adefault_organization-rhos-16_2-containers-aodh-api%3Apull&service=vm-240-109.lab.eng.tlv2.redhat.com" for 10.46.4.150 at 2024-01-12 11:15:40 -0500
      2024-01-12T11:15:40 [I|app|02aee461] Processing by Katello::Api::Registry::RegistryProxiesController#token as HTML
      2024-01-12T11:15:40 [I|app|02aee461] Parameters: {"account"=>"admin", "scope"=>"repository:default_organization-rhos-16_2-containers-aodh-api:pull", "service"=>"vm-240-109.lab.eng.tlv2.redhat.com"}

      2024-01-12T11:15:40 [I|kat|02aee461] Authorized user admin(Admin User)
      2024-01-12T11:15:40 [E|kat|02aee461] <Class> ActiveRecord::RecordInvalid
      2024-01-12T11:15:40 [E|kat|02aee461] name: ["has already been taken"]
      2024-01-12T11:15:40 [I|app|02aee461] Completed 422 Unprocessable Entity in 176ms (Views: 1.0ms | ActiveRecord: 24.2ms | Allocations: 12987)

      This happens on a newly deployed Satellite with only one host registered, so it is unclear how the name could already be taken.
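
      The same token flow can be exercised outside of a full OSP deployment by hitting the registry endpoints shown in the log above directly. The following is a minimal sketch only; the hostname and repository name are taken from the log, the credentials are placeholders, and certificate verification is disabled just to keep the sketch short:

      #!/usr/bin/env ruby
      # Sketch of the registry token handshake that podman performs during an
      # authenticated pull (ping, then request a pull-scoped token).
      require 'net/http'
      require 'openssl'
      require 'uri'

      SATELLITE = 'vm-240-109.lab.eng.tlv2.redhat.com'  # hostname from the log; use your Satellite FQDN
      USER      = 'admin'                               # placeholder credentials
      PASSWORD  = 'changeme'
      REPO      = 'default_organization-rhos-16_2-containers-aodh-api'

      # Step 1: unauthenticated ping; the registry answers 401 and points at the token service.
      ping = Net::HTTP.start(SATELLITE, 443, use_ssl: true, verify_mode: OpenSSL::SSL::VERIFY_NONE) do |http|
        http.get('/v2/')
      end
      puts "GET /v2/      -> #{ping.code}"

      # Step 2: request a pull-scoped token with basic auth; this is the request
      # that completes with 422 Unprocessable Entity in this bug.
      token_uri = URI("https://#{SATELLITE}/v2/token?account=#{USER}&scope=repository:#{REPO}:pull&service=#{SATELLITE}")
      req = Net::HTTP::Get.new(token_uri)
      req.basic_auth(USER, PASSWORD)
      res = Net::HTTP.start(token_uri.host, token_uri.port, use_ssl: true, verify_mode: OpenSSL::SSL::VERIFY_NONE) do |http|
        http.request(req)
      end
      puts "GET /v2/token -> #{res.code}"
      puts res.body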

      Version-Release number of selected component (if applicable): OSP 16.2 and Satellite versions 6.11 through 6.15.

      How reproducible: Every time.

      Steps to Reproduce:
      1. Deploy OSP using a Satellite configured for authenticated pull.

      Actual results: OSP deployment fails with error: 422 Client Error: Unprocessable Content for url when attempting to pull a container.

      Expected results: OSP deployment is successful.

      Additional info:


            Vladimír Sedmík added a comment -

            Steps to verify:
            1. Create a non-admin user with the Manager role.
            2. Use a script to run 200 `podman login` commands concurrently in a batch and loop that batch indefinitely (see the sketch after this list). Run the looped batch from 4 hosts, some with the admin user and some with the non-admin user.
            3. Tail the production log for errors.
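
            A minimal sketch of the looping batch from step 2, assuming placeholder hostname and credentials (run a copy from each of the 4 hosts, switching USER between the admin and non-admin accounts):

            #!/usr/bin/env ruby
            # Sketch only: fires 200 concurrent `podman login` attempts per batch and
            # loops forever. SATELLITE/USER/PASSWORD are placeholders, not real values.
            SATELLITE = 'satellite.example.com'
            USER      = 'admin'
            PASSWORD  = 'changeme'

            batch = 0
            loop do
              batch += 1
              # Start all 200 logins before waiting on any of them
              pids = Array.new(200) do
                spawn('podman', 'login', '-u', USER, '-p', PASSWORD, SATELLITE,
                      out: File::NULL, err: File::NULL)
              end
              statuses = pids.map { |pid| Process.wait2(pid).last }
              failed = statuses.count { |s| !s.success? }
              puts "batch #{batch}: #{failed} of #{pids.size} logins failed"
            end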

            Using the steps above I've been able to reproduce the login issue against an unfixed VM (6.16.4 snap 1.0) after a ~10-minute run:
            2025-03-10T11:05:58 [I|app|ca9b43b4] Started GET "/v2/" for XXX.XXX.XXX.XXX at 2025-03-10 11:05:58 -0400
            2025-03-10T11:05:58 [I|app|ca9b43b4] Processing by Katello::Api::Registry::RegistryProxiesController#ping as HTML
            2025-03-10T11:05:58 [I|app|ca9b43b4]   Rendered api/v2/errors/unauthorized.json.rabl within api/v2/layouts/error_layout (Duration: 0.4ms | Allocations: 215)
            2025-03-10T11:05:58 [I|app|ca9b43b4]   Rendered layout api/v2/layouts/error_layout.json.erb (Duration: 0.7ms | Allocations: 309)
            2025-03-10T11:05:58 [I|app|ca9b43b4] Filter chain halted as :registry_authorize rendered or redirected
            2025-03-10T11:05:58 [I|app|ca9b43b4] Completed 401 Unauthorized in 9ms (Views: 1.1ms | ActiveRecord: 1.7ms | Allocations: 3194)
            2025-03-10T11:05:58 [I|app|16735e6a] Started GET "/v2/token?account=lojza&service=satellite.redhat.com" for XXX.XXX.XXX.XXX at 2025-03-10 11:05:58 -0400
            2025-03-10T11:05:58 [I|app|16735e6a] Processing by Katello::Api::Registry::RegistryProxiesController#token as HTML
            2025-03-10T11:05:58 [I|app|16735e6a]   Parameters: {"account"=>"lojza", "service"=>"satellite.redhat.com"}
            2025-03-10T11:05:58 [I|kat|16735e6a] Authorized user lojza(Lojza Vodtahlo)
            2025-03-10T11:05:58 [E|kat|16735e6a] <Class> ActiveRecord::RecordInvalid
            2025-03-10T11:05:58 [E|kat|16735e6a] name: ["has already been taken"]
            2025-03-10T11:05:58 [I|app|16735e6a] Completed 422 Unprocessable Entity in 139ms (Views: 0.8ms | ActiveRecord: 8.9ms | Allocations: 15405)

            Using the same steps I haven't been able to reproduce the issue against a fixed VM (6.17.0 snap 2.0) even after a 6-hour run.


            David Rosenfeld added a comment -

            Looks like a Need Info was added for me on 10/21. On 8/19 I answered the question that was asked with this update: The problem is still seen. Haven't learned anything new.

            Is there anything else I'm being asked?


            David Rosenfeld added a comment -

            The problem is still seen. Haven't learned anything new.


            Ian Ballou added a comment -

            This issue is not easily reproduced, but I've noticed that it can seemingly be reproduced (at least within the past couple of months) on dev environments by trying `podman login` before the database connection is fully established. The root cause of the problem is that an additional PersonalAccessToken gets created in the Foreman DB, which then triggers the `name taken` error.
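
            For context, the failure described above is a classic check-then-create race on the token record. The sketch below is illustrative only and not the actual Katello code path; the token name 'registry' and the guard shown are assumptions:

            # Illustrative only -- not the real Katello controller code.
            # Two concurrent token requests for the same user can interleave like this:
            #
            #   request A: PersonalAccessToken.find_by(user: user, name: 'registry')  # => nil
            #   request B: PersonalAccessToken.find_by(user: user, name: 'registry')  # => nil
            #   request A: PersonalAccessToken.create!(user: user, name: 'registry')  # succeeds
            #   request B: PersonalAccessToken.create!(user: user, name: 'registry')
            #              # => ActiveRecord::RecordInvalid, name: ["has already been taken"]
            #
            # One common guard is to re-read when the create loses the race:
            def registry_token_for(user, name)
              PersonalAccessToken.find_by(user: user, name: name) ||
                PersonalAccessToken.create!(user: user, name: name)
            rescue ActiveRecord::RecordInvalid
              # Another request created the token between our find and create; reuse it.
              PersonalAccessToken.find_by!(user: user, name: name)
            end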

            A workaround is to use the foreman-console to delete the duplicate PersonalAccessToken.
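
            A rough sketch of that workaround, assuming the admin login from the original report and a placeholder token id (list the tokens first, then delete only the confirmed duplicate):

            # Run inside the Foreman console on the Satellite (e.g. via `foreman-rake console`).
            user = User.find_by(login: 'admin')   # the account hitting the 422

            # List the user's tokens to spot the duplicate reported as "already been taken".
            PersonalAccessToken.where(user: user).pluck(:id, :name, :created_at).each { |row| p row }

            # Remove the extra token by id once identified (123 is a placeholder).
            PersonalAccessToken.find(123).destroy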

             

            drosenfe are you still running into this issue? Since opening, have you learned anything new about it?


              iballou@redhat.com Ian Ballou
              drosenfe David Rosenfeld
              Vladimír Sedmík
              Quinn James