Description
The customer is reporting oddities with Private Automation Hub (PAH) and authentication to its container registry. They keep receiving an HTTP 403, even though they've assured me several times that the credentials are correct.
The only way I've been able to reproduce this condition is by using an incorrect username/password:
(INCORRECT PASSWORD)

[root@goldenrhel8 ~]# podman login -u admin 192.168.2.52.nip.io --log-level=debug
INFO[0000] podman filtering at log level debug
DEBU[0000] Called login.PersistentPreRunE(podman login -u admin 192.168.2.52.nip.io --log-level=debug)
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] cached value indicated that metacopy is being used
DEBU[0000] cached value indicated that native-diff is not being used
INFO[0000] Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true
DEBU[0000] Initializing event backend file
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/runc"
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman.conflist
DEBU[0000] Default CNI network name podman is unchangeable
INFO[0000] Setting parallel job count to 13
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/001-rhel-shortnames.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/002-rhel-shortnames-overrides.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/force-fully-qualified-images.conf"
DEBU[0000] No credentials for 192.168.2.52.nip.io found
Password:
DEBU[0002] Looking for TLS certificates and private keys in /etc/containers/certs.d/192.168.2.52.nip.io
DEBU[0002] crt: /etc/containers/certs.d/192.168.2.52.nip.io/client.crt
DEBU[0002] GET https://192.168.2.52.nip.io/v2/
DEBU[0002] Ping https://192.168.2.52.nip.io/v2/ status 401
DEBU[0002] GET https://192.168.2.52.nip.io/token?account=admin&service=192.168.2.52.nip.io
Error: authenticating creds for "192.168.2.52.nip.io": Requesting bear token: invalid status code from registry 403 (Forbidden)

(CORRECT PASSWORD)

[root@goldenrhel8 ~]# podman login -u admin 192.168.2.52.nip.io --log-level=debug
INFO[0000] podman filtering at log level debug
DEBU[0000] Called login.PersistentPreRunE(podman login -u admin 192.168.2.52.nip.io --log-level=debug)
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] cached value indicated that metacopy is being used
DEBU[0000] cached value indicated that native-diff is not being used
INFO[0000] Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true
DEBU[0000] Initializing event backend file
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/runc"
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman.conflist
DEBU[0000] Default CNI network name podman is unchangeable
INFO[0000] Setting parallel job count to 13
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/001-rhel-shortnames.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/002-rhel-shortnames-overrides.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/force-fully-qualified-images.conf"
DEBU[0000] No credentials for 192.168.2.52.nip.io found
Password:
DEBU[0001] Looking for TLS certificates and private keys in /etc/containers/certs.d/192.168.2.52.nip.io
DEBU[0001] crt: /etc/containers/certs.d/192.168.2.52.nip.io/client.crt
DEBU[0001] GET https://192.168.2.52.nip.io/v2/
DEBU[0001] Ping https://192.168.2.52.nip.io/v2/ status 401
DEBU[0001] GET https://192.168.2.52.nip.io/token?account=admin&service=192.168.2.52.nip.io
DEBU[0002] GET https://192.168.2.52.nip.io/v2/
DEBU[0002] Stored credentials for 192.168.2.52.nip.io in credential helper containers-auth.json
Login Succeeded!
DEBU[0002] Called login.PersistentPostRunE(podman login -u admin 192.168.2.52.nip.io --log-level=debug)
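In both transcripts the request that matters is the GET against the /token endpoint; only the bad-credential run comes back 403. One way to take podman out of the picture is to hit that endpoint directly with curl and compare status codes. This is a hypothetical sketch: the hostname, account, and service values are copied from the debug output above, and -k is only appropriate because the registry appears to use a self-signed certificate:

```shell
# Query the PAH token endpoint directly, mirroring podman's request.
# REGISTRY matches the hostname from the debug log above.
REGISTRY=192.168.2.52.nip.io
TOKEN_URL="https://${REGISTRY}/token?account=admin&service=${REGISTRY}"

# -u admin with no password makes curl prompt interactively;
# -w '%{http_code}' prints only the HTTP status code.
# A 200 here alongside a 403 from podman would point at client-side
# config; a 403 here confirms the token service itself is rejecting
# the account.
curl -sk -u admin -o /dev/null -w '%{http_code}\n' "$TOKEN_URL"
```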
I've asked the customer to verify the following:
- The username/password is correct
- The user authenticating to PAH has the proper access roles
Each time I ask, the customer assures me everything is correct.
Customer's Private Automation Hub Version:
automation-hub-4.5.0-1.el8pc.noarch Sun Jun 12 04:18:58 2022
Steps to Reproduce
1.) Authenticate to Private Automation Hub's registry via podman login.
2.) Attempt to push/pull images to Private Automation Hub's container registry.
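The steps above can be sketched as a concrete command sequence. This is only an illustration: the image name ubi8/ubi-minimal is an assumption, and any image the user has permission to push would do:

```shell
REGISTRY=192.168.2.52.nip.io

# 1.) Authenticate to Private Automation Hub's registry.
podman login -u admin "$REGISTRY"

# 2.) Tag a local image into the PAH namespace, push it, then pull
# it back to exercise both directions.
podman pull registry.access.redhat.com/ubi8/ubi-minimal:latest
podman tag registry.access.redhat.com/ubi8/ubi-minimal:latest \
  "$REGISTRY/ubi8/ubi-minimal:latest"
podman push "$REGISTRY/ubi8/ubi-minimal:latest"
podman pull "$REGISTRY/ubi8/ubi-minimal:latest"
```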
Actual Behavior
podman operations against the registry fail with HTTP 403 (Forbidden), even though the credentials are correct.
Expected Behavior
podman push/pull operations should work as expected, and skopeo tasks should also run successfully. If a more user-friendly approach to registry authentication can be implemented within PAH, that would definitely help customers like this one.
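For reference, the skopeo side of the expected behavior might be checked like this (a sketch: the repository path is illustrative, and $PASSWORD stands in for the customer's actual credential):

```shell
REGISTRY=192.168.2.52.nip.io

# Inspect a repository in PAH with explicit credentials; a working
# setup returns image metadata rather than a 403.
skopeo inspect --creds admin:"$PASSWORD" \
  "docker://$REGISTRY/ubi8/ubi-minimal:latest"

# Copy an image into PAH, which exercises the same push path a
# skopeo-based task would use.
skopeo copy --dest-creds admin:"$PASSWORD" \
  docker://registry.access.redhat.com/ubi8/ubi-minimal:latest \
  "docker://$REGISTRY/ubi8/ubi-minimal:latest"
```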