Type: Story
Resolution: Done
Priority: Normal
Affects Version: 4.16
Parent: OCPSTRAT-1389 - On Cluster Layering: Phase 3 (GA)
Sprints: MCO Sprint 259, MCO Sprint 260, MCO Sprint 261, MCO Sprint 262
Description of problem:
When OCL (on-cluster layering) is configured in a cluster that uses a proxy, OCL does not use the proxy to build the image.
Version-Release number of selected component (if applicable):
oc get clusterversion
NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.16.0-rc.8   True        False         5h14m   Cluster version is 4.16.0-rc.8
How reproducible:
Always
Steps to Reproduce:
1. Create a cluster that sits behind a proxy and can only reach the internet through that proxy. We can do it by using this flexy-install template, for example:
https://gitlab.cee.redhat.com/aosqe/flexy-templates/-/blob/5724d9c157d51f175069c5bf09be1872173d0167/functionality-testing/aos-4_16/ipi-on-aws/versioned-installer-customer_vpc-http_proxy-multiblockdevices-fips-ovn-ipsec-ci
(private-templates/functionality-testing/aos-4_16/ipi-on-aws/versioned-installer-customer_vpc-http_proxy-multiblockdevices-fips-ovn-ipsec-ci)
2. Enable OCL in a machineconfigpool by creating a MachineOSConfig (MOSC) resource.
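For step 2, a minimal MachineOSConfig sketch is shown below. It follows the shape of the 4.16 tech-preview API; the pool name, secret names, and push spec are placeholders that must match the target cluster:

```yaml
# Sketch only: field names per the 4.16 tech-preview MachineOSConfig API;
# secret names and the push spec below are placeholders.
apiVersion: machineconfiguration.openshift.io/v1alpha1
kind: MachineOSConfig
metadata:
  name: worker
spec:
  machineConfigPool:
    name: worker                      # pool to enable OCL on
  buildInputs:
    imageBuilder:
      imageBuilderType: PodImageBuilder
    baseImagePullSecret:
      name: pull-secret-copy          # placeholder secret name
    renderedImagePushSecret:
      name: push-secret               # placeholder secret name
    renderedImagePushspec: "registry.example.com/mco/os-image:latest"
```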
Actual results:
The build pod does not use the proxy to build the image and fails with a log similar to this one:

time="2024-06-25T13:38:19Z" level=debug msg="GET https://quay.io/v1/_ping"
time="2024-06-25T13:38:49Z" level=debug msg="Ping https://quay.io/v1/_ping err Get \"https://quay.io/v1/_ping\": dial tcp 44.216.66.253:443: i/o timeout (&url.Error{Op:\"Get\", URL:\"https://quay.io/v1/_ping\", Err:(*net.OpError)(0xc000220d20)})"
time="2024-06-25T13:38:49Z" level=debug msg="Accessing \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883\" failed: pinging container registry quay.io: Get \"https://quay.io/v2/\": dial tcp 44.216.66.253:443: i/o timeout"
time="2024-06-25T13:38:49Z" level=debug msg="Error pulling candidate quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883: pinging container registry quay.io: Get \"https://quay.io/v2/\": dial tcp 44.216.66.253:443: i/o timeout"
Error: creating build container: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883: pinging container registry quay.io: Get "https://quay.io/v2/": dial tcp 44.216.66.253:443: i/o timeout
time="2024-06-25T13:38:49Z" level=debug msg="shutting down the store"
time="2024-06-25T13:38:49Z" level=debug msg="exit status 125"
Expected results:
The build should be able to access the necessary resources through the configured proxy.
Additional info:
When verifying this ticket, we need to pay special attention to HTTPS proxies that use their own user-CA certificate. We can use this flexy-install template:
https://gitlab.cee.redhat.com/aosqe/flexy-templates/-/blob/5724d9c157d51f175069c5bf09be1872173d0167/functionality-testing/aos-4_16/ipi-on-osp/versioned-installer-https_proxy
(private-templates/functionality-testing/aos-4_16/ipi-on-osp/versioned-installer-https_proxy)

In this kind of cluster it is not enough to use the proxy to build the image; the build also needs the /etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt file to be able to reach the yum repositories, since rpm-ostree will otherwise complain about an intermediate certificate (the one of the HTTPS proxy) being self-signed.

To test it we can use a custom Containerfile including something similar to:

RUN cd /etc/yum.repos.d/ && curl -LO https://pkgs.tailscale.com/stable/fedora/tailscale.repo && \
    rpm-ostree install tailscale && rpm-ostree cleanup -m && \
    systemctl enable tailscaled && \
    ostree container commit
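At its core, the fix has to propagate the cluster-wide Proxy configuration into the build pod's environment so that the image pull and the yum/rpm-ostree steps go through the proxy. The snippet below is only an illustration of that mapping, not the MCO's actual code; the function name and the plain-dict representation of the Proxy status are assumptions:

```python
# Illustrative sketch: map the fields of the cluster Proxy object's status
# (represented here as a plain dict) to the standard proxy environment
# variables a build container honors. Not the MCO's real implementation.
def proxy_env_vars(proxy_status):
    """Return a list of {name, value} env entries for a container spec."""
    mapping = {
        "httpProxy": "HTTP_PROXY",
        "httpsProxy": "HTTPS_PROXY",
        "noProxy": "NO_PROXY",
    }
    env = []
    for field, var in mapping.items():
        value = proxy_status.get(field)
        if value:
            # Set both upper- and lower-case forms: different tools
            # (curl, git, buildah) check different spellings.
            env.append({"name": var, "value": value})
            env.append({"name": var.lower(), "value": value})
    return env


if __name__ == "__main__":
    status = {
        "httpProxy": "http://proxy.example.com:3128",
        "noProxy": ".cluster.local,.svc",
    }
    for entry in proxy_env_vars(status):
        print(entry["name"], "=", entry["value"])
```

Only the fields that are actually set on the Proxy object produce variables, so an unset httpsProxy (as above) yields no HTTPS_PROXY entry.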