Bug
Resolution: Not a Bug
Normal
4.18, 4.18.z, 4.19.z, 4.19
Quality / Stability / Reliability
Moderate
Description of problem:
[root@bastion ~]# oc get co --kubeconfig=hcp-kubeconfig-kvm -n openshift-ingress
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console                                    4.18.11   True        False         False      8h
csi-snapshot-controller                    4.18.11   True        False         False      8h
dns                                        4.18.11   True        False         False      8h
image-registry                             4.18.11   True        False         False      8h
ingress                                    4.18.11   True        False         True       8h      The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing. Last 1 error messages:...
insights                                   4.18.11   True        False         False      8h
kube-apiserver                             4.18.11   True        False         False      8h
kube-controller-manager                    4.18.11   True        False         False      8h
kube-scheduler                             4.18.11   True        False         False      8h
kube-storage-version-migrator              4.18.11   True        False         False      8h
monitoring                                 4.18.11   True        False         False      8h
network                                    4.18.11   True        False         False      8h
node-tuning                                4.18.11   True        False         False      8h
openshift-apiserver                        4.18.11   True        False         False      8h
openshift-controller-manager               4.18.11   True        False         False      8h
openshift-samples                          4.18.11   True        False         False      8h
operator-lifecycle-manager                 4.18.11   True        False         False      8h
operator-lifecycle-manager-catalog         4.18.11   True        False         False      8h
operator-lifecycle-manager-packageserver   4.18.11   True        False         False      8h
service-ca                                 4.18.11   True        False         False      8h
storage                                    4.18.11   True        False         False      8h

[root@bastion ~]# oc get po --kubeconfig=hcp-kubeconfig-kvm -n openshift-ingress
NAME                              READY   STATUS    RESTARTS   AGE
router-default-6454d8bc64-jtjmv   1/1     Running   0          8h

[root@bastion ~]# oc describe po router-default-6454d8bc64-jtjmv --kubeconfig=hcp-kubeconfig-kvm -n openshift-ingress
Name:                 router-default-6454d8bc64-jtjmv
Namespace:            openshift-ingress
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      router
Node:                 compute-0.hosted-cluster.solntest.com/172.23.238.46
Start Time:           Tue, 29 Apr 2025 04:57:50 -0400
Labels:               ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default
                      ingresscontroller.operator.openshift.io/hash=7c696859f5
                      pod-template-hash=6454d8bc64
Annotations:          openshift.io/required-scc: hostnetwork
                      openshift.io/scc: hostnetwork
Status:               Running
IP:                   172.23.238.46
IPs:
  IP:  172.23.238.46
Controlled By:  ReplicaSet/router-default-6454d8bc64
Containers:
  router:
    Container ID:   cri-o://cdab8340fb3cbfe872fb087dd7abf198feff1addbfca6614323aff3616450114
    Image:          quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc8eef8395513f799f3e1fbdd2f67474d847f7eb68a48dbd1c2c1d81a854f630
    Image ID:       quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43fd7a3698e3f75d15d5b341d57a3e6ded5d3e403fc1ea8667204acd0108ddbd
    Ports:          80/TCP, 443/TCP, 1936/TCP
    Host Ports:     80/TCP, 443/TCP, 1936/TCP
    State:          Running
      Started:      Tue, 29 Apr 2025 04:58:22 -0400
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   256Mi
    Liveness:   http-get http://localhost:1936/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://localhost:1936/healthz/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Startup:    http-get http://localhost:1936/healthz/ready delay=0s timeout=1s period=1s #success=1 #failure=120
    Environment:
      DEFAULT_CERTIFICATE_DIR:                   /etc/pki/tls/private
      DEFAULT_DESTINATION_CA_PATH:               /var/run/configmaps/service-ca/service-ca.crt
      RELOAD_INTERVAL:                           5s
      ROUTER_ALLOW_WILDCARD_ROUTES:              false
      ROUTER_CANONICAL_HOSTNAME:                 router-default.apps.hosted-cluster.solntest.com
      ROUTER_CIPHERS:                            ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
      ROUTER_CIPHERSUITES:                       TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
      ROUTER_DISABLE_HTTP2:                      true
      ROUTER_DISABLE_NAMESPACE_OWNERSHIP_CHECK:  false
      ROUTER_DOMAIN:                             apps.hosted-cluster.solntest.com
      ROUTER_IDLE_CLOSE_ON_RESPONSE:             true
      ROUTER_LOAD_BALANCE_ALGORITHM:             random
      ROUTER_METRICS_TLS_CERT_FILE:              /etc/pki/tls/metrics-certs/tls.crt
      ROUTER_METRICS_TLS_KEY_FILE:               /etc/pki/tls/metrics-certs/tls.key
      ROUTER_METRICS_TYPE:                       haproxy
      ROUTER_SERVICE_HTTPS_PORT:                 443
      ROUTER_SERVICE_HTTP_PORT:                  80
      ROUTER_SERVICE_NAME:                       default
      ROUTER_SERVICE_NAMESPACE:                  openshift-ingress
      ROUTER_SET_FORWARDED_HEADERS:              append
      ROUTER_TCP_BALANCE_SCHEME:                 source
      ROUTER_THREADS:                            4
      SSL_MIN_VERSION:                           TLSv1.2
      STATS_PASSWORD_FILE:                       /var/lib/haproxy/conf/metrics-auth/statsPassword
      STATS_PORT:                                1936
      STATS_USERNAME_FILE:                       /var/lib/haproxy/conf/metrics-auth/statsUsername
    Mounts:
      /etc/pki/tls/metrics-certs from metrics-certs (ro)
      /etc/pki/tls/private from default-certificate (ro)
      /var/lib/haproxy/conf/metrics-auth from stats-auth (ro)
      /var/run/configmaps/service-ca from service-ca-bundle (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tkbr9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  default-certificate:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-ingress-cert
    Optional:    false
  service-ca-bundle:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      service-ca-bundle
    Optional:  false
  stats-auth:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  router-stats-default
    Optional:    false
  metrics-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  router-metrics-certs-default
    Optional:    false
  kube-api-access-tkbr9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
                             node-role.kubernetes.io/worker=
Tolerations:                 kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Topology Spread Constraints:  topology.kubernetes.io/zone:ScheduleAnyway when max skew 1 is exceeded for selector ingresscontroller.operator.openshift.io/hash in (7c696859f5)
Events:                       <none>
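The CanaryChecksSucceeding=False condition can be narrowed down with a few checks against the hosted cluster. A hedged sketch, not verified against this environment; the kubeconfig path follows the report above, and the canary route hostname is the default one the ingress operator creates, so substitute the actual hostname if it differs:

```
# Full text of the degraded condition on the ingress controller:
oc get ingresscontroller default -n openshift-ingress-operator \
  --kubeconfig=hcp-kubeconfig-kvm -o yaml

# The canary route and pods the ingress operator probes:
oc get route,pod -n openshift-ingress-canary --kubeconfig=hcp-kubeconfig-kvm

# Manual probe of the canary endpoint from the bastion, to separate
# DNS/load-balancer problems from router problems:
curl -kv https://canary-openshift-ingress-canary.apps.hosted-cluster.solntest.com
```

If the curl fails to resolve or connect, the problem is usually the wildcard DNS record or load balancer for *.apps.hosted-cluster.solntest.com rather than the router pod, which the describe output above shows as Running and Ready.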
Version-Release number of selected component (if applicable):
MCE 2.8.0
HCP version 4.18.11-multi
Management cluster OCP 4.18.11
How reproducible:
Always
Steps to Reproduce:
1. Create an x86 OCP cluster.
2. Install the MCE operator.
3. Create a KVM hosted cluster using the above cluster as the management cluster.
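For step 3, a hosted cluster on KVM is typically created through the Agent platform with the hcp CLI. A rough sketch only; the names, domain, and file paths below are placeholders, not values from this report, and the exact flags depend on the MCE/HCP release:

```
hcp create cluster agent \
  --name hosted-cluster \
  --namespace clusters \
  --agent-namespace hosted-cluster-agents \
  --base-domain solntest.com \
  --pull-secret ./pull-secret.json \
  --ssh-key ~/.ssh/id_rsa.pub \
  --release-image quay.io/openshift-release-dev/ocp-release:4.18.11-multi
```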
Actual results:
The ingress cluster operator on the hosted cluster reports Degraded=True because the canary route checks for the default ingress controller are failing.
Expected results:
All cluster operators on the hosted cluster should be Available=True and Degraded=False.
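The expected state can be checked from the bastion; a sketch using the kubeconfig path from this report (DEGRADED is the fifth column of `oc get co`):

```
# Block until no cluster operator is degraded (or time out):
oc wait clusteroperators --all --kubeconfig=hcp-kubeconfig-kvm \
  --for=condition=Degraded=False --timeout=10m

# List only operators that are still degraded:
oc get co --kubeconfig=hcp-kubeconfig-kvm | awk 'NR==1 || $5 == "True"'
```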
Additional info: