Type: Bug
Resolution: Done
Priority: Blocker
Severity: Important
Description of the problem:
The RHDH Helm chart deploying Orchestrator creates a NetworkPolicy (rhdh-allow-external-communication) that is supposed to allow external traffic to the Backstage UI, but it does not actually allow that traffic.
How reproducible:
99%: two QE engineers on separate environments have hit it. About 1 run in 5 did not hit it for me, possibly due to timing or something else; I have not been able to reproduce a successful run since to investigate further.
Steps to reproduce:
1. Deploy Orchestrator + RHDH via the Helm chart
2. Browse to the Backstage UI using the route (see the sketch after this list)
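A minimal reproduction sketch, assuming the RHDH chart is installed from the OpenShift Helm chart repository into an rhdh namespace; the repository, chart reference, and release name here are assumptions for illustration, not values from this report:
# Add the chart repository (assumed source for the RHDH chart)
helm repo add openshift-helm-charts https://charts.openshift.io
# Install RHDH; release name and namespace are assumptions
helm install rhdh openshift-helm-charts/redhat-developer-hub -n rhdh --create-namespace
# Find the route host for the Backstage UI and open it in a browser
oc get routes -n rhdh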
Actual results:
The OpenShift router reports that it cannot reach the application.
Expected results:
The application can be reached
Note:
Here is the network policy that should be letting us through but is not:
oc get networkpolicy rhdh-allow-external-communication -o yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  annotations:
    meta.helm.sh/release-name: rhdh
    meta.helm.sh/release-namespace: rhdh
  creationTimestamp: "2025-05-09T20:04:48Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  name: rhdh-allow-external-communication
  namespace: rhdh
  resourceVersion: "206364"
  uid: db83eb8c-ef53-4aee-9ae7-05e43215bc2d
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: openshift-ingress
  podSelector: {}
  policyTypes:
  - Ingress
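One way to narrow this down (a suggestion, not from the report) is to compare the labels the policy selects on with the labels actually present on the ingress namespace, and to check whether the router pods are host-networked; host-networked router traffic may not match a plain namespaceSelector on the openshift-ingress namespace:
# Check which labels the openshift-ingress namespace actually carries
oc get namespace openshift-ingress --show-labels
# Check where the router pods run and what IPs their traffic arrives from
oc get pods -n openshift-ingress -o wide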
I followed this doc and replaced the network policy with this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-router
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          policy-group.network.openshift.io/ingress: ""
  podSelector: {}
  policyTypes:
  - Ingress
The matchLabels approach above let me access the UI via the route.
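A sketch of applying the replacement policy and re-testing, assuming the manifest above is saved as allow-from-router.yaml and that the chart's route is the only route in the rhdh namespace (the file name and route lookup are assumptions):
# Replace the chart-generated policy with the working one
oc delete networkpolicy rhdh-allow-external-communication -n rhdh
oc apply -f allow-from-router.yaml -n rhdh
# Re-test the route from outside the cluster
curl -k "https://$(oc get route -n rhdh -o jsonpath='{.items[0].spec.host}')"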