-
Bug
-
Resolution: Duplicate
-
Critical
-
None
-
odf-4.18
Description of problem - Provide a detailed description of the issue encountered, including logs/command-output snippets and screenshots if the issue is observed in the UI:
The Submariner connection status is degraded on a Provider RDR setup. While installing Submariner, the 'Globalnet' checkbox was selected.
The setup details:
We used 2 Provider clusters, ibm-baremetal1 and ibm-baremetal3, with 2 HCP clusters on each:
- ibm-baremetal1-hcp418-bm1-a and ibm-baremetal1-hcp418-bm1-b on ibm-baremetal1
- ibm-baremetal3-hcp418-bm3-a and ibm-baremetal3-hcp418-bm3-b on ibm-baremetal3
and a separate ACM hub cluster: ibm-baremetal5
The error messages shown for the Submariner add-on installation:
bm1
The connection between clusters "ibm-baremetal1" and "ibm-baremetal3" is established
The connection between clusters "ibm-baremetal1" and "ibm-baremetal1-hcp418-bm1-a" is established
The connection between clusters "ibm-baremetal1" and "ibm-baremetal3-hcp418-bm3-a" is established
The connection between clusters "ibm-baremetal1" and "ibm-baremetal1-hcp418-bm1-b" is not established (status=connecting)
The connection between clusters "ibm-baremetal1" and "ibm-baremetal3-hcp418-bm3-b" is not established (status=error)
bm3
The connection between clusters "ibm-baremetal3" and "ibm-baremetal1" is established
The connection between clusters "ibm-baremetal3" and "ibm-baremetal3-hcp418-bm3-b" is established
The connection between clusters "ibm-baremetal3" and "ibm-baremetal1-hcp418-bm1-b" is established
The connection between clusters "ibm-baremetal3" and "ibm-baremetal1-hcp418-bm1-a" is established
The connection between clusters "ibm-baremetal3" and "ibm-baremetal3-hcp418-bm3-a" is not established (status=connecting)
hcp418-bm1-a
The connection between clusters "ibm-baremetal1-hcp418-bm1-a" and "ibm-baremetal1" is established
The connection between clusters "ibm-baremetal1-hcp418-bm1-a" and "ibm-baremetal3" is established
The connection between clusters "ibm-baremetal1-hcp418-bm1-a" and "ibm-baremetal1-hcp418-bm1-b" is not established (status=connecting)
The connection between clusters "ibm-baremetal1-hcp418-bm1-a" and "ibm-baremetal3-hcp418-bm3-a" is not established (status=connecting)
The connection between clusters "ibm-baremetal1-hcp418-bm1-a" and "ibm-baremetal3-hcp418-bm3-b" is not established (status=connecting)
hcp418-bm1-b
The connection between clusters "ibm-baremetal1-hcp418-bm1-b" and "ibm-baremetal3" is established
The connection between clusters "ibm-baremetal1-hcp418-bm1-b" and "ibm-baremetal1" is not established (status=connecting)
The connection between clusters "ibm-baremetal1-hcp418-bm1-b" and "ibm-baremetal3-hcp418-bm3-a" is not established (status=connecting)
The connection between clusters "ibm-baremetal1-hcp418-bm1-b" and "ibm-baremetal3-hcp418-bm3-b" is not established (status=connecting)
The connection between clusters "ibm-baremetal1-hcp418-bm1-b" and "ibm-baremetal1-hcp418-bm1-a" is not established (status=connecting)
hcp418-bm3-a
The connection between clusters "ibm-baremetal3-hcp418-bm3-a" and "ibm-baremetal1" is established
The connection between clusters "ibm-baremetal3-hcp418-bm3-a" and "ibm-baremetal3" is not established (status=connecting)
The connection between clusters "ibm-baremetal3-hcp418-bm3-a" and "ibm-baremetal1-hcp418-bm1-b" is not established (status=connecting)
The connection between clusters "ibm-baremetal3-hcp418-bm3-a" and "ibm-baremetal3-hcp418-bm3-b" is not established (status=connecting)
The connection between clusters "ibm-baremetal3-hcp418-bm3-a" and "ibm-baremetal1-hcp418-bm1-a" is not established (status=connecting)
hcp418-bm3-b
The connection between clusters "ibm-baremetal3-hcp418-bm3-b" and "ibm-baremetal3" is established
The connection between clusters "ibm-baremetal3-hcp418-bm3-b" and "ibm-baremetal1" is not established (status=error)
The connection between clusters "ibm-baremetal3-hcp418-bm3-b" and "ibm-baremetal1-hcp418-bm1-b" is not established (status=connecting)
The connection between clusters "ibm-baremetal3-hcp418-bm3-b" and "ibm-baremetal3-hcp418-bm3-a" is not established (status=connecting)
The connection between clusters "ibm-baremetal3-hcp418-bm3-b" and "ibm-baremetal1-hcp418-bm1-a" is not established (status=connecting)
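The per-cluster messages above can be tallied mechanically to see how degraded the mesh is. A minimal local sketch, assuming the messages follow exactly the wording shown in this report (no cluster access needed):

```shell
#!/bin/sh
# Tally Submariner connection states from the addon status text.
# $status below is an excerpt of the "bm1" messages quoted above.
status='The connection between clusters "ibm-baremetal1" and "ibm-baremetal3" is established The connection between clusters "ibm-baremetal1" and "ibm-baremetal1-hcp418-bm1-b" is not established (status=connecting) The connection between clusters "ibm-baremetal1" and "ibm-baremetal3-hcp418-bm3-b" is not established (status=error)'

# Count occurrences of each phrase ("is established" does not match
# inside "is not established", so the counts are disjoint).
up=$(printf '%s' "$status" | grep -o 'is established' | wc -l | tr -d ' ')
down=$(printf '%s' "$status" | grep -o 'is not established' | wc -l | tr -d ' ')
echo "established=$up not_established=$down"
```

For the excerpt above this prints `established=1 not_established=2`; feeding in all six clusters' messages gives the full picture.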
The OCP platform infrastructure and deployment type (AWS, Bare Metal, VMware, etc. Please clarify if it is platform agnostic deployment), (IPI/UPI):
Bare Metal
The ODF deployment type (Internal, External, Internal-Attached (LSO), Multicluster, DR, Provider, etc):
Provider
The version of all relevant components (OCP, ODF, RHCS, ACM whichever is applicable):
OCP: 4.18.0-ec.3
OCS: 4.18.0-83
Does this issue impact your ability to continue to work with the product?
Yes
Is there any workaround available to the best of your knowledge?
No
Can this issue be reproduced? If so, please provide the hit rate
Yes
Can this issue be reproduced from the UI?
If this is a regression, please provide more details to justify this:
Steps to Reproduce:
1. Create 2 Provider clusters with 2 HCP clusters on each
2. Create an ACM hub cluster
3. Import the Provider and hosted clusters into the ACM hub
4. Install the Submariner add-on for all the clusters with 'Globalnet' enabled
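For reference, enabling Globalnet in the console roughly corresponds to setting it on the Broker CR before any cluster joins. A sketch under the assumption that the ManagedClusterSet's broker namespace is `submariner-broker` (adjust to `<your-clusterset>-broker`; verify the field names against your Submariner/ACM version):

```shell
# Globalnet is a broker-wide setting and must be enabled before the
# first cluster joins; it cannot be toggled on an existing deployment.
oc apply -f - <<'EOF'
apiVersion: submariner.io/v1alpha1
kind: Broker
metadata:
  name: submariner-broker
  namespace: submariner-broker   # assumption: <clusterset-name>-broker
spec:
  globalnetEnabled: true
EOF
```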
The exact date and time when the issue was observed, including timezone details:
Actual results:
The Submariner connection status is degraded on the Provider RDR setup.
Expected results:
The Submariner connection status should be healthy.
Logs collected and log location:
Additional info:
We have received some responses and suggestions from Aswin:
There seem to be issues with public IPs: ibm-baremetal3 and ibm-baremetal3-hcp418-bm3-b have the same public IP (52.118.43.166), and in the Submariner logs the NAT discovery packets can be seen going to the wrong cluster:
error="remote endpoint \"submariner-cable-ibm-baremetal1-52-118-6-135\" responded with \"UNKNOWN_DST_CLUSTER\""
The node IP of one hosted cluster cannot be reached from another hosted cluster's node.
Can we force the hosted HCP clusters to use only the public IP assigned to them? It seems some of the packets are being sent to the bare-metal host and getting translated to its public IP there.
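To confirm the duplicate public IP and pin a hosted cluster's gateway to its own address, one possible approach (a sketch, assuming kubeconfig access to each cluster and Submariner's documented public-IP override annotation; the resource names and annotation key should be verified against the deployed Submariner version):

```shell
# List the public IP recorded on each Submariner endpoint to confirm
# that two clusters advertise the same address.
oc get endpoints.submariner.io -n submariner-operator \
  -o custom-columns='CLUSTER:.spec.cluster_id,PUBLIC_IP:.spec.public_ip'

# On the hosted cluster whose traffic is NATed through the bare-metal
# host, pin the gateway to the public IP assigned to that cluster.
# <gateway-node> and <public-ip> are placeholders for this setup.
oc annotate node <gateway-node> \
  gateway.submariner.io/public-ip=ipv4:<public-ip> --overwrite
```

The gateway pod typically needs a restart to pick up the annotation.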
RH slack thread - https://redhat-internal.slack.com/archives/C0134E73VH6/p1733237680559359?thread_ts=1732190525.902269&cid=C0134E73VH6