Type: Bug
Resolution: Done
Sprints: Submariner Sprint 23-6, Submariner Sprint 2023-7
Description of problem:
This is a Regional DR setup in which the active hub was brought down and a backup was restored on the passive hub.
I am using Submariner 0.15 with ACM 2.8, with Globalnet enabled.
Before performing the failover operation, Submariner was healthy and connectivity was established.
From the ACM UI of the passive hub, I performed a failover of busybox-workloads-3 to the C2 managed cluster; the workload was initially running on the C1 managed cluster.
Before the failover, all master nodes of C1 were brought down; once the failover completed successfully, they were brought back up.
After that, Submariner connectivity went down and the connection status in the ACM UI started showing Degraded.
Version-Release number of selected component (if applicable):
acm-custom-registry:2.8.0-DOWNSTREAM-2023-05-03-03-36-16
submariner 0.15
ODF 4.13.0-182.stable
OCP 4.13.0-0.nightly-2023-05-02-134729
How reproducible:
Steps to Reproduce:
1. Create a Regional DR setup with the above-mentioned versions by following the hub recovery documentation.
2. After the active hub is brought down and the backup is restored on the passive hub, ensure that Submariner is also restored and that connectivity is established.
3. Let the subscription-based DR-protected workloads run on the C1 managed cluster and check that mirroring is healthy between the C1 and C2 managed clusters.
4. Bring all master nodes of C1 down and wait until the ACM UI reports the cluster as being in an offline/unknown state.
5. Perform a failover of the DR-protected workloads to C2.
6. When the failover completes, bring all master nodes of C1 back up.
7. Wait for all the resources to resume/start, then check Submariner connectivity.
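The connectivity check in the last step can be scripted with the standard subctl commands; a minimal sketch, assuming each managed cluster is reachable via its own kubeconfig (the kubeconfig paths below are placeholders):

```shell
# Show gateway connection status on C1 (kubeconfig path is a placeholder)
subctl show connections --kubeconfig /path/to/c1-kubeconfig

# Run the full diagnostics suite, which covers CNI compatibility,
# gateway connections, Kubernetes version checks, and firewall checks
subctl diagnose all --kubeconfig /path/to/c1-kubeconfig

# Repeat on C2 to confirm both sides agree on the connection state
subctl show connections --kubeconfig /path/to/c2-kubeconfig
```

A healthy setup should report the peer gateway with status `connected` on both clusters; after the failover described above, the connection instead degrades.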
Actual results: After the failover was performed from the passive hub, Submariner connectivity is lost.
Expected results: When a failover is performed from the passive hub, Submariner connectivity shouldn't be lost.
Additional info:
amagrawa:~$ oc get catsrc acm-custom-registry -n openshift-marketplace -o json |jq -r .spec.image
quay.io:443/acm-d/acm-custom-registry:2.8.0-DOWNSTREAM-2023-05-03-03-36-16
amagrawa:~$ clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.13.0-0.nightly-2023-05-02-134729 True False 5d19h Cluster version is 4.13.0-0.nightly-2023-05-02-134729
amagrawa:~$ subctl version
subctl version: v0.15.0-rc1
amagrawa:~$ oc version
Client Version: 4.13.0-0.nightly-2023-03-23-204038
Kustomize Version: v4.5.7
Server Version: 4.13.0-0.nightly-2023-05-02-134729
Kubernetes Version: v1.26.3+b404935
amagrawa:~$ cat /etc/os-release
NAME="Fedora Linux"
VERSION="37 (Workstation Edition)"
ID=fedora
VERSION_ID=37
VERSION_CODENAME=""
PLATFORM_ID="platform:f37"
PRETTY_NAME="Fedora Linux 37 (Workstation Edition)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:37"
DEFAULT_HOSTNAME="fedora"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f37/system-administrators-guide/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=37
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=37
SUPPORT_END=2023-11-14
VARIANT="Workstation Edition"
VARIANT_ID=workstation
amagrawa:~$ uname -a
Linux li-880b4a4c-2629-11b2-a85c-9189ecc1153d.ibm.com 6.2.7-200.fc37.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Mar 17 16:16:00 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Platform- VMware
amagrawa:~$ oc get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
amagrawa-c3-lwpbg-master-0 Ready control-plane,master 5d20h v1.26.3+b404935 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=vsphere-vm.cpu-4.mem-16gb.os-unknown,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=amagrawa-c3-lwpbg-master-0,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/instance-type=vsphere-vm.cpu-4.mem-16gb.os-unknown,node.openshift.io/os_id=rhcos
amagrawa-c3-lwpbg-master-1 Ready control-plane,master 5d20h v1.26.3+b404935 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=vsphere-vm.cpu-4.mem-16gb.os-unknown,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=amagrawa-c3-lwpbg-master-1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/instance-type=vsphere-vm.cpu-4.mem-16gb.os-unknown,node.openshift.io/os_id=rhcos
amagrawa-c3-lwpbg-master-2 Ready control-plane,master 5d20h v1.26.3+b404935 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=vsphere-vm.cpu-4.mem-16gb.os-unknown,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=amagrawa-c3-lwpbg-master-2,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/instance-type=vsphere-vm.cpu-4.mem-16gb.os-unknown,node.openshift.io/os_id=rhcos
amagrawa-c3-lwpbg-worker-0-m6thr Ready worker 5d19h v1.26.3+b404935 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=vsphere-vm.cpu-16.mem-64gb.os-unknown,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=amagrawa-c3-lwpbg-worker-0-m6thr,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.kubernetes.io/instance-type=vsphere-vm.cpu-16.mem-64gb.os-unknown,node.openshift.io/os_id=rhcos
amagrawa-c3-lwpbg-worker-0-nwpj6 Ready worker 5d19h v1.26.3+b404935 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=vsphere-vm.cpu-16.mem-64gb.os-unknown,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=amagrawa-c3-lwpbg-worker-0-nwpj6,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.kubernetes.io/instance-type=vsphere-vm.cpu-16.mem-64gb.os-unknown,node.openshift.io/os_id=rhcos
amagrawa-c3-lwpbg-worker-0-xb8f8 Ready worker 5d19h v1.26.3+b404935 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=vsphere-vm.cpu-16.mem-64gb.os-unknown,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=amagrawa-c3-lwpbg-worker-0-xb8f8,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.kubernetes.io/instance-type=vsphere-vm.cpu-16.mem-64gb.os-unknown,node.openshift.io/os_id=rhcos
ACM and Submariner were installed via the ACM UI. Globalnet was also enabled via the ACM UI while installing Submariner.
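To confirm that the Globalnet setting survived the hub restore, the broker object and the Globalnet egress allocations can be inspected. A hedged sketch — the broker namespace and CR name below assume a default broker deployment and may differ in an ACM-managed install:

```shell
# The Broker CR records whether Globalnet was enabled at deploy time
# (namespace/name assume the default broker install)
oc get broker submariner-broker -n submariner-k8s-broker \
  -o jsonpath='{.spec.globalnetEnabled}'

# On each managed cluster, Globalnet allocates cluster-scoped egress IPs;
# these should be re-established after the nodes come back up
oc get clusterglobalegressip -A
```

If the egress IP allocations are missing or stale after C1's master nodes return, that would point to Globalnet state not being reconciled correctly after the outage.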