Bug
Resolution: Done
Normal
None
4.13, 4.12, 4.11
Low
No
2
OSDOCS Sprint 250
1
False
Release Note Not Required
In Progress
Description of problem:
The documented output is incorrect: for example, the cluster does not have six masters, and each ovnkube-master pod has six containers, so READY should read 6/6. We should also review and possibly extend some of the surrounding information.

In the verification section - https://docs.openshift.com/container-platform/4.13/networking/ovn_kubernetes_network_provider/configuring-ipsec-ovn.html#nw-ovn-ipsec-verification_configuring-ipsec-ovn - the example should read:

$ oc get pods -l app=ovnkube-master -n openshift-ovn-kubernetes
NAME                   READY   STATUS    RESTARTS   AGE
ovnkube-master-2hmmf   6/6     Running   0          2m22s
ovnkube-master-5l2tm   6/6     Running   0          6m25s
ovnkube-master-sb25m   6/6     Running   0          10m

For step 2, which verifies the databases, I wonder if we should have something like this instead:

$ for OVNMASTER in $(oc get pod -l app=ovnkube-master --no-headers -o custom-columns=NAME:.metadata.name); \
  do oc rsh -Tc northd $OVNMASTER ovn-nbctl --no-leader-only get nb_global . ipsec ; \
  oc rsh -Tc northd $OVNMASTER ovn-sbctl --no-leader-only get sb_global . ipsec; \
  done

Finally, I wonder whether it would be useful to add the same database check to the disabling section - https://docs.openshift.com/container-platform/4.13/networking/ovn_kubernetes_network_provider/configuring-ipsec-ovn.html#nw-ovn-ipsec-disable_configuring-ipsec-ovn - since the ovn-ipsec pods stay on the cluster once IPsec is disabled. It could be a good idea for the customer to double-check that the command above returns "false".
Version-Release number of selected component (if applicable):
This applies only to the documentation for 4.11, 4.12, and 4.13.