Bug
Resolution: Unresolved
Major
4.16.z
Quality / Stability / Reliability
Critical
Customer Escalated
Description of problem:
The cluster was upgraded from 4.14.12 to 4.14.51 to 4.15.56 to 4.16.28.
Keepalived does not come out of BACKUP state to take over the VIP when the VIP holder goes down, unless the OVN daemonsets are restarted.
Login on the active node also fails unless the following workaround is applied (a shell sketch follows the list):
1. Undeploy keepalived on the active master via mv /etc/kubernetes/manifests/keepalived.
2. Stop the keepalived service.
3. Add the following entries to /etc/hosts on master2 to temporarily redirect traffic to the other master's IP:
10.20.171.11 api-int.dmatbdmnlab-prod.vzwdt.local
10.20.171.11 api.dmatbdmnlab-prod.vzwdt.local
4. Redeploy keepalived.
5. Remove the temporary entries once traffic is successfully restored.
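For clarity, here is a minimal shell sketch of the workaround above, run as root on the nodes indicated. The manifest filename (keepalived.yaml), the backup directory, and stopping keepalived via systemctl are assumptions and may differ on this platform.

# Step 1 (active master): undeploy keepalived by moving its static pod manifest
# out of the kubelet manifests directory (filename and backup path are assumptions).
mkdir -p /root/keepalived-backup
mv /etc/kubernetes/manifests/keepalived.yaml /root/keepalived-backup/

# Step 2 (active master): stop keepalived if it also runs as a systemd unit (assumption).
systemctl stop keepalived

# Step 3 (master2): temporarily point the API hostnames at the other master's IP.
cat >> /etc/hosts <<'EOF'
10.20.171.11 api-int.dmatbdmnlab-prod.vzwdt.local
10.20.171.11 api.dmatbdmnlab-prod.vzwdt.local
EOF

# Step 4 (active master): redeploy keepalived by restoring the manifest.
mv /root/keepalived-backup/keepalived.yaml /etc/kubernetes/manifests/

# Step 5 (master2): remove the temporary entries once traffic is restored.
sed -i '/api-int\.dmatbdmnlab-prod\.vzwdt\.local/d;/api\.dmatbdmnlab-prod\.vzwdt\.local/d' /etc/hosts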
How reproducible:
Always
Steps to Reproduce:
1. Shut down the node holding the active VIP.
2. Rollout restart the OVN daemonsets on another node (see the sketch after these steps).
3. Keepalived starts advertising.
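A sketch of the restart in step 2, assuming the default OVN-Kubernetes namespace and daemonset name on 4.16 (openshift-ovn-kubernetes / ovnkube-node), which may differ:

# Restart the OVN-Kubernetes node daemonset and wait for it to roll out.
oc -n openshift-ovn-kubernetes rollout restart daemonset/ovnkube-node
oc -n openshift-ovn-kubernetes rollout status daemonset/ovnkube-node --timeout=10m

# On a surviving master, verify that the API VIP has been taken over.
ip -4 addr show | grep 10.20.171.250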
Actual results:
Nodes stay in BACKUP state despite the active node going down.
Expected results:
Another node transitions to MASTER state and takes over the VIP.
Additional info:
Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.
Affected Platforms:
If it is a customer / SD issue:
- Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
- Don’t presume that Engineering has access to Salesforce.
- Do presume that Engineering will access attachments through supportshell.
- Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
master0.txt and master1.txt show that while master2 is down, they stay quiet and do not send GARPs.
- Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
- If the issue is in a customer namespace then provide a namespace inspect.
- If it is a connectivity issue:
- What is the srcNode, srcNamespace, srcPodName and srcPodIP?
Master0-10.20.171.11
Master1-10.20.171.12
Master2-10.20.171.13
VIP-Master0-10.20.171.250
- What is the dstNode, dstNamespace, dstPodName and dstPodIP?
-
-
- Please provide the UTC timestamps of the networking outage window from the must-gather
- Please provide tcpdump pcaps taken during the outage, filtered on the src/dst IPs provided above (a capture command sketch is appended at the end of this report)
- If it is not a connectivity issue:
- Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc.) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
-
- When showing the results from commands, include the entire command in the output.
- For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
- For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
- Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
- Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
- For guidance on using this template please see
OCPBUGS Template Training for Networking components
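As referenced above, a hedged sketch of a capture command for the requested pcaps, taken on a surviving master via oc debug; the node name, the interface (br-ex), and the 120-second capture window are assumptions:

# Capture VRRP advertisements (IP protocol 112) and ARP traffic for the API VIP
# 10.20.171.250 while the VIP holder is shut down; the pcap is written to /tmp on the node.
oc debug node/master1 -- chroot /host \
  timeout 120 tcpdump -nn -i br-ex -w /tmp/vrrp-garp.pcap 'ip proto 112 or arp host 10.20.171.250'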