Type: Bug
Resolution: Unresolved
Priority: Major
Affects Versions: 4.17, 4.18
Severity: Important
Sprints: SDN Sprint 259, SDN Sprint 260, SDN Sprint 261, SDN Sprint 262
Description of problem:
Version-Release number of selected component (if applicable):
build openshift/ovn-kubernetes#2291
How reproducible:
Always
Steps to Reproduce:
1. Create a namespace ns1
2. Create a UserDefinedNetwork CR in ns1
{code:java}
% oc get UserDefinedNetwork -n ns1 -o yaml
apiVersion: v1
items:
- apiVersion: k8s.ovn.org/v1
  kind: UserDefinedNetwork
  metadata:
    creationTimestamp: "2024-09-09T08:34:49Z"
    finalizers:
    - k8s.ovn.org/user-defined-network-protection
    generation: 1
    name: udn-network
    namespace: ns1
    resourceVersion: "73943"
    uid: c923b0b1-05b4-4889-b076-c6a28f7353de
  spec:
    layer3:
      role: Primary
      subnets:
      - cidr: 10.200.0.0/16
        hostSubnet: 24
    topology: Layer3
  status:
    conditions:
    - lastTransitionTime: "2024-09-09T08:34:49Z"
      message: NetworkAttachmentDefinition has been created
      reason: NetworkAttachmentDefinitionReady
      status: "True"
      type: NetworkReady
kind: List
metadata:
  resourceVersion: ""
{code}
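For reference, the object above corresponds to a manifest along the following lines. This is reconstructed from the `oc get` output and is not necessarily the exact file that was applied:
{code:java}
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-network
  namespace: ns1
spec:
  topology: Layer3
  layer3:
    role: Primary
    subnets:
    - cidr: 10.200.0.0/16
      hostSubnet: 24
{code}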
3. Create a service and pods in ns1
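A manifest along these lines would produce the objects shown in the output that follows. The selector, labels, image, and targetPort are illustrative assumptions; the reporter's original file is not part of the report:
{code:java}
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: ns1
spec:
  selector:
    name: test-pods          # assumed label
  ports:
  - port: 27017
    targetPort: 8080         # assumed container port
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-rc
  namespace: ns1
spec:
  replicas: 2
  selector:
    name: test-pods          # assumed label
  template:
    metadata:
      labels:
        name: test-pods      # assumed label
    spec:
      containers:
      - name: test-pod
        image: <any-image-serving-tcp-8080>   # placeholder test image
        ports:
        - containerPort: 8080
{code}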
{code:java}
% oc get svc -n ns1
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
test-service   ClusterIP   172.30.16.88   <none>        27017/TCP   5m32s

% oc get pods -n ns1
NAME            READY   STATUS    RESTARTS   AGE
test-rc-f54tl   1/1     Running   0          5m4s
test-rc-lhnd7   1/1     Running   0          5m4s

% oc exec -n ns1 test-rc-f54tl -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if41: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1360 qdisc noqueue state UP group default
    link/ether 0a:58:0a:80:02:1b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.128.2.27/23 brd 10.128.3.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::858:aff:fe80:21b/64 scope link
       valid_lft forever preferred_lft forever
3: ovn-udn1@if42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1360 qdisc noqueue state UP group default
    link/ether 0a:58:0a:c8:03:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.200.3.3/24 brd 10.200.3.255 scope global ovn-udn1
       valid_lft forever preferred_lft forever
    inet6 fe80::858:aff:fec8:303/64 scope link
       valid_lft forever preferred_lft forever
{code}
4. Restart the ovn-kubernetes pods
{code:java}
% oc delete pods --all -n openshift-ovn-kubernetes
pod "ovnkube-control-plane-76fd6ddbf4-j69j8" deleted
pod "ovnkube-control-plane-76fd6ddbf4-vnr2m" deleted
pod "ovnkube-node-5pd5w" deleted
pod "ovnkube-node-5r9mg" deleted
pod "ovnkube-node-6bdtx" deleted
pod "ovnkube-node-6v5d7" deleted
pod "ovnkube-node-8pmpq" deleted
pod "ovnkube-node-cffld" deleted
{code}
Actual results:
{code:java}
% oc get pods -n openshift-ovn-kubernetes
NAME                                     READY   STATUS             RESTARTS        AGE
ovnkube-control-plane-76fd6ddbf4-9cklv   2/2     Running            0               9m22s
ovnkube-control-plane-76fd6ddbf4-gkmlg   2/2     Running            0               9m22s
ovnkube-node-bztn5                       7/8     CrashLoopBackOff   5 (21s ago)     9m19s
ovnkube-node-qhjsw                       7/8     Error              5 (2m45s ago)   9m18s
ovnkube-node-t5f8p                       7/8     Error              5 (2m32s ago)   9m20s
ovnkube-node-t8kpp                       7/8     Error              5 (2m34s ago)   9m19s
ovnkube-node-whbvx                       7/8     Error              5 (2m35s ago)   9m20s
ovnkube-node-xlzlh                       7/8     CrashLoopBackOff   5 (14s ago)     9m18s
{code}
From `oc describe pod` on one of the crashing ovnkube-node pods:
{code:java}
ovnkube-controller:
  Container ID:  cri-o://977dd8c17320695b1098ea54996bfad69c14dc4219a91dfd4354c818ea433cac
  Image:         registry.build05.ci.openshift.org/ci-ln-y1ypd82/stable@sha256:3110151b89e767644c01c8ce2cf3fec4f26f6d6e011262d0988c1d915d63355f
  Image ID:      registry.build05.ci.openshift.org/ci-ln-y1ypd82/stable@sha256:3110151b89e767644c01c8ce2cf3fec4f26f6d6e011262d0988c1d915d63355f
  Port:          29105/TCP
  Host Port:     29105/TCP
  Command:
    /bin/bash
    -c
    set -xe
    . /ovnkube-lib/ovnkube-lib.sh || exit 1
    start-ovnkube-node ${OVN_KUBE_LOG_LEVEL} 29103 29105
  State:          Waiting
    Reason:       CrashLoopBackOff
  Last State:     Terminated
    Reason:       Error
    Message:      :205] Sending *v1.Node event handler 7 for removal
      I0909 08:45:58.537155  170668 factory.go:542] Stopping watch factory
      I0909 08:45:58.537167  170668 handler.go:219] Removed *v1.Node event handler 7
      I0909 08:45:58.537185  170668 handler.go:219] Removed *v1.Namespace event handler 1
      I0909 08:45:58.537198  170668 handler.go:219] Removed *v1.Namespace event handler 5
      I0909 08:45:58.537206  170668 handler.go:219] Removed *v1.EgressIP event handler 8
      I0909 08:45:58.537207  170668 handler.go:219] Removed *v1.EgressFirewall event handler 9
      I0909 08:45:58.537187  170668 handler.go:219] Removed *v1.Node event handler 10
      I0909 08:45:58.537219  170668 handler.go:219] Removed *v1.Node event handler 2
      I0909 08:45:58.538642  170668 network_attach_def_controller.go:126] [network-controller-manager NAD controller]: shutting down
      I0909 08:45:58.538703  170668 secondary_layer3_network_controller.go:433] Stop secondary layer3 network controller of network ns1.udn-network
      I0909 08:45:58.538742  170668 services_controller.go:243] Shutting down controller ovn-lb-controller for network=ns1.udn-network
      I0909 08:45:58.538767  170668 obj_retry.go:432] Stop channel got triggered: will stop retrying failed objects of type *v1.Node
      I0909 08:45:58.538754  170668 obj_retry.go:432] Stop channel got triggered: will stop retrying failed objects of type *v1.Pod
      E0909 08:45:58.5
    Exit Code:    1
    Started:      Mon, 09 Sep 2024 16:44:57 +0800
    Finished:     Mon, 09 Sep 2024 16:45:58 +0800
  Ready:          False
  Restart Count:  5
  Requests:
    cpu:     10m
    memory:  600Mi
{code}
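The Last State message above is truncated. The full crash log of the previous ovnkube-controller container can be pulled with something like the following (the pod name is taken from the listing above and will differ per run):
{code:java}
% oc logs -n openshift-ovn-kubernetes ovnkube-node-bztn5 -c ovnkube-controller --previous
{code}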
Expected results:
The ovn-kubernetes pods should restart cleanly and not crash.
Additional info:
Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.
Affected Platforms:
Is it an:
- internal CI failure
- customer issue / SD
- internal Red Hat testing failure
If it is an internal Red Hat testing failure:
- Please share a kubeconfig or credentials to a live cluster for the assignee to debug/troubleshoot, along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges, or BM+kubevirt).
If it is a CI failure:
- Did it happen in different CI lanes? If so, please provide links to multiple failures with the same error instance.
- Did it happen in both sdn and ovn jobs? If so, please provide links to multiple failures with the same error instance.
- Did it happen on other platforms (e.g. AWS, Azure, GCP, bare metal)? If so, please provide links to multiple failures with the same error instance.
- When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run.
- If it's a connectivity issue:
- What is the srcNode, srcIP, srcNamespace and srcPodName?
- What is the dstNode, dstIP, dstNamespace and dstPodName?
- What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2Node, etc.)
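One way to collect the src/dst details asked for above, sketched with placeholder names (`-o wide` prints the pod IP and the node it runs on):
{code:java}
% oc get pod <srcPodName> -n <srcNamespace> -o wide
% oc get pod <dstPodName> -n <dstNamespace> -o wide
% oc get svc -n <dstNamespace>    # for pod2svc paths, to map the ClusterIP
{code}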
If it is a customer / SD issue:
- Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
- Don’t presume that Engineering has access to Salesforce.
- Do presume that Engineering will access attachments through supportshell.
- Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
- Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
- If the issue is in a customer namespace then provide a namespace inspect.
- If it is a connectivity issue:
- What is the srcNode, srcNamespace, srcPodName and srcPodIP?
- What is the dstNode, dstNamespace, dstPodName and dstPodIP?
- What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2Node, etc.)
- Please provide the UTC timestamp of the networking outage window from the must-gather.
- Please provide tcpdump pcaps taken during the outage, filtered on the src/dst IPs provided above (see the capture sketch after this list).
- If it is not a connectivity issue:
- Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure, etc.) and the actual component where the issue was seen, based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
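One possible way to take the requested captures, assuming tcpdump is available on the node through `chroot /host`; the node name, IPs, and output path are placeholders:
{code:java}
% oc debug node/<srcNode> -- chroot /host \
    tcpdump -nn -i any host <srcIP> and host <dstIP> -w /var/tmp/outage.pcap
{code}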
- When showing the results from commands, include the entire command in the output.
- For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
- For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
- Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
- Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
- For guidance on using this template, please see OCPBUGS Template Training for Networking components.