OCPBUGS-51322: [OVN] SNAT issues with AdminPolicyBasedExternalRoute

      Description of problem:

      In our enhancement docs we make several references to configuring skipHostSNAT; however, it does not look like it is actually implemented: in the code the field is commented out, and the CRD does not mention it.

      https://github.com/openshift/ovn-kubernetes/blob/87e64b772528470577795c4f82a09e5cb253491a/go-controller/pkg/crd/adminpolicybasedroute/v1/types.go#L79
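
      A quick way to confirm the field is not part of the served CRD (commands assumed; neither should show skipHostSNAT if the field really is not implemented):

      oc explain adminpolicybasedexternalroute.spec
      oc get crd adminpolicybasedexternalroutes.k8s.ovn.org -o yaml | grep -i skiphostsnat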

      So far okay, but the same documentation says the default is false, so I would assume SNATs keep being configured as usual whether or not an AdminPolicyBasedExternalRoute is used. That does not seem to be the case: once the AdminPolicyBasedExternalRoute is created, the SNATs for the affected pods are missing, and even after deleting the CR they are never restored. This breaks the normal routing the pods expect, and all egress connections hang until they time out. The only workaround is to delete the pod and force it to be recreated.

      Configuring APBER:
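
      The listing below was gathered with something like the following (exact command assumed):

      oc get adminpolicybasedexternalroutes.k8s.ovn.org -o yaml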

      apiVersion: v1
      items:
      - apiVersion: k8s.ovn.org/v1
        kind: AdminPolicyBasedExternalRoute
        metadata:
          creationTimestamp: "2025-02-26T14:09:35Z"
          generation: 1
          name: additional-network-hop-policy
          resourceVersion: "444741"
          uid: 7abbf06d-c6be-42eb-a135-2f33493da500
        spec:
          from:
            namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: cluster-network-tests
          nextHops:
            static:
            - bfdEnabled: false
              ip: 172.23.184.1
        status:
          lastTransitionTime: "2025-02-26T14:09:35Z"
          messages:
          - 'ocp415-vmw-vcstf-worker-0-n8f4r: configured external gateway IPs: 172.23.184.1'
          - 'ocp415-vmw-vcstf-master-0: configured external gateway IPs: 172.23.184.1'
          - 'ocp415-vmw-vcstf-worker-0-dmq4m: configured external gateway IPs: 172.23.184.1'
          - 'ocp415-vmw-vcstf-master-1: configured external gateway IPs: 172.23.184.1'
          - 'ocp415-vmw-vcstf-worker-0-6f7dd: configured external gateway IPs: 172.23.184.1'
          - 'ocp415-vmw-vcstf-master-2: configured external gateway IPs: 172.23.184.1'
          status: Success
      kind: List
      metadata:
        resourceVersion: ""

       

      Pod running:

      NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE                              NOMINATED NODE   READINESS GATES
      net-tools-simplehttp-85d8444d56-r2nff   1/1     Running   0          52m   100.94.10.4   ocp415-vmw-vcstf-worker-0-6f7dd   <none>           <none>

       

      The OVN DBs:

      [root@ocp415-vmw-vcstf-worker-0-6f7dd ~]# ovn-sbctl lflow-list GR_ocp415-vmw-vcstf-worker-0-6f7dd | grep 100.94.10.4
        table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && !ct.rel && ip4 && ip4.dst == 172.25.74.139 && tcp && tcp.dst == 9180), action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=100.94.10.4:9180; force_snat)
        table=13(lr_in_ip_routing   ), priority=96   , match=(ip4.src == 100.94.10.4/32), action=(ip.ttl--; reg8[0..15] = 0; reg0 = 172.23.184.1; reg1 = 172.23.181.82; eth.src = 00:50:56:88:7a:9b; outport = "rtoe-GR_ocp415-vmw-vcstf-worker-0-6f7dd"; flags.loopback = 1; next

      [root@ocp415-vmw-vcstf-worker-0-6f7dd ~]# ovn-nbctl lr-route-list GR_ocp415-vmw-vcstf-worker-0-6f7dd
      IPv4 Routes
      Route Table <main>:
                    100.94.10.4              172.23.184.1 src-ip rtoe-GR_ocp415-vmw-vcstf-worker-0-6f7dd ecmp-symmetric-reply
               169.254.169.0/29             169.254.169.4 dst-ip rtoe-GR_ocp415-vmw-vcstf-worker-0-6f7dd
                  100.94.0.0/16                100.64.0.1 dst-ip
                      0.0.0.0/0              172.23.180.1 dst-ip rtoe-GR_ocp415-vmw-vcstf-worker-0-6f7dd

      IPv6 Routes
      Route Table <main>:
                     fd69::/125                   fd69::4 dst-ip rtoe-GR_ocp415-vmw-vcstf-worker-0-6f7dd
                      fd02::/48                   fd98::1 dst-ip
                           ::/0 fe80::1afd:74ff:fe71:1fdb dst-ip rtoe-GR_ocp415-vmw-vcstf-worker-0-6f7dd

      [root@ocp415-vmw-vcstf-worker-0-6f7dd ~]# ovn-nbctl lr-nat-list GR_ocp415-vmw-vcstf-worker-0-6f7dd
      TYPE             GATEWAY_PORT          EXTERNAL_IP        EXTERNAL_PORT    LOGICAL_IP          EXTERNAL_MAC         LOGICAL_PORT
      snat                                   172.23.181.82                       100.64.0.7
      snat                                   172.23.181.82                       100.94.10.35
      snat                                   172.23.181.82                       100.94.10.3
      snat                                   172.23.181.82                       100.94.10.38
      snat                                   172.23.181.82                       100.94.10.22
      snat                                   172.23.181.82                       100.94.10.39
      snat                                   fdca:5d7b:fdda:                     fd98::7
      snat                                   fdca:5d7b:fdda:                     fd02:0:0:6::26
      snat                                   fdca:5d7b:fdda:                     fd02:0:0:6::16
      snat                                   fdca:5d7b:fdda:                     fd02:0:0:6::27
      snat                                   fdca:5d7b:fdda:                     fd02:0:0:6::3
      snat                                   fdca:5d7b:fdda:                     fd02:0:0:6::23

       

      After deleting the APBER:

      $ oc delete adminpolicybasedexternalroutes.k8s.ovn.org  additional-network-hop-policy
      adminpolicybasedexternalroute.k8s.ovn.org "additional-network-hop-policy" deleted

      The SNAT entries do not come back; the only change is that the per-pod src-ip route is removed:

      [root@ocp415-vmw-vcstf-worker-0-6f7dd ~]# ovn-nbctl lr-nat-list GR_ocp415-vmw-vcstf-worker-0-6f7dd
      TYPE             GATEWAY_PORT          EXTERNAL_IP        EXTERNAL_PORT    LOGICAL_IP          EXTERNAL_MAC         LOGICAL_PORT
      snat                                   172.23.181.82                       100.64.0.7
      snat                                   172.23.181.82                       100.94.10.35
      snat                                   172.23.181.82                       100.94.10.3
      snat                                   172.23.181.82                       100.94.10.38
      snat                                   172.23.181.82                       100.94.10.22
      snat                                   172.23.181.82                       100.94.10.39
      snat                                   fdca:5d7b:fdda:                     fd98::7
      snat                                   fdca:5d7b:fdda:                     fd02:0:0:6::26
      snat                                   fdca:5d7b:fdda:                     fd02:0:0:6::16
      snat                                   fdca:5d7b:fdda:                     fd02:0:0:6::27
      snat                                   fdca:5d7b:fdda:                     fd02:0:0:6::3
      snat                                   fdca:5d7b:fdda:                     fd02:0:0:6::23
      [root@ocp415-vmw-vcstf-worker-0-6f7dd ~]# ovn-nbctl lr-route-list GR_ocp415-vmw-vcstf-worker-0-6f7dd
      IPv4 Routes
      Route Table <main>:
               169.254.169.0/29             169.254.169.4 dst-ip rtoe-GR_ocp415-vmw-vcstf-worker-0-6f7dd
                  100.94.0.0/16                100.64.0.1 dst-ip
                      0.0.0.0/0              172.23.180.1 dst-ip rtoe-GR_ocp415-vmw-vcstf-worker-0-6f7dd

      IPv6 Routes
      Route Table <main>:
                     fd69::/125                   fd69::4 dst-ip rtoe-GR_ocp415-vmw-vcstf-worker-0-6f7dd
                      fd02::/48                   fd98::1 dst-ip
                           ::/0 fe80::1afd:74ff:fe71:1fdb dst-ip rtoe-GR_ocp415-vmw-vcstf-worker-0-6f7dd

       

      [root@ocp415-vmw-vcstf-worker-0-6f7dd ~]# ovn-sbctl lflow-list GR_ocp415-vmw-vcstf-worker-0-6f7dd | grep 100.94.10.4
        table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && !ct.rel && ip4 && ip4.dst == 172.25.74.139 && tcp && tcp.dst == 9180), action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=100.94.10.4:9180; force_snat)

      This causes egress connections from the pod to fail until it is restarted.
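
      A minimal way to confirm the failure and recover, using the deployment from above (namespace assumed from the APBER selector; the curl target is just an example external host):

      oc -n cluster-network-tests exec net-tools-simplehttp-85d8444d56-r2nff -- curl -m 10 -sI http://example.com   # hangs until the timeout
      oc -n cluster-network-tests delete pod net-tools-simplehttp-85d8444d56-r2nff                                  # workaround: the recreated pod gets its SNAT back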

      Version-Release number of selected component (if applicable):

      Tested on OCP 4.16, but this presumably happens all the way back to 4.14.

      How reproducible:

      Always

      Steps to Reproduce:

      1. Create a project and a simple deployment

      2. Configure a simple AdminPolicyBasedExternalRoute targeting that project (see the sketch after this list)

      3. Check the OVN DBs

      4. Delete the AdminPolicyBasedExternalRoute, test a few egress connections from the pod, and check the OVN DBs

      5. Delete the pod, re-test the connections, and check the OVN DBs.
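
      A minimal reproducer sketch for the steps above (the project name, image and file name are placeholders; the next-hop IP must be a gateway reachable from your nodes, and the ovn-nbctl commands have to be run on the node hosting the pod or inside its ovnkube-node pod):

      # apber.yaml
      apiVersion: k8s.ovn.org/v1
      kind: AdminPolicyBasedExternalRoute
      metadata:
        name: additional-network-hop-policy
      spec:
        from:
          namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: apber-test
        nextHops:
          static:
          - bfdEnabled: false
            ip: 172.23.184.1

      oc new-project apber-test
      oc create deployment net-tools --image=registry.access.redhat.com/ubi9/ubi -- sleep infinity
      oc apply -f apber.yaml

      # check the OVN DBs on the node hosting the pod
      ovn-nbctl lr-nat-list GR_<node-name>
      ovn-nbctl lr-route-list GR_<node-name>

      # delete the CR, test egress from the pod and check the DBs again
      oc delete adminpolicybasedexternalroutes.k8s.ovn.org additional-network-hop-policy
      oc -n apber-test exec deploy/net-tools -- curl -m 10 -sI http://example.com
      ovn-nbctl lr-nat-list GR_<node-name>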

      Actual results:

      SNAT entries for the pods selected by the AdminPolicyBasedExternalRoute are missing from the node's gateway router and are not restored when the CR is deleted. Egress connections from those pods hang until they time out; only restarting the pods recovers them.

      Expected results:

      Deleting the AdminPolicyBasedExternalRoute restores the per-pod SNATs (and, per the enhancement docs, SNAT behaviour is governed by skipHostSNAT, defaulting to false), so pod egress keeps working without a pod restart.

      Additional info:

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or credentials to a live cluster for the assignee to debug/troubleshoot, along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp networking outage window from must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when problem has happened if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template please see
        OCPBUGS Template Training for Networking  components
