OpenShift Bugs / OCPBUGS-60934

iptables incorrectly recognizes marks as part of CIDRs for rules


    • Quality / Stability / Reliability
    • True
    • https://issues.redhat.com/browse/RHEL-112096
    • Critical
    • x86_64
    • UAT
    • CORENET Sprint 276
    • 1
    • Customer Escalated, Customer Facing

      Description of problem:

      1. My customer encountered a DB connectivity issue while upgrading their cluster from 4.14.44 to 4.15.52.

      2. It was then confirmed that EgressIP had stopped working for some of the application pods.

      3. I tried to reproduce this in my lab environment and found that one of the expected SNAT rules was missing (two were expected, since there are two clusterNetworks).

      4. After enabling verbose logging (level 5) on SDN, the following relevant iptables commands were observed:

      I0827 06:33:55.250627    2329 iptables.go:467] "Running" command="iptables" arguments=["-w","5","-W","100000","-C","OPENSHIFT-MASQUERADE","-t","nat","-s","172.16.0.0/20","-m","mark","--mark","0x01d86218","-j","SNAT","--to-source","192.168.26.240"]
      I0827 06:33:55.253851    2329 iptables.go:467] "Running" command="iptables" arguments=["-w","5","-W","100000","-I","OPENSHIFT-MASQUERADE","-t","nat","-s","172.16.0.0/20","-m","mark","--mark","0x01d86218","-j","SNAT","--to-source","192.168.26.240"]
      I0827 06:33:55.261752    2329 iptables.go:467] "Running" command="iptables" arguments=["-w","5","-W","100000","-C","OPENSHIFT-MASQUERADE","-t","nat","-s","172.16.24.0/21","-m","mark","--mark","0x01d86218","-j","SNAT","--to-source","192.168.26.240"] 

      5. Since there was no insert (-I) after the last check (-C), I manually ran that check command and found that iptables returns 0 even though no such rule exists, according to the list (-L) output shown below (a quick way to confirm the exit code is sketched after the listing):

      [lab-user@bastion-w5wtb ~]$ oc debug node/w5wtb-nv8rl-worker-0-kn2zw -- chroot /host iptables -t nat -v -L OPENSHIFT-MASQUERADE 2>/dev/null
      Chain OPENSHIFT-MASQUERADE (1 references)
       pkts bytes target     prot opt in     out     source               destination
          0     0 MASQUERADE  all  --  any    any     172.16.24.0/21       24.98.216.1            mark match 0x1d86218
          0     0 MASQUERADE  all  --  any    any     172.16.0.0/20        24.98.216.1            mark match 0x1d86218
          0     0 SNAT       all  --  any    any     24.98.216.1            anywhere             mark match 0x1d86218 to:192.168.26.240
          0     0 RETURN     all  --  any    any     anywhere             anywhere             mark match 0x1/0x1
       2911  189K OPENSHIFT-MASQUERADE-2  all  --  any    any     172.16.0.0/20        anywhere             /* masquerade pod-to-external traffic */
          4   240 OPENSHIFT-MASQUERADE-2  all  --  any    any     172.16.24.0/21       anywhere             /* masquerade pod-to-external traffic */ 
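
      A minimal way to confirm the false positive, re-using the exact arguments logged in step 4 (the echo runs inside the node's shell so the exit code is iptables' own; it came back 0 even though no SNAT rule for 172.16.24.0/21 appears in the listing above):

      oc debug node/w5wtb-nv8rl-worker-0-kn2zw -- chroot /host sh -c 'iptables -w 5 -W 100000 -C OPENSHIFT-MASQUERADE -t nat -s 172.16.24.0/21 -m mark --mark 0x01d86218 -j SNAT --to-source 192.168.26.240; echo "iptables -C exit code: $?"'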

      6. Furthermore, the listing shows a strange IP, 24.98.216.1, instead of either of the two configured clusterNetwork CIDRs.
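
      Worth noting: 24.98.216.1 is the mark value 0x01d86218 with its bytes reversed (0x18.0x62.0xd8.0x01), which suggests the mark is being decoded where an address field is expected. Quick check:

      printf '%d.%d.%d.%d\n' 0x18 0x62 0xd8 0x01   # prints 24.98.216.1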

      7. I then turned to the underlying nftables ruleset for evidence and found that only the iptables-side view is wrong; the data is not corrupted on the nftables side, according to nft list ruleset (a side-by-side dump of both views is sketched after the excerpt):

      // ...
        chain OPENSHIFT-MASQUERADE {
          ip saddr 172.16.24.0/21 ip daddr 172.16.24.0/21 meta mark 0x1d86218 counter packets 0 bytes 0 masquerade
          ip saddr 172.16.0.0/20 ip daddr 172.16.0.0/20 meta mark 0x1d86218 counter packets 0 bytes 0 masquerade
          ip saddr 172.16.0.0/20 meta mark 0x1d86218 counter packets 2 bytes 120 snat to 192.168.26.240 // <-- see here
           meta mark & 0x00000001 == 0x00000001 counter packets 0 bytes 0 return
          ip saddr 172.16.0.0/20  counter packets 1251 bytes 81328 jump OPENSHIFT-MASQUERADE-2
          ip saddr 172.16.24.0/21  counter packets 4 bytes 240 jump OPENSHIFT-MASQUERADE-2
        }
      //... 
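
      For comparison, both views of the same chain can be dumped side by side (a sketch; the "ip nat" table name is an assumption based on the standard iptables-nft layout):

      # nftables' own view of the chain (stores the rules correctly)
      oc debug node/w5wtb-nv8rl-worker-0-kn2zw -- chroot /host nft list chain ip nat OPENSHIFT-MASQUERADE
      # iptables' rendering of the same rules, for comparison with the -L output above
      oc debug node/w5wtb-nv8rl-worker-0-kn2zw -- chroot /host sh -c 'iptables-save -t nat | grep OPENSHIFT-MASQUERADE'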

      8. With further testing, I found that iptables seems to interpret the mark value as part of the rule CIDRs (a standalone reproducer sketch follows the listing):

      [lab-user@bastion-w5wtb ~]$ oc debug node/w5wtb-nv8rl-worker-0-kn2zw -- chroot /host iptables -t nat -n -v -L OPENSHIFT-MASQUERADE 2>/dev/null
      Chain OPENSHIFT-MASQUERADE (1 references)
       pkts bytes target     prot opt in     out     source               destination
          0     0 MASQUERADE  all  --  *      *       172.16.24.0/21       0.0.0.255            mark match 0xff000000
          0     0 MASQUERADE  all  --  *      *       172.16.0.0/20        0.0.0.255            mark match 0xff000000
          4   240 SNAT       all  --  *      *       0.0.0.255            0.0.0.0/0            mark match 0xff000000 to:192.168.26.240
          2   120 SNAT       all  --  *      *       0.0.0.255            0.0.0.0/0            mark match 0xff000000 to:192.168.26.240
          0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            mark match 0x1/0x1
       4052  263K OPENSHIFT-MASQUERADE-2  all  --  *      *       172.16.0.0/20        0.0.0.0/0            /* masquerade pod-to-external traffic */
          6   360 OPENSHIFT-MASQUERADE-2  all  --  *      *       172.16.24.0/21       0.0.0.0/0            /* masquerade pod-to-external traffic */
          0     0 SNAT       all  --  *      *       0.0.0.255/16         0.0.0.0/0            mark match 0xff000000 to:192.168.26.240
          0     0 SNAT       all  --  *      *       0.0.0.255            0.0.0.0/0            mark match 0xff000000 to:192.168.26.240
          0     0 SNAT       all  --  *      *       1.0.0.255            0.0.0.0/0            mark match 0xff000001 to:192.168.26.240
          0     0 SNAT       all  --  *      *       1.0.5.255            0.0.0.0/0            mark match 0xff050001 to:192.168.26.240
          0     0 SNAT       all  --  *      *       1.160.5.255          0.0.0.0/0            mark match 0xff05a001 to:192.168.26.240
          0     0 MASQUERADE  all  --  *      *       1.160.5.255          0.0.0.0/0            mark match 0xff05a001
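
      If useful for triage, the symptom may also be reproducible outside of the OpenShift components with a scratch chain (a sketch; the TEST-MARK-CIDR chain name is made up, and the rule simply mirrors the SNAT rule that openshift-sdn programs):

      # Add a marked SNAT rule to a throwaway chain, then compare how iptables lists it
      # with what nft actually stores ("ip nat" table name assumed, as above)
      iptables -t nat -N TEST-MARK-CIDR
      iptables -t nat -A TEST-MARK-CIDR -s 172.16.0.0/20 -m mark --mark 0x01d86218 -j SNAT --to-source 192.168.26.240
      iptables -t nat -n -v -L TEST-MARK-CIDR
      nft list chain ip nat TEST-MARK-CIDR
      # Clean up
      iptables -t nat -F TEST-MARK-CIDR
      iptables -t nat -X TEST-MARK-CIDR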

       

      Version-Release number of selected component (if applicable): 4.15.z

      How reproducible:

      Steps to Reproduce:

      1. Install a new 4.14 OCP cluster with OpenShiftSDN and two clusterNetworks, for example:

      [lab-user@bastion-w5wtb ~]$ cat $OCP_ASSET_DIR.bak/install-config.yaml
      additionalTrustBundlePolicy: Proxyonly
      apiVersion: v1
      baseDomain: <redacted>
      compute:
      - architecture: amd64
        hyperthreading: Enabled
        name: worker
        platform:
          vsphere:
            coresPerSocket: 2
            cpus: 2
        replicas: 3
      controlPlane:
        architecture: amd64
        hyperthreading: Enabled
        name: master
        platform: {}
        replicas: 3
      metadata:
        creationTimestamp: null
        name: ocp
      networking:
        clusterNetwork:
        - cidr: 172.16.0.0/20
          hostPrefix: 22
        - cidr: 172.16.24.0/21
          hostPrefix: 23
        machineNetwork:
        - cidr: 192.168.26.0/24
        networkType: OpenShiftSDN
        serviceNetwork:
        - 172.16.16.0/21
      platform:
        vsphere:
          apiVIPs:
          - 192.168.26.201
          failureDomains:
          - name: generated-failure-domain
            region: generated-region
            server: <redacted>
            topology: 
              computeCluster: <redacted>
              datacenter: <redacted>
              datastore: <redacted>
              folder: <redacted>
              networks:
              - <redacted>
              resourcePool: <redacted>
            zone: generated-zone
          ingressVIPs:
          - 192.168.26.202
          vcenters:
          - datacenters:
            - <redacted>
            password: <redacted>
            port: 443
            server: <redacted>
            user: <redacted>
      publish: External
      pullSecret: <redacted>
      sshKey: <redacted> 
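
      (The affected version is 4.15.z and the customer hit this while upgrading from 4.14.44 to 4.15.52, so presumably the cluster is then upgraded to 4.15.z before continuing; roughly:)

      # switch the channel first if needed (e.g. oc adm upgrade channel stable-4.15)
      oc adm upgrade --to=4.15.52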

      2. Set up an external server that logs the request source IP (e.g., on the bastion run sudo podman run -d --name httpbin -p 80:80 docker.io/kennethreitz/httpbin); a quick reachability check follows.
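
      For example, verify from the bastion that the server echoes the caller's IP (HOST_IP is assumed to hold the bastion/server IP, as in step 4 below):

      curl -s http://$HOST_IP/ip   # httpbin returns {"origin": "<caller IP>"}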

      3. Create a namespace running a deployment whose image contains the curl command-line tool, and scale it up so that the pods are spread evenly across all worker nodes (see the sketch below).
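
      A minimal sketch of this step, plus the EgressIP assignment implied by the expected results (the namespace/deployment names and the UBI image are placeholders; the two oc patch commands are the standard manual EgressIP assignment for OpenShiftSDN, using the egress IP and node seen in the rules above):

      oc new-project egressip-test
      oc create deployment curl-test --image=registry.access.redhat.com/ubi9/ubi -- sleep infinity
      oc scale deployment/curl-test --replicas=6   # enough replicas to land on every worker
      # assign the egress IP to the namespace and host it on one node
      oc patch netnamespace egressip-test --type=merge -p '{"egressIPs":["192.168.26.240"]}'
      oc patch hostsubnet w5wtb-nv8rl-worker-0-kn2zw --type=merge -p '{"egressIPs":["192.168.26.240"]}'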

      4. Run curl from each pod to get the source IP seen by the aforementioned server, e.g.:

       

      oc get po -o name | xargs -I {} oc exec {} -- curl http://$HOST_IP/ip -s | jq -r .origin 

       

       

      Actual results:

       

      192.168.26.120
      192.168.26.240
      192.168.26.120

       

      Expected results:

       

      192.168.26.240
      192.168.26.240
      192.168.26.240 

       

      Additional info:

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal RedHat testing failure

      If it is an internal RedHat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp networking outage window from must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when problem has happened if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template please see
        OCPBUGS Template Training for Networking  components

              rhn-support-vkochuku Vinu Kochukuttan
              rhn-support-jacng Kit Shing NG
              Huiran Wang