Fast Datapath Product / FDP-209

Broken support for DVR + VLAN network combination


      Given a network setup with two logical switches, LS1 (configured as a provider flat network) and LS2 (configured as a provider VLAN network), connected to a logical router (LR). The logical router port connecting LS2 to the LR has the `reside-on-redirect-chassis` option set to true. A VM on LS2 is configured with a NAT that maps its internal network IP (e.g. 192.168.2.236) to an external network IP (e.g. 10.0.0.249).

      When ICMP traffic is initiated from an external source IP (e.g. 10.0.0.52) targeting the VM's external IP, 10.0.0.249,

      Then the traffic reaching the VM should show the external IP, 10.0.0.249, in the ICMP packets captured on the VM's network interface, confirming that SNAT was correctly applied.

    • FDP 24.E, FDP 24.F

      When reside-on-redirect-chassis is set to true on the logical router port connecting a VLAN LS to an LR, NATed traffic does not work properly: the reply leaves without SNAT applied.

      The topology to reproduce it is the following:

      LS1 (provider flat network) <–> LR <–> LS2 (provider vlan network)
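This topology could be stood up by hand with ovn-nbctl; the following is a minimal sketch with illustrative names (LS1, LS2, LR, ln-ls1, ln-ls2 are placeholders — the actual deployment uses the Neutron-generated names shown in the dumps below):

```shell
# Sketch only: illustrative names, not the Neutron-generated ones below
ovn-nbctl ls-add LS1    # provider flat network
ovn-nbctl ls-add LS2    # provider vlan network
ovn-nbctl lr-add LR

# localnet ports map each LS onto the provider network; LS2's carries
# the VLAN tag (1911 in the dumps below) while LS1's is untagged (flat)
ovn-nbctl lsp-add LS1 ln-ls1 -- lsp-set-type ln-ls1 localnet \
    -- lsp-set-addresses ln-ls1 unknown \
    -- lsp-set-options ln-ls1 network_name=datacentre
ovn-nbctl lsp-add LS2 ln-ls2 -- lsp-set-type ln-ls2 localnet \
    -- lsp-set-addresses ln-ls2 unknown \
    -- lsp-set-options ln-ls2 network_name=datacentre
ovn-nbctl set logical_switch_port ln-ls2 tag_request=1911
```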

       

      Where LS1 is:

      _uuid               : a788e636-0125-4137-a4a5-94c87be13aac
      acls                : []
      copp                : []
      dns_records         : [3d774817-c13c-46dc-9907-319e45c7f292]
      external_ids        : {"neutron:mtu"="1500", "neutron:network_name"=nova, "neutron:revision_number"="6"}                                            
      forwarding_groups   : []
      load_balancer       : []
      load_balancer_group : []
      name                : neutron-3cc4fa35-f05a-452d-a281-c3560a9aed22
      other_config        : {mcast_flood_unregistered="false", mcast_snoop="true", vlan-passthru="false"}                                                 
      ports               : [0cd69d3f-290b-4c1e-84a6-cc850f1a4ab7, 35861567-7cf6-4f8c-86de-33b13cbbaa76, 4b3d6b3d-ccf2-4d05-aa41-c90152e86977, 6a50ea44-f55e-45c1-b323-0da99be507a9, 7e947787-1dc0-427d-82b2-a9eebf0563a7, 80cdc012-deab-4604-9f2d-0bf6012de8ce, c0e31473-b09d-4f92-a4be-d287f34d8288]    

      LR is:

      _uuid               : 256f5ac8-7249-4f07-9584-a4caca8bdf2e
      copp                : []
      enabled             : true
      external_ids        : {"neutron:availability_zone_hints"="", "neutron:gw_port_id"="701872a3-d04f-46ca-bcee-9585fcdd3528", "neutron:revision_number"="11", "neutron:router_name"=geneve-router}
      load_balancer       : []
      load_balancer_group : []
      name                : neutron-a073cae8-664a-448a-8f55-38f4d2e1948d
      nat                 : [0d0c054e-8be0-4771-b214-5524bdeaf277, 40e51b68-5c3b-4875-bf25-81d96166c622, 99526592-eeed-4144-adbe-cb2f641470d3]
      options             : {always_learn_from_arp_request="false", dynamic_neigh_routers="true"}
      policies            : []
      ports               : [27a962ff-a60c-4d75-b6bb-6f6bab2a8258, 5c0725d4-e71e-49f1-9e5c-e779bd96ffac, f36904dc-28e2-4d9d-8f4f-e8751970af5b]
      static_routes       : [6e6a55b8-d145-4f32-a906-6c8febb92563, d2a1e654-6954-45ad-b203-2c8196099af6]

      And LS2 is:

      _uuid               : 6e12b2b3-e50a-464d-898f-fe05b250fff5
      acls                : []
      copp                : []
      dns_records         : [ff9597a2-7a0e-41e5-af0a-ceb2d6845f51]
      external_ids        : {"neutron:mtu"="1500", "neutron:network_name"=vlan-network-1, "neutron:revision_number"="2"}                                  
      forwarding_groups   : []
      load_balancer       : []
      load_balancer_group : []
      name                : neutron-4d5438e6-9cee-4725-9d2f-27e517b058c4
      other_config        : {mcast_flood_unregistered="false", mcast_snoop="true", vlan-passthru="false"}                                                 
      ports               : [4851108e-456b-458d-8d20-92d2c748ab14, 6f393f72-25d0-4b41-ae93-c8a504d85d53, 7143da09-d561-4894-8ffa-3c9a80245f77, ce790ef1-e4de-427b-9398-12a87e7bc03a]

       

      In this sample, both LSs have localnet ports:

       

      _uuid               : 7143da09-d561-4894-8ffa-3c9a80245f77
      addresses           : [unknown]
      dhcpv4_options      : []
      dhcpv6_options      : []
      dynamic_addresses   : []
      enabled             : []
      external_ids        : {}
      ha_chassis_group    : []
      name                : provnet-0aa0dd62-cee7-4aca-9faa-f4a33f348efc
      options             : {mcast_flood="false", mcast_flood_reports="true", network_name=datacentre}                                                    
      parent_name         : []
      port_security       : []
      tag                 : 1911
      tag_request         : []
      type                : localnet
      up                  : false

      _uuid               : c0e31473-b09d-4f92-a4be-d287f34d8288
      addresses           : [unknown]
      dhcpv4_options      : []
      dhcpv6_options      : []
      dynamic_addresses   : []
      enabled             : []
      external_ids        : {}
      ha_chassis_group    : []
      name                : provnet-c6c7798c-4243-4e46-b970-572b5094579c
      options             : {mcast_flood="false", mcast_flood_reports="true", network_name=datacentre}                                                    
      parent_name         : []
      port_security       : []
      tag                 : []
      tag_request         : []
      type                : localnet
      up                  : false
      

       

      And the logical router port connecting LS2 to the LR has reside-on-redirect-chassis=true:

      _uuid               : f36904dc-28e2-4d9d-8f4f-e8751970af5b
      enabled             : []
      external_ids        : {"neutron:network_name"=neutron-4d5438e6-9cee-4725-9d2f-27e517b058c4, "neutron:revision_number"="3", "neutron:router_name"="a073cae8-664a-448a-8f55-38f4d2e1948d", "neutron:subnet_ids"="40812d41-12d0-4258-9446-fd0d91c09647"}
      gateway_chassis     : []
      ha_chassis_group    : []
      ipv6_prefix         : []
      ipv6_ra_configs     : {}
      mac                 : "fa:16:3e:25:04:31"
      name                : lrp-4db4cf4a-5709-4d89-8f16-40a45445436f
      networks            : ["192.168.2.1/24"]
      options             : {reside-on-redirect-chassis="true"}
      peer                : []
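Setting this by hand amounts to one ovn-nbctl database command (using the real LRP name from the dump above):

```shell
# Pin the LS2-facing LRP to the redirect (gateway) chassis; this is the
# option that breaks SNAT on the reply path in this report
ovn-nbctl set logical_router_port lrp-4db4cf4a-5709-4d89-8f16-40a45445436f \
    options:reside-on-redirect-chassis=true
```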

      Finally, this is the VM with its associated NAT entry:

       

      (overcloud) [stack@undercloud-0 ~]$ openstack server list                                                                                           
      +--------------------------------------+-------------+--------+------------------------------------------------+--------------+----------+          
      | ID                                   | Name        | Status | Networks                                       | Image        | Flavor   |          
      +--------------------------------------+-------------+--------+------------------------------------------------+--------------+----------+          
      | c245ecc4-efd4-4c8b-8a2e-1ea868d53c1e | test-vlan-1 | ACTIVE | vlan-network-1=192.168.2.236, 10.0.0.249       | cirros       | m1.micro |
      +--------------------------------------+-------------+--------+------------------------------------------------+--------------+----------+
      
      _uuid               : 6f393f72-25d0-4b41-ae93-c8a504d85d53
      addresses           : ["fa:16:3e:95:ee:66 192.168.2.236"]
      dhcpv4_options      : d1f3fcc7-4388-4c80-8745-7e575b427272
      dhcpv6_options      : []
      dynamic_addresses   : []
      enabled             : true
      external_ids        : {"neutron:cidrs"="192.168.2.236/24", "neutron:device_id"="c245ecc4-efd4-4c8b-8a2e-1ea868d53c1e", "neutron:device_owner"="compute:nova", "neutron:network_name"=neutron-4d5438e6-9cee-4725-9d2f-27e517b058c4, "neutron:port_fip"="10.0.0.249", "neutron:port_name"="", "neutron:project_id"=f7e961614dff4bc18bf68eb23b382ebd, "neutron:revision_number"="4", "neutron:security_group_ids"="aafffa92-5141-4373-83f4-377f8fdde97c"}        
      ha_chassis_group    : []
      name                : "dd6230ca-79f1-4307-b88e-fb185fc4e4ec"
      options             : {mcast_flood_reports="true", requested-chassis=compute-0.redhat.local}                                                        
      parent_name         : []
      port_security       : ["fa:16:3e:95:ee:66 192.168.2.236"]
      tag                 : []
      tag_request         : []
      type                : ""
      up                  : true
      

       

      And the associated NAT entry:

      _uuid               : 99526592-eeed-4144-adbe-cb2f641470d3
      allowed_ext_ips     : []
      exempted_ext_ips    : []  
      external_ids        : {"neutron:fip_external_mac"="fa:16:3e:75:3f:bb", "neutron:fip_id"="8e5da506-9cb7-4a06-934b-91d4b9d90f0c", "neutron:fip_network_id"="3cc4fa35-f05a-452d-a281-c3560a9aed22", "neutron:fip_port_id"="dd6230ca-79f1-4307-b88e-fb185fc4e4ec", "neutron:revision_number"="26", "neutron:router_name"=neutron-a073cae8-664a-448a-8f55-38f4d2e1948d}
      external_ip         : "10.0.0.249"
      external_mac        : "fa:16:3e:75:3f:bb"
      external_port_range : ""
      logical_ip          : "192.168.2.236"
      logical_port        : "dd6230ca-79f1-4307-b88e-fb185fc4e4ec"
      options             : {}  
      type                : dnat_and_snat
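For reference, an equivalent entry could be created by hand with lr-nat-add, whose positional arguments map onto the columns above (values taken from the dump; in the deployment this entry was created by Neutron):

```shell
# lr-nat-add ROUTER TYPE EXTERNAL_IP LOGICAL_IP [LOGICAL_PORT EXTERNAL_MAC]
ovn-nbctl lr-nat-add neutron-a073cae8-664a-448a-8f55-38f4d2e1948d \
    dnat_and_snat 10.0.0.249 192.168.2.236 \
    dd6230ca-79f1-4307-b88e-fb185fc4e4ec fa:16:3e:75:3f:bb
```

Supplying logical_port and external_mac makes this a distributed dnat_and_snat entry, i.e. NAT is meant to be handled on the chassis hosting the VM.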

      With that, tcpdumping on the node hosting the VM, we get:

      (ens5/br-ex)    09:21:19.309363 IP 10.0.0.52 > 10.0.0.249: ICMP echo request, id 22946, seq 1, length 64
      (vm tap device) 09:21:19.310123 IP 10.0.0.52 > 192.168.2.236: ICMP echo request, id 22946, seq 1, length 64
      (vm tap device) 09:21:19.310662 IP 192.168.2.236 > 10.0.0.52: ICMP echo reply, id 22946, seq 1, length 64
      (br-ex device)  09:21:19.311033 ethertype IPv4, IP 192.168.2.236 > 10.0.0.52: ICMP echo reply, id 22946, seq 1, length 64

      As can be seen, the reply leaves with the internal VM IP instead of having SNAT applied, i.e. 192.168.2.236 instead of 10.0.0.249.

      If we disable reside-on-redirect-chassis, SNAT is applied properly (but then non-NAT traffic will be broken, as it will be tunneled (geneve) to the gateway node):

      [root@compute-0 heat-admin]# tcpdump -ni ens5 icmp
      dropped privs to tcpdump
      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on ens5, link-type EN10MB (Ethernet), capture size 262144 bytes
      09:28:03.652485 IP 10.0.0.52 > 10.0.0.249: ICMP echo request, id 36165, seq 1, length 64
      09:28:03.653071 IP 10.0.0.249 > 10.0.0.52: ICMP echo reply, id 36165, seq 1, length 64
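Disabling the option here means clearing the key from the LRP's options map, e.g.:

```shell
# Remove the key from the options map; replies then get SNAT applied,
# but non-NAT VLAN traffic is tunneled (geneve) to the gateway node
ovn-nbctl remove logical_router_port \
    lrp-4db4cf4a-5709-4d89-8f16-40a45445436f options reside-on-redirect-chassis
```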

       

        1. ovnnb_db.db
          117 kB
          Luis Tomas Bolivar
        2. ovnsb_db.db
          997 kB
          Luis Tomas Bolivar

              nusiddiq@redhat.com Siddique Numan
              ltomasbo@redhat.com Luis Tomas Bolivar
              Jianlin Shi Jianlin Shi