RHEL-26878

[IPSEC] Restarting the ipsec service/ovn-ipsec-host pod breaks IPsec traffic between a worker node and an external host

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Normal
    • libreswan
    • sst_security_crypto
    • ssg_security

      Description of problem:

       

      Version-Release number of selected component (if applicable):
      4.15.0-0.nightly-2024-01-22-160236
      % oc get csv -n openshift-nmstate
      NAME                                              DISPLAY                       VERSION               REPLACES                                          PHASE
      kubernetes-nmstate-operator.4.15.0-202401231732   Kubernetes NMState Operator   4.15.0-202401231732   kubernetes-nmstate-operator.4.15.0-202401230749   Succeeded

      With NetworkManager-libreswan-1.2.14-3.el9_2.x86_64 and nmstate-2.2.23-1.el9_2.x86_64 updated on one worker node, huirwang-0124a-lmsvv-worker-a-dddtr.

      How reproducible:
      Always
       

      Steps to Reproduce:

      1. Install the nmstate operator and create the NMState CR

      2. Create the IPsec config with YAML:
      % oc get nncp
      NAME            STATUS      REASON
      ipsec-policy1   Available   SuccessfullyConfigured

      % oc get nncp -o yaml
      apiVersion: v1
      items:
      - apiVersion: nmstate.io/v1
        kind: NodeNetworkConfigurationPolicy
        metadata:
          annotations:
            kubectl.kubernetes.io/last-applied-configuration: |
              {"apiVersion":"nmstate.io/v1","kind":"NodeNetworkConfigurationPolicy","metadata":{"annotations":{},"name":"ipsec-policy1"},"spec":{"desiredState":{"interfaces":[{"libreswan":{"ikev2":"insist","left":"10.0.128.2","leftcert":"10_0_128_2","leftid":"%fromcert","leftmodecfgclient":false,"leftrsasigkey":"%cert","right":"10.0.0.2","rightid":"%fromcert","rightrsasigkey":"%cert","rightsubnet":"10.0.0.2/32","type":"transport"},"name":"plutoVM","type":"ipsec"}]},"nodeSelector":{"kubernetes.io/hostname":"huirwang-0124a-lmsvv-worker-a-dddtr"}}}
            nmstate.io/webhook-mutating-timestamp: "1706083277739864288"
          creationTimestamp: "2024-01-24T08:01:17Z"
          generation: 1
          name: ipsec-policy1
          resourceVersion: "163193"
          uid: 49495486-5c16-49ed-ad7a-c2b02c233a92
        spec:
          desiredState:
            interfaces:
            - libreswan:
                ikev2: insist
                left: 10.0.128.2
                leftcert: "10_0_128_2"
                leftid: '%fromcert'
                leftmodecfgclient: false
                leftrsasigkey: '%cert'
                right: 10.0.0.2
                rightid: '%fromcert'
                rightrsasigkey: '%cert'
                rightsubnet: 10.0.0.2/32
                type: transport
              name: plutoVM
              type: ipsec
          nodeSelector:
            kubernetes.io/hostname: huirwang-0124a-lmsvv-worker-a-dddtr
        status:
          conditions:
          - lastHeartbeatTime: "2024-01-24T08:03:23Z"
            lastTransitionTime: "2024-01-24T08:03:23Z"
            message: 1/1 nodes successfully configured
            reason: SuccessfullyConfigured
            status: "True"
            type: Available
          - lastHeartbeatTime: "2024-01-24T08:03:23Z"
            lastTransitionTime: "2024-01-24T08:03:23Z"
            reason: SuccessfullyConfigured
            status: "False"
            type: Degraded
          - lastHeartbeatTime: "2024-01-24T08:03:23Z"
            lastTransitionTime: "2024-01-24T08:03:23Z"
            reason: ConfigurationProgressing
            status: "False"
            type: Progressing
          lastUnavailableNodeCountUpdate: "2024-01-24T08:03:23Z"
      kind: List
      metadata:
        resourceVersion: ""
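
      For reference, NetworkManager-libreswan renders a desiredState like the above into a libreswan connection roughly equivalent to the stanza below. This is a sketch reconstructed from the spec fields, not captured from the node; on the node the connection is named by its NetworkManager connection UUID, as seen in the `ipsec status` output further down.

```
conn plutoVM
    ikev2=insist
    type=transport
    left=10.0.128.2
    leftcert=10_0_128_2
    leftid=%fromcert
    leftmodecfgclient=no
    leftrsasigkey=%cert
    right=10.0.0.2
    rightid=%fromcert
    rightrsasigkey=%cert
    rightsubnet=10.0.0.2/32
```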
      
      

      3. Check that the IPsec connection was up, then restart the ipsec service:

      
      

      sh-5.1# ipsec status | grep 10.0.0.2
      000 "dd40adec-719b-474d-a399-3b6a50a85d7d": 10.0.128.2[CN=10_0_128_2]...10.0.0.2[CN=10_0_0_2,MS+S=C]; erouted; eroute owner: #19
      000 "dd40adec-719b-474d-a399-3b6a50a85d7d": our idtype: ID_DER_ASN1_DN; our id=CN=10_0_128_2; their idtype: ID_DER_ASN1_DN; their id=CN=10_0_0_2
      000 #19: "dd40adec-719b-474d-a399-3b6a50a85d7d" esp.1a8845b3@10.0.0.2 esp.941e5f50@10.0.128.2 Traffic: ESPin=64B ESPout=64B ESPmax=2^63B
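
      The Traffic counters in the status line above are one way to confirm ESP packets are actually flowing. A minimal offline sketch that extracts them from a captured line (the sample is embedded here; on the node you would pipe live `ipsec status` output instead):

```shell
# Offline sketch: pull the ESPin/ESPout byte counters out of a captured
# `ipsec status` line. Counters stuck at 0 after the restart would match
# the broken traffic reported here.
line='000 #19: "dd40adec-719b-474d-a399-3b6a50a85d7d" esp.1a8845b3@10.0.0.2 esp.941e5f50@10.0.128.2 Traffic: ESPin=64B ESPout=64B ESPmax=2^63B'
espin=$(printf '%s\n' "$line" | sed -n 's/.*ESPin=\([0-9]*\)B.*/\1/p')
espout=$(printf '%s\n' "$line" | sed -n 's/.*ESPout=\([0-9]*\)B.*/\1/p')
echo "ESPin=${espin}B ESPout=${espout}B"
```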

      sh-5.1# ping 10.0.0.2
      PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
      64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=1.98 ms

      sh-5.1# systemctl restart ipsec
      sh-5.1#
      sh-5.1# ping 10.0.0.2
      PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
      ^C
      --- 10.0.0.2 ping statistics ---
      61 packets transmitted, 0 received, 100% packet loss, time 61469ms
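
      A quick post-restart check is whether the connection to the external host is still established (erouted). A sketch of that check, run offline against the pre-restart `ipsec status` capture from this report (on the node you would run `ipsec status | grep 10.0.0.2` directly):

```shell
# Offline sketch: grep a captured `ipsec status` line for an established
# (erouted) connection to the external host. The capture here is the
# pre-restart output from this report, so the check reports "erouted".
peer="10.0.0.2"
capture='000 "dd40adec-719b-474d-a399-3b6a50a85d7d": 10.0.128.2[CN=10_0_128_2]...10.0.0.2[CN=10_0_0_2,MS+S=C]; erouted; eroute owner: #19'
if printf '%s\n' "$capture" | grep -q "${peer}.*erouted"; then
    state="erouted"
else
    state="not established"
fi
echo "connection to ${peer}: ${state}"
```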

            Assignee: Daiki Ueno (dueno@redhat.com)
            Reporter: Huiran Wang (huirwang)
            QA Contact: SSG Security QE