OCPBUGS-38769

ovnkube-controller takes 18 minutes to sync


      Description of problem:

      We observed a performance issue during network policy perf/scale testing. This is not a functional product bug but a performance problem: ovnkube-controller takes too long to sync its data. An Amadeus customer has hit a similar issue before, and it causes pods to be created slowly, so this bug is opened to track it.

      How reproducible:

      Steps to Reproduce:

      1. OCP 4.16.6 with 50 worker nodes.

      2. Create 500 namespaces, 6,000 pods, 17k network policies, and 30k egress firewall rules (34 network policies and 60 egress firewall rules per namespace).

      3. Scale out one new worker node, or run oc delete pod ovnkube-node-xxxx.

      4. Check the ovnkube-controller log, or the metric ovnkube_controller_ready_duration_seconds (see the query sketch below).
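
      A minimal sketch for step 4, assuming the metric is scraped by the in-cluster Prometheus and that PROM_URL (the Prometheus query endpoint, e.g. the openshift-monitoring route plus /api/v1/query) and PROM_TOKEN (a bearer token, e.g. from oc whoami -t) are provided by the reader; this helper is hypothetical and not part of the reproducer or of OVN-Kubernetes:

      package main

      import (
              "crypto/tls"
              "fmt"
              "io"
              "net/http"
              "net/url"
              "os"
      )

      func main() {
              promURL := os.Getenv("PROM_URL")  // assumption: https://<prometheus-route>/api/v1/query
              token := os.Getenv("PROM_TOKEN")  // assumption: token with access to cluster monitoring
              q := url.Values{"query": {"ovnkube_controller_ready_duration_seconds"}}

              req, err := http.NewRequest("GET", promURL+"?"+q.Encode(), nil)
              if err != nil {
                      panic(err)
              }
              req.Header.Set("Authorization", "Bearer "+token)

              // Test clusters often use self-signed router certificates; skipping
              // verification is acceptable only for a throwaway check like this.
              client := &http.Client{Transport: &http.Transport{
                      TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
              }}
              resp, err := client.Do(req)
              if err != nil {
                      panic(err)
              }
              defer resp.Body.Close()

              body, _ := io.ReadAll(resp.Body)
              // One sample per node; values far above a few minutes match the
              // slow sync described in this bug.
              fmt.Println(string(body))
      }

      Alternatively, grepping the ovnkube-controller container log for "Completing all the Watchers" gives the same duration directly, as quoted in the actual results below.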

      Actual results:

      It takes 18 minutes for ovnkube-controller to complete its sync:

      I0817 10:47:35.230200 3338316 pods.go:220] [anp-open-102/anp-open-102-app-1-9496b8f54-5r4d8] addLogicalPort took 5.392342ms, libovsdb time 3.687732ms

      I0817 10:47:35.231762 3338316 pods.go:220] [anp-cidr-124/anp-cidr-124-perfweb-1-7cb4f79d6-84dbd] addLogicalPort took 6.071086ms, libovsdb time 2.736114ms

      I0817 10:47:35.231780 3338316 pods.go:220] [anp-open-239/anp-open-239-app-1-7974f86696-f5l9x] addLogicalPort took 6.28638ms, libovsdb time 2.02987ms

      I0817 10:47:35.232006 3338316 pods.go:220] [anp-open-238/anp-open-238-db-1-5bfdd78767-ls5pv] addLogicalPort took 3.464984ms, libovsdb time 1.606867ms

      I0817 10:47:35.232274 3338316 pods.go:220] [openshift-dns/dns-default-nwkjp] addLogicalPort took 3.805488ms, libovsdb time 1.828533ms

      I0817 10:47:35.232938 3338316 pods.go:220] [anp-cidr-168/anp-cidr-168-ef-1-f8864bfbb-rxbfd] addLogicalPort took 3.803398ms, libovsdb time 2.159627ms

      I0817 10:47:35.233100 3338316 pods.go:220] [anp-cidr-157/anp-cidr-157-db-1-7d66655cfc-sz259] addLogicalPort took 7.818846ms, libovsdb time 1.499355ms

      I0817 10:47:35.233700 3338316 pods.go:220] [anp-open-181/anp-cidr-181-perfweb-1-97c64d679-9s2zd] addLogicalPort took 2.829552ms, libovsdb time 1.030679ms

      I0817 10:47:35.234451 3338316 pods.go:220] [anp-cidr-80/anp-cidr-80-perfweb-1-75cfbbfcf9-jlzxk] addLogicalPort took 3.45671ms, libovsdb time 1.845078ms

      I0817 10:47:35.234578 3338316 pods.go:220] [anp-cidr-143/anp-cidr-143-db-1-bc7756bc8-64w29] addLogicalPort took 2.64342ms, libovsdb time 882.757µs

      I0817 10:47:35.234749 3338316 pods.go:220] [anp-open-75/anp-open-75-db-1-b948555cb-rrjt4] addLogicalPort took 4.527017ms, libovsdb time 2.740178ms

      I0817 10:47:35.235120 3338316 pods.go:220] [anp-open-224/anp-open-224-db-1-649cffdbbd-xv5xc] addLogicalPort took 2.518362ms, libovsdb time 752.833µs

      I0817 10:47:35.235271 3338316 pods.go:220] [anp-open-193/anp-open-193-db-1-558cfd64f-m4whq] addLogicalPort took 2.845013ms, libovsdb time 874.944µs

      I0817 10:47:35.235436 3338316 pods.go:220] [anp-open-195/anp-cidr-195-ef-1-6f58588d4b-fmtcx] addLogicalPort took 2.424302ms, libovsdb time 595.343µs

      I0817 10:47:35.235870 3338316 pods.go:220] [anp-open-155/anp-open-155-db-1-674b9f8587-ck2td] addLogicalPort took 2.011091ms, libovsdb time 423.001µs

      I0817 10:47:35.235981 3338316 pods.go:220] [anp-open-187/anp-open-187-app-1-6b46bf9484-qkl4t] addLogicalPort took 2.427343ms, libovsdb time 466.062µs

      I0817 10:47:35.236279 3338316 pods.go:220] [anp-open-241/anp-open-241-app-1-597958fcbb-sjbb5] addLogicalPort took 2.222995ms, libovsdb time 422.179µs

      I0817 10:47:35.237425 3338316 pods.go:220] [anp-cidr-6/anp-cidr-6-app-1-67c59d659b-s995r] addLogicalPort took 1.521931ms, libovsdb time 368.946µs

      I0817 10:47:35.237645 3338316 pods.go:220] [anp-cidr-21/anp-cidr-21-perfweb-1-7fcc8f5cc8-rg9x6] addLogicalPort took 2.157084ms, libovsdb time 412.519µs

      I0817 10:47:35.239412 3338316 pods.go:220] [anp-open-63/anp-open-63-db-1-b9bb5d75-twgtx] addLogicalPort took 1.543558ms, libovsdb time 354.61µs

      I0817 10:47:35.296642 3338316 repair.go:27] Repairing admin network policies took 47.998318ms

      I0817 10:47:35.343900 3338316 repair.go:90] Repairing baseline admin network policies took 47.21748ms

      I0817 11:05:40.819321 3338316 default_network_controller.go:572] Completing all the Watchers took 18m6.311281825s
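
      The per-pod addLogicalPort times above are only a few milliseconds each, which suggests most of the 18 minutes is spent elsewhere in the watcher sync rather than in logical port creation. A minimal sketch (not an OVN-Kubernetes tool; the log format is assumed to match the lines quoted above) that sums those per-pod durations from a saved ovnkube-controller log, for comparison against the 18m6s total:

      package main

      import (
              "bufio"
              "fmt"
              "os"
              "regexp"
              "time"
      )

      func main() {
              // Usage (hypothetical): go run sum_addlogicalport.go ovnkube-controller.log
              f, err := os.Open(os.Args[1])
              if err != nil {
                      panic(err)
              }
              defer f.Close()

              // Matches: "addLogicalPort took 5.392342ms, libovsdb time 3.687732ms"
              re := regexp.MustCompile(`addLogicalPort took (\S+), libovsdb time (\S+)`)

              var total, libovsdb time.Duration
              count := 0
              sc := bufio.NewScanner(f)
              sc.Buffer(make([]byte, 1024*1024), 1024*1024) // allow long log lines
              for sc.Scan() {
                      m := re.FindStringSubmatch(sc.Text())
                      if m == nil {
                              continue
                      }
                      if d, err := time.ParseDuration(m[1]); err == nil {
                              total += d
                      }
                      if d, err := time.ParseDuration(m[2]); err == nil {
                              libovsdb += d
                      }
                      count++
              }
              fmt.Printf("ports=%d  addLogicalPort total=%v  libovsdb total=%v\n", count, total, libovsdb)
      }

      If the per-port times stay at the few-millisecond level shown above, the summed addLogicalPort time accounts for only a small fraction of the 18 minutes, which points at the remaining watcher handlers (e.g. network policy and egress firewall sync) as the likely place to optimize.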

      Expected results:

      The sync should complete quickly; the watcher sync logic likely needs to be optimized.

      Additional info:

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp of the networking outage window from the must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template please see
        OCPBUGS Template Training for Networking components
