OpenShift Bugs · OCPBUGS-13098

[Reliability] cronjob test: defunct process ovs-vsctl ovn-appctl ovs-ofctl ovn-appctl

      Description of problem:

      There are 6 test users, and each user created 10 cronjobs scheduled to run every minute; each cronjob just runs a date command. The cronjob pods for each user are checked every 10 minutes. After the test had run for more than a day, one of the worker nodes became NotReady for about 10 seconds and then recovered to Ready. Checking the node's logs, I found that starting about half an hour before the node became NotReady there are log messages about defunct processes such as ovs-vsctl, ovn-appctl, ovs-ofctl, and ovs-appctl.
      
      I don't know whether this indicates an issue in OVN. Please help check the logs.
      
      

      Version-Release number of selected component (if applicable):

      4.13.0-0.nightly-2023-04-21-084440

      How reproducible:

      This is the first time I have run this test scenario.

      Steps to Reproduce:

      1. Install an AWS cluster with vm_type: m5.xlarge, 3 masters and 3 workers.
      2. There are 6 test users; each user will create 10 cronjobs, scheduled to run every minute (see the illustrative oc one-liner after these steps).
      Script: https://github.com/openshift/svt/blob/8557a75faa7325de40dae8cc96b99358ed0721bd/reliability-v2/tasks/script/cronjob.sh
      
      #create the cronjobs
      cronjob.sh -n 10
      
      # check the cronjobs every 10 minutes for each user
      cronjob.sh -c
      - name: dev-cronjob
      
      3. Run the test for an extended period and monitor the cluster's health.
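      
      For reference, a minimal sketch of the kind of cronjob the script creates, expressed as an oc one-liner (the name, namespace, and image below are illustrative assumptions; the actual definitions come from cronjob.sh):
      
      # hypothetical example only; name, namespace, and image are placeholders
      oc create cronjob dev-cronjob-1 -n testuser-0 \
        --image=registry.access.redhat.com/ubi8/ubi-minimal \
        --schedule="*/1 * * * *" -- date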
      

      Actual results:

      One of the worker nodes became NotReady during the test, but recovered by itself after about 10 seconds.
      
      Test started [2023-04-27 04:24:16 UTC]
      
      NodeNotReady occurred after the test had run for more than a day, on [Apr 28 09:29:06], and the node then became NodeReady again.
      
      From the must-gather, NodeNotReady was set by node-controller at 09:01:30 and by kubelet at 09:29:06.
      
      09:01:30 default node-controller ip-10-0-169-195.us-east-2.compute.internal NodeNotReady
      
      09:29:06 default kubelet ip-10-0-169-195.us-east-2.compute.internal NodeNotReady Node ip-10-0-169-195.us-east-2.compute.internal status is now: NodeNotReady
      
      
      From 09:00 to 09:02, CPU busy/iowait, disk throughput, and network packets all increased sharply.
      Screenshot: https://drive.google.com/file/d/1oCPp04-EjPkVhAwc4cpPy_g7xb6oimmk/view?usp=share_link
      
      Between 09:02 and 09:05 there are no node logs, and after that there were more defunct processes: ovs-vsctl, ovn-appctl, ovs-ofctl, ovs-appctl.
      
      Prometheus issue observed:
      09:05 On the Grafana dashboard, memory usage of the prometheus-k8s-0 pod on the affected node increased to 12.6G; soon after that, Prometheus data was lost.
      09:30 On the Grafana dashboard, Prometheus data recovered.
      Screenshot: https://drive.google.com/file/d/1Bks-dIXCJvoCsI7xGW5FdkBn2ezQlgli/view?usp=share_link
      
      
      

      Expected results:

      Do those defunct-process logs indicate an issue in OVS/OVN, or could some other component be causing these defunct processes? What can cause a defunct process?
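      
      For context on the last question: a defunct process is a zombie, i.e. a child that has already exited but has not yet been reaped by its parent with wait(). The zombies therefore point at whichever parent process spawned the ovs-vsctl/ovn-appctl/ovs-ofctl helpers and then stopped reaping them (possibly because it was hung or starved during the 09:00-09:29 window). A sketch of how to map zombies to their parents from a debug shell on the node (standard ps/awk, nothing OpenShift-specific):
      
      # list zombie processes together with their parent PID
      ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'
      # then inspect the parent of a zombie (replace <PPID> with a value printed above)
      ps -o pid,comm,args -p <PPID>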

      Additional info:

      Test started [2023-04-27 04:24:16 UTC]
      ---------------
      NodeNotReady occurred after the test had run for more than a day, on [Apr 28 09:29:06], and the node became NodeReady again on [Apr 28 09:29:16]
      
      Apr 28 09:29:06.016139 ip-10-0-169-195 kubenswrapper[2117]: I0428 09:29:06.011301    2117 kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-169-195.us-east-2.compute.internal" event="NodeNotReady"
      
      NodeReady again after 10s
      Apr 28 09:29:16.131666 ip-10-0-169-195 kubenswrapper[2117]: I0428 09:29:16.128395    2117 kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-169-195.us-east-2.compute.internal" event="NodeReady"
      
      ---------------
      #Describe the node
      % oc describe node ip-10-0-169-195.us-east-2.compute.internal 
      ...
      Events:
        Type    Reason                   Age   From             Message
        ----    ------                   ----  ----             -------
        Normal  NodeNotReady             90m   node-controller  Node ip-10-0-169-195.us-east-2.compute.internal status is now: NodeNotReady
        Normal  NodeHasSufficientMemory  62m   kubelet          Node ip-10-0-169-195.us-east-2.compute.internal status is now: NodeHasSufficientMemory
        Normal  NodeHasNoDiskPressure    62m   kubelet          Node ip-10-0-169-195.us-east-2.compute.internal status is now: NodeHasNoDiskPressure
        Normal  NodeHasSufficientPID     62m   kubelet          Node ip-10-0-169-195.us-east-2.compute.internal status is now: NodeHasSufficientPID
        Normal  NodeNotReady             62m   kubelet          Node ip-10-0-169-195.us-east-2.compute.internal status is now: NodeNotReady
        Normal  NodeReady                62m   kubelet          Node ip-10-0-169-195.us-east-2.compute.internal status is now: NodeReady
      
      ---------------
      #Check node logs around the time of NodeNotReady and NodeReady
      Apr 28 09:29:06.016139 ip-10-0-169-195 kubenswrapper[2117]: I0428 09:29:06.011190    2117 kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-169-195.us-east-2.compute.internal" event="NodeHasSufficientMemory"
      Apr 28 09:29:06.016139 ip-10-0-169-195 kubenswrapper[2117]: I0428 09:29:06.011223    2117 kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-169-195.us-east-2.compute.internal" event="NodeHasNoDiskPressure"
      Apr 28 09:29:06.016139 ip-10-0-169-195 kubenswrapper[2117]: I0428 09:29:06.011235    2117 kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-169-195.us-east-2.compute.internal" event="NodeHasSufficientPID"
      Apr 28 09:29:06.016139 ip-10-0-169-195 kubenswrapper[2117]: I0428 09:29:06.011301    2117 kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-169-195.us-east-2.compute.internal" event="NodeNotReady"
      Apr 28 09:29:06.016139 ip-10-0-169-195 kubenswrapper[2117]: I0428 09:29:06.011363    2117 setters.go:548] "Node became not ready" node="ip-10-0-169-195.us-east-2.compute.internal" condition={Type:Ready Status:False LastHeartbeatTime:2023-04-28 09:29:06.011240865 +0000 UTC m=+111963.836192062 LastTransitionTime:2023-04-28 09:29:06.011240865 +0000 UTC m=+111963.836192062 Reason:KubeletNotReady Message:[container runtime is down, PLEG is not healthy: pleg was last seen active 28m15.787626012s ago; threshold is 3m0s]}
      
      
      Apr 28 09:29:16.131666 ip-10-0-169-195 kubenswrapper[2117]: I0428 09:29:16.128395    2117 kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-169-195.us-east-2.compute.internal" event="NodeReady"
      
      ---------------
      #between 09:02 and 09:05 there are no logs, and after that ovs-vsctl became defunct
      
      Apr 28 09:00:31.107461 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:00:31.096799006Z" level=info msg="Stopped pod sandbox (already stopped): dd228a43371c18402928c960b4c4895d3cb5f17e4c83c4b250e4e341b2a07248" id=f93ecaf4-3197-4be7-b4b1-e994eaf5370a name=/runtime.v1.RuntimeService/StopPodSandbox
      Apr 28 09:00:31.107461 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:00:31.097039443Z" level=info msg="Removing pod sandbox: dd228a43371c18402928c960b4c4895d3cb5f17e4c83c4b250e4e341b2a07248" id=370d7f92-fe5b-43ea-adf3-420809711ac6 name=/runtime.v1.RuntimeService/RemovePodSandbox
      Apr 28 09:00:31.114964 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:00:31.110235331Z" level=info msg="Removed pod sandbox: dd228a43371c18402928c960b4c4895d3cb5f17e4c83c4b250e4e341b2a07248" id=370d7f92-fe5b-43ea-adf3-420809711ac6 name=/runtime.v1.RuntimeService/RemovePodSandbox
      Apr 28 09:00:34.748431 ip-10-0-169-195 ovs-vswitchd[1087]: ovs|76365|connmgr|INFO|br-ex<->unix#48313: 2 flow_mods in the last 0 s (2 adds)
      Apr 28 09:00:49.572183 ip-10-0-169-195 ovs-vswitchd[1087]: ovs|76366|connmgr|INFO|br-ex<->unix#48322: 2 flow_mods in the last 0 s (2 adds)
      Apr 28 09:01:16.736595 ip-10-0-169-195 ovs-vswitchd[1087]: ovs|76367|rconn|WARN|br-int<->unix#12: connection dropped (Connection reset by peer)
      Apr 28 09:01:22.816437 ip-10-0-169-195 ovs-vswitchd[1087]: ovs|76368|connmgr|INFO|br-int<->unix#13: 140 flow_mods 10 s ago (140 adds)
      Apr 28 09:02:29.265135 ip-10-0-169-195 ovs-vswitchd[1087]: ovs|76369|connmgr|INFO|br-int<->unix#13: 317 flow_mods in the 5 s starting 10 s ago (130 adds, 187 deletes)
      Apr 28 09:05:08.773381 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:03:18.243957221Z" level=warning msg="Found defunct process with PID 224581 (ovs-vsctl)"
      Apr 28 09:05:08.773381 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:03:18.846126778Z" level=warning msg="Found defunct process with PID 224581 (ovs-vsctl)"
      Apr 28 09:09:20.877864 ip-10-0-169-195 systemd-journald[224750]: Journal started
      Apr 28 09:09:27.474102 ip-10-0-169-195 systemd-journald[224750]: System Journal (/var/log/journal/ec2d6cfca4e64b9aebfd651824773d7e) is 2.7G, max 4.0G, 1.2G free.
      Apr 28 09:09:27.527504 ip-10-0-169-195 systemd-coredump[224729]: Resource limits disable core dumping for process 747 (systemd-journal).
      Apr 28 09:09:27.527616 ip-10-0-169-195 systemd-coredump[224729]: Process 747 (systemd-journal) of user 0 dumped core.
      Apr 28 09:09:27.527648 ip-10-0-169-195 systemd[1]: systemd-journald.service: Main process exited, code=dumped, status=6/ABRT
      Apr 28 09:09:27.527684 ip-10-0-169-195 systemd[1]: systemd-journald.service: Failed with result 'watchdog'. 
      Apr 28 09:09:27.527706 ip-10-0-169-195 systemd[1]: systemd-journald.service: Consumed 5min 14.897s CPU time.
      Apr 28 09:09:27.527727 ip-10-0-169-195 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 2.
      Apr 28 09:09:27.527752 ip-10-0-169-195 systemd[1]: Stopped Journal Service.
      Apr 28 09:09:27.527769 ip-10-0-169-195 systemd[1]: systemd-journald.service: Consumed 5min 14.897s CPU time.
      Apr 28 09:09:27.527789 ip-10-0-169-195 systemd[1]: Starting Journal Service...
      Apr 28 09:09:27.527814 ip-10-0-169-195 systemd-journald[224750]: File /var/log/journal/ec2d6cfca4e64b9aebfd651824773d7e/system.journal corrupted or uncleanly shut down, renaming and replacing.
      Apr 28 09:09:27.527887 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:05:23.976704682Z" level=warning msg="Found defunct process with PID 224605 (ovs-vsctl)"
      Apr 28 09:09:27.527887 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:05:46.928356054Z" level=warning msg="Found defunct process with PID 224605 (ovs-vsctl)"
      Apr 28 09:09:27.527887 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:09:00.956385711Z" level=warning msg="Found defunct process with PID 224743 (ovn-appctl)"
      Apr 28 09:08:10.315364 ip-10-0-169-195 systemd[1]: systemd-journald.service: Watchdog timeout (limit 3min)!
      Apr 28 09:03:16.043071 ip-10-0-169-195 ovs-vswitchd[1087]: ovs|76370|rconn|WARN|br-int<->unix#48323: connection dropped (Connection reset by peer)
      Apr 28 09:08:11.009648 ip-10-0-169-195 systemd[1]: systemd-journald.service: Killing process 747 (systemd-journal) with signal SIGABRT.
      Apr 28 09:03:42.699987 ip-10-0-169-195 ovs-vswitchd[1087]: ovs|76371|connmgr|INFO|br-int<->unix#13: 60 flow_mods 10 s ago (60 adds)
      Apr 28 09:08:55.091495 ip-10-0-169-195 systemd-coredump[224729]: Process 747 (systemd-journal) of user 0 dumped core.
      Apr 28 09:06:01.923020 ip-10-0-169-195 ovs-vswitchd[1087]: ovs|76372|rconn|WARN|br-int<->unix#48325: connection dropped (Connection reset by peer)
      Apr 28 09:06:18.742509 ip-10-0-169-195 ovs-vswitchd[1087]: ovs|76373|connmgr|INFO|br-int<->unix#13: 4 flow_mods 10 s ago (4 deletes)
      Apr 28 09:07:18.742449 ip-10-0-169-195 ovs-vswitchd[1087]: ovs|76374|connmgr|INFO|br-int<->unix#13: 61 flow_mods in the 42 s starting 49 s ago (5 adds, 56 deletes)
      Apr 28 09:07:21.482858 ip-10-0-169-195 ovs-vswitchd[1087]: ovs|76375|rconn|WARN|br-int<->unix#48326: connection dropped (Connection reset by peer)
      Apr 28 09:08:18.450336 ip-10-0-169-195 ovs-vswitchd[1087]: ovs|76376|rconn|WARN|br-int<->unix#48327: connection dropped (Connection reset by peer)
      Apr 28 09:08:18.742040 ip-10-0-169-195 ovs-vswitchd[1087]: ovs|76377|connmgr|INFO|br-int<->unix#13: 55 flow_mods 53 s ago (55 adds)
      Apr 28 09:09:12.175243 ip-10-0-169-195 ovs-vswitchd[1087]: ovs|76378|rconn|WARN|br-int<->unix#48328: connection dropped (Connection reset by peer)
      Apr 28 09:09:18.741905 ip-10-0-169-195 ovs-vswitchd[1087]: ovs|76379|connmgr|INFO|br-int<->unix#13: 720 flow_mods in the 36 s starting 45 s ago (371 adds, 349 deletes)
      Apr 28 09:09:29.464300 ip-10-0-169-195 systemd[1]: Started Journal Service.
      Apr 28 09:09:36.924668 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:09:27.949894672Z" level=warning msg="Found defunct process with PID 224745 (ovs-appctl)"
      Apr 28 09:10:07.329978 ip-10-0-169-195 ovs-vswitchd[1087]: ovs|76380|rconn|WARN|br-int<->unix#48332: connection dropped (Connection reset by peer)
      Apr 28 09:10:18.741638 ip-10-0-169-195 ovs-vswitchd[1087]: ovs|76381|connmgr|INFO|br-int<->unix#13: 120 flow_mods in the 51 s starting 54 s ago (60 adds, 60 deletes)
      
      ---------------
      #defunct processes
      % grep defunc node-logs-ip-10-0-169-195.us-east-2.compute.internal 
      Apr 28 09:05:08.773381 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:03:18.243957221Z" level=warning msg="Found defunct process with PID 224581 (ovs-vsctl)"
      Apr 28 09:05:08.773381 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:03:18.846126778Z" level=warning msg="Found defunct process with PID 224581 (ovs-vsctl)"
      Apr 28 09:09:27.527887 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:05:23.976704682Z" level=warning msg="Found defunct process with PID 224605 (ovs-vsctl)"
      Apr 28 09:09:27.527887 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:05:46.928356054Z" level=warning msg="Found defunct process with PID 224605 (ovs-vsctl)"
      Apr 28 09:09:27.527887 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:09:00.956385711Z" level=warning msg="Found defunct process with PID 224743 (ovn-appctl)"
      Apr 28 09:09:36.924668 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:09:27.949894672Z" level=warning msg="Found defunct process with PID 224745 (ovs-appctl)"
      Apr 28 09:11:44.479373 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:11:38.370759686Z" level=warning msg="Found defunct process with PID 224846 (ovs-ofctl)"
      Apr 28 09:11:55.001470 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:11:48.079590475Z" level=warning msg="Found defunct process with PID 224853 (ovs-ofctl)"
      Apr 28 09:12:17.497545 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:12:07.096646494Z" level=warning msg="Found defunct process with PID 224891 (ovs-vsctl)"
      Apr 28 09:12:41.459348 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:12:35.587690500Z" level=warning msg="Found defunct process with PID 224891 (ovs-vsctl)"
      Apr 28 09:13:36.117881 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:13:33.945115140Z" level=warning msg="Found defunct process with PID 224930 (ovs-ofctl)"
      Apr 28 09:13:36.117881 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:13:36.117965891Z" level=warning msg="Found defunct process with PID 224934 (ovs-vsctl)"
      Apr 28 09:13:36.117881 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:13:36.118044710Z" level=warning msg="Found defunct process with PID 224951 (ovs-vsctl)"
      Apr 28 09:14:20.199488 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:14:11.329902964Z" level=warning msg="Found defunct process with PID 224930 (ovs-ofctl)"
      Apr 28 09:14:20.551880 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:14:20.307964670Z" level=warning msg="Found defunct process with PID 224953 (ovs-vsctl)"
      Apr 28 09:14:20.551880 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:14:20.551683358Z" level=warning msg="Found defunct process with PID 224962 (ovs-ofctl)"
      Apr 28 09:14:32.680299 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:14:26.581686859Z" level=warning msg="Found defunct process with PID 224953 (ovs-vsctl)"
      Apr 28 09:15:07.245077 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:15:03.590289180Z" level=warning msg="Found defunct process with PID 224953 (ovs-vsctl)"
      Apr 28 09:16:32.397655 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:16:26.408322262Z" level=warning msg="Found defunct process with PID 225060 (ovn-appctl)"
      Apr 28 09:16:42.889236 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:16:34.251659401Z" level=warning msg="Found defunct process with PID 225061 (ovs-vsctl)"
      Apr 28 09:16:49.702667 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:16:46.564632284Z" level=warning msg="Found defunct process with PID 225062 (ovs-vsctl)"
      Apr 28 09:17:03.598321 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:16:58.911828779Z" level=warning msg="Found defunct process with PID 225060 (ovn-appctl)"
      Apr 28 09:17:10.532785 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:17:08.176217996Z" level=warning msg="Found defunct process with PID 225061 (ovs-vsctl)"
      Apr 28 09:17:28.524115 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:17:27.721528189Z" level=warning msg="Found defunct process with PID 225060 (ovn-appctl)"
      Apr 28 09:17:28.524115 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:17:28.524188100Z" level=warning msg="Found defunct process with PID 225061 (ovs-vsctl)"
      Apr 28 09:17:28.524115 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:17:28.524232198Z" level=warning msg="Found defunct process with PID 225074 (ovs-vsctl)"
      Apr 28 09:17:28.524115 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:17:28.524327802Z" level=warning msg="Found defunct process with PID 225108 (ovn-appctl)"
      Apr 28 09:18:12.313607 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:18:04.085608696Z" level=warning msg="Found defunct process with PID 225108 (ovn-appctl)"
      Apr 28 09:18:37.329780 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:18:30.364439135Z" level=warning msg="Found defunct process with PID 225108 (ovn-appctl)"
      Apr 28 09:19:41.829576 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:19:33.606164354Z" level=warning msg="Found defunct process with PID 225184 (ovs-ofctl)"
      Apr 28 09:19:46.315088 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:19:42.036008158Z" level=warning msg="Found defunct process with PID 225186 (ovs-appctl)"
      Apr 28 09:19:58.761267 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:19:54.791588434Z" level=warning msg="Found defunct process with PID 225186 (ovs-appctl)"
      Apr 28 09:20:38.135994 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:20:30.266490523Z" level=warning msg="Found defunct process with PID 225186 (ovs-appctl)"
      Apr 28 09:20:46.440407 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:20:42.015973205Z" level=warning msg="Found defunct process with PID 225233 (ovs-ofctl)"
      Apr 28 09:20:53.042832 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:20:51.639475002Z" level=warning msg="Found defunct process with PID 225234 (ovs-ofctl)"
      Apr 28 09:21:01.498941 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:20:56.337263628Z" level=warning msg="Found defunct process with PID 225234 (ovs-ofctl)"
      Apr 28 09:21:01.498941 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:20:59.496708276Z" level=warning msg="Found defunct process with PID 225260 (ovs-ofctl)"
      Apr 28 09:22:03.708477 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:21:58.451853005Z" level=warning msg="Found defunct process with PID 225310 (ovs-vsctl)"
      Apr 28 09:22:05.212395 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:22:04.513070857Z" level=warning msg="Found defunct process with PID 225314 (ovs-vsctl)"
      Apr 28 09:22:05.212395 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:22:04.798916036Z" level=warning msg="Found defunct process with PID 225316 (ovs-ofctl)"
      Apr 28 09:22:05.212395 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:22:04.798972590Z" level=warning msg="Found defunct process with PID 225321 (ovs-vsctl)"
      Apr 28 09:22:05.212395 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:22:04.799015158Z" level=warning msg="Found defunct process with PID 225323 (ovs-vsctl)"
      Apr 28 09:22:33.091915 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:22:29.960339438Z" level=warning msg="Found defunct process with PID 225321 (ovs-vsctl)"
      Apr 28 09:22:44.534534 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:22:36.011683651Z" level=warning msg="Found defunct process with PID 225329 (ovn-appctl)"
      Apr 28 09:23:01.378783 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:22:55.138359409Z" level=warning msg="Found defunct process with PID 225329 (ovn-appctl)"
      Apr 28 09:23:37.862813 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:23:30.125297836Z" level=warning msg="Found defunct process with PID 225329 (ovn-appctl)"
      Apr 28 09:23:47.508628 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:23:41.468719146Z" level=warning msg="Found defunct process with PID 225333 (ovn-appctl)"
      Apr 28 09:23:52.004363 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:23:50.389570206Z" level=warning msg="Found defunct process with PID 225376 (ovs-appctl)"
      Apr 28 09:24:01.025202 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:23:56.870869862Z" level=warning msg="Found defunct process with PID 225381 (ovs-vsctl)"
      Apr 28 09:24:07.818983 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:24:03.446093460Z" level=warning msg="Found defunct process with PID 225392 (ovs-vsctl)"
      Apr 28 09:24:14.156514 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:24:12.875001011Z" level=warning msg="Found defunct process with PID 225399 (ovs-vsctl)"
      Apr 28 09:24:33.119922 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:24:29.363214928Z" level=warning msg="Found defunct process with PID 225392 (ovs-vsctl)"
      Apr 28 09:24:33.119922 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:24:32.025774490Z" level=warning msg="Found defunct process with PID 225399 (ovs-vsctl)"
      Apr 28 09:24:33.119922 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:24:32.025818191Z" level=warning msg="Found defunct process with PID 225401 (ovs-vsctl)"
      Apr 28 09:24:33.119922 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:24:32.025846503Z" level=warning msg="Found defunct process with PID 225406 (ovs-vsctl)"
      Apr 28 09:25:15.439946 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:25:04.980389247Z" level=warning msg="Found defunct process with PID 225439 (ovs-ofctl)"
      Apr 28 09:25:21.981521 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:25:19.432554930Z" level=warning msg="Found defunct process with PID 225444 (ovs-vsctl)"
      Apr 28 09:25:34.820769 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:25:29.690525071Z" level=warning msg="Found defunct process with PID 225439 (ovs-ofctl)"
      Apr 28 09:25:39.762170 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:25:36.699439116Z" level=warning msg="Found defunct process with PID 225444 (ovs-vsctl)"
      Apr 28 09:25:39.762170 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:25:39.665710782Z" level=warning msg="Found defunct process with PID 225451 (ovs-vsctl)"
      Apr 28 09:25:39.762170 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:25:39.665824212Z" level=warning msg="Found defunct process with PID 225457 (ovs-ofctl)"
      Apr 28 09:26:01.301211 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:25:58.541194836Z" level=warning msg="Found defunct process with PID 225439 (ovs-ofctl)"
      Apr 28 09:26:01.301211 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:26:00.903982395Z" level=warning msg="Found defunct process with PID 225451 (ovs-vsctl)"
      Apr 28 09:26:01.301211 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:26:00.904089083Z" level=warning msg="Found defunct process with PID 225457 (ovs-ofctl)"
      Apr 28 09:26:01.301211 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:26:00.904119450Z" level=warning msg="Found defunct process with PID 225475 (ovs-ofctl)"
      Apr 28 09:26:01.301211 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:26:00.904141558Z" level=warning msg="Found defunct process with PID 225479 (ovs-ofctl)"
      Apr 28 09:26:01.301211 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:26:00.904162411Z" level=warning msg="Found defunct process with PID 225480 (ovs-ofctl)"
      Apr 28 09:27:35.334927 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:27:29.521664453Z" level=warning msg="Found defunct process with PID 225605 (ovs-appctl)"
      Apr 28 09:28:56.952086 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:28:56.266507664Z" level=warning msg="Found defunct process with PID 225708 (ovs-appctl)"
      Apr 28 09:29:05.374771 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:29:05.070749323Z" level=warning msg="Found defunct process with PID 225779 (sh)"
      Apr 28 09:29:06.959583 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:29:06.900525753Z" level=warning msg="Found defunct process with PID 225625 (haproxy)"
      Apr 28 09:29:06.959583 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:29:06.900966989Z" level=warning msg="Found defunct process with PID 225905 (runc:[1:CHILD])"
      Apr 28 09:29:10.657240 ip-10-0-169-195 crio[2089]: time="2023-04-28 09:29:10.655332214Z" level=warning msg="Found defunct process with PID 225625 (haproxy)"
      Apr 28 10:34:50.726380 ip-10-0-169-195 crio[2089]: time="2023-04-28 10:34:50.726321219Z" level=warning msg="Found defunct process with PID 405786 (runc)"
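      
      One quick way to tally which binaries appear as defunct in that log (same file as the grep above; the sed expression is just a sketch that strips everything except the command name in parentheses):
      
      % grep 'Found defunct process' node-logs-ip-10-0-169-195.us-east-2.compute.internal \
          | sed 's/.*(\(.*\))"$/\1/' | sort | uniq -c | sort -rn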
