OpenShift Bugs / OCPBUGS-49933

ovnkube-controller container crashed on RHEL-8 worker

    • Important
    • Yes
    • SDN Sprint 266, SDN Sprint 267, SDN Sprint 268
    • 3
    • Approved
    • False
    • None
    • * There is a known issue with RHEL 8 worker nodes that use `cgroupv1` Linux Control Groups (cgroup). The following is an example of the error message displayed for impacted nodes: `UDN are not supported on the node ip-10-0-51-120.us-east-2.compute.internal as it uses cgroup v1.` As a workaround, users should migrate worker nodes from `cgroupv1` to `cgroupv2`. (link:https://issues.redhat.com/browse/OCPBUGS-49933[*OCPBUGS-49933*])
    • Known Issue
    • Done

      Description of problem:

      We're seeing the following error in the ovnkube-controller container log on a RHEL-8 worker, which leaves the node's network not ready:

      F0206 03:40:21.953369   12091 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller - while waiting for any node to have zone: "ip-10-0-75-250.ec2.internal", error: context canceled, failed to start node network controller: failed to start default node network controller: failed to find kubelet cgroup path: %!w(<nil>)]
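      The trailing `%!w(<nil>)` in that fatal message is the artifact Go's fmt package prints when a nil value is passed to the `%w` verb, which suggests the kubelet cgroup-path lookup returned no underlying error to wrap. A minimal standalone sketch that reproduces the artifact (illustration only, not ovn-kubernetes code):

      package main

      import (
          "errors"
          "fmt"
      )

      func main() {
          // Passing a nil error to %w is invalid, so fmt prints the verb back with
          // the bad operand instead of wrapping it -- the "%!w(<nil>)" seen in the log.
          var cause error // nil: the lookup produced no underlying error to wrap
          err := fmt.Errorf("failed to find kubelet cgroup path: %w", cause)

          fmt.Println(err)                // failed to find kubelet cgroup path: %!w(<nil>)
          fmt.Println(errors.Unwrap(err)) // <nil> -- nothing was actually wrapped
      }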
      

      The full log of the ovnkube-controller container:

      https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.18-e2e-aws-ovn-workers-rhel8/1887322975150018560/artifacts/e2e-aws-ovn-workers-rhel8/gather-extra/artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-js6vn_ovnkube-controller.log

      Version-Release number of selected component (if applicable):
      4.18.0-0.nightly-2025-02-05-033447/4.18.0-0.nightly-2025-02-04-192134

      How reproducible:
      Always

      Steps to Reproduce:
      1. Add a RHEL-8 worker to a 4.18 OCP cluster. The RHEL workers never become Ready, and the following error about ovnkube-controller appears in the kubelet.log (see the cgroup check sketch after these steps):

      Feb 06 11:38:34 ip-10-0-50-48.us-east-2.compute.internal kubenswrapper[15267]: E0206 11:38:34.798490   15267 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-txkkp_openshift-ovn-kubernetes(c22474ab-6f0b-4403-93a6-eb80766934e6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-txkkp" podUID="c22474ab-6f0b-4403-93a6-eb80766934e6"
      

      An example failure job: https://prow.ci.openshift.org/view/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.18-e2e-aws-ovn-workers-rhel8/1887322975150018560

      2.

      3.
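      The crash correlates with the host's cgroup mode: RHEL 8 workers default to cgroup v1, and the known-issue note above points at migrating them to cgroup v2. A quick way to confirm which hierarchy a node is running is to check the filesystem type of /sys/fs/cgroup; the Go sketch below does that via statfs (an illustrative check, not the detection code used by ovn-kubernetes):

      package main

      import (
          "fmt"

          "golang.org/x/sys/unix"
      )

      // cgroupV2 reports whether /sys/fs/cgroup is the unified cgroup v2 hierarchy.
      // On cgroup v1 hosts (the RHEL 8 default) the mount is a tmpfs holding the
      // per-controller v1 subdirectories instead.
      func cgroupV2() (bool, error) {
          var st unix.Statfs_t
          if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
              return false, err
          }
          return st.Type == unix.CGROUP2_SUPER_MAGIC, nil
      }

      func main() {
          v2, err := cgroupV2()
          if err != nil {
              fmt.Println("statfs /sys/fs/cgroup failed:", err)
              return
          }
          if v2 {
              fmt.Println("cgroup v2 (unified) hierarchy")
          } else {
              fmt.Println("cgroup v1 hierarchy -- matches the failing RHEL 8 workers")
          }
      }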

      Actual results:

      Expected results:

      Additional info:
      Based on the test history, it works with 4.18.0-0.nightly-2025-02-04-114552 but starts failing with 4.18.0-0.nightly-2025-02-05-033447.
      (Update: confirmed it also fails on 4.18.0-0.nightly-2025-02-04-192134.)

      Here's the daily 4.18 rhel8 job history link:
      https://prow.ci.openshift.org/job-history/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.18-e2e-aws-ovn-workers-rhel8

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal RedHat testing failure

      If it is an internal RedHat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn't need to read the entire case history.
      • Don't presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp networking outage window from must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with "sbr-triaged"
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with "sbr-untriaged"
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label "SDN-Jira-template"
      • For guidance on using this template please see
        OCPBUGS Template Training for Networking components

            [OCPBUGS-49933] ovnkube-controller container crashed on RHEL-8 worker

            Zhanqi Zhao added a comment -

            >I was able to build eventually. zzhao1@redhat.com This bug is for 4.19. We should be taking care of 4.18.0 work. Is there a need for this bug (code) to be back ported to 4.18.0?

            I guess this will not be backported to 4.18.0, but maybe 4.18.1, since the new patches only address a warning-message issue, not a blocker, IIUC.


            Arti Sood added a comment -

            I was able to build eventually. zzhao1@redhat.com This bug is for 4.19. We should be taking care of 4.18.0 work. Is there a need for this bug (code) to be back ported to 4.18.0?


            Arti Sood added a comment -

             

            build 4.19.0-0.nightly-2025-02-17-112513, openshift/ovn-kubernetes#2459

            Failure https://prow.ci.openshift.org/view/gs/test-platform-results/logs/release-openshift-origin-installer-launch-aws-modern/1891513694936895488  

             

            Failed twice


            Arti Sood added a comment -

            zzhao1@redhat.com Could you elaborate on the regression tests? As per Nadia's comment I see running hostnetwork isolation tests. These tests are in the context of UDN, not regression.


            Arti Sood added a comment -

             

            RHEL 8 workers are not supported in 4.19. rh-ee-gpei, is scaling up and adding RHEL 8 nodes blocked somehow in 4.19, or do we expect the customer not to attempt to add them at all?

            The event is relevant to 4.18.


            Zhanqi Zhao added a comment -

            rhn-support-asood has been working on this area; could you do a regression test on that? Thanks.


            Gaoyun Pei added a comment -

            zzhao1@redhat.com anusaxen Do you know who can help with the hostnetwork isolation tests on 4.19 as Nadia mentioned above?


            Gaoyun Pei added a comment -

            weliang1@redhat.com npinaeva@redhat.com Just FYI, since we don't support RHEL-8 workers for 4.19, we may not be able to verify this issue on 4.19 specifically with RHEL workers.


            OpenShift Jira Bot added a comment -

            Hi npinaeva@redhat.com,

            Bugs should not be moved to Verified without first providing a Release Note Type("Bug Fix" or "No Doc Update") and for type "Bug Fix" the Release Note Text must also be provided. Please populate the necessary fields before moving the Bug to Verified.


            Weibin Liang added a comment -

            Tested @Nadia Pinaeva's https://github.com/openshift/ovn-kubernetes/pull/2445; I do not see the ovnkube-controller container crashing on RHEL-8 worker nodes. See my testing log.

             

            $ oc get clusterversion
            NAME      VERSION                                                AVAILABLE   PROGRESSING   SINCE   STATUS
            version   4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         40m     Cluster version is 4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest
             
            $ oc get nodes
            NAME                                        STATUS   ROLES                  AGE     VERSION
            ip-10-0-53-53.us-east-2.compute.internal    Ready    worker                 6m29s   v1.31.5                    ---- New added RHEL8 node
            ip-10-0-54-126.us-east-2.compute.internal   Ready    worker                 6m40s   v1.31.5                    ---- New added RHEL8 node
            ip-10-0-57-191.us-east-2.compute.internal   Ready    control-plane,master   65m     v1.31.5
            ip-10-0-59-83.us-east-2.compute.internal    Ready    worker                 57m     v1.31.5
            ip-10-0-63-31.us-east-2.compute.internal    Ready    control-plane,master   65m     v1.31.5
            ip-10-0-75-97.us-east-2.compute.internal    Ready    worker                 57m     v1.31.5
            ip-10-0-77-27.us-east-2.compute.internal    Ready    control-plane,master   65m     v1.31.5
            ip-10-0-79-26.us-east-2.compute.internal    Ready    worker                 53m     v1.31.5
             
            $ oc debug node/ip-10-0-53-53.us-east-2.compute.internal
            Starting pod/ip-10-0-53-53us-east-2computeinternal-debug-slxst ...
            To use host binaries, run `chroot /host`
            Pod IP: 10.0.53.53
            If you don't see a command prompt, try pressing enter.
            sh-5.1# chroot /host   
            sh-4.4# cat /etc/system-release
            Red Hat Enterprise Linux release 8.10 (Ootpa)
            sh-4.4# 
             
            $ oc get event
            LAST SEEN   TYPE      REASON                                OBJECT                                           MESSAGE
            60m         Normal    CSRApproved                           certificatesigningrequest/csr-6gr7b              CSR "csr-6gr7b" has been approved
            54m         Normal    CSRApproved                           certificatesigningrequest/csr-7g9m2              CSR "csr-7g9m2" has been approved
            54m         Normal    CSRApproved                           certificatesigningrequest/csr-7l8sk              CSR "csr-7l8sk" has been approved
            60m         Normal    CSRApproved                           certificatesigningrequest/csr-8jj5q              CSR "csr-8jj5q" has been approved
            2m46s       Normal    CSRApproved                           certificatesigningrequest/csr-8wx76              CSR "csr-8wx76" has been approved
            2m51s       Normal    CSRApproved                           certificatesigningrequest/csr-clvd2              CSR "csr-clvd2" has been approved
            60m         Normal    CSRApproved                           certificatesigningrequest/csr-cpg7r              CSR "csr-cpg7r" has been approved
            54m         Normal    CSRApproved                           certificatesigningrequest/csr-d744p              CSR "csr-d744p" has been approved
            54m         Normal    CSRApproved                           certificatesigningrequest/csr-hhfz2              CSR "csr-hhfz2" has been approved
            2m48s       Normal    CSRApproved                           certificatesigningrequest/csr-jkvct              CSR "csr-jkvct" has been approved
            60m         Normal    CSRApproved                           certificatesigningrequest/csr-kwzjk              CSR "csr-kwzjk" has been approved
            50m         Normal    CSRApproved                           certificatesigningrequest/csr-lwslc              CSR "csr-lwslc" has been approved
            60m         Normal    CSRApproved                           certificatesigningrequest/csr-q8989              CSR "csr-q8989" has been approved
            60m         Normal    CSRApproved                           certificatesigningrequest/csr-sqzpc              CSR "csr-sqzpc" has been approved
            2m58s       Normal    CSRApproved                           certificatesigningrequest/csr-tpq52              CSR "csr-tpq52" has been approved
            50m         Normal    CSRApproved                           certificatesigningrequest/csr-tsxr6              CSR "csr-tsxr6" has been approved
            3m43s       Normal    NodeHasSufficientMemory               node/ip-10-0-53-53.us-east-2.compute.internal    Node ip-10-0-53-53.us-east-2.compute.internal status is now: NodeHasSufficientMemory
            3m43s       Normal    Synced                                node/ip-10-0-53-53.us-east-2.compute.internal    Node synced successfully
            3m39s       Normal    RegisteredNode                        node/ip-10-0-53-53.us-east-2.compute.internal    Node ip-10-0-53-53.us-east-2.compute.internal event: Registered Node ip-10-0-53-53.us-east-2.compute.internal in Controller
            3m53s       Normal    Synced                                node/ip-10-0-54-126.us-east-2.compute.internal   Node synced successfully
            3m49s       Normal    RegisteredNode                        node/ip-10-0-54-126.us-east-2.compute.internal   Node ip-10-0-54-126.us-east-2.compute.internal event: Registered Node ip-10-0-54-126.us-east-2.compute.internal in Controller
            62m         Normal    NodeHasSufficientMemory               node/ip-10-0-57-191.us-east-2.compute.internal   Node ip-10-0-57-191.us-east-2.compute.internal status is now: NodeHasSufficientMemory
            62m         Normal    NodeHasNoDiskPressure                 node/ip-10-0-57-191.us-east-2.compute.internal   Node ip-10-0-57-191.us-east-2.compute.internal status is now: NodeHasNoDiskPressure
            62m         Normal    NodeHasSufficientPID                  node/ip-10-0-57-191.us-east-2.compute.internal   Node ip-10-0-57-191.us-east-2.compute.internal status is now: NodeHasSufficientPID
            62m         Normal    RegisteredNode                        node/ip-10-0-57-191.us-east-2.compute.internal   Node ip-10-0-57-191.us-east-2.compute.internal event: Registered Node ip-10-0-57-191.us-east-2.compute.internal in Controller
            61m         Normal    Synced                                node/ip-10-0-57-191.us-east-2.compute.internal   Node synced successfully
            59m         Normal    NodeReady                             node/ip-10-0-57-191.us-east-2.compute.internal   Node ip-10-0-57-191.us-east-2.compute.internal status is now: NodeReady
            52m         Normal    RegisteredNode                        node/ip-10-0-57-191.us-east-2.compute.internal   Node ip-10-0-57-191.us-east-2.compute.internal event: Registered Node ip-10-0-57-191.us-east-2.compute.internal in Controller
            49m         Normal    RegisteredNode                        node/ip-10-0-57-191.us-east-2.compute.internal   Node ip-10-0-57-191.us-east-2.compute.internal event: Registered Node ip-10-0-57-191.us-east-2.compute.internal in Controller
            47m         Normal    RegisteredNode                        node/ip-10-0-57-191.us-east-2.compute.internal   Node ip-10-0-57-191.us-east-2.compute.internal event: Registered Node ip-10-0-57-191.us-east-2.compute.internal in Controller
            42m         Normal    RegisteredNode                        node/ip-10-0-57-191.us-east-2.compute.internal   Node ip-10-0-57-191.us-east-2.compute.internal event: Registered Node ip-10-0-57-191.us-east-2.compute.internal in Controller
            55m         Normal    Starting                              node/ip-10-0-59-83.us-east-2.compute.internal    Starting kubelet.
            55m         Normal    NodeHasSufficientMemory               node/ip-10-0-59-83.us-east-2.compute.internal    Node ip-10-0-59-83.us-east-2.compute.internal status is now: NodeHasSufficientMemory
            55m         Normal    NodeHasNoDiskPressure                 node/ip-10-0-59-83.us-east-2.compute.internal    Node ip-10-0-59-83.us-east-2.compute.internal status is now: NodeHasNoDiskPressure
            55m         Normal    NodeHasSufficientPID                  node/ip-10-0-59-83.us-east-2.compute.internal    Node ip-10-0-59-83.us-east-2.compute.internal status is now: NodeHasSufficientPID
            55m         Normal    NodeAllocatableEnforced               node/ip-10-0-59-83.us-east-2.compute.internal    Updated Node Allocatable limit across pods
            55m         Normal    RegisteredNode                        node/ip-10-0-59-83.us-east-2.compute.internal    Node ip-10-0-59-83.us-east-2.compute.internal event: Registered Node ip-10-0-59-83.us-east-2.compute.internal in Controller
            55m         Normal    Synced                                node/ip-10-0-59-83.us-east-2.compute.internal    Node synced successfully
            54m         Normal    NodeReady                             node/ip-10-0-59-83.us-east-2.compute.internal    Node ip-10-0-59-83.us-east-2.compute.internal status is now: NodeReady
            52m         Normal    RegisteredNode                        node/ip-10-0-59-83.us-east-2.compute.internal    Node ip-10-0-59-83.us-east-2.compute.internal event: Registered Node ip-10-0-59-83.us-east-2.compute.internal in Controller
            49m         Normal    RegisteredNode                        node/ip-10-0-59-83.us-east-2.compute.internal    Node ip-10-0-59-83.us-east-2.compute.internal event: Registered Node ip-10-0-59-83.us-east-2.compute.internal in Controller
            47m         Normal    RegisteredNode                        node/ip-10-0-59-83.us-east-2.compute.internal    Node ip-10-0-59-83.us-east-2.compute.internal event: Registered Node ip-10-0-59-83.us-east-2.compute.internal in Controller
            42m         Normal    RegisteredNode                        node/ip-10-0-59-83.us-east-2.compute.internal    Node ip-10-0-59-83.us-east-2.compute.internal event: Registered Node ip-10-0-59-83.us-east-2.compute.internal in Controller
            62m         Normal    RegisteredNode                        node/ip-10-0-63-31.us-east-2.compute.internal    Node ip-10-0-63-31.us-east-2.compute.internal event: Registered Node ip-10-0-63-31.us-east-2.compute.internal in Controller
            61m         Normal    Synced                                node/ip-10-0-63-31.us-east-2.compute.internal    Node synced successfully
            52m         Normal    RegisteredNode                        node/ip-10-0-63-31.us-east-2.compute.internal    Node ip-10-0-63-31.us-east-2.compute.internal event: Registered Node ip-10-0-63-31.us-east-2.compute.internal in Controller
            49m         Normal    RegisteredNode                        node/ip-10-0-63-31.us-east-2.compute.internal    Node ip-10-0-63-31.us-east-2.compute.internal event: Registered Node ip-10-0-63-31.us-east-2.compute.internal in Controller
            47m         Normal    RegisteredNode                        node/ip-10-0-63-31.us-east-2.compute.internal    Node ip-10-0-63-31.us-east-2.compute.internal event: Registered Node ip-10-0-63-31.us-east-2.compute.internal in Controller
            42m         Normal    RegisteredNode                        node/ip-10-0-63-31.us-east-2.compute.internal    Node ip-10-0-63-31.us-east-2.compute.internal event: Registered Node ip-10-0-63-31.us-east-2.compute.internal in Controller
            55m         Normal    Starting                              node/ip-10-0-75-97.us-east-2.compute.internal    Starting kubelet.
            55m         Normal    NodeHasSufficientMemory               node/ip-10-0-75-97.us-east-2.compute.internal    Node ip-10-0-75-97.us-east-2.compute.internal status is now: NodeHasSufficientMemory
            55m         Normal    NodeHasNoDiskPressure                 node/ip-10-0-75-97.us-east-2.compute.internal    Node ip-10-0-75-97.us-east-2.compute.internal status is now: NodeHasNoDiskPressure
            55m         Normal    NodeHasSufficientPID                  node/ip-10-0-75-97.us-east-2.compute.internal    Node ip-10-0-75-97.us-east-2.compute.internal status is now: NodeHasSufficientPID
            55m         Normal    NodeAllocatableEnforced               node/ip-10-0-75-97.us-east-2.compute.internal    Updated Node Allocatable limit across pods
            55m         Normal    Synced                                node/ip-10-0-75-97.us-east-2.compute.internal    Node synced successfully
            55m         Normal    RegisteredNode                        node/ip-10-0-75-97.us-east-2.compute.internal    Node ip-10-0-75-97.us-east-2.compute.internal event: Registered Node ip-10-0-75-97.us-east-2.compute.internal in Controller
            54m         Normal    NodeReady                             node/ip-10-0-75-97.us-east-2.compute.internal    Node ip-10-0-75-97.us-east-2.compute.internal status is now: NodeReady
            52m         Normal    RegisteredNode                        node/ip-10-0-75-97.us-east-2.compute.internal    Node ip-10-0-75-97.us-east-2.compute.internal event: Registered Node ip-10-0-75-97.us-east-2.compute.internal in Controller
            49m         Normal    RegisteredNode                        node/ip-10-0-75-97.us-east-2.compute.internal    Node ip-10-0-75-97.us-east-2.compute.internal event: Registered Node ip-10-0-75-97.us-east-2.compute.internal in Controller
            47m         Normal    RegisteredNode                        node/ip-10-0-75-97.us-east-2.compute.internal    Node ip-10-0-75-97.us-east-2.compute.internal event: Registered Node ip-10-0-75-97.us-east-2.compute.internal in Controller
            42m         Normal    RegisteredNode                        node/ip-10-0-75-97.us-east-2.compute.internal    Node ip-10-0-75-97.us-east-2.compute.internal event: Registered Node ip-10-0-75-97.us-east-2.compute.internal in Controller
            62m         Normal    NodeHasSufficientMemory               node/ip-10-0-77-27.us-east-2.compute.internal    Node ip-10-0-77-27.us-east-2.compute.internal status is now: NodeHasSufficientMemory
            62m         Normal    NodeHasNoDiskPressure                 node/ip-10-0-77-27.us-east-2.compute.internal    Node ip-10-0-77-27.us-east-2.compute.internal status is now: NodeHasNoDiskPressure
            62m         Normal    RegisteredNode                        node/ip-10-0-77-27.us-east-2.compute.internal    Node ip-10-0-77-27.us-east-2.compute.internal event: Registered Node ip-10-0-77-27.us-east-2.compute.internal in Controller
            61m         Normal    Synced                                node/ip-10-0-77-27.us-east-2.compute.internal    Node synced successfully
            52m         Normal    RegisteredNode                        node/ip-10-0-77-27.us-east-2.compute.internal    Node ip-10-0-77-27.us-east-2.compute.internal event: Registered Node ip-10-0-77-27.us-east-2.compute.internal in Controller
            49m         Normal    RegisteredNode                        node/ip-10-0-77-27.us-east-2.compute.internal    Node ip-10-0-77-27.us-east-2.compute.internal event: Registered Node ip-10-0-77-27.us-east-2.compute.internal in Controller
            47m         Normal    RegisteredNode                        node/ip-10-0-77-27.us-east-2.compute.internal    Node ip-10-0-77-27.us-east-2.compute.internal event: Registered Node ip-10-0-77-27.us-east-2.compute.internal in Controller
            42m         Normal    RegisteredNode                        node/ip-10-0-77-27.us-east-2.compute.internal    Node ip-10-0-77-27.us-east-2.compute.internal event: Registered Node ip-10-0-77-27.us-east-2.compute.internal in Controller
            50m         Normal    Synced                                node/ip-10-0-79-26.us-east-2.compute.internal    Node synced successfully
            50m         Normal    RegisteredNode                        node/ip-10-0-79-26.us-east-2.compute.internal    Node ip-10-0-79-26.us-east-2.compute.internal event: Registered Node ip-10-0-79-26.us-east-2.compute.internal in Controller
            49m         Normal    RegisteredNode                        node/ip-10-0-79-26.us-east-2.compute.internal    Node ip-10-0-79-26.us-east-2.compute.internal event: Registered Node ip-10-0-79-26.us-east-2.compute.internal in Controller
            47m         Normal    RegisteredNode                        node/ip-10-0-79-26.us-east-2.compute.internal    Node ip-10-0-79-26.us-east-2.compute.internal event: Registered Node ip-10-0-79-26.us-east-2.compute.internal in Controller
            42m         Normal    RegisteredNode                        node/ip-10-0-79-26.us-east-2.compute.internal    Node ip-10-0-79-26.us-east-2.compute.internal event: Registered Node ip-10-0-79-26.us-east-2.compute.internal in Controller
            54m         Normal    Status upgrade                        clusteroperator/machine-api                      Progressing towards operator: 4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest
            63m         Warning   KubeAPIReadyz                         namespace/openshift-kube-apiserver               readyz=true
            52m         Normal    ShutdownInitiated                     namespace/openshift-kube-apiserver               Received signal to terminate, becoming unready, but keeping serving
            52m         Normal    TerminationPreShutdownHooksFinished   namespace/openshift-kube-apiserver               All pre-shutdown hooks have been finished
             
            $ oc get event | grep cgroup
             
            $ oc get co
            NAME                                       VERSION                                                AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
            authentication                             4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      39m     
            baremetal                                  4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      59m     
            cloud-controller-manager                   4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      62m     
            cloud-credential                           4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      62m     
            cluster-autoscaler                         4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      60m     
            config-operator                            4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      60m     
            console                                    4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      46m     
            control-plane-machine-set                  4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      58m     
            csi-snapshot-controller                    4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      60m     
            dns                                        4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      60m     
            etcd                                       4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      59m     
            image-registry                             4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      51m     
            ingress                                    4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      51m     
            insights                                   4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      60m     
            kube-apiserver                             4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      57m     
            kube-controller-manager                    4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      57m     
            kube-scheduler                             4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      56m     
            kube-storage-version-migrator              4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      60m     
            machine-api                                4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      55m     
            machine-approver                           4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      60m     
            machine-config                             4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      58m     
            marketplace                                4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      59m     
            monitoring                                 4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      47m     
            network                                    4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      62m     
            node-tuning                                4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      4m55s   
            olm                                        4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      49m     
            openshift-apiserver                        4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      51m     
            openshift-controller-manager               4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      56m     
            openshift-samples                          4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      51m     
            operator-lifecycle-manager                 4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      59m     
            operator-lifecycle-manager-catalog         4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      59m     
            operator-lifecycle-manager-packageserver   4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      51m     
            service-ca                                 4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      60m     
            storage                                    4.18.0-0.test-2025-02-07-201221-ci-ln-mrl0kkt-latest   True        False         False      60m     
             
            $ oc get all -n openshift-ovn-kubernetes
            Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
            NAME                                        READY   STATUS    RESTARTS      AGE
            pod/ovnkube-control-plane-fcc469c5d-4zksf   2/2     Running   0             61m
            pod/ovnkube-control-plane-fcc469c5d-vjq5t   2/2     Running   0             61m
            pod/ovnkube-node-4ft6n                      8/8     Running   1 (55m ago)   55m
            pod/ovnkube-node-8wvm7                      8/8     Running   0             61m
            pod/ovnkube-node-lfdfk                      8/8     Running   0             61m
            pod/ovnkube-node-mmjb6                      8/8     Running   0             4m44s
            pod/ovnkube-node-p9b7s                      8/8     Running   0             55m
            pod/ovnkube-node-vtd5c                      8/8     Running   0             4m33s
            pod/ovnkube-node-xdrtn                      8/8     Running   0             60m
            pod/ovnkube-node-zjrnv                      8/8     Running   1 (51m ago)   51m
             
            NAME                                   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
            service/ovn-kubernetes-control-plane   ClusterIP   None         <none>        9108/TCP            61m
            service/ovn-kubernetes-node            ClusterIP   None         <none>        9103/TCP,9105/TCP   61m
             
            NAME                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
            daemonset.apps/ovnkube-node   8         8         8       8            8           kubernetes.io/os=linux   61m
             
            NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
            deployment.apps/ovnkube-control-plane   2/2     2            2           61m
             
            NAME                                              DESIRED   CURRENT   READY   AGE
            replicaset.apps/ovnkube-control-plane-fcc469c5d   2         2         2       61m
             
            $ oc logs ovnkube-control-plane-fcc469c5d-4zksf -n openshift-ovn-kubernetes
            Defaulted container "kube-rbac-proxy" out of: kube-rbac-proxy, ovnkube-cluster-manager
            2025-02-07T20:41:57+00:00 INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes.
            2025-02-07T20:43:27+00:00 INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy
            W0207 20:43:27.956003       1 deprecated.go:66] 
            ==== Removed Flag Warning ======================
             
            logtostderr is removed in the k8s upstream and has no effect any more.
             
            ===============================================
                    
            I0207 20:43:27.956418       1 kube-rbac-proxy.go:233] Valid token audiences: 
            I0207 20:43:27.957596       1 kube-rbac-proxy.go:347] Reading certificate files
            I0207 20:43:27.957837       1 kube-rbac-proxy.go:395] Starting TCP socket on :9108
            I0207 20:43:27.958074       1 kube-rbac-proxy.go:402] Listening securely on :9108

