
OCPBUGS-6265: When installing SNO with bootstrap in place, it takes the CVO 6 minutes to acquire the leader lease


      Description of problem:

      When installing SNO with bootstrap in place, the CVO hangs for about 6 minutes waiting for the lease.

      Version-Release number of selected component (if applicable):

       

      How reproducible:

      100%

      Steps to Reproduce:

      1. Run the POC using the makefile here: https://github.com/eranco74/bootstrap-in-place-poc
      2. Observe the CVO logs post reboot
      

      Actual results:

      I0102 09:45:53.131061       1 leaderelection.go:248] attempting to acquire leader lease openshift-cluster-version/version...
      I0102 09:51:37.219685       1 leaderelection.go:258] successfully acquired lease openshift-cluster-version/version

      Expected results:

      Expected the bootstrap CVO to release the lease so that the CVO running post-reboot won't have to wait out the lease duration.

      Additional info:

      POC (hack) that removes the lease and allows the CVO to start immediately:
      https://github.com/openshift/installer/pull/6757/files#diff-f12fbadd10845e6dab2999e8a3828ba57176db10240695c62d8d177a077c7161R38-R48
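
      For illustration only, the effect of such a hack can be sketched with client-go: delete the stale "version" Lease (and the ConfigMap-based lock the CVO has also used; note the "configmap/version ... became leader" events later in this bug) at the end of bootstrap, so the post-reboot CVO can acquire leadership immediately. This is a minimal sketch under those assumptions, not the linked installer change; the kubeconfig path and error handling are illustrative.

          // Minimal sketch (not the linked installer PR): remove the stale CVO
          // leader-election locks left behind by the bootstrap CVO.
          package main

          import (
              "context"
              "log"

              apierrors "k8s.io/apimachinery/pkg/api/errors"
              metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
              "k8s.io/client-go/kubernetes"
              "k8s.io/client-go/tools/clientcmd"
          )

          func main() {
              // Assumption: an admin kubeconfig is available at this path on the node.
              cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.kubeconfig")
              if err != nil {
                  log.Fatal(err)
              }
              client := kubernetes.NewForConfigOrDie(cfg)

              ctx := context.Background()
              ns, name := "openshift-cluster-version", "version"

              // The Lease lock used by current CVO versions.
              if err := client.CoordinationV1().Leases(ns).Delete(ctx, name, metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
                  log.Fatalf("failed to delete lease: %v", err)
              }
              // The CVO has also used a ConfigMap-based lock; remove it too if present.
              if err := client.CoreV1().ConfigMaps(ns).Delete(ctx, name, metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
                  log.Fatalf("failed to delete configmap lock: %v", err)
              }
              log.Println("stale CVO leader-election locks removed")
          }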
        
      Slack thread:
      https://redhat-internal.slack.com/archives/C04HSKR4Y1X/p1673345953183709


            Errata Tool added a comment -

            Since the problem described in this issue should be resolved in a recent advisory, it has been closed.

            For information on the advisory (Important: OpenShift Container Platform 4.13.0 security update), and where to find the updated files, follow the link below.

            If the solution does not work for you, open a new bug report.
            https://access.redhat.com/errata/RHSA-2023:1326


            Yang Yang added a comment -

            ercohen Could you please help with this ^? Thank you!


            Alexander Chuzhoy added a comment -

            OCP version: 4.13.0-rc.3
            During the installation the CVO logs and events look good.


            W. Trevor King added a comment -

            The FeatureGate business should be getting sorted via OCPBUGS-3783 (4.14) and OCPBUGS-11435 (4.13).


            Eran Cohen added a comment -

            achuzhoy@redhat.com, I think we can close the current issue (the CVO lease during the installation).

            About the time gap when acquiring the lease during the cluster lifecycle, we see it 2/2 here.
            I looked at the must-gather logs of the SNO CI job and I didn't see this issue, but I suspect that's because the job completed before there was a CVO restart.

            omg logs -n openshift-cluster-version cluster-version-operator-7874f8b95-97jkq | grep "lease openshift-cluster-version"
            2023-04-17T05:36:20.529800826Z I0417 05:36:20.529772       1 leaderelection.go:248] attempting to acquire leader lease openshift-cluster-version/version...
            2023-04-17T05:36:20.542273260Z I0417 05:36:20.542257       1 leaderelection.go:258] successfully acquired lease openshift-cluster-version/version

            omg get pods -n openshift-cluster-version
            NAME                                      READY  STATUS   RESTARTS  AGE
            cluster-version-operator-7874f8b95-97jkq  1/1    Running  0         26m

            The live setup achuzhoy@redhat.com created has been up for some time, and I see that the CVO got restarted 6 times:

            openshift-cluster-version                          cluster-version-operator-86ddb9bc46-hz6p6                     1/1     Running     6 (147m ago)    167m 

            And here are the previous container's logs (they don't seem related to the lease issue):

            oc logs -n openshift-cluster-version cluster-version-operator-86ddb9bc46-hz6p6 --previous
            I0416 17:06:10.602268       1 start.go:23] ClusterVersionOperator 4.13.0-202303301516.p0.g7e34cd1.assembly.stream-7e34cd1
            W0416 17:06:11.698195       1 start.go:157] Failed to get FeatureGate from cluster: Get "https://api-int.qe4.kni.lab.eng.bos.redhat.com:6443/apis/config.openshift.io/v1/featuregates/cluster": dial tcp 10.19.134.16:6443: connect: connection refused
            W0416 17:06:14.770104       1 start.go:157] Failed to get FeatureGate from cluster: Get "https://api-int.qe4.kni.lab.eng.bos.redhat.com:6443/apis/config.openshift.io/v1/featuregates/cluster": dial tcp 10.19.134.16:6443: connect: connection refused
            W0416 17:06:17.842147       1 start.go:157] Failed to get FeatureGate from cluster: Get "https://api-int.qe4.kni.lab.eng.bos.redhat.com:6443/apis/config.openshift.io/v1/featuregates/cluster": dial tcp 10.19.134.16:6443: connect: connection refused
            W0416 17:06:20.914366       1 start.go:157] Failed to get FeatureGate from cluster: Get "https://api-int.qe4.kni.lab.eng.bos.redhat.com:6443/apis/config.openshift.io/v1/featuregates/cluster": dial tcp 10.19.134.16:6443: connect: connection refused
            W0416 17:06:23.986139       1 start.go:157] Failed to get FeatureGate from cluster: Get "https://api-int.qe4.kni.lab.eng.bos.redhat.com:6443/apis/config.openshift.io/v1/featuregates/cluster": dial tcp 10.19.134.16:6443: connect: connection refused
            W0416 17:06:27.058166       1 start.go:157] Failed to get FeatureGate from cluster: Get "https://api-int.qe4.kni.lab.eng.bos.redhat.com:6443/apis/config.openshift.io/v1/featuregates/cluster": dial tcp 10.19.134.16:6443: connect: connection refused
            W0416 17:06:30.130326       1 start.go:157] Failed to get FeatureGate from cluster: Get "https://api-int.qe4.kni.lab.eng.bos.redhat.com:6443/apis/config.openshift.io/v1/featuregates/cluster": dial tcp 10.19.134.16:6443: connect: connection refused
            W0416 17:06:33.202243       1 start.go:157] Failed to get FeatureGate from cluster: Get "https://api-int.qe4.kni.lab.eng.bos.redhat.com:6443/apis/config.openshift.io/v1/featuregates/cluster": dial tcp 10.19.134.16:6443: connect: connection refused
            W0416 17:06:35.604773       1 start.go:157] Failed to get FeatureGate from cluster: Get "https://api-int.qe4.kni.lab.eng.bos.redhat.com:6443/apis/config.openshift.io/v1/featuregates/cluster": context deadline exceeded
            F0416 17:06:35.604830       1 start.go:29] error: Get "https://api-int.qe4.kni.lab.eng.bos.redhat.com:6443/apis/config.openshift.io/v1/featuregates/cluster": context deadline exceeded
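
            Separately from the FeatureGate crash loop above, one way to check whether a previous CVO actually released the lease on shutdown (rather than just stopping renewals) is to read the Lease object directly. A minimal client-go sketch, assuming a reachable kubeconfig; it is not part of any must-gather tooling:

            // Print the current CVO lease holder and last renew time. An empty
            // holderIdentity right after a CVO pod exits means the lease was
            // released; a stale renewTime means the next CVO had to wait it out.
            package main

            import (
                "context"
                "fmt"
                "log"
                "os"

                metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
                "k8s.io/client-go/kubernetes"
                "k8s.io/client-go/tools/clientcmd"
            )

            func main() {
                // Assumption: KUBECONFIG points at an admin kubeconfig.
                cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
                if err != nil {
                    log.Fatal(err)
                }
                client := kubernetes.NewForConfigOrDie(cfg)

                lease, err := client.CoordinationV1().Leases("openshift-cluster-version").Get(context.Background(), "version", metav1.GetOptions{})
                if err != nil {
                    log.Fatal(err)
                }

                holder, renew := "", "unknown"
                if lease.Spec.HolderIdentity != nil {
                    holder = *lease.Spec.HolderIdentity
                }
                if lease.Spec.RenewTime != nil {
                    renew = lease.Spec.RenewTime.Time.String()
                }
                fmt.Printf("holder=%q renewTime=%s\n", holder, renew)
            }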

             


            We've certainly had some bugs back around 4.9 or so with CVO graceful shutdown, but I'm not aware of anything recent. If we have logs from the outgoing CVO to shed light on what went wrong, or a way to reproduce the issue, a new bug against the CVO to dig in makes sense. In the absence of the outgoing CVO's logs or a way to reproduce, it's probably not worth tracking "maybe we saw this happen once", unless someone has time to try to get the resources we'd need for further debugging.


            Eran Cohen added a comment -

            Another installation attempt shows this:

            [kni@r640-u01 ~]$ oc get events  -n openshift-cluster-version --sort-by='.metadata.creationTimestamp' | egrep "Created container|leader" | grep -v configma
            3h7m        Normal    LeaderElection      lease/version                                    api.qe4.kni.lab.eng.bos.redhat.com_074e6f9e-5267-48d8-adbe-ade61c934c75 became leader
            164m        Normal    LeaderElection      lease/version                                    api.qe4.kni.lab.eng.bos.redhat.com_b112a66e-92db-45cd-9f49-5fe52e0b2907 became leader
            164m        Normal    Created             pod/cluster-version-operator-54c56759c5-jgp84    Created container cluster-version-operator
            143m        Normal    Created             pod/cluster-version-operator-86ddb9bc46-hz6p6    Created container cluster-version-operator
            163m        Normal    LeaderElection      lease/version                                    api.qe4.kni.lab.eng.bos.redhat.com_955017fb-27d8-43d4-9385-7a972daae08a became leader
            150m        Normal    LeaderElection      lease/version                                    api.qe4.kni.lab.eng.bos.redhat.com_1808359a-281b-463c-b198-eb4dfffb2c5c became leader
            137m        Normal    LeaderElection      lease/version                                    api.qe4.kni.lab.eng.bos.redhat.com_e032e2a0-59a7-429a-98e6-99f4050e500b became leader
             

            During the installation, the CVO lease taken by the bootstrap CVO is released and the cluster CVO can take it immediately when it starts:

            164m        Normal    LeaderElection      lease/version                                    api.qe4.kni.lab.eng.bos.redhat.com_b112a66e-92db-45cd-9f49-5fe52e0b2907 became leader
            164m        Normal    Created             pod/cluster-version-operator-54c56759c5-jgp84    Created container cluster-version-operator 

            However, I think there is a bug in CVO shutdown that causes this gap later:

            143m        Normal    Created             pod/cluster-version-operator-86ddb9bc46-hz6p6    Created container cluster-version-operator
            137m        Normal    LeaderElection      lease/version                                    api.qe4.kni.lab.eng.bos.redhat.com_e032e2a0-59a7-429a-98e6-99f4050e500b became leader 

            And the CVO logs show it as well:

            [kni@r640-u01 ~]$ oc logs -n openshift-cluster-version cluster-version-operator-86ddb9bc46-hz6p6 | grep "lease openshift-cluster-version"
            I0416 17:07:18.659180       1 leaderelection.go:248] attempting to acquire leader lease openshift-cluster-version/version...
            I0416 17:12:37.494413       1 leaderelection.go:258] successfully acquired lease openshift-cluster-version/version 

            trking, my initial thought was that the issue (not releasing the lease upon shutdown) was limited to the bootstrap CVO, so we just mitigated it there to allow faster SNO installation. Now it seems that the cluster CVO isn't releasing the lease upon shutdown either (this happens long after the cluster is installed). I'm not sure how critical it is, but I think it deserves a bug. Thoughts?
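
            For context on the mechanism: with client-go leader election, a replacement leader can only take over quickly if the outgoing process releases the lock on shutdown (ReleaseOnCancel) and is given time to do so before it is killed; otherwise the next CVO has to wait for the old holder's lease to expire before it can acquire it, which is the wait being discussed in this bug. A minimal sketch of that pattern follows; it is not the CVO's actual code, and the durations and identity handling are assumptions:

            // Sketch of graceful lease release with client-go leader election.
            // Lock namespace/name match this bug; everything else is illustrative.
            package main

            import (
                "context"
                "log"
                "os"
                "os/signal"
                "syscall"
                "time"

                metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
                "k8s.io/client-go/kubernetes"
                "k8s.io/client-go/tools/clientcmd"
                "k8s.io/client-go/tools/leaderelection"
                "k8s.io/client-go/tools/leaderelection/resourcelock"
            )

            func main() {
                cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
                if err != nil {
                    log.Fatal(err)
                }
                client := kubernetes.NewForConfigOrDie(cfg)

                // Cancel the context on SIGTERM so the election loop gets a chance
                // to release the lease before the pod goes away.
                ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, os.Interrupt)
                defer stop()

                id, _ := os.Hostname()
                lock := &resourcelock.LeaseLock{
                    LeaseMeta:  metav1.ObjectMeta{Namespace: "openshift-cluster-version", Name: "version"},
                    Client:     client.CoordinationV1(),
                    LockConfig: resourcelock.ResourceLockConfig{Identity: id},
                }

                leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
                    Lock: lock,
                    // With ReleaseOnCancel the outgoing leader clears holderIdentity
                    // when ctx is cancelled, so the next candidate does not have to
                    // wait out LeaseDuration.
                    ReleaseOnCancel: true,
                    LeaseDuration:   2 * time.Minute, // assumptions, not the CVO's real settings
                    RenewDeadline:   90 * time.Second,
                    RetryPeriod:     30 * time.Second,
                    Callbacks: leaderelection.LeaderCallbacks{
                        OnStartedLeading: func(ctx context.Context) {
                            log.Println("acquired lease, starting work")
                            <-ctx.Done()
                        },
                        OnStoppedLeading: func() {
                            log.Println("released or lost lease, shutting down")
                        },
                    },
                })
            }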


            Eran Cohen added a comment -

            Unsure what this is about:
            44m Normal LeaderElection lease/version api.qe4.kni.lab.eng.bos.redhat.com_4734545c-289c-491a-973a-4569207d88e2 became leader
            44m Normal LeaderElection configmap/version api.qe4.kni.lab.eng.bos.redhat.com_4734545c-289c-491a-973a-4569207d88e2 became leader
            did you use a real BM host that takes a very long time to reboot?

            This looks good:
            22m Normal Created pod/cluster-version-operator-54c56759c5-s7729 Created container cluster-version-operator
            22m Normal LeaderElection configmap/version api.qe4.kni.lab.eng.bos.redhat.com_2fe2ae95-3f51-47de-b3ff-e92572804e6c became leader
             

            And this looks like a bug in the CVO; it shouldn't wait this long:

            16m         Normal    Created             pod/cluster-version-operator-86ddb9bc46-g4dk9    Created container cluster-version-operator
            11m         Normal    LeaderElection      configmap/version                                api.qe4.kni.lab.eng.bos.redhat.com_08ef0b88-2295-47fc-b3ea-3d1f851d0334 became leader 


            Alexander Chuzhoy added a comment - edited
            [kni@r640-u01 shared]$ oc get clusterversion
            NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
            version   4.13.0-rc.3   True        False         5m6s    Cluster version is 4.13.0-rc.3
            
            
            [kni@r640-u01 shared]$ oc get events  -n openshift-cluster-version --sort-by='.metadata.creationTimestamp' | egrep "Created|Deleted|leader" 
            44m         Normal    LeaderElection      lease/version                                    api.qe4.kni.lab.eng.bos.redhat.com_4734545c-289c-491a-973a-4569207d88e2 became leader
            44m         Normal    LeaderElection      configmap/version                                api.qe4.kni.lab.eng.bos.redhat.com_4734545c-289c-491a-973a-4569207d88e2 became leader
            25m         Normal    SuccessfulCreate    replicaset/cluster-version-operator-54c56759c5   Created pod: cluster-version-operator-54c56759c5-s7729
            22m         Normal    Created             pod/cluster-version-operator-54c56759c5-s7729    Created container cluster-version-operator
            22m         Normal    LeaderElection      configmap/version                                api.qe4.kni.lab.eng.bos.redhat.com_2fe2ae95-3f51-47de-b3ff-e92572804e6c became leader
            22m         Normal    LeaderElection      lease/version                                    api.qe4.kni.lab.eng.bos.redhat.com_2fe2ae95-3f51-47de-b3ff-e92572804e6c became leader
            21m         Normal    SuccessfulDelete    replicaset/cluster-version-operator-54c56759c5   Deleted pod: cluster-version-operator-54c56759c5-s7729
            21m         Normal    SuccessfulCreate    replicaset/cluster-version-operator-86ddb9bc46   Created pod: cluster-version-operator-86ddb9bc46-g4dk9
            21m         Normal    LeaderElection      configmap/version                                api.qe4.kni.lab.eng.bos.redhat.com_519a1543-68df-4217-b93e-fd23efcf4d7f became leader
            21m         Normal    LeaderElection      lease/version                                    api.qe4.kni.lab.eng.bos.redhat.com_519a1543-68df-4217-b93e-fd23efcf4d7f became leader
            16m         Normal    Created             pod/cluster-version-operator-86ddb9bc46-g4dk9    Created container cluster-version-operator
            11m         Normal    LeaderElection      configmap/version                                api.qe4.kni.lab.eng.bos.redhat.com_08ef0b88-2295-47fc-b3ea-3d1f851d0334 became leader
            11m         Normal    LeaderElection      lease/version                                    api.qe4.kni.lab.eng.bos.redhat.com_08ef0b88-2295-47fc-b3ea-3d1f851d0334 became leader
            
            


            Eran Cohen added a comment -

            Note that this issue reproduced on 4.12 as well, so I don't think it's related to RHCOS 9.2.

