Red Hat Advanced Cluster Management / ACM-23764

Sometimes it takes 5 minutes to receive the managed cluster information


    • Quality / Stability / Reliability
    • GH Train-31, GH Train-32
    • Important

      Description of problem:

      The global hub manager sometimes needs to wait 5 minutes before it receives the managed cluster information sent by a managed hub.

      Manager's logs:

      2025-09-02T08:10:18.705Z	INFO	consumer/generic_consumer.go:83	transport consumer with cloudevents-kafka receiver
      2025-09-02T08:10:18.709Z	INFO	controller/controller.go:221	start consumer: global_hub
      2025-09-02T08:10:18.710Z	INFO	consumer/generic_consumer.go:151	init consumer	{"offsets": []}
      2025-09-02T08:15:18.601Z	INFO	local-compliance-history	task/local_compliance_history.go:36	start running	{"date": "2025-09-02", "currentRun": "2025-09-02 08:15:18"}
      2025-09-02T08:15:18.601Z	INFO	local-compliance-history	task/local_compliance_history.go:63	The number of compliance need to be synchronized	{"date": "2025-09-02", "count": 0}
      2025-09-02T08:15:18.601Z	INFO	local-compliance-history	task/local_compliance_history.go:73	The number of compliance has been synchronized	{"date": "2025-09-02", "insertedCount": 0}
      2025-09-02T08:15:18.601Z	INFO	local-compliance-history	task/local_compliance_history.go:53	finish running	{"date": "2025-09-02", "nextRun": "2025-09-02 08:16:18"}
      2025-09-02T08:15:18.718Z	DEBUG	statistics/statistics.go:151	{CU=0, CUQueue=0, idleDBW=10, success=0, fail=0, CU Avg=0 ms, DB Avg=0 ms}
      [io.open-cluster-management.operator.multiclusterglobalhubs.managedhub.info(0) | conflation(successes=0, avg=0 ms, max=0 ms           ) | storage(successes=0, avg=0 ms, max=0 ms           )]
      [io.open-cluster-management.operator.multiclusterglobalhubs.managedcluster(0) | conflation(successes=0, avg=0 ms, max=0 ms           ) | storage(successes=0, avg=0 ms, max=0 ms           )]
      [io.open-cluster-management.operator.multiclusterglobalhubs.event.managedcluster(0) | conflation(successes=0, avg=0 ms, max=0 ms           ) | storage(successes=0, avg=0 ms, max=0 ms           )]
      [io.open-cluster-management.operator.multiclusterglobalhubs.managedclustermigration(0) | conflation(successes=0, avg=0 ms, max=0 ms           ) | storage(successes=0, avg=0 ms, max=0 ms           )]
      [io.open-cluster-management.operator.multiclusterglobalhubs.policy.localspec(0) | conflation(successes=0, avg=0 ms, max=0 ms           ) | storage(successes=0, avg=0 ms, max=0 ms           )]
      [io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcompliance(0) | conflation(successes=0, avg=0 ms, max=0 ms           ) | storage(successes=0, avg=0 ms, max=0 ms           )]
      [io.open-cluster-management.operator.multiclusterglobalhubs.event.localreplicatedpolicy(0) | conflation(successes=0, avg=0 ms, max=0 ms           ) | storage(successes=0, avg=0 ms, max=0 ms           )]
      [io.open-cluster-management.operator.multiclusterglobalhubs.placementrule.localspec(0) | conflation(successes=0, avg=0 ms, max=0 ms           ) | storage(successes=0, avg=0 ms, max=0 ms           )]
      [io.open-cluster-management.operator.multiclusterglobalhubs.managedhub.heartbeat(0) | conflation(successes=0, avg=0 ms, max=0 ms           ) | storage(successes=0, avg=0 ms, max=0 ms           )]
      [io.open-cluster-management.operator.multiclusterglobalhubs.policy.localcompletecompliance(0) | conflation(successes=0, avg=0 ms, max=0 ms           ) | storage(successes=0, avg=0 ms, max=0 ms           )]
      [io.open-cluster-management.operator.multiclusterglobalhubs.event.localrootpolicy(0) | conflation(successes=0, avg=0 ms, max=0 ms           ) | storage(successes=0, avg=0 ms, max=0 ms           )]
      [io.open-cluster-management.operator.multiclusterglobalhubs.security.alertcounts(0) | conflation(successes=0, avg=0 ms, max=0 ms           ) | storage(successes=0, avg=0 ms, max=0 ms           )]
      
      2025-09-02T08:15:20.029Z	DEBUG	consumer/generic_consumer.go:159	received message	{"event.Source": "hub2", "event.Type": "managedhub.heartbeat"}
      2025-09-02T08:15:20.029Z	DEBUG	consumer/generic_consumer.go:159	received message	{"event.Source": "hub2", "event.Type": "event.managedcluster"}
      2025-09-02T08:15:20.029Z	DEBUG	consumer/generic_consumer.go:159	received message	{"event.Source": "hub2", "event.Type": "managedclustermigration"}
      2025-09-02T08:15:20.029Z	DEBUG	consumer/generic_consumer.go:159	received message	{"event.Source": "hub2", "event.Type": "managedhub.heartbeat"} 
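
      The consumer is initialized at 08:10:18 with no offsets, yet the first messages are only received at 08:15:20, so the manager apparently sat for about five minutes before anything was delivered. Below is a minimal diagnostic sketch, assuming a librdkafka-based consumer (the rdkafka-* client IDs in the Kafka logs point to librdkafka via confluent-kafka-go) and placeholder broker, topic and group names, that logs the exact moment partitions are assigned, so the gap can be attributed either to a late assignment or to late delivery after assignment:

      // Minimal diagnostic sketch, not the manager's code: a librdkafka-based consumer that
      // logs when the group coordinator assigns partitions and when records start arriving.
      // "localhost:9092", "status-topic" and "lag-debug" are placeholders.
      package main

      import (
          "log"

          "github.com/confluentinc/confluent-kafka-go/v2/kafka"
      )

      func main() {
          c, err := kafka.NewConsumer(&kafka.ConfigMap{
              "bootstrap.servers": "localhost:9092", // placeholder broker address
              "group.id":          "lag-debug",      // scratch group, so the real global_hub group is untouched
              "auto.offset.reset": "earliest",
          })
          if err != nil {
              log.Fatal(err)
          }
          defer c.Close()

          // The rebalance callback fires when the coordinator hands partitions to this member.
          rebalanceCb := func(consumer *kafka.Consumer, ev kafka.Event) error {
              switch e := ev.(type) {
              case kafka.AssignedPartitions:
                  log.Printf("partitions assigned: %v", e.Partitions)
              case kafka.RevokedPartitions:
                  log.Printf("partitions revoked: %v", e.Partitions)
              }
              return nil
          }

          if err := c.SubscribeTopics([]string{"status-topic"}, rebalanceCb); err != nil {
              log.Fatal(err)
          }

          for {
              msg, err := c.ReadMessage(-1) // block until a record arrives
              if err != nil {
                  log.Printf("read error: %v", err)
                  continue
              }
              log.Printf("offset %v received, key=%s", msg.TopicPartition.Offset, msg.Key)
          }
      }

      Running this against the same topic while the problem reproduces would show whether records are immediately consumable from a fresh group, which would place the five-minute delay on the manager's consumer side rather than on the broker.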

      Topic information:

      ce_id:1083fd40-8fe2-4e1f-9500-6a12080dcac9,ce_source:hub1,ce_specversion:1.0,ce_type:io.open-cluster-management.operator.multiclusterglobalhubs.managedcluster,content-type:application/json,ce_time:2025-09-02T08:10:27.281152858Z,ce_extversion:0.1	{"update":[{"kind":"ManagedCluster","apiVersion":"cluster.open-cluster-management.io/v1","metadata":{"name":"hub1-cluster1","uid":"fd400c49-9ab6-467d-9c7f-540279d23b8d","resourceVersion":"1953","generation":4,"creationTimestamp":"2025-09-02T08:05:52Z","labels":{"cluster.open-cluster-management.io/clusterset":"default","vendor":"OpenShift"},"annotations":{"global-hub.open-cluster-management.io/managed-by":"hub1"}},"spec":{"managedClusterClientConfigs":[{"url":"https://hub1-cluster1-control-plane:6443","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJTmNyaW9QSGtBTVF3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBNU1ESXdPREF3TURoYUZ3MHpOVEE0TXpFd09EQTFNRGhhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURwV01RdW5rVlNNM1pUS2crTENYM2FubTFGdzQ3QTN0clQ2ZWZvOHpXb0hGU1h4dVE5M01SUXNwN0gKWGhQM3pENFo3S2NqYTh3MHd1T0NHUXlaMmVUcS9xY1RpaGxWK2xTb0dQc0JCQUxOY0hyL05QSDJCRS9lTUlGZAp0MENFSng1a3YvVElIYmlTMTZyb0FnMmVDUW41TnhjaXJrMTJhL0VIV0QxTitLb2hqSkxvOS9qN3l3czZxdmd1Cnoyb1VZYlN6Zzl3a3dKLzlTWUpHdkVkMjFHRWJWcDM2aTBtcHh1ekZqakI0N0dHRVIxakZJckRRLyt3UWl1RTcKRk5FcDhqWnVpR3dIcEdYWGFxanRFSDVpQUlQc2g3S09hbUxvQ3Z2d1c1Y2J4Z1V0NXc5MUdSb1BrNS9jU0FHKwplOVhLN2JtTmljUmVBV3VPWlFrZ2dTM1M3UzVOQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTUGJMd1JiMHc5VEcwWVArMmVXcmtsbzArV2dqQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ2FXdjdiL3JDSQo4Yk9tSlBSRGUrUnpzY3hWWUNVbGtuam5jQ3Q2NDM1ajNxWm1jaWxiTE5ZL1lmTUl1eHZWaUFqc0ZkNUdXdXptCjg1U21KNTZoQnRuRk1lTnFicWt6anBtWlRUYndDVWhjNUpXTEZtYnM3UFBUcHpsdzMwNzFSQ3BIN3lzTGUzRVQKa29UMGoxUjVGM1NUVWhQQTNOdjdmb25wZlV6NWVublJTUWdOY2JXZ2x2bTAyNm5SNnB3TE9DQXFrR0Q5aWxGOQozVnZzR0NRUmZsYlIyQ2MrVnFwR2dtU3RTREFjY0JvdzJtSGNMM3lRZjZmQjQ2eFArYnBURDB4T1JPdUlMY2dZClkxWE1DRU1OcUtNUHFyR1k3eVlMUFlKNG5WUFpKblFENHpsbm9RbitYRDRMZDBQUHNhOUlmLzdNbjZ4QTJDUDEKUG1uL1I2N0xmN2g2Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"}],"hubAcceptsClient":true,"leaseDurationSeconds":60},"status":{"conditions":[{"type":"HubAcceptedManagedCluster","status":"True","lastTransitionTime":"2025-09-02T08:05:59Z","reason":"HubClusterAdminAccepted","message":"Accepted by hub cluster admin"},{"type":"ManagedClusterJoined","status":"True","lastTransitionTime":"2025-09-02T08:05:59Z","reason":"ManagedClusterJoined","message":"Managed cluster joined"},{"type":"ManagedClusterConditionAvailable","status":"True","lastTransitionTime":"2025-09-02T08:05:59Z","reason":"ManagedClusterAvailable","message":"Managed cluster is available"},{"type":"ManagedClusterConditionClockSynced","status":"True","lastTransitionTime":"2025-09-02T08:05:59Z","reason":"ManagedClusterClockSynced","message":"The clock of the managed cluster is synced with the hub."}],"capacity":{"cpu":"64","ephemeral-storage":"313685996Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"65306492Ki","pods":"110"},"allocatable":{"cpu":"64","ephemeral-storage":"313685996Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"65306492Ki","pods":"110"},"version":{"kubernetes":"v1.30.0"},"clusterClaims":[{"name":"id.k8s.io","value":"7df54fe0-bbdd-46d8-aadb-880eadad1169"}]}}]} 
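
      The ce_time header above shows the event was produced at 2025-09-02T08:10:27, roughly five minutes before the manager first reports received messages. Below is a minimal helper sketch, assuming the same confluent-kafka-go client, that reads the ce_time header of each consumed record and logs how long the event sat in the topic; it could be called from the read loop of the previous sketch:

      // Minimal helper sketch (not part of the product code): quantify how long a CloudEvents
      // record sat in the topic by comparing its ce_time header with the consumption time.
      package lagcheck

      import (
          "log"
          "time"

          "github.com/confluentinc/confluent-kafka-go/v2/kafka"
      )

      // LogConsumeLag prints the delay between the producer-side ce_time and now.
      func LogConsumeLag(msg *kafka.Message) {
          for _, h := range msg.Headers {
              if h.Key != "ce_time" {
                  continue
              }
              // ce_time is RFC 3339 with nanoseconds, e.g. 2025-09-02T08:10:27.281152858Z.
              produced, err := time.Parse(time.RFC3339Nano, string(h.Value))
              if err != nil {
                  log.Printf("cannot parse ce_time %q: %v", h.Value, err)
                  return
              }
              log.Printf("%s consumed %s after it was produced",
                  msg.TopicPartition, time.Since(produced).Round(time.Second))
          }
      }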

      Kafka's logs:

      2025-09-02 08:10:25,248 INFO [GroupCoordinator 0]: Assignment received from leader rdkafka-4ad3292d-65cf-496e-9bfc-6f5d3941f058 for group hub2 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) [data-plane-kafka-request-handler-6]
      2025-09-02 08:15:19,021 INFO [GroupCoordinator 0]: Preparing to rebalance group global_hub in state PreparingRebalance with old generation 1 (__consumer_offsets-3) (reason: Leader rdkafka-70b1c4a2-6447-408d-a96f-661af21743b8 re-joining group during Stable; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) [data-plane-kafka-request-handler-7] 

      From the logs above, the message was already in the Kafka topic at 2025-09-02 08:10 (ce_time 08:10:27), but the manager's consumer only handled it at 2025-09-02 08:15, right after the global_hub consumer group rebalanced at 08:15:19.
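
      The five-minute gap also matches librdkafka's 300000 ms default for topic.metadata.refresh.interval.ms, so one possible explanation is that the topic the manager subscribes to did not yet exist when the consumer started and was only discovered at the next metadata refresh, which then triggered the re-join seen in the Kafka log. Below is a minimal sketch of consumer settings that could shorten the delay if that is the cause; this is an assumption, not the shipped configuration, and the broker address is a placeholder:

      // Minimal sketch of consumer settings, an assumption rather than the shipped
      // configuration: refresh topic metadata more often than the 300000 ms librdkafka
      // default and use incremental rebalancing, so a newly created topic is picked up
      // and assigned without a five-minute wait.
      package main

      import (
          "log"

          "github.com/confluentinc/confluent-kafka-go/v2/kafka"
      )

      func main() {
          c, err := kafka.NewConsumer(&kafka.ConfigMap{
              "bootstrap.servers": "localhost:9092", // placeholder
              "group.id":          "global_hub",     // group name seen in the manager logs
              // Default is 300000 ms; a lower value lets the consumer notice new topics sooner.
              "topic.metadata.refresh.interval.ms": 30000,
              // Cooperative rebalancing avoids stop-the-world reassignment when members re-join.
              "partition.assignment.strategy": "cooperative-sticky",
          })
          if err != nil {
              log.Fatal(err)
          }
          defer c.Close()
          log.Println("consumer created; subscribe and poll as usual from here")
      }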

      Version-Release number of selected component (if applicable):

      How reproducible:

      Steps to Reproduce:

      1.  
      2.  
      3. ...

      Actual results:

      Expected results:

      Additional info:

              clyang82 Chunlin Yang