- Bug
- Resolution: Done
- Major
- Logging 5.6.0, Logging 5.5.z
- False
- None
- False
- NEW
- VERIFIED
- Before this update, the Operator could enter a loop of removing and recreating the collector daemonset while the Elasticsearch or Kibana deployments changed their status. With this update, a fix in the status handling of the Operator resolves the issue (see the sketch after this field list).
- Log Collection - Sprint 226
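The release note above attributes the loop to status changes on the Elasticsearch and Kibana deployments. As an illustration only, not the operator's actual fix, the following sketch shows how a controller-runtime reconciler can ignore status-only update events with GenerationChangedPredicate (status updates do not bump metadata.generation); the package, reconciler type, and watched resource are hypothetical stand-ins.
~~~
package controllers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/builder"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
)

// ClusterLoggingReconciler is a hypothetical reconciler used only for this sketch.
type ClusterLoggingReconciler struct{}

func (r *ClusterLoggingReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Collector reconciliation logic would go here; omitted in this sketch.
	return ctrl.Result{}, nil
}

func (r *ClusterLoggingReconciler) SetupWithManager(mgr ctrl.Manager) error {
	// In the real operator the primary resource is the ClusterLogging custom
	// resource; a Deployment stands in here to keep the sketch self-contained.
	return ctrl.NewControllerManagedBy(mgr).
		// Watch Deployments (e.g. Elasticsearch, Kibana), but drop update events
		// where only .status changed, so status flapping cannot retrigger
		// teardown and recreation of the collector daemonset.
		For(&appsv1.Deployment{}, builder.WithPredicates(predicate.GenerationChangedPredicate{})).
		Complete(r)
}
~~~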
After upgrading Red Hat OpenShift Logging from stable-5.4 to stable-5.5, the resources associated with the log collector keep getting recreated.
The following resources are being recreated:
1. Collector secrets
2. Collector pods
3. Collector daemonset
State of pods during this issue:
Terminating -> Pending -> Running -> Terminating
The following logs are observed in the cluster-logging-operator pod:
~~~
{"_ts":"2022-09-08T14:06:43.247384021Z","_level":"0","_component":"cluster-logging-operator","_message":"clusterRequest.reconcileCollectorDaemonset","_error":{"msg":"daemonsets.apps \"collector\" not found"}}
{"_ts":"2022-09-08T14:06:43.247455916Z","_level":"0","_component":"cluster-logging-operator","_message":"Unable to reconcile collection for \"instance\": daemonsets.apps \"collector\" not found","_error":{"msg":"daemonsets.apps \"collector\" not found"}}
{"_ts":"2022-09-08T14:06:43.247501321Z","_level":"0","_component":"cluster-logging-operator","_message":"clusterlogforwarder-controller returning, error","_error":{"msg":"Unable to reconcile collection for \"instance\": daemonsets.apps \"collector\" not found"}}
{"_ts":"2022-09-08T14:08:34.585998336Z","_level":"0","_component":"cluster-logging-operator","_message":"Could not find Secret","Name":"logcollector-token","_error":{"msg":"Secret \"logcollector-token\" not found"}}
~~~
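The errors above show the reconcile failing outright while the collector daemonset is temporarily absent. As a hedged illustration only (hypothetical function and variable names, not the operator's actual code), the sketch below shows an idempotent way to handle that case with the controller-runtime client: create the daemonset when Get reports NotFound, and update it in place otherwise, instead of propagating the error and driving another remove/recreate cycle.
~~~
package controllers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// reconcileCollectorDaemonSet ensures the desired collector daemonset exists,
// tolerating the "collector" daemonset not-found case seen in the logs above.
func reconcileCollectorDaemonSet(ctx context.Context, c client.Client, desired *appsv1.DaemonSet) error {
	current := &appsv1.DaemonSet{}
	key := types.NamespacedName{Namespace: desired.Namespace, Name: desired.Name}

	err := c.Get(ctx, key, current)
	if apierrors.IsNotFound(err) {
		// The daemonset is missing (or was just removed): create it rather than
		// returning a "not found" error up the reconcile loop.
		return c.Create(ctx, desired)
	}
	if err != nil {
		return err
	}

	// The daemonset exists: update it in place instead of deleting and
	// recreating it, so the collector pods are not cycled unnecessarily.
	current.Spec = desired.Spec
	return c.Update(ctx, current)
}
~~~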
- clones LOG-3049 [release-5.5] Resources associated with collector / fluentd keep on getting recreated (Closed)
- links to
- mentioned on