Type: Bug
Resolution: Unresolved
Priority: Major
Affects Version: file-integrity-operator-1.3.4
Severity: Moderate
Issue:
After a fresh installation of FIO, all aide-fileintegrity-xxxx pods go into the Terminating state. The pods are recreated, but they repeatedly return to the Terminating state and emit the error logs below:
2025-09-25T17:07:46Z: Starting the AIDE runner daemon
W0925 17:07:46.739496 1 client_config.go:659] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2025-09-25T17:07:46Z: running aide check
E0925 17:07:49.678727 1 retrywatcher.go:129] "Watch failed" err="context canceled"
2025-09-25T17:07:49Z: aide check returned status 22
E0925 17:07:50.678875 1 retrywatcher.go:129] "Watch failed" err="context canceled"
E0925 17:07:51.679978 1 retrywatcher.go:129] "Watch failed" err="context canceled"
E0925 17:07:52.680950 1 retrywatcher.go:129] "Watch failed" err="context canceled"
E0925 17:07:53.681912 1 retrywatcher.go:129] "Watch failed" err="context canceled"
E0925 17:07:54.682080 1 retrywatcher.go:129] "Watch failed" err="context canceled"
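For reference, a minimal sketch of how the pod state and the logs above can be observed; the openshift-file-integrity namespace and the aide-fileintegrity DaemonSet name are assumptions based on the pod prefix mentioned above:
oc get pods -n openshift-file-integrity                      # pods cycle through Terminating (namespace assumed)
oc logs -n openshift-file-integrity daemonset/aide-fileintegrity   # daemon logs as shown above (resource name assumed)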
Workaround:
Once we re-initialize the AIDE database, the pods start in the Running state without any issue:
oc annotate fileintegrities/fileintegrity file-integrity.openshift.io/re-init=
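A minimal sketch of applying and verifying the workaround, assuming the default openshift-file-integrity namespace and a FileIntegrity named fileintegrity:
oc annotate fileintegrities/fileintegrity file-integrity.openshift.io/re-init= -n openshift-file-integrity   # same annotation as above, namespace assumed
oc get pods -n openshift-file-integrity -w                   # aide-fileintegrity-xxxx pods should settle in Running
oc get fileintegrities/fileintegrity -n openshift-file-integrity   # status phase should report Active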
Steps to Reproduce:
- Install the operator from OperatorHub.
- Create a ConfigMap (oc create cm custom-aide-conf --from-file=aide.conf).
- Create a FileIntegrity resource (see the sketch after this list).
- All DaemonSet pods repeatedly go into the Terminating state instead of staying in Running.
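A minimal sketch of the FileIntegrity resource from step 3, assuming the default openshift-file-integrity namespace and the custom-aide-conf ConfigMap from step 2; everything except the API group and kind is an assumption:
oc apply -f - <<'EOF'
apiVersion: fileintegrity.openshift.io/v1alpha1
kind: FileIntegrity
metadata:
  name: fileintegrity                      # matches the aide-fileintegrity-xxxx pod prefix above
  namespace: openshift-file-integrity      # namespace assumed
spec:
  config:
    name: custom-aide-conf                 # ConfigMap created in step 2
    namespace: openshift-file-integrity
    key: aide.conf                         # key inside the ConfigMap
EOF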