- Bug
- Resolution: Done
- Normal
- Affects: 4.14.z, 4.15.z
- Quality / Stability / Reliability
- Important
Description of problem:
This is similar to OCPBUGS-8502 and OCPBUGS-17129.
On a vSphere IPI cluster, adding a new Node via a MachineSet causes the new Node to fail the File Integrity Check. Initially, both "keepalived.conf" and the "Corefile" showed as "Failed" on the added Node only; after a few minutes, the regular checks on the other Nodes failed as well.
The IPs of the newly added Node are appended to the files mentioned above, which is why the File Integrity Check fails.
The issue was also discussed in a call with the customer and in Slack: https://redhat-internal.slack.com/archives/CHCRR73PF/p1730723601095989
Version-Release number of selected component (if applicable):
- File Integrity Operator v1.3.4
- OpenShift Container Platform 4.15.37
How reproducible:
Always; wenshen@redhat.com was also able to reproduce it internally.
Steps to Reproduce:
1. Set up an OpenShift Container Platform cluster on vSphere using IPI installation
2. Install the File Integrity Operator
3. Confirm that all checks succeed
4. Using MachineSets, add another Node to the cluster
5. Observe the check results
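Steps 3 to 5 above can be sketched with `oc` commands; the MachineSet name and the replica count below are placeholders, not values from this cluster:

```shell
# Step 3: confirm that every Node currently reports a successful check
# (the File Integrity Operator publishes one FileIntegrityNodeStatus per Node)
oc get fileintegritynodestatuses -n openshift-file-integrity

# Step 4: add a Node by scaling an existing worker MachineSet up by one.
# "worker-machineset" and "--replicas=4" are placeholders for the actual values.
oc scale machineset worker-machineset -n openshift-machine-api --replicas=4

# Step 5: once the new Node has joined, watch the check results change to Failed
oc get fileintegritynodestatuses -n openshift-file-integrity -w
```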
Actual results:
"keepalived.conf" and the "Corefile" show as "Failed" on the added Node only. After a few minutes (once the checks have run again), all Nodes show as "Failed" for the same files.
Expected results:
Adding new Nodes to the cluster does not cause File Integrity Check failures for "Corefile" and "keepalived.conf".
Additional info:
- This was also demonstrated by the customer in a call on November 25th, 14:00 CET
- Slack Discussion: https://redhat-internal.slack.com/archives/CHCRR73PF/p1730723601095989
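- For triage: per the File Integrity Operator documentation, an expected change can be acknowledged by re-initializing the AIDE database via the re-init annotation. This clears the failures but does not address the root cause reported here; "example-fileintegrity" is a placeholder for the actual FileIntegrity object name:

```shell
# Re-initialize the AIDE database so the current file contents become the new baseline.
# "example-fileintegrity" is a placeholder for the actual FileIntegrity object name.
oc annotate fileintegrities/example-fileintegrity -n openshift-file-integrity \
  file-integrity.openshift.io/re-init=
```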