Type: Bug
Resolution: Unresolved
Priority: Normal
Impact: Quality / Stability / Reliability
Severity: Moderate
Description of problem:
After scaling up nodes on a vsphere-ipi-ovn-dualstack-primaryv6 cluster, the fileintegritynodestatuses become Failed on the old nodes.

Jul 2 05:26:09.039: INFO: The fileintegritynodestatuses for all nodes are:

$ oc get fileintegritynodestatuses
NAME                                                                         NODE                                        STATUS
fileintegrity-master-d2y19hbbgs-ci-op-db8yvd3k-5d26e-8sq4f-master-0          ci-op-db8yvd3k-5d26e-8sq4f-master-0         Failed
fileintegrity-master-d2y19hbbgs-ci-op-db8yvd3k-5d26e-8sq4f-master-1          ci-op-db8yvd3k-5d26e-8sq4f-master-1         Failed
fileintegrity-master-d2y19hbbgs-ci-op-db8yvd3k-5d26e-8sq4f-master-2          ci-op-db8yvd3k-5d26e-8sq4f-master-2         Failed
fileintegrity-master-d2y19hbbgs-ci-op-db8yvd3k-5d26e-8sq4f-worker-0-dlhlp    ci-op-db8yvd3k-5d26e-8sq4f-worker-0-dlhlp   Failed
fileintegrity-master-d2y19hbbgs-ci-op-db8yvd3k-5d26e-8sq4f-worker-0-fq87p    ci-op-db8yvd3k-5d26e-8sq4f-worker-0-fq87p   Failed
fileintegrity-master-d2y19hbbgs-ci-op-db8yvd3k-5d26e-8sq4f-worker-0-lzbdg    ci-op-db8yvd3k-5d26e-8sq4f-worker-0-lzbdg   Succeeded
fileintegrity-master-d2y19hbbgs-ci-op-db8yvd3k-5d26e-8sq4f-worker-0-z9pq2    ci-op-db8yvd3k-5d26e-8sq4f-worker-0-z9pq2   Failed

The AIDE log in the result ConfigMap of a failed node shows that /etc/coredns/Corefile and /etc/keepalived/keepalived.conf changed:

$ oc --kubeconfig=/tmp/kubeconfig-109021651 get cm aide-fileintegrity-master-d2y19hbbgs-ci-op-db8yvd3k-5d26e-8sq4f-master-1-failed -n openshift-file-integrity -o=jsonpath={.data}
{"integritylog":"Start timestamp: 2025-07-02 05:25:26 +0000 (AIDE 0.16)
AIDE found differences between database and filesystem!!

Summary:
  Total number of entries:   34867
  Added entries:             0
  Removed entries:           0
  Changed entries:           2

---------------------------------------------------
Changed entries:
---------------------------------------------------

f   ...    .C... : /hostroot/etc/coredns/Corefile
f   ...    .C... : /hostroot/etc/keepalived/keepalived.conf

---------------------------------------------------
Detailed information about changes:
---------------------------------------------------

File: /hostroot/etc/coredns/Corefile
  SHA512 : CJU1N+ifFhBaZg0y07mXA8o7JG4PW6uA | CfvBz+MpjRPL4rAVLc74EQcajSxfNIxj
           raYDprCJ/TBCNXSmkShrHro3iBw6/cI8 | Hrtw4be/aSNYHn9fklI7ddpXXr9IBrB3
           qJs1sYZNZlXdH8ISHzGPwQ==         | YMkxpQzMk7Roq7CFUTpWmA==

File: /hostroot/etc/keepalived/keepalived.conf
  SHA512 : 5RtLhWYKbMXKcJLUXwAkx+Obf7MerBRN | CQuYMYvbf9rb3zg6s7L4JMtwYnJSVqnw
           8vnB4FUXRTf4DX+2p0A8FnEsZ7c5Apt4 | xNcYNQpLprnarZLGU3HDcoMhozlr797W
           i00HSVVfB5aXBI8QKKrWVg==         | hwae3X2M+UDck3kGZtG/Pg=="}
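For diagnosis, one way to confirm directly on an affected node that those two files were rewritten around the time of the scale-up is an oc debug session. This is a hedged sketch and not part of the original CI log; the node name is taken from the output above:

# Check the modification times of the flagged files on one of the failed nodes
$ oc debug node/ci-op-db8yvd3k-5d26e-8sq4f-master-1 -- chroot /host \
    stat -c '%y %n' /etc/coredns/Corefile /etc/keepalived/keepalived.conf

If the modification timestamps fall after the AIDE database was initialized, the files really were rewritten on the old nodes during the scale-up (e.g. by the on-node keepalived/coredns configuration rendering), which is what AIDE then reports as Failed.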
Version-Release number of selected component (if applicable):
OCP 4.15 nightly build + File Integrity Operator 1.3.6
How reproducible:
Always
Steps to Reproduce:
1. Install the File Integrity Operator and create a FileIntegrity on a vsphere-ipi-ovn-dualstack-primaryv6 cluster (see the sketch after these steps)
2. Make sure all fileintegritynodestatuses are Succeeded
3. Scale up nodes
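For reference, a minimal sketch of steps 1-3, assuming the operator is already installed in openshift-file-integrity. The CR name, node selector, and MachineSet name are placeholders, not the values from the failing CI run:

# Step 1: create a FileIntegrity covering the worker nodes (placeholder name/selector)
$ cat <<EOF | oc apply -f -
apiVersion: fileintegrity.openshift.io/v1alpha1
kind: FileIntegrity
metadata:
  name: example-fileintegrity
  namespace: openshift-file-integrity
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  config:
    gracePeriod: 900
EOF

# Step 2: wait until every node reports Succeeded
$ oc get fileintegritynodestatuses -n openshift-file-integrity

# Step 3: scale up a worker MachineSet (substitute the real MachineSet name and replica count)
$ oc get machinesets -n openshift-machine-api
$ oc scale machineset <machineset-name> -n openshift-machine-api --replicas=<current+1>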
Actual results:
After scaling up nodes on a vsphere-ipi-ovn-dualstack-primaryv6 cluster, the fileintegritynodestatuses become Failed on the old nodes. See the description above for details.
Expected results:
The fileintegritynodestatuses for the old nodes should remain Succeeded.
Additional info:
No such issue occurs on a single-stack IPv4 cluster.