- Type: Bug
- Resolution: Unresolved
- Priority: Major
- Affects Version: 4.18.z
The file-groupowner-ovs-conf-db rule is written to be aware of the underlying system architecture:
This is also apparent in the architecture-specific rules:
platform: ocp4-node and not_s390x_arch
As noted in the following file:
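The architecture gating lives in the rule definitions in the ComplianceAsCode content repository; a quick way to find every OpenShift rule gated on the s390x platform is a recursive grep (a sketch; the directory layout is my assumption, not something stated in this report):

$ git clone https://github.com/ComplianceAsCode/content && cd content
# List rule definitions that reference the not_s390x_arch platform
$ grep -rln 'not_s390x_arch' applications/openshift/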
However, while implementing ARM64 support for the Compliance Operator, I noticed that I couldn't get the rule to fail, even though the group owner of the Open vSwitch configuration files is different on ARM64 than it is on x86.
$ oc get ccr ocp4-cis-node-master-file-groupowner-ovs-conf-db -o yaml
apiVersion: compliance.openshift.io/v1alpha1
description: |-
  Verify Group Who Owns The Open vSwitch Configuration Database

  Check if the group owner of /etc/openvswitch/conf.db is hugetlbfs on
  architectures other than s390x or openvswitch on s390x.
id: xccdf_org.ssgproject.content_rule_file_groupowner_ovs_conf_db
instructions: |-
  To check the group ownership of /etc/openvswitch/conf.db,
  you'll need to log into a node in the cluster.
  As a user with administrator privileges, log into a node in the relevant pool:

  $ oc debug node/$NODE_NAME

  At the sh-4.4# prompt, run:

  # chroot /host

  Then, run the command:

  $ ls -lL /etc/openvswitch/conf.db

  If properly configured, the output should indicate the following group-owner:
  hugetlbfs on architectures other than s390x. On s390x, the group-owner should
  be openvswitch.
  Is it the case that <code>/etc/openvswitch/conf.db</code> does not have a group
  owner of <code>hugetlbfs</code> on architectures other than s390x or
  <code>openvswitch</code> on s390x.?
kind: ComplianceCheckResult
metadata:
  annotations:
    compliance.openshift.io/last-scanned-timestamp: "2025-03-12T20:19:20Z"
    compliance.openshift.io/rule: file-groupowner-ovs-conf-db
  creationTimestamp: "2025-03-12T20:20:29Z"
  generation: 1
  labels:
    compliance.openshift.io/check-severity: medium
    compliance.openshift.io/check-status: PASS
    compliance.openshift.io/profile-guid: fea955f1-9f13-56fd-aacf-868b95b7283f
    compliance.openshift.io/scan-name: ocp4-cis-node-master
    compliance.openshift.io/suite: cis
  name: ocp4-cis-node-master-file-groupowner-ovs-conf-db
  namespace: openshift-compliance
  ownerReferences:
  - apiVersion: compliance.openshift.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ComplianceScan
    name: ocp4-cis-node-master
    uid: 212e2d8e-9d0f-4d12-b503-2c6b74f3abaf
  resourceVersion: "36363"
  uid: c731cfe5-16e7-4baa-800d-7f0485ad8ee4
rationale: CNI (Container Network Interface) files consist of a specification and
  libraries for writing plugins to configure network interfaces in Linux containers,
  along with a number of supported plugins. Allowing writeable access to the files
  could allow an attacker to modify the networking configuration potentially adding
  a rogue network connection.
severity: medium
status: PASS
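Since the result above carries scan and status labels, a label selector makes it easy to check whether anything in the scan failed at all (the selector form is mine; the label keys come straight from the output above):

# Show only failing results from the ocp4-cis-node-master scan
$ oc get ccr -n openshift-compliance -l compliance.openshift.io/scan-name=ocp4-cis-node-master,compliance.openshift.io/check-status=FAIL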
When I ran the steps manually on the host, I could see that the group owner is actually openvswitch:
$ oc debug node/ip-10-0-112-168.us-east-2.compute.internal
Starting pod/ip-10-0-112-168us-east-2computeinternal-debug-x65cr ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.112.168
If you don't see a command prompt, try pressing enter.
sh-5.1# chroot /host
sh-5.1# ls -la /etc/openvswitch/conf.db
-rw-r-----. 1 openvswitch openvswitch 290853 Mar 12 20:41 /etc/openvswitch/conf.db
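To tie the observed group owner to the node's architecture, the two can be read together; this is a sketch of my own (the stat format string and the custom-columns path are assumptions, not commands from the report):

# Show each node's CPU architecture
$ oc get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture
# Print just the group owner of conf.db on a given node
$ oc debug node/$NODE_NAME -- chroot /host stat -c '%G' /etc/openvswitch/conf.db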
I think the following rules are affected, based on how they're written (a spot-check for them follows the list):
- file_groupowner_ovs_conf_db
- file_groupowner_ovs_conf_db_lock
- file_groupowner_ovs_sys_id_conf
- file_permissions_cni_conf
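For the first three rules, the corresponding files can be checked in one pass from the chroot on the node; a sketch, where the lock-file and system-id paths are my assumptions about which files those rules target:

# After `chroot /host`: print the group owner of each Open vSwitch file
$ for f in /etc/openvswitch/conf.db /etc/openvswitch/.conf.db.~lock~ /etc/openvswitch/system-id.conf; do stat -c '%n %G' "$f"; done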
- blocks: CMP-3209 Support CIS profiles on ARM64 (Closed)
- links to: RHBA-2025:3728 OpenShift Compliance Operator 1.7.0