- Bug
- Resolution: Done
- Major
- None
- 4.13.0
- None
- Moderate
- No
- False
Description of problem:
Rule rhcos4-ensure-logrotate-activated fails after auto-remediation is applied on RHEL 9
Version-Release number of selected component (if applicable):
openshift-compliance-operator-bundle-container-0.1.61-7
How reproducible:
Always
Steps to Reproduce:
1. Install the Compliance Operator.
2. Label a node and create a custom MachineConfigPool:
$ oc label node xx.compute.internal node-role.kubernetes.io/wscan=
$ oc create -f - <<EOF
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: wscan
  labels:
    pools.operator.machineconfiguration.openshift.io/wscan: ""
spec:
  machineConfigSelector:
    matchExpressions:
    - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,wscan]}
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/wscan: ""
EOF
3. Create a ScanSetting:
$ oc apply -f - <<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: test
  namespace: openshift-compliance
rawResultStorage:
  nodeSelector:
    node-role.kubernetes.io/master: ""
  pvAccessModes:
  - ReadWriteOnce
  rotation: 3
  size: 1Gi
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
roles:
- wscan
scanTolerations:
- operator: Exists
schedule: "0 1 * * *"
showNotApplicable: false
strictNodeScan: true
scanLimits:
  cpu: 150m
  memory: 512Mi
debug: true
autoApplyRemediations: true
autoUpdateRemediations: true
EOF
4. Create a ScanSettingBinding:
$ oc compliance bind -N test -S test profile/rhcos4-high
5. Check that the remediations are applied and that a reboot of the nodes in the wscan MCP is triggered.
6. Re-run the ScanSettingBinding until all ComplianceRemediations are applied.
7. Re-run the ScanSettingBinding one more time, then check whether any rules still FAIL after the auto-remediations were applied.
Actual results:
Rule rhcos4-ensure-logrotate-activated fails after auto-remediation is applied on RHEL 9.

$ oc get ccr -n openshift-compliance -l compliance.openshift.io/automated-remediation=,compliance.openshift.io/check-status=FAIL
rhcos4-high-wrscan-ensure-logrotate-activated

$ oc get ccr rhcos4-high-wscan-ensure-logrotate-activated -o=jsonpath={.instructions}
To determine the status and frequency of logrotate, run the following command:
$ sudo grep logrotate /var/log/cron*
If logrotate is configured properly, output should include references to

$ oc get cr rhcos4-high-wrscan-ensure-logrotate-activated -o yaml
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceRemediation
metadata:
  creationTimestamp: "2023-03-22T14:17:24Z"
  generation: 2
  labels:
    compliance.openshift.io/scan-name: rhcos4-high-wrscan
    compliance.openshift.io/suite: high-testpl3vg7ns06
  name: rhcos4-high-wrscan-ensure-logrotate-activated
  namespace: openshift-compliance
  ownerReferences:
  - apiVersion: compliance.openshift.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ComplianceCheckResult
    name: rhcos4-high-wrscan-ensure-logrotate-activated
    uid: 5397099b-7af8-46c5-bc0b-507fec83d7b2
  resourceVersion: "345488"
  uid: 2f46c6ac-1c58-47a8-9580-f213fa32e1db
spec:
  apply: true
  current:
    object:
      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      spec:
        config:
          ignition:
            version: 3.1.0
          storage:
            files:
            - contents:
                source: data:,%23%20see%20%22man%20logrotate%22%20for%20details%0A%23%20rotate%20log%20files%20daily%0Adaily%0A%0A%23%20keep%204%20weeks%20worth%20of%20backlogs%0Arotate%2030%0A%0A%23%20create%20new%20%28empty%29%20log%20files%20after%20rotating%20old%20ones%0Acreate%0A%0A%23%20use%20date%20as%20a%20suffix%20of%20the%20rotated%20file%0Adateext%0A%0A%23%20uncomment%20this%20if%20you%20want%20your%20log%20files%20compressed%0A%23compress%0A%0A%23%20RPM%20packages%20drop%20log%20rotation%20information%20into%20this%20directory%0Ainclude%20%2Fetc%2Flogrotate.d%0A%0A%23%20system-specific%20logs%20may%20be%20also%20be%20configured%20here.
              mode: 420
              overwrite: true
              path: /etc/logrotate.conf
  outdated: {}
  type: Configuration
status:
  applicationState: Applied

$ oc debug node/xx
Starting pod/xx ...
To use host binaries, run `chroot /host`
Pod IP: xx
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-5.1# cat /etc/logrotate.conf
# see "man logrotate" for details
# rotate log files daily
daily

# keep 4 weeks worth of backlogs
rotate 30

# create new (empty) log files after rotating old ones
create

# use date as a suffix of the rotated file
dateext

# uncomment this if you want your log files compressed
#compress

# RPM packages drop log rotation information into this directory
include /etc/logrotate.d

# system-specific logs may be also be configured here.
sh-5.1# sudo grep logrotate /var/log/cron*
grep: /var/log/cron*: No such file or directory
sh-5.1# sudo ls -ltr /var/log/
total 88
drwxr-sr-x+ 3 root systemd-journal 46 Mar 22 01:25 journal
lrwxrwxrwx. 1 root root 39 Mar 22 01:25 README -> ../../usr/share/doc/systemd/README.logs
-rw-------. 1 root root 0 Mar 22 01:25 tallylog
drwxr-x---. 2 chrony chrony 6 Mar 22 01:25 chrony
drwxr-xr-x. 2 root root 6 Mar 22 01:25 glusterfs
drwxr-xr-x. 2 root root 6 Mar 22 01:25 qemu-ga
drwx------. 3 root root 17 Mar 22 01:25 samba
drwxr-x---. 2 sssd sssd 6 Mar 22 01:25 sssd
drwx------. 2 root root 6 Mar 22 01:25 private
-rw-rw----. 1 root utmp 0 Mar 22 01:25 btmp
drwx------. 2 root root 23 Mar 22 01:25 audit
-rw-rw-r--. 1 root utmp 289080 Mar 22 01:25 lastlog
drwxr-x---. 2 openvswitch hugetlbfs 54 Mar 22 01:25 openvswitch
drwx------. 3 root root 18 Mar 22 01:26 crio
drwxr-xr-x. 2 root root 6 Mar 22 08:02 usbguard
-rw-rw-r--. 1 root utmp 28800 Mar 22 14:24 wtmp
drwxr-xr-x. 19 root root 4096 Mar 22 14:25 pods
drwxr-xr-x. 2 root root 45056 Mar 22 14:25 containers
sh-5.1# exit
exit
sh-4.4# exit
exit
Removing debug pod ...
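The remediation's MachineConfig delivers logrotate.conf as a URL-encoded `data:` source. As a quick sanity check (a sketch that only assumes `python3` with its standard library is on PATH), the payload can be decoded locally to confirm it is the same content as the /etc/logrotate.conf seen on the node:

```shell
# URL-encoded payload copied from the ComplianceRemediation's
# spec.current.object storage.files[0].contents.source above
# (with the leading "data:," prefix dropped).
encoded='%23%20see%20%22man%20logrotate%22%20for%20details%0A%23%20rotate%20log%20files%20daily%0Adaily%0A%0A%23%20keep%204%20weeks%20worth%20of%20backlogs%0Arotate%2030%0A%0A%23%20create%20new%20%28empty%29%20log%20files%20after%20rotating%20old%20ones%0Acreate%0A%0A%23%20use%20date%20as%20a%20suffix%20of%20the%20rotated%20file%0Adateext%0A%0A%23%20uncomment%20this%20if%20you%20want%20your%20log%20files%20compressed%0A%23compress%0A%0A%23%20RPM%20packages%20drop%20log%20rotation%20information%20into%20this%20directory%0Ainclude%20%2Fetc%2Flogrotate.d%0A%0A%23%20system-specific%20logs%20may%20be%20also%20be%20configured%20here.'

# Decode with python3's urllib; prints the plain logrotate.conf text.
decoded=$(python3 -c "import sys, urllib.parse; print(urllib.parse.unquote(sys.argv[1]))" "$encoded")
echo "$decoded"
```

The decoded text matches the `cat /etc/logrotate.conf` output from the node above, which suggests the remediation itself was written correctly and the residual FAIL comes from the check, not the applied MachineConfig.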
Expected results:
All rules with an available auto-remediation should PASS after the auto-remediations are applied.
Additional info:
This bug applies to the RHEL 9 based OS only.
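A possible explanation, offered as an assumption rather than a confirmed diagnosis: the check instruction greps /var/log/cron*, but the `ls /var/log/` output above shows no such files on the RHEL 9 based node, where cron output goes to the systemd journal and logrotate is typically driven by the logrotate.timer systemd unit rather than a cron job. A sketch of a more direct probe, run from an `oc debug node/<node>` session:

```shell
# Assumption: RHEL 9 schedules logrotate via a systemd timer, not cron.
# If the timer is enabled and active, these print "enabled" and "active".
chroot /host systemctl is-enabled logrotate.timer
chroot /host systemctl is-active logrotate.timer
# Past runs appear in the journal rather than in /var/log/cron*:
chroot /host journalctl -u logrotate.service --no-pager | tail
```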