RHEL-24685

Probes for sysctl don't fetch expected data from OCP node

    • Resolution: Not a Bug
    • openscap
    • rhel-sst-security-compliance
    • ssg_security

      What were you trying to do that didn't work?

      In OCP 4.11, 4.12, 4.13, 4.14, and 4.15, the following rules evaluate to FAIL.

      Remediations are being applied, but oscap still sees non-compliant sysctl values.

       oc get ccr -lcompliance.openshift.io/automated-remediation=,compliance.openshift.io/check-status=FAIL
      NAME                                                               STATUS   SEVERITY
      rhcos4-high-master-sysctl-net-core-bpf-jit-harden                  FAIL     medium
      rhcos4-high-master-sysctl-net-ipv6-conf-all-accept-ra              FAIL     medium
      rhcos4-high-master-sysctl-net-ipv6-conf-all-accept-redirects       FAIL     medium
      rhcos4-high-master-sysctl-net-ipv6-conf-default-accept-ra          FAIL     medium
      rhcos4-high-master-sysctl-net-ipv6-conf-default-accept-redirects   FAIL     medium
      rhcos4-high-worker-sysctl-net-core-bpf-jit-harden                  FAIL     medium
      rhcos4-high-worker-sysctl-net-ipv6-conf-all-accept-ra              FAIL     medium
      rhcos4-high-worker-sysctl-net-ipv6-conf-all-accept-redirects       FAIL     medium
      rhcos4-high-worker-sysctl-net-ipv6-conf-default-accept-ra          FAIL     medium
      rhcos4-high-worker-sysctl-net-ipv6-conf-default-accept-redirects   FAIL     medium
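
      To see exactly what the content checks for a given result, the full ComplianceCheckResult object can be dumped. A minimal sketch, picking one of the failing results from the list above:

       # Hedged sketch: dump one failing check result to read its description
       # and instructions (object name taken from the listing above).
       oc get ccr rhcos4-high-worker-sysctl-net-core-bpf-jit-harden -o yaml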

      It seems that oscap is not able to see the runtime sysctl value.
      The node environment sees the sysctl value as 2:

       $ oc debug node/ip-10-0-18-141.ec2.internal
      Starting pod/ip-10-0-18-141ec2internal-debug-l7snj ...
      To use host binaries, run `chroot /host`
      Pod IP: 10.0.18.141
      If you don't see a command prompt, try pressing enter.
      
      sh-4.4# chroot /host
      
      sh-5.1# sysctl net.core.bpf_jit_harden
      net.core.bpf_jit_harden = 2
      
      sh-5.1# cat /proc/sys/net/core/bpf_jit_harden 
      2
      
      sh-5.1# grep -r "net\.core\.bpf_jit_harden" /etc/sysctl.d
      /etc/sysctl.d/75-sysctl_net_core_bpf_jit_harden.conf:net.core.bpf_jit_harden=2
      
      sh-5.1# grep -r "net\.core\.bpf_jit_harden" /etc/sysctl.conf 
      sh-5.1#

      Whereas oscap doesn't see the sysctl at all:

      I: oscap: 0 objects defined by 'oval:ssg-object_sysctl_net_core_bpf_jit_harden_runtime:obj:1' exist on the system. [oscap(9):oscap(7ff8af4c1bc0):oval_resultTest.c:918:_oval_result_test_evaluate_items]
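
      One way to narrow this down is to evaluate just the failing rule manually with verbose probe logging. A minimal sketch, assuming the rhcos4 data stream sits at the usual scap-security-guide path (the path inside the scanner image may differ):

       # Hedged sketch: evaluate only the failing rule with probe logging enabled.
       # Profile and rule IDs follow the standard SSG naming; the data stream
       # path is an assumption.
       oscap --verbose INFO xccdf eval \
           --profile xccdf_org.ssgproject.content_profile_high \
           --rule xccdf_org.ssgproject.content_rule_sysctl_net_core_bpf_jit_harden \
           /usr/share/xml/scap/ssg/content/ssg-rhcos4-ds.xml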

      In other cases oscap sees the sysctl with a different value:

      These are commands run on the node:

      oc debug node/ip-10-0-18-141.ec2.internal - sysctl net.ipv6.conf.default.accept_ra
      Starting pod/ip-10-0-18-141ec2internal-debug-ndk9p ...
      To use host binaries, run `chroot /host`
      net.ipv6.conf.default.accept_ra = 0
      Removing debug pod ... 

      And this is log output from oscap:

      D: oscap: ("seap.msg" ":id" 4 ":reply-id" 4 (2 () ((("unix:sysctl_item" ":id" "100009619" ) ("name" "net.ipv6.conf.default.accept_ra" ) ("value" "1" ) ) ) () ) ) [oscap(9):probe_worker(7ff80d7f2700):seap-packet.c:261:SEAP_packet_msg2sexp]

      (Edit: Fixed the log above to show the right sysctl)
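
      Since the probe reports 1 while the node's /proc shows 0, it may help to list every sysctl configuration source on the node to see which one could supply that value. A rough sketch, run from the chroot on the node (the directories are the standard sysctl.d search path; the grep also matches related accept_ra_* keys):

       # Hedged sketch: show every definition of the key plus the live value,
       # to spot which source the probe might be reading instead of /proc.
       for d in /etc/sysctl.conf /etc/sysctl.d /run/sysctl.d /usr/lib/sysctl.d /usr/local/lib/sysctl.d; do
           grep -rH "net.ipv6.conf.default.accept_ra" "$d" 2>/dev/null
       done
       cat /proc/sys/net/ipv6/conf/default/accept_ra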

      OpenSCAP packages installed in the openscap container:

      openscap-1.3.8-1.el8_8.x86_64 rhel-8-for-x86_64-appstream-rpms 3.9 MB
      openscap-scanner-1.3.8-1.el8_8.x86_64 rhel-8-for-x86_64-appstream-rpms 78.7 kB 

      Please provide the package NVR for which the bug is seen:

      openscap-1.3.8-1

      How reproducible:

      Always

      Steps to reproduce

      1. Get an OCP 4.14 cluster
        You can do "launch 4.14 aws" on cluster-bot
      2. Install Compliance Operator (CO) 1.4.0 (which uses openscap-1.3.8)
        oc create -f default-install.yaml
      3. Create a binding with rhcos4-high and default-auto-apply ScanSetting
        oc compliance bind -N rhcos4-high -S default-auto-apply profile/rhcos4-high
      4. Wait for scan to finish and remediations to apply
        Until "oc get nodes -w" doesn't show "SchedulingDisabled"
      5. Do a rerun
        oc compliance rerun-now scansettingbinding rhcos4-high
      6. Check the rules with automation that are failing
        oc get ccr -lcompliance.openshift.io/automated-remediation=,compliance.openshift.io/check-status=FAIL 

      Other logs and outputs are also attached.
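
      For step 4, one way to confirm that the remediations were actually rendered and rolled out before the rerun is to look at the ComplianceRemediation objects and the MachineConfigs they generate. A minimal sketch, assuming the default openshift-compliance namespace and matching only by name:

       # Hedged sketch: confirm the sysctl remediations exist and that
       # corresponding MachineConfigs were rendered (name-based grep only).
       oc -n openshift-compliance get complianceremediations | grep sysctl
       oc get machineconfigs | grep sysctl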

      Expected results

      The sysctl rules pass.

      Actual results

      The following sysctl rules always evaluate to FAIL, even though the node is compliant.

      • `sysctl_net_core_bpf_jit_harden`
      • `sysctl_net_ipv6_conf_all_accept_ra`
      • `sysctl_net_ipv6_conf_all_accept_redirects`
      • `sysctl_net_ipv6_conf_default_accept_ra`
      • `sysctl_net_ipv6_conf_default_accept_redirects`

      Other sysctl rules in the `rhcos4-high` profile pass. Only the sysctls listed above fail.
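
      To rule out genuine drift on the node, the live values of just these keys can be dumped in one go. A minimal sketch, using the node name from the debug session above:

       # Hedged sketch: print the runtime values of all five failing keys on the node.
       oc debug node/ip-10-0-18-141.ec2.internal -- chroot /host sysctl \
           net.core.bpf_jit_harden \
           net.ipv6.conf.all.accept_ra \
           net.ipv6.conf.all.accept_redirects \
           net.ipv6.conf.default.accept_ra \
           net.ipv6.conf.default.accept_redirects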

      Additional notes

      On OCP 4.10 with CO 1.3.0 and openscap-1.3.7, the issue doesn't manifest.
      But on the same cluster, when CO 1.3.0 with openscap-1.3.8 is installed, the issue manifests.
      A relevant change may be https://github.com/OpenSCAP/openscap/pull/1976, which added offline capabilities to sysctl.
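
      If that PR is the culprit, it may be worth checking whether the scanner process runs with an offline probe root, since the offline code path would read from the mounted host tree rather than the node's live /proc. A rough sketch, assuming the default openshift-compliance namespace; that the scanner exports an OSCAP_PROBE_ROOT-style variable is an assumption to verify, and <scanner-pod> is a placeholder:

       # Hedged sketch: inspect a scanner pod's environment for an offline
       # probe root setting (pod name is a placeholder to fill in).
       oc -n openshift-compliance get pods | grep rhcos4-high
       oc -n openshift-compliance exec <scanner-pod> -- env | grep -i oscap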

              jcerny@redhat.com Jan Cerny
              wsato@redhat.com Watson Sato
              Jan Cerny
              SSG Security QE