The CLI does not update the progress when running OPCT v0.2+ on OCP 4.12.

      While 4.12 is not GA yet, I am setting this card for the v0.3 development cycle.

      Steps to reproduce:

      • Install a 4.12 cluster (4.12.0-rc.4)
      • Set up the certification environment (node label and taints, registry, etc.)
      • Download and run the v0.2+ OPCT
      • Run: $ ./openshift-provider-cert-linux-amd64-v0.2.0 run -w
      • Check that the progress (field MESSAGE) is not updated
      • Check the logs of pod 'sonobuoy-20-openshift-conformance-validated-job-<id>', container 'report-progress', and confirm that nothing is read (see the commands sketched after this list)
      • Check the logs of the container 'plugin' and confirm that openshift-tests is publishing messages to stdout (and writing to the pipe file)
      • Install a cluster on an older version (<=4.11), run the same OPCT version, and confirm the progress is updated and report-progress reads the pipe messages
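
      A minimal sketch of the log checks from the steps above (namespace and container names are taken from this issue; the pod lookup is illustrative):

      NS=openshift-provider-certification
      POD=$(oc get pods -n "$NS" -o name | grep sonobuoy-20-openshift-conformance-validated-job | head -n1)

      # On 4.12 the 'report-progress' container reads nothing from the pipe:
      oc logs -n "$NS" "$POD" -c report-progress --tail=20

      # While the 'plugin' container keeps publishing openshift-tests messages to stdout (and the pipe):
      oc logs -n "$NS" "$POD" -c plugin --tail=20 | grep -E '^(started|passed|failed)'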

      Note:

      • This problem does not crash the tool; it only affects the progress update engine.
      • The tool is still able to signal when the execution is finished and gracefully shut down the job.

      Possible cause:

      • The file pipe used to capture the stdout of openshift-tests is not being read by report-progress, which is responsible for parsing the data and sending it to the Sonobuoy aggregator (see the sketch after this list)
      • Could SCC be blocking the file pipe usage? And why only on 4.12?
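
      For context, a minimal sketch of the named-pipe flow described above; this is not the actual plugin code, and the pipe path and openshift-tests invocation are illustrative:

      # Writer side (the 'plugin' container): publish openshift-tests output to stdout and to a FIFO
      # on a volume shared with 'report-progress'.
      PIPE=/tmp/shared/status-pipe
      mkfifo "$PIPE" 2>/dev/null || true
      openshift-tests run openshift/conformance/parallel 2>&1 | tee "$PIPE"

      # Reader side (the 'report-progress' container): consume the FIFO line by line; the real
      # script parses the started/passed/failed counters and sends them to the Sonobuoy aggregator.
      while read -r line; do
        echo "progress: $line"
      done < "$PIPE"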

            [OPCT-11] [bug] 4.12 execution is not updating progress

            Marco Braga added a comment -

            PRs closed, tag v0.3.0-alpha1 created for both plugins and CLI.


            Marco Braga added a comment -

            rhn-support-rvanderp ptal on https://github.com/redhat-openshift-ecosystem/provider-certification-tool/pull/42 ?

            Marco Braga added a comment -

            PRs submitted and waiting for review:
            https://github.com/redhat-openshift-ecosystem/provider-certification-tool/pull/42
            https://github.com/redhat-openshift-ecosystem/provider-certification-plugins/pull/34

            Marco Braga added a comment -

            I think I found the root cause: the output of the openshift-tests binary changed in 4.12, so the report-progress service is not correctly parsing the output and is not sending updated counters to the Sonobuoy aggregator.

            As a result, the regex "([0-9]{1,}\/[0-9]{1,}\/[0-9]{1,})" [0] no longer extracts the counters.

            [0] https://github.com/redhat-openshift-ecosystem/provider-certification-plugins/blob/main/openshift-tests-provider-cert/plugin/report-progress.sh#L200
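
            To illustrate (a sketch, not the script itself, assuming a parser that expects the counters wrapped in literal parentheses; the actual pattern lives in report-progress.sh [0]):

            # Example lines, abbreviated from the 4.11 and 4.12 logs below:
            line_411='started: (0/2/3476) "[sig-scheduling][Early] ..."'
            line_412='started: 0/1/23 "[sig-ci] [Early] ..."'

            # A pattern with literal parentheses only matches the 4.11 format:
            echo "$line_411" | grep -oE '\([0-9]+/[0-9]+/[0-9]+\)'   # -> (0/2/3476)
            echo "$line_412" | grep -oE '\([0-9]+/[0-9]+/[0-9]+\)'   # -> no match

            # Dropping the literal parentheses matches both formats:
            echo "$line_411" | grep -oE '[0-9]+/[0-9]+/[0-9]+'       # -> 0/2/3476
            echo "$line_412" | grep -oE '[0-9]+/[0-9]+/[0-9]+'       # -> 0/1/23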

            See the result logs from artifacts executed on 4.11[1] and 4.12[2]:

            [1] sample lines of execution on 4.11

            $ cat .opct-41124/clusters/opct-41124/opct/results/podlogs/openshift-provider-certification/sonobuoy-20-openshift-conformance-validated-job-1de7026225994192/logs/plugin.txt | grep ^started |head
            started: (0/2/3476) "[sig-scheduling][Early] The openshift-monitoring pods should be scheduled on different nodes [Suite:openshift/conformance/parallel]"
            started: (0/3/3476) "[sig-scheduling][Early] The openshift-console pods should be scheduled on different nodes [Suite:openshift/conformance/parallel]"
            started: (0/4/3476) "[sig-scheduling][Early] The HAProxy router pods should be scheduled on different nodes [Suite:openshift/conformance/parallel]"
            started: (0/5/3476) "[sig-scheduling][Early] The openshift-image-registry pods should be scheduled on different nodes [Suite:openshift/conformance/parallel]"
            started: (0/6/3476) "[sig-scheduling][Early] The openshift-apiserver pods should be scheduled on different nodes [Suite:openshift/conformance/parallel]"
            started: (0/7/3476) "[sig-instrumentation] Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] [Skipped:Disconnected] [Suite:openshift/conformance/parallel]"
            started: (0/8/3476) "[sig-arch][Early] Managed cluster should start all core operators [Skipped:Disconnected] [Suite:openshift/conformance/parallel]"
            started: (0/9/3476) "[sig-scheduling][Early] The openshift-operator-lifecycle-manager pods should be scheduled on different nodes [Suite:openshift/conformance/parallel]"
            started: (0/10/3476) "[sig-cluster-lifecycle][Feature:Machines][Early] Managed cluster should have same number of Machines and Nodes [Suite:openshift/conformance/parallel]"
            started: (0/11/3476) "[sig-etcd] etcd cluster has the same number of master nodes and voting members from the endpoints configmap [Early] [Suite:openshift/conformance/parallel]"
             

            [2] sample lines of execution on 4.12

            $ cat .opct-4120/clusters/opct-4120/opct/results/podlogs/openshift-provider-certification/sonobuoy-20-openshift-conformance-validated-job-0497668c4be7421f/logs/plugin.txt | grep ^started |head
            started: 0/1/23 "[sig-ci] [Early] prow job name should match network type [apigroup:config.openshift.io] [Suite:openshift/conformance/parallel]"
            started: 0/2/23 "[sig-etcd] etcd record the start revision of the etcd-operator [Early] [Suite:openshift/conformance/parallel]"
            started: 0/3/23 "[sig-arch][Early] CRDs for openshift.io should have a status in the CRD schema [Suite:openshift/conformance/parallel]"
            started: 0/4/23 "[sig-node] Managed cluster record the number of nodes at the beginning of the tests [Early] [Suite:openshift/conformance/parallel]"
            started: 0/5/23 "[sig-scheduling][Early] The openshift-oauth-apiserver pods [apigroup:oauth.openshift.io][apigroup:user.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]"
            started: 0/6/23 "[sig-scheduling][Early] The HAProxy router pods [apigroup:route.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]"
            started: 0/7/23 "[sig-scheduling][Early] The openshift-apiserver pods [apigroup:apps.openshift.io][apigroup:authorization.openshift.io][apigroup:build.openshift.io][apigroup:image.openshift.io][apigroup:project.openshift.io][apigroup:quota.openshift.io][apigroup:route.openshift.io][apigroup:security.openshift.io][apigroup:template.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]"
            started: 0/8/23 "[sig-scheduling][Early] The openshift-monitoring prometheus-adapter pods [apigroup:monitoring.coreos.com] should be scheduled on different nodes [Suite:openshift/conformance/parallel]"
            started: 0/9/23 "[sig-scheduling][Early] The openshift-etcd pods [apigroup:operator.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]"
            started: 0/10/23 "[sig-arch][Early] CRDs for openshift.io should have subresource.status [Suite:openshift/conformance/parallel]"
             

             


            Marco Braga added a comment -

            Scenario where the progress update is not working:

            • When running OPCT on a NEW 4.12 cluster

            On any existing cluster prior to 4.12, and with the upgrade feature 4.11->4.12 (OPCT pods created before the upgrade but running on 4.12), it works (see the previous comment).

            This narrows the investigation to new 4.12 clusters only.

            The fix for the Pod Security Admission warnings from the first comment was added in PR#41; the warnings are gone, but the progress update still does not work.

            In Kubernetes 1.25 (OCP 4.12), Local Storage Capacity Isolation[1] and Pod Security Admission[2] became GA. I am not sure whether it is related, but the file pipe, used to publish the execution output from the 'plugin' container and consumed by the 'report-progress' container, is created in local storage mounted on both containers.

            [1] https://kubernetes.io/blog/2022/09/19/local-storage-capacity-isolation-ga/

            [2] https://kubernetes.io/docs/concepts/security/pod-security-admission/
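
            A few checks that may help narrow this down (the pod name is a placeholder, and the pipe location is not assumed):

            NS=openshift-provider-certification
            POD=sonobuoy-20-openshift-conformance-validated-job-<id>

            # Which Pod Security Admission labels are applied to the namespace (enforce/warn/audit):
            oc get ns "$NS" -o jsonpath='{.metadata.labels}{"\n"}'

            # Which volumes the plugin pod mounts (the pipe is expected on a volume shared by both containers):
            oc get pod -n "$NS" "$POD" -o jsonpath='{.spec.volumes[*].name}{"\n"}'

            # Look for any FIFO visible inside the 'plugin' container, wherever it was created:
            oc exec -n "$NS" "$POD" -c plugin -- sh -c 'find / -maxdepth 4 -type p 2>/dev/null'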


            Marco Braga added a comment - - edited

            When running the upgrade feature with the SCC fixes (this PR), the progress update works:

            $ KUBECONFIG=$PWD/.opct-411to412/clusters/opct-411to412/auth/kubeconfig oc get clusterversion
            NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
            version   4.12.0-rc.6   True        False         55m     Cluster version is 4.12.0-rc.6
            
            $ KUBECONFIG=$PWD/.opct-411to412/clusters/opct-411to412/auth/kubeconfig ./.opct-411to412-openshift-provider-cert status
            Fri, 06 Jan 2023 17:48:02 -03> Global Status: running
            JOB_NAME                           | STATUS     | RESULTS    | PROGRESS                  | MESSAGE                                           
            05-openshift-cluster-upgrade       | failed     |            | 0/0 (0 failures)          | waiting for post-processor...                     
            10-openshift-kube-conformance      | complete   |            | 352/352 (0 failures)      | waiting for post-processor...                     
            20-openshift-conformance-validated | running    |            | 546/3476 (1 failures)     | status=running                                    
            99-openshift-artifacts-collector   | running    |            | 0/0 (0 failures)          | status=waiting-for=20-openshift-conformance-validated=(0/-2930/0)=[9/100]
            
            $ KUBECONFIG=$PWD/.opct-411to412/clusters/opct-411to412/auth/kubeconfig oc logs -n openshift-provider-certification sonobuoy-20-openshift-conformance-validated-job-50b12e70fbf640ac -c plugin -f --tail=10
            started: (1/545/3476) "[sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]"
            
            passed: (4.6s) 2023-01-06T20:45:51 "[sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]"
            
            started: (1/546/3476) "[sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes [Skipped:SingleReplicaTopology] [Suite:openshift/conformance/serial] [Suite:k8s]"
            
            passed: (2m6s) 2023-01-06T20:47:57 "[sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes [Skipped:SingleReplicaTopology] [Suite:openshift/conformance/serial] [Suite:k8s]"
            
            started: (1/547/3476) "[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]"
            
             $ KUBECONFIG=$PWD/.opct-411to412/clusters/opct-411to412/auth/kubeconfig oc logs -n openshift-provider-certification sonobuoy-20-openshift-conformance-validated-job-50b12e70fbf640ac -c report-progress -f --tail=10
                    "total":3476,
                    "failures":["[sig-network] IngressClass [Feature:Ingress] should prevent Ingress creation if more than 1 IngressClass marked as default [Serial] [Suite:openshift/conformance/serial] [Suite:k8s]"],
                    "msg":"status=running"
                }
            20230106-204757> [report] Sending report payload [updater]: {
                    "completed":547,
                    "total":3476,
                    "failures":["[sig-network] IngressClass [Feature:Ingress] should prevent Ingress creation if more than 1 IngressClass marked as default [Serial] [Suite:openshift/conformance/serial] [Suite:k8s]"],
                    "msg":"status=running"
                }
            
            $ KUBECONFIG=$PWD/.opct-411to412/clusters/opct-411to412/auth/kubeconfig ./.opct-411to412-openshift-provider-cert status
            Fri, 06 Jan 2023 17:48:59 -03> Global Status: running
            JOB_NAME                           | STATUS     | RESULTS    | PROGRESS                  | MESSAGE                                           
            05-openshift-cluster-upgrade       | failed     |            | 0/0 (0 failures)          | waiting for post-processor...                     
            10-openshift-kube-conformance      | complete   |            | 352/352 (0 failures)      | waiting for post-processor...                     
            20-openshift-conformance-validated | running    |            | 547/3476 (1 failures)     | status=running                                    
            99-openshift-artifacts-collector   | running    |            | 0/0 (0 failures)          | status=waiting-for=20-openshift-conformance-validated=(0/-2929/0)=[3/100]
            
            


            Marco Braga added a comment -

            Blocked by SPLAT-874, as we need to fix the SCC in order to check whether the file pipe will work to capture the plugin's openshift-tests stdout.


            Marco Braga added a comment - - edited

            Errors reported in the aggregator logs when scheduling the jobs:

             

            time="2022-12-08T20:24:22Z" level=info msg="Launching plugin 99-openshift-artifacts-collector with order 0"
            time="2022-12-08T20:24:22Z" level=info msg="Launching plugin 10-openshift-kube-conformance with order 0"
            time="2022-12-08T20:24:22Z" level=info msg="Launching plugin 20-openshift-conformance-validated with order 0"
            W1208 20:24:22.903496 1 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "report-progress", "plugin", "sonobuoy-worker" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "report-progress", "plugin", "sonobuoy-worker" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "report-progress", "plugin", "sonobuoy-worker" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "report-progress", "plugin", "sonobuoy-worker" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
            W1208 20:24:23.050086 1 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "report-progress", "plugin", "sonobuoy-worker" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "report-progress", "plugin", "sonobuoy-worker" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "report-progress", "plugin", "sonobuoy-worker" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "report-progress", "plugin", "sonobuoy-worker" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
            W1208 20:24:23.249029 1 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "report-progress", "plugin", "sonobuoy-worker" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "report-progress", "plugin", "sonobuoy-worker" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "report-progress", "plugin", "sonobuoy-worker" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "report-progress", "plugin", "sonobuoy-worker" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") 
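
            For reference, the securityContext fields the warnings above ask for, collected in one place (illustrative values only; the actual change was made in the plugin manifests, see PR#41):

            # Print the per-container fields named by the PodSecurity "restricted:latest" warnings:
            cat <<'EOF'
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop: ["ALL"]
              runAsNonRoot: true
              seccompProfile:
                type: RuntimeDefault
            EOF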

