Bug
Resolution: Done
4.11.z
Quality / Stability / Reliability
Important
Proposed
Description of problem:
The following error is seen in the pre-cache job on the spoke when attempting to use the 4.11 TALM against a 4.12 spoke cluster:

Warning  FailedCreate  97s  job-controller  Error creating: pods "pre-cache-48q8w" is forbidden: violates PodSecurity "restricted:latest": privileged (container "pre-cache-container" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "pre-cache-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pre-cache-container" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "pre-cache-container" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "pre-cache-container" must not set runAsUser=0), seccompProfile (pod or container "pre-cache-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")

Note that the 4.11 TALM was upgraded from 4.10.
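For context, the restricted profile would only admit the pod if its securityContext looked roughly like the sketch below; the field values are taken directly from the admission error above, while the pod name and image are placeholders. The pre-cache pod created by the 4.11 TALM sets privileged mode and mounts a hostPath volume (per the error), so it cannot simply adopt these settings as-is:

# Illustrative sketch only: a pod securityContext that "restricted:latest" would admit.
apiVersion: v1
kind: Pod
metadata:
  name: pre-cache-example                          # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: pre-cache-container
    image: registry.example.com/pre-cache:latest   # placeholder image
    securityContext:
      privileged: false
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]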
Version-Release number of selected component (if applicable):
TALM 4.11.2
How reproducible:
100%
Steps to Reproduce:
1. Create a CGU with pre-caching enabled (see the example CGU below).
2. Wait for pre-caching to finish.
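For reference, a CGU of the kind used in step 1, reconstructed from the last-applied-configuration annotation shown under Additional info below:

apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: test-1
  namespace: default
spec:
  backup: false
  clusters:
  - spoke-4
  - spoke-3
  enable: true
  managedPolicies:
  - common-config-policy
  - common-subscriptions-policy
  preCaching: true                  # pre-caching enabled, as in step 1
  remediationStrategy:
    maxConcurrency: 2
    timeout: 17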
Actual results:
Pre-caching is stuck at "Starting" in the CGU status, and the pre-cache job on the 4.12 spoke cluster shows the pod security error above.
Expected results:
Pre-caching completes successfully on the 4.12 spoke cluster.
Additional info:
CGU:
[kni@provisionhost-0-0 ~]$ oc get cgu test-1 -o yaml
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"ran.openshift.io/v1alpha1","kind":"ClusterGroupUpgrade","metadata":{"annotations":{},"name":"test-1","namespace":"default"},"spec":{"backup":false,"clusters":["spoke-4","spoke-3"],"enable":true,"managedPolicies":["common-config-policy","common-subscriptions-policy"],"preCaching":true,"remediationStrategy":{"maxConcurrency":2,"timeout":17}}}
  creationTimestamp: "2022-10-27T22:59:48Z"
  finalizers:
  - ran.openshift.io/cleanup-finalizer
  generation: 2
  name: test-1
  namespace: default
  resourceVersion: "152474785"
  uid: a9e7c4a1-b0a0-4e3b-a8c3-c3bc2198d155
spec:
  actions:
    afterCompletion:
      deleteObjects: true
    beforeEnable: {}
  backup: false
  clusters:
  - spoke-4
  - spoke-3
  enable: true
  managedPolicies:
  - common-config-policy
  - common-subscriptions-policy
  preCaching: true
  remediationStrategy:
    maxConcurrency: 2
    timeout: 17
status:
  computedMaxConcurrency: 2
  conditions:
  - lastTransitionTime: "2022-10-27T22:59:48Z"
    message: Precaching is not completed (required)
    reason: PrecachingRequired
    status: "False"
    type: Ready
  - lastTransitionTime: "2022-10-27T22:59:48Z"
    message: Precaching is required and not done
    reason: PrecachingNotDone
    status: "False"
    type: PrecachingDone
  - lastTransitionTime: "2022-10-27T22:59:49Z"
    message: Pre-caching spec is valid and consistent
    reason: PrecacheSpecIsWellFormed
    status: "True"
    type: PrecacheSpecValid
  managedPoliciesNs:
    common-config-policy: ztp-common
    common-subscriptions-policy: ztp-common
  precaching:
    clusters:
    - spoke-4
    - spoke-3
    spec:
      operatorsIndexes:
      - registry.ocp-edge-cluster-0.qe.lab.redhat.com:5000/olm/redhat-operators:v4.11
      - registry.ocp-edge-cluster-0.qe.lab.redhat.com:5000/olm/far-edge-sriov-fec:v4.11
      - registry.ocp-edge-cluster-0.qe.lab.redhat.com:5000/olm/amq-operator:v4.10
      operatorsPackagesAndChannels:
      - sriov-network-operator:stable
      - ptp-operator:stable
      - cluster-logging:stable
      - local-storage-operator:stable
      - sriov-fec:stable
      - amq7-interconnect-operator:1.10.x
      - bare-metal-event-relay:stable
    status:
      spoke-3: Starting
      spoke-4: Starting
  status: {}
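For background on why the admission error appears on the 4.12 spoke: Pod Security admission enforcement is controlled by namespace labels, and the "restricted:latest" message suggests the namespace the pre-cache job runs in ends up with restricted enforcement on 4.12. The sketch below is a generic upstream-Kubernetes illustration of such a namespace; the namespace name is made up, and this is not a statement about how TALM configures or should configure its pre-cache namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: example-pre-cache-ns                       # hypothetical name
  labels:
    # Upstream Pod Security admission labels; "restricted" enforcement
    # produces rejections like the one quoted in the description.
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest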
depends on: OCPBUGS-1424 Pod security warning when deploying from upstream source (Closed)