- Bug
- Resolution: Not a Bug
- Major
- None
- 4.12.z
- No
- Rejected
- False
Description of problem:
Trident pods are running with the custom SCC dynatrace-oneagent-csi-driver. As the output below shows, each Trident pod is still running under its own Trident service account (the controller pod uses trident-controller), yet it is annotated with the Dynatrace SCC. The only difference between the dynatrace and trident SCCs is the priority.
$ oc -n trident get pods -o 'custom-columns=NAME:.metadata.name,SCC:.metadata.annotations.openshift\.io/scc,SA:.spec.serviceAccount'
NAME                               SCC                             SA
trident-csi-65df7f4865-x6g8q       dynatrace-oneagent-csi-driver   trident-controller
trident-csi-7jvk9                  dynatrace-oneagent-csi-driver   trident-node-linux
trident-csi-bvzcp                  dynatrace-oneagent-csi-driver   trident-node-linux
trident-csi-g6677                  dynatrace-oneagent-csi-driver   trident-node-linux
trident-csi-hdtkq                  dynatrace-oneagent-csi-driver   trident-node-linux
trident-csi-rs47g                  dynatrace-oneagent-csi-driver   trident-node-linux
trident-csi-xlcxn                  dynatrace-oneagent-csi-driver   trident-node-linux
trident-operator-6fb4895fb-w8dfh   dynatrace-oneagent-csi-driver   trident-operator
$ oc get scc -o custom-columns=NAME:.metadata.name,PRIORITY:priority,USERS:users | grep trident
trident-controller   <nil>   [system:serviceaccount:trident:trident-controller]
trident-node-linux   <nil>   [system:serviceaccount:trident:trident-node-linux]
Ideally, SCC assignment should be based on the service account first and then on priority, but in this case it is working the opposite way.
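For reference, when admitting a pod, OpenShift sorts the SCCs available to the requesting user and the pod's service account by highest priority first (a nil priority is treated as 0); when priorities are equal, from most restrictive to least restrictive; and when both are equal, by name. The first SCC in that order that validates the pod is the one recorded in the openshift.io/scc annotation. Which SCC would be selected for a given workload can be checked with scc-subject-review; a minimal sketch, assuming the pod spec has been saved to pod.yaml (hypothetical filename):
$ oc policy scc-subject-review -z trident-node-linux -n trident -f pod.yaml
The ALLOWED BY column in the output names the SCC that would admit the pod.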
$ oc get scc dynatrace-oneagent-csi-driver -o yaml
allowHostDirVolumePlugin: true
allowHostIPC: true
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities:
- '*'
apiVersion: security.openshift.io/v1
defaultAddCapabilities: null
fsGroup:
  type: RunAsAny
groups: []
kind: SecurityContextConstraints
metadata:
  creationTimestamp: "2023-03-01T13:15:37Z"
  generation: 2
  labels:
    app.kubernetes.io/component: csi-driver
    app.kubernetes.io/name: dynatrace-operator
    app.kubernetes.io/version: 0.9.1
  name: dynatrace-oneagent-csi-driver
  resourceVersion: "106204423"
  uid: d8364315-be3a-4baa-8d37-02e93ecc58d6
priority: 0
readOnlyRootFilesystem: false
requiredDropCapabilities: null
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
seccompProfiles:
- '*'
supplementalGroups:
  type: RunAsAny
users:
- system:serviceaccount:dynatrace:dynatrace-oneagent-csi-driver
volumes:
- '*'
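Because the dynatrace SCC lists only the Dynatrace service account in its users field, it is worth checking whether other subjects were granted use of it through RBAC instead; a quick check, assuming sufficient privileges to run it:
$ oc adm policy who-can use scc dynatrace-oneagent-csi-driver
If the Trident service accounts, or a broad group such as system:serviceaccounts, show up in that output, the SCC is in their available set during admission.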
$ oc get scc trident-node-linux -o yaml
allowHostDirVolumePlugin: true
allowHostIPC: true
allowHostNetwork: true
allowHostPID: true
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities: null
apiVersion: security.openshift.io/v1
defaultAddCapabilities: null
fsGroup:
  type: RunAsAny
groups: []
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: trident-node-linux is a clone of the privileged built-in,
      and is meant just for use with trident.
  creationTimestamp: "2023-02-08T12:16:41Z"
  generation: 1
  labels:
    app: node.csi.trident.netapp.io
  name: trident-node-linux
  ownerReferences:
  - apiVersion: trident.netapp.io/v1
    controller: true
    kind: TridentOrchestrator
    name: trident
    uid: bc27c5f5-828f-44bf-bb48-445531efb27c
  resourceVersion: "106205091"
  uid: 7acdb5cd-476e-4682-9143-03c7413301b7
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities: null
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
users:
- system:serviceaccount:trident:trident-node-linux
volumes:
- downwardAPI
- emptyDir
- hostPath
- projected
This very odd behavior happens with random pods whenever we use a custom SCC.
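To narrow this down, one approach (a sketch, not verified on this cluster) is to check which SCC would admit the Trident node workload for its service account; assuming the node pods come from a DaemonSet named trident-csi, as the pod names above suggest:
$ oc get ds trident-csi -n trident -o yaml > /tmp/trident-ds.yaml
$ oc policy scc-review -z trident-node-linux -n trident -f /tmp/trident-ds.yaml
The ALLOWED BY column reports the SCC that would admit the DaemonSet pods for that service account; if dynatrace-oneagent-csi-driver appears there, the service account has been granted use of it somewhere.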