Type: Bug
Resolution: Unresolved
Priority: Major
Affects Version: 4.20.0
Description of problem:
When a Pod finishes and reaches the Completed state, kube-controller-manager still reports it as conflicting with other Pods that use the same volume with a different SELinux label.
Version-Release number of selected component (if applicable): 4.20.0
How reproducible: always
Steps to Reproduce:
1. Create a PVC and run a pod with SELinux level "s0:c0,c1" that finishes quickly and stays Completed:
$ oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: testpod-c1
  labels:
    name: test
spec:
  restartPolicy: Never
  securityContext:
    seLinuxOptions:
      level: "s0:c0,c1"
  containers:
    - image: quay.io/centos/centos:8
      command:
        - "sleep"
        - "1"
      name: centos
      volumeMounts:
        - name: vol
          mountPath: /mnt/test
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: myclaim
EOF
2. Wait for the pod to reach the Completed state (a wait command sketch follows the steps).
3. Start a second pod that uses the same PVC and runs with SELinux level "s0:c98,c99":
$ oc create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: testpod-c2
  labels:
    name: test
spec:
  restartPolicy: Never
  securityContext:
    seLinuxOptions:
      level: "s0:c98,c99"
  containers:
    - image: quay.io/centos/centos:8
      command:
        - "sleep"
        - "1"
      name: centos
      volumeMounts:
        - name: vol
          mountPath: /mnt/test
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: myclaim
EOF
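For step 2, a minimal way to wait for the first pod to finish, assuming the pod name from the manifest above (the Completed column shown by "oc get pods" corresponds to pod phase Succeeded):

$ oc wait --for=jsonpath='{.status.phase}'=Succeeded pod/testpod-c1 --timeout=120s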
Actual results:
- kube-controller-manager emits events about conflicting SELinux labels, even though one of the conflicting Pods is Completed and not running at all.
- kube-controller-manager includes Completed pods in the selinux_warning_controller_selinux_volume_conflict metric (a sketch of how to check both follows this list).
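One way to observe both symptoms, as a sketch under the assumption that the reproducer above was used and that the controller records the events on the Pod objects; the exact way to inspect the metric depends on the cluster setup:

# Events recorded for the second pod from the reproducer:
$ oc get events --field-selector involvedObject.name=testpod-c2

# The metric is scraped by the in-cluster monitoring stack; it can be inspected
# in the console under Observe -> Metrics, or with any Prometheus client, using
# the query: selinux_warning_controller_selinux_volume_conflict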
Expected results:
No events should be emitted and the Completed pod should not be counted in the metric.
The same should apply to any Pod that has reached a final phase and will never run again, such as a Failed Pod.
Pods that may still run eventually, such as crash-looping pods or pods waiting to be scheduled, should still be counted as conflicting.