1. Proposed title of this feature request
[RFE] OpenShift - Need authorization for setting tolerations
2. What is the nature and description of the request?
This feature request is discussed at length in the upstream issues below, but they seem to have gone nowhere:
https://github.com/kubernetes/kubernetes/issues/48041
https://github.com/kubernetes/kubernetes/issues/61185
3. Why does the customer need this? (List the business requirements here)
Currently, any user can add a toleration for the master NoSchedule taints. This is potentially dangerous and not in line with the intent of those taints: non-admin users should not be able to schedule any pods on the masters.
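For reference, upstream Kubernetes does ship an alpha PodTolerationRestriction admission plugin that can whitelist tolerations per namespace via an annotation. It is not enabled in OpenShift by default, so the fragment below is only a sketch of the kind of per-namespace control this RFE asks for (the namespace name and the whitelisted toleration key are made up for illustration):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: user1
  annotations:
    # With PodTolerationRestriction enabled, only tolerations in this JSON
    # list may appear on pods in the namespace; a toleration for
    # node-role.kubernetes.io/master:NoSchedule would be rejected.
    scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{"key": "example.com/dedicated", "operator": "Exists", "effect": "NoSchedule"}]'
```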
4. List any affected packages or components.
—
Further details:
In upstream Kubernetes, nothing prevents a non-admin user from creating any toleration for any taint, at least according to the documentation:
https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
It seems to be exactly the same in OpenShift.
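To make the problem concrete, here is a minimal Python sketch (not OpenShift code) of the toleration-matching rule described in the taint/toleration documentation. It shows why a pod-author-supplied toleration is all it takes to defeat a NoSchedule taint:

```python
def tolerates(toleration: dict, taint: dict) -> bool:
    """Return True if the toleration matches the taint (Kubernetes semantics)."""
    # An empty effect on the toleration matches any taint effect.
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    # An empty key with operator Exists matches every taint key.
    if toleration.get("key") and toleration["key"] != taint["key"]:
        return False
    op = toleration.get("operator", "Equal")
    if op == "Exists":
        return True
    return toleration.get("value") == taint.get("value")

master_taint = {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
user_toleration = {"key": "node-role.kubernetes.io/master",
                   "operator": "Exists", "effect": "NoSchedule"}

print(tolerates(user_toleration, master_taint))  # True: the taint is bypassed
```

Because the match is evaluated purely from the pod spec, nothing in the scheduler itself authorizes who may declare the toleration.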
Here's my test with 4.6.23:
Creating an unprivileged user and logging in with that user:
[root@openshift-jumpserver-0 ~]# export KUBECONFIG=/root/openshift-install/auth/kubeconfig
[root@openshift-jumpserver-0 ~]# oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.23    True        False         32d     Cluster version is 4.6.23
[root@openshift-jumpserver-0 ~]# htpasswd -c -B -b users.htpasswd user1 MyPassword!
Adding password for user user1
[root@openshift-jumpserver-0 ~]# oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config
secret/htpass-secret created
[root@openshift-jumpserver-0 ~]# cat <<'EOF' | oc apply -f -
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
EOF
[root@openshift-jumpserver-0 ~]# oc login -u user1 --password=MyPassword!
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>

[root@openshift-jumpserver-0 ~]# oc whoami
user1
As the unprivileged user, creating a project and a pod that tolerates the master NoSchedule taint:

oc new-project user1

cat <<'EOF' > pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: fedora-pod
  labels:
    app: fedora-pod
spec:
  containers:
  - name: fedora
    image: fedora
    command:
    - sleep
    - infinity
    imagePullPolicy: IfNotPresent
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - "openshift-master-0"
EOF

oc apply -f pod.yaml
[root@openshift-jumpserver-0 ~]# oc get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE                 NOMINATED NODE   READINESS GATES
fedora-pod   1/1     Running   0          21s   172.26.0.69   openshift-master-0   <none>           <none>
[root@openshift-jumpserver-0 ~]# oc whoami
user1