Bug
Resolution: Obsolete
Normal
None
None
1
False
False
rhel-sst-network-fastdatapath
ssg_networking
Low
When we run ovn-ctl as non-root (in a pod using SCC=restricted-v2), we can see the following warnings in the log:
chown: changing ownership of '/etc/ovn/.ovnnb_db.db.~lock~': Operation not permitted
chown: changing ownership of '/etc/ovn/ovnnb_db.db': Operation not permitted
chown: changing ownership of '/etc/ovn': Operation not permitted
chown: changing ownership of '/tmp': Operation not permitted
chown: changing ownership of '/tmp': Operation not permitted
chown: changing ownership of '/etc/ovn/.ovnnb_db.db.~lock~': Operation not permitted
chown: changing ownership of '/etc/ovn/ovnnb_db.db': Operation not permitted
chown: changing ownership of '/etc/ovn': Operation not permitted
The errors don't affect service startup and seem cosmetic to me.
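For the record, changing a file's owner requires CAP_CHOWN, which restricted-v2 pods drop, so any chown attempt inside the pod reproduces the same failure (a trivial check, target path arbitrary):
$ chown root:root /tmp
chown: changing ownership of '/tmp': Operation not permitted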
This happens because the script attempts to change the owner of the files and directories:
# Set the owner of the ovn_dbdir (with -R option) to OVN_USER if set.
# This is required because the ovndbs are created with root permission
# if not present when create_cluster/upgrade_db is called.
INSTALL_USER="root"
INSTALL_GROUP="root"
[ "$OVN_USER" != "" ] && INSTALL_USER="${OVN_USER%:*}"
[ "${OVN_USER##*:}" != "" ] && INSTALL_GROUP="${OVN_USER##*:}"
chown -R $INSTALL_USER:$INSTALL_GROUP $ovn_dbdir
chown -R $INSTALL_USER:$INSTALL_GROUP $OVN_RUNDIR
chown -R $INSTALL_USER:$INSTALL_GROUP $ovn_logdir
chown -R $INSTALL_USER:$INSTALL_GROUP $ovn_etcdir
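For reference, those two parameter expansions split OVN_USER on the colon; a quick illustration with a hypothetical user:group value:
$ OVN_USER="openvswitch:hugetlbfs"
$ echo "${OVN_USER%:*}"     # shortest ':*' suffix removed -> user part
openvswitch
$ echo "${OVN_USER##*:}"    # longest '*:' prefix removed -> group part
hugetlbfs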
But since the current user is not root, the chown commands fail.
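One way the script could stay quiet would be to skip the chown calls when not running as root; a minimal sketch against the snippet above (my suggestion only, not the actual upstream change):
# sketch: only attempt to fix ownership when we can actually do it
if [ "$(id -u)" = "0" ]; then
    chown -R $INSTALL_USER:$INSTALL_GROUP $ovn_dbdir
    chown -R $INSTALL_USER:$INSTALL_GROUP $OVN_RUNDIR
    chown -R $INSTALL_USER:$INSTALL_GROUP $ovn_logdir
    chown -R $INSTALL_USER:$INSTALL_GROUP $ovn_etcdir
fi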
$ oc exec -it ovsdbserver-nb-0 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-5.1$ ls -ld /etc/ovn
drwxrwsr-x. 2 root 1000660000 64 Jan 5 17:30 /etc/ovn
bash-5.1$ ls -l /etc/ovn
total 20
-rw-r-----. 1 1000660000 1000660000 16478 Jan 5 17:30 ovnnb_db.db
bash-5.1$ whoami
1000660000
bash-5.1$ id -g
0
bash-5.1$ id -G
0 1000660000
Above, /etc/ovn is backed by a PVC that OpenShift mounts into the container as follows:
volumeMounts:
- mountPath: /etc/ovn
  name: ovndbcluster-nb-etc-ovn
volumes:
- name: ovndbcluster-nb-etc-ovn
  persistentVolumeClaim:
    claimName: ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0
$ oc get -o yaml -n openstack pvc ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: "2024-01-04T22:37:25Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    service: ovsdbserver-nb
  name: ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0
  namespace: openstack
  ownerReferences:
  - apiVersion: ovn.openstack.org/v1beta1
    blockOwnerDeletion: false
    controller: true
    kind: OVNDBCluster
    name: ovndbcluster-nb
    uid: 3526c8e5-d84d-4abb-a010-f0ec9491d63b
  resourceVersion: "49602"
  uid: 816c3823-3932-4dbc-9572-161c8409bd63
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10G
  storageClassName: local-storage
  volumeMode: Filesystem
  volumeName: local-storage11-crc-74q6p-master-0
status:
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  - ReadOnlyMany
  capacity:
    storage: 10Gi
  phase: Bound
$ oc get -o yaml -n openstack pv local-storage11-crc-74q6p-master-0
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{"pv.kubernetes.io/provisioned-by":"crc-devsetup"},"labels":{"provisioned-by":"crc-devsetup"},"name":"local-storage11-crc-74q6p-master-0"},"spec":{"accessModes":["ReadWriteOnce","ReadWriteMany","ReadOnlyMany"],"capacity":{"storage":"10Gi"},"local":{"path":"/mnt/openstack/pv11","type":"DirectoryOrCreate"},"nodeAffinity":{"required":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"kubernetes.io/hostname","operator":"In","values":["crc-74q6p-master-0"]}]}]}},"persistentVolumeReclaimPolicy":"Delete","storageClassName":"local-storage","volumeMode":"Filesystem"}}
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: crc-devsetup
  creationTimestamp: "2024-01-04T22:21:10Z"
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    provisioned-by: crc-devsetup
  name: local-storage11-crc-74q6p-master-0
  resourceVersion: "49592"
  uid: b7dce01a-553b-4ef3-a9f9-c532456a4449
spec:
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  - ReadOnlyMany
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0
    namespace: openstack
    resourceVersion: "49578"
    uid: 816c3823-3932-4dbc-9572-161c8409bd63
  local:
    path: /mnt/openstack/pv11
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - crc-74q6p-master-0
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  volumeMode: Filesystem
status:
  phase: Bound
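Side note: the root-owned directory with group 1000660000 and the setgid bit shown earlier match what Kubernetes does when a pod's fsGroup is applied to a volume at mount time; assuming the pod and namespace from above, the allocated value can be inspected with:
$ oc get pod -n openstack ovsdbserver-nb-0 -o jsonpath='{.spec.securityContext}{"\n"}'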
Note: when I tried to pass
--ovn-user=$(id -u):$(id -u)
in the hope that this would make the chown commands a no-op and silence the errors, the warnings were indeed gone, but ovsdb-server then failed to start with the following message:
ovsdb-server: /tmp/ovnnb_db.pid: only root can use --user option
uid = getuid();
if (geteuid() || uid) {
    VLOG_FATAL("%s: only root can use --user option", pidfile);
}
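So --user is only usable when the daemon starts as root (and then drops privileges); to both silence the chown warnings and keep ovsdb-server starting, a wrapper would have to pass the flag conditionally, something like this sketch (my illustration, not ovn-ctl's actual logic):
# forward --user only when running as root; otherwise start with the current uid
user_opt=
if [ "$(id -u)" = "0" ] && [ -n "$OVN_USER" ]; then
    user_opt="--user=$OVN_USER"
fi
ovsdb-server $user_opt "$@"   # pass through the remaining server arguments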
PS: I also noticed that the script lists a --ovs-user argument in its help text, but when you actually pass it, the script complains that the option is unknown:
/usr/share/ovn/scripts/ovn-ctl: unknown option "--ovs-user=1000660000:0" (use --help for help)
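To double-check that mismatch, the option string can be searched for in the script; on an affected build it should show up only in the usage text, not in the option parser (output varies by version):
$ grep -n 'ovs-user' /usr/share/ovn/scripts/ovn-ctl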
clones: FDP-245 ovn-ctl generates warnings in log when run ovsdb-server as non-root from a pod (Closed)