Openshift sandboxed containers / KATA-469

unable to use hostPath persistent storage


    • Type: Bug
    • Resolution: Not a Bug
    • Priority: Medium
    • Affects Version/s: 4.7, 4.8, 4.9, 4.10-1.2.0
    • Component/s: sandboxed-containers
      If you are using OpenShift sandboxed containers, you might receive SELinux denials when accessing files or directories mounted from the hostPath volume in an OpenShift Container Platform cluster. These denials can occur even when running privileged sandboxed containers because privileged sandboxed containers do not disable SELinux checks.

      Following SELinux policy on the host guarantees full isolation of the host file system from the sandboxed workload by default. This also provides stronger protection against potential security flaws in the virtiofsd daemon or QEMU.

      If the mounted files or directories do not have specific SELinux requirements on the host, you can use local persistent volumes as an alternative. Files are automatically relabeled to container_file_t, following SELinux policy for container runtimes. See Persistent storage using local volumes for more information.
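The local-volume alternative described above could look like the following sketch. This is not taken from the issue: the node name, storage class, and PV name are placeholders, and unlike a hostPath PV, a local PV requires node affinity.

```yaml
# Hedged sketch of a local PersistentVolume replacing the hostPath PV.
# "worker-0", "local-sc", and "local-pv-volume" are placeholder names.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-sc
  local:
    path: /mnt/data
  nodeAffinity:            # required for local volumes, unlike hostPath
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-0
```

With this layout, files under /mnt/data are relabeled to container_file_t automatically, so no custom SELinux policy is needed.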

      Automatic relabeling is not an option when mounted files or directories are expected to have specific SELinux labels on the host. Instead, you can set custom SELinux rules on the host to allow the virtiofsd daemon to access these specific labels. (BZ#1904609)
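A custom SELinux rule of the kind mentioned in the last paragraph could be packaged as a small type-enforcement module. Everything here is illustrative: `virtd_t` as the virtiofsd domain and `my_app_data_t` as the host label are assumptions, not values from this issue; the real source and target types should be derived from the AVC denials in the audit log (for example with audit2allow).

```
# virtiofsd_local.te -- hypothetical local policy module.
# virtd_t (assumed virtiofsd domain) and my_app_data_t (assumed host
# label) are placeholders; take the real types from your AVC denials.
module virtiofsd_local 1.0;

require {
    type virtd_t;
    type my_app_data_t;
    class file { read open getattr };
    class dir { read search open getattr };
}

allow virtd_t my_app_data_t:file { read open getattr };
allow virtd_t my_app_data_t:dir { read search open getattr };
```

Such a module would be compiled and loaded on the node with the standard checkmodule / semodule_package / semodule toolchain.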
    • Known Issue
    • Done
    • Kata Sprint #196
    • 0
    • 0

      Description of problem:
      From the doc:
      https://docs.openshift.com/container-platform/4.6/storage/persistent_storage/persistent-storage-hostpath.html

      On the kata node, create /mnt/data.

$ cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data"

      $ oc create -f pv.yaml

$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pvc-volume
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: manual

      $ oc create -f pvc.yaml

$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sleep
spec:
  containers:
  - name: pod-sleep
    image: ubi8
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /data
      name: hostpath-privileged
    command: ["sleep"]
    args: ["10000"]
  securityContext: {}
  volumes:
  - name: hostpath-privileged
    persistentVolumeClaim:
      claimName: task-pvc-volume
  runtimeClassName: kata

      $ oc create -f pod.yaml

The pod cannot be created:

      Warning Failed 30s (x11 over 10m) kubelet Error: CreateContainer failed: Timeout reached after 10s waiting for device 0:0:0:0/block: unknown

      Version-Release number of selected component (if applicable):
      kata 1.11.3-1.el8

      How reproducible:
      always

            Assignee: jira-bugzilla-migration (RH Bugzilla Integration)
            Reporter: qcai@redhat.com Qian Cai (Inactive)
            Votes: 0
            Watchers: 11