OpenShift Bugs / OCPBUGS-62767
Nvidia GPU Operator - Exposed NVIDIA Persistence Socket in Container

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Affects Version/s: 4.18.z
    • Category: Quality / Stability / Reliability
    • Architecture: x86_64

      Our pentest team reports that containers have access to /run/nvidia-persistenced/socket. We are using the NVIDIA GPU Operator installed from the OpenShift cluster marketplace, and we are not mounting this socket manually.

      The GPU Operator version is:
      wxai-prod-mum # oc get csv | grep gpu
      gpu-operator-certified.v25.3.0   NVIDIA GPU Operator   25.3.0   gpu-operator-certified.v24.9.2   Succeeded
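
      To confirm the socket is not declared by the workload itself (i.e. it is injected by the NVIDIA container runtime rather than mounted by us), the pod spec can be checked for a matching volume. A minimal sketch, where <pod> is a placeholder for any GPU workload pod:

      # No hit here means the mount does not come from the pod spec itself.
      oc get pod <pod> -o yaml | grep -i persistenced \
        || echo "no persistenced volume declared in the pod spec"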

      Container/Pod Security Vulnerability
      196835: Exposed NVIDIA Persistence Socket in Container (High)
      Description
      During active reconnaissance, it was observed that the container has access to the NVIDIA persistence daemon socket located at /run/nvidia-persistenced/socket. This socket is typically used by NVIDIA drivers on the host to manage GPU resources. Exposing it to containers introduces a security risk, as it may allow unauthorized access to GPU control operations or interaction with the host's GPU management services.
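
      The exposure can be demonstrated beyond a file listing by connecting to the socket from inside the container. A minimal sketch, assuming python3 is available in the image (the process table below shows python running) and that nvidia-persistenced listens on a Unix stream socket:

      # A successful connect() alone shows that a host GPU-management service is
      # reachable from the workload; the daemon's RPC protocol is private.
      python3 -c 'import socket; s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM); s.connect("/run/nvidia-persistenced/socket"); print("connect() succeeded: host nvidia-persistenced is reachable")'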

      Steps to Reproduce (a consolidated script follows this list):
      1. Verified the NVIDIA socket is mounted using grep sock /etc/mtab.
      2. Confirmed /run/nvidia-persistenced/socket exists with world-accessible permissions.
      3. Listed /dev/nvidia* to confirm GPU device files are exposed in the container.
      4. Ran nvidia-smi to verify GPU usage and container-level access to GPU resources.
      5. Retrieved the NVIDIA driver version from /proc/driver/nvidia/version for CVE reference.
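
      A minimal consolidation of the steps above (a sketch, to be run inside a GPU-attached container, e.g. via oc rsh; note that grep sock /etc/mtab can also match unrelated sockets):

      #!/bin/sh
      # Run inside a GPU-attached container.
      grep sock /etc/mtab                     # 1. socket-backed mounts
      ls -l /run/nvidia-persistenced/socket   # 2. persistence socket and its permissions
      ls -l /dev/nvidia*                      # 3. exposed GPU device files
      nvidia-smi                              # 4. GPU visibility from the container
      cat /proc/driver/nvidia/version         # 5. driver version for CVE lookup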

      Example pod running on a GPU (this is observed for all containers using GPUs):

      wxai-prod-mum # oc rsh fms-detector-hap-56b6f894c9-j9gkk
      sh-5.1$ ls -l /run/nvidia-persistenced/socket
      srwxrwxrwx. 1 root root 0 May 30 17:37 /run/nvidia-persistenced/socket
      sh-5.1$ ls -l /dev/nvidia*
      crw-rw-rw-. 1 root root 195, 254 Jun 19 17:05 /dev/nvidia-modeset
      crw-rw-rw-. 1 root root 511, 0 May 30 17:37 /dev/nvidia-uvm
      crw-rw-rw-. 1 root root 511, 1 May 30 17:37 /dev/nvidia-uvm-tools
      crw-rw-rw-. 1 root root 195, 2 May 30 17:37 /dev/nvidia2
      crw-rw-rw-. 1 root root 195, 255 May 30 17:37 /dev/nvidiactl
      sh-5.1$ nvidia-smi
      Fri Jul 11 17:27:48 2025
      +-----------------------------------------------------------------------------------------+
      | NVIDIA-SMI 570.124.06             Driver Version: 570.124.06     CUDA Version: 12.8     |
      |-----------------------------------------+------------------------+----------------------+
      | GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
      | Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
      |                                         |                        |               MIG M. |
      |=========================================+========================+======================|
      |   0  NVIDIA L4                      On  |   00000000:3C:00.0 Off |                    0 |
      | N/A   47C    P0             28W /  72W  |    1293MiB /  23034MiB |      0%      Default |
      |                                         |                        |                  N/A |
      +-----------------------------------------+------------------------+----------------------+

      +-----------------------------------------------------------------------------------------+
      | Processes:                                                                              |
      |  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
      |        ID   ID                                                               Usage      |
      |=========================================================================================|
      |    0   N/A  N/A          1      C   python                                      1284MiB |
      +-----------------------------------------------------------------------------------------+
      sh-5.1$ cat /proc/driver/nvidia/version
      NVRM version: NVIDIA UNIX Open Kernel Module for x86_64 570.124.06 Release Build (dvs-builder@U22-I3-AE18-09-6) Wed Feb 26 01:52:55 UTC 2025
      GCC version: gcc version 11.4.1 20231218 (Red Hat 11.4.1-4) (GCC)
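
      Since the finding applies to every GPU-attached container, a cluster-side sweep such as this sketch can enumerate affected pods in the current namespace (assumes oc access and that the pod images provide a shell with test):

      # `test -S` checks specifically for a socket file; errors from pods
      # without a usable shell are discarded.
      for pod in $(oc get pods -o name); do
        if oc exec "$pod" -- test -S /run/nvidia-persistenced/socket 2>/dev/null; then
          echo "$pod: /run/nvidia-persistenced/socket exposed"
        fi
      done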

        Assignee: Unassigned
        Reporter: shankar pentyala (shankarpentyala07)