su - rhcert -c 'ilab system info'
stdout:
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 4 CUDA devices:
  Device 0: NVIDIA L40S, compute capability 8.9, VMM: yes
  Device 1: NVIDIA L40S, compute capability 8.9, VMM: yes
  Device 2: NVIDIA L40S, compute capability 8.9, VMM: yes
  Device 3: NVIDIA L40S, compute capability 8.9, VMM: yes
Platform:
  sys.version: 3.11.7 (main, Jan 8 2025, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)]
  sys.platform: linux
  os.name: posix
  platform.release: 5.14.0-427.65.1.el9_4.x86_64
  platform.machine: x86_64
  platform.node: localhost.localdomain
  platform.python_version: 3.11.7
  os-release.ID: rhel
  os-release.VERSION_ID: 9.4
  os-release.PRETTY_NAME: Red Hat Enterprise Linux 9.4 (Plow)
  memory.total: 754.93 GB
  memory.available: 747.56 GB
  memory.used: 2.50 GB

InstructLab:
  instructlab.version: 0.26.1
  instructlab-dolomite.version: 0.2.0
  instructlab-eval.version: 0.5.1
  instructlab-quantize.version: 0.1.0
  instructlab-schema.version: 0.4.2
  instructlab-sdg.version: 0.8.2
  instructlab-training.version: 0.10.2

Torch:
  torch.version: 2.6.0
  torch.backends.cpu.capability: AVX512
  torch.version.cuda: 12.4
  torch.version.hip: None
  torch.cuda.available: True
  torch.backends.cuda.is_built: True
  torch.backends.mps.is_built: False
  torch.backends.mps.is_available: False
  torch.cuda.bf16: True
  torch.cuda.current.device: 0
  torch.cuda.0.name: NVIDIA L40S
  torch.cuda.0.free: 43.9 GB
  torch.cuda.0.total: 44.3 GB
  torch.cuda.0.capability: 8.9 (see https://developer.nvidia.com/cuda-gpus#compute)
  torch.cuda.1.name: NVIDIA L40S
  torch.cuda.1.free: 43.9 GB
  torch.cuda.1.total: 44.3 GB
  torch.cuda.1.capability: 8.9 (see https://developer.nvidia.com/cuda-gpus#compute)
  torch.cuda.2.name: NVIDIA L40S
  torch.cuda.2.free: 43.9 GB
  torch.cuda.2.total: 44.3 GB
  torch.cuda.2.capability: 8.9 (see https://developer.nvidia.com/cuda-gpus#compute)
  torch.cuda.3.name: NVIDIA L40S
  torch.cuda.3.free: 43.9 GB
  torch.cuda.3.total: 44.3 GB
  torch.cuda.3.capability: 8.9 (see https://developer.nvidia.com/cuda-gpus#compute)

llama_cpp_python:
  llama_cpp_python.version: 0.3.6
  llama_cpp_python.supports_gpu_offload: True

stderr:
time="2025-09-23T10:34:03Z" level=warning msg="The input device is not a TTY. The --tty and --interactive flags might not work properly"
returned: 0

Running command - sudo -u rhcert -i -- bash -c ' ilab config init | tee /var/home/rhcert/ilab_init.log'
Review: Verify the ilab config init | tee /var/home/rhcert/ilab_init.log output
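
To review the init step by hand, a minimal sketch (assuming the InstructLab default config location, ~/.config/instructlab/config.yaml, for the rhcert user; adjust paths if your deployment differs):

sudo -u rhcert -i -- bash -c '
  # Inspect the captured init output for errors or unanswered prompts.
  tail -n 20 /var/home/rhcert/ilab_init.log
  # The generated config file should now exist at the default location.
  ls -l ~/.config/instructlab/config.yaml
'

If the config file is present and the log shows no errors, the ilab config init step can be considered verified.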