- Bug
- Resolution: Unresolved
- rhelai-1.5
To Reproduce
Steps to reproduce the behavior (see the sketch after this list):
- Run single-phase training on a system with 2x A100 GPUs
- Command: ilab model train -y --data-path <messages.jsonl> --is-padding-free False
- Training fails with the following error:
  RuntimeError: Suffered a failure during distributed training. Please see the training logs for more context.
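A minimal reproduction sketch, assuming a RHEL AI 1.5 system with both A100 devices visible and a messages.jsonl produced by data generation (the data path is a placeholder, exactly as in the command above):

  nvidia-smi -L    # confirm both NVIDIA A100 80GB PCIe devices are listed
  ilab model train -y --data-path <messages.jsonl> --is-padding-free False
  # fails with: RuntimeError: Suffered a failure during distributed training.
  #             Please see the training logs for more context.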
Expected behavior
- Single-phase training should complete successfully and the test should pass
Screenshots
- Logs attached
Device Info (please complete the following information):
- Hardware Specs: FUJITSU PRIMERGY RX2540 M7
- OS Version: RHEL AI 1.5 (NVIDIA)
- InstructLab Version: 0.26.1
- Output of sudo bootc status --format json | jq .status.booted.image.image.image (name and tag of the bootc image):
  - registry.redhat.io/rhelai1/bootc-nvidia-rhel9:1.5
  - Version: 9.20250429.0 (2025-05-15T19:47:15Z)
- Output of ilab system info (InstructLab version, OS, and hardware, including GPU / AI accelerator hardware):
- ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
Device 0: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: yes
Device 1: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: yes
Platform:
sys.version: 3.11.7 (main, Jan 8 2025, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)]
sys.platform: linux
os.name: posix
platform.release: 5.14.0-427.65.1.el9_4.x86_64
platform.machine: x86_64
platform.node: localhost.localdomain
platform.python_version: 3.11.7
os-release.ID: rhel
os-release.VERSION_ID: 9.4
os-release.PRETTY_NAME: Red Hat Enterprise Linux 9.4 (Plow)
memory.total: 4029.57 GB
memory.available: 4012.32 GB
memory.used: 6.01 GB
InstructLab:
instructlab.version: 0.26.1
instructlab-dolomite.version: 0.2.0
instructlab-eval.version: 0.5.1
instructlab-quantize.version: 0.1.0
instructlab-schema.version: 0.4.2
instructlab-sdg.version: 0.8.2
instructlab-training.version: 0.10.2
Torch:
torch.version: 2.6.0
torch.backends.cpu.capability: AVX512
torch.version.cuda: 12.4
torch.version.hip: None
torch.cuda.available: True
torch.backends.cuda.is_built: True
torch.backends.mps.is_built: False
torch.backends.mps.is_available: False
torch.cuda.bf16: True
torch.cuda.current.device: 0
torch.cuda.0.name: NVIDIA A100 80GB PCIe
torch.cuda.0.free: 78.7 GB
torch.cuda.0.total: 79.1 GB
torch.cuda.0.capability: 8.0 (see https://developer.nvidia.com/cuda-gpus#compute)
torch.cuda.1.name: NVIDIA A100 80GB PCIe
torch.cuda.1.free: 78.7 GB
torch.cuda.1.total: 79.1 GB
torch.cuda.1.capability: 8.0 (see https://developer.nvidia.com/cuda-gpus#compute)
llama_cpp_python:
llama_cpp_python.version: 0.3.6
llama_cpp_python.supports_gpu_offload: True
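As a supplementary check (an assumption, not part of the original report), the following one-liner confirms that PyTorch in the InstructLab environment sees both A100 devices before distributed training is launched; the expected output matches the torch.cuda.0 / torch.cuda.1 entries above:

  python -c "import torch; print(torch.cuda.device_count()); print([torch.cuda.get_device_name(i) for i in range(torch.cuda.device_count())])"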
Bug impact
- Affects the certification test
Known workaround
- No known workaround
Additional context
- Certification was approved based on an earlier discussion that multi-phase training runs both skills and knowledge training as well as checkpoint selection