- Bug
- Resolution: Done
- Undefined
- rhelai-1.3
To Reproduce
Steps to reproduce the behavior:
1. Create a qna.yaml file in any subdirectory of the default "/root/.local/share/instructlab/taxonomy/knowledge/" directory.
2. Run the "ilab taxonomy diff" command to test your qna.yaml file (a condensed reproduction sketch follows the output below).
3. If the file has any errors, the command fails, but the error message it prints is blank, as in the following output:
[root@bastion redhataccelerators]# ilab taxonomy diff knowledge/technology/redhataccelerators/qna.yaml
Reading taxonomy failed with the following error:
[root@bastion redhataccelerators]#
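For quick triage, the three steps above can be condensed into a shell sketch like the one below; the directory name, the file contents, and the deliberately broken field are illustrative assumptions rather than the exact taxonomy from this report:

mkdir -p /root/.local/share/instructlab/taxonomy/knowledge/technology/example
cd /root/.local/share/instructlab/taxonomy/knowledge/technology/example
cat > qna.yaml <<'EOF'
version: 3
domain: technology
created_by: example-user
seed_examples: not-a-list   # deliberately invalid: seed_examples must be a list of context blocks
EOF
ilab taxonomy diff knowledge/technology/example/qna.yaml
# Per this report, the command fails but prints only:
#   Reading taxonomy failed with the following error:

Per step 3, any error in the file is reported the same way, with nothing printed after the colon.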
Expected behavior
The command should output the actual error that was found.
Screenshots
- Attached Image
Device Info (please complete the following information):
- Hardware Specs: [e.g. Apple M2 Pro Chip, 16 GB Memory, etc.]
[root@bastion redhataccelerators]# hostnamectl
   Static hostname: bastion.hrdnh.internal
         Icon name: computer-vm
           Chassis: vm 🖴
        Machine ID: ec2a61c5ad048b14138ac906def5cd15
           Boot ID: 9ce3fe82342148229c642ccaf015e8ab
    Virtualization: amazon
  Operating System: Red Hat Enterprise Linux 9.20241104.0.4 (Plow)
       CPE OS Name: cpe:/o:redhat:enterprise_linux:9::baseos
            Kernel: Linux 5.14.0-427.42.1.el9_4.x86_64
      Architecture: x86-64
   Hardware Vendor: Amazon EC2
    Hardware Model: g6.12xlarge
  Firmware Version: 1.0
[root@bastion redhataccelerators]# free -ht
               total        used        free      shared  buff/cache   available
Mem:           181Gi       2.2Gi       179Gi       1.0Mi       1.7Gi       179Gi
Swap:          8.0Gi          0B       8.0Gi
Total:         189Gi       2.2Gi       187Gi
[root@bastion redhataccelerators]# nproc
48
[root@bastion redhataccelerators]# nvidia-smi
Mon Mar  3 18:03:13 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.127.05             Driver Version: 550.127.05     CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA L4                      On  |   00000000:38:00.0 Off |                    0 |
| N/A   27C    P8             11W /  72W  |      1MiB /  23034MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA L4                      On  |   00000000:3A:00.0 Off |                    0 |
| N/A   26C    P8             11W /  72W  |      1MiB /  23034MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   2  NVIDIA L4                      On  |   00000000:3C:00.0 Off |                    0 |
| N/A   29C    P8             11W /  72W  |      1MiB /  23034MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   3  NVIDIA L4                      On  |   00000000:3E:00.0 Off |                    0 |
| N/A   26C    P8             11W /  72W  |      1MiB /  23034MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
- OS Version: [e.g. Mac OS 14.4.1, Fedora Linux 40]
[root@bastion redhataccelerators]# cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="9.20241104.0.4 (Plow)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="9.4"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Red Hat Enterprise Linux 9.20241104.0.4 (Plow)"
ANSI_COLOR="0;31"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:redhat:enterprise_linux:9::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9"
BUG_REPORT_URL="https://issues.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 9"
REDHAT_BUGZILLA_PRODUCT_VERSION=9.4
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.4"
OSTREE_VERSION='9.20241104.0'
VARIANT="RHEL AI"
VARIANT_ID=rhel_ai
RHEL_AI_VERSION_ID='1.3.0'
[root@bastion redhataccelerators]# uname -a
Linux bastion.hrdnh.internal 5.14.0-427.42.1.el9_4.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Oct 18 14:35:40 EDT 2024 x86_64 x86_64 x86_64 GNU/Linux
- InstructLab Version: [output of ilab --version]
[root@bastion redhataccelerators]# ilab --version
ilab, version 0.21.2
- Provide the output of these two commands:
- sudo bootc status --format json | jq .status.booted.image.image.image to print the name and tag of the bootc image, should look like registry.stage.redhat.io/rhelai1/bootc-intel-rhel9:1.3-1732894187
[root@bastion redhataccelerators]# bootc status --format json | jq .status.booted.image.image.image
"registry.stage.redhat.io/rhelai1/bootc-aws-nvidia-rhel9:1.3.1"
- ilab system info to print detailed information about InstructLab version, OS, and hardware – including GPU / AI accelerator hardware
[root@bastion redhataccelerators]# ilab system info
Platform:
  sys.version: 3.11.7 (main, Oct  9 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)]
  sys.platform: linux
  os.name: posix
  platform.release: 5.14.0-427.42.1.el9_4.x86_64
  platform.machine: x86_64
  platform.node: bastion.hrdnh.internal
  platform.python_version: 3.11.7
  os-release.ID: rhel
  os-release.VERSION_ID: 9.4
  os-release.PRETTY_NAME: Red Hat Enterprise Linux 9.4 (Plow)
  memory.total: 181.77 GB
  memory.available: 179.52 GB
  memory.used: 0.81 GB

InstructLab:
  instructlab.version: 0.21.2
  instructlab-dolomite.version: 0.2.0
  instructlab-eval.version: 0.4.1
  instructlab-quantize.version: 0.1.0
  instructlab-schema.version: 0.4.1
  instructlab-sdg.version: 0.6.1
  instructlab-training.version: 0.6.1

Torch:
  torch.version: 2.4.1
  torch.backends.cpu.capability: AVX2
  torch.version.cuda: 12.4
  torch.version.hip: None
  torch.cuda.available: True
  torch.backends.cuda.is_built: True
  torch.backends.mps.is_built: False
  torch.backends.mps.is_available: False
  torch.cuda.bf16: True
  torch.cuda.current.device: 0
  torch.cuda.0.name: NVIDIA L4
  torch.cuda.0.free: 21.8 GB
  torch.cuda.0.total: 22.0 GB
  torch.cuda.0.capability: 8.9 (see https://developer.nvidia.com/cuda-gpus#compute)
  torch.cuda.1.name: NVIDIA L4
  torch.cuda.1.free: 21.8 GB
  torch.cuda.1.total: 22.0 GB
  torch.cuda.1.capability: 8.9 (see https://developer.nvidia.com/cuda-gpus#compute)
  torch.cuda.2.name: NVIDIA L4
  torch.cuda.2.free: 21.8 GB
  torch.cuda.2.total: 22.0 GB
  torch.cuda.2.capability: 8.9 (see https://developer.nvidia.com/cuda-gpus#compute)
  torch.cuda.3.name: NVIDIA L4
  torch.cuda.3.free: 21.8 GB
  torch.cuda.3.total: 22.0 GB
  torch.cuda.3.capability: 8.9 (see https://developer.nvidia.com/cuda-gpus#compute)

llama_cpp_python:
  llama_cpp_python.version: 0.2.79
  llama_cpp_python.supports_gpu_offload: True
Bug impact
Without knowing what the error is, the user can't fix it properly.
Known workaround
No known workaround.
Additional context
[root@bastion redhataccelerators]# pwd
/root/.local/share/instructlab/taxonomy/knowledge/technology/redhataccelerators
[root@bastion redhataccelerators]# cat qna.yaml
version: 3
domain: technology
created_by: AlexonOliveiraRH
seed_examples:
  - context: |
      The Red Hat Accelerators program is a global enterprise customer community of Red Hat technology experts and enthusiasts who willingly share their knowledge and expertise with peers in the industry, their community, and with Red Hat.
    questions_and_answers:
      - question: What kind of community is the Red Hat Accelerators?
        answer: It is a global enterprise customer community.
      - question: What kind of professional is part of the Red Hat Accelerators?
        answer: Red Hat technology experts and enthusiasts.
      - question: What kind of contributions do Accelerators community members make?
        answer: |
          They willingly share their knowledge and expertise with peers in the industry, their community, and with Red Hat.
  - context: |
      Members engage and contribute in the following ways:
      - Be involved: Network with industry peers who share your passion and interests.
      - Get inside: Gain exclusive access to product teams, executives and business leaders as well as upcoming releases and strategic direction.
      - Share knowledge: Teach others what you know from your experiences.
      - Join the conversation: Participate and contribute in technical discussions, product feedback sessions and share your insights.
    questions_and_answers:
      - question: What is expected of Accelerators community members in terms of engagement?
        answer: Be involved, get inside, share knowledge and join the conversation.
      - question: How can the members of Accelerators community join the conversation?
        answer: Participate and contribute in technical discussions, product feedback sessions and share your insights.
      - question: How can the members of Accelerators community share knowledge?
        answer: Teach others what you know from your experiences.
  - context: |
      One of the many benefits of being a member of the program is officially identifying yourself as a Red Hat Accelerator. A digital badge lets others know you are a part of our exclusive community group. Red Hat Accelerator badges may be used on personal websites, blogs, email signatures and social media networks. The details on proper usage of the digital badges will be provided to Red Hat Accelerators after being accepted into the program. Also, as a Red Hat Accelerator, you'll bolster your Red Hat technology knowledge, skills, and profile by having access to the following:
      - Peer to peer networking: Members join a unique community of Red Hat enterprise customers just like you, providing the opportunity to network, learn, and share your passion and interests.
      - Member perks: Members gain special privileges, including early access to products and technologies, cool swag merchandise, and VIP treatment at sponsored events.
      - Speak to Red Hat: You'll get direct access to Red Hat experts, product team members, and materials, allowing you to share your experience and provide feedback to influence product development and feature/function improvements.
      - Be recognized: Gain validation of your expertise and get the recognition you deserve by helping to build your public profile and persona.
      - Share your expertise: You have a unique set of experiences and expertise with Red Hat products. Share, contribute, and teach others in the community during technical discussions, group meetings, and events.
    questions_and_answers:
      - question: What is one of the many benefits of being a member of the Accelerators program?
        answer: |
          It is officially identifying yourself as a Red Hat Accelerator. A digital badge lets others know you are a part of our exclusive community group
      - question: What are five extended benefits of being part of the Accelerators community?
        answer: Peer to peer networking, member perks, speak to Red Hat, be recognized and share your expertise.
      - question: What are some of the perks for members of the Accelerators community?
        answer: |
          Members gain special privileges, including early access to products and technologies, cool swag merchandise, and VIP treatment at sponsored events.
  - context: |
      We want to hear from you, understand your unique IT needs, and build that long-term partnership your business relies on. This is your opportunity to be part of our unique enterprise customer community-sharing your feedback, insight, and experiences while networking with like-minded industry experts and gaining new professional skills along the way.
      - Feedback matters: Accelerator members provide trusted feedback, from real-world experience. We need your input to help us improve.
      - Spread the word: Members are encouraged to speak publicly and share their experience, knowledge, and solutions using Red Hat technologies.
      - Product validation: Your unique membership periodically provides early access to new products and features, before they are released, giving you the opportunity to see, demo, and validate Red Hat technologies for product teams.
      - Understanding your needs: Your experiences help us understand your unique IT requirements. These help us develop products you need to solve your most difficult business challenges.
    questions_and_answers:
      - question: What is the Accelerators community interested in hearing?
        answer: |
          Wants to hear from you, understand your unique IT needs, and build that long-term partnership your business relies on.
      - question: What are the four ways the Accelerators community uses to listen to its members?
        answer: |
          Feedback matters, spread the world, product validation, and understanding your needs.
      - question: What type of product validation do Accelerators community members have access to?
        answer: |
          Your unique membership periodically provides early access to new products and features, before they are released, giving you the opportunity to see, demo, and validate Red Hat technologies for product teams.
  - context: |
      The program is led by Andi Fisher, Lili Mihaila and Kat Ho, having Alexon Oliveira, Jason Beard, Jason Breitweg, Joshua Loscar, Kent Perrier, Matthew Sweikert, Morgan Peterman, Rick Greene and Uco Mesdag as fellow facilitators. The facilitators also take the lead in presentations called "TAM Talk" and "Lightening Talk", which are divided into three different pillars, currently Ansible, OpenShift and Platform, in addition to helping with the moderation and collaboration of the community's different internal communication channels. The program is made up of customers and partners around the world who share a desire to advocate for Red Hat.
    questions_and_answers:
      - question: Who leads the Red Hat Accelerators program?
        answer: The program is led by Andi Fisher, Lili Mihaila and Kat Ho.
      - question: Who helps by facilitating the Red Hat Accelerators program?
        answer: |
          Alexon Oliveira, Jason Beard, Jason Breitweg, Joshua Loscar, Kent Perrier, Matthew Sweikert, Morgan Peterman, Rick Greene and Uco Mesdag.
      - question: What are the three technology pillars of the Accelerators program?
        answer: Ansible, OpenShift and Platform.
document_outline: |
  Red Hat Accelerators' Description and Facts
document:
  repo: https://github.com/AlexonOliveiraRH/instructlab-examples.git
  commit: 2ba0d11f63b0b2d06a4b24d38efa069260921e54
  patterns:
    - redhataccelerators/facts.md
[root@bastion redhataccelerators]# ilab taxonomy diff knowledge/technology/redhataccelerators/qna.yaml
Reading taxonomy failed with the following error:
[root@bastion redhataccelerators]#
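As a possible triage aid rather than a workaround (the blank message itself is the bug), one way to check whether the hidden error is a plain YAML syntax problem is to parse the file directly. This is only a sketch and assumes python3 with PyYAML is importable in the environment where it is run:

# Hypothetical check: does the file parse as YAML at all? If it does, the hidden
# error presumably comes from schema validation or taxonomy handling instead.
python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1])); print("YAML parses cleanly")' qna.yaml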
- is blocked by: AIPCC-626 outlines_core fails to build with rust 1.84.1 (Closed)
- mentioned on