Red Hat Enterprise Linux AI
RHELAI-2692

[Doc Improvement] Fix ilab data generate --num-cpu 4


    • Type: Bug
    • Resolution: Done
    • Priority: Normal
    • Component: Documentation

      Replace `ilab data generate --num-cpu 4` with `ilab data generate --num-cpus 4 --enable-serving-output`.

      The command below does not work:

       ilab data generate --num-cpu 4 
      
      [instruct@bastion ~]$ ilab data generate --num-cpu 4 --enable-serving-output
      Usage: ilab data generate [OPTIONS]
      Try 'ilab data generate --help' for help.
      
      
      Error: No such option: --num-cpu Did you mean --num-cpus?
      [instruct@bastion ~]$ ilab data generate --num-cpu '4' --enable-serving-output
      Usage: ilab data generate [OPTIONS]
      Try 'ilab data generate --help' for help.
      
      
      Error: No such option: --num-cpu Did you mean --num-cpus?
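
      As the error message suggests, you can confirm the supported option name from the command's own help output (a quick check; it assumes a standard shell with grep available):

       ilab data generate --help | grep -- '--num-cpus'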
      

      The correct command is:

      [instruct@bastion ~]$ ilab data generate --num-cpus 4 --enable-serving-output
      INFO 2024-12-12 13:48:25,926 numexpr.utils:148: Note: NumExpr detected 48 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
      INFO 2024-12-12 13:48:25,926 numexpr.utils:161: NumExpr defaulting to 16 threads.
      INFO 2024-12-12 13:48:27,049 datasets:59: PyTorch version 2.3.1 available.
      INFO 2024-12-12 13:48:28,778 instructlab.model.backends.vllm:105: Trying to connect to model server at http://127.0.0.1:8000/v1
      INFO 2024-12-12 13:48:29,934 instructlab.model.backends.vllm:308: vLLM starting up on pid 63 at http://127.0.0.1:54567/v1
      INFO 2024-12-12 13:48:29,934 instructlab.model.backends.vllm:114: Starting a temporary vLLM server at http://127.0.0.1:54567/v1
      INFO 2024-12-12 13:48:29,934 instructlab.model.backends.vllm:129: Waiting for the vLLM server to start at http://127.0.0.1:54567/v1, this might take a moment... Attempt: 1/120
      INFO 2024-12-12 13:48:33,178 instructlab.model.backends.vllm:129: Waiting for the vLLM server to start at http://127.0.0.1:54567/v1, this might take a moment... Attempt: 2/120
      

       

      You can also exclude the `--enable-serving-output` option, as shown below.
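
      Without that flag, the command reduces to the following minimal form (a sketch; the remaining generation settings are assumed to come from your ilab configuration defaults):

      [instruct@bastion ~]$ ilab data generate --num-cpus 4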

       

      To Reproduce

      Steps to reproduce the behavior:

      1. Go to '...'
      2. Click on '....'
      3. Scroll down to '....'
      4. See error

      Expected behavior

      • <your text here>

      Screenshots

      • Attached Image 

      Device Info (please complete the following information):

      • Hardware Specs: [e.g. Apple M2 Pro Chip, 16 GB Memory, etc.]
      • OS Version: [e.g. Mac OS 14.4.1, Fedora Linux 40]
      • InstructLab Version: [output of `ilab --version`]
      • Provide the output of these two commands (see the example below):
        • sudo bootc status --format json | jq .status.booted.image.image.image to print the name and tag of the bootc image; it should look like registry.stage.redhat.io/rhelai1/bootc-intel-rhel9:1.3-1732894187
        • ilab system info to print detailed information about the InstructLab version, OS, and hardware, including GPU / AI accelerator hardware
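
      For convenience, the two commands listed above can be run back to back (output omitted here):

      # Print the name and tag of the booted bootc image
      sudo bootc status --format json | jq .status.booted.image.image.image
      # Print detailed InstructLab, OS, and hardware information, including GPU / AI accelerators
      ilab system info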

      Additional context

      • <your text here>

              Assignee: Kelly Brown (kelbrown@redhat.com)
              Reporter: Abhijit Roy (rhn-support-abroy)