Red Hat OpenShift AI Engineering / RHOAIENG-6515

[css] Consider documenting an alternative way to access the inference endpoint for a deployed model


    • Type: Story
    • Resolution: Unresolved
    • Priority: Normal
    • Component: Documentation
      In https://access.redhat.com/documentation/en-us/red_hat_openshift_ai_self-managed/2-latest/html/serving_models/serving-large-models_serving-large-models#accessing-inference-endpoints-for-models-deployed-on-single-model-serving-platform_serving-large-models, the method we document for accessing the inference endpoint of a deployed model is via the Projects page.

      In the current (that is, 2.9) dashboard layout, you can also access the inference endpoint from the Model Serving page. Consider documenting this approach as well.
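
      Whichever page the endpoint URL is copied from, invoking the model is the same. Below is a minimal sketch of querying an inference endpoint, assuming a model served by a runtime that speaks the KServe v2 REST protocol; the endpoint URL, model name, token, and input tensor are placeholder values, not taken from this issue.

      # Send a KServe v2 REST inference request to a deployed model.
      # All values below are placeholders for illustration only.
      import requests

      ENDPOINT = "https://example-model-project.apps.example.com"  # URL copied from the dashboard
      MODEL_NAME = "example-model"
      TOKEN = "<authorization-token>"  # needed only if token authentication is enabled

      # One named input tensor, flattened in row-major order.
      payload = {
          "inputs": [
              {
                  "name": "input-0",
                  "shape": [1, 4],
                  "datatype": "FP32",
                  "data": [0.1, 0.2, 0.3, 0.4],
              }
          ]
      }

      response = requests.post(
          f"{ENDPOINT}/v2/models/{MODEL_NAME}/infer",
          headers={"Authorization": f"Bearer {TOKEN}"},
          json=payload,
          timeout=30,
      )
      response.raise_for_status()
      print(response.json())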

            Assignee: John Byrne (jbyrne@redhat.com)
            Reporter: John Byrne (jbyrne@redhat.com)
            Votes: 0
            Watchers: 0

              Created:
              Updated: