Feature Request
Resolution: Unresolved
Normal
4.16
Future Sustainability
The current documentation does not provide a detailed procedure for deploying OpenShift Lightspeed Self-Managed in an on-premises data center.
While the documentation includes the required Custom Resource (CR) file, it lacks a clear procedure, or references to the relevant Red Hat documentation, for enabling the APIs listed in the CR. Please include a link to the documentation that explains how to configure vLLM to serve the API that Lightspeed requires.
Additionally, the CR file specifies credentialsSecretRef, but there is no guidance on how to create this secret for the various operational scenarios, such as a self-hosted vLLM deployment. The procedure for generating the necessary credentials is missing.
For instance, can credentialsSecretRef be created using the following command?
oc create secret generic ollama-credentials --from-literal=apitoken='' -n openshift-operators
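For context, here is a minimal sketch of how a secret created that way might be referenced from the OLSConfig CR. The provider name, endpoint URL, and model name are illustrative assumptions, not values taken from the product documentation:

```yaml
apiVersion: ols.openshift.io/v1alpha1
kind: OLSConfig
metadata:
  name: cluster
spec:
  llm:
    providers:
      - name: ollama                 # illustrative provider name
        type: openai                 # assumes an OpenAI-compatible endpoint type
        url: http://ollama.example.internal:11434/v1  # illustrative URL
        credentialsSecretRef:
          name: ollama-credentials   # the secret created with the oc command above
        models:
          - name: llama3             # illustrative model name
```

Documenting an end-to-end example like this, with the correct field names confirmed, would resolve much of the ambiguity.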
Currently, customers must rely on trial and error. In this case, the customer has deployed a custom Ollama instance that exposes an OpenAI-compatible API on a single GPU worker node. This API does not require certificates. However, the CR file does not support this configuration, because it always mandates a secret.
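Until the documentation covers the no-authentication case, one possible workaround (an untested assumption, not a documented procedure) is to satisfy the mandatory secret with a placeholder token that the endpoint simply ignores:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ollama-credentials
  namespace: openshift-operators
type: Opaque
stringData:
  apitoken: placeholder  # the no-auth endpoint ignores this value; present only to satisfy the CR schema
```

Whether the operator actually accepts a placeholder or empty token is precisely the question this feature request asks the documentation to answer.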
A documented procedure for handling such scenarios would greatly improve the deployment experience.