OpenShift Request For Enhancement · RFE-3189

Provide cluster name in RenderConfig


    • Type: Feature Request
    • Resolution: Unresolved
    • Priority: Minor
    • Component: MCO

      As a runtime-cfg developer
      I need to know the cluster name during asset rendering
      so that I can assign VIPs correctly.

      More detailed story:
      On bare metal systems we need to assign VIPs (e.g. for the API server). The templates for this are created by the runtime-cfg rendering. To generate the templates we need to know the cluster name and the cluster's base domain (runtimecfg pkg/config/node.go#L255-L262).

      Currently we get this information by splitting the API server URL from the kubeconfig on dots, assuming the form api.<clustername>.<domain> (node.go#L125).
      This can lead to problems if the cluster name itself contains a dot, as we then don't know where to split (see BZ 1971709).
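
      The ambiguity can be seen in a minimal sketch (the helper splitHostname is hypothetical, mimicking the split described above):

      package main

      import (
          "fmt"
          "strings"
      )

      // splitHostname mimics the current approach: strip the "api." prefix
      // and treat the first remaining DNS label as the cluster name.
      func splitHostname(host string) (clusterName, baseDomain string) {
          trimmed := strings.TrimPrefix(host, "api.")
          parts := strings.SplitN(trimmed, ".", 2)
          return parts[0], parts[1]
      }

      func main() {
          // Cluster name "my.cluster" with base domain "example.com":
          name, domain := splitHostname("api.my.cluster.example.com")
          // Prints "my cluster.example.com" -- the correct split point
          // ("my.cluster" / "example.com") cannot be recovered from the
          // hostname alone.
          fmt.Println(name, domain)
      }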

      The following solutions have been checked without success:

      • Get the cluster name from the cluster-config-v1 config map in the kube-system namespace (it is stored there in the install-config under .metadata.name) during the runtimecfg rendering.
        Issue: both kubeconfigs used (/var/lib/kubelet/kubeconfig and /etc/kubernetes/kubeconfig) lack the permissions to read this CM.
      • Use the .Infra.Status.InfrastructureName from the RenderConfig in MCO as the cluster name (commit 3610dbc0).
        Issue: the provided value appends a suffix to the cluster name and replaces all dots in the cluster name with dashes. The replacement could be undone by checking every combination of dashes turned back into dots. BUT the infraName has a maximum length of 27 characters, enforced by cutting characters off from the right, so important parts of the name can be lost (installer pkg/asset/installconfig/clusterid.go#L60-L79); see the sketch after this list.
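
      A minimal sketch of that derivation (constants and truncation order follow the description above, not necessarily the exact installer source) shows why the infraName is lossy:

      package main

      import (
          "fmt"
          "strings"
      )

      const maxInfraNameLen = 27

      // infraName mimics the described derivation: dots become dashes, a
      // random suffix is appended, and the result is truncated from the
      // right to at most 27 characters.
      func infraName(clusterName, suffix string) string {
          name := strings.ReplaceAll(clusterName, ".", "-") + "-" + suffix
          if len(name) > maxInfraNameLen {
              name = name[:maxInfraNameLen]
          }
          return name
      }

      func main() {
          // Prints "my-very-long-cluster-name-x": the original name
          // "my.very.long.cluster.name" is no longer recoverable.
          fmt.Println(infraName("my.very.long.cluster.name", "x7k2q"))
      }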

      As we render the required assets in MCO anyway (e.g. the init containers for keepalived: machine-config-operator templates/common/on-prem/files/keepalived.yaml#L36-42), the cluster name could be provided by MCO in the RenderConfig and passed as an environment variable to runtimecfg for the rendering, as sketched below.
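
      On the runtimecfg side this could look like the following sketch (the variable name CLUSTER_NAME is an assumption, not an existing interface), keeping the current hostname split as a fallback:

      package main

      import (
          "fmt"
          "os"
          "strings"
      )

      // clusterName prefers a value injected by MCO via the RenderConfig
      // and falls back to the current, ambiguous hostname split.
      func clusterName(apiHost string) string {
          if name := os.Getenv("CLUSTER_NAME"); name != "" {
              return name
          }
          return strings.SplitN(strings.TrimPrefix(apiHost, "api."), ".", 2)[0]
      }

      func main() {
          fmt.Println(clusterName("api.my.cluster.example.com"))
      }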

      One option for MCO could be to read the cluster-config-v1 config map itself, as its service account seems to have access to it:

      $ oc -n openshift-machine-config-operator exec -it machine-config-operator-54bfc5486-cgzhq -- sh
      sh-4.4$ APISERVER=https://kubernetes.default.svc
      sh-4.4$ SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
      sh-4.4$ NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
      sh-4.4$ TOKEN=$(cat ${SERVICEACCOUNT}/token)
      sh-4.4$ CACERT=${SERVICEACCOUNT}/ca.crt
      sh-4.4$ curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/kube-system/configmaps/cluster-config-v1
      {
        "kind": "ConfigMap",
        "apiVersion": "v1",
        "metadata": {
          "name": "cluster-config-v1",
          "namespace": "kube-system",
          ...
        },
        "data": {
          "install-config": ... 
      

      Another option for MCO could be to read the cluster name from the install config during bootstrapping and persist it in the ControllerConfigSpec. This way it would be available to other components later as well; a possible shape is sketched below.
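
      A hypothetical sketch of such a field (the name ClusterName and its placement are assumptions, not existing MCO API):

      // Package v1 stands in for MCO's machineconfiguration.openshift.io/v1
      // API types; only the sketched addition is shown.
      package v1

      // ControllerConfigSpec (existing fields elided) gains the cluster
      // name captured from the install-config during bootstrapping.
      type ControllerConfigSpec struct {
          // ClusterName is the install-config .metadata.name, persisted so
          // that other components can consume it after bootstrapping.
          ClusterName string `json:"clusterName"`
      }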

       

      So far I haven't found any other config resources that provide the cluster name; only the cluster-config-v1 CM.

      Slack threads:

      • rhn-support-mrussell (Mark Russell)
      • cstabler@redhat.com (Christoph Stäbler)