Red Hat Advanced Cluster Management / ACM-9565

Doc test Install by Installing the product


Details

    • Type: Task
    • Resolution: Unresolved
    • Fix Version: ACM 2.9.0
    • Component: Documentation

    Description

[#installing-while-connected-online]
      = Installing while connected online

{product-title} is installed through {olm}, which manages the installation, upgrade, and removal of the components that encompass the {product-title-short} hub cluster.

*Required access:* Cluster administrator. *{ocp-short} Dedicated environment required access:* You must have `cluster-admin` permissions. By default, the `dedicated-admin` role does not have the required permissions to create namespaces in the {ocp-short} Dedicated environment.

      • By default, the hub cluster components are installed on worker nodes of your {ocp-short} cluster without any additional configuration. You can install the hub cluster on worker nodes by using the {ocp-short} OperatorHub web console interface, or by using the {ocp-short} CLI.
      • If you have configured your {ocp-short} cluster with infrastructure nodes, you can install the hub cluster on those infrastructure nodes by using the {ocp-short} CLI with additional resource parameters. See the Installing the {product-title-short} hub cluster on infrastructure node section for more details.
• If you plan to import Kubernetes clusters that were not created by {ocp-short} or {product-title-short}, you need to configure an image pull secret. See the example that follows this list.
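
To configure the pull secret, one approach is to create a `dockerconfigjson` secret from a downloaded pull secret file. The following is a minimal sketch; the secret name, namespace, and file path are placeholder values, not names that this procedure requires:

----
oc create secret generic <pull-secret-name> -n <namespace> --from-file=.dockerconfigjson=<path-to-pull-secret>/pull-secret.txt --type=kubernetes.io/dockerconfigjson
----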

      For information on how to configure advanced configurations, see options in the xref:../install/adv_config_install.adoc#advanced-config-hub[MultiClusterHub advanced configuration] section of the documentation.

      • <<connect-prerequisites,Prerequisites>>
      • <<confirm-ocp-installation,Confirm your {ocp-short} installation>>
      • <<installing-from-the-operatorhub,Installing from the OperatorHub web console interface>>
      • <<installing-from-the-cli,Installing from the {ocp-short} CLI>>
      • <<installing-on-infra-node,Installing the {product-title-short} hub cluster on infrastructure nodes>>

[#connect-prerequisites]
      == Prerequisites

      Before you install {product-title-short}, see the following requirements:

      • Your {ocp} cluster must have access to the {product-title-short} operator in the OperatorHub catalog from the {ocp-short} console.
• Your {ocp-short} permissions must allow you to create a namespace. Without a namespace, installation will fail. A quick way to verify this permission follows this list.
      • You must have an Internet connection to access the dependencies for the operator.
      • *Important:* To install in a {ocp-short} Dedicated environment, see the following requirements:
        • You must have the {ocp-short} Dedicated environment configured and running.
        • You must have `cluster-admin` authority to the {ocp-short} Dedicated environment where you are installing the hub cluster.
        • To import, you must use the `stable-2.0` channel of the klusterlet operator for {product-version}.
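
To confirm the namespace permission from the previous list, you can run a quick check. This is a minimal sketch that assumes you are logged in with the `oc` CLI:

----
oc auth can-i create namespaces
----

If the command prints `yes`, your permissions are sufficient.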

[#confirm-ocp-installation]
      == Confirm your {ocp-short} installation

      You must have a supported {ocp-short} version, including the registry and storage services, installed and working. For more information about installing {ocp-short}, see the {ocp-short} documentation.
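
You can also confirm the installed version from a terminal. A minimal check, assuming the `oc` CLI is logged in to the cluster:

----
oc get clusterversion
----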

      . Verify that a {product-title-short} hub cluster is not already installed on your {ocp-short} cluster. {product-title-short} allows only one single {product-title-short} hub cluster installation on each {ocp-short} cluster. Continue with the following steps if there is no {product-title-short} hub cluster installed:
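+
One way to check for an existing installation (a sketch, assuming the `oc` CLI is logged in):
+
----
oc get multiclusterhub --all-namespaces
----
+
If the command reports that no resources are found, or that the `multiclusterhub` resource type does not exist, no hub cluster is installed.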

      . To ensure that the {ocp-short} cluster is set up correctly, access the {ocp-short} web console with the following command:

+
----
kubectl -n openshift-console get route
----
      +
      See the following example output:

+
----
openshift-console console console-openshift-console.apps.new-coral.purple-chesterfield.com
console https reencrypt/Redirect None
----

      . Open the URL in your browser and check the result. If the console URL displays `console-openshift-console.router.default.svc.cluster.local`, set the value for `openshift_master_default_subdomain` when you install {ocp-short}. See the following example of a URL: `https://console-openshift-console.apps.new-coral.purple-chesterfield.com`.
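+
Alternatively, you can print the console URL directly from a terminal. A minimal sketch, assuming a recent `oc` client:
+
----
oc whoami --show-console
----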

      You can proceed to install {product-title-short} from the console or the CLI. Both procedures are documented.

[#installing-from-the-operatorhub]
      == Installing from the OperatorHub web console interface

*Best practice:* From the Administrator view in your {ocp-short} navigation, install by using the OperatorHub web console interface that is provided with {ocp-short}.

. Select Operators > OperatorHub to access the list of available operators, and select the Advanced Cluster Management for Kubernetes operator.

      . On the Operator subscription page, select the options for your installation:

      +

• Namespace information:
  • The {product-title-short} hub cluster must be installed in its own namespace, or project.
  • By default, the OperatorHub console installation process creates a namespace titled `open-cluster-management`. *Best practice:* Continue to use the `open-cluster-management` namespace if it is available.
  • If there is already a namespace named `open-cluster-management`, choose a different namespace.

      +

• Channel: The channel that you select corresponds to the release that you are installing. When you select the channel, it installs the identified release and establishes that future errata updates within that release are obtained.

      +

• Approval strategy for updates: The approval strategy identifies the human interaction that is required for applying updates to the channel or release to which you subscribed.
  • Select Automatic to ensure that any updates within that release are automatically applied.
  • Select Manual to receive a notification when an update is available. If you have concerns about when updates are applied, the Manual strategy might be best practice for you.

      +
      Important: To upgrade to the next minor release, you must return to the OperatorHub page and select a new channel for the more current release.
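+
To see which channels are currently available before you subscribe, you can query the package manifest from a terminal. A minimal sketch, assuming the `oc` CLI:
+
----
oc get packagemanifest advanced-cluster-management -n openshift-marketplace -o jsonpath='{.status.channels[*].name}'
----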

      . Select Install to apply your changes and create the operator.

      . Create the MultiClusterHub custom resource.
      .. In the {ocp-short} console navigation, select Installed Operators > Advanced Cluster Management for Kubernetes.
      .. Select the MultiClusterHub tab.
      .. Select Create MultiClusterHub.
      .. Update the default values in the YAML file. See options in the MultiClusterHub advanced configuration section of the documentation.
       

      • The following example shows the default template. Confirm that `namespace` is your project namespace. See the sample:

      +
[source,yaml]
----
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: <namespace>
----

      +
      . Select Create to initialize the custom resource. It can take up to 10 minutes for the {product-title-short} hub cluster to build and start.

      +
      After the {product-title-short} hub cluster is created, the `MultiClusterHub` resource status displays Running from the MultiClusterHub tab of the {product-title-short} operator details. To gain access to the console, see the link:../console/console_access.adoc#accessing-your-console[Accessing your console] topic.
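
To verify the same status from a terminal, a minimal check that assumes the default `open-cluster-management` namespace:

----
oc get multiclusterhub -n open-cluster-management
----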

[#installing-from-the-cli]
      == Installing from the {ocp-short} CLI

      . Create a {product-title-short} hub cluster namespace where the operator requirements are contained. Run the following command, where `namespace` is the name for your {product-title-short} hub cluster namespace. The value for `namespace` might be referred to as Project in the {ocp-short} environment:

+
----
oc create namespace <namespace>
----

      . Switch your project namespace to the one that you created. Replace `namespace` with the name of the {product-title-short} hub cluster namespace that you created in step 1.

+
----
oc project <namespace>
----

      . Create a YAML file to configure an `OperatorGroup` resource. Each namespace can have only one operator group. Replace `default` with the name of your operator group. Replace `namespace` with the name of your project namespace. See the following sample:

      +
[source,yaml]
----
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: <default>
spec:
  targetNamespaces:
  - <namespace>
----

. Run the following command to create the `OperatorGroup` resource. Replace `operator-group` with the name of the operator group YAML file that you created:

+
----
oc apply -f <path-to-file>/<operator-group>.yaml
----

      +
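To confirm that the operator group exists, you can list the operator groups in the namespace. A quick sketch, assuming the `oc` CLI:
+
----
oc get operatorgroup -n <namespace>
----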

. Create a YAML file to configure an {ocp-short} subscription. Your file is similar to the following sample, replacing `release-2.x` with the current release, for example `release-2.9`:

      +
[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: acm-operator-subscription
spec:
  sourceNamespace: openshift-marketplace
  source: redhat-operators
  channel: release-2.x
  installPlanApproval: Automatic
  name: advanced-cluster-management
----
      +
Note: For installing the {product-title-short} hub cluster on infrastructure nodes, see the xref:../install/install_connected.adoc#infra-olm-sub-add-config[{olm} Subscription additional configuration] section.

      +
      . Run the following command to create the {ocp-short} subscription. Replace `subscription` with the name of the subscription file that you created:

+
----
oc apply -f <path-to-file>/<subscription>.yaml
----

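To watch the operator installation progress before you continue, a minimal sketch (`csv` is the short name for the ClusterServiceVersion resource):

----
oc get csv -n <namespace> -w
----

The operator is installed when the `advanced-cluster-management` ClusterServiceVersion reports the `Succeeded` phase.
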
      . Create a YAML file to configure the `MultiClusterHub` custom resource. Your default template should look similar to the following example. Replace `namespace` with the name of your project namespace:

      +
[source,yaml]
----
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: <namespace>
spec: {}
----
      +
Note: For installing the {product-title-short} hub cluster on infrastructure nodes, see the xref:../install/install_connected.adoc#infra-mch-add-config[MultiClusterHub custom resource additional configuration] section.

      +
      . Run the following command to create the `MultiClusterHub` custom resource. Replace `custom-resource` with the name of your custom resource file:
       
+
----
oc apply -f <path-to-file>/<custom-resource>.yaml
----

      +
      If this step fails with the following error, the resources are still being created and applied. Run the command again in a few minutes when the resources are created:

+
----
error: unable to recognize "./mch.yaml": no matches for kind "MultiClusterHub" in version "operator.open-cluster-management.io/v1"
----

      . Run the following command to get the custom resource. It can take up to 10 minutes for the `MultiClusterHub` custom resource status to display as `Running` in the `status.phase` field after you run the command:

+
----
oc get mch -o=jsonpath='{.items[0].status.phase}'
----

      If you are reinstalling {product-title-short} and the pods do not start, see link:../troubleshooting/trouble_reinstall.adoc#troubleshooting-reinstallation-failure[Troubleshooting reinstallation failure] for steps to work around this problem.

      Notes:

      • A `ServiceAccount` with a `ClusterRoleBinding` automatically gives cluster administrator privileges to {product-title-short} and to any user credentials with access to the namespace where you install {product-title-short}.
      • The installation also creates a namespace called `local-cluster` that is reserved for the {product-title-short} hub cluster when it is managed by itself. There cannot be an existing namespace called `local-cluster`. For security reasons, do not release access to the `local-cluster` namespace to any user who does not already have `cluster-administrator` access.

[#installing-on-infra-node]
      == Installing the {product-title-short} hub cluster on infrastructure nodes

      An {ocp-short} cluster can be configured to contain infrastructure nodes for running approved management components. Running components on infrastructure nodes avoids allocating {ocp-short} subscription quota for the nodes that are running those management components.

After adding infrastructure nodes to your {ocp-short} cluster, follow the xref:../install/install_connected.adoc#installing-from-the-cli[Installing from the {ocp-short} CLI] instructions and add configurations to the {olm} subscription and `MultiClusterHub` custom resource.

[#adding-infra-nodes]
      === Add infrastructure nodes to the {ocp-short} cluster

      Follow the procedures that are described in link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/machine_management/creating-infrastructure-machinesets[Creating infrastructure machine sets] in the {ocp-short} documentation. Infrastructure nodes are configured with a Kubernetes `taint` and `label` to keep non-management workloads from running on them.

      To be compatible with the infrastructure node enablement provided by {product-title-short}, ensure your infrastructure nodes have the following `taint` and `label` applied:

[source,yaml]
----
metadata:
  labels:
    node-role.kubernetes.io/infra: ""
spec:
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/infra
----

[#infra-olm-sub-add-config]
      === {olm} Subscription additional configuration

      Add the following additional configuration before applying the {olm} Subscription:

[source,yaml]
----
spec:
  config:
    nodeSelector:
      node-role.kubernetes.io/infra: ""
    tolerations:
    - key: node-role.kubernetes.io/infra
      effect: NoSchedule
      operator: Exists
----

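For reference, merged into the Subscription sample from the CLI procedure earlier in this topic, the result looks similar to the following sketch:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: acm-operator-subscription
spec:
  sourceNamespace: openshift-marketplace
  source: redhat-operators
  channel: release-2.x
  installPlanApproval: Automatic
  name: advanced-cluster-management
  config:
    nodeSelector:
      node-role.kubernetes.io/infra: ""
    tolerations:
    - key: node-role.kubernetes.io/infra
      effect: NoSchedule
      operator: Exists
----
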
[#infra-mch-add-config]
      === MultiClusterHub custom resource additional configuration

      Add the following additional configuration before applying the `MultiClusterHub` custom resource:

[source,yaml]
----
spec:
  nodeSelector:
    node-role.kubernetes.io/infra: ""
----

People

    Assignee: Brandi Swope (bswope@redhat.com)
    Reporter: Brandi Swope (bswope@redhat.com)