- Sub-task
- Resolution: Duplicate
- Major
- None
- ACM 2.9.0
- False
- None
- False
- ACM-635 - HyperShift
Create an informative issue (See each section, incomplete templates/issues won't be triaged)
Using the current documentation as a model, please complete the issue template.
Note: Doc team updates the current version and the two previous versions (n-2). For earlier versions, we will address only high-priority, customer-reported issues for releases in support.
Prerequisite: Start with what we have
Always look at the current documentation to describe the change that is needed. Use the source or portal link for Step 4:
- Use the Customer Portal: https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes
- Use the GitHub link to find the staged docs in the repository: https://github.com/stolostron/rhacm-docs
Describe the changes in the doc and link to your dev story
Provide info for the following steps:
1. - [x] Mandatory: Add the required version to the Fix version/s field.
2. - [x] Mandatory: Choose the type of documentation change.
   - [x] New topic in an existing section or new section
   - [ ] Update to an existing topic
3. - [x] Mandatory for GA content:
   - [x] Add steps and/or other important conceptual information here:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.8/html-single/clusters/index#specify-ansible-inventory
Title: Running an Ansible job for hosted clusters installation
To initiate an Ansible job for hosted cluster installation, first create the `HostedCluster` and `NodePool` resources with the `pausedUntil` field set. If you use the `hcp create cluster` CLI, you can specify the `--pausedUntil true` flag instead. See the following examples:
```
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: my-cluster
  namespace: clusters
spec:
  pausedUntil: 'true'
...
```
```
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: my-cluster-us-east-2
  namespace: clusters
spec:
  pausedUntil: 'true'
...
```
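The paused state can also be set when you create the cluster with the `hcp` CLI, as mentioned above. The following is a minimal sketch; the platform, cluster name, and namespace shown are illustrative and must be adjusted for your environment:

```
# Create a hosted cluster with reconciliation paused so that the
# ClusterCurator prehook jobs can run first. The "aws" platform and
# the name/namespace values below are examples only.
hcp create cluster aws \
  --name my-cluster \
  --namespace clusters \
  --pausedUntil true
```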
After the `HostedCluster` and `NodePool` resources are created, create a `ClusterCurator` resource with the same name and in the same namespace as the `HostedCluster` resource. See the following example:
```
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: ClusterCurator
metadata:
  name: my-cluster
  namespace: clusters
  labels:
    open-cluster-management: curator
spec:
  desiredCuration: install
  install:
    jobMonitorTimeout: 5
    prehook:
      - name: Demo Job Template
        extra_vars:
          variable1: something-interesting
          variable2: 2
      - name: Demo Job Template
    posthook:
      - name: Demo Job Template
    towerAuthSecret: toweraccess
```
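After you create the `ClusterCurator` resource, you can apply it and check its status from the command line. A sketch, assuming the resource is saved in a hypothetical file named `clustercurator.yaml`:

```
# Apply the ClusterCurator resource (the file name is an assumption)
oc apply -f clustercurator.yaml

# Inspect the curation status of the resource
oc get clustercurator my-cluster -n clusters -o yaml
```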
If your Ansible Tower requires authentication, you must also create a `Secret` resource. See the following example:
```
apiVersion: v1
kind: Secret
metadata:
  name: toweraccess
  namespace: clusters
stringData:
  host: https://my-tower-domain.io
  token: ANSIBLE_TOKEN_FOR_admin
```
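The same secret can also be created from the command line. A sketch, using the placeholder host and token values from the example above:

```
# Create the Ansible Tower access secret; the host URL and token
# are placeholders copied from the example above.
oc create secret generic toweraccess \
  -n clusters \
  --from-literal=host=https://my-tower-domain.io \
  --from-literal=token=ANSIBLE_TOKEN_FOR_admin
```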
Title: Running an Ansible job for hosted clusters upgrade
To initiate an Ansible job for hosted cluster upgrade, create or edit a `ClusterCurator` resource similar to the following example:
```
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: ClusterCurator
metadata:
  name: my-cluster
  namespace: clusters
  labels:
    open-cluster-management: curator
spec:
  desiredCuration: upgrade
  upgrade:
    desiredUpdate: 4.13.7
    monitorTimeout: 120
    prehook:
      - name: Demo Job Template
        extra_vars:
          variable1: something-interesting
          variable2: 2
      - name: Demo Job Template
    posthook:
      - name: Demo Job Template
    towerAuthSecret: toweraccess
```
Note: Upgrading a hosted cluster this way upgrades both the hosted control plane and the node pools to the same version. Upgrading them to different versions is not supported.
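If the `ClusterCurator` resource already exists, one way to trigger the upgrade is to patch `desiredCuration` rather than recreate the resource. A sketch, assuming the resource name and namespace from the example above:

```
# Switch the existing curator to the upgrade flow (the name and
# namespace are from the example above; adjust for your environment)
oc patch clustercurator my-cluster -n clusters \
  --type merge \
  -p '{"spec":{"desiredCuration":"upgrade"}}'
```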
Title: Running an Ansible job for hosted clusters destroy
To initiate an Ansible job for hosted cluster destruction, create or edit a `ClusterCurator` resource similar to the following example:
```
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: ClusterCurator
metadata:
  name: my-cluster
  namespace: clusters
  labels:
    open-cluster-management: curator
spec:
  desiredCuration: destroy
  destroy:
    jobMonitorTimeout: 5
    prehook:
      - name: Demo Job Template
        extra_vars:
          variable1: something-interesting
          variable2: 2
      - name: Demo Job Template
    posthook:
      - name: Demo Job Template
    towerAuthSecret: toweraccess
```
Note: Destroying a hosted cluster of type AWS is not supported.
- [ ] Add the required access level for the user to complete the task here:
- [ ] Add verification at the end of the task: how does the user verify success (a command to run or a result to see)?
- [x] Add link to dev story here: https://issues.redhat.com/browse/ACM-6494
4. - [ ] Mandatory for bugs: What is the diff? Clearly define what the problem is, what the change is, and link to the current documentation: