Background
Today hive provisions/deprovisions like this:
- Invoke openshift-install CLI to provision the cluster. Installer lays down a metadata.json file.
- Parse metadata.json, storing certain fields in ClusterDeployment.spec.clusterMetadata. Note that we also save the metadata.json file verbatim in ClusterProvision.spec.metadataJSON – but this was introduced recently (HIVE-2204 / https://github.com/openshift/hive/pull/1997 / https://github.com/openshift/hive/pull/1997/commits/0350613841566b4f892afc8de8eaf9e2a85294a2). More on that in a moment.
- At destroy time, we pass certain fields from ClusterDeployment.spec.clusterMetadata through the ClusterDeprovision object to the uninstall pod via CLI arguments.
- The uninstall CLI parses those CLI arguments to generate a ClusterUninstaller struct. That type is defined by the installer, and {{Run()}}ing it invokes the (vendored) installer destroy code.
This setup means that any time the installer adds a new field to metadata.json, in addition to revendoring the installer code, we have to plumb the new field through this complex sequence:
metadata.json => ClusterProvision => ClusterDeprovision => uninstall pod spec => uninstall CLI => ClusterUninstaller => destroy code
As an example, see https://github.com/openshift/hive/pull/2100.
Solution
Instead of plumbing individual fields through, carry the entire metadata.json as an opaque blob. Mount it on the uninstall pod, which (blindly!) unmarshals it into the ClusterUninstaller struct.
Now if the installer changes the metadata.json schema – or even begins to use a previously-latent field, see https://github.com/openshift/hive/pull/2103 – all we have to do is revendor to pick up the change.
About metadataJSON
If ClusterProvision.spec.metadataJSON had existed since the beginning of time, this would be a pretty easy change to make. However, prior to https://github.com/openshift/hive/pull/1997 we were (unsuccessfully) attempting to store that information in a RawExtension field called ClusterProvision.spec.metadata. Because the object it was based on (installer-defined ClusterMetadata) is not a RuntimeObject, that field just ended up empty and therefore useless.
This means that clusters created since #1997 have metadataJSON, but those created before it do not. We'll need to account for that when implementing this card.
Rather than maintaining two code paths, I think we could probably just include a little routine to retrofit metadataJSON in legacy clusters based on the information we would have passed through piecemeal.