OpenShift Request For Enhancement
RFE-3479

Better user experience for using GCP workload identity tokens



      We would like to configure our GCP clusters with GCP workload identity tokens. This is documented here:

      https://docs.openshift.com/container-platform/4.11/authentication/managing_cloud_provider_credentials/cco-mode-gcp-workload-identity.html

      and seems to work.

      However, the user experience is currently very poor: we have to perform the documented manual steps, and repeat them before upgrading a cluster. As we plan to install a larger number of clusters in GCP, this is a significant administrative overhead.

      We would like this integrated into the OpenShift installer, and we would also like automatic updates that do not require performing those manual steps with every upgrade.

       
      Basically, our wish is not to have to perform all the manual steps described in the procedure (extract ccoctl and the CredentialsRequests, process the requests, create install manifests, copy the credential secrets and TLS key into the manifests directory), but to have them done automatically by the installer at install time and by the cluster during an upgrade. To us, this seems almost completely automatable, especially in a running cluster being upgraded. The prospect of re-requesting credentials for every cluster and every upgrade is almost a no-go for us.
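      For reference, the manual procedure we would like automated can be sketched roughly as below. This is a non-authoritative sketch based on the 4.11 documentation linked above; RELEASE_IMAGE, CLUSTER_NAME, REGION, PROJECT_ID, and INSTALL_DIR are placeholder variables, not values from this ticket, and exact flags should be checked against the docs for the release in use.

      ```shell
      # Extract the ccoctl binary from the cloud-credential-operator image
      # in the release payload.
      CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' "$RELEASE_IMAGE")
      oc image extract "$CCO_IMAGE" --file="/usr/bin/ccoctl" -a ~/.pull-secret
      chmod 775 ccoctl

      # Extract the CredentialsRequest objects for GCP from the release image.
      oc adm release extract --credentials-requests --cloud=gcp \
        --to=credrequests "$RELEASE_IMAGE"

      # Process the requests: this creates the GCP service accounts, the
      # workload identity pool/provider, and the install-time manifests.
      ./ccoctl gcp create-all \
        --name="$CLUSTER_NAME" \
        --region="$REGION" \
        --project="$PROJECT_ID" \
        --credentials-requests-dir=credrequests \
        --output-dir=_output

      # Copy the generated credential secrets and the bound-service-account
      # signing key into the installer's working directory before running
      # `openshift-install create cluster`.
      cp _output/manifests/* "$INSTALL_DIR/manifests/"
      cp -a _output/tls "$INSTALL_DIR/"
      ```

      Every one of these steps must currently be repeated per cluster, and the credential extraction and processing must be re-run against the new release image before each upgrade, which is the overhead this RFE asks to eliminate.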

      Using Workload Identity in GCP is otherwise appealing to us because we can use short-lived tokens; all other install methods we've tried use permanent keys. (We can't use Mint Mode due to our VPC architecture; we are waiting for shared VPC support, which we understand will GA in 4.13. We also don't want to use UPI.)

            julim Ju Lim
            rhn-support-vmedina1 Victor Medina
            Votes: 0
            Watchers: 2
