Description of problem:
When installing with the Assisted Installer and adding the vCenter cloud credentials on day 2, there are a few issues.
When applying the configuration through the GUI, I get a strange error message:
Error "Invalid value: "object": apiServerInternalIPs list is required once set" for field "spec.platformSpec.vsphere". (I do not have permissions to attach files for some reason.)
The cloud-provider-config ConfigMap in openshift-config is created with a bunch of debug garbage (YAML-like lines with "=true" appended) in front of the actual INI configuration:
kind: ConfigMap
apiVersion: v1
metadata:
  name: cloud-provider-config
  namespace: openshift-config
data:
  config: |
    global:=true
    user: ""=true
    password: ""=true
    server: ""=true
    port: 0=true
    insecureFlag: true=true
    datacenters: []=true
    soapRoundtripCount: 0=true
    caFile: ""=true
    thumbprint: ""=true
    secretName: vsphere-creds=true
    secretNamespace: kube-system=true
    secretsDirectory: ""=true
    apiDisable: false=true
    apiBinding: ""=true
    ipFamily: []=true
    ipFamily: []=true
    vcenter:=true
    vcenterplaceholder:=true
    tenantref: ""=true
    server: vcenterplaceholder=true
    port: 443=true
    datacenters:=true
    - datacenterplaceholder=true
    secretref: ""=true
    secretName: ""=true
    secretNamespace: ""=true
    labels:=true
    zone: ""=true
    region: ""=true
    [Global]
    secret-name=vsphere-creds
    secret-namespace=kube-system
    insecure-flag=1
    [Workspace]
    server=vcsnsx-vc.infra.demo.redhat.com
    datacenter=SDDC-Datacenter
    default-datastore=/SDDC-Datacenter/datastore/workload_share_dwPsq
    folder="/SDDC-Datacenter/vm/4r9z4"
    resourcepool-path=/SDDC-Datacenter/host/Cluster-1/Resources
    [VirtualCenter "vcsnsx-vc.infra.demo.redhat.com"]
    datacenters=SDDC-Datacenter
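For completeness, the ConfigMap above was inspected and later cleaned up by hand with standard commands:

oc -n openshift-config get configmap cloud-provider-config -o yaml
oc -n openshift-config edit configmap cloud-provider-config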
After removing the garbage data from the ConfigMap and letting everything sync, the CSI driver complains that it cannot access the "vcenterplaceholder" vCenter.
log of openshift-cluster-csi-drivers/vmware-vsphere-csi-driver-operator-798667986-9s8hg:
...
I0331 17:44:16.005083 1 config.go:293] ReadConfig INI succeeded. INI-based cloud-config is deprecated and will be removed in 2.0. Please use YAML based cloud-config.
I0331 17:44:16.005143 1 config.go:302] Config initialized
W0331 17:44:16.005897 1 vspherecontroller.go:939] vCenter vcsnsx-vc.infra.demo.redhat.com is missing from vCenter map
I0331 17:44:16.006227 1 vspherecontroller.go:262] Marking vCenter connection status as false
W0331 17:44:16.006246 1 vspherecontroller.go:553] Marking cluster as degraded: openshift_api_error error parsing secret "vmware-vsphere-cloud-credentials": key "vcenterplaceholder.username" not found
W0331 17:44:16.006264 1 vspherecontroller.go:609] Marking cluster un-upgradeable because error parsing secret "vmware-vsphere-cloud-credentials": key "vcenterplaceholder.username" not found
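The operator expects keys of the form <vcenter-server>.username and <vcenter-server>.password in that secret, which is exactly what the "vcenterplaceholder.username" error above points at. The keys actually present can be listed with (assumes jq is installed):

oc -n openshift-cluster-csi-drivers get secret vmware-vsphere-cloud-credentials -o json | jq -r '.data | keys[]'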
The cloud-conf ConfigMap in openshift-cloud-controller-manager also appears to be incorrect; it still contains the placeholder vCenter entry:
kind: ConfigMap
apiVersion: v1
metadata:
  name: cloud-conf
  namespace: openshift-cloud-controller-manager
data:
  cloud.conf: |
    global:
      insecureFlag: true
      secretName: vsphere-creds
      secretNamespace: kube-system
    vcenter:
      vcenterplaceholder:
        server: vcenterplaceholder
        datacenters:
          - datacenterplaceholder
      vcsnsx-vc.infra.demo.redhat.com:
        server: vcsnsx-vc.infra.demo.redhat.com
        datacenters:
          - SDDC-Datacenter
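For comparison, this is roughly what I would expect cloud.conf to contain once the placeholder block is gone (same values as in the dump above, minus the vcenterplaceholder entry; this is my assumption of the intended output, not a verified reference):

global:
  insecureFlag: true
  secretName: vsphere-creds
  secretNamespace: kube-system
vcenter:
  vcsnsx-vc.infra.demo.redhat.com:
    server: vcsnsx-vc.infra.demo.redhat.com
    datacenters:
      - SDDC-Datacenter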
However, the associated ConfigMaps openshift-config/cloud-provider-config and openshift-config-managed/kube-cloud-config appear to be in sync with each other.
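The two can be diffed directly (sketch; I am assuming the kube-cloud-config data key is cloud.conf):

oc -n openshift-config get configmap cloud-provider-config -o jsonpath='{.data.config}' > /tmp/cloud-provider-config
oc -n openshift-config-managed get configmap kube-cloud-config -o jsonpath='{.data.cloud\.conf}' > /tmp/kube-cloud-config
diff /tmp/cloud-provider-config /tmp/kube-cloud-config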
The storage ClusterOperator is unhappy (Degraded and not Upgradeable), but cloud-controller-manager is happy:
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  annotations:
    capability.openshift.io/name: Storage
    include.release.openshift.io/hypershift: 'true'
    include.release.openshift.io/ibm-cloud-managed: 'true'
    include.release.openshift.io/self-managed-high-availability: 'true'
    include.release.openshift.io/single-node-developer: 'true'
  creationTimestamp: '2025-03-31T15:21:39Z'
  generation: 1
  name: storage
  ownerReferences:
    - apiVersion: config.openshift.io/v1
      controller: true
      kind: ClusterVersion
      name: version
      uid: 09f1923a-1f1c-4a9f-9ba3-1e50cfaf6d12
spec: {}
status:
  conditions:
    - lastTransitionTime: '2025-03-31T16:34:01Z'
      message: 'VSphereCSIDriverOperatorCRDegraded: VMwareVSphereOperatorCheckDegraded: error parsing secret "vmware-vsphere-cloud-credentials": key "vcenterplaceholder.username" not found'
      reason: VSphereCSIDriverOperatorCR_VMwareVSphereOperatorCheck_openshift_api_error
      status: 'True'
      type: Degraded
    - lastTransitionTime: '2025-03-31T17:51:51Z'
      message: All is well
      reason: AsExpected
      status: 'False'
      type: Progressing
    - lastTransitionTime: '2025-03-31T15:27:02Z'
      message: |-
        DefaultStorageClassControllerAvailable: StorageClass provided by supplied CSI Driver instead of the cluster-storage-operator
        VSphereCSIDriverOperatorCRAvailable: CSI driver for VSphere is disabled: error parsing secret "vmware-vsphere-cloud-credentials": key "vcenterplaceholder.username" not found
        VSphereProblemDetectorMonitoringControllerAvailable: vsphere-problem-detector alerts are enabled
        VSphereProblemDetectorControllerAvailable: failed to parse config: unable to load config: 1:1: expected section header
      reason: AsExpected
      status: 'True'
      type: Available
    - lastTransitionTime: '2025-03-31T16:31:50Z'
      message: 'VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: error parsing secret "vmware-vsphere-cloud-credentials": key "vcenterplaceholder.username" not found'
      reason: VSphereCSIDriverOperatorCR_VMwareVSphereController_openshift_api_error
      status: 'False'
      type: Upgradeable
    - lastTransitionTime: '2025-03-31T15:25:58Z'
      reason: NoData
      status: Unknown
      type: EvaluationConditionsDetected
  extension: null
  relatedObjects:
    - group: ''
      name: vsphere-csi-driver-operator-trusted-ca-bundle
      namespace: openshift-cluster-csi-drivers
      resource: configmaps
    - group: ''
      name: vmware-vsphere-csi-driver-operator
      namespace: openshift-cluster-csi-drivers
      resource: serviceaccounts
    - group: rbac.authorization.k8s.io
      name: vmware-vsphere-csi-driver-operator-role
      namespace: openshift-cluster-csi-drivers
      resource: roles
    - group: rbac.authorization.k8s.io
      name: vmware-vsphere-csi-driver-operator-rolebinding
      namespace: openshift-cluster-csi-drivers
      resource: rolebindings
    - group: rbac.authorization.k8s.io
      name: vmware-vsphere-csi-driver-operator-clusterrole
      resource: clusterroles
    - group: rbac.authorization.k8s.io
      name: vmware-vsphere-csi-driver-operator-clusterrolebinding
      resource: clusterrolebindings
    - group: rbac.authorization.k8s.io
      name: vmware-vsphere-csi-driver-operator-prometheus
      namespace: openshift-cluster-csi-drivers
      resource: roles
    - group: rbac.authorization.k8s.io
      name: vmware-vsphere-csi-driver-operator-prometheus
      namespace: openshift-cluster-csi-drivers
      resource: rolebindings
    - group: monitoring.coreos.com
      name: vmware-vsphere-csi-driver-operator
      namespace: openshift-cluster-csi-drivers
      resource: prometheusrules
    - group: operator.openshift.io
      name: csi.vsphere.vmware.com
      resource: clustercsidrivers
    - group: ''
      name: openshift-cluster-storage-operator
      resource: namespaces
    - group: ''
      name: openshift-cluster-csi-drivers
      resource: namespaces
    - group: operator.openshift.io
      name: cluster
      resource: storages
    - group: rbac.authorization.k8s.io
      name: cluster-storage-operator-role
      resource: clusterrolebindings
  versions:
    - name: operator
      version: 4.18.5
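For a quick side-by-side of the two operators mentioned above:

oc get clusteroperator storage cloud-controller-manager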
Steps to Reproduce:
1. Install an OCP cluster with the Assisted Installer with vSphere integration.
2. Once the cluster is up and running, use the console to add the credentials for vCenter (a rough CLI equivalent is sketched after these steps): https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2025/html/installing_openshift_container_platform_with_the_assisted_installer/installing-on-vsphere#vsphere-post-installation-configuration-console_installing-on-vsphere
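For reference, the CLI-level change the console makes boils down to populating the vsphere-creds secret referenced by the cloud-provider config above (sketch only; the key names follow the <vcenter-server>.username / <vcenter-server>.password convention seen in the errors, the values are placeholders, and the console may update additional resources):

oc -n kube-system create secret generic vsphere-creds \
  --from-literal='vcsnsx-vc.infra.demo.redhat.com.username=<vcenter-username>' \
  --from-literal='vcsnsx-vc.infra.demo.redhat.com.password=<vcenter-password>' \
  --dry-run=client -o yaml | oc apply -f -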
Linked issues:
- blocks: OCPBUGS-57384 vsphere cloud credentials not syncing correctly (Closed)
- is blocked by: OCPBUGS-55101 VSphere API changed - settings are not aligned (POST)
- is cloned by: OCPBUGS-57384 vsphere cloud credentials not syncing correctly (Closed)