Issue Type: Bug
Resolution: Unresolved
Priority: Normal
Severity: Critical
Affects Version: 4.15
Impact: Quality / Stability / Reliability
Release Blocker: Rejected
Sprint: Hypershift Sprint 247
Description of problem:
When setting a KMS backup key in the HostedCluster CR, the kms-provider IAM policy has to be updated manually in the AWS console so that kms-provider has permission to use the backup key. Slack discussion: https://redhat-internal.slack.com/archives/G01QS0P2F6W/p1701719512687199
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Generate KeyA and KeyB.
2. Create a cluster with activeKey=KeyA.
3. Create a secret on the guest cluster and decode it.
4. Set activeKey=KeyB, backupKey=KeyA in the HostedCluster (see the sketch below).
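For reference, step 4 corresponds roughly to the following portion of the HostedCluster spec. This is a minimal sketch: the key ARNs are placeholders and other required fields (for example the KMS auth settings) are omitted.

apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
spec:
  secretEncryption:
    type: kms
    kms:
      provider: AWS
      aws:
        activeKey:
          # KeyB becomes the key used to encrypt new secrets
          arn: arn:aws:kms:us-east-1:111122223333:key/<KeyB-id>
        backupKey:
          # KeyA stays available so existing secrets can still be decrypted
          arn: arn:aws:kms:us-east-1:111122223333:key/<KeyA-id>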
Actual results:
The kube-apiserver container in the new kube-apiserver pod never becomes ready.
Expected results:
The kube-apiserver pod reaches the Ready state.
Additional info:
Status of the new kube-apiserver pod:
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2023-12-04T19:29:51Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2023-12-04T19:29:49Z"
    message: 'containers with unready status: [kube-apiserver]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2023-12-04T19:29:49Z"
    message: 'containers with unready status: [kube-apiserver]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2023-12-04T19:29:49Z"
    status: "True"
    type: PodScheduled
Modifying the permission directly in AWS to add KeyB to the kms-provider policy resolves the issue.
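For illustration, the manual workaround amounts to adding KeyB's ARN alongside KeyA's in the KMS policy statement used by kms-provider, roughly as below. This is a sketch only: the account ID, region, key IDs, and exact action list are placeholders and will differ per cluster.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey"
      ],
      "Resource": [
        "arn:aws:kms:us-east-1:111122223333:key/<KeyA-id>",
        "arn:aws:kms:us-east-1:111122223333:key/<KeyB-id>"
      ]
    }
  ]
}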