- Bug
- Resolution: Done
- Undefined
- 4.14.0
- Moderate
- No
- MCO Sprint 241, MCO Sprint 242, MCO Sprint 243, MCO Sprint 244
- 4
- False
Description of problem:
In an on-cluster build pool, when we create an MC to update the SSH keys, we cannot find the new keys on the nodes after the configuration is built and applied.
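For context, the kind of MachineConfig involved looks roughly like this (a minimal sketch with a placeholder name and placeholder keys; the exact object used in the reproduction is shown under Steps to Reproduce below):

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-extra-ssh-keys    # placeholder name
spec:
  config:
    ignition:
      version: 3.2.0
    passwd:
      users:
      - name: core
        sshAuthorizedKeys:
        - ssh-rsa AAAA... user1@example.com    # placeholder key
        - ssh-rsa AAAA... user2@example.com    # placeholder key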
Version-Release number of selected component (if applicable):
$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.14.0-0.nightly-2023-08-30-191617   True        False         7h52m   Cluster version is 4.14.0-0.nightly-2023-08-30-191617
How reproducible:
Always
Steps to Reproduce:
1. Enable the on-cluster build functionality in the "worker" pool.

2. Check the value of the current keys:

$ oc debug node/$(oc get nodes -l node-role.kubernetes.io/worker -ojsonpath="{.items[0].metadata.name}") -- chroot /host cat /home/core/.ssh/authorized_keys.d/ignition
Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Starting pod/sregidor-sr3-bfxxj-worker-a-h5b5jcopenshift-qeinternal-debug-ljxgx ...
To use host binaries, run `chroot /host`
ssh-rsa AAAA..................................................................................................................................................................qe@redhat.com

Removing debug pod ...

3. Create a new MC to configure the "core" user's SSH keys, adding 2 extra keys:

$ oc get mc -o yaml tc-59426-add-ssh-key-9tv2owyp
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  creationTimestamp: "2023-09-01T10:57:14Z"
  generation: 1
  labels:
    machineconfiguration.openshift.io/role: worker
  name: tc-59426-add-ssh-key-9tv2owyp
  resourceVersion: "135885"
  uid: 3cf31fbb-7a4e-472d-8430-0c0eb49420fc
spec:
  config:
    ignition:
      version: 3.2.0
    passwd:
      users:
      - name: core
        sshAuthorizedKeys:
        - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDPmGf/sfIYog...... mco_test@redhat.com
        - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDf....... mco_test2@redhat.com

4. Verify that the new rendered MC contains the 3 keys (a scripted version of these checks is sketched after the steps):

$ oc get mcp worker
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
worker   rendered-worker-02d04d7c47cd3e08f8f305541cf85000   True      False      False      2              2                   2                     0                      8h

$ oc get mc -o yaml rendered-worker-02d04d7c47cd3e08f8f305541cf85000 | grep users -A9
      users:
      - name: core
        sshAuthorizedKeys:
        - ssh-rsa AAAAB...............................qe@redhat.com
        - ssh-rsa AAAAB...............................mco_test@redhat.com
        - ssh-rsa AAAAB...............................mco_test2@redhat.com
    storage:
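A rough way to script the creation and verification from steps 3 and 4 (a sketch under assumptions, not the exact commands used in this report: tc-59426-add-ssh-key.yaml is a placeholder file containing the MC from step 3, and the wait condition is just one way to detect that the rollout finished):

$ oc apply -f tc-59426-add-ssh-key.yaml                            # placeholder file with the MC shown in step 3
$ oc wait mcp/worker --for=condition=Updated=True --timeout=20m    # wait for the worker pool rollout (may return early if the pool has not started updating yet)
$ for node in $(oc get nodes -l node-role.kubernetes.io/worker -o name); do oc debug "$node" -- chroot /host cat /home/core/.ssh/authorized_keys.d/ignition; done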
Actual results:
Only the initial key is present in the node:

$ oc debug node/$(oc get nodes -l node-role.kubernetes.io/worker -ojsonpath="{.items[0].metadata.name}") -- chroot /host cat /home/core/.ssh/authorized_keys.d/ignition
Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Starting pod/sregidor-sr3-bfxxj-worker-a-h5b5jcopenshift-qeinternal-debug-ljxgx ...
To use host binaries, run `chroot /host`
ssh-rsa AAAA.........qe@redhat.com

Removing debug pod ...
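One way to make the mismatch explicit is to compare what the rendered MachineConfig declares with what is actually on disk (a sketch; the rendered MC name is taken from the output above, and the jsonpath filter is just one way to pull out the core user's keys):

$ oc get mc rendered-worker-02d04d7c47cd3e08f8f305541cf85000 -o jsonpath='{.spec.config.passwd.users[?(@.name=="core")].sshAuthorizedKeys}'   # keys the rendered config expects (3)
$ oc debug node/$(oc get nodes -l node-role.kubernetes.io/worker -ojsonpath="{.items[0].metadata.name}") -- chroot /host cat /home/core/.ssh/authorized_keys.d/ignition   # keys actually on the node (only 1)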
Expected results:
The added SSH keys should also be configured in the /home/core/.ssh/authorized_keys.d/ignition file.
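Once fixed, a quick check could be to count the keys in that file on a worker node and expect 3 (a sketch, assuming all configured keys are ssh-rsa keys):

$ oc debug node/$(oc get nodes -l node-role.kubernetes.io/worker -ojsonpath="{.items[0].metadata.name}") -- chroot /host grep -c '^ssh-rsa' /home/core/.ssh/authorized_keys.d/ignition
# Expected output: 3 (the original key plus the two added by the MC)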
Additional info: