Bug
Resolution: Won't Do
Major
4.14, 4.15
Quality / Stability / Reliability
False
Important
-
None
-
None
-
None
-
None
-
None
-
None
-
None
-
None
-
None
-
None
-
None
Description of problem:
When we create a new nodepool with the "InPlace" upgradeType in a hosted cluster and then use the "oc debug" command to access the nodes in that nodepool, the "oc debug" command fails intermittently, but often, with the error: http: server gave HTTP response to HTTPS client
Version-Release number of selected component (if applicable):
Management cluster:

sh-4.4$ /cli/oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.15.0-0.nightly-2024-03-06-142724   True        False         5h      Cluster version is 4.15.0-0.nightly-2024-03-06-142724

OC client version:

sh-4.4$ /cli/oc version
Client Version: 4.15.0-202403051607.p0.g48dcf59.assembly.stream.el8-48dcf59
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: 4.15.0-0.nightly-2024-03-06-142724
Kubernetes Version: v1.28.6+6216ea1

Hosted cluster:

sh-4.4$ /cli/oc --kubeconfig /tmp/hosted get clusterversion
NAME      VERSION                                    AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.15.0-0.nightly-multi-2024-03-07-060125   True        False         4h33m   Cluster version is 4.15.0-0.nightly-multi-2024-03-07-060125
How reproducible:
Intermittent, but very frequent.
Steps to Reproduce:
1. Create a new nodepool using the "InPlace" upgradeType and 2 nodes.

sh-4.4$ /cli/oc get nodepool -n clusters
NAME                              CLUSTER                DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION                                    UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
0efbbee192db7f2ce7cb-us-east-1a   0efbbee192db7f2ce7cb   3               3               False         False        4.15.0-0.nightly-multi-2024-03-07-060125
inplace-upgrade                   0efbbee192db7f2ce7cb   2               2               False         False        4.15.0-0.nightly-multi-2024-03-07-060125

2. Execute the "oc debug" command to access the new nodes in the new nodepool (use a while loop to execute it until failure, just in case you need several tries).

sh-4.4$ /cli/oc --kubeconfig /tmp/hosted get nodes
NAME                           STATUS   ROLES    AGE     VERSION
ip-10-0-132-127.ec2.internal   Ready    worker   7m31s   v1.28.6+6216ea1   <---- NEW NODE
ip-10-0-133-173.ec2.internal   Ready    worker   4h9m    v1.28.6+6216ea1
ip-10-0-134-242.ec2.internal   Ready    worker   7m31s   v1.28.6+6216ea1   <---- NEW NODE
ip-10-0-142-22.ec2.internal    Ready    worker   4h15m   v1.28.6+6216ea1
ip-10-0-143-96.ec2.internal    Ready    worker   4h2m    v1.28.6+6216ea1

sh-4.4$ while /cli/oc --kubeconfig /tmp/hosted debug node/ip-10-0-132-127.ec2.internal -- chroot /host ls /root; do :; done
Starting pod/ip-10-0-132-127ec2internal-debug-92mjr ...
To use host binaries, run `chroot /host`

Removing debug pod ...
Error from server: Get "https://10.0.132.127:10250/containerLogs/default/ip-10-0-132-127ec2internal-debug-92mjr/container-00?follow=true": http: server gave HTTP response to HTTPS client
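When scripting the reproduction loop, it helps to stop only on this specific failure rather than on any "oc debug" error. The sketch below is illustrative and not part of the original reproduction: the `is_tls_mismatch` helper name is ours, and the sample error string is copied from the failing run above.

```shell
# Hypothetical helper (not from the original report): match the kubelet
# log-streaming failure by its error signature, so a reproduction loop can
# tell it apart from unrelated "oc debug" failures.
is_tls_mismatch() {
  case "$1" in
    *"http: server gave HTTP response to HTTPS client"*) return 0 ;;
    *) return 1 ;;
  esac
}

# Sample stderr copied from the failing run in the steps above:
err='Error from server: Get "https://10.0.132.127:10250/containerLogs/default/ip-10-0-132-127ec2internal-debug-92mjr/container-00?follow=true": http: server gave HTTP response to HTTPS client'

if is_tls_mismatch "$err"; then
  echo "hit the TLS mismatch"
fi
```

In practice you would capture each attempt with something like `out=$(/cli/oc --kubeconfig /tmp/hosted debug node/<node> -- chroot /host ls /root 2>&1)` and break out of the loop when `is_tls_mismatch "$out"` returns success.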
Actual results:
The "oc debug" command fails with the error:

Error from server: Get "https://10.0.132.127:10250/containerLogs/default/ip-10-0-132-127ec2internal-debug-92mjr/container-00?follow=true": http: server gave HTTP response to HTTPS client
Expected results:
The "oc debug" command should not fail.
Additional info:
Link to the slack conversation: https://redhat-internal.slack.com/archives/CH76YSYSC/p1709739450663849