Description of problem:
Regardless of cluster size and utilization, etcd always logs warnings like the following:
----------------------------------------------
2024-12-10T14:07:37.590235000Z {"level":"warn","ts":"2024-12-10T14:07:37.590153Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"172.11.111.2:45236","server-name":"","error":"EOF"}
2024-12-10T14:07:40.669391972Z {"level":"warn","ts":"2024-12-10T14:07:40.669332Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"172.11.111.3:33410","server-name":"","error":"EOF"}
2024-12-10T14:07:49.696345652Z {"level":"warn","ts":"2024-12-10T14:07:49.696279Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"172.11.111.2:60180","server-name":"","error":"EOF"}
----------------------------------------------
Per our analysis this has no direct impact on the clusters, but we still consider it a bug: the investigation indicates that kube-apiserver closes the connection to etcd without transmitting any data.
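For reference, the same warning can be provoked against any etcd member by opening a TCP connection to its client port and closing it before any bytes are sent, which is what kube-apiserver appears to be doing. The sketch below is a minimal, hypothetical reproducer (the endpoint address and the standard client port 2379 are assumptions, not taken from the affected cluster):
----------------------------------------------
// reproducer.go: open a TCP connection to an etcd member and close it
// without sending any data; the member's TLS handshake then fails with
// EOF and etcd logs the "rejected connection ... error: EOF" warning.
package main

import (
	"log"
	"net"
)

func main() {
	// Assumed endpoint; replace with an actual etcd member address.
	conn, err := net.Dial("tcp", "172.11.111.2:2379")
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	// Close immediately, before any TLS ClientHello is written.
	if err := conn.Close(); err != nil {
		log.Fatalf("close failed: %v", err)
	}
	log.Println("connection opened and closed with no data transmitted")
}
----------------------------------------------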
How reproducible:
This scenario is easy to reproduce: these logs have been found on clusters freshly installed with zero customization. Seen on:
- Bare metal
- VMware
- AWS
Actual results:
- Log spam in etcd.
- Network noise with no value.
Expected results:
These log messages should state more clearly that kube-apiserver abruptly closed the connection, or better yet should not appear at all, since no data is exchanged on these connections.
Additional info:
Using stap-perf and tcpdump we determined that it is kube-apiserver that behaves this way. Deeper investigation points to how Go handles these connections, and the issue may need to be addressed upstream.
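As background on why the logged error is a bare "EOF" and "server-name" is empty, the following minimal sketch (not etcd's actual code; self-signed certificate paths are assumed) shows how a Go TLS listener surfaces a peer that disconnects before sending a ClientHello. etcd's embed layer reports the same condition as "rejected connection":
----------------------------------------------
// tls_listener.go: hypothetical TLS listener illustrating how a client
// that closes without sending a ClientHello shows up as a handshake EOF
// with no SNI (server-name) available.
package main

import (
	"crypto/tls"
	"log"
	"net"
)

func main() {
	// Assumed certificate and key paths; replace with real ones.
	cert, err := tls.LoadX509KeyPair("server.crt", "server.key")
	if err != nil {
		log.Fatalf("load keypair: %v", err)
	}
	ln, err := net.Listen("tcp", ":2379")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go func(c net.Conn) {
			defer c.Close()
			tc := tls.Server(c, &tls.Config{Certificates: []tls.Certificate{cert}})
			if err := tc.Handshake(); err != nil {
				// A peer that closes before sending any bytes surfaces
				// here as EOF, and no server name (SNI) was ever seen.
				log.Printf("rejected connection remote-addr=%s server-name=%q error=%v",
					c.RemoteAddr(), tc.ConnectionState().ServerName, err)
				return
			}
			// ... serve the established TLS connection ...
		}(conn)
	}
}
----------------------------------------------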