- Bug
- Resolution: Not a Bug
- Normal
- 4.16, 4.17, 4.18, 4.19, 4.20
- False
- Important
Description of problem:
OpenShift pods that run with hostNetwork: true inherit the host's DNS configuration and do not automatically pick up DNS server updates applied through a NodeNetworkConfigurationPolicy (NNCP).
Version-Release number of selected component (if applicable):
All 4.x versions
How reproducible:
100%
Steps to Reproduce:
1. Create an NNCP to update the DNS servers on the nodes, for example:
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: custom-dns
spec:
  desiredState:
    dns-resolver:
      config:
        server:
        - x.x.x.x
        - y.y.y.y
  nodeSelector:
    beta.kubernetes.io/os: linux
2. Verify that the NNCP was applied on all the nodes:
amuhamme@amuhamme-mac ~ % oc get nncp
NAME         STATUS      REASON
custom-dns   Available   SuccessfullyConfigured
3. Check /etc/resolv.conf on the nodes to confirm the new DNS servers are present:
[root@worker-2 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
search shrocp4upi417ovn.lab.upshift.rdu2.redhat.com
nameserver x.x.x.x
nameserver y.y.y.y
4. Verify that pods which use the host's DNS are still running with the old DNS servers until a manual rollout restart:
# oc get pods -A -o jsonpath='{range .items[?(@.spec.hostNetwork==true)]}{.metadata.namespace}{"\t"}{.metadata.name}{"\n"}{end}'
(lists all pods running with hostNetwork: true)
# oc rsh -n <ns> <podname>
# cat /etc/resolv.conf
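To spot pods still holding the old configuration at scale, a small helper along these lines can compare a pod's /etc/resolv.conf contents (as captured via `oc rsh ... cat /etc/resolv.conf`) against the expected nameserver list. This is a hypothetical sketch, not part of any OpenShift tooling, and the IP addresses below are placeholder assumptions:

```python
def nameservers(resolv_conf: str) -> list[str]:
    """Extract nameserver entries from /etc/resolv.conf text."""
    servers = []
    for line in resolv_conf.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers

def stale_servers(resolv_conf: str, expected: list[str]) -> list[str]:
    """Return nameservers the pod is using that are not in the expected set."""
    return [s for s in nameservers(resolv_conf) if s not in set(expected)]

# Output captured from a hostNetwork pod before any restart (assumed values)
pod_resolv = """\
# Generated by NetworkManager
search example.com
nameserver 10.0.0.1
nameserver 10.0.0.2
"""

# Servers expected after the NNCP change (assumed values)
print(stale_servers(pod_resolv, ["192.0.2.10", "192.0.2.11"]))
# → ['10.0.0.1', '10.0.0.2']
```

A non-empty result means the pod is still resolving with pre-NNCP servers and needs a rollout restart to pick up the change.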
Actual results:
Pods that use the host's DNS configuration keep the old DNS servers until they are restarted.
Expected results:
When an NNCP DNS change is applied, pods that rely on the host DNS should either be updated automatically, or an SOP should document the required steps during NNCP changes, such as whether a node reboot or a manual rollout restart of those pods is required.
Additional info: