Bug
Resolution: Obsolete
Severity: Normal
Version: 4.11
Quality / Stability / Reliability
Moderate
Description of problem:
The NFD Operator controller manager pod is constantly restarting and remains in the 1/2 Ready state. The pod events show repeated leader elections:
~~~
4s  Normal  LeaderElection  configmap/nfd.openshift.io  nfd-controller-manager-7d5bf59c74-xx7l5_ea83b0a8-a301-4472-9b69-91acc4dd4d8d became leader
4s  Normal  LeaderElection  lease/nfd.openshift.io      nfd-controller-manager-7d5bf59c74-xx7l5_ea83b0a8-a301-4472-9b69-91acc4dd4d8d became leader
~~~
Version-Release number of selected component (if applicable):
4.11
How reproducible:
100%
Steps to Reproduce:
1. Install NFD operator in OCP v4.11
2. Create the CR with the minimum fields as shown below; note that there is no spec.workerConfig.configData section. The NFD Operator controller manager pod then fails to start.
~~~
$ oc get nodefeaturediscovery -n openshift-nfd -o yaml
apiVersion: v1
items:
- apiVersion: nfd.openshift.io/v1
  kind: NodeFeatureDiscovery
  metadata:
    creationTimestamp: "2023-04-21T04:28:27Z"
    finalizers:
    - foreground-deletion
    generation: 3
    name: nfd-instance
    namespace: openshift-nfd
    resourceVersion: "419164841"
    uid: ce696058-fe83-4999-bb9c-2f9f7f8f8796
  spec:
    operand:
      image: registry.redhat.io/openshift4/ose-node-feature-discovery
      servicePort: 12000
    topologyupdater: false
kind: List
metadata:
  resourceVersion: ""
~~~
3. Delete the CR and create it again with spec.customConfig.configData but without spec.workerConfig.configData; this also fails and the pod keeps restarting.
4. Delete the CR and create it again with both spec.customConfig.configData and spec.workerConfig.configData; now the pod runs without any issue.
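For reference, a CR of the shape that works in step 4 might look like the sketch below. This is an assumption-laden example, not the exact CR used in testing: the configData bodies are placeholder values only, included to show that both sections are present.
~~~
apiVersion: nfd.openshift.io/v1
kind: NodeFeatureDiscovery
metadata:
  name: nfd-instance
  namespace: openshift-nfd
spec:
  operand:
    image: registry.redhat.io/openshift4/ose-node-feature-discovery
    servicePort: 12000
  workerConfig:
    # placeholder worker config, not a recommended configuration
    configData: |
      core:
        sleepInterval: 60s
  customConfig:
    # placeholder custom config; empty rules
    configData: ""
~~~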
Actual results:
The nfd-controller-manager pod fails to start and restarts continuously.
Expected results:
The nfd-controller-manager pod should run without any issues, even when spec.workerConfig.configData is omitted from the CR.
Additional info: