Bug
Resolution: Duplicate
Target Version: 4.14.z
Impact: Quality / Stability / Reliability
Severity: Important
Description of problem:
NodeNetworkConfigurationPolicy created successfully, but after 2 days the status changed from Available to Degraded:

$ oc get nncp egress-policy3
NAME             STATUS     REASON
egress-policy3   Degraded   FailedToConfigure
The NNCP adds a simple route rule, shown below (a roughly equivalent iproute2 command follows the spec):
spec:
  desiredState:
    route-rules:
      config:
      - ip-to: 100.79.228.64/27
        priority: 5550
        route-table: 254
  nodeSelector:
    node-role.kubernetes.io/gateway: ""
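For reference, this rule corresponds roughly to the following iproute2 policy-routing rule on the selected nodes (a sketch only, assuming standard iproute2 semantics; nmstate programs the rule through NetworkManager rather than running this command):

$ ip rule add to 100.79.228.64/27 priority 5550 table 254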
NNCP status message: 1/2 nodes failed to configure

status:
  conditions:
  - lastHeartbeatTime: "2024-08-14T11:36:57Z"
    lastTransitionTime: "2024-08-14T11:36:57Z"
    reason: FailedToConfigure
    status: "False"
    type: Available
  - lastHeartbeatTime: "2024-08-14T11:36:57Z"
    lastTransitionTime: "2024-08-14T11:36:57Z"
    message: 1/2 nodes failed to configure
    reason: FailedToConfigure
    status: "True"
    type: Degraded
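To pinpoint which node failed and why, the per-node NodeNetworkConfigurationEnactment objects can be inspected (a hedged example; enactment names combine the node name and the policy name, so <node-name> is a placeholder):

$ oc get nnce
$ oc get nnce <node-name>.egress-policy3 -o yaml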
Version-Release number of selected component (if applicable):
kubernetes-nmstate-operator.4.14.0-202406180839 (4.14.0-202406180839)
OCP: 4.14.29
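The installed operator version can be cross-checked against its ClusterServiceVersion (a minimal sketch, assuming the operator runs in the default openshift-nmstate namespace):

$ oc get csv -n openshift-nmstate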
How reproducible:
Not always
Steps to Reproduce:
1. Deploy an NNCP that adds route-rules for an egress IP.
2. Check that the NNCP status is Available.
3. Wait some time (1 or 2 days) and check the NNCP status again (see the polling sketch after these steps).
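One hedged way to poll for the transition, using jsonpath against the condition types shown in the status above:

$ oc get nncp egress-policy3 -o jsonpath='{.status.conditions[?(@.type=="Degraded")].status}'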
Actual results:
NodeNetworkConfigurationPolicy status changes from Available (reason: SuccessfullyConfigured) to Degraded because one of the nodes fails to configure.
Expected results:
NodeNetworkConfigurationPolicy status stays Available with reason SuccessfullyConfigured.
Additional info:
A single restart of the nmstate-handler pod fixed the issue on one of their OCP clusters, but on the second cluster multiple restarts of the pod were needed before the NNCP status returned to Available. A sketch of that restart follows.
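A minimal sketch of the workaround, assuming the default openshift-nmstate namespace and the component label the handler DaemonSet pods usually carry (verify the label on your cluster with oc get pods --show-labels first):

$ oc -n openshift-nmstate delete pod -l component=kubernetes-nmstate-handler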
relates to: OCPBUGS-37666 - NMstate: Failed to create a nmstate policy - failed to verify certificate (Closed)