-
Bug
-
Resolution: Done
-
Blocker
-
None
-
8
-
False
-
-
False
-
Committed
-
No Docs Impact
-
ovn-operator-bundle-container-1.0.0-45
-
Proposed
-
Proposed
-
None
-
Release Note Not Required
-
-
-
-
Important
Right now, when a new OVS container image is rolled out, all OVS pods restart; when the new vswitchd process starts, it flushes all kernel flows. This disrupts the gateway datapath until ovn-controller reconnects to vswitchd and reinstalls its flows.
To avoid this, we can adopt the flow-restore-wait option for vswitchd. Upstream it is implemented as the `reload` action on the vswitchd systemd service unit. Since we don't use systemd units in the podified environment, we need to reimplement it in our service stop/startup scripts.
In PreStop:
- dump flows to a file on a PVC.
In PreStart:
- set `flow-restore-wait=true`;
- start vswitchd;
- once vswitchd is up:
  - restore the flows;
  - set `flow-restore-wait=false` to allow ovn-controller to reconnect.
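The hook sequence above could be sketched roughly as follows. This is a sketch, not the implementation: the `ovs-save` script path, the `flows.sh` location on the PVC, and the readiness wait are assumptions for illustration (on RPM-based images `ovs-save` usually ships under `/usr/share/openvswitch/scripts/`).

```shell
#!/bin/sh
# Sketch of the container stop/startup hooks (assumptions: ovs-vsctl is on
# PATH, ovs-save is at the RPM-style path below, /var/lib/openvswitch is
# backed by the PVC).

OVS_SAVE=/usr/share/openvswitch/scripts/ovs-save   # assumed packaging layout
FLOWS_SCRIPT=/var/lib/openvswitch/flows.sh         # assumed PVC mount point

pre_stop() {
    # ovs-save emits a shell script that replays the current OpenFlow flows.
    bridges=$(ovs-vsctl list-br)
    "$OVS_SAVE" save-flows $bridges > "$FLOWS_SCRIPT"
}

post_start() {
    # Tell vswitchd not to flush/install flows until we clear the flag.
    # (In the real hooks this must happen before vswitchd is launched.)
    ovs-vsctl --no-wait set open_vswitch . other_config:flow-restore-wait=true

    # ... vswitchd is started here by the container entrypoint ...

    # Once vswitchd is up, replay the saved flows, then clear the flag so
    # ovn-controller can reconnect and take over flow installation.
    [ -f "$FLOWS_SCRIPT" ] && sh "$FLOWS_SCRIPT"
    ovs-vsctl remove open_vswitch . other_config flow-restore-wait
}
```

This mirrors what the systemd `reload` action does upstream, just split across the pod's stop and startup paths.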
When backing flows up, consider how long it may take, and whether the default pod timeouts for startup/liveness are long enough to avoid a forced kill before vswitchd is up and the flows are restored. (Alternatively, consider modifying the liveness checks to monitor flow-restore progress.)
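One way to tie the probe to restore progress, as suggested above, is to have the startup probe fail while the flow-restore-wait gate is still set. A hypothetical helper (the probe wiring and the exact `ovs-vsctl get` invocation are assumptions, not our current scripts):

```shell
#!/bin/sh
# Hypothetical startup-probe helper: succeed only once flow-restore-wait has
# been cleared, i.e. the saved flows have been replayed.
startup_probe() {
    # other_config:flow-restore-wait is a quoted string ("true") while set;
    # --if-exists makes the lookup return empty instead of erroring.
    gate=$(ovs-vsctl --if-exists get open_vswitch . \
        other_config:flow-restore-wait 2>/dev/null)
    [ "$gate" != '"true"' ]
}
```

With this as the startup probe, the liveness probe only starts counting after the restore has finished, so a slow flow replay cannot trigger a forced kill.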
- blocks
-
OSPRH-691 BZ#2092485 [RFE] [P1] Podified Control Plane : Neutron
- Closed
- causes
-
OSPRH-11228 ovs-vswitchd container not logging to console
- Backlog
- incorporates
-
OSPRH-4385 Ensure we can perform minor updates of Neutron components without impacting workload (connection drops, packets lost)
- Closed
- is related to
-
OSPRH-3663 Migrate ping test from tripleo-upgrade to CI-framework update role.
- Closed
- relates to
-
OSPRH-6141 Make EDPM deployment not disrupt OVS dataplane traffic when openvswitch package is updated
- Verified