Type: Task
Resolution: Done
Epic: OSPRH-811 - Red Hat OpenStack 18.0 Greenfield Deployment
This is a follow-up to the fix for destructive OVN db cluster pod deletions: https://github.com/openstack-k8s-operators/ovn-operator/pull/247
Before the merged fix, deleting all pods from a cluster left the RAFT cluster broken, either partially or completely. This bug was filed to add a kuttl test scenario covering that case.
The test scenario would do the following steps (a kuttl sketch follows the list):
- stand up a 3-replica (or 5-replica) ovn db cluster.
- confirm all pods are up.
- confirm that the pods established the mesh.
- delete all pods.
- check that new pods are respawned.
- check that they re-established the mesh with the correct number of pods.
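A minimal kuttl sketch of those steps could look like the following. The file names, the OVNDBCluster schema, and the ovsdbserver-nb StatefulSet/label names are assumptions for illustration and would need to match the operator's actual resources:

    # 00-deploy.yaml -- stand up a 3-replica NB db cluster
    # (the OVNDBCluster schema shown is an assumption; match the actual CRD)
    apiVersion: ovn.openstack.org/v1beta1
    kind: OVNDBCluster
    metadata:
      name: ovndbcluster-nb
    spec:
      dbType: NB
      replicas: 3

    # 00-assert.yaml -- kuttl polls until this state holds: all pods ready
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: ovsdbserver-nb    # StatefulSet name is an assumption
    status:
      readyReplicas: 3

    # 01-delete-pods.yaml -- delete all db pods at once
    apiVersion: kuttl.dev/v1beta1
    kind: TestStep
    delete:
      - apiVersion: v1
        kind: Pod
        labels:
          service: ovsdbserver-nb    # pod label is an assumption

    # 01-assert.yaml -- new pods must be respawned and report ready again
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: ovsdbserver-nb
    status:
      readyReplicas: 3

Since kuttl treats each NN-assert.yaml as a condition to poll until its timeout, the second assert only passes once the respawned pods are ready; the mesh check itself is sketched further below.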
A modification of the same scenario could, for example, delete just the first pod (to trigger leadership transfer to one of the other pods) and then delete some or all of the remaining pods.
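For that variant, the delete step would target only the first pod; the pod name below is an assumption based on StatefulSet ordinal naming:

    # 02-delete-leader.yaml -- delete only the first pod to force a raft
    # leadership transfer; remaining pods could be deleted in later steps
    apiVersion: kuttl.dev/v1beta1
    kind: TestStep
    delete:
      - apiVersion: v1
        kind: Pod
        name: ovsdbserver-nb-0    # pod name is an assumption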
The cluster membership checks can be done by executing the cluster/status ovs-appctl command. Logs can also be checked for any raft errors.
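A sketch of such an assert follows, assuming the usual OVN NB control socket path and database name (these, along with the pod names, may differ in the deployed image):

    # 02-assert.yaml -- verify raft membership from every member and scan
    # pod logs for raft errors; pod names, socket path, and db name are
    # assumptions based on common OVN NB defaults
    apiVersion: kuttl.dev/v1beta1
    kind: TestAssert
    timeout: 300
    commands:
      - script: |
          for i in 0 1 2; do
            out=$(kubectl -n $NAMESPACE exec ovsdbserver-nb-$i -- \
              ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound) || exit 1
            # each member must report itself as part of the cluster
            echo "$out" | grep -q '^Status: cluster member' || exit 1
            # ...and list all 3 servers ("<id> (<id> at tcp:...)";
            # with TLS the address scheme is ssl: instead of tcp:)
            [ "$(echo "$out" | grep -c ' at ')" -eq 3 ] || exit 1
            # pod logs should be free of raft errors
            if kubectl -n $NAMESPACE logs ovsdbserver-nb-$i | grep -i raft | grep -iq error; then
              exit 1
            fi
          done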
- is blocked by: OSPRH-6899 - ovn dbs scale up broken with TLS Enabled (Closed)