Bug
Resolution: Unresolved
rhos-18.0.0
Moderate
Updating replicas to 0 is accepted by the openstack-operator, and the value 0 is persisted in the OpenStackControlPlane CR:
$ oc patch openstackcontrolplane/openstack-galera-network-isolation --type='json' -p='[{"op": "replace", "path": "/spec/rabbitmq/templates/rabbitmq/replicas", "value":0}]'
openstackcontrolplane.core.openstack.org/openstack-galera-network-isolation patched
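As an aside, the JSON-Patch document can be validated locally before handing it to `oc patch`, so a typo in the path fails fast instead of silently patching the wrong field. This is an illustrative helper, not part of the reproducer; it only uses python3's stdlib:

```shell
# Hypothetical pre-flight check: parse the JSON-Patch document and sanity-check
# each operation before sending it to the API server with `oc patch`.
PATCH='[{"op": "replace", "path": "/spec/rabbitmq/templates/rabbitmq/replicas", "value": 0}]'
echo "$PATCH" | python3 -c '
import json, sys
patch = json.load(sys.stdin)
for op in patch:
    # RFC 6902 defines exactly these six operations.
    assert op["op"] in ("add", "remove", "replace", "move", "copy", "test")
    assert op["path"].startswith("/")
print("patch ok:", patch[0]["path"], "->", patch[0]["value"])
'
# Then apply it (requires a live cluster):
# oc patch openstackcontrolplane/openstack-galera-network-isolation --type=json -p="$PATCH"
```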
It is propagated to the RabbitMQ CR, but never applied to the deployment:
$ oc get rabbitmq -o yaml | grep replicas
    replicas: 0
    replicas: 1
$ oc get pods | grep rabbit
rabbitmq-cell1-server-0   1/1   Running   0   23h
rabbitmq-server-0         1/1   Running   0   5m26s
rabbitmq-operator logs:
{"level":"error","ts":"2024-06-27T11:03:35Z","msg":"Cluster Scale down not supported; tried to scale cluster from 1 nodes to 0 nodes","controller":"rabbitmqcluster","controllerGroup":"rabbitmq.com","controllerKind":"RabbitmqCluster","RabbitmqCluster":{"name":"rabbitmq","namespace":"openstack"},"namespace":"openstack","name":"rabbitmq","reconcileID":"205d4830-ff39-49f5-bbd1-0e40e72fdf09","error":"UnsupportedOperation","stacktrace":"github.com/rabbitmq/cluster-operator/v2/controllers.(*RabbitmqClusterReconciler).scaleDown\n\t/workspace/controllers/reconcile_scale_down.go:25\ngithub.com/rabbitmq/cluster-operator/v2/controllers.(*RabbitmqClusterReconciler).Reconcile\n\t/workspace/controllers/rabbitmqcluster_controller.go:190\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/opt/app-root/src/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:119\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/opt/app-root/src/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:316\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/opt/app-root/src/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/opt/app-root/src/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:227"}
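Judging from the error message and the `reconcile_scale_down.go` frame in the stacktrace, the cluster-operator appears to reject any reduction of the node count outright rather than reconciling it; the CR keeps `replicas: 0` while the StatefulSet is left untouched. A minimal sketch of that assumed guard logic (the real implementation is Go inside the operator; the values here just mirror the reproducer):

```shell
# Assumed guard, inferred from the operator error above: if the desired
# replica count is lower than the current one, refuse the operation and
# leave the running cluster as-is.
current=1   # nodes currently running (from the StatefulSet)
desired=0   # nodes requested in the RabbitmqCluster CR
if [ "$desired" -lt "$current" ]; then
  echo "Cluster Scale down not supported; tried to scale cluster from $current nodes to $desired nodes"
fi
```

This would explain why the pod list above still shows `rabbitmq-server-0` running even though the CR says `replicas: 0`.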
Then, when replicas is set back to 1, the running RabbitMQ pod is restarted:
$ oc patch openstackcontrolplane/openstack-galera-network-isolation --type='json' -p='[{"op": "replace", "path": "/spec/rabbitmq/templates/rabbitmq/replicas", "value":1}]'
openstackcontrolplane.core.openstack.org/openstack-galera-network-isolation patched
causing an unnecessary (but temporary) RabbitMQ outage:
rabbitmq-server-0   1/1   Terminating       0   7m32s
rabbitmq-server-0   0/1   Terminating       0   7m39s
rabbitmq-server-0   0/1   Terminating       0   7m39s
rabbitmq-server-0   0/1   Terminating       0   7m39s
rabbitmq-server-0   0/1   Terminating       0   7m39s
rabbitmq-server-0   0/1   Pending           0   0s
rabbitmq-server-0   0/1   Terminating       0   0s
rabbitmq-server-0   0/1   Terminating       0   0s
rabbitmq-server-0   0/1   Pending           0   0s
rabbitmq-server-0   0/1   Pending           0   0s
rabbitmq-server-0   0/1   Pending           0   0s
rabbitmq-server-0   0/1   Pending           0   1s
rabbitmq-server-0   0/1   Pending           0   1s
rabbitmq-server-0   0/1   Init:0/1          0   1s
rabbitmq-server-0   0/1   Init:0/1          0   2s
rabbitmq-server-0   0/1   Init:0/1          0   2s
rabbitmq-server-0   0/1   PodInitializing   0   32s
rabbitmq-server-0   0/1   Running           0   33s
rabbitmq-server-0   1/1   Running           0   42s