Bug | Resolution: Unresolved | Important
Bug Report: Octavia Load Balancer Creation/Failover Fails with Full Backend Member Subnet in RHOSP 17.1
Summary: In Red Hat OpenStack Platform (RHOSP) 17.1, the Octavia load balancer creation or failover process fails if the backend member subnet is fully utilized. This issue is attributed to changes introduced by commit https://review.opendev.org/c/openstack/octavia/+/665402, which causes Octavia to attempt to add IP addresses from the member subnet to the amphora, leading to an IP allocation failure in multi-subnet network configurations.
Version: RHOSP 17.1, OpenStack Octavia (includes the changes from commit https://review.opendev.org/c/openstack/octavia/+/665402)
Environment: Multi-subnet OpenStack network where one or more subnets are fully utilized.
Steps to Reproduce:
Verify IP availability: Ensure a subnet within a network is fully consumed. Note that the subnet `tenant-internal-direct-subnet3` is full.
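For example, subnet exhaustion can be confirmed with the OpenStack CLI (a sketch; tenant-internal-direct-net is a placeholder for the network containing the full subnet):
- openstack ip availability show tenant-internal-direct-net   # per-subnet used_ips vs total_ips
- openstack subnet show tenant-internal-direct-subnet3 -c cidr -c allocation_pools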
Scenario 1: Load Balancer Creation with Member in Full Subnet
Create Load Balancer:
- openstack loadbalancer create --name lb2 --vip-subnet-id tenant-internal-direct-subnet5 --wait
- openstack loadbalancer listener create --name listener2 --protocol HTTP --protocol-port 80 --wait lb2
- openstack loadbalancer pool create --name pool2 --lb-algorithm ROUND_ROBIN --listener listener2 --protocol HTTP --wait
- openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type HTTP --url-path / --wait pool2
- openstack loadbalancer member create --subnet-id tenant-internal-direct-subnet3 --address 10.199.126.182 --protocol-port 80 --wait pool2
Expected Result:
The load balancer member should be created successfully, and HAProxy should be configured on the amphora. The Octavia service should not attempt to allocate new IP addresses for the amphora on the backend member subnet if it's not required for the amphora's operational connectivity.
Actual Result:
The openstack loadbalancer member addition did not work as expected. Although the load balancer resource eventually transitions to an ACTIVE state, HAProxy is not properly configured on the amphora; as a result, requests to the VIP return HTTP 503 Service Unavailable, as shown below.
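For example, from a client with access to the VIP (the address is a placeholder):
- curl -i http://<vip-address>/   # returns "HTTP/1.0 503 Service Unavailable" instead of the member's response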
Error Log Snippet:
The following error message is present in worker.log:
2025-08-14 05:57:28.375 141942 ERROR oslo_messaging.rpc.server [-] Exception during message handling: octavia.network.base.NetworkException: No more IP addresses available on network e33324cc-69d1-4b61-b4b5-9264f7ba0d92.
Neutron server returns request_ids: ['req-12e5ba70-9c44-473c-a959-e649a414215a']
2025-08-14 05:57:28.375 141942 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/octavia/network/drivers/neutron/base.py", line 291, in plug_fixed_ip
2025-08-14 05:57:28.375 141942 ERROR oslo_messaging.rpc.server updated_port = self.neutron_client.update_port(port_id, body)
...
2025-08-14 05:57:28.375 141942 ERROR oslo_messaging.rpc.server neutronclient.common.exceptions.IpAddressGenerationFailureClient: No more IP addresses available on network e33324cc-69d1-4b61-b4b5-9264f7ba0d92.
2025-08-14 05:57:28.375 141942 ERROR oslo_messaging.rpc.server Neutron server returns request_ids: ['req-12e5ba70-9c44-473c-a959-e649a414215a']
...
2025-08-14 05:57:28.375 141942 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/octavia/network/drivers/neutron/base.py", line 294, in plug_fixed_ip
2025-08-14 05:57:28.375 141942 ERROR oslo_messaging.rpc.server raise base.NetworkException(str(e))
2025-08-14 05:57:28.375 141942 ERROR oslo_messaging.rpc.server octavia.network.base.NetworkException: No more IP addresses available on network e33324cc-69d1-4b61-b4b5-9264f7ba0d92.
Scenario 2: Failover of Existing Active Load Balancer with Member in Full Subnet
Have an existing active load balancer with at least one member configured in a fully utilized subnet (e.g., tenant-internal-direct-subnet3).
Initiate a failover operation for this load balancer, as sketched below.
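A command sketch, assuming the lb2 load balancer from Scenario 1:
- openstack loadbalancer failover lb2
- openstack loadbalancer show lb2 -c provisioning_status   # should return to ACTIVE; with this bug the failover does not complete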
Expected Result:
The failover operation should complete successfully, and the load balancer should resume normal operation with HAProxy configured on the new amphora.
Actual Result:
The failover process fails.
The Octavia logs show an octavia.network.base.NetworkException indicating "No more IP addresses available on network" during the plug_fixed_ip operation.
Error Log Snippet:
worker.log shows the same octavia.network.base.NetworkException traceback as in Scenario 1 ("No more IP addresses available on network e33324cc-69d1-4b61-b4b5-9264f7ba0d92.", raised from plug_fixed_ip in octavia/network/drivers/neutron/base.py).
Root Cause Analysis:
The issue stems from commit https://review.opendev.org/c/openstack/octavia/+/665402. This change appears to have altered Octavia's behaviour so that it attempts to assign the amphora a fixed IP from each member's subnet (the plug_fixed_ip call in the tracebacks above). When the member's subnet is full, this IP allocation fails, preventing the load balancer from being fully configured or failed over successfully.
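On a deployment where the member subnet still has free IPs, this behaviour can be observed directly: after the member is created, one of the amphora ports gains an extra fixed IP on the member subnet (a sketch; the port ID is a placeholder taken from the amphora details):
- openstack loadbalancer amphora list --loadbalancer lb2
- openstack port show <amphora-port-id> -c fixed_ips   # shows an additional fixed IP on tenant-internal-direct-subnet3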
Impact: Critical
This bug significantly hinders migration of Octavia to Wallaby (RHOSP 17.1). In production environments that use many multi-subnet networks, where load balancer members often reside in fully consumed subnets, this issue prevents successful creation and failover operations with the new amphora image.