Type: Epic
Epic Name: Simplify Scaling of 3scale
Resolution: Done
Priority: Major
Status: To Do
Progress: 0% To Do, 0% In Progress, 100% Done
Backend consists of two types of processes:
- Listeners
- Workers
We need to deploy them in a way that takes maximum advantage of the cores at their disposal. Currently this is done by running multiple processes / worker threads inside each pod.
This EPIC is about implementing both Listeners and Workers as async/reactor processes, so that a single process is capable of utilizing 100% (or close to it) of a CPU.
We will then change the deployment so that we deploy one process per pod, removing the need to configure Listener and Worker scalability inside each pod.
Scaling then becomes purely horizontal pod scaling, which is more cloud native.
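The single-process-per-pod model described above can be sketched as a reactor-style worker: one event loop drains a job queue cooperatively instead of relying on extra worker threads inside the pod. This is an illustrative Python asyncio sketch only; the actual backend implementation, job names, and queue shape are assumptions, not the real code.

```python
import asyncio


async def handle_job(job: str) -> str:
    # Placeholder for real job processing (e.g. aggregating a report).
    await asyncio.sleep(0)  # yield to the event loop between jobs
    return f"processed:{job}"


async def worker(queue: asyncio.Queue) -> list[str]:
    # A single reactor process: one event loop drains the queue,
    # interleaving I/O-bound jobs without per-pod worker threads.
    results = []
    while not queue.empty():
        job = await queue.get()
        results.append(await handle_job(job))
        queue.task_done()
    return results


async def main() -> list[str]:
    queue: asyncio.Queue = asyncio.Queue()
    for i in range(3):
        queue.put_nowait(f"job-{i}")
    return await worker(queue)


if __name__ == "__main__":
    print(asyncio.run(main()))
```

Scaling this design up then means adding pods (replicas), not tuning per-pod concurrency knobs.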
That requires:
- Code changes in backend for Listeners
- Code changes in backend for Workers
- Changes in Redis job queues
- Changes in templates and operator for the new deployment, and removal of the old ENV vars
- Changes to the 2.7 -> 2.8 upgrade process due to the queue changes
- Changes to docs related to how to scale backend
- Changes to performance tests related to how backend scales (manually)
- Performance testing to make sure there is no major regression from these changes and that we can still meet our SKU requirements
- QE work, depending on plans to test scaling/performance in QE
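One reason the Redis job queues come into play is that each worker pod becomes a single consumer process, so jobs must survive a pod dying mid-job. A common pattern for that is an RPOPLPUSH-style "reliable queue": a job is atomically moved from the shared pending queue to a per-consumer processing list, and removed only once handled. The sketch below uses an in-memory stand-in for Redis; all names are illustrative, not the actual queue layout used by backend.

```python
from collections import deque


def reliable_pop(pending: deque, processing: deque):
    # Stand-in for Redis RPOPLPUSH: atomically move a job from the
    # shared pending queue to this consumer's processing list.
    if not pending:
        return None
    job = pending.pop()          # RPOP: take from the tail of the pending queue
    processing.appendleft(job)   # LPUSH: push onto the processing list
    return job


def acknowledge(processing: deque, job) -> None:
    # Stand-in for LREM: drop the job once handled. If a pod crashes,
    # its unacknowledged jobs remain here and can be requeued.
    processing.remove(job)


# Usage: one pending queue shared by all pods, one processing list per pod.
pending = deque(["job-1", "job-2"])
processing = deque()

job = reliable_pop(pending, processing)
acknowledge(processing, job)
```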
HPA
===
Once scaling by pod has been implemented and tested, we plan to investigate the use of HPA (Horizontal Pod Autoscaling) to have backend scale automatically based on load-related metrics.
If HPA works successfully, it may require some additional changes to templates/operators or docs, but we need to investigate before we will know the details. As we learn, we will create issues within this EPIC.
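For context, the core HPA scaling rule from the Kubernetes documentation computes the desired replica count from the ratio of the current metric value to the target value. A minimal sketch, with illustrative numbers (the 50% CPU target is an example, not a decided setting):

```python
import math


def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    # Kubernetes HPA rule:
    # desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
    return math.ceil(current_replicas * current_metric / target_metric)


# E.g. 4 backend pods at 80% average CPU with a 50% target scale out to 7.
print(desired_replicas(4, 80, 50))  # 4 * 80/50 = 6.4 -> ceil -> 7
```

With one process per pod, this rule alone governs backend capacity, which is what makes the per-pod concurrency ENV vars removable.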
- relates to THREESCALE-4878 2.8 Performance Testing (Closed)