-
Bug
-
Resolution: Done
-
Major
-
1.5.0
-
None
-
5
-
False
-
-
False
-
-
Bug Fix
-
Done
-
-
-
RHDH Plugins 3270, RHDH Plugins 3271
Plugin Name
rbac
Description
I use a large number of records in a CSV file:
'''
...
g, group:default/project_dev1, role:default/Group.Developer
g, group:default/project_dev2, role:default/Group.Developer
…
g, group:default/project_dev1000, role:default/Group.Developer
'''
Every such record produces log lines like:
'''
{"actor":,"eventName":"UpdateRole","isAuditLog":true,"level":"info","message":"Updated role: deleted members","meta":{"author":"csv permission policy file","members":["group:default/project_dev1"],"modifiedBy":"csv permission policy file","roleEntityRef":"role:default/Group.Developer","source":"csv-file"},"plugin":"permission","service":"backstage","stage":"handleRBACData","status":"succeeded"}
{"actor":,"eventName":"UpdateRole","isAuditLog":true,"level":"info","message":"Updated role: deleted members","meta":{"author":"csv permission policy file","members":["group:default/project_dev2"],"modifiedBy":"csv permission policy file","roleEntityRef":"role:default/Group.Developer","source":"csv-file"},"plugin":"permission","service":"backstage","stage":"handleRBACData","status":"succeeded"}
'''
In my case this lasts 2-3 minutes. During this time Backstage hangs and returns either a 200 OK or a 503 error response.
So here I see two issues:
- main: it takes too long to upload all RBAC records into the DB, and why do we do this on every restart at all? Can the update be skipped when nothing has changed? (See the sketch after this list.)
- secondary: deploying a new pod affects the database and causes an issue for the currently active pod. The same happens when a pod migrates between nodes, so a routine Kubernetes process affects the stability of the application.
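To make the "skip if unchanged" suggestion concrete, here is a minimal sketch of the kind of change detection I have in mind: hash the CSV content and only reapply policies when the hash differs from the one recorded on the previous start. All names here (PolicyMetadataStore, reloadCsvPoliciesIfChanged, applyPolicies) are hypothetical and are not the plugin's actual API; the real rbac-backend stores its state in its own database tables.
'''
import { createHash } from 'crypto';
import { promises as fs } from 'fs';

// Hypothetical metadata store; the real plugin would persist this alongside its other tables.
interface PolicyMetadataStore {
  getCsvFileHash(): Promise<string | undefined>;
  setCsvFileHash(hash: string): Promise<void>;
}

// Reapply CSV policies only when the file content actually changed.
// Returns true if a full reload was performed.
export async function reloadCsvPoliciesIfChanged(
  csvPath: string,
  store: PolicyMetadataStore,
  applyPolicies: (csvContent: string) => Promise<void>, // the existing load logic
): Promise<boolean> {
  const content = await fs.readFile(csvPath, 'utf8');
  const hash = createHash('sha256').update(content).digest('hex');

  if ((await store.getCsvFileHash()) === hash) {
    // Nothing changed since the last pod start: skip the per-record
    // delete/insert cycle that floods the logs and blocks the DB.
    return false;
  }

  await applyPolicies(content);
  await store.setCsvFileHash(hash);
  return true;
}
'''
With something like this, a pod restart with an unchanged rbac-policy.csv would cost one file read and one hash comparison instead of minutes of per-record DB writes.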
Expected behavior
It should be possible to deploy a new version of the application with close to zero downtime.
Actual Behavior with Screenshots
On every pod restart all records in the DB are updated, and this process takes a couple of minutes. The ongoing updates affect the currently active Backstage instance, which starts to return 503.
Reproduction steps
Create an rbac-policy.csv with more than a thousand lines, for example with the snippet below.
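For convenience, a small Node/TypeScript snippet that generates such a file (the file name and entity refs simply follow the example from the description; any similar data should do):
'''
import { writeFileSync } from 'fs';

// Generate 1000 group-to-role assignments matching the pattern from the description.
const lines: string[] = [];
for (let i = 1; i <= 1000; i++) {
  lines.push(`g, group:default/project_dev${i}, role:default/Group.Developer`);
}
writeFileSync('rbac-policy.csv', lines.join('\n') + '\n');
'''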
Provide the context for the Bug.
I would like to minimize the downtime and possible service degradation that can happen during deployment of a new Backstage version or migration of pod(s) between Kubernetes nodes.
Have you spent some time to check if this bug has been raised before?
I checked and didn't find a similar issue.
Have you read the Code of Conduct?
I have read the Code of Conduct
Are you willing to submit PR?
None
Upstream link: https://github.com/backstage/community-plugins/issues/2552