- Bug
- Resolution: Unresolved
-
USER PROBLEM
What is the user experiencing as a result of the bug? Include steps to reproduce.
I found the following problems while verifying the manual upgrade instructions for ACS patch release 4.9.2 release candidate 1.
- Since the security policy CRD does not change between versions 4.8 and 4.9, the section about generating resources, extracting the security policy CRD, and applying it is not needed.
- The same applies to the scanner tls-secret and config.
- As a result, the text under https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.9/html/upgrading/upgrade-roxctl#upgrade-central-cluster:
After you have created a backup of the Central database and generated the necessary resources by using the provisioning bundle, the next step is to upgrade the Central cluster.
Should probably read:
After you have created a backup of the Central database, the next step is to upgrade the Central cluster.
However, the following instructions are missing:
- instructions for upgrading scanner-v4 (which is also shipped in the install bundle) in general, and
- in particular, instructions to update the resource requirements and limits that changed for Scanner v4 since version 4.8.
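The second missing step could be addressed by patching the Scanner v4 deployments with the 4.9 resource values. A minimal sketch of such a patch for the DB deployment, with the values transcribed from the bundle diff below; the deployment name "scanner-v4-db", the namespace "stackrox", and the kubectl invocation in the comment are assumptions, not taken from the docs:

```python
# Hypothetical sketch: build a strategic-merge patch carrying the 4.9
# Scanner v4 DB resource values (values taken from the bundle diff).
# Deployment name "scanner-v4-db" and namespace "stackrox" are assumptions.
import json

scanner_v4_db_resources = {
    "limits": {"cpu": "4000m", "memory": "8Gi"},
    "requests": {"cpu": "1000m", "memory": "4Gi"},
}

patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    # The container name "db" appears in the diff context below.
                    {"name": "db", "resources": scanner_v4_db_resources}
                ]
            }
        }
    }
}

patch_json = json.dumps(patch)
# One could then apply it with something like:
#   kubectl -n stackrox patch deployment scanner-v4-db --type strategic -p "$PATCH"
print(patch_json)
```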
[stackrox]$ diff -ruN -U1 central-bundle-4.8.6/ central-bundle-4.9.0|filterdiff -i '*scanner-v4*deployment.yaml'|grepdiff --output-matching=hunk requests
diff -ruN -U1 central-bundle-4.8.6/scanner-v4/02-scanner-v4-07-db-deployment.yaml central-bundle-4.9.0/scanner-v4/02-scanner-v4-07-db-deployment.yaml
--- central-bundle-4.8.6/scanner-v4/02-scanner-v4-07-db-deployment.yaml 2025-12-11 12:24:29.000000000 +0100
+++ central-bundle-4.9.0/scanner-v4/02-scanner-v4-07-db-deployment.yaml 2025-12-11 12:19:27.000000000 +0100
@@ -93,10 +93,10 @@
readOnly: true
resources:
limits:
- cpu: 2000m
- memory: 4Gi
+ cpu: 4000m
+ memory: 8Gi
requests:
- cpu: 200m
- memory: 3Gi
+ cpu: 1000m
+ memory: 4Gi
containers:
- name: db
@@ -125,11 +125,11 @@
timeoutSeconds: 1
resources:
limits:
- cpu: 2000m
- memory: 4Gi
+ cpu: 4000m
+ memory: 8Gi
requests:
- cpu: 200m
- memory: 3Gi
+ cpu: 1000m
+ memory: 4Gi
volumeMounts:
- name: disk
mountPath: /var/lib/postgresql/data
diff -ruN -U1 central-bundle-4.8.6/scanner-v4/02-scanner-v4-07-indexer-deployment.yaml central-bundle-4.9.0/scanner-v4/02-scanner-v4-07-indexer-deployment.yaml
--- central-bundle-4.8.6/scanner-v4/02-scanner-v4-07-indexer-deployment.yaml 2025-12-11 12:24:29.000000000 +0100
+++ central-bundle-4.9.0/scanner-v4/02-scanner-v4-07-indexer-deployment.yaml 2025-12-11 12:19:27.000000000 +0100
@@ -98,11 +98,11 @@
resources:
limits:
- cpu: 2000m
+ cpu: 4000m
memory: 3Gi
requests:
- cpu: 1000m
- memory: 1500Mi
+ cpu: 1500m
+ memory: 0.5Gi
command:
- entrypoint.sh
- --conf=/etc/scanner/config.yaml
diff -ruN -U1 central-bundle-4.8.6/scanner-v4/02-scanner-v4-07-matcher-deployment.yaml central-bundle-4.9.0/scanner-v4/02-scanner-v4-07-matcher-deployment.yaml
--- central-bundle-4.8.6/scanner-v4/02-scanner-v4-07-matcher-deployment.yaml 2025-12-11 12:24:29.000000000 +0100
+++ central-bundle-4.9.0/scanner-v4/02-scanner-v4-07-matcher-deployment.yaml 2025-12-11 12:19:27.000000000 +0100
@@ -97,11 +97,11 @@
resources:
limits:
- cpu: 2000m
- memory: 2Gi
- requests:
cpu: 1000m
- memory: 500Mi
+ memory: 3Gi
+ requests:
+ cpu: 500m
+ memory: 1.5Gi
command:
- entrypoint.sh
- --conf=/etc/scanner/config.yaml
I established this by diffing Central installation bundles created with roxctl versions 4.8.6 and 4.9.2-rc1.
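The comparison approach above can be reproduced without filterdiff/grepdiff. A self-contained sketch using Python's difflib over illustrative stand-ins for the two bundles (the file contents here are abbreviated, not the real bundle files):

```python
import difflib
import fnmatch

# Illustrative stand-ins for two extracted Central bundles; the real
# comparison ran over directories created by roxctl 4.8.6 and 4.9.2-rc1.
bundle_48 = {
    "scanner-v4/02-scanner-v4-07-db-deployment.yaml":
        "resources:\n  limits:\n    cpu: 2000m\n    memory: 4Gi\n"
        "  requests:\n    cpu: 200m\n    memory: 3Gi\n",
}
bundle_49 = {
    "scanner-v4/02-scanner-v4-07-db-deployment.yaml":
        "resources:\n  limits:\n    cpu: 4000m\n    memory: 8Gi\n"
        "  requests:\n    cpu: 1000m\n    memory: 4Gi\n",
}

changed_hunks = []
for path in sorted(set(bundle_48) | set(bundle_49)):
    # Mimic: filterdiff -i '*scanner-v4*deployment.yaml'
    if not fnmatch.fnmatch(path, "*scanner-v4*deployment.yaml"):
        continue
    diff = difflib.unified_diff(
        bundle_48.get(path, "").splitlines(),
        bundle_49.get(path, "").splitlines(),
        fromfile="central-bundle-4.8.6/" + path,
        tofile="central-bundle-4.9.0/" + path,
        n=1,  # mimic: diff -U1
        lineterm="",
    )
    hunk = list(diff)
    # Mimic: grepdiff --output-matching=hunk requests
    if any("requests" in line for line in hunk):
        changed_hunks.append(hunk)

for hunk in changed_hunks:
    print("\n".join(hunk))
```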
There are also multiple differences in the secured cluster components that are missing from the upgrade docs.
I am attaching them in a file. It was created by running kubectl diff between an installation bundle for a secured cluster, downloaded from an upgraded 4.9.2 Central's /main/clusters/UUID page, and the live configuration of the same secured cluster while it was still at 4.8.6.
A consequence of this is a subtle drift between how a freshly installed ACS behaves and how one upgraded using these instructions behaves. The drift increases with each upgrade.
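The drift can be illustrated with a toy model: if each documented upgrade bumps images but never touches the fields the docs omit (such as the Scanner v4 resource settings), the upgraded spec diverges from a fresh install. All values here are illustrative, not real defaults:

```python
# Toy model of configuration drift (field names and values are illustrative).
fresh_48 = {"image": "4.8", "db_cpu_limit": "2000m", "db_mem_limit": "4Gi"}
fresh_49 = {"image": "4.9", "db_cpu_limit": "4000m", "db_mem_limit": "8Gi"}

def upgrade_per_docs(spec, target):
    # The documented manual upgrade updates images but, per this report,
    # says nothing about the changed Scanner v4 resource settings.
    out = dict(spec)
    out["image"] = target["image"]
    return out

upgraded_49 = upgrade_per_docs(fresh_48, fresh_49)
drift = {k for k in fresh_49 if upgraded_49[k] != fresh_49[k]}
print(sorted(drift))  # the fields where the upgraded cluster differs from a fresh install
```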
CONDITIONS
What conditions need to exist for a user to be affected? Is it everyone? Is it only those with a specific integration? Is it specific to someone with particular database content? etc.
- pending
ROOT CAUSE
What is the root cause of the bug?
- pending
FIX
How was the bug fixed (this is more important if a workaround was implemented rather than an actual fix)?
- pending