Bug
Resolution: Done
Critical
None
1.8.0
False
False
Description of problem:
The securityContext field in the cluster and kam deployments is missing after upgrading the operator from v1.7.2 to the v1.8.0-19 RC. This is not reproducible with a direct installation of the v1.8.0-19 RC.
Prerequisites (if any, like setup, operators/versions):
A cluster with the OpenShift GitOps v1.7.2 operator installed.
Steps to Reproduce
- Clone the gitops-components-automated-testing repository
- Export IIB_ID, QUAY_USER, and NEW_VER as described in the document
- Run $ make operator-upgrade
- Clone the operator-e2e repository
- Change into the gitops-operator directory and run
$ kubectl kuttl test ./tests/parallel --config ./tests/parallel/kuttl-test.yaml --test 1-064_validate_security_contexts
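The kuttl test above presumably verifies the field by comparing a partial Deployment manifest against the live object; kuttl fails the test if any field listed in the assert file is absent. A hypothetical sketch of such an assert file follows (the asserted key and its value are illustrative, not copied from the actual 1-064 test):

```yaml
# Hypothetical kuttl assert sketch: kuttl matches this partial object
# against the live Deployment and fails if a listed field is missing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kam
  namespace: openshift-gitops
spec:
  template:
    spec:
      containers:
      - name: kam
        securityContext:          # container-level field missing after upgrade
          allowPrivilegeEscalation: false
```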
Actual results:
After upgrade:
$ oc get pods -n openshift-gitops
NAME                                                          READY   STATUS    RESTARTS   AGE
cluster-6969cc9956-hsrq5                                      1/1     Running   0          56s
kam-7d7bfc8675-wxfsn                                          1/1     Running   0          4m16s
openshift-gitops-application-controller-0                     1/1     Running   0          48s
openshift-gitops-applicationset-controller-84fc5dbf77-ctdjg   1/1     Running   0          55s
openshift-gitops-dex-server-74c4b4b97d-g58qj                  1/1     Running   0          55s
openshift-gitops-redis-bb656787d-4j2rw                        1/1     Running   0          4m15s
openshift-gitops-repo-server-7f8bc6888d-g59f6                 1/1     Running   0          55s
openshift-gitops-server-556999c9df-xq4zx                      1/1     Running   0          55s
Running the 1-064_validate_security_contexts test fails with the error below for both the kam and cluster deployments:
.spec.template.spec.containers.securityContext: key is missing from map
$ oc get deployment/kam -n openshift-gitops -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2023-03-09T18:55:14Z"
  generation: 1
  name: kam
  namespace: openshift-gitops
  ownerReferences:
  - apiVersion: pipelines.openshift.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: GitopsService
    name: cluster
    uid: 3e9234c9-207e-4374-a117-7eccd3b6caac
  resourceVersion: "6015858"
  uid: af922630-3f7d-4978-9f7b-2ba5562491e1
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/name: kam
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/name: kam
    spec:
      containers:
      - image: registry.redhat.io/openshift-gitops-1/kam-delivery-rhel8@sha256:f8af48a0deb6b6c393b93d94467030c7bf558faf8c40b4b52a72831f47cc484d
        imagePullPolicy: IfNotPresent
        name: kam
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 256Mi
          requests:
            cpu: 250m
            memory: 128Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2023-03-09T18:55:17Z"
    lastUpdateTime: "2023-03-09T18:55:17Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2023-03-09T18:55:14Z"
    lastUpdateTime: "2023-03-09T18:55:17Z"
    message: ReplicaSet "kam-7d7bfc8675" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
$ oc get deployment/cluster -n openshift-gitops -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "48"
  creationTimestamp: "2023-03-09T18:55:14Z"
  generation: 48
  name: cluster
  namespace: openshift-gitops
  ownerReferences:
  - apiVersion: pipelines.openshift.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: GitopsService
    name: cluster
    uid: 3e9234c9-207e-4374-a117-7eccd3b6caac
  resourceVersion: "6020886"
  uid: da448f34-f55e-43be-b5c6-ea3efcf0a95c
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/name: cluster
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/name: cluster
    spec:
      containers:
      - env:
        - name: INSECURE
          value: "true"
        image: registry.redhat.io/openshift-gitops-1/gitops-rhel8@sha256:7da24c63073d98160924007e8a12d2cd11c09a161a07ed8f44d8f1957360927b
        imagePullPolicy: IfNotPresent
        name: cluster
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 256Mi
          requests:
            cpu: 250m
            memory: 128Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/gitops/ssl
          name: backend-ssl
          readOnly: true
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: gitops-service-cluster
      serviceAccountName: gitops-service-cluster
      terminationGracePeriodSeconds: 30
      volumes:
      - name: backend-ssl
        secret:
          defaultMode: 420
          secretName: cluster
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2023-03-09T18:55:18Z"
    lastUpdateTime: "2023-03-09T18:55:18Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2023-03-09T18:55:14Z"
    lastUpdateTime: "2023-03-09T18:58:37Z"
    message: ReplicaSet "cluster-6969cc9956" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 48
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Expected results:
The test should pass: after the upgrade, the container-level securityContext should be present on the cluster and kam deployments, just as it is after a direct v1.8.0-19 RC installation.
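Note that in the outputs above only the pod-level `securityContext: {}` is present; the container spec has no securityContext key at all. A container-level securityContext following Kubernetes restricted-profile conventions would resemble the fragment below (illustrative values, not captured from an actual v1.8.0 install):

```yaml
# Illustrative container-level securityContext (restricted-profile style);
# the exact fields set by the v1.8.0 operator may differ.
securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
    - ALL
  runAsNonRoot: true
  seccompProfile:
    type: RuntimeDefault
```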
Reproducibility (Always/Intermittent/Only Once):
Always
Build Details:
Index image v4.8: registry-proxy.engineering.redhat.com/rh-osbs/iib:445041
Index image v4.9: registry-proxy.engineering.redhat.com/rh-osbs/iib:445044
Index image v4.10: registry-proxy.engineering.redhat.com/rh-osbs/iib:445049
Index image v4.11: registry-proxy.engineering.redhat.com/rh-osbs/iib:445058
Index image v4.12: registry-proxy.engineering.redhat.com/rh-osbs/iib:445061
Index image v4.13: registry-proxy.engineering.redhat.com/rh-osbs/iib:445063
Additional info (Such as Logs, Screenshots, etc):
- mentioned on