- Bug
- Resolution: Done
- Critical
- quay-v3.4.0
- False
- False
- Undefined
Description:
This issue was found when migrating a Quay CR from QuayEcosystem to QuayRegistry. After the migration was triggered, it failed at the "migrate managed database" step. The TNG Operator pod logs show the error message: failed to migrate database {"quayecosystem": "quay1126/mig-quayecosystem", "error": "database config missing `credentialsSecretName` containing password for `postgres` user"}
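Note: the error suggests the migration expects the QuayEcosystem database config to reference, via credentialsSecretName, a secret that contains the postgres superuser password. A minimal sketch of providing one is below; the secret name and the database-root-password key are assumptions for illustration (the other key names match the ones used by the database pod later in this report) and are not verified against the operator code.

# Hypothetical sketch only: create a credentials secret that includes the superuser password
oc create secret generic quay-db-credentials -n quay1126 \
  --from-literal=database-username='<app-db-user>' \
  --from-literal=database-password='<app-db-password>' \
  --from-literal=database-root-password='<postgres-superuser-password>' \
  --from-literal=database-name='<db-name>'
# ...and reference it from the QuayEcosystem database config (sketch):
#   spec:
#     quay:
#       database:
#         credentialsSecretName: quay-db-credentials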
TNG Operator Logs:
2020-11-26T01:43:03.976Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "quayecosystem", "request": "quay310/perf-quayecosystem"}
2020-11-26T01:43:03.976Z INFO controllers.QuayEcosystem begin reconcile {"quayecosystem": "quay1126/mig-quayecosystem"}
2020-11-26T01:43:03.976Z INFO controllers.QuayEcosystem `QuayEcosystem` not marked for migration, skipping {"quayecosystem": "quay1126/mig-quayecosystem"}
2020-11-26T01:43:03.976Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "quayecosystem", "request": "quay1126/mig-quayecosystem"}
2020-11-26T01:54:29.148Z INFO controllers.QuayEcosystem begin reconcile {"quayecosystem": "quay1126/mig-quayecosystem"}
2020-11-26T01:54:29.849Z INFO controllers.QuayEcosystem attempting to migrate managed object storage {"quayecosystem": "quay1126/mig-quayecosystem"}
2020-11-26T01:54:29.849Z INFO controllers.QuayEcosystem successfully migrated managed object storage {"quayecosystem": "quay1126/mig-quayecosystem"}
2020-11-26T01:54:29.849Z INFO controllers.QuayEcosystem attempting to migrate managed database {"quayecosystem": "quay1126/mig-quayecosystem"}
2020-11-26T01:54:29.849Z ERROR controllers.QuayEcosystem failed to migrate database {"quayecosystem": "quay1126/mig-quayecosystem", "error": "database config missing `credentialsSecretName` containing password for `postgres` user"}
github.com/go-logr/zapr.(*zapLogger).Error
    /workspace/vendor/github.com/go-logr/zapr/zapr.go:128
github.com/quay/quay-operator/controllers/redhatcop.(*QuayEcosystemReconciler).Reconcile
    /workspace/controllers/redhatcop/quayecosystem_controller.go:144
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
    /workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
    /workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
    /workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
    /workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
    /workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
    /workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
2020-11-26T01:54:29.850Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "quayecosystem", "request": "quay1126/mig-quayecosystem"}
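(The excerpt above can be re-collected by tailing the operator pod shown later in this report:)

# Follow the TNG operator logs while the migration is retried
oc logs -f quay-operator-59d4f8b9fd-hqbkb -n openshift-operators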
Quay V3.3 QuayEcosystem CR:
lizhang@lzha-mac Quay3.3_operator_testing % cat quayecosystem_cr_aws_332_migration.yaml
apiVersion: redhatcop.redhat.io/v1alpha1
kind: QuayEcosystem
metadata:
  name: mig-quayecosystem
spec:
  quay:
    imagePullSecretName: redhat-pull-secret
    image: quay.io/quay/quay:v3.3.1-3
    registryBackends:
      - name: s3
        s3:
          accessKey: ***
          bucketName: quayperf
          secretKey: ***
          host: s3.us-east-2.amazonaws.com
    database:
      volumeSize: 30Gi
    envVars:
      - name: DEBUGLOG
        value: "true"
  clair:
    enabled: true
    image: quay.io/quay/clair-jwt:v3.3.1-2
    imagePullSecretName: redhat-pull-secret
    updateInterval: "60m"
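Note that this CR uses a managed database and never sets spec.quay.database.credentialsSecretName, so the database credentials exist only in the operator-generated secret referenced by the database pod shown below. A quick way to list which data keys that secret actually contains (secret name taken from the pod's secretKeyRef entries):

# Show the data keys present in the generated database secret
oc get secret mig-quayecosystem-quay-postgresql -n quay1126 -o jsonpath='{.data}'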
Quay TNG Operator Image:
lizhang@lzha-mac Quay3.3_operator_testing % oc get pod
NAME                                                  READY   STATUS    RESTARTS   AGE
mig-quayecosystem-clair-769bdf45b8-74r8b              1/1     Running   0          80m
mig-quayecosystem-clair-postgresql-6f84c88b8f-mbtgp   1/1     Running   0          81m
mig-quayecosystem-quay-6bdf75b6dc-v2z85               1/1     Running   0          81m
mig-quayecosystem-quay-config-6959c6f5d7-plv95        1/1     Running   0          82m
mig-quayecosystem-quay-postgresql-5b475cd4fc-5gttj    1/1     Running   0          82m
mig-quayecosystem-redis-9c988d767-pld9d               1/1     Running   0          83m
lizhang@lzha-mac Quay3.3_operator_testing % oc get pod quay-operator-59d4f8b9fd-hqbkb -n openshift-operators -o json | jq '.spec.containers[0].image'
"registry.redhat.io/quay/quay-rhel8-operator@sha256:975d9a16750449b98fe6f40077a68dcef6a902e39e90b829c9a12868c8b47280"
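(As a cross-check, the installed operator build can also be read from the CSV list in the operator namespace:)

# List installed operator CSVs and their versions
oc get csv -n openshift-operators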
Quay Postgresql Database POD:
lizhang@lzha-mac Quay3.3_operator_testing % oc get pod mig-quayecosystem-quay-postgresql-5b475cd4fc-5gttj -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "",
          "interface": "eth0",
          "ips": [
              "10.131.1.115"
          ],
          "default": true,
          "dns": {}
      }]
    k8s.v1.cni.cncf.io/networks-status: |-
      [{
          "name": "",
          "interface": "eth0",
          "ips": [
              "10.131.1.115"
          ],
          "default": true,
          "dns": {}
      }]
    openshift.io/scc: restricted
  creationTimestamp: "2020-11-26T01:33:49Z"
  generateName: mig-quayecosystem-quay-postgresql-5b475cd4fc-
  labels:
    app: quay-operator
    pod-template-hash: 5b475cd4fc
    quay-enterprise-component: quay-database
    quay-enterprise-cr: mig-quayecosystem
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:generateName: {}
        f:labels:
          .: {}
          f:app: {}
          f:pod-template-hash: {}
          f:quay-enterprise-component: {}
          f:quay-enterprise-cr: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"33883f33-bef4-4a41-8ee5-17df665dde8a"}:
            .: {}
            f:apiVersion: {}
            f:blockOwnerDeletion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:spec:
        f:containers:
          k:{"name":"mig-quayecosystem-quay-postgresql"}:
            .: {}
            f:env:
              .: {}
              k:{"name":"POSTGRESQL_DATABASE"}:
                .: {}
                f:name: {}
                f:valueFrom:
                  .: {}
                  f:secretKeyRef:
                    .: {}
                    f:key: {}
                    f:name: {}
              k:{"name":"POSTGRESQL_PASSWORD"}:
                .: {}
                f:name: {}
                f:valueFrom:
                  .: {}
                  f:secretKeyRef:
                    .: {}
                    f:key: {}
                    f:name: {}
              k:{"name":"POSTGRESQL_USER"}:
                .: {}
                f:name: {}
                f:valueFrom:
                  .: {}
                  f:secretKeyRef:
                    .: {}
                    f:key: {}
                    f:name: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:livenessProbe:
              .: {}
              f:exec:
                .: {}
                f:command: {}
              f:failureThreshold: {}
              f:initialDelaySeconds: {}
              f:periodSeconds: {}
              f:successThreshold: {}
              f:timeoutSeconds: {}
            f:name: {}
            f:ports:
              .: {}
              k:{"containerPort":5432,"protocol":"TCP"}:
                .: {}
                f:containerPort: {}
                f:protocol: {}
            f:readinessProbe:
              .: {}
              f:exec:
                .: {}
                f:command: {}
              f:failureThreshold: {}
              f:initialDelaySeconds: {}
              f:periodSeconds: {}
              f:successThreshold: {}
              f:timeoutSeconds: {}
            f:resources: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
            f:volumeMounts:
              .: {}
              k:{"mountPath":"/var/lib/pgsql/data"}:
                .: {}
                f:mountPath: {}
                f:name: {}
        f:dnsPolicy: {}
        f:enableServiceLinks: {}
        f:restartPolicy: {}
        f:schedulerName: {}
        f:securityContext: {}
        f:terminationGracePeriodSeconds: {}
        f:volumes:
          .: {}
          k:{"name":"data"}:
            .: {}
            f:name: {}
            f:persistentVolumeClaim:
              .: {}
              f:claimName: {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-11-26T01:33:49Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:k8s.v1.cni.cncf.io/network-status: {}
          f:k8s.v1.cni.cncf.io/networks-status: {}
    manager: multus
    operation: Update
    time: "2020-11-26T01:34:06Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:conditions:
          k:{"type":"ContainersReady"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Initialized"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Ready"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
        f:containerStatuses: {}
        f:hostIP: {}
        f:phase: {}
        f:podIP: {}
        f:podIPs:
          .: {}
          k:{"ip":"10.131.1.115"}:
            .: {}
            f:ip: {}
        f:startTime: {}
    manager: kubelet
    operation: Update
    time: "2020-11-26T01:34:24Z"
  name: mig-quayecosystem-quay-postgresql-5b475cd4fc-5gttj
  namespace: quay1126
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: mig-quayecosystem-quay-postgresql-5b475cd4fc
    uid: 33883f33-bef4-4a41-8ee5-17df665dde8a
  resourceVersion: "51273649"
  selfLink: /api/v1/namespaces/quay1126/pods/mig-quayecosystem-quay-postgresql-5b475cd4fc-5gttj
  uid: 683a53df-9f28-420b-9638-40257b29fef3
spec:
  containers:
  - env:
    - name: POSTGRESQL_USER
      valueFrom:
        secretKeyRef:
          key: database-username
          name: mig-quayecosystem-quay-postgresql
    - name: POSTGRESQL_PASSWORD
      valueFrom:
        secretKeyRef:
          key: database-password
          name: mig-quayecosystem-quay-postgresql
    - name: POSTGRESQL_DATABASE
      valueFrom:
        secretKeyRef:
          key: database-name
          name: mig-quayecosystem-quay-postgresql
    image: registry.access.redhat.com/rhscl/postgresql-96-rhel7:1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - /usr/libexec/check-container
        - --live
      failureThreshold: 3
      initialDelaySeconds: 120
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 10
    name: mig-quayecosystem-quay-postgresql
    ports:
    - containerPort: 5432
      protocol: TCP
    readinessProbe:
      exec:
        command:
        - /usr/libexec/check-container
      failureThreshold: 3
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources: {}
    securityContext:
      capabilities:
        drop:
        - KILL
        - MKNOD
        - SETGID
        - SETUID
      runAsUser: 1000610000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/lib/pgsql/data
      name: data
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-khjhp
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: default-dockercfg-q74wg
  nodeName: ip-10-0-185-102.us-east-2.compute.internal
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1000610000
    seLinuxOptions:
      level: s0:c25,c5
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mig-quayecosystem-quay-postgresql
  - name: default-token-khjhp
    secret:
      defaultMode: 420
      secretName: default-token-khjhp
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-11-26T01:33:56Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-11-26T01:34:24Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-11-26T01:34:24Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-11-26T01:33:56Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://04bc1785cba8d3aef6fa78c864c90937d3855b21b1763a13c7b8f445bdf18a7e
    image: registry.access.redhat.com/rhscl/postgresql-96-rhel7:1
    imageID: registry.access.redhat.com/rhscl/postgresql-96-rhel7@sha256:5caffb1ae63e946ad738687cccab149b92f59f34120243d3946776b1993ffcf3
    lastState: {}
    name: mig-quayecosystem-quay-postgresql
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2020-11-26T01:34:07Z"
  hostIP: 10.0.185.102
  phase: Running
  podIP: 10.131.1.115
  podIPs:
  - ip: 10.131.1.115
  qosClass: BestEffort
  startTime: "2020-11-26T01:33:56Z"
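The database pod only receives the application user's credentials (POSTGRESQL_USER / POSTGRESQL_PASSWORD / POSTGRESQL_DATABASE) from the mig-quayecosystem-quay-postgresql secret. A sketch for decoding those values to confirm that no postgres superuser password is available for the migration to pick up (key names taken from the env above):

# Decode the application DB credentials referenced by the pod env
for key in database-username database-password database-name; do
  printf '%s: ' "$key"
  oc get secret mig-quayecosystem-quay-postgresql -n quay1126 \
    -o jsonpath="{.data.$key}" | base64 --decode
  echo
done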
Catalogsource:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog-1126
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: brew.registry.redhat.io/rh-osbs/iib:28108
  displayName: My Operator Catalog
  publisher: grpc
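The CatalogSource above can be applied and checked with standard OLM commands; the filename below is illustrative:

# Apply the CatalogSource manifest and confirm the index image is being served
oc apply -f my-operator-catalog-1126.yaml
oc get catalogsource my-operator-catalog-1126 -n openshift-marketplace \
  -o jsonpath='{.status.connectionState.lastObservedState}'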
Steps:
- Open OCP console
- Deploy Quay 3.3.2 Operator
- Create the QuayEcosystem CR in the target namespace, using the managed PostgreSQL database and external AWS S3 as the registry storage backend
- Log in to Quay, create a new organization, team, and image repository, and push images to that repository
- Uninstall Quay 3.3.2 Operator
- Deploy Quay V3.4 TNG Operator with all default settings
- Edit the QuayEcosystem CR, adding "quay-operator/migrate": "true" to its metadata.labels, and save the change (see the CLI sketch after this list)
- Wait for the migration and check the status of the resulting QuayRegistry CR
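The last two steps can also be done from the CLI; this is a sketch using the namespace and resource names that appear elsewhere in this report, with the label key taken from the step above:

# Mark the QuayEcosystem for migration by the TNG operator
oc label quayecosystem mig-quayecosystem -n quay1126 quay-operator/migrate=true
# Watch for the QuayRegistry the operator creates and inspect its status/conditions
oc get quayregistry -n quay1126 -o yaml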
Expected Results:
Migration from QuayEcosystem to QuayRegistry completed successfully.
Actual Results:
Migration from QuayEcosystem to QuayRegistry failed at the "migrate managed database" step.