Project Quay / PROJQUAY-2241

Operator stuck during upgrade from 3.4 to 3.5

      The operator for one of our clients does not progress with the upgrade procedure. This is the error message from the operator log:

      2021-06-15T23:34:33.110Z        ERROR   controllers.QuayRegistry        failed to create/update object  {"quayregistry": "quay-operated", "Name": "d0-quay-app", "GroupVersionKind": "apps/v1, Kind=Deployment", "error": "Deployment.apps \"d0-quay-app\" is invalid: [spec.template.spec.volumes[0].projected: Forbidden: may not specify more than 1 volume type, spec.template.spec.containers[0].volumeMounts[0].name: Not found: \"configvolume\"]"}
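
      For context: the first half of that error ("projected: Forbidden: may not specify more than 1 volume type") is the generic Kubernetes validation message for a Volume entry that ends up with more than one volume source set at once, and the second half ("volumeMounts[0].name: Not found") means a container still mounts a volume name that is no longer present in spec.volumes. A minimal sketch with the Kubernetes Go types, just to illustrate what the API server is rejecting (the volume name is taken from the error message, the secret name is an assumption):

      package main

      import (
          "fmt"

          corev1 "k8s.io/api/core/v1"
      )

      func main() {
          // Invalid: a Volume entry may carry exactly one volume source. If the
          // updated spec ends up with, for example, both the old Secret source and
          // the new Projected source on the same entry, the API server rejects it
          // with "projected: Forbidden: may not specify more than 1 volume type".
          configVolume := corev1.Volume{
              Name: "configvolume", // name taken from the error message
              VolumeSource: corev1.VolumeSource{
                  Secret: &corev1.SecretVolumeSource{SecretName: "quay-config-secret"}, // assumed secret name
                  Projected: &corev1.ProjectedVolumeSource{
                      Sources: []corev1.VolumeProjection{{
                          Secret: &corev1.SecretProjection{
                              LocalObjectReference: corev1.LocalObjectReference{Name: "quay-config-secret"},
                          },
                      }},
                  },
              },
          }

          // A valid entry sets either Secret or Projected, never both.
          fmt.Printf("volume %q: secret=%v projected=%v\n",
              configVolume.Name, configVolume.Secret != nil, configVolume.Projected != nil)
      }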
      

      However, according to the pod list, all Quay pods (still at 3.4.3) are running, the config volume exists, and all the secrets are present.
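
      For what it's worth, this is roughly how the live Deployment can be dumped to compare the volume names on the pod template against what the containers actually mount (namespace and Deployment name are taken from the log above; the kubeconfig path and the client-go snippet itself are only an illustration, not something the client ran):

      package main

      import (
          "context"
          "fmt"
          "log"

          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
          "k8s.io/client-go/kubernetes"
          "k8s.io/client-go/tools/clientcmd"
      )

      func main() {
          // Build a client from the local kubeconfig (path is an assumption).
          config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
          if err != nil {
              log.Fatal(err)
          }
          clientset, err := kubernetes.NewForConfig(config)
          if err != nil {
              log.Fatal(err)
          }

          // Fetch the live Deployment the operator fails to update (names from the log).
          dep, err := clientset.AppsV1().Deployments("quay-operated").Get(context.TODO(), "d0-quay-app", metav1.GetOptions{})
          if err != nil {
              log.Fatal(err)
          }

          // Volume names (and which source they carry) on the pod template...
          for _, v := range dep.Spec.Template.Spec.Volumes {
              fmt.Printf("volume %q: secret=%v projected=%v\n", v.Name, v.Secret != nil, v.Projected != nil)
          }
          // ...versus the volume names each container actually mounts.
          for _, c := range dep.Spec.Template.Spec.Containers {
              for _, m := range c.VolumeMounts {
                  fmt.Printf("container %q mounts %q\n", c.Name, m.Name)
              }
          }
      }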

      This is the rest of the operator log:

      2021-06-15T23:34:34.118Z        DEBUG   controller-runtime.controller   Successfully Reconciled {"controller": "quayregistry", "request": "quay-operated/d0"}
      2021-06-15T23:34:34.118Z        DEBUG   controller-runtime.manager.events       Warning {"object": {"kind":"QuayRegistry","namespace":"quay-operated","name":"d0","uid":"cd80e21c-03cf-4dc3-80bc-e4785a30dbb3","apiVersion":"quay.redhat.com/v1","resourceVersion":"648944560"}, "reason": "ComponentCreationFailed", "message": "all Kubernetes objects not created/updated successfully: Deployment.apps \"d0-quay-app\" is invalid: [spec.template.spec.volumes[0].projected: Forbidden: may not specify more than 1 volume type, spec.template.spec.containers[0].volumeMounts[0].name: Not found: \"configvolume\"]"}
      W0616 01:34:42.626211       1 reflector.go:326] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 17; INTERNAL_ERROR") has prevented the request from succeeding
      E0616 01:34:47.873132       1 reflector.go:153] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224: Failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://172.30.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": http2: client connection lost") has prevented the request from succeeding (get routes.route.openshift.io)
      I0616 01:34:58.876840       1 trace.go:116] Trace[65117175]: "Reflector ListAndWatch" name:sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224 (started: 2021-06-16 01:34:48.87326468 +0000 UTC m=+7239.489977105) (total time: 10.003546586s):
      Trace[65117175]: [10.003546586s] [10.003546586s] END
      E0616 01:34:58.876860       1 reflector.go:153] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224: Failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
      I0616 01:35:09.880675       1 trace.go:116] Trace[1157944885]: "Reflector ListAndWatch" name:sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224 (started: 2021-06-16 01:34:59.876967377 +0000 UTC m=+7250.493679827) (total time: 10.003685875s):
      Trace[1157944885]: [10.003685875s] [10.003685875s] END
      E0616 01:35:09.880695       1 reflector.go:153] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224: Failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
      I0616 01:35:40.883817       1 trace.go:116] Trace[1612251387]: "Reflector ListAndWatch" name:sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224 (started: 2021-06-16 01:35:10.880812467 +0000 UTC m=+7261.497524900) (total time: 30.002981525s):
      Trace[1612251387]: [30.002981525s] [30.002981525s] END
      E0616 01:35:40.883837       1 reflector.go:153] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224: Failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
      E0616 01:36:11.912826       1 reflector.go:307] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224: Failed to watch *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
      I0616 01:36:38.537917       1 trace.go:116] Trace[764004384]: "Reflector ListAndWatch" name:sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224 (started: 2021-06-16 01:36:12.912938384 +0000 UTC m=+7323.529650780) (total time: 25.624953244s):
      Trace[764004384]: [25.62476336s] [25.62476336s] Objects listed
      W0622 13:45:17.163711       1 reflector.go:326] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 13483; INTERNAL_ERROR") has prevented the request from succeeding
      W0622 13:47:35.468964       1 reflector.go:326] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 13491; INTERNAL_ERROR") has prevented the request from succeeding
      W0622 13:48:57.668483       1 reflector.go:326] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 13497; INTERNAL_ERROR") has prevented the request from succeeding
      

      After that, the log just repeats the INTERNAL_ERROR watch failures over and over.

      I've added all the files the client sent me to the case. Please check! Thanks!

              Assignee: Ivan Bazulic (rhn-support-ibazulic)
              Reporter: Ivan Bazulic (rhn-support-ibazulic)