WildFly WIP / WFWIP-207

UX: Force removal of Operator upon delete - do not hang due to finalizers


    • Type: Bug
    • Resolution: Done
    • Priority: Blocker
    • Component: OpenShift

      We have run into yet another use case where finalizers prevent users from deleting the project: the delete operation hangs.

      pods:

      $ oc get all
      NAME                                    READY   STATUS             RESTARTS   AGE
      pod/simple-jaxrs-operator-0             0/1     ImagePullBackOff   0          9m11s
      pod/simple-jaxrs-operator-1             0/1     ImagePullBackOff   0          9m11s
      pod/wildfly-operator-686846d6fb-db9sj   1/1     Running
      
      $ oc delete wildflyserver simple-jaxrs-operator 
      wildflyserver.wildfly.org "simple-jaxrs-operator" deleted
      ... hangs forever     
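
      To confirm that a finalizer is what blocks the deletion, the CR metadata can be inspected (a minimal sketch using standard oc JSONPath queries; the exact finalizer name is whatever the operator registered on the CR):

      $ oc get wildflyserver simple-jaxrs-operator -o jsonpath='{.metadata.finalizers}'
      $ oc get wildflyserver simple-jaxrs-operator -o jsonpath='{.metadata.deletionTimestamp}'

      A non-empty deletionTimestamp together with a non-empty finalizers list means the delete request was accepted but is stuck waiting on finalizer processing.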
      

      operator log:

      {"level":"info","ts":1569308322.2926116,"logger":"controller_wildflyserver","msg":"Reconciling WildFlyServer","Request.Namespace":"pkremens-namespace","Request.Name":"simple-jaxrs-operator"}
      {"level":"info","ts":1569308322.2927597,"logger":"controller_wildflyserver","msg":"WildflyServer is marked for deletion. Waiting for finalizers to clean the workspace","Request.Namespace":"pkremens-namespace","Request.Name":"simple-jaxrs-operator"}
      {"level":"info","ts":1569308322.2929516,"logger":"controller_wildflyserver","msg":"Transaction recovery scaledown processing","Request.Namespace":"pkremens-namespace","Request.Name":"simple-jaxrs-operator","Pod Name":"simple-jaxrs-operator-0","IP Address":"10.128.0.227","Pod State":"SCALING_DOWN_RECOVERY_INVESTIGATION","Pod Phase":"Pending"}
      {"level":"info","ts":1569308322.2931426,"logger":"controller_wildflyserver","msg":"Transaction recovery scaledown processing","Request.Namespace":"pkremens-namespace","Request.Name":"simple-jaxrs-operator","Pod Name":"simple-jaxrs-operator-1","IP Address":"10.128.0.226","Pod State":"SCALING_DOWN_RECOVERY_INVESTIGATION","Pod Phase":"Pending"}
      {"level":"error","ts":1569308322.294659,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"wildflyserver-controller","request":"pkremens-namespace/simple-jaxrs-operator","error":"Finalizer processing: failed transaction recovery for WildflyServer pkremens-namespace:simple-jaxrs-operator name Error: Found 2 errors:\n [[Pod 'simple-jaxrs-operator-0' / 'simple-jaxrs-operator' is in pending phase Pending. It will be hopefully started in a while. Transaction recovery needs the pod being fully started to be capable to mark it as clean for the scale down.]], [[Pod 'simple-jaxrs-operator-1' / 'simple-jaxrs-operator' is in pending phase Pending. It will be hopefully started in a while. Transaction recovery needs the pod being fully started to be capable to mark it as clean for the scale down.]],","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.1.1/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.1.12/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.1.12/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190221213512-86fb29eff628/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190221213512-86fb29eff628/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190221213512-86fb29eff628/pkg/util/wait/wait.go:88"}
      

      This is a trade-off between safety and usability, but we believe that these issues (the delete command hanging due to EAP7-1192) could be a serious usability problem for users.

      Actual:

      • scale down can require manual user interaction forced by finalizers
      • delete can hang, requiring manual intervention (delete the deployment object, remove the finalizer from the operator CR, run delete again; see the sketch below)
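
      For the record, the manual unblock looks roughly like this (a hedged sketch; resource names follow the example above, the workload object is assumed to be a StatefulSet given the pod ordinals, and clearing the finalizer list is only acceptable because the resource is being deleted anyway):

      $ oc delete statefulset simple-jaxrs-operator
      $ oc patch wildflyserver simple-jaxrs-operator --type=merge -p '{"metadata":{"finalizers":[]}}'
      $ oc delete wildflyserver simple-jaxrs-operator   # usually a no-op: the pending delete completes once the finalizer list is empty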

      Expected:

      • scale down can require manual user interaction forced by finalizers
      • delete should never hang; it should be treated like pulling the plug (rm -rf). If a user needs a graceful shutdown, they should first scale down to 0 and only then delete the project, as sketched below - this should be properly documented
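
      A sketch of the graceful path we would document (assuming the WildFlyServer CRD exposes spec.replicas, as used by the operator's scale-down logic):

      $ oc patch wildflyserver simple-jaxrs-operator --type=merge -p '{"spec":{"replicas":0}}'
      $ oc get pods -w    # wait for transaction recovery to finish and the pods to terminate
      $ oc delete wildflyserver simple-jaxrs-operator
      $ oc delete project pkremens-namespace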

    Assignee: Ondrej Chaloupka (ochaloup@redhat.com)
    Reporter: Petr Kremensky (pkremens@redhat.com)
    Votes: 0
    Watchers: 6
