AMQ Streams / ENTMQST-3637

MM2 PDB maxUnavailable is one when scaling down to zero


Details

    • Type: Bug
    • Resolution: Won't Do
    • Priority: Minor
    • Fix Version/s: None
    • Affects Version/s: 1.8.4.GA
    • Component/s: cluster-operator
    Description

      When MirrorMaker 2 is scaled down to zero replicas, its PodDisruptionBudget keeps the fixed "maxUnavailable: 1". As the "after" output below shows, this leaves "ALLOWED DISRUPTIONS" at 0, which causes warnings when draining the Kubernetes node.
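
      For reference, a minimal sketch of the PodDisruptionBudget the cluster operator generates for the MM2 deployment; the apiVersion and the selector labels are assumptions based on common Strimzi conventions and may differ from the actual generated resource:

      apiVersion: policy/v1
      kind: PodDisruptionBudget
      metadata:
        name: my-mm2-mirrormaker2
      spec:
        # Fixed by the operator regardless of spec.replicas, including replicas: 0
        maxUnavailable: 1
        selector:
          matchLabels:
            # Assumed Strimzi selector labels; check the generated resource in your cluster
            strimzi.io/cluster: my-mm2
            strimzi.io/name: my-mm2-mirrormaker2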

      # before
      $ kubectl get po
      NAME                                              READY   STATUS    RESTARTS   AGE
      my-cluster-tgt-entity-operator-5949f65c7d-mg4nw   3/3     Running   0          26m
      my-cluster-tgt-kafka-0                            1/1     Running   0          28m
      my-cluster-tgt-kafka-1                            1/1     Running   0          28m
      my-cluster-tgt-kafka-2                            1/1     Running   0          28m
      my-cluster-tgt-zookeeper-0                        1/1     Running   0          29m
      my-cluster-tgt-zookeeper-1                        1/1     Running   0          29m
      my-cluster-tgt-zookeeper-2                        1/1     Running   0          29m
      my-mm2-mirrormaker2-7485ccd555-6862x              1/1     Running   0          2m13s
      my-mm2-mirrormaker2-7485ccd555-cdgsz              1/1     Running   0          2m13s
      $ kubectl get pdb
      NAME                       MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
      my-cluster-tgt-kafka       N/A             1                 1                     28m
      my-cluster-tgt-zookeeper   N/A             1                 1                     30m
      my-mm2-mirrormaker2        N/A             1                 1                     12m
      
      # patching
      $ kubectl patch kmm2 my-mm2 --type json -p '
        [{
          "op":"replace",
          "path":"/spec/replicas",
          "value":0
        }]'
      kafkamirrormaker2.kafka.strimzi.io/my-mm2 patched
      
      # after
      $ kubectl get po
      NAME                                              READY   STATUS    RESTARTS   AGE
      my-cluster-tgt-entity-operator-5949f65c7d-mg4nw   3/3     Running   0          27m
      my-cluster-tgt-kafka-0                            1/1     Running   0          29m
      my-cluster-tgt-kafka-1                            1/1     Running   0          29m
      my-cluster-tgt-kafka-2                            1/1     Running   0          29m
      my-cluster-tgt-zookeeper-0                        1/1     Running   0          30m
      my-cluster-tgt-zookeeper-1                        1/1     Running   0          30m
      my-cluster-tgt-zookeeper-2                        1/1     Running   0          30m
      $ kubectl get pdb
      NAME                       MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
      my-cluster-tgt-kafka       N/A             1                 1                     29m
      my-cluster-tgt-zookeeper   N/A             1                 1                     30m
      my-mm2-mirrormaker2        N/A             1                 0                     12m
      

      During reconciliation, the operator should set "maxUnavailable: <MAX_INT>" in that case, and set it back to "maxUnavailable: 1" when scaling up again.
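
      In other words, when "spec.replicas" is 0 the reconciled PDB could carry an effectively unlimited budget, for example (the concrete number is only illustrative; any value large enough to never block evictions would do):

      spec:
        # Hypothetical value standing in for <MAX_INT> (Integer.MAX_VALUE)
        maxUnavailable: 2147483647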

      The workaround is to use the pod disruption budget template, but one has to remember to change it back to 1 when scaling up again:

      spec:
        template:
          podDisruptionBudget:
            maxUnavailable: <MAX_INT>
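
      As a usage sketch, the workaround template can be applied and later reverted with merge patches; the value 2147483647 (Integer.MAX_VALUE) is only an example stand-in for <MAX_INT>:

      # while MM2 is scaled down to zero
      $ kubectl patch kmm2 my-mm2 --type merge -p '
        {"spec":{"template":{"podDisruptionBudget":{"maxUnavailable":2147483647}}}}'
      # remember to set it back once replicas > 0 again
      $ kubectl patch kmm2 my-mm2 --type merge -p '
        {"spec":{"template":{"podDisruptionBudget":{"maxUnavailable":1}}}}'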
      

    People

      Assignee: Unassigned
      Reporter: rhn-support-fvaleri (Federico Valeri)
      Votes: 0
      Watchers: 2
