JBoss Enterprise Application Platform / JBEAP-24439

EAP Operator scaling down shuts down two pods instead of one


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • OP-3.0.0.GA
    • OP-2.4.0.GA
    • Component: OpenShift, Operator
    Steps to Reproduce:

      1. Install the EAP Operator in a cluster.
      2. Build the image, for example by using helm charts:

      $ cat <<EOF > /tmp/helm-chart-build.yaml
      image:
        tag: latest
      build:
        enabled: true
        mode: s2i
        uri: 'https://github.com/jboss-developer/jboss-eap-quickstarts.git'
        ref: xp-4.0.x
        contextDir: microprofile-config
        output:
          kind: ImageStreamTag
        env:
          - name: MAVEN_ARGS_APPEND
            value: '-Dcom.redhat.xpaas.repo.jbossorg'
        triggers: {}
        s2i:
          version: latest
          arch: amd64
          jdk: '11'
          amd64:
            jdk11:
              builderImage: registry.redhat.io/jboss-eap-7/eap-xp4-openjdk11-openshift-rhel8
              runtimeImage: registry.redhat.io/jboss-eap-7/eap-xp4-openjdk11-runtime-openshift-rhel8
      deploy:
        enabled: false
      EOF
      
      $ helm install microprofile-config-app \
          -f /tmp/helm-chart-build.yaml \
          jboss-eap/eap74
      

      3. Deploy the application with persistent storage.

      cat <<EOF | oc create -f -
      apiVersion: wildfly.org/v1alpha1
      kind: WildFlyServer
      metadata:
        name: microprofile-config-operator-app
      spec:
        applicationImage: 'microprofile-config-app:latest'
        replicas: 3
        storage:
          volumeClaimTemplate:
            spec:
              resources:
                requests:
                  storage: 10Mi
      EOF
      

      4. Scale down the replicas from 3 to 2.
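
      The scale-down in step 4 can be done, for example, by patching the
      WildFlyServer resource created in step 3 (a sketch; equivalent to
      editing spec.replicas with oc edit):

      ```shell
      # Reduce the WildFlyServer CR from 3 to 2 replicas.
      oc patch wildflyserver microprofile-config-operator-app \
          --type merge -p '{"spec":{"replicas":2}}'

      # Watch the pods while the operator reconciles the change.
      oc get pods -w
      ```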


      Reviewing the scale-down process performed by the EAP Operator when the TX recovery facility is enabled, I've found that when the CRD is scaled down by one replica, two pods are stopped instead of one.

      This is the pod event sequence shown after scaling down from 3 replicas to 2:

      $ oc get pods -w
      NAME                                              READY   STATUS      RESTARTS   AGE
      eap-operator-8557b68cb-wg2r7                      1/1     Running     0          25m
      microprofile-config-app-2-build                   0/1     Completed   0          16m
      microprofile-config-app-build-artifacts-1-build   0/1     Completed   0          19m
      microprofile-config-operator-app-0                1/1     Running     0          3m50s
      microprofile-config-operator-app-1                1/1     Running     0          4m12s
      microprofile-config-operator-app-2                1/1     Running     0          4m33s
      
      
      microprofile-config-operator-app-2                1/1     Running     0          5m27s
      microprofile-config-operator-app-2                1/1     Running     0          5m57s
      microprofile-config-operator-app-2                1/1     Running     0          6m6s
      microprofile-config-operator-app-2                1/1     Terminating   0          6m20s
      microprofile-config-operator-app-1                1/1     Terminating   0          5m59s
      microprofile-config-operator-app-1                0/1     Terminating   0          6m5s
      microprofile-config-operator-app-1                0/1     Terminating   0          6m5s
      microprofile-config-operator-app-1                0/1     Terminating   0          6m5s
      microprofile-config-operator-app-2                0/1     Terminating   0          6m26s
      microprofile-config-operator-app-1                0/1     Pending       0          0s
      microprofile-config-operator-app-2                0/1     Terminating   0          6m26s
      microprofile-config-operator-app-1                0/1     Pending       0          0s
      microprofile-config-operator-app-2                0/1     Terminating   0          6m26s
      microprofile-config-operator-app-1                0/1     ContainerCreating   0          0s
      microprofile-config-operator-app-1                0/1     ContainerCreating   0          2s
      microprofile-config-operator-app-1                0/1     Running             0          3s
      microprofile-config-operator-app-1                1/1     Running             0          20s
      

      In the above sequence, both microprofile-config-operator-app-1 and microprofile-config-operator-app-2 were stopped (app-1 was later recreated), even though I scaled down by only one replica.

      I've checked this using EAP 7.4 images; EAP 8 images are not currently supported by the Operator with transaction recovery enabled.
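      To see which pod the operator actually intended to drain, the per-pod state recorded in the WildFlyServer status can be inspected during the scale-down (a sketch; the status field layout is taken from the wildfly-operator CRD and may differ across operator versions):

      ```shell
      # Print each pod name and the operator's recorded state for it;
      # with TX recovery enabled, pods being drained are marked with a
      # scaling-down state rather than ACTIVE.
      oc get wildflyserver microprofile-config-operator-app \
          -o jsonpath='{range .status.pods[*]}{.name}{"\t"}{.state}{"\n"}{end}'
      ```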

              Assignee: Yeray Borges Santana <yborgess1@redhat.com>
              Reporter: Yeray Borges Santana <yborgess1@redhat.com>