AMQ Broker
ENTMQBR-5876

[Operator] Drainer POD is killed because the resource limits are not set and the defaults are used


Workaround Exists:

Set a bigger default memory limit when defining LimitRanges.
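For example, a LimitRange along these lines avoids the OOMKill. The values are illustrative; size them so the drainer JVM gets enough headroom:

apiVersion: v1
kind: LimitRange
metadata:
  name: core-resource-limits
  namespace: my-test-namespace
spec:
  limits:
  - default:
      cpu: 500m        # illustrative; large enough for the drainer JVM
      memory: 1Gi
    defaultRequest:
      cpu: 250m
      memory: 512Mi
    type: Container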

Steps to Reproduce:

1. Define a limit range in your testing namespace with very low values:

apiVersion: v1
kind: LimitRange
metadata:
  name: core-resource-limits
  namespace: my-test-namespace
spec:
  limits:
  - default:
      cpu: 100m
      memory: 100Mi
    defaultRequest:
      cpu: 100m
      memory: 100Mi
    type: Container

2. Create a new AMQ cluster by applying an ActiveMQArtemis CR (see the sketch after this list) with:

• spec.deploymentPlan.size: 2
• spec.deploymentPlan.messageMigration: true
• spec.deploymentPlan.persistenceEnabled: true
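A minimal sketch of such a CR (the apiVersion may differ depending on your Operator version; the name is illustrative):

apiVersion: broker.amq.io/v2alpha5   # may differ per Operator version
kind: ActiveMQArtemis
metadata:
  name: ex-aao                       # illustrative name
  namespace: my-test-namespace
spec:
  deploymentPlan:
    size: 2
    messageMigration: true
    persistenceEnabled: true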

3. Connect to broker POD number "1" and produce some messages into a queue.

4. Update your ActiveMQArtemis CR and set spec.deploymentPlan.size: 1.
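That is, the same CR from step 2 with only the size field changed:

spec:
  deploymentPlan:
    size: 1   # scaled down from 2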

5. Wait for the drainer POD to start and check its log. Output similar to the attached log should appear; one of the log lines shows the amount of memory set for the JVM, which is a very small value:

-Xms13m -Xmx50m

6. The drainer POD is OOMKilled and the messages are not moved to the remaining AMQ node in the cluster.
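The kill is visible in the drainer pod's status (standard Kubernetes fields; the pod name below is a placeholder):

# oc get pod <drainer-pod> -o yaml  (status excerpt)
status:
  containerStatuses:
  - lastState:
      terminated:
        exitCode: 137
        reason: OOMKilled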


When a new drainer POD is created to move messages from an unused persistent storage volume into the remaining AMQ brokers in the cluster, the Operator does not set memory and CPU limits on it. Therefore, the default limits apply.

If a LimitRange policy is defined in the namespace, its default values apply to the drainer container. When the memory default is very low, the drainer is terminated with an Out Of Memory error (OOMKilled).

The drainer POD should get the same limits (memory and CPU) that are applied to the running brokers in the cluster, i.e. the values defined in the ActiveMQArtemis custom resource.
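Those values are already expressed in the CR under spec.deploymentPlan.resources; a sketch of the block the drainer should inherit (values illustrative):

spec:
  deploymentPlan:
    resources:
      requests:
        cpu: 500m      # illustrative values
        memory: 1Gi
      limits:
        cpu: "1"
        memory: 2Gi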
