-
Bug
-
Resolution: Done
-
Critical
-
7.5.0.GA
-
None
-
OCP 3.11 with GlusterFS (QE provisioned instance)
Images: RHBA images from David Ward based on nightlies from 14th May 2019: docker-registry.engineering.redhat.com/dward/rhpam74-XXX-openshift:1.0
Datagrid: brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/jboss-datagrid-7/datagrid73-openshift:jb-datagrid-7.3-openshift-dev-rhel-7-containers-candidate-25282-20190411224552
AMQ: registry.redhat.io/amq-broker-7/amq-broker-72-openshift:1.1
AMQ broker: registry.access.redhat.com/amq-broker-7/amq-broker-72-openshift:1.1
DB (for Kie Server): registry.access.redhat.com/rhscl/mysql-57-rhel7:5.7
Template: https://github.com/jboss-container-images/rhpam-7-openshift-image/blob/master/templates/rhpam74-authoring-ha-datagrid.yaml with adjustments (datagrid image, sso)
-
Release Notes
-
-
-
-
-
-
CR1
-
- Create a space.
- Open the space as two users.
- Import all sample projects as one user, or import several sample projects one by one.
- Check in the log and/or in the Alerts view in Business Central that indexing is starting repeatedly.
Indexing creates new threads repeatedly, which causes the Business Central (BC) pod to run out of resources. During the indexing the user can still work with BC (mainly on the other pods), but spaces sometimes cannot be loaded. When BC runs out of memory, an exception like this is visible on the pod that is doing the indexing:
| Uncaught exception: java.lang.OutOfMemoryError:unable to create native thread: possibly out of memory or process/resource limits reached
OR this
| Uncaught exception: Error parsing JSON: SyntaxError: JSON.parse: unexpected character at line 1 column 2 of the JSON data
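One way to spot these exceptions without opening the console is to follow and filter the logs of the indexing pod (a minimal sketch; the pod name is a placeholder and the exact exception text may differ):
$ oc logs -f <indexing-pod> | grep -iE 'OutOfMemoryError|Uncaught exception'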
So I added debugging properties to the DeploymentConfig so I could connect with VisualVM:
spec:
  containers:
    - env:
        - name: JAVA_TOOL_OPTIONS
          value: >-
            -agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n
            -Dcom.sun.management.jmxremote=true
            -Dcom.sun.management.jmxremote.port=3000
            -Dcom.sun.management.jmxremote.rmi.port=3001
            -Djava.rmi.server.hostname=127.0.0.1
            -Dcom.sun.management.jmxremote.authenticate=false
            -Dcom.sun.management.jmxremote.ssl=false
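The same environment variable can probably also be applied without editing the DeploymentConfig by hand, for example with oc set env (a sketch; the dc name myapp-rhpamcentr is assumed from the pod names below):
$ oc set env dc/myapp-rhpamcentr JAVA_TOOL_OPTIONS='-agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=3000 -Dcom.sun.management.jmxremote.rmi.port=3001 -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false'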
With these properties I could forward the ports to my local machine and connect VisualVM.
Port forwarding:
$ oc port-forward myapp-rhpamcentr-2-lcx5q 8000 3000 3001
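VisualVM can then be attached to the forwarded JMX port, for example (assuming a local VisualVM installation that supports the --openjmx option):
$ visualvm --openjmx localhost:3000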
When I opened VisualVM there were around 800 threads. After a few minutes (5-10) there were over 2500 threads. When the pod failed (ran out of memory), there were over 4000 threads.
I also checked the thread count on the other BC pods and there were around 200 threads:
$ oc rsh myapp-rhpamcentr-2-c8lcv
sh-4.4$ cat /proc/<Process-id>/status | grep Threads    (<Process-id> for this pod = 1042)
Threads: 209
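To see how fast the thread count grows on the indexing pod, it can be sampled periodically from outside the pod (a sketch; the pod name and PID 1042 are taken from this reproduction and will differ in another deployment):
$ oc exec myapp-rhpamcentr-2-lcx5q -- sh -c 'while true; do date; grep Threads /proc/1042/status; sleep 60; done'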
In the attachment I added the log from the indexing pod.