1. Introduction
1.1. What is Red Hat JBoss Enterprise Application Platform (JBoss EAP)?
Red Hat JBoss Enterprise Application Platform 7 (JBoss EAP) is a middleware platform built on open standards and compliant with the Java Enterprise Edition 7 specification. It provides preconfigured options for features such as high-availability clustering, messaging, and distributed caching. It includes a modular structure that allows you to enable services only when required, which results in improved startup speed.
The web-based management console and management command line interface (CLI) make editing XML configuration files unnecessary and add the ability to script and automate tasks. In addition, JBoss EAP includes APIs and development frameworks that allow you to quickly develop, deploy, and run secure and scalable Java EE applications. JBoss EAP 7 is a certified implementation of the Java EE 7 full and web profile specifications.
1.2. How Does JBoss EAP Work on OpenShift?
Red Hat offers a containerized image for JBoss EAP that is designed for use with OpenShift. Using this image, developers can quickly and easily build, scale, and test applications that are deployed across hybrid environments.
1.3. Comparison: JBoss EAP and JBoss EAP for OpenShift
There are some notable differences when comparing the JBoss EAP product with the JBoss EAP for OpenShift image. The following table describes these differences and notes which features are included or supported in the current version of JBoss EAP for OpenShift.
Feature | Status | Description
---|---|---
JBoss EAP management console | Not included | The JBoss EAP management console is not included in this release of JBoss EAP for OpenShift.
Managed domain | Not supported | Although a JBoss EAP managed domain is not supported, creation and distribution of applications are managed in the containers on OpenShift.
Default root page | Disabled | The default root page is disabled, but you can deploy your own application to the root context as ROOT.war.
Remote messaging | Supported | ActiveMQ Artemis for inter-pod and remote messaging is supported. HornetQ is only supported for intra-pod messaging and is only enabled when ActiveMQ Artemis is absent. JBoss EAP for OpenShift 7 includes ActiveMQ Artemis as a replacement for HornetQ.
1.4. Version Compatibility and Support
JBoss EAP for OpenShift is updated frequently. Therefore, it is important to understand which versions of the images are compatible with which versions of OpenShift. Not all images are compatible with all OpenShift 3.x versions. See OpenShift and Atomic Platform Tested Integrations on the Red Hat Customer Portal for more information on version compatibility and support.
1.5. Persistent Templates
The JBoss EAP database templates, which deploy JBoss EAP and database pods, have both ephemeral and persistent variations. For example, for a JBoss EAP application backed by a MongoDB database, there are eap71-mongodb-s2i
and eap71-mongodb-persistent-s2i
templates.
Persistent templates include an environment variable to provision a persistent volume claim, which binds with an available persistent volume to be used as a storage volume for the JBoss EAP for OpenShift deployment. Information, such as timer schema, log handling, or data updates, is stored on the storage volume, rather than in ephemeral container memory. This information persists if the pod goes down for any reason, such as project upgrade, deployment rollback, or an unexpected error.
Without a persistent storage volume for the deployment, this information is stored in the container memory only, and is lost if the pod goes down for any reason.
For example, an EE timer backed by persistent storage continues to run if the pod is restarted. Once the pod is running again, any events triggered by the timer are enacted when the application resumes.
Conversely, if the EE timer is running in the container memory, the timer status is lost if the pod is restarted, and starts from the beginning when the pod is running again.
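For instance, a persistent variant can be instantiated from the command line. This is only a sketch: the template must be available in your cluster, and the parameter names shown (APPLICATION_NAME, VOLUME_CAPACITY) are assumptions that should be checked with oc describe template.

```
# Deploy the persistent MongoDB-backed JBoss EAP template (sketch).
# Parameter names are assumptions -- verify them for your installed template.
$ oc new-app --template=eap71-mongodb-persistent-s2i \
    -p APPLICATION_NAME=eap-app \
    -p VOLUME_CAPACITY=1Gi
```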
2. Installation and Configuration
2.1. Overview
Before installing JBoss EAP for OpenShift on your OpenShift instance, you must first determine whether you are installing JBoss EAP for OpenShift in a production or a non-production environment. Production environments require Secure Sockets Layer (SSL) encryption for network communication for general public access, which is also known as a HTTPS connection. In this case you must use a signed certificate from a Certificate Authority (CA).
However, if you are installing JBoss EAP for OpenShift for demonstration purposes, proof-of-concept (POC) designs, or environments with internal access only, unencrypted and insecure communication might be sufficient. The instructions referenced here describe how to create the required keystore for JBoss EAP for OpenShift with a self-signed or a purchased SSL certificate.
Using a self-signed SSL certificate to create a keystore is not intended for production environments. For production environments, or where SSL-encrypted communication is required, you must use an SSL certificate that is purchased from a verified CA.
2.2. Key Terms
The following table describes the various terms that are used within the context of this section.
Key Term | Description
---|---
SSL | Secure Sockets Layer encrypts network traffic between the client and the JBoss EAP web server, providing an HTTPS connection between them.
HTTPS | HTTPS is a protocol that provides an SSL-encrypted connection between a client and a server.
Keystore | A Java keystore is a repository used to store SSL/TLS certificates and distribute them to applications for encrypted communication.
Secrets | A secret contains the Java keystore that is passed to JBoss EAP for OpenShift, along with a password to access it. This information is used in scripts to configure HTTPS access.
2.3. Initial Setup
The instructions in this guide follow on from the OpenShift Primer and assume an OpenShift instance similar to the one created there.
2.4. Getting Started
After you have completed the initial setup, this section helps you get started with JBoss EAP for OpenShift by walking through the preliminary steps required before you can install the image on OpenShift. The process consists of the steps described in the following subsections.
2.4.1. Create a New Project in OpenShift
A project allows a group of users to organize and manage content separately from other groups. Create a project in OpenShift by using the following command.
$ oc new-project PROJECT_NAME
You can then make this new project the current project using the following command:
$ oc project PROJECT_NAME
2.4.2. Create a JBoss EAP Service Account in Your Project
Service accounts are API objects that exist within each OpenShift project. Create a service account named eap-service-account
in the OpenShift project that you created.
$ oc create serviceaccount eap-service-account -n PROJECT_NAME
After creating the service account, configure the access permissions for it using the following command, specifying the correct name depending on the JBoss EAP image version.
$ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):eap-service-account -n $(oc project -q)
The service account that you create must be configured with the correct permissions to view pods in Kubernetes. This is required for clustering to work. You can view the top of the log files to see whether the correct service account permissions have been configured.
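A quick way to perform that check is to look at the first lines of an application pod's log; the pod name below is a placeholder.

```
# Replace POD_NAME with one of your application pods (see `oc get pods`).
# The start of the log indicates whether the service account can list pods for clustering.
$ oc logs POD_NAME | head -n 30
```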
2.4.3. Create a Keystore
JBoss EAP for OpenShift requires a keystore to be imported to properly install and configure the image on your OpenShift instance. Note that self-signed certificates do not provide secure communication and are intended for internal testing purposes.
For production environments, Red Hat recommends that you use your own SSL certificate purchased from a verified Certificate Authority (CA) for SSL-encrypted connections (HTTPS).
The following command uses the JDK keytool
to generate a keystore.
$ keytool -genkey -keyalg RSA -alias ALIAS_NAME -keystore KEYSTORE_FILENAME.jks -validity 360 -keysize 2048
2.4.4. Create a Secret from the Keystore
Next, create a secret from the previously created keystore using the following command.
$ oc secrets new SECRET_NAME KEYSTORE_FILENAME.jks
2.4.5. Add the Secret to Your Service Account
Now add the secret to the eap-service-account
you previously created. You can do this with the following command.
$ oc secrets add serviceaccount/eap-service-account secret/SECRET_NAME
2.4.6. Create and Deploy the JBoss EAP Application
You can now create a JBoss EAP application using the defined image, or you can use the basic S2I template.
To create a JBoss EAP application using the defined image, run the following command.
$ oc new-app jboss-eap-7/eap71-openshift
Alternatively, you can create a JBoss EAP application using the basic S2I template with the following command.
$ oc new-app eap7-basic-s2i
2.5. Configuring JBoss EAP for OpenShift
The recommended method to run and configure JBoss EAP for OpenShift is to use the OpenShift S2I process together with the application template parameters and environment variables.
The variable |
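As a sketch of this approach, a template can be instantiated with parameters (-p) and environment variables (-e) from the CLI. The template name and parameters below are assumptions; list the templates installed in your cluster and their parameters before relying on them.

```
# Instantiate an EAP S2I template, overriding parameters and passing environment
# variables through to the resulting build and deployment (sketch).
# Template name and parameters are assumptions -- check with:
#   oc get templates -n openshift
#   oc describe template eap71-basic-s2i -n openshift
$ oc new-app --template=eap71-basic-s2i \
    -p SOURCE_REPOSITORY_URL=https://github.com/jboss-developer/jboss-eap-quickstarts.git \
    -p SOURCE_REPOSITORY_REF=7.1.0.GA \
    -p CONTEXT_DIR=kitchensink \
    -e MAVEN_ARGS_APPEND="-Dcom.example.extra.property=value"
```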
The S2I process for JBoss EAP for OpenShift works as follows:
-
If a
pom.xml
file is present in the source repository, a Maven build process is triggered that uses the contents of the$MAVEN_ARGS
environment variable. Although you can specify arguments or options with the$MAVEN_ARGS
environment variable, Red Hat recommends that you use the$MAVEN_ARGS_APPEND
environment variable to do this. The$MAVEN_ARGS_APPEND
variable takes the default arguments from$MAVEN_ARGS
and appends the options from$MAVEN_ARGS_APPEND
to it. By default, the OpenShift profile uses the Mavenpackage
goal, which includes system properties for skipping tests (-DskipTests
) and enabling the Red Hat GA repository (-Dcom.redhat.xpaas.repo
). The results of a successful Maven build are copied toEAP_HOME/standalone/deployments
. This includes all JAR, WAR, and EAR files from the source repository specified by the$ARTIFACT_DIR
environment variable. The default value of$ARTIFACT_DIR
is the target directory. To use Maven behind a proxy on the JBoss EAP for OpenShift image, set the
$HTTP_PROXY_HOST
and$HTTP_PROXY_PORT
environment variables. Optionally, you can also set the$HTTP_PROXY_USERNAME
,HTTP_PROXY_PASSWORD
, andHTTP_PROXY_NONPROXYHOSTS
variables. -
EAP_HOME/standalone/deployments
is the artifacts directory, which is specified with the$ARTIFACT_DIR
environment variable. -
All files in the configuration source repository directory are copied to
EAP_HOME/standalone/configuration
. If you want to use a custom JBoss EAP configuration file, it should be namedstandalone-openshift.xml
. -
All files in the modules source repository directory are copied to
EAP_HOME/modules
.
See Artifact Repository Mirrors for additional guidance on how to instruct the S2I process to utilize the custom Maven artifacts repository mirror.
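For example, the Maven and proxy variables described above can be set on an existing build configuration with oc env; the build configuration name eap-app and the proxy values are placeholders for this sketch.

```
# Append extra Maven options and point the S2I build at an HTTP proxy (sketch).
# "bc/eap-app" and the proxy host/port are placeholders for your environment.
$ oc env bc/eap-app \
    MAVEN_ARGS_APPEND="-Dcom.example.skip.feature=true" \
    HTTP_PROXY_HOST=proxy.example.com \
    HTTP_PROXY_PORT=3128
```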
2.6. Build Extensions and Project Artifacts
The JBoss EAP for OpenShift image extends database support in OpenShift using various artifacts. These artifacts are included in the built image through different mechanisms:
-
S2I artifacts that are injected into the image during the S2I process.
-
Runtime artifacts from environment files provided through the OpenShift Secret mechanism.
2.6.1. S2I Artifacts
The S2I artifacts include modules, drivers, and additional generic deployments that provide the necessary configuration infrastructure required for the deployment. This configuration is built into the image during the S2I process so that only the datasources and associated resource adapters need to be configured at runtime.
See Artifact Repository Mirrors for additional guidance on how to instruct the S2I process to utilize the custom Maven artifacts repository mirror.
Modules, Drivers, and Generic Deployments
There are a few options for including these S2I artifacts in the JBoss EAP for OpenShift image:
-
Include the artifact in the application source deployment directory. The artifact is downloaded during the build and injected into the image. This is similar to deploying an application on the JBoss EAP for OpenShift image.
-
Include the
CUSTOM_INSTALL_DIRECTORIES
environment variable, which is a comma-separated list of directories used for installation and configuration of artifacts for the image during the S2I process. There are two methods for including this information in the S2I build:
An
install.sh
script in the nominated installation directory. The install script executes during the S2I process and operates with impunity.

install.sh Script Example

```
#!/bin/bash
injected_dir=$1
source /usr/local/s2i/install-common.sh
install_deployments ${injected_dir}/injected-deployments.war
install_modules ${injected_dir}/modules
configure_drivers ${injected_dir}/drivers.env
```
The
install.sh
script is responsible for customizing the base image using APIs provided byinstall-common.sh
.install-common.sh
contains functions that are used by theinstall.sh
script to install and configure the modules, drivers, and generic deployments. Functions contained within
install-common.sh
:-
install_modules
-
configure_drivers
-
install_deployments
Modules

A module is a logical grouping of classes used for class loading and dependency management. Modules are defined in the
EAP_HOME/modules/
directory of the application server. Each module exists as a subdirectory, for exampleEAP_HOME/modules/org/apache/
. Each module directory then contains a slot subdirectory, which defaults to main and contains themodule.xml
configuration file and any required JAR files.

Example module.xml File

```
<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.0" name="org.apache.derby">
    <resources>
        <resource-root path="derby-10.12.1.1.jar"/>
        <resource-root path="derbyclient-10.12.1.1.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>
```
The
install_modules
function ininstall.sh
copies the respective JAR files to the modules directory in JBoss EAP, along with themodule.xml
Drivers

Drivers are installed as modules. The driver is then configured in
install.sh
by theconfigure_drivers
function, the configuration properties for which are defined in a runtime artifact environment file.

Example drivers.env File

```
#DRIVER
DRIVERS=DERBY
DERBY_DRIVER_NAME=derby
DERBY_DRIVER_MODULE=org.apache.derby
DERBY_DRIVER_CLASS=org.apache.derby.jdbc.EmbeddedDriver
DERBY_XA_DATASOURCE_CLASS=org.apache.derby.jdbc.EmbeddedXADataSource
```
Generic Deployments

Deployable archive files, such as JARs, WARs, RARs, or EARs, can be deployed from an injected image using the
install_deployments
function supplied by the API ininstall-common.sh
.
-
-
If the
CUSTOM_INSTALL_DIRECTORIES
environment variable has been declared but noinstall.sh
scripts are found in the custom installation directories, the following artifact directories will be copied to their respective destinations in the built image:-
modules/*
copied to$JBOSS_HOME/modules/system/layers/openshift
-
configuration/*
copied to$JBOSS_HOME/standalone/configuration
-
deployments/*
copied to$JBOSS_HOME/standalone/deployments
This is a basic configuration approach compared to the
install.sh
alternative, and requires the artifacts to be structured appropriately. -
-
2.6.2. Runtime Artifacts
Datasources
There are three types of datasources:
-
Default internal datasources. These are PostgreSQL, MySQL, and MongoDB. These datasources are available on OpenShift by default through the Red Hat Registry and do not require additional environment files to be configured. Set the DB_SERVICE_PREFIX_MAPPING environment variable to the name of the OpenShift service for the database to be discovered and used as a datasource.
-
Other internal datasources. These are datasources not available by default through the Red Hat Registry but run on OpenShift. Configuration of these datasources is provided by environment files added to OpenShift Secrets.
-
External datasources that are not run on OpenShift. Configuration of external datasources is provided by environment files added to OpenShift Secrets.
```
# derby datasource
ACCOUNTS_DERBY_DATABASE=accounts
ACCOUNTS_DERBY_JNDI=java:/accounts-ds
ACCOUNTS_DERBY_DRIVER=derby
ACCOUNTS_DERBY_USERNAME=derby
ACCOUNTS_DERBY_PASSWORD=derby
ACCOUNTS_DERBY_TX_ISOLATION=TRANSACTION_READ_UNCOMMITTED
ACCOUNTS_DERBY_JTA=true

# Connection info for xa datasource
ACCOUNTS_DERBY_XA_CONNECTION_PROPERTY_DatabaseName=/home/jboss/source/data/databases/derby/accounts

# _HOST and _PORT are required, but not used
ACCOUNTS_DERBY_SERVICE_HOST=dummy
ACCOUNTS_DERBY_SERVICE_PORT=1527
```
The DATASOURCES
property is a comma-separated list of datasource property prefixes. These prefixes are then appended to all properties for that datasource. Multiple datasources can then be included in a single environment file. Alternatively, each datasource can be provided in separate environment files.
Datasources contain two types of properties: connection pool-specific properties and database driver-specific properties. Database driver-specific properties use the generic XA_CONNECTION_PROPERTY
, because the driver itself is configured as a driver S2I artifact. The suffix of the driver property is specific to the particular driver for the datasource.
In the above example, ACCOUNTS
is the datasource prefix, XA_CONNECTION_PROPERTY
is the generic driver property, and DatabaseName
is the property specific to the driver.
The datasources environment files are added to the OpenShift Secret for the project. These environment files are then called within the template using the ENV_FILES
environment property, the value of which is a comma-separated list of fully qualified environment files as shown below.
{ "Name": "ENV_FILES", "Value": "/etc/extensions/datasources1.env,/etc/extensions/datasources2.env" }
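As a sketch, the environment files can be packaged into a secret with the OpenShift client tools; the secret name below is a placeholder, and the secret still has to be mounted into the pod at the path referenced by ENV_FILES (here /etc/extensions), which is normally handled by the application template.

```
# Create a secret holding the datasource environment files and link it to the
# service account used by the deployment (names are placeholders).
$ oc secrets new eap-extensions datasources1.env datasources2.env
$ oc secrets link eap-service-account eap-extensions
```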
Resource Adapters
Configuration of resource adapters is provided by environment files added to OpenShift Secrets.
Attribute | Description |
---|---|
PREFIX_ID |
The identifier of the resource adapter as specified in the server configuration file. |
PREFIX_ARCHIVE |
The resource adapter archive. |
PREFIX_MODULE_SLOT |
The slot subdirectory, which contains the |
PREFIX_MODULE_ID |
The JBoss Module ID where the object factory Java class can be loaded from. |
PREFIX_CONNECTION_CLASS |
The fully qualified class name of a managed connection factory or admin object. |
PREFIX_CONNECTION_JNDI |
The JNDI name for the connection factory. |
PREFIX_PROPERTY_ParentDirectory |
Directory where the data files are stored. |
PREFIX_PROPERTY_AllowParentPaths |
Set |
PREFIX_POOL_MAX_SIZE |
The maximum number of connections for a pool. No more connections will be created in each sub-pool. |
PREFIX_POOL_MIN_SIZE |
The minimum number of connections for a pool. |
PREFIX_POOL_PREFILL |
Specifies if the pool should be prefilled. Changing this value requires a server restart. |
PREFIX_POOL_FLUSH_STRATEGY |
How the pool should be flushed in case of an error. Valid values are: |
The RESOURCE_ADAPTERS
property is a comma-separated list of resource adapter property prefixes. These prefixes are then appended to all properties for that resource adapter. Multiple resource adapters can then be included in a single environment file. In the example below, MYRA
is used as the prefix for a resource adapter. Alternatively, each resource adapter can be provided in separate environment files.
```
#RESOURCE_ADAPTER
RESOURCE_ADAPTERS=MYRA
MYRA_ID=myra
MYRA_ARCHIVE=myra.rar
MYRA_CONNECTION_CLASS=org.javaee7.jca.connector.simple.connector.outbound.MyManagedConnectionFactory
MYRA_CONNECTION_JNDI=java:/eis/MySimpleMFC
```
The resource adapter environment files are added to the OpenShift Secret for the project namespace. These environment files are then called within the template using the ENV_FILES
environment property, the value of which is a comma-separated list of fully qualified environment files as shown below.
{ "Name": "ENV_FILES", "Value": "/etc/extensions/resourceadapter1.env,/etc/extensions/resourceadapter2.env" }
3. Tutorials
3.1. Example Workflow: Using Maven to Build and Run a Java EE 7 Application on the JBoss EAP for OpenShift Image
This tutorial focuses on building and running a Java EE 7 application on OpenShift using the JBoss EAP for OpenShift image. The kitchensink
quickstart example is used here, which demonstrates a Java EE 7 web-enabled database application using JSF, CDI, EJB, JPA, and Bean Validation. See the kitchensink
quickstart that ships with JBoss EAP 7 for more information.
The |
3.1.1. Prepare for Deployment
-
Log in to your OpenShift instance using the
oc login
command. -
Create a new project.
$ oc new-project eap-demo
-
Create a service account to be used for this deployment.
$ oc create serviceaccount eap-service-account
-
Add the view role to the service account. This enables the service account to view all the resources in the
eap-demo
namespace, which is necessary for managing the cluster.

$ oc policy add-role-to-user view system:serviceaccount:eap-demo:eap-service-account
-
Generate a self-signed certificate keystore. This example uses the JDK
keytool
to generate dummy credentials for use with the keystore.

$ keytool -genkey -keyalg RSA -alias selfsigned -keystore keystore.jks -validity 360 -keysize 2048
OpenShift does not permit login authentication from self-signed certificates. For demonstration purposes, this example uses OpenSSL to generate a CA certificate to sign the SSL keystore and create a truststore. This truststore is also included in the creation of the secret, and specified in the SSO template.
For production environments, it is recommended that you use your own SSL certificate purchased from a verified Certificate Authority (CA) for SSL-encrypted connections (HTTPS).
-
Use the generated keystore file to create the secret.
$ oc secrets new eap-app-secret keystore.jks
-
Add the secret to the service account created earlier.
$ oc secrets link eap-service-account eap-app-secret
3.1.2. Deployment
-
Create a new application using the JBoss EAP for OpenShift image and Java source code.
$ oc new-app jboss-eap71-openshift~https://github.com/jboss-developer/jboss-eap-quickstarts.git#7.1.0.GA --context-dir=kitchensink
-
Retrieve the name of the build config.
```
$ oc get bc -o name
buildconfigs/jboss-eap-quickstarts
```
-
View the Maven build logs for the example repository by running the following command:
$ oc logs -f buildconfig/jboss-eap-quickstarts
3.1.3. Post Deployment
-
Get the service name.
```
$ oc get service
NAME                    ...
jboss-eap-quickstarts   ...
```
-
Expose the service as a route to be able to use it from the browser.
$ oc expose service/jboss-eap-quickstarts --port=8080
-
Get the route.
$ oc get route
-
Access the application in your browser using the URL. The URL is the value of the
HOST/PORT
field from the previous command's output. -
Optionally, you can also scale up the application instance by running the following command:
$ oc scale dc jboss-eap-quickstarts --replicas=3
3.1.4. Using the Kitchensink Application
-
Navigate to the service address for the
kitchensink
application, using the value of theHOST
/PORT
field fromoc get route
command output. The title of the page readsWelcome to JBoss!
. -
Use the
Member Registration
section to add members to the database. The application provides some constraints to theName
,Email
, andPhone #
fields. Once completed, anId
is generated and the new user appears in theMembers
table. -
Click the
/rest/members
link beneath the table to display the REST API response information for the registered members.
You can close the browser and open it again later, or you can open the application in a different browser, and the member data is retained as long as the pod remains active.
4. Reference Information
The content in this section is derived from the engineering documentation for this image. It is provided for reference as it can be useful for development purposes and for testing beyond the scope of the product documentation. |
4.1. Information Environment Variables
The following environment variables are designed to provide information to the image and should not be modified by the user:
Variable Name | Description and Value |
---|---|
JBOSS_IMAGE_NAME |
The image name. Value: |
JBOSS_IMAGE_RELEASE |
The image release label. Value: |
JBOSS_IMAGE_VERSION |
The image version. Value: |
JBOSS_MODULES_SYSTEM_PKGS |
A comma-separated list of JBoss EAP system modules packages that are available to applications. Value: |
STI_BUILDER |
Provides OpenShift S2I support for Value: |
4.2. Configuration Environment Variables
You can configure the following environment variables to adjust the image without requiring a rebuild.
Variable Name | Description |
---|---|
AB_JOLOKIA_AUTH_OPENSHIFT |
Switch on client authentication for OpenShift TLS communication. The value of this parameter can be
|
AB_JOLOKIA_CONFIG |
If set, uses this fully qualified file path for the Jolokia JVM agent properties, which are described in the Jolokia reference documentation. If you set your own Jolokia properties config file, the rest of the Jolokia settings in this document are ignored. If not set, Example value: |
AB_JOLOKIA_DISCOVERY_ENABLED |
Enable Jolokia discovery. Defaults to |
AB_JOLOKIA_HOST |
Host address to bind to. Defaults to Example value: |
AB_JOLOKIA_HTTPS |
Switch on secure communication with HTTPS. By default self-signed server certificates are generated if no Example value: |
AB_JOLOKIA_ID |
Agent ID to use. The default value is the Example value: |
AB_JOLOKIA_OFF |
If set to Jolokia is enabled by default. |
AB_JOLOKIA_OPTS |
Additional options to be appended to the agent configuration. They should be given in the format Example value: |
AB_JOLOKIA_PASSWORD |
The password for basic authentication. By default, authentication is switched off. Example value: |
AB_JOLOKIA_PASSWORD_RANDOM |
Determines if a random Set to |
AB_JOLOKIA_PORT |
The port to listen to. Defaults to Example value: |
AB_JOLOKIA_USER |
The name of the user to use for basic authentication. Defaults to Example value: |
CLI_GRACEFUL_SHUTDOWN |
If set to any non-zero length value, the image will prevent shutdown with the Example value: |
CONTAINER_HEAP_PERCENT |
Set the maximum Java heap size, as a percentage of available container memory. Example value: |
CUSTOM_INSTALL_DIRECTORIES |
A list of comma-separated directories used for installation and configuration of artifacts for the image during the S2I process. Example value: |
DEFAULT_JMS_CONNECTION_FACTORY |
This value is used to specify the default JNDI binding for the JMS connection factory, for example Example value: |
ENABLE_ACCESS_LOG |
Enable logging of access messages to the standard output channel. Logging of access messages is implemented using the following methods:
Defaults to |
INITIAL_HEAP_PERCENT |
Set the initial Java heap size, as a percentage of the maximum heap size. Example value: |
JAVA_OPTS_APPEND |
Server startup options. Example value: |
JBOSS_MODULES_SYSTEM_PKGS_APPEND |
A comma-separated list of package names that will be appended to the Example value: |
MQ_SIMPLE_DEFAULT_PHYSICAL_DESTINATION |
For backwards compatibility, set to |
OPENSHIFT_KUBE_PING_LABELS |
Clustering labels selector. Example value: |
OPENSHIFT_KUBE_PING_NAMESPACE |
Clustering project namespace. Example value: |
SCRIPT_DEBUG |
If set to |
Other environment variables not listed above that can influence the product can be found in the JBoss EAP documentation. |
4.3. Application Templates
Variable Name | Description |
---|---|
AUTO_DEPLOY_EXPLODED |
Controls whether exploded deployment content should be automatically deployed. Example value: |
4.4. Exposed Ports
Port Number | Description
---|---
8443 | HTTPS
8778 | Jolokia Monitoring
4.5. Datasources
Datasources are automatically created based on the value of some of the environment variables.
The most important environment variable is DB_SERVICE_PREFIX_MAPPING
, as it defines JNDI mappings for the datasources. The allowed value for this variable is a comma-separated list of POOLNAME-DATABASETYPE=PREFIX
triplets, where:
-
POOLNAME
is used as thepool-name
in the datasource. -
DATABASETYPE
is the database driver to use. -
PREFIX
is the prefix used in the names of environment variables that are used to configure the datasource.
4.5.1. JNDI Mappings for Datasources
For each POOLNAME-DATABASETYPE=PREFIX
triplet defined in the DB_SERVICE_PREFIX_MAPPING
environment variable, the launch script creates a separate datasource when running the image.
The first part (before the equal sign) of the |
The DATABASETYPE
determines the driver for the datasource. Currently, only postgresql
and mysql
are supported.
Do not use any special characters for the |
Database Drivers
Every image has Java drivers for MySQL, PostgreSQL, and MongoDB databases deployed. Datasources are generated only for MySQL and PostgreSQL databases.
No JNDI mappings are created for MongoDB because MongoDB is not a SQL database.
Datasource Configuration Environment Variables
To configure other datasource properties, use the following environment variables.
Be sure to replace the values for |
Variable Name | Description |
---|---|
POOLNAME_DATABASETYPE_SERVICE_HOST |
Defines the database server’s host name or IP address to be used in the datasource’s Example value: |
POOLNAME_DATABASETYPE_SERVICE_PORT |
Defines the database server’s port for the datasource. Example value: |
PREFIX_BACKGROUND_VALIDATION |
When set to |
PREFIX_BACKGROUND_VALIDATION_MILLIS |
Specifies frequency of the validation, in milliseconds, when the |
PREFIX_CONNECTION_CHECKER |
Specifies a connection checker class that is used to validate connections for the particular database in use. Example value: |
PREFIX_DATABASE |
Defines the database name for the datasource. Example value: |
PREFIX_DRIVER |
Defines Java database driver for the datasource. Example value: |
PREFIX_EXCEPTION_SORTER |
Specifies the exception sorter class that is used to properly detect and clean up after fatal database connection exceptions. Example value: |
PREFIX_JNDI |
Defines the JNDI name for the datasource. Defaults to Example value: |
PREFIX_JTA |
Defines Java Transaction API (JTA) option for the non-XA datasource. The XA datasources are already JTA capable by default. Defaults to |
PREFIX_MAX_POOL_SIZE |
Defines the maximum pool size option for the datasource. Example value: |
PREFIX_MIN_POOL_SIZE |
Defines the minimum pool size option for the datasource. Example value: |
PREFIX_NONXA |
Defines the datasource as a non-XA datasource. Defaults to |
PREFIX_PASSWORD |
Defines the password for the datasource. Example value: |
PREFIX_TX_ISOLATION |
Defines the java.sql.Connection transaction isolation level for the datasource. Example value: |
PREFIX_URL |
Defines connection URL for the datasource. Example value: |
PREFIX_USERNAME |
Defines the username for the datasource. Example value: |
When running this image in OpenShift, the POOLNAME_DATABASETYPE_SERVICE_HOST
and POOLNAME_DATABASETYPE_SERVICE_PORT
environment variables are set up automatically from the database service definition in the OpenShift application template, while the others are configured in the template directly as env
entries in container definitions under each pod template.
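Outside of a template, the same variables can also be set directly on a deployment configuration for quick experiments. The following sketch uses the test-postgresql=TEST mapping from the Single Mapping example below; the deployment configuration name and credential values are placeholders.

```
# Configure a datasource purely through environment variables (sketch).
# "dc/eap-app" and the TEST_* values are placeholders.
$ oc env dc/eap-app \
    DB_SERVICE_PREFIX_MAPPING=test-postgresql=TEST \
    TEST_DATABASE=testdb \
    TEST_USERNAME=testuser \
    TEST_PASSWORD=testpassword
```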
Examples
These examples show how the value of the DB_SERVICE_PREFIX_MAPPING
environment
variable influences datasource creation.
Single Mapping
Consider value test-postgresql=TEST
.
This creates a datasource named java:jboss/datasources/test_postgresql.
Additionally, all the required settings like password and username are expected to be provided as environment variables with the TEST_
prefix, for example TEST_USERNAME
and TEST_PASSWORD
.
Multiple Mappings
You can specify multiple database mappings.
Always separate multiple datasource mappings with a comma. |
Consider the following value for the DB_SERVICE_PREFIX_MAPPING
environment variable: cloud-postgresql=CLOUD,test-mysql=TEST_MYSQL
.
This creates the following two datasources:
-
java:jboss/datasources/test_mysql
-
java:jboss/datasources/cloud_postgresql
Then you can use the TEST_MYSQL
prefix for configuring things like the username and password for the MySQL datasource, for example TEST_MYSQL_USERNAME
. And for the PostgreSQL datasource, use the CLOUD_
prefix, for example CLOUD_USERNAME
.
4.6. Clustering
Clustering is achieved through one of two discovery mechanisms: Kubernetes or DNS. This is done by configuring the JGroups protocol stack in standalone-openshift.xml
with either the <openshift.KUBE_PING/>
or <openshift.DNS_PING/>
elements. Out of the box, KUBE_PING
is the preconfigured and supported protocol.
For KUBE_PING
to work, however, the following steps must be taken:
-
The
OPENSHIFT_KUBE_PING_NAMESPACE
environment variable must be set. If not set, the server behaves as a single-node cluster (a "cluster of one"). For example:

OPENSHIFT_KUBE_PING_NAMESPACE=PROJECT_NAME
-
The
OPENSHIFT_KUBE_PING_LABELS
environment variable should be set. This should match the label set at the service level. If not set, pods outside of your application (albeit in your namespace) will try to join. For example:

OPENSHIFT_KUBE_PING_LABELS=app=APP_NAME
-
Authorization must be granted to the service account the pod is running under so that it can access the Kubernetes REST API. This is done on the command line. The following example uses the default service account in the
myproject
namespace:

oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q)
Using the eap-service-account in the project namespace:
oc policy add-role-to-user view system:serviceaccount:$(oc project -q):eap-service-account -n $(oc project -q)
See Getting Started for more information on adding policies to service accounts. |
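As an illustration, both clustering variables can be set on an existing deployment configuration; the deployment configuration name and label value are placeholders in this sketch.

```
# Configure KUBE_PING discovery for an existing deployment configuration (sketch).
# "dc/eap-app" and the app label value are placeholders for your application.
$ oc env dc/eap-app \
    OPENSHIFT_KUBE_PING_NAMESPACE=$(oc project -q) \
    OPENSHIFT_KUBE_PING_LABELS=app=eap-app
```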
4.7. Security Domains
To configure a new Security Domain, the user must define the SECDOMAIN_NAME
environment variable.
This results in the creation of a security domain named after the environment variable. The user may also define the following environment variables to customize the domain:
Variable name | Description |
---|---|
SECDOMAIN_NAME |
Defines an additional security domain. Example value: |
SECDOMAIN_PASSWORD_STACKING |
If defined, the Example value: |
SECDOMAIN_LOGIN_MODULE |
The login module to be used. Defaults to |
SECDOMAIN_USERS_PROPERTIES |
The name of the properties file containing user definitions. Defaults to |
SECDOMAIN_ROLES_PROPERTIES |
The name of the properties file containing role definitions. Defaults to |
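For example, a security domain could be defined by setting these variables on a deployment configuration; the domain name, property file names, and deployment configuration name below are placeholders in this sketch.

```
# Define an additional security domain through environment variables (sketch).
# All names shown here are placeholders.
$ oc env dc/eap-app \
    SECDOMAIN_NAME=myDomain \
    SECDOMAIN_USERS_PROPERTIES=users.properties \
    SECDOMAIN_ROLES_PROPERTIES=roles.properties
```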
4.8. HTTPS Environment Variables
Variable name | Description |
---|---|
HTTPS_NAME |
If defined along with Example value: |
HTTPS_PASSWORD |
If defined along with Example value: |
HTTPS_KEYSTORE |
If defined along with Example value: |
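A sketch of supplying these values on a deployment configuration, assuming the keystore has already been mounted from a secret as described in the Getting Started steps; the keystore name, alias, and password are placeholders.

```
# Point the image at the keystore provided by the mounted secret (sketch).
# Keystore file name, certificate alias (HTTPS_NAME), and password are placeholders.
$ oc env dc/eap-app \
    HTTPS_KEYSTORE=keystore.jks \
    HTTPS_NAME=jboss \
    HTTPS_PASSWORD=mykeystorepass
```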
4.9. Administration Environment Variables
Variable name | Description |
---|---|
ADMIN_USERNAME |
If both this and Example value: |
ADMIN_PASSWORD |
The password for the specified Example value: |
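For example, a management user can be created at startup by setting both variables; the values below are placeholders in this sketch.

```
# Create a JBoss EAP management user from environment variables (sketch).
# Username and password are placeholders.
$ oc env dc/eap-app \
    ADMIN_USERNAME=eapadmin \
    ADMIN_PASSWORD=passw0rd
```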
4.10. S2I
The image includes S2I scripts and Maven.
Maven is currently only supported as a build tool for applications that are intended to be deployed on JBoss EAP-based containers (or related/descendant images) on OpenShift.
Only WAR deployments are supported at this time.
4.10.1. Custom Configuration
It is possible to add custom configuration files for the image. All files put into configuration/
directory will be copied into EAP_HOME/standalone/configuration/
. For example, to override the
default configuration used in the image, just add a custom standalone-openshift.xml
into the configuration/
directory. See example for such a deployment.
Custom Modules
It is possible to add custom modules. All files from the modules/
directory will be copied into EAP_HOME/modules/
. See example for such a deployment.
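As an illustration, a source repository that uses both mechanisms might be laid out as follows; apart from standalone-openshift.xml and module.xml, the directory and file names are placeholders.

```
# Illustrative source repository layout (names other than standalone-openshift.xml
# and module.xml are placeholders):
#
#   configuration/standalone-openshift.xml         -> EAP_HOME/standalone/configuration/
#   modules/org/example/mymodule/main/module.xml   -> EAP_HOME/modules/
#   modules/org/example/mymodule/main/mymodule.jar
#   pom.xml
#   src/main/...
```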
4.10.2. Deployment Artifacts
By default, artifacts from the source target
directory will be deployed. To deploy from different directories set the ARTIFACT_DIR
environment variable in the BuildConfig definition. ARTIFACT_DIR
is a comma-delimited list. For example: ARTIFACT_DIR=app1/target,app2/target,app3/target
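A sketch of setting this on a build configuration; the build configuration name eap-app is a placeholder.

```
# Deploy artifacts from multiple module target directories (sketch).
# "bc/eap-app" is a placeholder build configuration name.
$ oc env bc/eap-app ARTIFACT_DIR=app1/target,app2/target,app3/target
```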
4.10.3. Artifact Repository Mirrors
A repository in Maven holds build artifacts and dependencies of various types, for example, all of the project JARs, library JARs, plug-ins, or any other project specific artifacts. It also specifies locations from where to download artifacts while performing the S2I build. Besides using central repositories, it is a common practice for organizations to deploy a local custom mirror repository.
Benefits of using a mirror are:
-
Availability of a synchronized mirror, which is geographically closer and faster.
-
Ability to have greater control over the repository content.
-
Possibility to share artifacts across different teams (developers, CI), without the need to rely on public servers and repositories.
-
Improved build times.
Often, a repository manager can serve as a local cache for a mirror. Assuming that the repository manager is already deployed and reachable externally at http://10.0.0.1:8080/repository/internal/
, the S2I build can then use this manager by supplying the MAVEN_MIRROR_URL
environment variable to the build configuration of the application as follows:
-
Identify the name of the build configuration to apply
MAVEN_MIRROR_URL
variable against.

```
oc get bc -o name
buildconfig/eap
```
-
Update build configuration of
eap
with aMAVEN_MIRROR_URL
environment variable.

```
oc env bc/eap MAVEN_MIRROR_URL="http://10.0.0.1:8080/repository/internal/"
buildconfig "eap" updated
```
-
Verify the setting.
```
oc env bc/eap --list
# buildconfigs eap
MAVEN_MIRROR_URL=http://10.0.0.1:8080/repository/internal/
```
-
Schedule new build of the application.
During application build, you will notice that Maven dependencies are pulled from the repository manager, instead of the default public repositories. Also, after the build is finished, you will see that the mirror is filled with all the dependencies that were retrieved and used during the build. |
4.10.4. Scripts
run
-
This script uses the
openshift-launch.sh
script that configures and starts JBoss EAP with thestandalone-openshift.xml
configuration.
assemble
-
This script uses Maven to build the source, create a package (WAR), and move it to the
EAP_HOME/standalone/deployments
directory.
4.10.5. Environment Variables
You can influence the way the build is executed by supplying environment variables to the s2i build
command. The environment variables that can be supplied are:
Variable name | Description |
---|---|
ARTIFACT_DIR |
The Example value: |
HTTP_PROXY_HOST |
Host name or IP address of a HTTP proxy for Maven to use. Example value: |
HTTP_PROXY_PORT |
TCP Port of a HTTP proxy for Maven to use. Example value: |
HTTP_PROXY_USERNAME |
If supplied with Example value: |
HTTP_PROXY_PASSWORD |
If supplied with Example value: |
HTTP_PROXY_NONPROXYHOSTS |
If supplied, a configured HTTP proxy will ignore these hosts. Example value: |
MAVEN_ARGS |
Overrides the arguments supplied to Maven during build. Example value: |
MAVEN_ARGS_APPEND |
Appends user arguments supplied to Maven during build. Example value: |
MAVEN_MIRROR_URL |
URL of a Maven Mirror/repository manager to configure. Example value: |
MAVEN_CLEAR_REPO |
Optionally clear the local Maven repository after the build. Example value: |
APP_DATADIR |
If defined, directory in the source from where data files are copied. Example value: |
DATA_DIR |
Directory in the image where data from Example value: |
For more information, see Example Workflow: Using Maven to Build and Run a Java EE 7 Application on JBoss EAP for OpenShift Image, which uses Maven and the S2I scripts included in the JBoss EAP for OpenShift image. |
4.11. SSO
This image contains support for Red Hat JBoss SSO-enabled applications.
See the Red Hat JBoss SSO for OpenShift documentation for more information on how to deploy the Red Hat JBoss SSO for OpenShift image with the JBoss EAP for OpenShift image. |
Variable name | Description |
---|---|
SSO_URL |
URL of the SSO server. |
SSO_REALM |
SSO realm for the deployed applications. |
SSO_PUBLIC_KEY |
Public key of the SSO Realm. This field is optional but if omitted can leave the applications vulnerable to man-in-middle attacks. |
SSO_USERNAME |
SSO User required to access the SSO REST API. Example value: |
SSO_PASSWORD |
Password for the SSO user defined by the Example value: |
SSO_SAML_KEYSTORE |
Keystore location for SAML. Defaults to |
SSO_SAML_KEYSTORE_PASSWORD |
Keystore password for SAML. Defaults to |
SSO_SAML_CERTIFICATE_NAME |
Alias for keys/certificate to use for SAML. Defaults to |
SSO_BEARER_ONLY |
SSO Client Access Type. (Optional) Example value: |
SSO_CLIENT |
Path for SSO redirects back to the application. Defaults to match |
SSO_ENABLE_CORS |
If |
SSO_SECRET |
The SSO Client Secret for Confidential Access. Example value: |
SSO_SECURE_SSL_CONNECTIONS |
If |
4.12. Transaction Recovery
4.12.1. Purpose
When a pod is scaled down it is possible for transaction branches to be in doubt. There is an automated recovery pod that is meant to complete these branches but there are rare scenarios (such as a network split) where such recovery may fail. The goal for the following procedure is to find and manually resolve in doubt branches.
4.12.2. Caveats
This document only describes how to manually recover transactions that were wholly self-contained within a single JVM; that is, the procedure does not describe how to recover JTA transactions that have been propagated to other JVMs.
There are various network partition scenarios whereby OpenShift might decide to start multiple instances of the same pod with the same IP address and same node name where, due to the partition, the old pod is still running. This could result in a situation where during manual recovery you may be connected to a pod with what amounts to a stale view of the object store. If you think you are in this scenario then we advise that all pods running EAP instances are shut down to ensure that none of the resource managers or object stores are in use.
When you enlist a resource in an XA transaction, it is your responsibility to ensure that each such resource type is supported with respect to recovery. For example, it is known that PostgreSQL and MySQL are well behaved with respect to recovery, but for others (such as A-MQ and JDV) you should check the OpenShift release-specific documentation.
Other Caveats:
-
The deployment must use the JDBC object store (please refer to the prerequisites section below for details).
-
JTS transactions - we know recovery will not work reliably in automated scenarios because the network endpoint of the parent is encoded in recovery coordinator IORs so bottom-up recovery cannot work reliably if the parent node recovers with either a new IP address or indeed if the parent is intended to be accessed via a virtualized IP address.
-
XTS transactions - XTS does not work in a cluster scenario for recovery purposes: https://issues.jboss.org/browse/JBTM-2742.
-
XATerminator imported transactions - not clear yet whether this is even feasible to perform in OpenShift. Normally you would have the EIS strongly coupled to an instance of EAP so it would be difficult to consider how a valid OpenShift deployment scenario could be configured (needs analysis from someone more familiar with EIS configuration in OpenShift).
-
JBoss Remoting with propagated transactions is under investigation.
4.12.3. Prerequisite
It is assumed that the OpenShift instance has been configured with a JDBC store and that the store tables are partitioned using a table prefix corresponding to the pod name. This should be automatic whenever an EAP deployment is in use. You can verify that your EAP instance is using the JDBC store by looking at the transactions subsystem configuration in a running pod for this deployment configuration. There should be a file called /opt/eap/standalone/configuration/standalone-openshift.xml that contains an element for the transaction subsystem:
<subsystem xmlns="urn:jboss:domain:transactions:3.0">
and if the JDBC object store is in use there should be an entry similar to:
<jdbc-store datasource-jndi-name="java:jboss/datasources/jdbcstore_postgresql"/>
where the jndi name identifies the datasource used to store transaction logs.
4.12.4. Procedure
Briefly, the procedure (for datasources only) is to use the database vendor tooling to list the Xids for in-doubt branches (for all datasources that were in use by any deployments running on the failed or scaled-down pod). You may need to refer to the vendor documentation to know where to look. Then, for each Xid, determine which pod created the transaction and check whether that pod is still running. If it is running, leave the branch alone. If the pod is not running, assume it has been removed from the cluster and apply the manual resolution procedure described in this document, that is, look in the transaction log storage that was used by the failed pod to see if there is a corresponding transaction log:
-
if there is a log then we manually commit the Xid using the vendor tooling;
-
if there is not a log we assume it is an orphaned branch and rollback the Xid using the vendor tooling.
The rest of this document explains in detail how to carry out each of these steps.
How to resolve in-doubt branches
First find all the resources that the deployment is using. Although these should be defined in the EAP configuration file (standalone-openshift.xml), there may be other ways these were made available to the transaction subsystem within the application server (such as via a file in a deployment or dynamically at runtime). In those cases you are responsible for locating and identifying these resources (although normally runtime resources are expected to show up in the EAP command line interface).
Open a terminal on a pod running an EAP instance in the cluster of the failed pod and read the configuration file or use the EAP CLI. If there is no such pod, scale up to one. This works because every pod in a cluster is identically configured. For example, the file /opt/eap/standalone/configuration/standalone-openshift.xml lists the resources used in the current OpenShift configuration. The JDBC connection URL for application resources is listed in the <connection-url> element of each of the <datasource> entries in the datasources subsystem <subsystem xmlns="urn:jboss:domain:datasources:4.0">. The configuration file may not expose all runtime resources, so we recommend using the EAP CLI to obtain the same information.
To use the CLI, first create a management user using the /opt/eap/bin/add-user.sh command, log in to the CLI using the /opt/eap/bin/jboss-cli.sh command, and then list the datasources configured on the server (these are the ones that may contain in-doubt transaction branches):
```
[standalone@localhost:9990 /] /subsystem=datasources:read-resource
{
    "outcome" => "success",
    "result" => {
        "data-source" => {
            "ExampleDS" => undefined,
            ...
        },
        ...
    }
}
```
Here the name of just one of the datasources is shown, namely ExampleDS. Once you have the list find the connection url for each of them, for example:
```
[standalone@localhost:9990 /] /subsystem=datasources/data-source=ExampleDS:read-attribute(name=connection-url)
{
    "outcome" => "success",
    "result" => "jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE",
    "response-headers" => {"process-state" => "restart-required"}
}
```
Now that you have the connection URL of each datasource used by the application you need to connect to the datasource and list any in-doubt transaction branches. Note that the table name that stores in-doubt branches will be different for each datasource vendor. The application server comes with a default SQL query tool (H2) that you can now use to check each database:
java -cp /opt/eap/modules/system/layers/base/com/h2database/h2/main/h2-1.3.173.jar org.h2.tools.Shell -url "jdbc:postgresql://localhost:5432/postgres" -user sa -password sa -sql "select gid from pg_prepared_xacts;"
or you can use the resource manager's native tooling. For example, for a PostgreSQL datasource (called sampledb) you would use the OpenShift client tools to remotely log in to the pod and query the in-doubt transaction table as follows:
```
$ oc rsh postgresql2-2-c4q0x      # rsh to the named pod
sh-4.2$ psql sampledb
psql (9.5.7)
Type "help" for help.

sampledb=# select gid from pg_prepared_xacts;
131077_AAAAAAAAAAAAAP//rBEAB440GK1aJ72oAAAAGHAtanRhLWNyYXNoLXJlYy0zLXAyY2N3_AAAAAAAAAAAAAP//rBEAB440GK1aJ72oAAAAGgAAAAEAAAAA
```
A-MQ Resources
Recovery of A-MQ resources is covered by a separate procedure
Red Hat JBoss Data Virtualization (JDV) Resources
Recovery of Red Hat JBoss Data Virtualization resources is covered by a separate procedure
How to extract the Global Transaction id and node identifier from each Xid
You then need to convert the Xids found in the previous sections into a format that can be compared to the logs stored in the transaction manager's transaction tables. For example, using bash, if the variable $PG_XID holds the Xid from the select statement above, then the EAP transaction id can be obtained using the following script:
PG_XID="$1" IFS='_' read -ra lines <<< "$PG_XID" [[ "${lines[0]}" = 131077 ]] || exit 0; # this script only works for our own FORMAT ID PG_TID=${lines[1]} a=($(echo "$PG_TID"| base64 -d | xxd -ps |tr -d '\n' | while read -N16 i ; do echo 0x$i ; done)) b=($(echo "$PG_TID"| base64 -d | xxd -ps |tr -d '\n' | while read -N8 i ; do echo 0x$i ; done)) c=("${b[@]:4}") # put the last 3 32-bit hexadecimal numbers into array c # the negative elements of c need special handling since printf below only works with positive # hexadecimal numbers for i in "${!c[@]}"; do arg=${c[$i]} # inspect the MSB to see if arg is negative - if so convert it from a 2’s complement number [[ $(($arg>>31)) = 1 ]] && x=$(echo "obase=16; $(($arg - 0x100000000 ))" | bc) || x=$arg if [[ ${x:0:1} = \- ]] ; then # see if the first character is a minus sign neg[$i]="-"; c[$i]=0x${x:1} # strip the minus sign and make it hex for use with printf below else neg[$i]="" c[$i]=$x fi done EAP_TID=$(printf %x:%x:${neg[0]}%x:${neg[1]}%x:${neg[2]}%x ${a[0]} ${a[1]} ${c[0]} ${c[1]} ${c[2]})
EAP_TID holds the global transaction id of the transaction that created this Xid, and the node identifier of the pod that started the transaction is given by the output of the following bash command (it is known to start from the 29th character of the PostgreSQL global transaction id field):
echo "$PG_TID"| base64 -d | tail -c +29
If this pod is still running (see the next section to find out how to tell) then leave this in-doubt branch alone since the transaction is still in flight.
If this pod is not running then you need to search the relevant transaction log storage for the transaction log. The log storage is in a JDBC table called "<node identifier>JBOSSTSTXTABLE". If there is no such table, leave the branch alone because it is owned by some other transaction manager. The URL for the datasource containing this table is defined in the transaction subsystem description (see below). If there is an entry in the table that matches the global transaction id, then the in-doubt branch needs to be committed using the datasource vendor tooling as described below. If there is no such entry, then the branch is an orphan and can safely be rolled back.
An example of how to commit an in-doubt PostgreSQL branch is as follows:
```
$ oc rsh postgresql2-2-c4q0x
sh-4.2$ psql sampledb
psql (9.5.7)
Type "help" for help.

sampledb=# commit prepared '131077_AAAAAAAAAAAAAP//rBEAB440GK1aJ72oAAAAGHAtanRhLWNyYXNoLXJlYy0zLXAyY2N3_AAAAAAAAAAAAAP//rBEAB440GK1aJ72oAAAAGgAAAAEAAAAA';
```
Repeat this procedure for all datasources and in-doubt branches.
How to obtain the list of node identifiers of all running EAP instances in any cluster that can contact the resource managers referred to in the previous section
Node identifiers should be configured to be the same as the pod name. You can obtain the pod names in use using the oc command line tools to list the running pods:
```
$ oc get pods | grep Running
NAME                         READY     STATUS    RESTARTS   AGE
eap-jta-crash-rec-26-m0jwh   1/1       Running   0          3m
postgresql-1-kmk4b           1/1       Running   2          4d
postgresql2-2-c4q0x          1/1       Running   1          1d
```
For each such pod, look in the pod's log output for the node name. For example, for the first pod:
```
$ oc logs eap-jta-crash-rec-26-m0jwh | grep "jboss.node.name"
jboss.node.name = jta-crash-rec2-26-m0jwh
```
How to find the transaction logs
The transaction logs reside in a JDBC backed object store. The JNDI name of this store is defined in the transaction subsystem definition of the EAP config file. Look in the same config file to find the datasource definition corresponding to that JNDI name from which you can read off the connection URL. You can use this URL to connect to the database and issue a select query on the relevant in-doubt transaction table, or if you know which pod the database is running on and you know the name of the database it might be easier to just rsh into the pod and use the database tooling directly. For example, if the JDBC store is hosted by a PostgreSQL database called sampledb running on pod postgresql-1-kmk4b, then you can find the transaction logs using the following commands:
```
$ oc rsh postgresql-1-kmk4b
sh-4.2$ psql sampledb
psql (9.5.7)
Type "help" for help.

sampledb=# select uidstring from postgresql-1-kmk4bJBOSSTSTXTABLE where TYPENAME='StateManager/BasicAction/TwoPhaseCoordinator/AtomicAction' ;
               uidstring
-------------------------------------
 0:ffffac110007:-4ab8c265:5a259b9b:13
 0:ffffac110007:-4ab8c265:5a259b9b:1d
 0:ffffac110007:-4ab8c265:5a259b9b:17
(3 rows)
```
Cleaning up transaction logs for reconciled in-doubt branches
When all the branches for a given transaction are complete (that is, all potential resource managers have been checked, including A-MQ and JDV) it is safe to delete the transaction log.
DO NOT DELETE THE LOG UNLESS YOU ARE CERTAIN THAT THERE ARE NO MORE IN-DOUBT BRANCHES!
… DELETE FROM ${jboss.node.name}JBOSSTSTXTABLE where uidstring = …
The impact of not deleting the log is that completed transactions which failed after prepare but which have now been resolved will never be removed from the transaction log storage. The consequence of this is that unnecessary storage will be used and that future manual reconciliation will be more cumbersome.
4.13. Included JBoss Modules
The table below lists the JBoss Modules included in the JBoss EAP for OpenShift image.
JBoss Module |
---|
org.jboss.as.clustering.common |
org.jboss.as.clustering.jgroups |
org.jboss.as.ee |
org.jboss.logmanager.ext |
org.jgroups |
org.mongodb |
org.openshift.ping |
org.postgresql |
com.mysql |
net.oauth.core |
Revised on 2017-12-21 15:27:54 GMT