Red Hat JBoss Enterprise Application Platform 7.0
Development Guide
For Use with Red Hat JBoss Enterprise Application Platform 7.0
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity
logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other
countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat Software Collections is not formally related to
or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other countries
and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or
sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This book provides references and examples for Java EE developers using Red Hat JBoss
Enterprise Application Platform 7.0 and its patch releases.
Table of Contents

CHAPTER 1. GET STARTED DEVELOPING APPLICATIONS
    1.1. INTRODUCTION
        1.1.1. About Red Hat JBoss Enterprise Application Platform 7
    1.2. BECOME FAMILIAR WITH JAVA ENTERPRISE EDITION 7
        1.2.1. Overview of EE 7 Profiles
            Java Enterprise Edition 7 Web Profile
            Java Enterprise Edition 7 Full Profile
    1.3. SETTING UP THE DEVELOPMENT ENVIRONMENT
        1.3.1. Download JBoss Developer Studio
        1.3.2. Install JBoss Developer Studio
        1.3.3. Start JBoss Developer Studio
        1.3.4. Add the JBoss EAP Server to JBoss Developer Studio
    1.4. USING THE QUICKSTART EXAMPLES
        1.4.1. About Maven
            1.4.1.1. Using Maven with the Quickstarts
        1.4.2. Download and Run the Quickstart Code Examples
            1.4.2.1. Download the Quickstarts
            1.4.2.2. Run the Quickstarts in JBoss Developer Studio
            1.4.2.3. Run the Quickstarts from the Command Line
        1.4.3. Review the Quickstart Tutorials
            1.4.3.1. Explore the helloworld Quickstart
                Prerequisites
                Examine the Directory Structure
                Examine the Code
            1.4.3.2. Explore the numberguess Quickstart
                Prerequisites
                Examine the Configuration Files
                1.4.3.2.1. Examine the JSF Code
                1.4.3.2.2. Examine the Class Files
    1.5. CONFIGURE THE DEFAULT WELCOME WEB APPLICATION
        Changing the welcome-content File Handler
        Changing the default-web-module
        Disabling the Default Welcome Web Application

CHAPTER 2. USING MAVEN WITH JBOSS EAP
    2.1. LEARN ABOUT MAVEN
        2.1.1. About the Maven Repository
        2.1.2. About the Maven POM File
            Minimum Requirements of a Maven POM File
        2.1.3. About the Maven Settings File
        2.1.4. About Maven Repository Managers
            Commonly used Maven repository managers
    2.2. INSTALL MAVEN AND THE JBOSS EAP MAVEN REPOSITORY
        2.2.1. Download and Install Maven
        2.2.2. Install the JBoss EAP Maven Repository
        2.2.3. Install the JBoss EAP Maven Repository Locally
        2.2.4. Install the JBoss EAP Maven Repository for Use with Apache httpd
    2.3. USE THE MAVEN REPOSITORY
        2.3.1. Configure the JBoss EAP Maven Repository
            Configure the JBoss EAP Maven Repository Using the Maven Settings
            Configure the JBoss EAP Maven Repository Using the Project POM
        2.3.2. Configure Maven for Use with Red Hat JBoss Developer Studio

CHAPTER 3. CLASS LOADING AND MODULES
    3.1. INTRODUCTION
        3.1.1. Overview of Class Loading and Modules
        3.1.2. Modules
            Static Modules
            Dynamic Modules
        3.1.3. Module Dependencies
            Optional Dependencies
            Export a Dependency
            Global Modules
            3.1.3.1. Display Module Dependencies Using the Management CLI
        3.1.4. Class Loading in Deployments
        3.1.5. Class Loading Precedence
        3.1.6. Dynamic Module Naming Conventions
        3.1.7. jboss-deployment-structure.xml
    3.2. ADD AN EXPLICIT MODULE DEPENDENCY TO A DEPLOYMENT
        Prerequisites
        Add a Dependency Configuration to MANIFEST.MF
        Add a Dependency Configuration to the jboss-deployment-structure.xml
        Creating a Jandex Index
    3.3. GENERATE MANIFEST.MF ENTRIES USING MAVEN
        Generate a MANIFEST.MF File Containing Module Dependencies
    3.4. PREVENT A MODULE BEING IMPLICITLY LOADED
    3.5. EXCLUDE A SUBSYSTEM FROM A DEPLOYMENT
    3.6. USE THE CLASS LOADER PROGRAMMATICALLY IN A DEPLOYMENT
        3.6.1. Programmatically Load Classes and Resources in a Deployment
        3.6.2. Programmatically Iterate Resources in a Deployment
    3.7. CLASS LOADING AND SUBDEPLOYMENTS
        3.7.1. Modules and Class Loading in Enterprise Archives
        3.7.2. Subdeployment Class Loader Isolation
        3.7.3. Enable Subdeployment Class Loader Isolation Within an EAR
        3.7.4. Configuring Session Sharing between Subdeployments in Enterprise Archives
            3.7.4.1. Reference of Shared Session Configuration Options
    3.8. DEPLOY TAG LIBRARY DESCRIPTORS (TLDS) IN A CUSTOM MODULE
        Deploy TLDs in a Custom Module
    3.9. REFERENCE
        3.9.1. Implicit Module Dependencies
        3.9.2. Included Modules
        3.9.3. JBoss Deployment Structure Deployment Descriptor Reference

CHAPTER 4. LOGGING
    4.1. ABOUT LOGGING
        4.1.1. Supported Application Logging Frameworks
    4.2. LOGGING WITH THE JBOSS LOGGING FRAMEWORK

CHAPTER 5. REMOTE JNDI LOOKUP
    5.1. REGISTERING OBJECTS TO JNDI
    5.2. CONFIGURING REMOTE JNDI

CHAPTER 6. CLUSTERING IN WEB APPLICATIONS
    6.1. SESSION REPLICATION
        6.1.1. About HTTP Session Replication
        6.1.2. Enable Session Replication in Your Application
            Make your Application Distributable
            Immutable Session Attributes
    6.2. HTTP SESSION PASSIVATION AND ACTIVATION
        6.2.1. About HTTP Session Passivation and Activation
        6.2.2. Configure HTTP Session Passivation in Your Application
    6.3. PUBLIC API FOR CLUSTERING SERVICES
    6.4. HA SINGLETON SERVICE
        HA Singleton ServiceBuilder API
        HA Singleton Service Election Policies
        Create an HA Singleton Service Application

CHAPTER 7. CONTEXTS AND DEPENDENCY INJECTION (CDI)
    7.1. INTRODUCTION TO CDI
        7.1.1. About Contexts and Dependency Injection (CDI)
            Benefits of CDI
        7.1.2. Relationship Between Weld, Seam 2, and JavaServer Faces
    7.2. USE CDI TO DEVELOP AN APPLICATION
        7.2.1. Default Bean Discovery Mode
            Bean Defining Annotations
        7.2.2. Exclude Beans From the Scanning Process
        7.2.3. Use an Injection to Extend an Implementation
    7.3. AMBIGUOUS OR UNSATISFIED DEPENDENCIES
        7.3.1. Qualifiers
            '@Any'
        7.3.2. Use a Qualifier to Resolve an Ambiguous Injection
            Resolve an Ambiguous Injection with a Qualifier
    7.4. MANAGED BEANS
        7.4.1. Types of Classes That are Beans
            @Vetoed
        7.4.2. Use CDI to Inject an Object Into a Bean
            Inject Objects into Other Objects
    7.5. CONTEXTS AND SCOPES
    7.6. NAMED BEANS
        7.6.1. Use Named Beans
            Configure Bean Names Using the @Named Annotation
    7.7. BEAN LIFECYCLE
        Manage Bean Lifecycles
        7.7.1. Use a Producer Method
    7.8. ALTERNATIVE BEANS
        Declaring Selected Alternatives
        7.8.1. Override an Injection with an Alternative
            Override an Injection
    7.9. STEREOTYPES
        7.9.1. Use Stereotypes
            Define and Use Stereotypes
    7.10. OBSERVER METHODS
        7.10.1. Fire and Observe Events
        7.10.2. Transactional Observers
    7.11. INTERCEPTORS
        Enabling Interceptors
        7.11.1. Use Interceptors with CDI
            Use Interceptors with CDI
    7.12. DECORATORS
    7.13. PORTABLE EXTENSIONS
    7.14. BEAN PROXIES

CHAPTER 8. JBOSS EAP MBEAN SERVICES
    8.1. WRITING JBOSS MBEAN SERVICES
        8.1.1. A Standard MBean Example
    8.2. DEPLOYING JBOSS MBEAN SERVICES

CHAPTER 9. CONCURRENCY UTILITIES
    9.1. CONTEXT SERVICE
    9.2. MANAGED THREAD FACTORY
    9.3. MANAGED EXECUTOR SERVICE
    9.4. MANAGED SCHEDULED EXECUTOR SERVICE

CHAPTER 10. UNDERTOW
    10.1. INTRODUCTION TO UNDERTOW HANDLER
        Request Lifecycle
        Ending the Exchange
    10.2. USING EXISTING UNDERTOW HANDLERS WITH A DEPLOYMENT
    10.3. CREATING CUSTOM HANDLERS

CHAPTER 11. JAVA TRANSACTION API (JTA)
    11.1. OVERVIEW
        11.1.1. Overview of Java Transactions API (JTA)
    11.2. TRANSACTION CONCEPTS
        11.2.1. About Transactions
        11.2.2. About ACID Properties for Transactions
        11.2.3. About the Transaction Coordinator or Transaction Manager
        11.2.4. About Transaction Participants
        11.2.5. About Java Transactions API (JTA)
        11.2.6. About Java Transaction Service (JTS)
        11.2.7. About XML Transaction Service
            11.2.7.1. Overview of Protocols Used by XTS
            11.2.7.2. Web Services-Atomic Transaction Process
                11.2.7.2.1. Atomic Transaction Process
            11.2.7.3. Web Services-Business Activity Process
                11.2.7.3.1. WS-BA Process
            11.2.7.4. Transaction Bridging Overview
        11.2.8. About XA Resources and XA Transactions
        11.2.9. About XA Recovery
        11.2.10. Limitations of the XA Recovery Process
        11.2.11. About the 2-Phase Commit Protocol
            Phase 1: Prepare
            Phase 2: Commit
        11.2.12. About Transaction Timeouts
        11.2.13. About Distributed Transactions
        11.2.14. About the ORB Portability API
    11.3. TRANSACTION OPTIMIZATIONS
        11.3.1. Overview of Transaction Optimizations
        11.3.2. About the LRCO Optimization for Single-phase Commit (1PC)
            Single-phase Commit (1PC)
            Last Resource Commit Optimization (LRCO)
            11.3.2.1. Commit Markable Resource
                Summary
                Create Tables in Database

CHAPTER 12. JAVA PERSISTENCE API (JPA)
    12.1. ABOUT JAVA PERSISTENCE API (JPA)
    12.2. ABOUT HIBERNATE CORE
    12.3. HIBERNATE ENTITYMANAGER
    12.4. CREATE A SIMPLE JPA APPLICATION
    12.5. HIBERNATE CONFIGURATION
    12.6. SECOND-LEVEL CACHES
        12.6.1. About Second-Level Caches
        12.6.2. Configure a Second-level Cache for Hibernate
            Configuring a Second-level Cache for Hibernate Using JPA Applications
            Configuring a Second-level Cache for Hibernate Using Hibernate Native Applications
    12.7. HIBERNATE ANNOTATIONS
    12.8. HIBERNATE QUERY LANGUAGE
        12.8.1. About Hibernate Query Language
            Introduction to JPQL
            Introduction to HQL
        12.8.2. About HQL Statements
        12.8.3. About the INSERT Statement
        12.8.4. About the FROM Clause
        12.8.5. About the WITH Clause
        12.8.6. About HQL Ordering

CHAPTER 13. HIBERNATE SEARCH
    13.1. GETTING STARTED WITH HIBERNATE SEARCH
        13.1.1. About Hibernate Search
        13.1.2. Overview
        13.1.3. About the Directory Provider
        13.1.4. About the Worker
        13.1.5. Back End Setup and Operations
            13.1.5.1. Back End
            13.1.5.2. Lucene

CHAPTER 14. BEAN VALIDATION
    14.1. ABOUT BEAN VALIDATION
    14.2. VALIDATION CONSTRAINTS
        14.2.1. About Validation Constraints
        14.2.2. Hibernate Validator Constraints
        14.2.3. Bean Validation Using Custom Constraints
            14.2.3.1. Creating A Constraint Annotation
            14.2.3.2. Implementing A Constraint Validator
    14.3. VALIDATION CONFIGURATION

CHAPTER 15. CREATING WEBSOCKET APPLICATIONS
    Create the WebSocket Application

CHAPTER 16. JAVA AUTHORIZATION CONTRACT FOR CONTAINERS (JACC)
    16.1. ABOUT JAVA AUTHORIZATION CONTRACT FOR CONTAINERS (JACC)
    16.2. CONFIGURE JAVA AUTHORIZATION CONTRACT FOR CONTAINERS (JACC) SECURITY

CHAPTER 17. JAVA AUTHENTICATION SPI FOR CONTAINERS (JASPI)
    17.1. ABOUT JAVA AUTHENTICATION SPI FOR CONTAINERS (JASPI) SECURITY
    17.2. CONFIGURE JAVA AUTHENTICATION SPI FOR CONTAINERS (JASPI) SECURITY

CHAPTER 18. JAVA BATCH APPLICATION DEVELOPMENT
    18.1. REQUIRED BATCH DEPENDENCIES
    18.2. JOB SPECIFICATION LANGUAGE (JSL) INHERITANCE
        Example: Inherit Step and Flow Within the Same Job XML File
        Example: Inherit a Step from a Different Job XML File
    18.3. BATCH PROPERTY INJECTIONS
        Example: Injecting a Number into a Batchlet Class as Various Types
        Example: Injecting a Number Sequence into a Batchlet Class as Various Arrays
        Example: Injecting a Class Property into a Batchlet Class
        Example: Assigning a Default Value to a Field Annotated for Property Injection

APPENDIX A. REFERENCE MATERIAL
    A.1. PROVIDED UNDERTOW HANDLERS
        AccessControlListHandler
        AccessLogHandler
        AllowedMethodsHandler
        BlockingHandler
        ByteRangeHandler
        CanonicalPathHandler
        DisableCacheHandler
        DisallowedMethodsHandler
        EncodingHandler
        FileErrorPageHandler
        HttpTraceHandler
        IPAddressAccessControlHandler
        JDBCLogHandler
        LearningPushHandler
        LocalNameResolvingHandler
        PathSeparatorHandler
        PeerNameResolvingHandler
        ProxyPeerAddressHandler
        RedirectHandler
        RequestBufferingHandler
        RequestDumpingHandler
        RequestLimitingHandler
        ResourceHandler
        ResponseRateLimitingHandler
        SetHeaderHandler
        SSLHeaderHandler
        StuckThreadDetectionHandler
        URLDecodingHandler
    A.2. HIBERNATE PROPERTIES
CHAPTER 1. GET STARTED DEVELOPING APPLICATIONS
1.1. INTRODUCTION
JBoss EAP includes a modular structure that allows services to be enabled only when required, improving startup speed.
The management console and management command-line interface (CLI) make editing XML
configuration files unnecessary and add the ability to script and automate tasks.
JBoss EAP provides two operating modes for JBoss EAP instances: standalone server or managed
domain. The standalone server operating mode represents running JBoss EAP as a single server
instance. The managed domain operating mode allows for the management of multiple JBoss EAP
instances from a single control point.
In addition, JBoss EAP includes APIs and development frameworks for quickly developing secure and
scalable Java EE applications.
1.2. BECOME FAMILIAR WITH JAVA ENTERPRISE EDITION 7

1.2.1. Overview of EE 7 Profiles

JBoss EAP is a certified implementation of the Java Enterprise Edition 7 Full Profile and Web Profile specifications.

Java Enterprise Edition 7 Web Profile

The EE 7 Web Profile includes a selected subset of APIs, which are designed to be useful to web developers. Among the APIs in the Web Profile:

JSP 2.3

JSTL 1.2
NOTE

A known security risk in JBoss EAP exists where the Java Standard Tag Library (JSTL) allows the processing of external entity references in untrusted XML documents, which could access resources on the host system and, potentially, allow arbitrary code execution.

To avoid this, the JBoss EAP server has to be run with the system property org.apache.taglibs.standard.xml.accessExternalEntity correctly set, usually with an empty string as value. This can be done in two ways:

Setting the org.apache.taglibs.standard.xml.accessExternalEntity system property and restarting the server.

Passing -Dorg.apache.taglibs.standard.xml.accessExternalEntity="" as an argument to the standalone.sh or domain.sh scripts.
Java Enterprise Edition 7 Full Profile

The EE 7 Full Profile includes all APIs and specifications included in the EE 7 specification. In other words, the Full Profile includes several more APIs than the Web Profile:
Batch 1.0
JSON-P 1.0
Concurrency 1.0
WebSocket 1.1
JMS 2.0
JPA 2.1
JCA 1.7
JAX-RS 2.0
JAX-WS 2.2
Servlet 3.1
JSF 2.2
JSP 2.3
EL 3.0
CDI 1.1
CDI Extensions
JTA 1.2
Interceptors 1.2
EJB 3.2
1.3. SETTING UP THE DEVELOPMENT ENVIRONMENT

1.3.1. Download JBoss Developer Studio

1. Log in to the Red Hat Customer Portal.

2. Click Downloads.
3. In the Product Downloads list, click Red Hat JBoss Developer Studio.
5. Find the Red Hat JBoss Developer Studio 9.x.x Stand-alone Installer entry in the table and
click Download.
NOTE
Alternatively, you may be able to double-click the JAR file to launch the
installation program.
1.3.2. Install JBoss Developer Studio

4. Select I accept the terms of this license agreement and click Next.
NOTE
If the installation path folder does not exist, a prompt will appear. Click OK to
create the folder.
6. Choose a JVM, or leave the default JVM selected, and click Next.
10. Configure the desktop shortcuts for JBoss Developer Studio, and click Next.
1.3.3. Start JBoss Developer Studio

1. Open a terminal and navigate to the JBoss Developer Studio installation directory.
2. Start JBoss Developer Studio by running the jbdevstudio script:

$ ./jbdevstudio
1.3.4. Add the JBoss EAP Server to JBoss Developer Studio

NOTE
If the Servers tab is not shown, add it to the panel by selecting Window → Show
View → Servers.
2. Click on the No servers are available. Click this link to create a new server link.
3. Expand Red Hat JBoss Middleware and choose JBoss Enterprise Application Platform 7.0.
Enter a server name, for example, JBoss EAP 7.0, then click Next.
4. Create a server adapter to manage starting and stopping the server. Keep the defaults and click
Next.
5. Enter a name, for example JBoss EAP 7.0 Runtime. Click Browse next to Home Directory
and navigate to your JBoss EAP installation directory. Then click Next.
NOTE
Some quickstarts require that you run the server with a different profile or
additional arguments. For example, to deploy a quickstart that requires the full
profile, you must define a new server and specify standalone-full.xml in the
Configuration file field. Be sure to give the new server a descriptive name.
6. Configure existing projects for the new server. Because you do not have any projects at this
point, click Finish.
The JBoss EAP 7.0 server is now listed in the Servers tab.
1.4. USING THE QUICKSTART EXAMPLES

1.4.1. About Maven

Apache Maven is a distributed build automation tool used in Java application development to create,
manage, and build software projects. Maven uses standard configuration files called Project Object
Model (POM) files to define projects and manage the build process. POMs describe the module and
component dependencies, build order, and targets for the resulting project packaging and output using
an XML file. This ensures that the project is built in a correct and uniform manner.
Maven achieves this by using a repository. A Maven repository stores Java libraries, plug-ins, and other
build artifacts. The default public repository is the Maven 2 Central Repository, but repositories can be
private and internal within a company with a goal to share common artifacts among development teams.
Repositories are also available from third-parties. For more information, see the Apache Maven project
and the Introduction to Repositories guide.
JBoss EAP includes a Maven repository that contains many of the requirements that Java EE developers
typically use to build applications on JBoss EAP.
1.4.1.1. Using Maven with the Quickstarts

The artifacts and dependencies needed to build and deploy applications to JBoss EAP 7 are hosted on a
public repository. Starting with the JBoss EAP 7 quickstarts, it is no longer necessary to configure your
Maven settings.xml file to use these repositories when building the quickstarts. The Maven
repositories are now configured in the quickstart project POM files. This method of configuration is
provided to make it easier to get started with the quickstarts; however, it is generally not recommended for production projects because it can slow down your build.
Red Hat JBoss Developer Studio includes Maven, so there is no need to download and install it
separately. It is recommended to use JBoss Developer Studio version 9.1 or later.
If you plan to use the Maven command line to build and deploy your applications, then you must first
download Maven from the Apache Maven project and install it using the instructions provided in the
Maven documentation.
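Once Maven is installed, you can verify that it is available from a terminal; the exact output varies with the Maven version and platform:

$ mvn -version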
1.4.2. Download and Run the Quickstart Code Examples

1.4.2.1. Download the Quickstarts

JBoss EAP comes with a comprehensive set of quickstart code examples designed to help users begin
writing applications using various Java EE 7 technologies. The quickstarts can be downloaded from the
Red Hat Customer Portal.
1. Log in to the Red Hat Customer Portal.

2. Click Downloads.
3. In the Product Downloads list, click Red Hat JBoss Enterprise Application Platform.
5. Find the Red Hat JBoss Enterprise Application Platform 7.0.0 Quickstarts entry in the table
and click Download.
1.4.2.2. Run the Quickstarts in JBoss Developer Studio

Once the quickstarts have been downloaded, they can be imported into JBoss Developer Studio and
deployed to JBoss EAP.
IMPORTANT
If your quickstart project folder is located within the IDE workspace when you import it into
JBoss Developer Studio, the IDE generates an invalid project name and WAR archive
name. Be sure your quickstart project folder is located outside the IDE workspace before
you begin.
4. Browse to the desired quickstart’s directory (for example the helloworld quickstart), and click
OK. The Projects list box is populated with the pom.xml file of the selected quickstart project.
5. Click Finish.
1. If you have not yet defined a server, add the JBoss EAP server to JBoss Developer Studio.
2. Right-click the jboss-helloworld project in the Project Explorer tab and select Run As → Run
on Server.
3. Select JBoss EAP 7.0 from the server list and click Next.
4. The jboss-helloworld quickstart is already listed to be configured on the server. Click Finish to
deploy the quickstart.
In the Servers tab, the JBoss EAP 7.0 server status changes to Started.
The Console tab shows messages detailing the JBoss EAP server start and the
helloworld quickstart deployment.
Some quickstarts, such as bean-validation, do not provide a user interface and are instead verified by running their tests:

2. In the Servers tab, right-click on the server and choose Start to start the JBoss EAP server. If
you do not see a Servers tab or have not yet defined a server, add the JBoss EAP server to Red
Hat JBoss Developer Studio.
3. Right-click on the jboss-bean-validation project in the Project Explorer tab and select
Run As → Maven Build.
4. Enter the following in the Goals input field and then click Run.
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.jboss.as.quickstarts.bean_validation.test.MemberValidationTest
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.189 sec

Results :

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
1.4.2.3. Run the Quickstarts from the Command Line

You can easily build and deploy the quickstarts from the command line using Maven. If you do not yet
have Maven installed, see the Apache Maven project to download and install it.
A README.md file is provided at the root directory of the quickstarts that contains general information
about system requirements, configuring Maven, adding users, and running the quickstarts.
Each quickstart also contains its own README.md file that provides the specific instructions and Maven
commands to run that quickstart.
1. Review the README.md file in the root directory of the helloworld quickstart.
3. Start the JBoss EAP server:

$ EAP_HOME/bin/standalone.sh
4. Build and deploy the quickstart using the Maven command provided in the quickstart’s
README.md file.
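For the helloworld quickstart this is typically a single Maven command run from the quickstart's root directory. The goals shown below are the ones the EAP 7 quickstart READMEs generally document; check the quickstart's README.md for the exact invocation:

$ mvn clean install wildfly:deploy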
1.4.3. Review the Quickstart Tutorials

1.4.3.1. Explore the helloworld Quickstart

The helloworld quickstart shows you how to deploy a simple servlet to JBoss EAP. The business logic
is encapsulated in a service, which is provided as a Contexts and Dependency Injection (CDI) bean and
injected into the Servlet. This quickstart is a starting point to be sure you have configured and started
your server properly.
Detailed instructions to build and deploy this quickstart using the command line can be found in the
README.html file at the root of the helloworld quickstart directory. This topic shows you how to use
Red Hat JBoss Developer Studio to run the quickstart and assumes you have installed Red Hat JBoss
Developer Studio, configured Maven, and imported and successfully run the helloworld quickstart.
Prerequisites
Verify that the helloworld quickstart was successfully deployed to JBoss EAP by opening a
web browser and accessing the application at http://localhost:8080/jboss-helloworld
Examine the Directory Structure

The src/main/webapp/ directory contains the files for the quickstart. All the configuration files for this example are located in the WEB-INF/ directory within src/main/webapp/, including the beans.xml file. The src/main/webapp/ directory also includes an index.html file, which uses a simple meta refresh to redirect the user's browser to the Servlet, which is located at http://localhost:8080/jboss-helloworld/HelloWorld. The quickstart does not require a web.xml file.
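As a sketch (not the verbatim quickstart source), a meta refresh redirect of this kind looks like the following:

<html>
    <head>
        <!-- Send the browser straight to the HelloWorld servlet -->
        <meta http-equiv="Refresh" content="0; URL=HelloWorld/">
    </head>
</html>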
Examine the Code

The following is an excerpt from the HelloWorldServlet class. The line numbers match those seen when viewing the file in JBoss Developer Studio.

42 @SuppressWarnings("serial")
43 @WebServlet("/HelloWorld")
44 public class HelloWorldServlet extends HttpServlet {
45
46     static String PAGE_HEADER = "<html><head><title>helloworld</title></head><body>";
47
48     static String PAGE_FOOTER = "</body></html>";
49
50     @Inject
51     HelloService helloService;
52
53     @Override
54     protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
55         resp.setContentType("text/html");
56         PrintWriter writer = resp.getWriter();
57         writer.println(PAGE_HEADER);
58         writer.println("<h1>" + helloService.createHelloMessage("World") + "</h1>");
59         writer.println(PAGE_FOOTER);
60         writer.close();
61     }
62
63 }
Line      Note

46-48     Every web page needs correctly formed HTML. This quickstart uses static Strings to write the minimum header and footer output.

50-51     These lines inject the HelloService CDI bean which generates the actual message. As long as we don't alter the API of HelloService, this approach allows us to alter the implementation of HelloService at a later date without changing the view layer.

58        This line calls into the service to generate the message "Hello World", and write it out to the HTTP request.
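For context, HelloService is a plain CDI bean with no special annotations. A minimal sketch consistent with the servlet code above (the actual quickstart source may differ slightly):

public class HelloService {

    String createHelloMessage(String name) {
        // Build the greeting that the servlet writes into the response
        return "Hello " + name + "!";
    }
}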
1.4.3.2. Explore the numberguess Quickstart

The numberguess quickstart shows you how to create and deploy a simple non-persistent application to JBoss EAP. Information is displayed using a JSF view and business logic is encapsulated in two CDI beans. In the numberguess quickstart, you have ten attempts to guess a number between 1 and 100. After each attempt, you're told whether your guess was too high or too low.
The code for the numberguess quickstart can be found in the QUICKSTART_HOME/numberguess directory, where QUICKSTART_HOME is the directory where you downloaded and unzipped the JBoss EAP quickstarts. The numberguess quickstart consists of a number of beans, configuration files, and Facelets (JSF) views, and is packaged as a WAR module.
Detailed instructions to build and deploy this quickstart using the command line can be found in the
README.html file at the root of the numberguess quickstart directory. The following examples use Red
Hat JBoss Developer Studio to run the quickstart.
Prerequisites
Follow the instructions to run the quickstarts in Red Hat JBoss Developer Studio, replacing
helloworld with the numberguess quickstart in the instructions.
Verify the numberguess quickstart was deployed successfully to JBoss EAP by opening a web
browser and accessing the application at this URL: http://localhost:8080/jboss-numberguess
Examine the Configuration Files

The faces-config.xml file, located in the src/main/webapp/WEB-INF/ directory, activates JSF:

<faces-config version="2.2"
xmlns="http://xmlns.jcp.org/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://xmlns.jcp.org/xml/ns/javaee
http://xmlns.jcp.org/xml/ns/javaee/web-facesconfig_2_2.xsd">
</faces-config>
The beans.xml file, also located in the WEB-INF/ directory, enables CDI bean discovery:

<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://xmlns.jcp.org/xml/ns/javaee
http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
bean-discovery-mode="all">
</beans>
1.4.3.2.1. Examine the JSF Code
JSF uses the .xhtml file extension for source files, but delivers the rendered views with the .jsf
extension. The home.xhtml file is located in the src/main/webapp/ directory.
19 <html xmlns="http://www.w3.org/1999/xhtml"
20     xmlns:ui="http://java.sun.com/jsf/facelets"
21     xmlns:h="http://java.sun.com/jsf/html"
22     xmlns:f="http://java.sun.com/jsf/core">
23
24     <head>
25         <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
26         <title>Numberguess</title>
27     </head>
28
29     <body>
30         <div id="content">
31             <h1>Guess a number...</h1>
32             <h:form id="numberGuess">
33
34                 <!-- Feedback for the user on their guess -->
35                 <div style="color: red">
36                     <h:messages id="messages" globalOnly="false" />
37                     <h:outputText id="Higher" value="Higher!"
38                         rendered="#{game.number gt game.guess and game.guess ne 0}" />
39                     <h:outputText id="Lower" value="Lower!"
40                         rendered="#{game.number lt game.guess and game.guess ne 0}" />
41                 </div>
42
43                 <!-- Instructions for the user -->
44                 <div>
45                     I'm thinking of a number between <span
46                     id="numberGuess:smallest">#{game.smallest}</span> and <span
47                     id="numberGuess:biggest">#{game.biggest}</span>. You have
48                     #{game.remainingGuesses} guesses remaining.
49                 </div>
50
51                 <!-- Input box for the users guess, plus a button to submit, and reset -->
52                 <!-- These are bound using EL to our CDI beans -->
53                 <div>
54                     Your guess:
55                     <h:inputText id="inputGuess" value="#{game.guess}"
56                         required="true" size="3"
57                         disabled="#{game.number eq game.guess}"
58                         validator="#{game.validateNumberRange}" />
59                     <h:commandButton id="guessButton" value="Guess"
60                         action="#{game.check}"
61                         disabled="#{game.number eq game.guess}" />
62                 </div>
63                 <div>
64                     <h:commandButton id="restartButton" value="Reset"
65                         action="#{game.reset}" immediate="true" />
66                 </div>
67             </h:form>
68
69         </div>
70
71         <br style="clear: both" />
72
73     </body>
74 </html>
The following line numbers correspond to those seen when viewing the file in JBoss Developer Studio.
Line      Note

36-40     These are the messages which can be sent to the user: "Higher!" and "Lower!"

45-48     As the user guesses, the range of numbers they can guess gets smaller. This sentence changes to make sure they know the number range of a valid guess.

55-58     This input field is bound to a bean property using a value expression.

58        A validator binding is used to make sure the user does not accidentally input a number outside of the range in which they can guess. If the validator was not here, the user might use up a guess on an out of bounds number.

59-61     There must be a way for the user to send their guess to the server. Here we bind to an action method on the bean.
1.4.3.2.2. Examine the Class Files

The MaxNumber qualifier annotation:

@Retention(RUNTIME)
@Documented
@Qualifier
public @interface MaxNumber {
}
@SuppressWarnings("serial")
@ApplicationScoped
public class Generator implements Serializable {
java.util.Random getRandom() {
return random;
}
@Produces
@Random
int next() {
// a number between 1 and 100
return getRandom().nextInt(maxNumber - 1) + 1;
}
@Produces
@MaxNumber
int getMaxNumber() {
return maxNumber;
}
}
Notice the @Named annotation in the class. This annotation is only required when you want to
make the bean accessible to a JSF view by using Expression Language (EL), in this case #
{game}.
@SuppressWarnings("serial")
@Named
@SessionScoped
public class Game implements Serializable {
/**
34
CHAPTER 1. GET STARTED DEVELOPING APPLICATIONS
/**
* The users latest guess
*/
private int guess;
/**
* The smallest number guessed so far (so we can track the valid
guess range).
*/
private int smallest;
/**
* The largest number guessed so far
*/
private int biggest;
/**
* The number of guesses remaining
*/
private int remainingGuesses;
/**
* The maximum number we should ask them to guess
*/
@Inject
@MaxNumber
private int maxNumber;
/**
* The random number to guess
*/
@Inject
@Random
Instance<Integer> randomNumber;
public Game() {
}
    /**
     * Check whether the current guess is correct, and update the
     * biggest/smallest guesses as needed. Give feedback to the user
     * if they are correct.
     */
    public void check() {
        if (guess > number) {
            biggest = guess - 1;
        } else if (guess < number) {
            smallest = guess + 1;
        } else if (guess == number) {
            FacesContext.getCurrentInstance().addMessage(null, new FacesMessage("Correct!"));
        }
        remainingGuesses--;
    }

    /**
     * Reset the game, by putting all values back to their defaults,
     * and getting a new random number. We also call this method
     * when the user starts playing for the first time using
     * {@linkplain PostConstruct @PostConstruct} to set the initial
     * values.
     */
    @PostConstruct
    public void reset() {
        this.smallest = 0;
        this.guess = 0;
        this.remainingGuesses = 10;
        this.biggest = maxNumber;
        this.number = randomNumber.get();
    }

    /**
     * A JSF validation method which checks whether the guess is
     * valid. It might not be valid because there are no guesses left,
     * or because the guess is not in range.
     */
    public void validateNumberRange(FacesContext context, UIComponent toValidate, Object value) {
        if (remainingGuesses <= 0) {
            FacesMessage message = new FacesMessage("No guesses left!");
            context.addMessage(toValidate.getClientId(context), message);
            ((UIInput) toValidate).setValid(false);
            return;
        }
        int input = (Integer) value;
        // Reject guesses outside the current valid range (range check assumed
        // from the quickstart)
        if (input < smallest || input > biggest) {
            ((UIInput) toValidate).setValid(false);
            FacesMessage message = new FacesMessage("Invalid guess");
            context.addMessage(toValidate.getClientId(context), message);
        }
    }
}
1.5. CONFIGURE THE DEFAULT WELCOME WEB APPLICATION

JBoss EAP includes a default Welcome application, which displays at the root context on port 8080 by default. This default Welcome application can be replaced with your own web application. This can be configured in one of two ways:

Changing the welcome-content file handler

Changing the default-web-module
Changing the welcome-content File Handler

Modify the existing welcome-content file handler's path to point to the new deployment:

/subsystem=undertow/configuration=handler/file=welcome-content:write-attribute(name=path,value="/path/to/content")
NOTE
Alternatively, you could create a different file handler to be used by the server’s root.
/subsystem=undertow/configuration=handler/file=NEW_FILE_HANDLER:add(path="/path/to/content")

/subsystem=undertow/server=default-server/host=default-host/location=\/:write-attribute(name=handler,value=NEW_FILE_HANDLER)
reload
Changing the default-web-module

Map a deployed web application to the server's root:

/subsystem=undertow/server=default-server/host=default-host:write-attribute(name=default-web-module,value=hello.war)

reload
Disabling the Default Welcome Web Application

Disable the default Welcome application by removing the location entry for the default host:

/subsystem=undertow/server=default-server/host=default-host/location=\/:remove

reload
CHAPTER 2. USING MAVEN WITH JBOSS EAP
2.1. LEARN ABOUT MAVEN

2.1.1. About the Maven Repository

A Maven repository stores Java libraries, plug-ins, and other
build artifacts. The default public repository is the Maven 2 Central Repository, but repositories can be
private and internal within a company with a goal to share common artifacts among development teams.
Repositories are also available from third-parties. JBoss EAP includes a Maven repository that contains
many of the requirements that Java EE developers typically use to build applications on JBoss EAP. To
configure your project to use this repository, see Configure the JBoss EAP Maven Repository.
For more information about Maven repositories, see Apache Maven Project - Introduction to
Repositories.
2.1.2. About the Maven POM File

Minimum Requirements of a Maven POM File

The minimum requirements of a POM file are as follows:

project root

modelVersion

groupId - the ID of the project's group

artifactId - the ID of the artifact (project)

version - the version of the artifact under the specified group

For more information about POM files, see the Apache Maven Project POM Reference.
Example: Minimum Requirements of a Maven POM File

<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.jboss.app</groupId>
    <artifactId>my-app</artifactId>
    <version>1</version>
</project>
2.1.3. About the Maven Settings File

The Maven settings.xml file can be found in two locations:

In the Maven installation: The settings file can be found in the $M2_HOME/conf/ directory.
These settings are referred to as global settings. The default Maven settings file is a template
that can be copied and used as a starting point for the user settings file.
In the user’s installation: The settings file can be found in the ${user.home}/.m2/
directory. If both the Maven and user settings.xml files exist, the contents are merged.
Where there are overlaps, the user’s settings.xml file takes precedence.
Example: Maven Settings File (abbreviated)

<settings>
    <profiles>
        <profile>
            <id>jboss-eap-maven-repository</id>
            <repositories>
                ...
            </repositories>
            <pluginRepositories>
                <pluginRepository>
                    ...
                    <snapshots>
                        <enabled>false</enabled>
                    </snapshots>
                </pluginRepository>
            </pluginRepositories>
        </profile>
    </profiles>
    <activeProfiles>
        <!-- Optionally, make the repository active by default -->
        <activeProfile>jboss-eap-maven-repository</activeProfile>
    </activeProfiles>
</settings>
2.1.4. About Maven Repository Managers

A repository manager is a tool that allows you to manage your Maven repositories. Repository managers are useful for several reasons:

They provide the ability to configure proxies between your organization and remote Maven
repositories. This provides a number of benefits, including faster and more efficient deployments
and a better level of control over what is downloaded by Maven.
They provide deployment destinations for your own generated artifacts, allowing collaboration
between different development teams across an organization.
For more information about Maven repository managers, see Best Practice - Using a Repository
Manager.
Commonly used Maven repository managers

Sonatype Nexus
See Sonatype Nexus documentation for more information about Nexus.
Artifactory
See JFrog Artifactory documentation for more information about Artifactory.
Apache Archiva
See Apache Archiva: The Build Artifact Repository Manager for more information about Apache
Archiva.
2.2. INSTALL MAVEN AND THE JBOSS EAP MAVEN REPOSITORY

2.2.1. Download and Install Maven
1. Go to Apache Maven Project - Download Maven and download the latest distribution for your
operating system.
2. See the Maven documentation for information on how to download and install Apache Maven for
your operating system.
2.2.2. Install the JBoss EAP Maven Repository

There are three ways to install the JBoss EAP Maven repository:

You can install the JBoss EAP Maven repository on your local file system. For detailed
instructions, see Install the JBoss EAP Maven Repository Locally.
You can install the JBoss EAP Maven repository on the Apache Web Server. For more
information, see Install the JBoss EAP Maven Repository for Use with Apache httpd.
You can install the JBoss EAP Maven repository using the Nexus Maven Repository Manager.
For more information, see Repository Management Using Nexus Maven Repository Manager.
NOTE
You can use the JBoss EAP Maven repository available online, or download and install it
locally using any one of the three listed methods.
2.2.3. Install the JBoss EAP Maven Repository Locally

Follow these steps to download and install the JBoss EAP Maven repository to the local file system.

1. Open the JBoss EAP product download page on the Red Hat Customer Portal.
2. Find Red Hat JBoss Enterprise Application Platform 7.0 Maven Repository in the list.
3. Click the Download button to download a .zip file containing the repository.
4. Unzip the file on the local file system into a directory of your choosing.
This creates a new jboss-eap-7.0.0.GA-maven-repository/ directory, which contains
the Maven repository in a subdirectory named maven-repository/.
IMPORTANT
If you want to continue to use an older local repository, you must configure it separately in
the Maven settings.xml configuration file. Each local repository must be configured
within its own <repository> tag.
2.2.4. Install the JBoss EAP Maven Repository for Use with Apache httpd
This example will cover the steps to download the JBoss EAP Maven Repository for use with Apache
httpd. This option is good for multi-user and cross-team development environments because any
developer that can access the web server can also access the Maven repository.
NOTE
You must first configure Apache httpd. See Apache HTTP Server Project documentation
for instructions.
1. Open the JBoss EAP product download page on the Red Hat Customer Portal.

2. Find Red Hat JBoss Enterprise Application Platform 7.0 Maven Repository in the list.
3. Click the Download button to download a .zip file containing the repository.
4. Unzip the files in a directory that is web accessible on the Apache server.
5. Configure Apache to allow read access and directory browsing in the created directory.
This configuration allows a multi-user environment to access the Maven repository on Apache
httpd.
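As a sketch, read access and directory browsing can be granted with an Apache httpd 2.4 directive block such as the following; the directory path is illustrative and must match where you unzipped the repository:

<Directory "/var/www/html/jboss-eap-7.0.0.GA-maven-repository">
    Options Indexes
    Require all granted
</Directory>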
2.3. USE THE MAVEN REPOSITORY

2.3.1. Configure the JBoss EAP Maven Repository

You can configure the repositories in the Maven global or user settings.
Configure the JBoss EAP Maven Repository Using the Maven Settings
This is the recommended approach. Maven settings used with a repository manager or repository on a
shared server provide better control and manageability of projects. Settings also provide the ability to
use an alternative mirror to redirect all lookup requests for a specific repository to your repository
manager without changing the project files. For more information about mirrors, see
http://maven.apache.org/guides/mini/guide-mirror-settings.html.
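For example, a hypothetical mirror entry in settings.xml that redirects lookups for a repository with the ID jboss-enterprise-maven-repository to an internal repository manager (the URL is illustrative):

<mirrors>
    <mirror>
        <id>jboss-eap-mirror</id>
        <mirrorOf>jboss-enterprise-maven-repository</mirrorOf>
        <url>http://repository-manager.example.com/jboss-eap/</url>
    </mirror>
</mirrors>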
This method of configuration applies across all Maven projects, as long as the project POM file does not
contain repository configuration.
This section describes how to configure the Maven settings. You can configure the Maven install global
settings or the user’s install settings.
1. Locate the Maven settings.xml file for your operating system. It is usually located in the
${user.home}/.m2/ directory.
2. If you do not find a settings.xml file, copy the settings.xml file from the $M2_HOME/conf/ directory into the ${user.home}/.m2/ directory.
3. Copy the following XML into the <profiles> element of the settings.xml file. Determine
the URL of the JBoss EAP repository and replace JBOSS_EAP_REPOSITORY_URL with it.
The following is an example configuration that accesses the online JBoss EAP Maven
repository.
<profile>
    <id>jboss-enterprise-maven-repository</id>
    <repositories>
        <repository>
            <id>jboss-enterprise-maven-repository</id>
            <url>https://maven.repository.redhat.com/ga/</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>
    </repositories>
    <pluginRepositories>
        <pluginRepository>
            <id>jboss-enterprise-maven-repository</id>
            <url>https://maven.repository.redhat.com/ga/</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </pluginRepository>
    </pluginRepositories>
</profile>
4. Copy the following XML into the <activeProfiles> element of the settings.xml file.
<activeProfile>jboss-enterprise-maven-repository</activeProfile>
5. If you modify the settings.xml file while Red Hat JBoss Developer Studio is running, you
must refresh the user settings.
c. Click the Update Settings button to refresh the Maven user settings in Red Hat JBoss
Developer Studio.
IMPORTANT

If your Maven repository contains outdated artifacts, you may encounter one of the following Maven error messages when you build or deploy your project:

Missing artifact ARTIFACT_NAME

[ERROR] Failed to execute goal on project PROJECT_NAME; Could not resolve dependencies for PROJECT_NAME

To resolve the issue, delete the cached version of your local repository to force a download of the latest Maven artifacts. The cached repository is located here: ${user.home}/.m2/repository/
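On Linux or macOS, for example, the cache can be deleted with a command along these lines; note that this removes all locally cached artifacts, so the next build downloads everything again:

$ rm -rf ~/.m2/repository/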
Configure the JBoss EAP Maven Repository Using the Project POM
WARNING
You should avoid this method of configuration as it overrides the global and user
Maven settings for the configured project.
You must plan carefully if you decide to configure repositories using the project POM file. Transitively included POMs are an issue with this type of configuration since Maven has to query the external
repositories for missing artifacts and this slows the build process. It can also cause you to lose control
over where your artifacts are coming from.
NOTE
The URL of the repository will depend on where the repository is located: on the file
system, or web server. For information on how to install the repository, see: Install the
JBoss EAP Maven Repository. The following are examples for each of the installation
options:
File System
file:///path/to/repo/jboss-eap-maven-repository
Apache Web Server
http://intranet.acme.com/jboss-eap-maven-repository/
Nexus Repository Manager
https://intranet.acme.com/nexus/content/repositories/jboss-eap-maven-repository
<repositories>
<repository>
<id>jboss-eap-repository-group</id>
<name>JBoss EAP Maven Repository</name>
<url>JBOSS_EAP_REPOSITORY_URL</url>
<layout>default</layout>
<releases>
<enabled>true</enabled>
<updatePolicy>never</updatePolicy>
</releases>
<snapshots>
<enabled>true</enabled>
<updatePolicy>never</updatePolicy>
</snapshots>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>jboss-eap-repository-group</id>
<name>JBoss EAP Maven Repository</name>
<url>JBOSS_EAP_REPOSITORY_URL</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
</snapshots>
</pluginRepository>
</pluginRepositories>
To use the online JBoss EAP Maven repository, specify the following URL:
https://maven.repository.redhat.com/ga/
To use a JBoss EAP Maven repository installed on the local file system, you must download the
repository and then use the local file path for the URL. For example: file:///path/to/repo/jboss-
eap-7.0-maven-repository/maven-repository/
If you install the repository on an Apache Web Server, the repository URL will be similar to the
following: http://intranet.acme.com/jboss-eap-7.0-maven-repository/maven-repository/
If you install the JBoss EAP Maven repository using the Nexus Repository Manager, the URL will
look something like the following: https://intranet.acme.com/nexus/content/repositories/jboss-
eap-7.0-maven-repository/maven-repository/
NOTE
Remote repositories are accessed using common protocols such as http:// for a
repository on an HTTP server or file:// for a repository on a file server.
2.3.2. Configure Maven for Use with Red Hat JBoss Developer Studio
The artifacts and dependencies needed to build and deploy applications to Red Hat JBoss Enterprise
Application Platform are hosted on a public repository. You must direct Maven to use this repository
when you build your applications. This topic covers the steps to configure Maven if you plan to build and
deploy applications using Red Hat JBoss Developer Studio.
Maven is distributed with Red Hat JBoss Developer Studio, so it is not necessary to install it separately.
However, you must configure Maven for use by the Java EE Web Project wizard for deployments to
JBoss EAP. The procedure below demonstrates how to configure Maven for use with JBoss EAP by
editing the Maven configuration file from within Red Hat JBoss Developer Studio.
1. Click Window → Preferences, expand JBoss Tools and select JBoss Maven Integration.
3. Click Add Repository to configure the JBoss Enterprise Maven repository. Complete the Add
Maven Repository dialog as follows:
a. Set the Profile ID, Repository ID, and Repository Name values to jboss-ga-repository.
d. Click OK.
5. You are prompted with the message "Are you sure you want to update the file
MAVEN_HOME/settings.xml?". Click Yes to update the settings. Click OK to close the dialog.
The JBoss EAP Maven repository is now configured for use with Red Hat JBoss Developer Studio.
A BOM is a Maven pom.xml (POM) file that specifies the versions of all runtime dependencies for a
given module. Version dependencies are listed in the dependency management section of the file.
NOTE
In many cases, dependencies in project POM files use the provided scope. This is
because these classes are provided by the application server at runtime and it is not
necessary to package them with the user application.
Adding a supported artifact to the build configuration pom.xml file ensures that the build is using the
correct binary artifact for local building and testing. Note that an artifact with a -redhat version is not
necessarily part of the supported public API, and may change in future revisions. For information about
the public supported API, see the JavaDoc documentation included in the release.
For example, to use the supported version of Hibernate, add something similar to the following to your
build configuration.
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-core</artifactId>
<version>5.0.1.Final-redhat-1</version>
<scope>provided</scope>
</dependency>
Notice that the above example includes a value for the <version/> field. However, it is recommended
to use Maven dependency management for configuring dependency versions.
Dependency Management
Maven includes a mechanism for managing the versions of direct and transitive dependencies
throughout the build. For general information about using dependency management, see the Apache
Maven Project: Introduction to the Dependency Mechanism.
Using one or more supported Red Hat dependencies directly in your build does not guarantee that all
transitive dependencies of the build will be fully supported Red Hat artifacts. It is common for Maven
builds to use a mix of artifact sources from the Maven central repository and other Maven repositories.
There is a dependency management BOM included in the JBoss EAP Maven repository, which specifies
all the supported JBoss EAP binary artifacts. This BOM can be used in a build to ensure that Maven will
prioritize supported JBoss EAP dependencies for all direct and transitive dependencies in the build. In
other words, transitive dependencies will be managed to the correct supported dependency version
where applicable. The version of this BOM matches the version of the JBoss EAP release.
<dependencyManagement>
<dependencies>
...
<dependency>
<groupId>org.jboss.bom</groupId>
<artifactId>eap-runtime-artifacts</artifactId>
<version>7.0.0.GA</version>
<type>pom</type>
<scope>import</scope>
</dependency>
...
</dependencies>
</dependencyManagement>
NOTE
In JBoss EAP 7 the name of this BOM was changed from eap6-supported-artifacts to
eap-runtime-artifacts. The purpose of this change is to make it more clear that the
artifacts in this POM are part of the JBoss EAP runtime, but are not necessarily part of the
supported public API. Some of the jars contain internal API and functionality which may
change between releases.
To use this BOM in a project, add a dependency for the GAV that contains the version of the JSP and
Servlet API JARs needed to build and deploy the application.
The following example uses the 1.0.3.Final-redhat-1 version of the jboss-javaee-7.0 BOM.
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.jboss.spec</groupId>
<artifactId>jboss-javaee-7.0</artifactId>
<version>1.0.3.Final-redhat-1</version>
<type>pom</type>
<scope>import</scope>
</dependency>
...
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.jboss.spec.javax.servlet</groupId>
<artifactId>jboss-servlet-api_3.1_spec</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.jboss.spec.javax.servlet.jsp</groupId>
<artifactId>jboss-jsp-api_2.3_spec</artifactId>
<scope>provided</scope>
</dependency>
...
</dependencies>
jboss-eap-javaee7: Supported JBoss EAP Java EE 7 APIs plus additional JBoss EAP API jars
NOTE

These BOMs from JBoss EAP 6 have been consolidated into fewer BOMs to make usage simpler for most use cases. The Hibernate, logging, transactions, messaging, and other public API jars are now included in jboss-eap-javaee7 instead of requiring a separate BOM for each case.
The following example uses the 7.0.0.GA version of the jboss-eap-javaee7 BOM.
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.jboss.bom</groupId>
<artifactId>jboss-eap-javaee7</artifactId>
<version>7.0.0.GA</version>
<type>pom</type>
<scope>import</scope>
</dependency>
...
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-core</artifactId>
<scope>provided</scope>
</dependency>
...
</dependencies>
<dependencyManagement>
  <dependencies>
    <!-- jboss-eap-javaee7: JBoss stack of the Java EE APIs and related components. -->
    <dependency>
      <groupId>org.jboss.bom</groupId>
      <artifactId>jboss-eap-javaee7</artifactId>
      <version>7.0.0.GA</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
  ...
</dependencyManagement>
<dependencies>
  <dependency>
    <groupId>org.jboss.eap</groupId>
    <artifactId>wildfly-ejb-client-bom</artifactId>
    <type>pom</type>
  </dependency>
  <dependency>
    <groupId>org.jboss.eap</groupId>
    <artifactId>wildfly-jms-client-bom</artifactId>
    <type>pom</type>
  </dependency>
  ...
</dependencies>
For more information about Maven Dependencies and BOM POM files, see Apache Maven Project -
Introduction to the Dependency Mechanism.
CHAPTER 3. CLASS LOADING AND MODULES
3.1. INTRODUCTION
The modular class loader separates all Java classes into logical groups called modules. Each module
can define dependencies on other modules in order to have the classes from that module added to its
own class path. Because each deployed JAR and WAR file is treated as a module, developers can
control the contents of their application’s class path by adding module configuration to their application.
3.1.2. Modules
A module is a logical grouping of classes used for class loading and dependency management. JBoss
EAP identifies two different types of modules: static and dynamic. The main difference between the two
is how they are packaged.
Static Modules
Static modules are defined in the EAP_HOME/modules/ directory of the application server. Each
module exists as a subdirectory, for example EAP_HOME/modules/com/mysql/. Each module
directory then contains a slot subdirectory, which defaults to main and contains the module.xml
configuration file and any required JAR files. All the application server-provided APIs are provided as
static modules, including the Java EE APIs as well as other APIs.
The module name (com.mysql) must match the directory structure for the module, excluding the slot
name (main).
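For example, a module.xml for the com.mysql module described above might look like the following minimal sketch; the schema version, JAR file name, and dependency list are illustrative:

<module xmlns="urn:jboss:module:1.3" name="com.mysql">
    <resources>
        <!-- The JAR file name is illustrative. -->
        <resource-root path="mysql-connector-java.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
    </dependencies>
</module>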
Creating custom static modules can be useful if many applications are deployed on the same server that
use the same third-party libraries. Instead of bundling those libraries with each application, a module
containing these libraries can be created and installed by an administrator. The applications can then
declare an explicit dependency on the custom static modules.
The modules provided in JBoss EAP distributions are located in the system directory within the
EAP_HOME/modules directory. This keeps them separate from any modules provided by third parties.
Any Red Hat provided products that layer on top of JBoss EAP also install their modules within the
system directory.
Users must ensure that custom modules are installed into the EAP_HOME/modules directory, using one
directory per module. This ensures that custom versions of modules that already exist in the system
directory are loaded instead of the shipped versions. In this way, user-provided modules will take
precedence over system modules.
If you use the JBOSS_MODULEPATH environment variable to change the locations in which JBoss EAP
searches for modules, then the product will look for a system subdirectory structure within one of the
locations specified. A system structure must exist somewhere in the locations specified with
JBOSS_MODULEPATH.
Dynamic Modules
Dynamic modules are created and loaded by the application server for each JAR or WAR deployment
(or subdeployment in an EAR). The name of a dynamic module is derived from the name of the deployed
archive. Because deployments are loaded as modules, they can configure dependencies and be used as
dependencies by other deployments.
Modules are only loaded when required. This usually only occurs when an application is deployed that
has explicit or implicit dependencies.
NOTE
See the Modules section for complete details about modules and the modular class
loading system.
Deployed applications (a JAR or WAR, for example) are loaded as dynamic modules and make use of
dependencies to access the APIs provided by JBoss EAP.
Explicit Dependencies
Explicit dependencies are declared by the developer in a configuration file. A static module can
declare dependencies in its module.xml file. A dynamic module can declare dependencies in the
deployment’s MANIFEST.MF or jboss-deployment-structure.xml deployment descriptor.
Implicit Dependencies
Implicit dependencies are added automatically by JBoss EAP when certain conditions or meta-data
are found in a deployment. The Java EE 7 APIs supplied with JBoss EAP are examples of modules
that are added by detection of implicit dependencies in deployments.
Deployments can also be configured to exclude specific implicit dependencies by using the jboss-
deployment-structure.xml deployment descriptor file. This can be useful when an application
bundles a specific version of a library that JBoss EAP will attempt to add as an implicit dependency.
See the Add an Explicit Module Dependency to a Deployment section for details on using the jboss-
deployment-structure.xml deployment descriptor.
Optional Dependencies
Explicit dependencies can be specified as optional. Failure to load an optional dependency will not cause
a module to fail to load. However, if the dependency becomes available later it will not be added to the
module’s class path. Dependencies must be available when the module is loaded.
Export a Dependency
A module's class path contains only its own classes and those of its immediate dependencies. A module is
not able to access the classes of the dependencies of one of its dependencies. However, a module can
specify that an explicit dependency is exported. An exported dependency is provided to any module that
depends on the module that exports it.
For example, Module A depends on Module B, and Module B depends on Module C. Module A can
access the classes of Module B, and Module B can access the classes of Module C. Module A cannot
access the classes of Module C unless Module A declares an explicit dependency on Module C, or
Module B exports its dependency on Module C.
Global Modules
A global module is a module that JBoss EAP provides as a dependency to every application. Any
module can be made global by adding it to JBoss EAP’s list of global modules. It does not require
changes to the module.
See the Define Global Modules section of the JBoss EAP Configuration Guide for details.
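As a sketch, a module can be registered as a global module with a management CLI operation along the following lines; the module name is illustrative:

/subsystem=ee:write-attribute(name=global-modules,value=[{"name" => "com.example.shared"}])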
You can use the following management operation to view information about a particular module and its
dependencies:
/core-service=module-loading:module-info(name=$MODULE_NAME)
[standalone@localhost:9990 /] /core-service=module-loading:module-info(name=org.jboss.logmanager)
{
"outcome" => "success",
"result" => {
"name" => "org.jboss.logmanager:main",
"main-class" => undefined,
"fallback-loader" => undefined,
"dependencies" => [
{
"dependency-name" => "ModuleDependency",
"module-name" => "javax.api:main",
"export-filter" => "Reject",
"import-filter" => "multi-path filter {exclude children of
\"META-INF/\", exclude equals \"META-INF\", default accept}",
"optional" => false
},
{
"dependency-name" => "ModuleDependency",
"module-name" => "org.jboss.modules:main",
WAR Deployment
A WAR deployment is considered to be a single module. Classes in the WEB-INF/lib directory are
treated the same as classes in the WEB-INF/classes directory. All classes packaged in the WAR
will be loaded with the same class loader.
EAR Deployment
EAR deployments are made up of more than one module, and are defined by the following rules:
1. The lib/ directory of the EAR is a single module called the parent module.
Subdeployment modules (the WAR and JAR deployments within the EAR) have an automatic
dependency on the parent module. However, they do not have automatic dependencies on each other.
This is called subdeployment isolation, and can be disabled per deployment, or for the entire application
server.
Explicit dependencies between subdeployment modules can be added by the same means as any other
module.
During deployment, a complete list of packages and classes is created for each deployment and each of
its dependencies. The list is ordered according to the class loading precedence rules. When loading
classes at runtime, the class loader searches this list and loads the first match. This prevents multiple
copies of the same classes and packages within the deployment's class path from conflicting with each
other.
The class loader loads classes in the following order, from highest to lowest precedence:
1. Implicit dependencies: These dependencies are automatically added by JBoss EAP, such as
the Java EE APIs. These dependencies have the highest class loader precedence because
they contain common functionality and APIs that are supplied by JBoss EAP.
Refer to Implicit Module Dependencies for complete details about each implicit dependency.
2. Explicit dependencies: These dependencies are manually added by the developer, using the
application's MANIFEST.MF file or the jboss-deployment-structure.xml deployment descriptor.
3. Local resources: These are class files packaged inside the deployment itself, for example from
the WEB-INF/classes or WEB-INF/lib directories of a WAR file.
4. Inter-deployment dependencies: These are dependencies on other deployments in an EAR
deployment, such as classes in the EAR's lib/ directory or in another subdeployment.
Deployments of WAR and JAR files are named using the following format:
deployment.DEPLOYMENT_NAME
For example, inventory.war and store.jar will have the module names of
deployment.inventory.war and deployment.store.jar respectively.
Subdeployments within an Enterprise Archive (EAR) are named using the following format:
deployment.EAR_NAME.SUBDEPLOYMENT_NAME
For example, the subdeployment of reports.war within the enterprise archive accounts.ear
will have the module name of deployment.accounts.ear.reports.war.
3.1.7. jboss-deployment-structure.xml
jboss-deployment-structure.xml is an optional deployment descriptor for JBoss EAP. This
deployment descriptor provides control over class loading in the deployment.
NOTE
JBoss EAP automatically adds some dependencies to deployments. See Implicit Module
Dependencies for details.
Prerequisites
1. A working software project that you want to add a module dependency to.
2. You must know the name of the module being added as a dependency. See Included Modules
for the list of static modules included with JBoss EAP. If the module is another deployment then
see Dynamic Module Naming to determine the module name.
1. If the project does not have one, create a file called MANIFEST.MF. For a web application
(WAR), add this file to the META-INF directory. For an EJB archive (JAR), add it to the META-INF
directory.
2. Add a Dependencies entry to the MANIFEST.MF file with a comma-separated list of dependency
module names:

Dependencies: org.javassist, org.apache.velocity

To make a dependency optional, append optional to the module name in the dependency
entry:

Dependencies: org.javassist optional, org.apache.velocity
The annotations flag is needed when the module dependency contains annotations that
need to be processed during annotation scanning, such as when declaring EJB interceptors.
Without this, an EJB interceptor declared in a module cannot be used in a deployment.
There are other situations involving annotation scanning when this is needed too.
By default items in the META-INF of a dependency are not accessible. The services
dependency makes items from META-INF/services accessible so that services in the
modules can be loaded.
To scan a beans.xml file and make its resulting beans available to the application, the
meta-inf dependency can be used.
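For example, a Dependencies entry combining these flags might look like the following sketch; the module names are illustrative:

Dependencies: com.example.interceptors annotations, com.example.impl services, com.example.beans meta-inf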
1. If the application does not have one, create a new file called jboss-deployment-
structure.xml and add it to the project. This file is an XML file with the root element of
<jboss-deployment-structure>.
<jboss-deployment-structure>
</jboss-deployment-structure>
For a web application (WAR) add this file to the WEB-INF directory. For an EJB archive (JAR)
add it to the META-INF directory.
2. Create a <deployment> element within the document root and a <dependencies> element
within that.
3. Within the <dependencies> node, add a module element for each module dependency. Set
the name attribute to the name of the module.
A dependency can be made optional by adding the optional attribute to the module entry
with the value of true. The default value for this attribute is false.
A dependency can be exported by adding the export attribute to the module entry with the
value of true. The default value for this attribute is false.
When the module dependency contains annotations that need to be processed during
annotation scanning, the annotations flag is used.
The services attribute specifies whether and how services found in this
dependency are used. The default is none. Specifying a value of import for this attribute is
equivalent to adding a filter at the end of the import filter list that includes the
META-INF/services path from the dependency module. Setting a value of export for this
attribute is equivalent to the same action on the export filter list.
The meta-inf attribute specifies whether and how META-INF entries in this
dependency are used. The default is none. Specifying a value of import for this attribute is
equivalent to adding a filter at the end of the import filter list that includes the
META-INF/** path from the dependency module. Setting a value of export for this attribute is
equivalent to the same action on the export filter list.
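As a sketch, a single module entry combining these attributes might look like this; the module name is illustrative:

<module name="com.example.shared" optional="true" export="true" services="import" meta-inf="import" />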
<jboss-deployment-structure>
  <deployment>
    <dependencies>
      <module name="org.javassist" />
      <module name="org.apache.velocity" export="true" />
    </dependencies>
  </deployment>
</jboss-deployment-structure>
JBoss EAP adds the classes from the specified modules to the class path of the application when it is
deployed.
mkdir /tmp/META-INF
mv $JAR_FILE.ifx /tmp/META-INF/jandex.idx
Then place the JAR in the module directory and edit module.xml to add it to the resource
roots.
4. Tell the module import to utilize the annotation index, so that annotation scanning can find the
annotations.
a. Option 1: If you are adding a module dependency using MANIFEST.MF, add annotations
after the module name. For example (the module name is illustrative), change:

Dependencies: com.company.my-module

to

Dependencies: com.company.my-module annotations

b. Option 2: If you are adding the module dependency using jboss-deployment-structure.xml,
add annotations="true" to the module entry.
NOTE
Before generating the MANIFEST.MF entries using Maven, you need:
A working Maven project that uses one of the JAR, EJB, or WAR packaging plug-ins
(maven-jar-plugin, maven-ejb-plugin, or maven-war-plugin).
The names of the project's module dependencies. Refer to Included Modules for
the list of static modules included with JBoss EAP. If the module is another deployment, then
refer to Dynamic Module Naming to determine the module name.
1. Add the following configuration to the packaging plug-in configuration in the project’s pom.xml
file.
<configuration>
  <archive>
    <manifestEntries>
      <Dependencies></Dependencies>
    </manifestEntries>
  </archive>
</configuration>
2. Add the list of module dependencies to the <Dependencies> element. Use the same format
that is used when adding the dependencies to the MANIFEST.MF file:
<Dependencies>org.javassist, org.apache.velocity</Dependencies>
When the project is built using the assembly goal, the final archive contains a MANIFEST.MF file
with the specified module dependencies.
NOTE
The example here shows the WAR plug-in but it also works with the JAR and EJB
plug-ins (maven-jar-plugin and maven-ejb-plugin).
<plugins>
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-war-plugin</artifactId>
    <configuration>
      <archive>
        <manifestEntries>
          <Dependencies>org.javassist, org.apache.velocity</Dependencies>
        </manifestEntries>
      </archive>
    </configuration>
  </plugin>
</plugins>
You can configure a deployable application to prevent implicit dependencies from being loaded. This can
be useful when an application includes a different version of a library or framework than the one that will
be provided by the application server as an implicit dependency.
Prerequisites
A working software project that you want to exclude an implicit dependency from.
You must know the name of the module to exclude. Refer to Implicit Module Dependencies for a
list of implicit dependencies and their conditions.
1. If the application does not have one, create a new file called jboss-deployment-
structure.xml and add it to the project. This is an XML file with the root element of <jboss-
deployment-structure>.
<jboss-deployment-structure>
</jboss-deployment-structure>
For a web application (WAR) add this file to the WEB-INF directory. For an EJB archive (JAR)
add it to the META-INF directory.
2. Create a <deployment> element within the document root and an <exclusions> element
within that.
<deployment>
  <exclusions>
  </exclusions>
</deployment>
3. Within the exclusions element, add a <module> element for each module to be excluded. Set
the name attribute to the name of the module.
<jboss-deployment-structure>
  <deployment>
    <exclusions>
      <module name="org.javassist" />
      <module name="org.dom4j" />
    </exclusions>
  </deployment>
</jboss-deployment-structure>
Excluding a subsystem provides the same effect as removing the subsystem, but it applies only to a
single deployment. You can exclude a subsystem from a deployment by editing the jboss-
deployment-structure.xml configuration file.
Exclude a Subsystem
<exclude-subsystems>
  <subsystem name="SUBSYSTEM_NAME" />
</exclude-subsystems>
The subsystem’s deployment unit processors will no longer run on the deployment.
<jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.2">
  <ear-subdeployments-isolated>true</ear-subdeployments-isolated>
  <deployment>
    <exclude-subsystems>
      <subsystem name="jaxrs" />
    </exclude-subsystems>
    <exclusions>
      <module name="org.javassist" />
    </exclusions>
    <dependencies>
      <module name="deployment.javassist.proxy" />
      <module name="deployment.myjavassist" />
      <module name="myservicemodule" services="import"/>
    </dependencies>
    <resources>
      <resource-root path="my-library.jar" />
    </resources>
  </deployment>
  <sub-deployment name="myapp.war">
    <dependencies>
      <module name="deployment.myear.ear.myejbjar.jar" />
    </dependencies>
    <local-last value="true" />
  </sub-deployment>
  <module name="deployment.myjavassist">
    <resources>
      <resource-root path="javassist.jar">
        <filter>
          <exclude path="javassist/util/proxy" />
        </filter>
      </resource-root>
    </resources>
  </module>
  <module name="deployment.javassist.proxy">
    <dependencies>
      <module name="deployment.myjavassist">
        <imports>
          <exclude path="javassist/util/proxy" />
        </imports>
      </module>
    </dependencies>
  </module>
</jboss-deployment-structure>
Class.forName(String className): This signature takes only one parameter, the name of the
class you need to load. With this method signature, the class is loaded by the class loader of the
current class, and the newly loaded class is initialized by default.
Class.forName(String className, boolean initialize, ClassLoader loader): This signature takes
three parameters: the class name, a boolean value that specifies whether to initialize the class,
and the class loader to use.
The three argument signature is the recommended way to programmatically load a class. This signature
allows you to control whether you want the target class to be initialized upon load. It is also more efficient
to obtain and provide the class loader because the JVM does not need to examine the call stack to
determine which class loader to use. Assuming the class containing the code is named CurrentClass,
you can obtain the class's class loader using the CurrentClass.class.getClassLoader() method.
The following example provides the class loader to load and initialize the TargetClass class:
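// The package name is illustrative.
Class<?> targetClass = Class.forName("com.myorg.util.TargetClass", true, CurrentClass.class.getClassLoader());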
Load a Single Resource: To load a single resource located in the same directory as your class
or another class in your deployment, you can use the Class.getResourceAsStream()
method.
InputStream inputStream = CurrentClass.class.getResourceAsStream("targetResourceName");
Load All Instances of a Single Resource: To load all instances of a single resource that are
visible to your deployment’s class loader, use the
Class.getClassLoader().getResources(String resourceName) method, where
resourceName is the fully qualified path of the resource. This method returns an Enumeration
of all URL objects for resources accessible by the class loader with the given name. You can
then iterate through the Enumeration and open each stream using the openStream() method.
The following example loads all instances of a resource and iterates through the results.
Enumeration<URL> urls = CurrentClass.class.getClassLoader().getResources("full/path/to/resource");
while (urls.hasMoreElements()) {
    URL url = urls.nextElement();
    InputStream inputStream = null;
    try {
        inputStream = url.openStream();
        // Process the inputStream
        ...
    } catch(IOException ioException) {
        // Handle the error
    } finally {
        if (inputStream != null) {
            try {
                inputStream.close();
            } catch (Exception e) {
                // ignore
            }
        }
    }
}
Because the URL instances are loaded from local storage, it is not necessary to use the
openConnection() or other related methods. Streams are much simpler to use and minimize the
complexity of the code.
Load a Class File From the Class Loader: If a class has already been loaded, you can load
the class file that corresponds to that class using the following syntax:
InputStream inputStream = CurrentClass.class.getResourceAsStream(TargetClass.class.getSimpleName() + ".class");
If the class is not yet loaded, you must use the class loader and translate the path:
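// A sketch; the class name is illustrative.
String className = "com.myorg.util.TargetClass";
InputStream inputStream = CurrentClass.class.getClassLoader().getResourceAsStream(
        className.replace('.', '/') + ".class");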
To use the JBoss Modules API in a deployment, declare a dependency on the org.jboss.modules module in the deployment's MANIFEST.MF:
Dependencies: org.jboss.modules
It is important to note that while these APIs provide increased flexibility, they will also run much more
slowly than a direct path lookup.
This topic describes some of the ways you can programmatically iterate through resources in your
application code.
List Resources Within a Deployment and Within All Imports: There are times when it is not
possible to look up resources by the exact path. For example, the exact path may not be known,
or you may need to examine more than one file in a given path. In this case, the JBoss Modules
library provides several APIs for iterating all deployment resources. You can iterate through
resources in a deployment by utilizing one of two methods.
Iterate All Resources Found in a Single Module: The
ModuleClassLoader.iterateResources() method iterates all the resources within this
module class loader. It takes two parameters: the starting directory name to search and a
boolean that specifies whether it should recurse into subdirectories.
The resultant iterator may be used to examine each matching resource and query its name
and size (if available), open a readable stream, or acquire a URL for the resource.
Iterate All Resources Found in a Single Module and Imported Resources: The
Module.iterateResources() method iterates all the resources within this module class
loader, including the resources that are imported into the module. This method returns a
much larger set than the previous method. This method requires an argument, which is a
filter that narrows the result to a specific pattern. Alternatively, PathFilters.acceptAll() can be
supplied to return the entire set.
The following example demonstrates how to find the entire set of resources in this module,
including imports.
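// A sketch using the JBoss Modules API; it assumes the deployment declares
// a dependency on the org.jboss.modules module as described above.
Module module = Module.forClass(CurrentClass.class);
Iterator<Resource> resources = module.iterateResources(PathFilters.acceptAll());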
Find All Resources That Match a Pattern: If you need to find only specific resources within
your deployment or within your deployment’s full import set, you need to filter the resource
iteration. The JBoss Modules filtering APIs give you several tools to accomplish this.
Examine the Full Set of Dependencies: If you need to examine the full set of
dependencies, you can use the Module.iterateResources() method’s PathFilter
parameter to check the name of each resource for a match.
Examine Deployment Dependencies: If you need to look only within the deployment, use
the ModuleClassLoader.iterateResources() method. However, you must use
additional methods to filter the resultant iterator. The PathFilters.filtered() method
can provide a filtered view of a resource iterator in this case. The PathFilters class includes
many static methods to create and compose filters that perform various functions, including
finding child paths or exact matches, or matching an Ant-style "glob" pattern.
Additional Code Examples For Filtering Resources: The following examples demonstrate
how to filter resources based on different criteria.
Example: Find All Files Inside Any Directory Named my-resources in Your
Deployment
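// A sketch; assumes moduleClassLoader is the deployment's ModuleClassLoader.
Iterator<Resource> resources = PathFilters.filtered(
        PathFilters.match("**/my-resources/**"),
        moduleClassLoader.iterateResources("", true));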
Example: Find All Files Named messages or errors in Your Deployment and
Imports
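// A sketch; assumes module is the deployment's Module (see above).
Iterator<Resource> resources = module.iterateResources(
        PathFilters.any(
                PathFilters.match("**/messages"),
                PathFilters.match("**/errors")));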
The contents of the lib/ directory in the root of the EAR archive form a module. This is called the
parent module.
Each WAR and EJB JAR subdeployment is a module. These modules have the same behavior
as any other module as well as implicit dependencies on the parent module.
Subdeployments have implicit dependencies on the parent module and any other non-WAR
subdeployments.
The implicit dependencies on non-WAR subdeployments occur because JBoss EAP has subdeployment
class loader isolation disabled by default. Dependencies on the parent module persist, regardless of
subdeployment class loader isolation.
IMPORTANT
Subdeployment class loader isolation can be enabled if strict compatibility is required. This can be
enabled for a single EAR deployment or for all EAR deployments. The Java EE specification
recommends that portable applications should not rely on subdeployments being able to access each
other unless dependencies are explicitly declared as Class-Path entries in the MANIFEST.MF file of
each subdeployment.
IMPORTANT
Even when subdeployment class loader isolation is disabled it is not possible to add a
WAR deployment as a dependency.
<jboss-deployment-structure>
  <ear-subdeployments-isolated>true</ear-subdeployments-isolated>
</jboss-deployment-structure>
Result
Subdeployment class loader isolation will now be enabled for this EAR deployment. This means that the
subdeployments of the EAR will not have automatic dependencies on each of the non-WAR
subdeployments.
IMPORTANT
Since this feature is not a standard servlet feature, your applications may not be portable if
this functionality is enabled.
To enable session sharing between WARs within an EAR, you need to declare a shared-session-
config element in the META-INF/jboss-all.xml of the EAR:
Example: META-INF/jboss-all.xml
<jboss xmlns="urn:jboss:1.0">
    ...
    <shared-session-config xmlns="urn:jboss:shared-session-config:1.0">
    </shared-session-config>
    ...
</jboss>
The shared-session-config element is used to configure the shared session manager for all WARs
within the EAR. If the shared-session-config element is present, all WARs within the EAR will
share the same session manager. Changes made here will affect all the WARs contained within the
EAR.
shared-session-config
    max-active-sessions
    session-config
        session-timeout
        cookie-config
            name
            domain
            path
            comment
            http-only
            secure
            max-age
        tracking-mode
    replication-config
        cache-name
        replication-granularity
Example: META-INF/jboss-all.xml
<jboss xmlns="urn:jboss:1.0">
    <shared-session-config xmlns="urn:jboss:shared-session-config:1.0">
        <max-active-sessions>10</max-active-sessions>
        <session-config>
            <session-timeout>0</session-timeout>
            <cookie-config>
                <name>JSESSIONID</name>
                <domain>domainName</domain>
                <path>/cookiePath</path>
                <comment>cookie comment</comment>
                <http-only>true</http-only>
                <secure>true</secure>
                <max-age>-1</max-age>
            </cookie-config>
            <tracking-mode>COOKIE</tracking-mode>
        </session-config>
        <replication-config>
            <cache-name>web</cache-name>
            <replication-granularity>SESSION</replication-granularity>
        </replication-config>
    </shared-session-config>
</jboss>
shared-session-config
Root element for the shared session configuration. If this is present in the META-INF/jboss-
all.xml, then all deployed WARs contained in the EAR will share a single session manager.
max-active-sessions
The maximum number of active sessions allowed.
session-config
Contains the session configuration parameters for all deployed WARs contained in the EAR.
session-timeout
Defines the default session timeout interval for all sessions created in the deployed WARs contained
in the EAR. The specified timeout must be expressed in a whole number of minutes. If the timeout is
0 or less, the container ensures the default behavior of sessions is to never time out. If this element is
not specified, the container must set its default timeout period.
cookie-config
Contains the configuration of the session tracking cookies created by the deployed WARs contained
in the EAR.
name
The name that will be assigned to any session tracking cookies created by the deployed WARs
contained in the EAR. The default is JSESSIONID.
domain
The domain name that will be assigned to any session tracking cookies created by the deployed
WARs contained in the EAR.
path
The path that will be assigned to any session tracking cookies created by the deployed WARs
contained in the EAR.
comment
The comment that will be assigned to any session tracking cookies created by the deployed WARs
contained in the EAR.
http-only
Specifies whether any session tracking cookies created by the deployed WARs contained in the EAR
will be marked as HttpOnly.
secure
Specifies whether any session tracking cookies created by the deployed WARs contained in the EAR
will be marked as secure, even if the request that initiated the corresponding session used plain
HTTP instead of HTTPS.
max-age
The lifetime (in seconds) that will be assigned to any session tracking cookies created by the
deployed WARs contained in the EAR. Default is -1.
tracking-mode
Defines the tracking modes for sessions created by the deployed WARs contained in the EAR.
replication-config
Contains the HTTP session clustering configuration.
cache-name
This option is for use in clustering only. It specifies the name of the Infinispan container and cache in
which to store session data. The default value, if not explicitly set, is determined by the application
server. To use a specific cache within a cache container, use the form container.cache, for
example web.dist. If name is unqualified, the default cache of the specified container is used.
replication-granularity
This option is for use in clustering only. It determines the session replication granularity level. The
possible values are SESSION and ATTRIBUTE with SESSION being the default.
If SESSION granularity is used, all session attributes are replicated if any were modified within the
scope of a request. This policy is required if an object reference is shared by multiple session
attributes. However, this can be inefficient if session attributes are sufficiently large and/or are
modified infrequently, since all attributes must be replicated regardless of whether they were modified
or not.
If ATTRIBUTE granularity is used, only those attributes that were modified within the scope of a
request are replicated. This policy is not appropriate if an object reference is shared by multiple
session attributes. This can be more efficient than SESSION granularity if the session attributes are
sufficiently large and/or are modified infrequently.
If multiple applications use common Tag Library Descriptors (TLDs), it can be useful to separate the
TLDs from the applications. This can be done by creating a custom JBoss EAP module that contains the
TLD JARs, and declaring a dependency on that module in the applications.
NOTE
Ensure that at least one JAR contains TLDs and the TLDs are packed in META-INF.
1. Using the management CLI, connect to your JBoss EAP instance and execute the following
command to create the custom module containing the TLD JAR:
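For example (the resource path is illustrative):

module add --name=MyTagLibs --resources=/path/to/TLDarchive.jar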
IMPORTANT
Using the module management CLI command to add and remove modules is
provided as technology preview only. This command is not appropriate for use in
a managed domain or when connecting to the management CLI remotely.
Modules should be added and removed manually in a production environment.
For more information, see the Create a Custom Module Manually and Remove a
Custom Module Manually sections of the JBoss EAP Configuration Guide.
If the TLDs are packaged with classes that require dependencies, use the --dependencies=
option to ensure that you specify those dependencies when creating the custom module.
When creating the module, you can specify multiple JAR resources by separating each one with
the file system-specific separator for your system.
NOTE
--resources
It is required unless --module-xml is used. It lists file system paths,
usually JAR files, separated by a file system-specific path separator, for
example java.io.File.pathSeparatorChar. The files specified will
be copied to the created module’s directory.
--resource-delimiter
It is an optional user-defined path separator for the resources argument. If
this argument is present, the command parser will use the value here
instead of the file system-specific path separator. This allows the modules
command to be used in cross-platform scripts.
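For example, a cross-platform script might pass an explicit delimiter; the paths are illustrative:

module add --name=MyTagLibs --resources=/path/one.jar,/path/two.jar --resource-delimiter=,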
2. In your applications, declare a dependency on the new MyTagLibs custom module using one of
the methods described in Add an Explicit Module Dependency to a Deployment.
IMPORTANT
Ensure that you also import META-INF when declaring the dependency. For example, for
MANIFEST.MF:
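Dependencies: MyTagLibs meta-inf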
3.9. REFERENCE
Implicit Module Dependencies
The following list shows the implicit module dependencies added by each subsystem. Package dependencies that are added only when certain conditions are met in the deployment are noted as conditionally added.

Application Client
    org.omg.api, org.jboss.xnio

Batch
    javax.batch.api, org.jberet.jberet-core, org.wildfly.jberet

Bean Validation
    org.hibernate.validator, javax.validation.api

Core Server
    javax.api, sun.jdk, org.jboss.vfs, ibm.jdk

DriverDependenciesProcessor
    javax.transaction.api

EE
    org.jboss.invocation (except org.jboss.invocation.proxy.classloading), org.jboss.as.ee (except org.jboss.as.ee.component.serialization, org.jboss.as.ee.concurrent, org.jboss.as.ee.concurrent.handle), org.wildfly.naming, javax.annotation.api, javax.enterprise.concurrent.api, javax.interceptor.api, javax.json.api, javax.resource.api, javax.rmi.api, javax.xml.bind.api, javax.api, org.glassfish.javax.el, org.glassfish.javax.enterprise.concurrent

EJB 3
    javax.ejb.api, javax.xml.rpc.api, org.jboss.ejb-client, org.jboss.iiop-client, org.jboss.as.ejb3; conditionally added: org.wildfly.iiop-openjdk

IIOP
    org.omg.api, javax.rmi.api, javax.orb.api

JAX-RS (RESTEasy)
    javax.json.api, org.jboss.resteasy.resteasy-atom-provider, org.jboss.resteasy.resteasy-crypto, org.jboss.resteasy.resteasy-validator-provider-11, org.jboss.resteasy.resteasy-jaxrs, org.jboss.resteasy.resteasy-jaxb-provider, org.jboss.resteasy.resteasy-jackson2-provider, org.jboss.resteasy.resteasy-jsapi, org.jboss.resteasy.resteasy-json-p-provider, org.jboss.resteasy.resteasy-multipart-provider, org.jboss.resteasy.resteasy-yaml-provider, org.codehaus.jackson.jackson-core-asl

JCA
    org.jboss.ironjacamar.api, org.jboss.ironjacamar.impl, org.hibernate.validator

JSR-77
    javax.management.j2ee.api

Logging
    org.jboss.logging, org.apache.commons.logging, org.apache.log4j, org.slf4j, org.jboss.logging.jul-to-slf4j-stub

Mail
    javax.mail.api, javax.activation.api

Messaging
    javax.jms.api; conditionally added: org.wildfly.extension.messaging-activemq

PicketLink Federation
    org.picketlink

Pojo
    org.jboss.as.pojo, org.jboss.common-beans

Seam2
    org.jboss.vfs

Security
    org.picketbox, org.jboss.as.security, javax.security.jacc.api, javax.security.auth.message.api

ServiceActivator
    org.jboss.msc

Transactions
    javax.transaction.api; conditionally added: org.jboss.xts, org.jboss.jts, org.jboss.narayana.compensations

Undertow
    javax.servlet.jstl.api, javax.servlet.api, javax.servlet.jsp.api, javax.websocket.api; conditionally added: io.undertow.core, io.undertow.servlet, io.undertow.jsp, io.undertow.websocket, io.undertow.js, org.wildfly.clustering.web.api

Web Services
    javax.xml.ws.api

Weld
    org.jboss.as.weld, org.jboss.weld.core, org.jboss.weld.probe, org.jboss.weld.api, org.jboss.weld.spi, org.hibernate.validator.cdi
CHAPTER 4. LOGGING
Log messages provide important information for developers when debugging an application and for
system administrators maintaining applications in production.
Most modern Java logging frameworks also include details such as the exact time and the origin of the
message.
JBoss EAP provides support for the following logging frameworks:
JBoss Logging
Apache log4j (Log4j)
commons-logging
SLF4J
java.util.logging
It also supports framework-specific log handlers, such as a java.util.logging Handler or a
Log4j Appender.
NOTE
If you are using the Log4j API and a Log4J Appender, then Objects will be converted
to string before being passed.
JBoss Logging is the application logging framework that is included in JBoss EAP. It provides an easy
way to add logging to an application. You add code to your application that uses the framework to send
log messages in a defined format. When the application is deployed to an application server, these
messages can be captured by the server and displayed or written to a file according to the server's
configuration.
JBoss Logging includes the following features:
An innovative, easy-to-use typed logger. A typed logger is a logger interface annotated with
org.jboss.logging.annotations.MessageLogger. For examples, see Creating
Internationalized Loggers, Messages and Exceptions.
Full support for internationalization and localization. Translators work with message bundles in
properties files while developers work with interfaces and annotations. For details, see
Internationalization and Localization.
Build-time tooling to generate typed loggers for production and runtime generation of typed
loggers for development.
IMPORTANT
If you use Maven to build your project, you must configure Maven to use the JBoss EAP
Maven repository. For more information, see Configure the JBoss EAP Maven
Repository.
1. The JBoss Logging JAR files must be in the build path for your application.
If you build using Red Hat JBoss Developer Studio, select Properties from the Project
menu, then select Targeted Runtimes and ensure the runtime for JBoss EAP is
checked.
If you use Maven to build your project, make sure you add the jboss-logging
dependency to your project's pom.xml file for access to the JBoss Logging framework:
<dependency>
    <groupId>org.jboss.logging</groupId>
    <artifactId>jboss-logging</artifactId>
    <version>3.3.0.Final-redhat-1</version>
    <scope>provided</scope>
</dependency>
The jboss-javaee-7.0 BOM manages the version of jboss-logging. For more details, see
Manage Project Dependencies. See the logging quickstart for a working example of
logging in an application.
You do not need to include the JARs in your built application because JBoss EAP provides them
to deployed applications.
a. Add the import statements for the JBoss Logging class namespaces that you will be using.
At a minimum you will need the following import:
import org.jboss.logging.Logger;
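b. Create an instance of the Logger, typically as a static field, using the static method
Logger.getLogger(Class). For example (YourClass is a placeholder for your own class):

private static final Logger LOGGER = Logger.getLogger(YourClass.class);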
3. Call the Logger object methods in your code where you want to send log messages.
The Logger has many different methods with different parameters for different types of
messages. Use the following methods to send a log message with the corresponding log level
and the message parameter as a string:
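debug(Object message)
trace(Object message)
info(Object message)
warn(Object message)
error(Object message)
fatal(Object message)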
For the complete list of JBoss Logging methods, see the Logging API documentation.
The following example loads customized configuration for an application from a properties file. If the
specified file is not found, an ERROR level log message is recorded.
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

import org.jboss.logging.Logger;

public class LocalSystemConfig
{
    private static final Logger LOGGER = Logger.getLogger(LocalSystemConfig.class);

    public Properties openCustomProperties(String configname) throws CustomConfigFileNotFoundException
    {
        Properties props = new Properties();
        try {
            LOGGER.info("Loading custom configuration from " + configname);
            props.load(new FileInputStream(configname));
        } catch (IOException e) {
            // The properties file was not found, so record an ERROR level message.
            // CustomConfigFileNotFoundException is the application's own exception type.
            LOGGER.error("Custom configuration file (" + configname + ") not found. Using defaults.");
            throw new CustomConfigFileNotFoundException(configname);
        }

        return props;
    }
}
NOTE
If per-deployment logging configuration is not done, the configuration from the logging
subsystem is used for all applications as well as the server.
This approach has advantages and disadvantages over using system-wide logging. An advantage is that
the administrator of the JBoss EAP instance does not need to configure any other logging than the server
logging. A disadvantage is that the per-deployment logging configuration is read only on server startup,
and so cannot be changed at runtime.
The directory into which the configuration file is added depends on the deployment method:
For EAR deployments, copy the logging configuration file to the META-INF directory.
For WAR or JAR deployments, copy the logging configuration file to the WEB-INF/classes
directory.
NOTE
If you are using Simple Logging Facade for Java (SLF4J) or Apache log4j,
the logging.properties configuration file is suitable. If you are using Apache log4j
appenders then the configuration file log4j.properties is required. The configuration
file jboss-logging.properties is supported only for legacy deployments.
Configuring logging.properties
The logging.properties file is used when the server boots, until the logging subsystem is started.
If the logging subsystem is not included in your configuration, then the server uses the configuration in
this file as the logging configuration for the entire server.
Logger options
Handler options
handler.<name>.constructorProperties=<property>[,<property>,…] - Specify a
list of properties that should be used as construction parameters. A rudimentary type
introspection is done to ascertain the appropriate conversion for the given property.
For further information, see Log Handler Attributes in the JBoss EAP Configuration Guide.
Formatter options
formatter.<name>.constructorProperties=<property>[,<property>,…] - Specify
a list of properties that should be used as construction parameters. A rudimentary type
introspection is done to ascertain the appropriate conversion for the given property.
The following example shows a minimal configuration for a logging.properties file that logs to
the console.
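# A minimal sketch; the handler and formatter names are arbitrary.
loggers=
logger.level=INFO
logger.handlers=CONSOLE
handler.CONSOLE=org.jboss.logmanager.handlers.ConsoleHandler
handler.CONSOLE.formatter=PATTERN
formatter.PATTERN=org.jboss.logmanager.formatters.PatternFormatter
formatter.PATTERN.properties=pattern
formatter.PATTERN.pattern=%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n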
Logging profiles allow administrators to create logging configurations that are specific to one or more
applications without affecting any other logging configurations. Because each profile is defined in the
server configuration, the logging configuration can be changed without requiring that the affected
applications be redeployed. However, logging profiles cannot be configured using the management
console. For more information, see Configure a Logging Profile in the JBoss EAP Configuration Guide.
An application can specify a logging profile to use in its MANIFEST.MF file, using the Logging-
Profile attribute.
NOTE
You must know the name of the logging profile that has been set up on the server for this
application to use.
If your application does not have a MANIFEST.MF file, create one with the following content to
specify the logging profile name.
Manifest-Version: 1.0
Logging-Profile: LOGGING_PROFILE_NAME
If your application already has a MANIFEST.MF file, add the following line to specify the logging
profile name.
Logging-Profile: LOGGING_PROFILE_NAME
NOTE
If you are using Maven and the maven-war-plugin, put your MANIFEST.MF file in
src/main/resources/META-INF/ and add the following configuration to your
pom.xml file:
<plugin>
  <artifactId>maven-war-plugin</artifactId>
  <configuration>
    <archive>
      <manifestFile>src/main/resources/META-INF/MANIFEST.MF</manifestFile>
    </archive>
  </configuration>
</plugin>
When the application is deployed, it will use the configuration in the specified logging profile for its log
messages.
For an example of how to configure a logging profile and the application using it, see Example Logging
Profile Configuration in the JBoss EAP Configuration Guide.
4.5.1. Introduction
Internationalization is the process of designing software so that it can be adapted to different languages
and regions without engineering changes.
Localization is the process of adapting internationalized software for a specific region or language by
adding locale-specific components and translations of text.
Internationalized messages and exceptions are created as method definitions inside of interfaces
annotated using org.jboss.logging.annotations annotations. Implementing the interfaces is not
necessary; JBoss Logging Tools does this at compile time. Once defined, you can use these methods to
log messages or obtain exception objects in your code.
Internationalized logging and exception interfaces created with JBoss Logging Tools can be localized by
creating a properties file for each bundle containing the translations for a specific language and region.
JBoss Logging Tools can generate template property files for each bundle that can then be edited by a
translator.
JBoss Logging Tools creates an implementation of each bundle for each corresponding translations
property file in your project. All you have to do is use the methods defined in the bundles and JBoss
Logging Tools ensures that the correct implementation is invoked for your current regional settings.
Message IDs and project codes are unique identifiers that are prepended to each log message. These
unique identifiers can be used in documentation to make it easy to find information about log messages.
With adequate documentation, the meaning of a log message can be determined from the identifiers
regardless of the language that the message was written in.
JBoss Logging Tools includes support for the following features:
MessageLogger
This interface in the org.jboss.logging.annotations package is used to define
internationalized log messages. A message logger interface is annotated with @MessageLogger.
MessageBundle
This interface can be used to define generic translatable messages and Exception objects with
internationalized messages. A message bundle is not used for creating log messages. A message
bundle interface is annotated with @MessageBundle.
Internationalized Log Messages
These log messages are created by defining a method in a MessageLogger. The method must be
annotated with the @LogMessage and @Message annotations and must specify the log message
using the value attribute of @Message. Internationalized log messages are localized by providing
translations in a properties file.
JBoss Logging Tools generates the required logging classes for each translation at compile time and
invokes the correct methods for the current locale at runtime.
Internationalized Exceptions
An internationalized exception is an exception object returned from a method defined in a
MessageBundle. These message bundles can be annotated to define a default exception message.
The default message is replaced with a translation if one is found in a matching properties file for the
current locale. Internationalized exceptions can also have project codes and message IDs assigned
to them.
Internationalized Messages
An internationalized message is a string returned from a method defined in a MessageBundle.
Message bundle methods that return Java String objects can be annotated to define the default
content of that string, known as the message. The default message is replaced with a translation if
one is found in a matching properties file for the current locale.
Translation Properties Files
Translation properties files are Java properties files that contain the translations of messages from
one interface for one locale, country, and variant. Translation properties files are used by the JBoss
Logging Tools to generate the classes that return the messages.
JBoss Logging Tools Project Codes
Project codes are strings of characters that identify groups of messages. They are displayed at the
beginning of each log message, prepended to the message ID. Project codes are defined with the
projectCode attribute of the @MessageLogger annotation.
NOTE
For a complete list of the new log message project code prefixes, see the Project
Codes used in JBoss EAP 7.0.
The logging-tools quickstart that ships with JBoss EAP is a simple Maven project that provides a
working example of many of the features of JBoss Logging Tools. The code examples that follow are
taken from the logging-tools quickstart.
You can use JBoss Logging Tools to create internationalized log messages by creating
MessageLogger interfaces.
NOTE
This topic does not cover all optional features or the localization of the log messages.
1. If you have not yet done so, configure your Maven settings to use the JBoss EAP Maven
repository.
For more information, see Configure the JBoss EAP Maven Repository Using the Maven
Settings.
2. Configure the project's pom.xml file to use JBoss Logging Tools. For details, see JBoss
Logging Tools Maven Configuration.
3. Create a message logger interface by adding a Java interface to your project to contain the log
message definitions.
Name the interface to describe the log messages it will define. The log message interface has
the following requirements:
It must be annotated with @org.jboss.logging.annotations.MessageLogger.
It can optionally extend org.jboss.logging.BasicLogger.
The interface must define a field that is a message logger of the same type as the interface.
Do this with the getMessageLogger() method of org.jboss.logging.Logger.
package com.company.accounts.loggers;

import org.jboss.logging.BasicLogger;
import org.jboss.logging.Logger;
import org.jboss.logging.annotations.MessageLogger;

@MessageLogger(projectCode="")
interface AccountsLogger extends BasicLogger {
    AccountsLogger LOGGER = Logger.getMessageLogger(
            AccountsLogger.class,
            AccountsLogger.class.getPackage().getName() );
}
4. Add a method definition to the interface for each log message. Annotate each method with
@LogMessage and @Message, and specify the log message using the value attribute of
@Message. For example:
@LogMessage
@Message(value = "Customer query failed, Database not available.")
void customerQueryFailDBClosed();
5. Invoke the methods by adding the calls to the interface methods in your code where the
messages must be logged from.
Creating implementations of the interfaces is not necessary; the annotation processor does this
for you when the project is compiled.
AccountsLogger.LOGGER.customerQueryFailDBClosed();
The custom loggers are subclassed from BasicLogger, so the logging methods of
BasicLogger can also be used. It is not necessary to create other loggers to log non-
internationalized messages.
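For example, because AccountsLogger extends BasicLogger, an ordinary non-internationalized message can be logged through the same field; the message and parameter are illustrative:

AccountsLogger.LOGGER.debugf("Connection pool size: %d", poolSize);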
6. The project now supports one or more internationalized loggers that can be localized.
NOTE
The logging-tools quickstart that ships with JBoss EAP is a simple Maven project that
provides a working example of how to use JBoss Logging Tools.
NOTE
This section does not cover all optional features or the process of localizing those
messages.
1. If you have not yet done so, configure your Maven settings to use the JBoss EAP Maven
repository. For more information, see Configure the JBoss EAP Maven Repository Using the
Maven Settings.
2. Configure the project’s pom.xml file to use JBoss Logging Tools. For details, see JBoss
Logging Tools Maven Configuration.
3. Create an interface for the exceptions. JBoss Logging Tools defines internationalized messages
in interfaces. Name each interface descriptively for the messages that it contains. The interface
has the following requirements:
It must be annotated with @MessageBundle.
It must define a field that is a message bundle of the same type as the interface.
@MessageBundle(projectCode="")
public interface GreetingMessageBundle {
    GreetingMessageBundle MESSAGES = Messages.getBundle(GreetingMessageBundle.class);
}
NOTE
Calling Messages.getBundle(GreetingMessagesBundle.class) is
equivalent to calling
Messages.getBundle(GreetingMessagesBundle.class,
Locale.getDefault()).
Locale.getDefault() gets the current value of the default locale for this
instance of the Java Virtual Machine. The Java Virtual Machine sets the
default locale during startup, based on the host environment. It is used by
many locale-sensitive methods if no locale is explicitly specified. It can be
changed using the setDefault method.
See Set the Default Locale of the Server in the JBoss EAP Configuration
Guide for more information.
4. Add a method definition to the interface for each message. Name each method descriptively for
the message that it represents. Each method has the following requirements:
It must return an object of type String.
It must be annotated with @Message, with the default message specified using the value attribute.
5. Invoke the interface methods in your application where you need to obtain the message:
System.out.println(GreetingMessageBundle.MESSAGES.helloworldString());
The project now supports internationalized message strings that can be localized.
NOTE
See the logging-tools quickstart that ships with JBoss EAP for a complete working
example.
You can use JBoss Logging Tools to create and use internationalized exceptions.
The following instructions assume that you want to add internationalized exceptions to an existing
software project that is built using either Red Hat JBoss Developer Studio or Maven.
NOTE
This topic does not cover all optional features or the process of localization of those
exceptions.
1. Configure the project’s pom.xml file to use JBoss Logging Tools. For details, see JBoss
Logging Tools Maven Configuration.
2. Create an interface for the exceptions. JBoss Logging Tools defines internationalized exceptions
in interfaces. Name each interface descriptively for the exceptions that will be defined in it. The
interface has the following requirements:
It must be annotated with @MessageBundle.
It must define a field that is a message bundle of the same type as the interface.
@MessageBundle(projectCode="")
public interface ExceptionBundle {
    ExceptionBundle EXCEPTIONS = Messages.getBundle(ExceptionBundle.class);
}
3. Add a method definition to the interface for each exception. Name each method descriptively for
the exception that it represents. Each method has the following requirements:
It must return an Exception object, or a subtype of Exception.
It must be annotated with @Message, with the default exception message specified using the
value attribute.
If the exception being returned has a constructor that requires parameters in addition to a
message string, then those parameters must be supplied in the method definition using the
@Param annotation. The parameters must be the same type and order as they are in the
constructor of the exception.
4. Invoke the interface methods in your code where you need to obtain one of the exceptions. The
methods do not throw the exceptions, they return the exception object, which you can then
throw.
try {
    propsInFile = new File(configname);
    props.load(new FileInputStream(propsInFile));
}
catch(IOException ioex) {
    //in case props file does not exist
    throw ExceptionBundle.EXCEPTIONS.configFileAccessError();
}
NOTE
See the logging-tools quickstart that ships with JBoss EAP for a complete working
example.
Projects that are built using Maven can generate empty translation property files for each
MessageLogger and MessageBundle it contains. These files can then be used as new translation
property files.
The following procedure demonstrates how to configure a Maven project to generate new translation
property files.
Prerequisites
The project must contain one or more interfaces that define internationalized log messages or
exceptions.
1. Add the following configuration to the maven-compiler-plugin in the project's pom.xml file,
specifying the path where the generated translation property files should be placed:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>2.3.2</version>
  <configuration>
    <source>1.6</source>
    <target>1.6</target>
    <compilerArgument>
      -AgeneratedTranslationFilesPath=${project.basedir}/target/generated-translation-files
    </compilerArgument>
    <showDeprecation>true</showDeprecation>
  </configuration>
</plugin>
2. Run the following command to compile the project and generate the template files:
$ mvn compile
One properties file is created for each interface annotated with @MessageBundle or
@MessageLogger.
The new files are created in a subdirectory corresponding to the Java package in which
each interface is declared.
Each new file is named using the following pattern, where INTERFACE_NAME is the name of
the interface used to generate the file:
INTERFACE_NAME.i18n_locale_COUNTRY_VARIANT.properties
The resulting files can now be copied into your project as the basis for new translations.
NOTE
See the logging-tools quickstart that ships with JBoss EAP for a complete working
example.
Properties files can be used to provide translations for logging and exception messages defined in
interfaces using JBoss Logging Tools.
The following procedure shows how to create and use a translation properties file, and assumes that you
already have a project with one or more interfaces defined for internationalized exceptions or log
messages.
Prerequisites
The project must contain one or more interfaces that define internationalized log messages or
exceptions.
1. Run the following command to create the template translation properties files:
$ mvn compile
2. Copy the template for the interfaces that you want to translate from the directory where they
were created into the src/main/resources directory of your project. The properties files must
be in the same package as the interfaces they are translating.
3. Rename the copied template file to indicate the language it will contain. For example:
GreeterLogger.i18n_fr_FR.properties.
4. Edit the contents of the new translation properties file to contain the appropriate translation:
# Level: Logger.Level.INFO
# Message: Hello message sent.
logHelloMessageSent=Bonjour message envoyé.
5. Repeat the process of copying the template and modifying it for each translation in the bundle.
The project now contains translations for one or more message or logger bundles. Building the project
generates the appropriate classes to log messages with the supplied translations. It is not necessary to
explicitly invoke methods or supply parameters for specific languages; JBoss Logging Tools
automatically uses the correct class for the current locale of the application server.
The source code of the generated classes can be viewed under target/generated-
sources/annotations/.
This procedure demonstrates how to add message IDs and project codes to internationalized log
messages created using JBoss Logging Tools. A log message must have both a project code and
message ID to be displayed in the log. If a message does not have both a project code and a message
ID, then neither is displayed.
Prerequisites
1. You must already have a project with internationalized log messages. For details, see Create
Internationalized Log Messages.
2. You need to know the project code you will be using. You can use a single project code, or
define different ones for each interface.
1. Specify the project code for the interface by using the projectCode attribute of the
@MessageLogger annotation attached to a custom logger interface. All messages that are
defined in the interface will use that project code.
@MessageLogger(projectCode="ACCNTS")
interface AccountsLogger extends BasicLogger {
2. Specify a message ID for each message using the id attribute of the @Message annotation
attached to the method that defines the message.
@LogMessage
@Message(id=43, value = "Customer query failed, Database not available.")
void customerQueryFailDBClosed();
3. The log messages that have both a message ID and project code associated with them will
prepend these to the logged message.
The default log level of a message defined by an interface by JBoss Logging Tools is INFO. A different
log level can be specified with the level attribute of the @LogMessage annotation attached to the
logging method. Use the following procedure to specify a different log level.
1. Add the level attribute to the @LogMessage annotation of the log message method definition.
2. Assign the log level for this message using the level attribute. The valid values for level are
the six enumerated constants defined in org.jboss.logging.Logger.Level: DEBUG,
ERROR, FATAL, INFO, TRACE, and WARN.
import org.jboss.logging.Logger.Level;
@LogMessage(level=Level.ERROR)
@Message(value = "Customer query failed, Database not available.")
void customerQueryFailDBClosed();
Invoking the logging method in the above sample will produce a log message at the level of ERROR.
Custom logging methods can define parameters. These parameters are used to pass additional
information to be displayed in the log message. Where the parameters appear in the log message is
specified in the message itself using either explicit or ordinary indexing.
1. Add parameters of any type to the method definition. Regardless of type, the String
representation of the parameter is what is displayed in the message.
2. Add parameter references to the log message. References can use explicit or ordinary indexes.
To use ordinary indexes, insert %s characters in the message string where you want each
parameter to appear. The first instance of %s will insert the first parameter, the second
instance will insert the second parameter, and so on.
To use explicit indexes, insert %#$s characters in the message, where # indicates the
number of the parameter that you wish to appear.
Using explicit indexes allows the parameter references in the message to be in a different order than
they are defined in the method. This is important for translated messages that may require different
ordering of parameters.
IMPORTANT
The number of parameters must match the number of references to the parameters in the
specified message or the code will not compile. A parameter marked with the @Cause
annotation is not included in the number of parameters.
@LogMessage(level=Logger.Level.DEBUG)
@Message(id=2, value="Customer query failed, customerid:%s, user:%s")
void customerLookupFailed(Long customerid, String username);
@LogMessage(level=Logger.Level.DEBUG)
@Message(id=2, value="Customer query failed, user:%2$s, customerid:%1$s")
void customerLookupFailed(Long customerid, String username);
JBoss Logging Tools allows one parameter of a custom logging method to be defined as the cause of the
message. This parameter must be of type Throwable or one of its subclasses, and is marked with the
@Cause annotation. This parameter cannot be referenced in the log message like other parameters, and
is displayed after the log message.
The following procedure shows how to update a logging method using the @Cause parameter to indicate
the "causing" exception. It is assumed that you have already created internationalized logging messages
to which you want to add this functionality.
@LogMessage
@Message(id=404, value="Loading configuration failed. Config file: %s")
void loadConfigFailed(Exception ex, File file);

import org.jboss.logging.annotations.Cause;

@LogMessage
@Message(value = "Loading configuration failed. Config file: %s")
void loadConfigFailed(@Cause Exception ex, File file);
3. Invoke the method. When the method is invoked in your code, an object of the correct type must
be passed and will be displayed after the log message.
try
{
confFile=new File(filename);
props.load(new FileInputStream(confFile));
}
catch(Exception ex) //in case properties file cannot be read
{
ConfigLogger.LOGGER.loadConfigFailed(ex, filename);
}
The following is the output of the above code samples if the code threw an exception of type
FileNotFoundException:
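The output listing itself was lost in extraction. Assuming the default console log formatter and a
logger category of ConfigLogger, a representative line might look like the following sketch (the
category, thread name, and file name are all illustrative):

ERROR [com.company.app.ConfigLogger] (MSC service thread 1-3) Loading configuration failed. Config file: customised.properties: java.io.FileNotFoundException: customised.properties (No such file or directory)
    at java.io.FileInputStream.open(Native Method)
    ...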
Message IDs and project codes are unique identifiers that are prepended to each message displayed by
internationalized exceptions. These identifying codes make it possible to create a reference for all the
exception messages in an application. This allows someone to look up the meaning of an exception
message written in a language that they do not understand.
The following procedure demonstrates how to add message IDs and project codes to internationalized
exception messages created using JBoss Logging Tools.
Prerequisites
1. You must already have a project with internationalized exceptions. For details, see Create
Internationalized Exceptions.
2. You need to know the project code you will be using. You can use a single project code, or
define different ones for each interface.
1. Specify the project code using the projectCode attribute of the @MessageBundle annotation
attached to an exception bundle interface. All messages that are defined in the interface will use
that project code.
@MessageBundle(projectCode="ACCTS")
interface ExceptionBundle
{
ExceptionBundle EXCEPTIONS =
Messages.getBundle(ExceptionBundle.class);
}
2. Specify message IDs for each exception using the id attribute of the @Message annotation
attached to the method that defines the exception.
IMPORTANT
A message that has both a project code and message ID displays them prepended to the
message. If a message does not have both a project code and a message ID, neither is
displayed.
@MessageBundle(projectCode="ACCTS")
interface ExceptionBundle
{
ExceptionBundle EXCEPTIONS =
Messages.getBundle(ExceptionBundle.class);
The exception object can be obtained and thrown using the following code:
throw ExceptionBundle.EXCEPTIONS.configFileAccessError();
Exception bundle methods that define exceptions can specify parameters to pass additional information
to be displayed in the exception message. The exact position of the parameters in the exception
message is specified in the message itself using either explicit or ordinary indexing.
1. Add parameters of any type to the method definition. Regardless of type, the String
representation of the parameter is what is displayed in the message.
2. Add parameter references to the exception message. References can use explicit or ordinary
indexes.
To use ordinary indexes, insert %s characters in the message string where you want each
parameter to appear. The first instance of %s will insert the first parameter, the second
instance will insert the second parameter, and so on.
To use explicit indexes, insert %#$s characters in the message, where # indicates the
number of the parameter that you wish to appear.
Using explicit indexes allows the parameter references in the message to be in a different order than
they are defined in the method. This is important for translated messages that may require different
ordering of parameters.
IMPORTANT
The number of parameters must match the number of references to the parameters in the
specified message, or the code will not compile. A parameter marked with the @Cause
annotation is not included in the number of parameters.
Exceptions returned by exception bundle methods can have another exception specified as the
underlying cause. This is done by adding a parameter to the method and annotating the parameter with
@Cause. This parameter is used to pass the causing exception, and cannot be referenced in the
exception message.
The following procedure shows how to update a method from an exception bundle using the @Cause
parameter to indicate the causing exception. It is assumed that you have already created an exception
bundle to which you want to add this functionality.
import org.jboss.logging.annotations.Cause;
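The annotated bundle method itself was lost to a page break; based on the invocation in step 3
below, it plausibly resembles this sketch (the id value, message text, and return type are
assumptions):

@Message(id=328, value = "Error calculating: %s.")
ArithmeticException calculationError(@Cause Throwable cause, String msg);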
3. Invoke the interface method to obtain an exception object. The most common use case is to
throw a new exception from a catch block using the caught exception as the cause.
try
{
...
}
catch(Exception ex)
{
throw ExceptionBundle.EXCEPTIONS.calculationError(
    ex, "calculating payment due per day");
}
The following is an example of specifying an exception as the cause of another exception. This exception
bundle defines a single method that returns an exception of type ArithmeticException.
@MessageBundle(projectCode = "TPS")
interface CalcExceptionBundle
{
CalcExceptionBundle EXCEPTIONS =
Messages.getBundle(CalcExceptionBundle.class);
This code snippet performs an operation that throws an exception, because it attempts to divide an
integer by zero. The exception is caught, and a new exception is created using the first one as the cause.
int totalDue = 5;
int daysToPay = 0;
int amountPerDay;
try
{
amountPerDay = totalDue/daysToPay;
}
catch (Exception ex)
{
throw CalcExceptionBundle.EXCEPTIONS.calcError(ex, "payments per day");
}
4.5.7. References
The following procedure configures a Maven project to use JBoss Logging and JBoss Logging Tools for
internationalization.
1. If you have not yet done so, configure your Maven settings to use the JBoss EAP repository. For
more information, see Configure the JBoss EAP Maven Repository Using the Maven Settings.
2. Include the jboss-eap-javaee7 BOM in the <dependencyManagement> section of the
project’s pom.xml file.
<dependencyManagement>
  <dependencies>
    <!-- JBoss distributes a complete set of Java EE APIs, including
         a Bill of Materials (BOM). A BOM specifies the versions of a
         "stack" (or a collection) of artifacts. We use this here so
         that we always get the correct versions of artifacts. Here we
         use the jboss-eap-javaee7 stack (you can read this as the
         JBoss stack of the Java EE APIs). You can actually use this
         stack with any version of JBoss EAP that implements Java EE. -->
    <dependency>
      <groupId>org.jboss.bom</groupId>
      <artifactId>jboss-eap-javaee7</artifactId>
      <version>${version.jboss.bom.eap}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
b. If you plan to use the JBoss Logging Tools, also add the jboss-logging-processor
dependency.
Both of these dependencies are available in the JBoss EAP BOM that was added in the
previous step, so the scope element of each can be set to provided, as shown in the sketch below.
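The dependency listings themselves were lost in extraction; a minimal sketch using the standard
JBoss Logging coordinates (the versions are managed by the BOM):

<dependency>
  <groupId>org.jboss.logging</groupId>
  <artifactId>jboss-logging</artifactId>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.jboss.logging</groupId>
  <artifactId>jboss-logging-processor</artifactId>
  <scope>provided</scope>
</dependency>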
3. The maven-compiler-plugin must be at least version 3.1 and configured with a source and
target of 1.8.
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
NOTE
For a complete working example of a pom.xml file that is configured to use JBoss
Logging Tools, see the logging-tools quickstart that ships with JBoss EAP.
The property files used for the translation of messages in JBoss Logging Tools are standard Java
property files. The format of the file is the simple line-oriented, key=value pair format described in the
java.util.Properties class documentation.
InterfaceName.i18n_locale_COUNTRY_VARIANT.properties
InterfaceName is the name of the interface that the translations apply to.
locale, COUNTRY, and VARIANT identify the regional settings that the translation applies to.
locale and COUNTRY specify the language and country using the ISO-639 and ISO-3166
Language and Country codes respectively. COUNTRY is optional.
VARIANT is an optional identifier that can be used to identify translations that only apply to a
specific operating system or browser.
The properties contained in the translation file are the names of the methods from the interface being
translated. The assigned value of the property is the translation. If a method is overloaded, then this is
indicated by appending a dot and then the number of parameters to the name. Methods for translation
can only be overloaded by supplying a different number of parameters.
# Level: Logger.Level.INFO
# Message: Hello message sent.
logHelloMessageSent=Bonjour message envoyé.
The following annotations are defined in JBoss Logging for use with internationalization and localization
of log messages, strings, and exceptions.
The following table lists all the project codes used in JBoss EAP 7.0, along with the Maven modules they
belong to.
Maven Module    Project Code
appclient    WFLYAC
batch/extension-jberet WFLYBATCH
batch/extension WFLYBATCH-DEPRECATED
batch/jberet WFLYBAT
bean-validation WFLYBV
controller-client WFLYCC
controller WFLYCTL
clustering/common WFLYCLCOM
clustering/ejb/infinispan WFLYCLEJBINF
clustering/infinispan/extension WFLYCLINF
clustering/jgroups/extension WFLYCLJG
clustering/server WFLYCLSV
clustering/web/infinispan WFLYCLWEBINF
connector WFLYJCA
deployment-repository WFLYDR
deployment-scanner WFLYDS
domain-http WFLYDMHTTP
domain-management WFLYDM
ee WFLYEE
ejb3 WFLYEJB
embedded WFLYEMB
host-controller WFLYDC
host-controller WFLYHC
iiop-openjdk WFLYIIOP
io/subsystem WFLYIO
jaxrs WFLYRS
jdr WFLYJDR
jmx WFLYJMX
jpa/hibernate5 JIPI
jpa/spi/src/main/java/org/jipijapa/JipiLogger.java JIPI
jpa/subsystem WFLYJPA
jsf/subsystem WFLYJSF
jsr77 WFLYEEMGMT
launcher WFLYLNCHR
legacy WFLYORB
legacy WFLYMSG
legacy WFLYWEB
logging WFLYLOG
mail WFLYMAIL
management-client-content WFLYCNT
messaging-activemq WFLYMSGAMQ
mod_cluster/extension WFLYMODCLS
naming WFLYNAM
network WFLYNET
patching WFLYPAT
picketlink WFLYPL
platform-mbean WFLYPMB
pojo WFLYPOJO
process-controller WFLYPC
protocol WFLYPRT
remoting WFLYRMT
request-controller WFLYREQCON
rts WFLYRTS
sar WFLYSAR
security-manager WFLYSM
security WFLYSEC
server WFLYSRV
system-jmx WFLYSYSJMX
threads WFLYTHR
transactions WFLYTX
undertow WFLYUT
webservices/server-integration WFLYWS
weld WFLYWELD
xts WFLYXTS
CHAPTER 5. REMOTE JNDI LOOKUP
If an object registered in JNDI needs to be looked up by remote JNDI clients (that is, clients that run
in a separate JVM), then it must be registered under the java:jboss/exported context.
For example, if the JMS queue in a messaging-activemq subsystem must be exposed for remote
JNDI clients, then it must be registered in JNDI under a name like
java:jboss/exported/jms/queue/myTestQueue. A remote JNDI client can then look it up by the name
jms/queue/myTestQueue.
<subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
<server name="default">
...
<jms-queue name="myTestQueue"
entries="java:jboss/exported/jms/queue/myTestQueue"/>
...
</server>
</subsystem>
The following example shows how to look up the myTestQueue queue from JNDI in a remote JNDI client:
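The original listing did not survive extraction; a minimal sketch, assuming the server runs locally
on the default http-remoting port and the JBoss EAP client libraries are on the classpath:

import java.util.Properties;
import javax.jms.Queue;
import javax.naming.Context;
import javax.naming.InitialContext;

public class RemoteJNDILookup {
    public static void main(String[] args) throws Exception {
        Properties properties = new Properties();
        properties.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.jboss.naming.remote.client.InitialContextFactory");
        // Host and port are assumptions for a local, default-configuration server.
        properties.put(Context.PROVIDER_URL, "http-remoting://127.0.0.1:8080");

        Context context = new InitialContext(properties);
        // Remote clients omit the java:jboss/exported prefix.
        Queue queue = (Queue) context.lookup("jms/queue/myTestQueue");
        System.out.println("Looked up queue: " + queue);
    }
}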
Session replication is the mechanism by which mod_cluster, mod_jk, mod_proxy, ISAPI, and NSAPI
clusters provide high availability.
1. Indicate that your application is distributable. If your application is not marked as distributable, its
sessions will never be distributed. Add the <distributable/> element inside the <web-app>
tag of your application’s web.xml descriptor file:
<?xml version="1.0"?>
<web-app xmlns="http://java.sun.com/xml/ns/j2ee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee
http://java.sun.com/xml/ns/j2ee/web-
app_3_0.xsd"
version="3.0">
<distributable/>
</web-app>
2. Next, if desired, modify the default replication behavior. If you want to change any of the values
affecting session replication, you can override them inside a <replication-config> element
inside <jboss-web> in an application’s WEB-INF/jboss-web.xml file. For a given element,
only include it if you want to override the defaults.
<jboss-web xmlns="http://www.jboss.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee
http://www.jboss.org/j2ee/schema/jboss-web_10_0.xsd">
<replication-config>
<replication-granularity>SESSION</replication-granularity>
</replication-config>
</jboss-web>
SESSION: The default value. The entire session object is replicated if any attribute is dirty. This
policy is required if an object reference is shared by multiple session attributes. The shared
object references are maintained on remote nodes since the entire session is serialized in one
unit.
ATTRIBUTE: Only dirty attributes in the session are replicated, plus some session data, such as
the last-accessed timestamp.
Replication can be skipped for session attributes that are known to be immutable. Values of the
following types are assumed immutable, as are attributes annotated with one of the @Immutable
annotations listed below:
null
java.io.File, java.nio.file.Path
java.security.Permission
java.time.format.DateTimeFormatter, DecimalStyle
java.time.zone.ZoneOffsetTransition, ZoneOffsetTransitionRule,
ZoneRules
@org.wildfly.clustering.web.annotation.Immutable
@net.jcip.annotations.Immutable
Activation is when passivated data is retrieved from persisted storage and put back into memory.
When the container requests the creation of a new session, if the number of currently active
sessions exceeds a configurable limit, the server attempts to passivate some sessions to make
room for the new one.
When a web application is deployed and a backup copy of sessions active on other servers is
acquired by the newly deploying web application’s session manager, sessions may be
passivated.
Sessions are always passivated using a Least Recently Used (LRU) algorithm.
<jboss-web xmlns="http://www.jboss.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee
http://www.jboss.org/j2ee/schema/jboss-web_10_0.xsd">
<max-active-sessions>20</max-active-sessions>
</jboss-web>
The <max-active-sessions> element dictates the maximum number of active sessions allowed, and
is used to enable session passivation. If session creation would cause the number of active sessions to
exceed <max-active-sessions>, then the oldest session known to the session manager will be
passivated to make room for the new session.
NOTE
The total number of sessions in memory includes sessions replicated from other cluster
nodes that are not being accessed on this node. Take this into account when setting
<max-active-sessions>. The number of sessions replicated from other nodes also
depends on whether REPL or DIST cache mode is enabled. In REPL cache mode, each
session is replicated to each node. In DIST cache mode, each session is replicated only
to the number of nodes specified by the owners parameter. See Configure the Cache
Mode in the JBoss EAP Config Guide for information on configuring session cache
modes. For example, consider an eight node cluster, where each node handles requests
from 100 users. With REPL cache mode, each node would store 800 sessions in memory.
With DIST cache mode enabled, and the default owners setting of 2, each node stores
200 sessions in memory.
org.wildfly.clustering.group.Group
The group service provides a mechanism to view the cluster topology for a JGroups channel, and to
be notified when the topology changes.
@Resource(lookup = "java:jboss/clustering/group/channel-name")
private Group channelGroup;
org.wildfly.clustering.dispatcher.CommandDispatcher
The CommandDispatcherFactory service provides a mechanism to create a dispatcher for
executing commands on nodes in the cluster. The resulting CommandDispatcher is a command-
pattern analog to the reflection-based GroupRpcDispatcher from previous JBoss EAP releases.
@Resource(lookup = "java:jboss/clustering/dispatcher/channel-name")
private CommandDispatcherFactory factory;
A clustered singleton service, also known as a high-availability (HA) singleton, is a service deployed on
multiple nodes in a cluster. The service is provided on only one of the nodes. The node running the
singleton service is usually called the master node.
When the master node either fails or shuts down, another master is selected from the remaining nodes
and the service is restarted on the new master. Other than a brief interval when one master has stopped
and another has yet to take over, the service is provided by one, but only one, node.
The SingletonServiceBuilder implementation installs its services so they will start asynchronously,
preventing deadlocking of the Modular Service Container (MSC).
/**
* A flag whether the service is started.
*/
private final AtomicBoolean started = new AtomicBoolean(false);
/**
* @return the name of the server node
*/
public String getValue() throws IllegalStateException,
IllegalArgumentException {
LOGGER.info(String.format("%s is %s at %s",
HATimerService.class.getSimpleName(), (started.get() ? "started" :
"not started"), System.getProperty("jboss.node.name")));
return System.getProperty("jboss.node.name");
}
}
}
}
@Override
public void activate(ServiceActivatorContext context) {
log.info("HATimerService will be installed!");
factory.createSingletonServiceBuilder(HATimerService.SINGLETON_SERVICE_NAME, service)
    .electionPolicy(new PreferredSingletonElectionPolicy(
        new SimpleSingletonElectionPolicy(), new NamePreference("node1/singleton")))
    .build(new DelegatingServiceContainer(context.getServiceTarget(), context.getServiceRegistry()))
    .setInitialMode(ServiceController.Mode.ACTIVE)
    .addDependency(ejbComponentService)
    .install();
}
}
org.jboss.as.quickstarts.cluster.hasingleton.service.ejb.HATimerServiceActivator
4. Create a Scheduler interface that contains the initialize() and stop() methods.
public interface Scheduler {
    void initialize(String info);
    void stop();
}
5. Create a Singleton bean that implements the Scheduler interface. This bean is used as the
cluster-wide singleton timer.
IMPORTANT
The Singleton bean must not have a remote interface and you must not
reference its local interface from another EJB in any application. This prevents a
lookup by a client or other component and ensures the HATimerService has
total control of the Singleton.
@Singleton
public class SchedulerBean implements Scheduler {
private static Logger LOGGER =
Logger.getLogger(SchedulerBean.class.toString());
@Resource
private TimerService timerService;
@Timeout
public void scheduler(Timer timer) {
LOGGER.info("HASingletonTimer: Info=" + timer.getInfo());
}
@Override
public void initialize(String info) {
ScheduleExpression sexpr = new ScheduleExpression();
// set schedule to every 10 seconds for demonstration
sexpr.hour("*").minute("*").second("0/10");
// persistent must be false because the timer is started by the HASingleton service
timerService.createCalendarTimer(sexpr, new TimerConfig(info, false));
}
@Override
public void stop() {
LOGGER.info("Stop all existing HASingleton timers");
for (Timer timer : timerService.getTimers()) {
    LOGGER.fine("Stop HASingleton timer: " + timer.getInfo());
    timer.cancel();
}
}
}
See the cluster-ha-singleton quickstart that ships with JBoss EAP for a complete working
example of this application. The quickstart provides detailed instructions to build and deploy the
application.
When deployed to a group of clustered servers, a singleton deployment will only deploy on a single node
at any given time. If the node on which the deployment is active stops or fails, the deployment will
automatically start on another node.
The policies for controlling HA singleton behavior are managed by a new singleton subsystem. A
deployment can either specify a specific singleton policy or use the default subsystem policy. A
deployment identifies itself as a singleton deployment via a /META-INF/singleton-deployment.xml
deployment descriptor, which is most easily applied to an existing deployment as a deployment overlay.
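A sketch of that descriptor (the policy attribute is optional and names a policy such as the ones
defined later in this section):

<?xml version="1.0" encoding="UTF-8"?>
<singleton-deployment policy="my-new-policy" xmlns="urn:jboss:singleton-deployment:1.0"/>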
Alternatively, the requisite singleton configuration can be embedded within an existing jboss-all.xml.
<jboss xmlns="urn:jboss:1.0">
<singleton-deployment policy="my-new-policy"
xmlns="urn:jboss:singleton-deployment:1.0"/>
</jboss>
batch
/subsystem=singleton/singleton-policy=my-new-policy:add(cache-
container=server)
/subsystem=singleton/singleton-policy=my-new-policy/election-
policy=simple:add(position=-1)
run-batch
NOTE
To set the newly created policy my-new-policy as the default, run this
command:
/subsystem=singleton:write-attribute(name=default,
value=my-new-policy)
<subsystem xmlns="urn:jboss:domain:singleton:1.0">
<singleton-policies default="my-new-policy">
<singleton-policy name="my-new-policy" cache-
container="server">
<simple-election-policy position="-1"/>
</singleton-policy>
</singleton-policies>
</subsystem>
batch
/subsystem=singleton/singleton-policy=my-other-new-policy:add(cache-
container=server)
/subsystem=singleton/singleton-policy=my-other-new-policy/election-
policy=random:add()
run-batch
<subsystem xmlns="urn:jboss:domain:singleton:1.0">
<singleton-policies default="my-other-new-policy">
<singleton-policy name="my-other-new-policy" cache-
container="server">
<random-election-policy/>
</singleton-policy>
</singleton-policies>
</subsystem>
Preferences
Additionally, any singleton election policy may indicate a preference for one or more members of a
cluster. Preferences may be defined either via node name or via outbound socket binding name. Node
preferences always take precedence over the results of an election policy.
Example: Indicate Preference in the Existing Singleton Policy Using the Management CLI
/subsystem=singleton/singleton-policy=foo/election-policy=simple:list-
add(name=name-preferences, value=nodeA)
/subsystem=singleton/singleton-policy=bar/election-policy=random:list-
add(name=socket-binding-preferences, value=binding1)
batch
/subsystem=singleton/singleton-policy=my-new-policy:add(cache-
container=server)
/subsystem=singleton/singleton-policy=my-new-policy/election-
policy=simple:add(name-preferences=[node1, node2, node3, node4])
run-batch
NOTE
To set the newly created policy my-new-policy as the default, run this command:
/subsystem=singleton:write-attribute(name=default, value=my-
new-policy)
<subsystem xmlns="urn:jboss:domain:singleton:1.0">
<singleton-policies default="my-other-new-policy">
<singleton-policy name="my-other-new-policy" cache-
container="server">
<random-election-policy>
<socket-binding-preferences>binding1 binding2 binding3
binding4</socket-binding-preferences>
</random-election-policy>
</singleton-policy>
</singleton-policies>
</subsystem>
Quorum
Network partitions are particularly problematic for singleton deployments, since they can trigger multiple
singleton providers for the same deployment to run at the same time. To defend against this scenario, a
singleton policy may define a quorum that requires a minimum number of nodes to be present before a
singleton provider election can take place. A typical deployment scenario uses a quorum of N/2 + 1,
where N is the anticipated cluster size. This value can be updated at runtime, and will immediately affect
any singleton deployments using the respective singleton policy.
<subsystem xmlns="urn:jboss:domain:singleton:1.0">
<singleton-policies default="default">
<singleton-policy name="default" cache-container="server"
quorum="4">
<simple-election-policy/>
</singleton-policy>
</singleton-policies>
</subsystem>
/subsystem=singleton/singleton-policy=foo:write-attribute(name=quorum,
value=3)
The mod_cluster-manager web page allows administration tasks, such as enabling or disabling
contexts, and configuring the load-balancing properties of worker nodes in a cluster.
[2] ajp://192.168.122.204:8099: The protocol used (either AJP, HTTP, or HTTPS), hostname or
IP address of the worker node, and the port.
[4] Virtual Host 1: The virtual host(s) configured on the worker node.
[5] Disable: An administration option that can be used to disable the creation of new sessions on
the particular context. However, the ongoing sessions do not get disabled and remain intact.
[6] Stop: An administration option that can be used to stop the routing of session requests to the
context. The remaining sessions will failover to another node unless the sticky-session-
force property is set to true.
[7] Enable Contexts Disable Contexts Stop Contexts: The operations that can be performed
on the whole node. Selecting one of these options affects all the contexts of a node in all its
virtual hosts.
[8] Load balancing group (LBGroup): The load-balancing-group property is set in the
modcluster subsystem in JBoss EAP configuration to group all worker nodes into custom load
balancing groups. Load balancing group (LBGroup) is an informational field that gives
information about all set load balancing groups. If this field is not set, then all worker nodes are
grouped into a single default load balancing group.
NOTE
This is only an informational field and thus cannot be used to set the load-balancing-group
property. The property has to be set in the modcluster subsystem in the JBoss EAP
configuration.
[9] Load (value): The load factor on the worker node. The load factors are evaluated as follows:
load > 0: A load factor of 1 indicates that the worker node is overloaded; a load factor of 100 denotes a free, unloaded node.
load = 0: The worker node is in standby mode. No session requests will be routed to this node unless the other worker nodes are unavailable.
load = -1: The worker node is in an error state.
load = -2: The worker node is undergoing CPing/CPong and is in a transition state.
NOTE
For JBoss EAP 7.0, it is also possible to use Undertow as a load balancer.
JBoss EAP includes Weld, the reference implementation of JSR-346: Contexts and Dependency
Injection for Java™ EE 1.1.
Benefits of CDI
The benefits of CDI include:
Simplifying and shrinking your code base by replacing big chunks of code with annotations.
Flexibility, allowing you to disable and enable injections and events, use alternative beans, and
inject non-CDI objects easily.
Optionally, allowing you to include beans.xml in your META-INF/ or WEB-INF/ directory if you
need to customize the configuration to differ from the default. The file can be empty.
Simplifying packaging and deployments and reducing the amount of XML you need to add to
your deployments.
Providing lifecycle management via contexts. You can tie injections to requests, sessions,
conversations, or custom contexts.
Providing type-safe dependency injection, which is safer and easier to debug than string-based
injection.
The goal of Seam 2 was to unify Enterprise Java Beans and JavaServer Faces managed beans.
JavaServer Faces 2.2 implements JSR-344: JavaServer™ Faces 2.2. It is an API for building server-side
user interfaces.
CDI support in JBoss EAP is provided by Weld, the reference implementation of CDI. These tasks
show you how to use CDI in your enterprise applications.
Bean classes that do not have a bean defining annotation and are not bean classes of
session beans are not discovered.
Producer methods that are not on a session bean and whose bean class does not have a bean
defining annotation are not discovered.
Producer fields that are not on a session bean and whose bean class does not have a bean
defining annotation are not discovered.
Disposer methods that are not on a session bean and whose bean class does not have a bean
defining annotation are not discovered.
Observer methods that are not on a session bean and whose bean class does not have a bean
defining annotation are not discovered.
IMPORTANT
All examples in the CDI section are valid only when the discovery mode is set to all.
If one of these annotations is declared on a bean class, then the bean class is said to have a bean
defining annotation.
@Dependent
public class BookShop
extends Business
implements Shop<Book> {
...
}
An exclude filter is inactive if it has any of the following:
A child element named <if-class-available> with a name attribute, and the class loader
for the bean archive cannot load a class for that name, or
A child element named <if-class-not-available> with a name attribute, and the class
loader for the bean archive can load a class for that name, or
A child element named <if-system-property> with a name attribute, and there is no system
property defined for that name, or
A child element named <if-system-property> with a name attribute and a value attribute,
and there is no system property defined for that name with that value.
A type matches an exclude filter if any of the following apply:
The fully qualified name of the type being discovered matches the value of the name attribute of
the exclude filter, or
The package name of the type being discovered matches the value of the name attribute with a
suffix ".*" of the exclude filter, or
The package name of the type being discovered starts with the value of the name attribute with
a suffix ".**" of the exclude filter
<scan>
<exclude name="com.acme.rest.*" /> 1
<exclude name="com.acme.faces.**"> 2
<if-class-not-available
name="javax.faces.context.FacesContext"/>
</exclude>
<exclude name="com.acme.verbose.*"> 3
132
CHAPTER 7. CONTEXTS AND DEPENDENCY INJECTION (CDI)
<exclude name="com.acme.ejb.**"> 4
<if-class-available name="javax.enterprise.inject.Model"/>
<if-system-property name="exclude-ejbs"/>
</exclude>
</scan>
</beans>
1 The first exclude filter will exclude all classes in the com.acme.rest package.
2 The second exclude filter will exclude all classes in the com.acme.faces package, and any
subpackages, but only if JSF is not available.
3 The third exclude filter will exclude all classes in the com.acme.verbose package if the
system property verbosity has the value low.
4 The fourth exclude filter will exclude all classes in the com.acme.ejb package, and any
subpackages, if the system property exclude-ejbs is set with any value and if at the same
time, the javax.enterprise.inject.Model class is also available to the classloader.
The following example adds a translation ability to an existing class, and assumes you already have a
Welcome class, which has a method buildPhrase. The buildPhrase method takes as an argument
the name of a city, and outputs a phrase like "Welcome to Boston!".
Unsatisfied dependencies exist when the container is unable to resolve an injection to any bean at all.
1. It resolves the qualifier annotations on all beans that implement the bean type of an injection
point.
2. It filters out disabled beans. Disabled beans are @Alternative beans which are not explicitly
enabled.
In the event of an ambiguous or unsatisfied dependency, the container aborts deployment and throws an
exception.
7.3.1. Qualifiers
Qualifiers are annotations used to avoid ambiguous dependencies when the container can resolve
multiple beans that fit an injection point. A qualifier declared at an injection point restricts the set
of eligible beans to those that declare the same qualifier.
Qualifiers have to be declared with a retention and target as shown in the example below.
@Qualifier
@Retention(RUNTIME)
@Target({TYPE, METHOD, FIELD, PARAMETER})
public @interface Synchronous {}
@Qualifier
@Retention(RUNTIME)
@Target({TYPE, METHOD, FIELD, PARAMETER})
public @interface Asynchronous {}
@Synchronous
public class SynchronousPaymentProcessor implements PaymentProcessor {
public void process(Payment payment) { ... }
}
@Asynchronous
public class AsynchronousPaymentProcessor implements PaymentProcessor {
public void process(Payment payment) { ... }
}
@Any
Whenever a bean or injection point does not explicitly declare a qualifier, the container assumes the
qualifier @Default. From time to time, you will need to declare an injection point without specifying a
qualifier. There is a qualifier for that too. All beans have the qualifier @Any. Therefore, by explicitly
specifying @Any at an injection point, you suppress the default qualifier, without otherwise restricting the
beans that are eligible for injection.
This is especially useful if you want to iterate over all beans with a certain bean type.
import javax.enterprise.inject.Instance;
...
@Inject @Any Instance<Service> services;  // Service is an assumed bean type
...
for (Service service : services) {
    service.init();
}
Every bean has the qualifier @Any, even if it does not explicitly declare this qualifier.
Every event also has the qualifier @Any, even if it was raised without explicit declaration of this qualifier.
The @Any qualifier allows an injection point to refer to all beans or all events of a certain bean type.
The following example is ambiguous and features two implementations of Welcome, one which
translates and one which does not. The injection needs to be specified to use the translating Welcome.
@Inject
void init(Welcome welcome) {
this.welcome = welcome;
}
...
}
@Qualifier
@Retention(RUNTIME)
@Target({TYPE, METHOD, FIELD, PARAMETER})
public @interface Translating {}
@Translating
public class TranslatingWelcome extends Welcome {
@Inject Translator translator;
public String buildPhrase(String city) {
return translator.translate("Welcome to " + city + "!");
}
...
}
3. Request the translating Welcome in your injection. You must request a qualified implementation
explicitly, similar to the factory method pattern. The ambiguity is resolved at the injection point.
With very few exceptions, almost every concrete Java class that has a constructor with no parameters
(or a constructor designated with the annotation @Inject) is a bean. This includes every JavaBean and
every EJB session bean.
It is not annotated with an EJB component-defining annotation or declared as an EJB bean class
in ejb-jar.xml.
The unrestricted set of bean types for a managed bean contains the bean class, every superclass and all
interfaces it implements directly or indirectly.
If a managed bean has a public field, it must have the default scope @Dependent.
@Vetoed
CDI 1.1 introduces a new annotation, @Vetoed. You can prevent a bean from injection by adding this
annotation:
@Vetoed
public class SimpleGreeting implements Greeting {
...
}
@Vetoed
package org.sample.beans;
import javax.enterprise.inject.Vetoed;
This code in package-info.java in the org.sample.beans package will prevent all beans inside
this package from injection.
Java EE components, such as stateless EJBs or JAX-RS resource endpoints, can be marked with
@Vetoed to prevent them from being considered beans. Adding the @Vetoed annotation to all persistent
entities prevents the BeanManager from managing an entity as a CDI Bean. When an entity is
annotated @Vetoed, no injections take place. The reasoning behind this is to prevent the BeanManager
from performing the operations that may cause the JPA provider to break.
1. To obtain an instance of a class, annotate the field with @Inject within your bean:
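The listing for this step was lost in extraction; a minimal sketch using the TranslateController
and TextTranslator classes referenced in the next steps:

public class TranslateController {
    @Inject
    TextTranslator textTranslator;
    ...
}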
2. Use your injected object’s methods directly. Assume that TextTranslator has a method
translate:
// in TranslateController class; the variable names are illustrative
String translatedText = textTranslator.translate(inputText);
3. Use an injection in the constructor of a bean. You can inject objects into the constructor of a
bean as an alternative to using a factory or service locator to create them:
@Inject
TextTranslator(SentenceParser sentenceParser, Translator
sentenceTranslator) {
this.sentenceParser = sentenceParser;
this.sentenceTranslator = sentenceTranslator;
}
4. Use the Instance<T> interface to get instances programmatically. The Instance interface
can return an instance of TextTranslator when parameterized with the bean type.
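A sketch of such a programmatic lookup (the field and surrounding method names are illustrative):

@Inject
Instance<TextTranslator> textTranslatorInstance;
...
public void translate() {
    textTranslatorInstance.get().translate(inputText);
}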
When you inject an object into a bean, all of the object’s methods and properties are available to your
bean. If you inject into your bean’s constructor, instances of the injected objects are created when your
bean’s constructor is called, unless the injection refers to an instance that already exists. For instance, a
new instance would not be created if you inject a session-scoped bean during the lifetime of the session.
A scope is the link between a bean and a context. A scope/context combination may have a specific
lifecycle. Several predefined scopes exist, and you can create your own. Examples of predefined scopes
are @RequestScoped, @SessionScoped, and @ConversationScoped.
Scope Description
@Dependent The bean is bound to the lifecycle of the bean holding the reference. The
default scope for an injected bean is @Dependent.
@ConversationScoped The bean is bound to the lifecycle of the conversation. The conversation
scope is between the lengths of the request and the session, and is
controlled by the application.
Custom scopes If the above contexts do not meet your needs, you can define custom
scopes.
The @Named annotation takes an optional parameter, which is the bean name. If this parameter is
omitted, the bean name defaults to the class name of the bean with its first letter converted to lower-
case.
@Named("greeter")
public class GreeterBean {
private Welcome welcome;
@Inject
void init (Welcome welcome) {
this.welcome = welcome;
}
In the example above, the default name would be greeterBean if no name had been specified.
<h:form>
    <h:commandButton value="Welcome visitors" action="#{greeter.welcomeVisitors}"/>
</h:form>
The default scope for an injected bean is @Dependent. This means that the bean’s lifecycle is
dependent upon the lifecycle of the bean that holds the reference. Several other scopes exist, and you
can define your own scopes. For more information, see Contexts and Scopes.
@RequestScoped
@Named("greeter")
public class GreeterBean {
private Welcome welcome;
private String city; // getter & setter not shown
@Inject void init(Welcome welcome) {
this.welcome = welcome;
}
public void welcomeVisitors() {
System.out.println(welcome.buildPhrase(city));
}
}
<h:form>
    <h:inputText value="#{greeter.city}"/>
    <h:commandButton value="Welcome visitors" action="#{greeter.welcomeVisitors}"/>
</h:form>
Your bean is saved in the context relating to the scope that you specify, and lasts as long as the scope
applies.
This task shows how to use producer methods to produce a variety of different objects that are not beans
for injection.
The @Preferred annotation in the example is a qualifier annotation. For more information about
qualifiers, see Qualifiers.
@SessionScoped
public class Preferences implements Serializable {
private PaymentStrategyType paymentStrategy;
...
@Produces @Preferred
public PaymentStrategy getPaymentStrategy() {
switch (paymentStrategy) {
case CREDIT_CARD: return new CreditCardPaymentStrategy();
case CHECK: return new CheckPaymentStrategy();
default: return null;
}
}
}
The following injection point has the same type and qualifier annotations as the producer method, so it
resolves to the producer method using the usual CDI injection rules. The producer method is called by
the container to obtain an instance to service this injection point.
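The injection point itself did not survive extraction; it plausibly looks like this one-line sketch:

@Inject @Preferred PaymentStrategy paymentStrategy;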
If you inject a request-scoped bean into a session-scoped producer, the producer method promotes the
current request-scoped instance into session scope. This is almost certainly not the desired behavior, so
use caution when you use a producer method in this way.
NOTE
The scope of the producer method is not inherited from the bean that declares the
producer method.
Producer methods allow you to inject non-bean objects and change your code dynamically.
By default, @Alternative beans are disabled. They are enabled for a specific bean archive by editing
its beans.xml file. However, this activation only applies to the beans in that archive. From CDI 1.1
onwards, the alternative can be enabled for the entire application using the @Priority annotation.
<beans
xmlns="http://xmlns.jcp.org/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://xmlns.jcp.org/xml/ns/javaee
http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd">
<alternatives>
<class>org.mycompany.mock.MockPaymentProcessor</class>
</alternatives>
</beans>
An alternative can be enabled for the entire application either:
by placing the @Priority annotation on the bean class of a managed bean or session bean, or
by placing the @Priority annotation on the bean class that declares the producer method, field,
or resource.
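For example, a minimal sketch of enabling the mock alternative application-wide (the priority
value is arbitrary; @Priority is javax.annotation.Priority):

@Alternative
@Priority(100)
public class MockPaymentProcessor implements PaymentProcessor {
    ...
}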
Override an Injection
This task assumes that you already have a TranslatingWelcome class in your project, but you want
to override it with a "mock" TranslatingWelcome class. This would be the case for a test deployment,
where the true Translator bean cannot be used.
@Alternative
@Translating
public class MockTranslatingWelcome extends Welcome {
    public String buildPhrase(String city) {
        return "Bienvenue à " + city + "!";
    }
}
2. Activate the substitute implementation by adding the fully-qualified class name to your META-
INF/beans.xml or WEB-INF/beans.xml file.
<beans>
<alternatives>
<class>com.acme.MockTranslatingWelcome</class>
</alternatives>
</beans>
7.9. STEREOTYPES
In many systems, use of architectural patterns produces a set of recurring bean roles. A stereotype
allows you to identify such a role and declare some common metadata for beans with that role in a
central place.
A stereotype can specify, among other metadata, a default scope and a set of interceptor
bindings for beans in that role.
A bean may declare zero, one, or multiple stereotypes. A stereotype is an @Stereotype annotation that
packages several other annotations. Stereotype annotations may be applied to a bean class, producer
method, or field.
A class that inherits a scope from a stereotype may override that stereotype and specify a scope directly
on the bean.
In addition, if a stereotype has a @Named annotation, any bean it is placed on has a default bean name.
The bean may override this name if the @Named annotation is specified directly on the bean. For more
information about named beans, see Named Beans.
@Secure
@Transactional
@RequestScoped
@Named
public class AccountManager {
public boolean transfer(Account a, Account b) {
...
}
}
@Secure
@Transactional
@RequestScoped
@Named
@Stereotype
@Retention(RUNTIME)
@Target(TYPE)
public @interface BusinessComponent {
...
}
@BusinessComponent
public class AccountManager {
public boolean transfer(Account a, Account b) {
...
}
}
CDI also provides transactional observer methods, which receive event notifications during the before
completion or after completion phase of the transaction in which the event was fired.
Transactional observers receive the event notifications before or after the completion phase of the
transaction in which the event was raised. Transactional observers are important in a stateful object
model because state is often held for longer than a single atomic transaction.
AFTER_SUCCESS: Observers are invoked after the completion phase of the transaction, but only
if the transaction completes successfully.
AFTER_FAILURE: Observers are invoked after the completion phase of the transaction, but only
if the transaction fails to complete successfully.
AFTER_COMPLETION: Observers are invoked after the completion phase of the transaction.
BEFORE_COMPLETION: Observers are invoked before the completion phase of the transaction.
The following observer method refreshes a query result set cached in the application context, but only
when transactions that update the Category tree are successful:
Assume we have cached a JPA query result set in the application scope:
import javax.ejb.Singleton;
import javax.enterprise.inject.Produces;
@ApplicationScoped @Singleton
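The rest of this listing was lost in extraction; a plausible sketch, assuming the cached result set
is exposed through a producer method (the query and member names are assumptions):

import java.util.List;
import javax.ejb.Singleton;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@ApplicationScoped @Singleton
public class Catalog {
    @PersistenceContext
    EntityManager em;

    List<Product> products;

    // Produces the cached result set; re-queries only when the cache is empty.
    @Produces
    List<Product> getCatalog() {
        if (products == null) {
            products = em.createQuery("select p from Product p", Product.class)
                         .getResultList();
        }
        return products;
    }
}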
Occasionally a Product is created or deleted. When this occurs, we need to refresh the Product
catalog. But we have to wait for the transaction to complete successfully before performing this refresh.
import javax.enterprise.event.Event;
@Stateless
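The truncated listing presumably declared a session bean that fires a qualified event when a
Product is created, along these lines (the bean and qualifier wiring are assumptions grounded in
the observer shown next):

import javax.ejb.Stateless;
import javax.enterprise.event.Event;
import javax.inject.Inject;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class ProductManager {
    @PersistenceContext
    EntityManager em;

    // @Created is the qualifier observed by the Catalog below.
    @Inject @Created
    Event<Product> productCreatedEvent;

    public void create(Product product) {
        em.persist(product);
        // Observers registered with during = AFTER_SUCCESS will only see this
        // event after the enclosing transaction commits successfully.
        productCreatedEvent.fire(product);
    }
}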
The Catalog can now observe the events after successful completion of the transaction:
import javax.ejb.Singleton;
@ApplicationScoped @Singleton
public class Catalog {
...
    void addProduct(@Observes(during = AFTER_SUCCESS) @Created Product product) {
        products.add(product);
    }
}
7.11. INTERCEPTORS
Interceptors allow you to add functionality to the business methods of a bean without modifying the
bean’s method directly. The interceptor is executed before any of the business methods of the bean.
Interceptors are defined as part of the JSR 318: Enterprise JavaBeans™ 3.1 specification.
CDI enhances this functionality by allowing you to use annotations to bind interceptors to beans.
Interception points
Timeout method interception: A timeout method interceptor applies to invocations of the EJB
timeout methods by the container.
Enabling Interceptors
By default, all interceptors are disabled. You can enable the interceptor by using the beans.xml
descriptor of a bean archive. However, this activation only applies to the beans in that archive. From CDI
1.1 onwards the interceptor can be enabled for the whole application using the @Priority annotation.
<beans
xmlns="http://xmlns.jcp.org/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://xmlns.jcp.org/xml/ns/javaee
http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd">
<interceptors>
<class>org.mycompany.myapp.TransactionInterceptor</class>
</interceptors>
</beans>
It enables us to specify an ordering for the interceptors in our system, ensuring deterministic
behavior.
Interceptors enabled using @Priority are called before interceptors enabled using the beans.xml file.
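As a sketch, an interceptor enabled for the whole application with @Priority might look like the
following (the constant comes from javax.interceptor.Interceptor.Priority, and the exact value is
an arbitrary choice; the SecurityInterceptor is the one shown later in this section):

import javax.annotation.Priority;
import javax.interceptor.Interceptor;

@Secure
@Interceptor
@Priority(Interceptor.Priority.APPLICATION)
public class SecurityInterceptor {
    ...
}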
NOTE
Every bean in the application must specify the full set of interceptors in the correct order. This
makes adding or removing interceptors on an application-wide basis time-consuming and error-
prone.
@Interceptors({
SecurityInterceptor.class,
TransactionInterceptor.class,
LoggingInterceptor.class
})
@Stateful public class BusinessComponent {
...
}
@InterceptorBinding
@Retention(RUNTIME)
@Target({TYPE, METHOD})
public @interface Secure {}
@Secure
@Interceptor
public class SecurityInterceptor {
@AroundInvoke
public Object aroundInvoke(InvocationContext ctx) throws Exception
{
// enforce security ...
return ctx.proceed();
}
}
@Secure
public class AccountManager {
public boolean transfer(Account a, Account b) {
...
}
}
<beans>
<interceptors>
<class>com.acme.SecurityInterceptor</class>
<class>com.acme.TransactionInterceptor</class>
</interceptors>
</beans>
7.12. DECORATORS
A decorator intercepts invocations from a specific Java interface, and is aware of all the semantics
attached to that interface. Decorators are useful for modeling some kinds of business concerns, but do
not have the generality of interceptors. A decorator is a bean, or even an abstract class, that implements
the type it decorates, and is annotated with @Decorator. To invoke a decorator in a CDI application, it
must be specified in the beans.xml file.
<beans
xmlns="http://xmlns.jcp.org/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://xmlns.jcp.org/xml/ns/javaee
http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd">
<decorators>
<class>org.mycompany.myapp.LargeTransactionDecorator</class>
</decorators>
</beans>
It enables us to specify an ordering for decorators in our system, ensuring deterministic behavior.
A decorator must have exactly one @Delegate injection point to obtain a reference to the decorated
object.
@Decorator
public abstract class LargeTransactionDecorator implements Account {
    @Inject @Delegate @Any Account account; // the @Delegate injection point
}
From CDI 1.1 onwards, the decorator can be enabled for the whole application using @Priority
annotation.
Decorators enabled using @Priority are called before decorators enabled using beans.xml. The
lower priority values are called first.
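A sketch of enabling a decorator application-wide via @Priority (the priority value is an
arbitrary choice):

@Decorator
@Priority(Interceptor.Priority.APPLICATION)
public abstract class LargeTransactionDecorator implements Account {
    ...
}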
NOTE
Having a decorator enabled by @Priority and at the same time invoked by beans.xml,
leads to a non-portable behavior. This combination of enablement should therefore be
avoided in order to maintain consistent behavior across different CDI implementations.
According to the JSR-346 specification, a portable extension can integrate with the container in the
following ways:
Injecting dependencies into its own objects using the dependency injection service
Augmenting or overriding the annotation-based metadata with metadata from another source
A bean proxy, which can be referred to as client proxy, is responsible for ensuring the bean instance that
receives a method invocation is the instance associated with the current context. The client proxy also
allows beans bound to contexts, such as the session context, to be serialized to disk without recursively
serializing other injected beans.
Due to Java limitations, some Java types cannot be proxied by the container. If an injection point
declared with one of these types resolves to a bean with a scope other than @Dependent, the container
aborts the deployment.
In this example, the PaymentProcessor instance is not injected directly into Shop. Instead, a proxy is
injected, and when the processPayment() method is called, the proxy looks up the current
PaymentProcessor bean instance and calls the processPayment() method on it.
@ConversationScoped
class PaymentProcessor
{
    public void processPayment(int amount)
    {
        System.out.println("I'm taking $" + amount);
    }
}

@ApplicationScoped
public class Shop
{
    @Inject
    PaymentProcessor paymentProcessor;
    ...
}
CHAPTER 8. JBOSS EAP MBEAN SERVICES
You can manage the dependency state using any of the following approaches:
If you want specific methods to be called on your MBean, declare those methods in your MBean
interface. This approach allows your MBean implementation to avoid dependencies on JBoss
specific classes.
If you are not concerned about dependencies on JBoss specific classes, you can have your MBean interface extend the ServiceMBean interface and your implementation class extend the ServiceMBeanSupport class. The ServiceMBeanSupport class provides implementations of the service lifecycle methods, such as create, start, and stop. To handle a specific event, such as the start() event, you override the startService() method provided by the ServiceMBeanSupport class.
The ConfigServiceMBean interface declares specific methods, such as the start, getTimeout, and stop methods, to start, hold, and stop the MBean correctly without using any JBoss specific classes. The ConfigService class implements the ConfigServiceMBean interface and consequently implements the methods used within that interface.
The PlainThread class extends the ServiceMBeanSupport class and implements the
PlainThreadMBean interface. PlainThread starts a thread and uses
ConfigServiceMBean.getTimeout() to determine how long the thread should sleep.
package org.jboss.example.mbean.support;
public interface ConfigServiceMBean {
int getTimeout();
void start();
void stop();
}
package org.jboss.example.mbean.support;
public class ConfigService implements ConfigServiceMBean {
int timeout;
@Override
public int getTimeout() {
return timeout;
}
@Override
public void start() {
        //Create a random number between 3000 and 6000 milliseconds
        timeout = (int) Math.round(((Math.random() * 3) + 3) * 1000);
        System.out.println("Random timeout set to " + timeout + " milliseconds");
    }
    @Override
    public void stop() {
        timeout = 0;
    }
}
package org.jboss.example.mbean.support;
import org.jboss.system.ServiceMBean;
public interface PlainThreadMBean extends ServiceMBean {
void setConfigService(ConfigServiceMBean configServiceMBean);
}
package org.jboss.example.mbean.support;
import org.jboss.system.ServiceMBeanSupport;
public class PlainThread extends ServiceMBeanSupport implements
PlainThreadMBean {
private ConfigServiceMBean configService;
private Thread thread;
private volatile boolean done;
@Override
public void setConfigService(ConfigServiceMBean configService) {
this.configService = configService;
}
@Override
protected void startService() throws Exception {
System.out.println("Starting Plain Thread MBean");
done = false;
thread = new Thread(new Runnable() {
@Override
public void run() {
try {
while (!done) {
System.out.println("Sleeping....");
Thread.sleep(configService.getTimeout());
System.out.println("Slept!");
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
});
thread.start();
}
@Override
protected void stopService() throws Exception {
System.out.println("Stopping Plain Thread MBean");
done = true;
}
}
The jboss-service.xml descriptor shows how the ConfigService class is injected into the PlainThread class using the inject tag. The inject tag establishes a dependency between the two MBeans, so the PlainThread MBean can use the ConfigService MBean once it is started.
<server xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:jboss:service:7.0 jboss-service_7_0.xsd"
xmlns="urn:jboss:service:7.0">
<mbean code="org.jboss.example.mbean.support.ConfigService"
name="jboss.support:name=ConfigBean"/>
<mbean code="org.jboss.example.mbean.support.PlainThread"
name="jboss.support:name=ThreadBean">
<attribute name="configService">
<inject bean="jboss.support:name=ConfigBean"/>
</attribute>
</mbean>
</server>
After writing the MBeans example, you can package the classes and the jboss-service.xml descriptor in the META-INF folder of a service archive (.sar). You can then deploy and undeploy the archive using the management CLI:

deploy ~/Desktop/ServiceMBeanTest.sar

undeploy ServiceMBeanTest.sar
CHAPTER 9. CONCURRENCY UTILITIES
Concurrency Utilities extend the invocation context of the application to the threads they manage: the context present on the application thread is captured and applied to the utilities' own threads. By default, this propagated invocation context includes the class loading, JNDI, and security contexts.
Context Service
<subsystem xmlns="urn:jboss:domain:ee:4.0">
    <spec-descriptor-property-replacement>false</spec-descriptor-property-replacement>
    <concurrent>
        <context-services>
            <context-service name="default" jndi-name="java:jboss/ee/concurrency/context/default" use-transaction-setup-provider="true"/>
        </context-services>
        <managed-thread-factories>
            <managed-thread-factory name="default" jndi-name="java:jboss/ee/concurrency/factory/default" context-service="default"/>
        </managed-thread-factories>
        <managed-executor-services>
            <managed-executor-service name="default" jndi-name="java:jboss/ee/concurrency/executor/default" context-service="default" hung-task-threshold="60000" keepalive-time="5000"/>
        </managed-executor-services>
        <managed-scheduled-executor-services>
            <managed-scheduled-executor-service name="default" jndi-name="java:jboss/ee/concurrency/scheduler/default" context-service="default" hung-task-threshold="60000" keepalive-time="3000"/>
        </managed-scheduled-executor-services>
    </concurrent>
    <default-bindings context-service="java:jboss/ee/concurrency/context/default" datasource="java:jboss/datasources/ExampleDS" managed-executor-service="java:jboss/ee/concurrency/executor/default" managed-scheduled-executor-service="java:jboss/ee/concurrency/scheduler/default" managed-thread-factory="java:jboss/ee/concurrency/factory/default"/>
</subsystem>
jndi-name: Defines where the context service should be placed in the JNDI.
See the example above for the usage of the context service concurrency utility.
/subsystem=ee/context-service=newContextService:add(jndi-name=java:jboss/ee/concurrency/contextservice/newContextService)

/subsystem=ee/context-service=newContextService:write-attribute(name=jndi-name, value=java:jboss/ee/concurrency/contextservice/changedContextService)

/subsystem=ee/context-service=newContextService:remove()
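Applications consume a context service by injecting it or looking it up in JNDI. A minimal sketch, assuming the default JNDI binding shown earlier:

import javax.annotation.Resource;
import javax.enterprise.concurrent.ContextService;

public class ContextualTasks {

    // Injects the default context service configured in the ee subsystem
    @Resource(lookup = "java:jboss/ee/concurrency/context/default")
    private ContextService contextService;

    public Runnable contextualize(Runnable task) {
        // The returned proxy runs the task with the submitting component's
        // class loading, JNDI, and security context applied
        return contextService.createContextualProxy(task, Runnable.class);
    }
}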
jndi-name: Defines where in the JNDI the managed thread factory should be placed.
priority: Optional. Indicates the priority for new threads created by the factory, and defaults
to 5.
/subsystem=ee/managed-thread-factory=newManagedTF:add(context-service=newContextService, jndi-name=java:jboss/ee/concurrency/threadfactory/newManagedTF, priority=2)

/subsystem=ee/managed-thread-factory=newManagedTF:write-attribute(name=jndi-name, value=java:jboss/ee/concurrency/threadfactory/changedManagedTF)
This operation requires reload. Similarly, you can change other attributes as well.
/subsystem=ee/managed-thread-factory=newManagedTF:remove()
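Applications can obtain container-managed threads from a managed thread factory. A minimal sketch, assuming the default JNDI binding shown earlier:

import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedThreadFactory;

public class WorkerStarter {

    // Injects the default managed thread factory from the ee subsystem
    @Resource(lookup = "java:jboss/ee/concurrency/factory/default")
    private ManagedThreadFactory threadFactory;

    public void startWorker(Runnable task) {
        // Threads created by the factory inherit the context captured by
        // the factory's configured context service
        threadFactory.newThread(task).start();
    }
}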
jndi-name: Defines where the managed thread factory should be placed in the JNDI.
max-threads: Defines the maximum number of threads used by the executor, which defaults to
Integer.MAX_VALUE.
thread-factory: References an existing managed thread factory by its name, to handle the
creation of internal threads. If not specified, then a managed thread factory with default
configuration will be created and used internally.
core-threads: Provides the number of threads to keep in the executor’s pool, even if they are
idle. A value of 0 means there is no limit.
keepalive-time: Defines the time, in milliseconds, that an internal thread may be idle. The
attribute default value is 60000.
queue-length: Indicates the number of tasks that can be stored in the input queue. The
default value is 0, which means the queue capacity is unlimited.
hung-task-threshold: Defines the time, in milliseconds, after which tasks are considered
hung by the managed executor service and forcefully aborted. If the value is 0 (which is the
default), tasks are never considered hung.
long-running-tasks: Suggests optimizing the execution of long running tasks, and defaults
to false.
reject-policy: Defines the policy to use when a task is rejected by the executor. The attribute value may be the default ABORT, which means an exception should be thrown, or RETRY_ABORT, which means the executor will try to submit it once more before throwing an exception.
/subsystem=ee/managed-executor-service=newManagedExecutorService:add(jndi-name=java:jboss/ee/concurrency/executor/newManagedExecutorService, core-threads=7, thread-factory=default)

/subsystem=ee/managed-executor-service=newManagedExecutorService:write-attribute(name=core-threads,value=10)
This operation requires reload. Similarly, you can change other attributes too.
/subsystem=ee/managed-executor-service=newManagedExecutorService:remove()
context-service: References an existing context service by its name. If specified then the
referenced context service will capture the invocation context present when submitting a task to
the executor, which will then be used when executing the task.
hung-task-threshold: Defines the time, in milliseconds, after which tasks are considered
hung by the managed scheduled executor service and forcefully aborted. If the value is 0 (which
is the default), tasks are never considered hung.
keepalive-time: Defines the time, in milliseconds, that an internal thread may be idle. The
attribute default value is 60000.
reject-policy: Defines the policy to use when a task is rejected by the executor. The
attribute value may be the default ABORT, which means an exception should be thrown, or
RETRY_ABORT, which means the executor will try to submit it once more, before throwing an
exception.
core-threads: Provides the number of threads to keep in the executor’s pool, even if they are
idle. A value of 0 means there is no limit.
jndi-name: Defines where the managed scheduled executor service should be placed in the JNDI.
long-running-tasks: Suggests optimizing the execution of long running tasks, and defaults
to false.
thread-factory: References an existing managed thread factory by its name, to handle the
creation of internal threads. If not specified, then a managed thread factory with default
configuration will be created and used internally.
/subsystem=ee/managed-scheduled-executor-service=newManagedScheduledExecutorService:add(jndi-name=java:jboss/ee/concurrency/scheduledexecutor/newManagedScheduledExecutorService, core-threads=7, context-service=default)

/subsystem=ee/managed-scheduled-executor-service=newManagedScheduledExecutorService:write-attribute(name=core-threads, value=10)
This operation requires reload. Similarly, you can change other attributes.
/subsystem=ee/managed-scheduled-executor-service=newManagedScheduledExecutorService:remove()
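Applications typically consume these managed instances through resource injection. A minimal sketch using the default managed executor service (the bean and task are illustrative):

import java.util.concurrent.Future;

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.enterprise.concurrent.ManagedExecutorService;

@Stateless
public class ReportBean {

    // Injects the default managed executor service bound in the ee subsystem
    @Resource
    private ManagedExecutorService executor;

    public Future<?> generateReport() {
        // Runs on a container-managed thread; the submitting component's
        // context is propagated through the configured context service
        return executor.submit(() -> System.out.println("Generating report..."));
    }
}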
CHAPTER 10. UNDERTOW
Undertow is a web server designed to be used for both blocking and non-blocking tasks. Its main features include:

High Performance
Embeddable
Servlet 3.1
Web Sockets
Reverse Proxy
Request Lifecycle
When a client connects to the server, Undertow creates a
io.undertow.server.HttpServerConnection. When the client sends a request, it is parsed by
the Undertow parser, and then the resulting io.undertow.server.HttpServerExchange is passed
to the root handler. When the root handler finishes, one of four things can happen:
A handler that needs to run blocking code typically dispatches the exchange to a worker thread first:

public void handleRequest(final HttpServerExchange exchange) throws Exception {
    if (exchange.isInIoThread()) {
        // Re-dispatch this handler to a worker thread before running blocking code
        exchange.dispatch(this);
        return;
    }
    //handler code
}
Because the exchange is not actually dispatched until the call stack returns, you can be sure that no more than one thread is ever active in an exchange at once. The exchange is not thread-safe; however, it can be passed between multiple threads as long as both threads do not attempt to modify it at once.

For more information on configuring Undertow, see Configuring the Web Server in the JBoss EAP Configuration Guide.
allowed-methods(methods='GET')
All handlers may also take an optional predicate to apply that handler only in specific cases. For example:

path('/my-path') -> allowed-methods(methods='GET')

The above example will only apply the allowed-methods handler to the path /my-path.
Some handlers have a default parameter, which allows you to specify the value of that parameter in the
handler definition without using the name.
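For example, methods is the default parameter of the allowed-methods handler (an assumption consistent with the examples above), so the handler can also be written as:

allowed-methods(GET)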
You may also update the WEB-INF/jboss-web.xml file to include the definition of one or more handlers, but using WEB-INF/undertow-handlers.conf is preferred.
<jboss-web>
    <http-handler>
        <class-name>io.undertow.server.handlers.AllowedMethodsHandler</class-name>
        <param>
            <param-name>methods</param-name>
            <param-value>GET</param-value>
        </param>
    </http-handler>
</jboss-web>
<jboss-web>
<http-handler>
<class-name>org.jboss.example.MyHttpHandler</class-name>
</http-handler>
</jboss-web>
package org.jboss.example;

import io.undertow.server.HttpHandler;
import io.undertow.server.HttpServerExchange;

public class MyHttpHandler implements HttpHandler {

    private HttpHandler next;

    public MyHttpHandler(HttpHandler next) {
        this.next = next;
    }

    @Override
    public void handleRequest(HttpServerExchange exchange) throws Exception {
        // do something useful here, then pass the request along the chain
        next.handleRequest(exchange);
    }
}
Parameters could also be set for the custom handler via the WEB-INF/jboss-web.xml file.
<jboss-web>
    <http-handler>
        <class-name>org.jboss.example.MyHttpHandler</class-name>
        <param>
            <param-name>myParam</param-name>
            <param-value>foobar</param-value>
        </param>
    </http-handler>
</jboss-web>
For these parameters to work, the handler class needs to have corresponding setters.
package org.jboss.example;

import io.undertow.server.HttpHandler;
import io.undertow.server.HttpServerExchange;

public class MyHttpHandler implements HttpHandler {

    private HttpHandler next;
    private String myParam;

    public MyHttpHandler(HttpHandler next) {
        this.next = next;
    }

    public void setMyParam(String myParam) {
        this.myParam = myParam;
    }

    @Override
    public void handleRequest(HttpServerExchange exchange) throws Exception {
        // do something useful with myParam, then pass the request along
        next.handleRequest(exchange);
    }
}
Instead of using the WEB-INF/jboss-web.xml for defining the handler, it could also be
defined in the WEB-INF/undertow-handlers.conf file.
myHttpHandler(myParam='foobar')
For the handler to be usable from WEB-INF/undertow-handlers.conf, a HandlerBuilder must also be implemented and registered by listing its fully qualified class name in a META-INF/services/io.undertow.server.handlers.builder.HandlerBuilder file:

org.jboss.example.MyHandlerBuilder
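A minimal builder might look like the following sketch; the handler name myHttpHandler and the default parameter myParam are assumptions consistent with the earlier examples:

package org.jboss.example;

import io.undertow.server.HandlerWrapper;
import io.undertow.server.HttpHandler;
import io.undertow.server.handlers.builder.HandlerBuilder;

import java.util.Collections;
import java.util.Map;
import java.util.Set;

public class MyHandlerBuilder implements HandlerBuilder {

    @Override
    public String name() {
        // The token used in undertow-handlers.conf
        return "myHttpHandler";
    }

    @Override
    public Map<String, Class<?>> parameters() {
        return Collections.<String, Class<?>>singletonMap("myParam", String.class);
    }

    @Override
    public Set<String> requiredParameters() {
        return Collections.emptySet();
    }

    @Override
    public String defaultParameter() {
        return "myParam";
    }

    @Override
    public HandlerWrapper build(final Map<String, Object> config) {
        return new HandlerWrapper() {
            @Override
            public HttpHandler wrap(HttpHandler handler) {
                MyHttpHandler result = new MyHttpHandler(handler);
                result.setMyParam((String) config.get("myParam"));
                return result;
            }
        };
    }
}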
CHAPTER 11. JAVA TRANSACTION API (JTA)
11.1. OVERVIEW
Transaction Lifecycle
The typical standard for a well-designed transaction is that it is Atomic, Consistent, Isolated, and
Durable (ACID).
Atomicity
For a transaction to be atomic, all transaction members must make the same decision. Either they all
commit, or they all roll back. If atomicity is broken, what results is termed a heuristic outcome.
Consistency
Consistency means that data written to the database is guaranteed to be valid data, in terms of the
database schema. The database or other data source must always be in a consistent state. One
example of an inconsistent state would be a field in which half of the data is written before an
operation aborts. A consistent state would be if all the data were written, or the write were rolled back
when it could not be completed.
Isolation
Isolation means that data being operated on by a transaction must be locked before modification, to
prevent processes outside the scope of the transaction from modifying the data.
Durability
Durability means that in the event of an external failure after transaction members have been
instructed to commit, all members will be able to continue committing the transaction when the failure
is resolved. This failure may be related to hardware, software, network, or any other involved system.
The terms Transaction Coordinator and Transaction Manager (TM) are mostly interchangeable in terms
of transactions with JBoss EAP. The term Transaction Coordinator is usually used in the context of
distributed JTS transactions.
In JTA transactions, the TM runs within JBoss EAP and communicates with transaction participants
during the two-phase commit protocol.
The TM tells transaction participants whether to commit or roll back their data, depending on the outcome
of other transaction participants. In this way, it ensures that transactions adhere to the ACID standard.
In JBoss EAP, the JTA implementation is provided by the Transaction Manager (TM), which is covered by the Narayana project. The TM allows applications to enlist various resources, for example, databases or JMS brokers, in a single global transaction. A global transaction is referred to as an XA transaction. Generally, resources with XA capabilities are included in such transactions, but non-XA resources can also be part of a global transaction. There are several optimizations that help non-XA resources behave as XA-capable resources. For more information, see LRCO Optimization for Single-phase Commit.
Java EE applications use the JTA API to manage transactions; the JTA API then interacts with a JTS transaction implementation when the transaction manager is switched to JTS mode. JTS works over the IIOP
protocol. Transaction managers that use JTS communicate with each other using a process called an
Object Request Broker (ORB), using a communication standard called Common Object Request Broker
Architecture (CORBA). For more information, see ORB Configuration in the JBoss EAP Configuration
Guide.
Using JTA API from an application standpoint, a JTS transaction behaves in the same way as a JTA
transaction.
NOTE
The implementation of JTS included in JBoss EAP supports distributed transactions. The
difference from fully-compliant JTS transactions is interoperability with external third-party
ORBs. This feature is unsupported with JBoss EAP. Supported configurations distribute
transactions across multiple JBoss EAP containers only.
The WS-Coordination (WS-C) specification defines a framework that allows different coordination
protocols to be plugged in to coordinate work between clients, services, and participants.
The WS-Transaction (WS-T) protocol comprises the pair of transaction coordination protocols, WS-
Atomic Transaction (WS-AT) and WS-Business Activity (WS-BA), which utilize the coordination
framework provided by WS-C. WS-T is developed to unify existing traditional transaction processing
systems, allowing them to communicate reliably with one another.
An atomic transaction (AT) is designed to support short duration interactions where ACID semantics are
appropriate. Within the scope of an AT, web services typically employ bridging to access XA resources,
such as databases and message queues, under the control of the WS-T. When the transaction
terminates, the participant propagates the outcome decision of the AT to the XA resources, and the
appropriate commit or rollback actions are taken by each participant.
1. To initiate an AT, the client application first locates a WS-C Activation Coordinator web service that supports WS-T.

2. The client then sends a WS-C CreateCoordinationContext message to the activation service, specifying the WS-AT coordination type.

3. The client receives an appropriate WS-T context from the activation service.
4. The response to the CreateCoordinationContext message, the transaction context, has its
CoordinationType element set to the WS-AT namespace,
http://schemas.xmlsoap.org/ws/2004/10/wsat. It also contains a reference to the atomic
transaction coordinator endpoint, the WS-C Registration Service, where participants can be
enlisted.
5. The client normally proceeds to invoke web services and complete the transaction, either
committing all the changes made by the web services, or rolling them back. In order to be able to
drive this completion, the client must register itself as a participant for the completion protocol,
by sending a register message to the registration service whose endpoint was returned in the
coordination context.
6. Once registered for completion, the client application then interacts with web services to
accomplish its business-level work. With each invocation of a business web service, the client
inserts the transaction context into a SOAP header block, such that each invocation is implicitly
scoped by the transaction. The toolkits that support WS-AT aware web services provide facilities
to correlate contexts found in SOAP header blocks with back-end operations. This ensures that
modifications made by the web service are done within the scope of the same transaction as the
client and subject to commit or rollback by the Transaction Coordinator.
7. Once all the necessary application work is complete, the client can terminate the transaction,
with the intent of making any changes to the service state permanent. The completion
participant instructs the coordinator to try to commit or roll back the transaction. When the
commit or rollback operation completes, a status is returned to the participant to indicate the
outcome of the transaction.
Web Services-Business Activity (WS-BA) defines a protocol for web service applications to enable
existing business processing and workflow systems to wrap their proprietary mechanisms and
interoperate across implementations and business boundaries.
Unlike the WS-AT protocol model, where participants inform the transaction coordinator of their state
only when asked, a child activity within a WS-BA can specify its outcome to the coordinator directly,
without waiting for a request. A participant may choose to exit the activity or notify the coordinator of a
failure at any point. This feature is useful when tasks fail because the notification can be used to modify
the goals and drive processing forward, without waiting until the end of the transaction to identify failures.
2. Wherever these services have the ability to undo any work, they inform the WS-BA, in case the WS-BA later decides to cancel the work. If the WS-BA suffers a failure, it can instruct the service to execute its undo behavior.
Transaction Bridging describes the process of linking the Java EE and WS-T domains. The transaction
bridge component txbridge provides bi-directional linkage, such that either type of transaction may
encompass business logic designed for use with the other type. The technique used by the bridge is a
combination of interposition and protocol mapping.
In the transaction bridge, an interposed coordinator is registered into the existing transaction and
performs the additional task of protocol mapping; that is, it appears to its parent coordinator to be a
resource of its native transaction type, whilst appearing to its children to be a coordinator of their native
transaction type, even though these transaction types differ.
The transaction bridge resides in the package org.jboss.jbossts.txbridge and its sub-packages.
It consists of two distinct sets of classes, one for bridging in each direction.
An XA transaction is a transaction which can span multiple resources. It involves a coordinating TM, with
one or more databases or other transactional resources, all involved in a single global XA transaction.
XA Recovery is the process of ensuring that all resources affected by a transaction are updated or rolled
back, even if any of the resources, which are transaction participants, crash or become unavailable.
Within the scope of JBoss EAP, the transactions subsystem provides the mechanisms for XA
Recovery to any XA resources or subsystems which use them, such as XA datasources, JMS message
queues, and JCA resource adapters.
XA Recovery happens without user intervention. In the event of an XA Recovery failure, errors are recorded in the log output. Contact Red Hat Global Support Services if you need assistance. The XA recovery process is driven by a periodic recovery thread, which by default runs every two minutes. The periodic recovery thread processes all unfinished transactions.
NOTE
It can take four to eight minutes to complete the recovery for an in-doubt transaction
because it might require multiple runs of the recovery process.
The transaction log may not be cleared from a successfully committed transaction
If the JBoss EAP server crashes after an XAResource commit method successfully completes and
commits the transaction, but before the coordinator can update the log, you may see the following
warning message in the log when you restart the server:
ARJUNA016037: Could not find new XAResource to use for recovering non-
serializable XAResource XAResourceRecord
This is because upon recovery, the JBoss Transaction Manager (TM) sees the transaction participants in the log and attempts to retry the commit. Eventually the JBoss TM assumes the resources are committed and no longer retries the commit. In this situation, you can safely ignore this warning, as the transaction is committed and there is no loss of data.
NOTE
JBoss EAP 7.0 includes an enhancement to clear transaction logs after a successfully committed transaction, so the above situation should not occur frequently.
Rollback is not called for JTS transaction when a server crashes at the end of
XAResource.prepare()
If the JBoss EAP server crashes after the completion of an XAResource prepare() method call, all
of the participating XAResources are locked in the prepared state and remain that way upon server
restart. The transaction is not rolled back and the resources remain locked until the transaction times
out or a DBA manually rolls back the resources and clears the transaction log. For more information,
see https://issues.jboss.org/browse/JBTM-2124
Periodic recovery can occur on committed transactions.
When the server is under excessive load, the server log may contain the following warning message,
followed by a stacktrace:
Under heavy load, the processing time taken by a transaction can overlap with the timing of the periodic recovery process's activity. The periodic recovery process detects the transaction still in progress and attempts to initiate a rollback, but in fact the transaction continues to completion. When the periodic recovery process attempts the rollback and fails, it records the rollback failure in the server log. The underlying cause of this issue will be addressed in a future release, but in the meantime a workaround is available.
Increase the interval between the two phases of the recovery process by setting the com.arjuna.ats.jta.orphanSafetyInterval property to a value higher than the default of 10000 milliseconds. A value of 40000 milliseconds is recommended. Note that this does not solve the issue; it decreases the probability that it will occur and that the warning message will be shown in the log. For more information, see https://developer.jboss.org/thread/266729
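For example, the property can be set as a system property through the management CLI (a sketch using the recommended value):

/system-property=com.arjuna.ats.jta.orphanSafetyInterval:add(value=40000)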
Phase 1: Prepare
In the first phase, the transaction participants notify the transaction coordinator whether they are able to
commit the transaction or must roll back.
Phase 2: Commit
In the second phase, the transaction coordinator makes the decision about whether the overall
transaction should commit or roll back. If any one of the participants cannot commit, the transaction must
roll back. Otherwise, the transaction can commit. The coordinator directs the resources about what to do,
and they notify the coordinator when they have done it. At that point, the transaction is finished.
Transaction timeouts can be associated with transactions in order to control their lifecycle. If a timeout
threshold passes before the transaction commits or rolls back, the timeout causes the transaction to be
rolled back automatically.
You can configure default timeout values for the entire transaction subsystem, or you can disable default timeout values and specify timeouts on a per-transaction basis.
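For example, the subsystem-wide default can be changed through the management CLI, and a per-transaction timeout can be set through the JTA API before beginning the transaction (the values are illustrative):

/subsystem=transactions:write-attribute(name=default-timeout,value=300)

// Applies to transactions subsequently begun by this thread
userTransaction.setTransactionTimeout(60);
userTransaction.begin();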
NOTE
In the documentation of other application server vendors, you may find that the term distributed transaction means an XA transaction. In the context of the JBoss EAP documentation, a distributed transaction refers to a transaction distributed among several JBoss EAP application servers. A transaction that consists of different resources (for example, a database resource and a JMS resource) is referred to as an XA transaction in this document. For more information, refer to About Java Transaction Service (JTS) and About XA Datasources and XA Transactions.
An ORB uses a standardized Interface Description Language (IDL) to communicate and interpret messages. Common Object Request Broker Architecture (CORBA) is the standard used by the ORB in JBoss EAP.
The main type of service which uses an ORB is a system of distributed Java Transactions, using the
Java Transaction Service (JTS) specification. Other systems, especially legacy systems, may choose to
use an ORB for communication, rather than other mechanisms such as remote Enterprise JavaBeans or
JAX-WS or JAX-RS web services.
The ORB Portability API provides mechanisms to interact with an ORB. This API provides methods for
obtaining a reference to the ORB, as well as placing an application into a mode where it listens for
incoming connections from an ORB. Some of the methods in the API are not supported by all ORBs. In
those cases, an exception is thrown.
The ORB Portability API consists of the following two classes:

com.arjuna.orbportability.orb

com.arjuna.orbportability.oa
Refer to the JBoss EAP Javadocs bundle from the Red Hat Customer Portal for specific details about the
methods and properties included in the ORB Portability API.
Optimizations serve to enhance the two-phase commit protocol in particular cases. Generally, the TM starts a global transaction, which passes through the two-phase commit. But when these transactions are optimized, in certain cases, the TM does not need to proceed with the full two-phase commit, and the process is therefore faster.
The prepare phase generally locks the resource until the second phase is processed. Single-phase
commit means that the prepare phase is skipped and only the commit is processed on the resource. If
not specified, the single-phase commit optimization is used automatically when the global transaction
contains only one participant.
The non-XA resource is processed at the end of the prepare phase, and an attempt is made to commit it.
If the commit succeeds, the transaction log is written and the remaining resources go through the commit
phase. If the last resource fails to commit, the transaction is rolled back.
Where a single local TX datasource is used in a transaction, the LRCO is automatically applied to it.
Previously, adding non-XA resources to an XA transaction was achieved via the LRCO method.
However, there is a window of failure in LRCO. The procedure for adding non-XA resources to an XA
transaction via the LRCO method is as follows:
1. Prepare XA transaction
2. Commit LRCO
3. Write tx log
4. Commit XA transaction
If the procedure crashes between step 2 and step 3, this could lead to data inconsistency and you cannot commit the XA transaction. The data inconsistency occurs because the LRCO non-XA resource is committed but the information about the preparation of the XA resource was not recorded. The recovery manager will roll back the resource after the server is up. CMR eliminates this restriction and allows a non-XA resource to be reliably enlisted in an XA transaction.
NOTE
CMR is a special case of the LRCO optimization that can be used only for datasources. It is not suitable for all non-XA resources.
Summary
Configuring access to a resource manager using the Commit Markable Resource (CMR) interface ensures that a non-XA datasource can be reliably enlisted in an XA (2PC) transaction. It is an implementation of the LRCO algorithm, which makes the non-XA resource fully recoverable.
CREATE TABLE xids (xid VARCHAR(255) for bit data not null, transactionManagerID varchar(64), actionuid VARCHAR(255) for bit data not null)

CREATE UNIQUE INDEX index_xid ON xids (xid)
By default, the CMR feature is disabled for datasources. To enable it, you must create or modify the datasource configuration and ensure that the connectable attribute is set to true. The following is an example of the datasources section of a server XML configuration file:
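A minimal sketch of such a datasource definition (the JNDI name and H2 driver mirror the CLI commands below; the other attributes are illustrative):

<datasources>
    <datasource jndi-name="java:jboss/datasources/ConnectableDS" pool-name="ConnectableDS" enabled="true" use-java-context="true" connectable="true">
        <connection-url>validConnectionURL</connection-url>
        <driver>h2</driver>
    </datasource>
</datasources>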
NOTE

You can also enable a resource manager as a CMR using the management CLI, as follows:

/subsystem=datasources/data-source=ConnectableDS:add(enabled="true", jndi-name="java:jboss/datasources/ConnectableDS", jta="true", use-java-context="true", connectable="true", connection-url="validConnectionURL", exception-sorter="org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLExceptionSorter", driver-name="h2")

/subsystem=datasources/data-source=ConnectableDS:write-attribute(name=connectable,value=true)
<subsystem xmlns="urn:jboss:domain:transactions:3.0">
    ...
    <commit-markable-resources>
        <commit-markable-resource jndi-name="java:jboss/datasources/ConnectableDS">
            <xid-location name="xids" batch-size="100" immediate-cleanup="false"/>
        </commit-markable-resource>
        ...
    </commit-markable-resources>
</subsystem>
NOTE
You must restart the server after adding the CMR reference under the transactions
subsystem.
NOTE
Use the exception-sorter parameter in the datasource configuration. For details, see
Example Datasource Configurations in the JBoss EAP Configuration Guide.
If a subsequent request for the status of the transaction occurs there will be no information available. In
this case, the requester assumes that the transaction has aborted and rolled back. This presumed-abort
optimization means that no information about participants needs to be made persistent until the
transaction has decided to commit, since any failure prior to this point will be assumed to be an abort of
the transaction.
Commit
If every transaction participant can commit, the transaction coordinator directs them to do so. See
About Transaction Commit for more information.
Roll-back
If any transaction participant cannot commit, or the transaction coordinator cannot direct participants
to commit, the transaction is rolled back. See About Transaction Roll-Back for more information.
Heuristic outcome
If some transaction participants commit and others roll back, it is termed a heuristic outcome. Heuristic outcomes require human intervention. See About Heuristic Outcomes for more information.
After commit, information about the transaction is removed from the transaction coordinator, and the
newly-written state is now the durable state.
A transaction participant rolls back by restoring its state to reflect the state before the transaction began.
After a roll-back, the state is the same as if the transaction had never been started.
Heuristic outcomes typically occur during the second phase of the two-phase commit (2PC) protocol. In rare cases, this outcome may occur in 1PC. They are often caused by failures in the underlying hardware or in the communications subsystems of the underlying servers.

Heuristic outcomes are possible due to timeouts in various subsystems or resources, even with a transaction manager and full crash recovery. In any system that requires some form of distributed agreement, situations may arise in which some parts of the system diverge in terms of the global outcome.
Heuristic rollback
The commit operation was not able to commit the resources but all of the participants were able to be
rolled back and so an atomic outcome was still achieved.
Heuristic commit
An attempted rollback operation failed because all of the participants unilaterally committed. This may
happen if, for example, the coordinator is able to successfully prepare the transaction but then decides to
roll it back because of a failure on its side, such as a failure to update its log. In the interim, the
participants may decide to commit.
Heuristic mixed
Some participants committed and others rolled back.
Heuristic hazard
The disposition of some of the updates is unknown. For those which are known, they have either all
been committed or all rolled back.
When a resource asks to participate in a transaction, a chain of events is set in motion. The Transaction
Manager (TM) is a process that lives within the application server and manages transactions. Transaction
participants are objects which participate in a transaction. Resources are datasources, JMS connection
factories, or other JCA connections.
To begin a transaction, your application obtains an instance of class UserTransaction from JNDI
or, if it is an EJB, from an annotation. The UserTransaction interface includes methods for
beginning, committing, and rolling back top-level transactions. Newly-created transactions are
automatically associated with their invoking thread. Nested transactions are not supported in
JTA, so all transactions are top-level transactions.
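As a minimal sketch of this lifecycle (exception handling is simplified; the work inside the transaction is illustrative):

UserTransaction txn = (UserTransaction) new InitialContext().lookup("java:comp/UserTransaction");
txn.begin();
try {
    // do transactional work against enlisted resources here
    txn.commit();
} catch (Exception e) {
    // undo all work if anything failed
    txn.rollback();
    throw e;
}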
Failure Recovery
If a resource, transaction participant, or the application server crashes or become unavailable, the
Transaction Manager handles recovery when the underlying failure is resolved and the resource is
available again. This process happens automatically. For more information, see XA Recovery.
For more information, see Configuring Transactions in the JBoss EAP Configuration Guide.
Control Transactions

Introduction

This list of procedures outlines the different ways to control transactions in your applications that use JTA APIs.

Begin a Transaction

1. Get an instance of UserTransaction:
a. JNDI:

new InitialContext().lookup("java:comp/UserTransaction")

b. Injection:

@Inject
private UserTransaction userTransaction;

c. Context. In a stateless/stateful bean:

@Resource
private SessionContext ctx;
...
UserTransaction userTransaction = ctx.getUserTransaction();

In a message-driven bean:

@Resource
private MessageDrivenContext ctx;
...
UserTransaction userTransaction = ctx.getUserTransaction();

2. Call UserTransaction.begin() after you connect to your datasource:
try {
System.out.println("\nCreating connection to database: "+url);
stmt = conn.createStatement(); // non-tx statement
try {
System.out.println("Starting top-level transaction.");
userTransaction.begin();
stmtx = conn.createStatement(); // will be a tx-statement
...
}
}
Result
The transaction begins. All uses of your datasource until you commit or roll back the transaction are
transactional.
Nested transactions are available only when used with the JTS specification. Nested transactions are not
a supported feature of JBoss EAP application server. In addition, many database vendors do not support
nested transactions, so consult your database vendor before you add nested transactions to your
application.
Commit a Transaction

Pre-requisites
You must begin a transaction before you can commit it. For information on how to begin a transaction,
refer to Begin a Transaction.
1. When you call the commit() method on the UserTransaction, the TM attempts to commit the transaction.

@Inject
private UserTransaction userTransaction;
...
userTransaction.commit();
2. If you use Container Managed Transactions (CMT), you do not need to manually commit the transaction
If you configure your bean to use Container Managed Transactions, the container will manage
the transaction lifecycle for you based on annotations you configure in the code.
@PersistenceContext
private EntityManager em;

@TransactionAttribute(TransactionAttributeType.REQUIRED)
public void updateTable(String key, String value) {
    // Perform some data manipulation using the entity manager
    ...
}
Result
Your datasource commits and your transaction ends, or an exception is thrown.
Roll Back a Transaction

Pre-requisites
You must begin a transaction before you can roll it back. For information on how to begin a transaction,
refer to Begin a Transaction.
1. When you call the rollback() method on the UserTransaction, the TM attempts to roll back the transaction.

@Inject
private UserTransaction userTransaction;
...
userTransaction.rollback();
2. If you use Container Managed Transactions (CMT), you do not need to manually roll back
the transaction
If you configure your bean to use Container Managed Transactions, the container will manage
the transaction lifecycle for you based on annotations you configure in the code.
NOTE
Rollback for CMT occurs if a RuntimeException is thrown. You can also explicitly call the setRollbackOnly method to trigger the rollback, or use the @ApplicationException(rollback=true) annotation on an application exception to roll back.
Result
Your transaction is rolled back by the TM.
Red Hat JBoss Enterprise Application Platform 7.0 Development Guide
Heuristic transaction outcomes are uncommon and usually have exceptional causes. The word heuristic
means "by hand", and that is the way that these outcomes usually have to be handled. See About
Heuristic Outcomes for more information about heuristic transaction outcomes.
This procedure shows how to handle a heuristic outcome of a transaction using the Java Transaction API
(JTA).
1. Determine the cause: The over-arching cause of a heuristic outcome in a transaction is that a
resource manager promised it could commit or roll-back, and then failed to fulfill the promise.
This could be due to a problem with a third-party component, the integration layer between the
third-party component and JBoss EAP, or JBoss EAP itself.
By far, the most common two causes of heuristic errors are transient failures in the environment
and coding errors in the code dealing with resource managers.
2. Fix transient failures in the environment: Typically, if there is a transient failure in your
environment, you will know about it before you find out about the heuristic error. This could be a
network outage, hardware failure, database failure, power outage, or a host of other things.
If you experienced the heuristic outcome in a test environment, during stress testing, it provides
information about weaknesses in your environment.
3. Contact resource manager vendors: If you have no obvious failure in your environment, or the
heuristic outcome is easily reproducible, it is probably a coding error. Contact third-party vendors
to find out if a solution is available. If you suspect the problem is in the TM of JBoss EAP itself,
contact Red Hat Global Support Services.
4. Try to manually recover transaction through the management CLI. For more information, see the
Recover a Transaction Participant section of the JBoss EAP Configuration Guide.
5. In a test environment, delete the logs and restart JBoss EAP: In a test environment, or if you do
not care about the integrity of the data, deleting the transaction logs and restarting JBoss EAP
gets rid of the heuristic outcome. By default, the transaction logs are located in the
EAP_HOME/standalone/data/tx-object-store/ directory for a standalone server, or the
EAP_HOME/domain/servers/SERVER_NAME/data/tx-object-store/ directory in a
managed domain. In the case of a managed domain, SERVER_NAME refers to the name of the
individual server participating in a server group.
NOTE
The location of the transaction log also depends on the object store in use and the values set for the object-store-relative-to and object-store-path parameters. For file system logs (such as a standard shadow and Apache ActiveMQ Artemis logs), the default directory location is used, but when using a JDBC object store, the transaction logs are stored in a database.
6. Resolve the outcome by hand: The process of resolving the transaction outcome by hand is very
dependent on the exact circumstance of the failure. Typically, you need to take the following
steps, applying them to your situation:
c. Manually force log cleanup and data reconciliation in one or more of the involved
components.
The details of how to perform these steps are out of the scope of this documentation.
Transaction errors are challenging to solve because they are often dependent on timing. Here are some
common errors and ideas for troubleshooting them.
NOTE
These guidelines do not apply to heuristic errors. If you experience heuristic errors, refer
to Handle a Heuristic Outcome in a Transaction and contact Red Hat Global Support
Services for assistance.
The transaction timed out but the business logic thread did not notice
This type of error often manifests itself when Hibernate is unable to obtain a database connection for
lazy loading. If it happens frequently, you can lengthen the timeout value. See the JBoss EAP
Configuration Guide for information on configuring the transaction manager.
If that is not feasible, you may be able to tune your external environment to perform more quickly, or
restructure your code to be more efficient. Contact Red Hat Global Support Services if you still have
trouble with timeouts.
If your code does use TransactionManager or Transaction methods directly, be aware of the
following behavior when committing or rolling back a transaction. If your code uses
TransactionManager methods to control your transactions, committing or rolling back a
transaction disassociates the transaction from the current thread. However, if your code uses
Transaction methods, the transaction may not be associated with the running thread, and you
need to disassociate it from its threads manually, before returning it to the thread pool.
// Get a UserTransaction
UserTransaction txn = (UserTransaction) new InitialContext().lookup("java:comp/UserTransaction");
try {
stmt = conn.createStatement(); // non-tx statement
try {
stmt.executeUpdate("CREATE TABLE test_table (a INTEGER,b INTEGER)");
stmt.executeUpdate("CREATE TABLE test_table2 (a INTEGER,b INTEGER)");
}
catch (Exception e) {
throw new RuntimeException(e);
}
try {
System.out.println("Starting top-level transaction.");
txn.begin();
while (res1.next()) {
System.out.println("Column 1: "+res1.getInt(1));
System.out.println("Column 2: "+res1.getInt(2));
}
System.out.println("\nAdding entries to table 2.");
while (res1.next()) {
System.out.println("Column 1: "+res1.getInt(1));
System.out.println("Column 2: "+res1.getInt(2));
}
txn.rollback();
while (res2.next()) {
System.out.println("Column 1: "+res2.getInt(1));
System.out.println("Column 2: "+res2.getInt(2));
}
stmtx = conn.createStatement();
while (res2.next()) {
System.out.println("Column 1: "+res2.getInt(1));
System.out.println("Column 2: "+res2.getInt(2));
}
txn.commit();
}
catch (Exception ex) {
throw new RuntimeException(ex);
}
}
catch (Exception sysEx) {
sysEx.printStackTrace();
System.exit(0);
}
}
}
UserTransaction - http://docs.oracle.com/javaee/7/api/javax/transaction/UserTransaction.html
If you use Red Hat JBoss Developer Studio to develop your applications, the API documentation is
included in the Help menu.
CHAPTER 12. JAVA PERSISTENCE API (JPA)
NOTE
JPA itself is just a specification, not a product; it cannot perform persistence or anything
else by itself. JPA is just a set of interfaces, and requires an implementation.
JBoss EAP is 100% compliant with the Java Persistence 2.1 specification. Hibernate also provides
additional features to the specification. To get started with JPA and JBoss EAP, see the bean-
validation, greeter, and kitchensink quickstarts that ship with JBoss EAP. For information
about how to download and run the quickstarts, see Using the Quickstart Examples.
Persistence in JPA is available in containers like EJB 3, as well as with the more modern CDI (Contexts and Dependency Injection), and in standalone Java SE applications that execute outside of a particular container. The following programming interfaces and artifacts are available in both environments.
EntityManagerFactory

An entity manager factory provides entity manager instances; all instances are configured to connect to the same database and to use the same default settings, as defined by the particular implementation. You can prepare several entity manager factories to access several data stores. This interface is similar to the SessionFactory in native Hibernate.
EntityManager
The EntityManager API is used to access a database in a particular unit of work. It is used to create
and remove persistent entity instances, to find entities by their primary key identity, and to query over
all entities. This interface is similar to the Session in Hibernate.
Persistence context

A persistence context is a set of entity instances in which, for any persistent entity identity, there is a unique entity instance. Within the persistence context, the entity instances and their lifecycle are managed by a particular entity manager. The scope of this context can either be the transaction or an extended unit of work.
Persistence unit
The set of entity types that can be managed by a given entity manager is defined by a persistence
unit. A persistence unit defines the set of all classes that are related or grouped by the application,
and which must be collocated in their mapping to a single data store.
Container-managed entity manager
An entity manager whose lifecycle is managed by the container.
Application-managed entity manager
An entity manager whose lifecycle is managed by the application.
JTA entity manager
Entity manager involved in a JTA transaction.
Resource-local entity manager
Entity manager using a resource transaction (not a JTA transaction).
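As a minimal sketch of the container-managed, JTA case (the Customer entity and the persistence unit name example are illustrative assumptions):

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class CustomerDao {

    // Container-managed, transaction-scoped entity manager
    @PersistenceContext(unitName = "example")
    private EntityManager em;

    public Customer find(Long id) {
        // Looks the entity up by its primary key identity
        return em.find(Customer.class, id);
    }

    public void save(Customer customer) {
        // Adds the instance to the persistence context; written to the
        // database when the JTA transaction commits
        em.persist(customer);
    }
}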
a. In Red Hat JBoss Developer Studio, click File → New → Project. Find JPA in the list, expand it, and select JPA Project. You are presented with the following dialog.
c. Select a Target runtime. If no target runtime is available, follow these instructions to define
a new server and runtime: Add the JBoss EAP Server Using Define New Server .
f. Click Finish.
g. If prompted, choose whether you wish to associate this type of project with the JPA
perspective window.
b. Right click the project root directory in the Project Explorer panel.
d. Select XML File from the XML folder and click Next.
g. Select Create XML file from an XML schema file and click Next.
i. Click Finish to create the file. The persistence.xml has been created in the META-INF/
folder and is ready to be configured.
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
   version="2.0">
   <persistence-unit name="example" transaction-type="JTA">
      <provider>org.hibernate.ejb.HibernatePersistence</provider>
      <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
      <mapping-file>ormap.xml</mapping-file>
      <jar-file>TestApp.jar</jar-file>
      <class>org.test.Test</class>
      <shared-cache-mode>NONE</shared-cache-mode>
      <validation-mode>CALLBACK</validation-mode>
      <properties>
         <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
         <property name="hibernate.hbm2ddl.auto" value="create-drop"/>
      </properties>
   </persistence-unit>
</persistence>
You can connect to the database using the persistence.xml file. There are two ways of doing this:

Specifying a data source which is configured in the datasources subsystem in JBoss EAP. The jta-data-source element points to the JNDI name of the data source this persistence unit maps to. The java:jboss/datasources/ExampleDS here points to the H2 database embedded in JBoss EAP.
<persistence>
   <persistence-unit name="myapp">
      <provider>org.hibernate.ejb.HibernatePersistence</provider>
      <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
      <properties>
         ... ...
      </properties>
   </persistence-unit>
</persistence>
<property name="javax.persistence.jdbc.driver"
193
Red Hat JBoss Enterprise Application Platform 7.0 Development Guide
value="org.hsqldb.jdbcDriver"/>
<property name="javax.persistence.jdbc.user" value="sa"/>
<property name="javax.persistence.jdbc.password" value=""/>
<property name="javax.persistence.jdbc.url" value="jdbc:hsqldb:."/>
For the complete list of connection properties, see Connection Properties Configurable in the
persistence.xml File.
There are a number of properties that control the behavior of Hibernate at runtime. All are optional and
have reasonable default values. These Hibernate properties are all used in the persistence.xml file.
For the complete list of all configurable Hibernate properties, see Hibernate Properties in the appendix of
this guide.
SSO Clustering
Each cache container defines a "repl" and a "dist" cache. These caches should not be used directly by
user applications.
It is recommended to configure the second-level cache through JPA applications, using the
persistence.xml file.
Alternatively, you can configure the second-level cache through Hibernate native applications,
using the hibernate.cfg.xml file.
1. See Create a Simple JPA Application for details on how to create a Hibernate configuration file
in Red Hat JBoss Developer Studio.
<persistence-unit name="...">
   (...) <!-- other configuration -->
   <shared-cache-mode>$SHARED_CACHE_MODE</shared-cache-mode>
   <properties>
      <property name="hibernate.cache.use_second_level_cache" value="true" />
      <property name="hibernate.cache.use_query_cache" value="true" />
   </properties>
</persistence-unit>
2. Add the following XML to the hibernate.cfg.xml file. The XML needs to be within the <session-factory> tag:

<property name="hibernate.cache.use_second_level_cache">true</property>
<property name="hibernate.cache.use_query_cache">true</property>
<property name="hibernate.cache.region.factory_class">org.jboss.as.jpa.hibernate5.infinispan.InfinispanRegionFactory</property>
Introduction to HQL
The Hibernate Query Language (HQL) is a powerful query language, similar in appearance to SQL.
Compared with SQL, however, HQL is fully object-oriented and understands notions like inheritance,
polymorphism and association.
HQL is a superset of JPQL. An HQL query is not always a valid JPQL query, but a JPQL query is
always a valid HQL query.
Both HQL and JPQL are non-type-safe ways to perform query operations. Criteria queries offer a type-
safe approach to querying.
IMPORTANT
Care should be taken when executing bulk update or delete operations because they may
result in inconsistencies between the database and the entities in the active persistence
context. In general, bulk update and delete operations should only be performed within a
transaction in a new persistence context or before fetching or accessing entities whose
state might be affected by such operations.
select_statement :: =
[select_clause]
from_clause
[where_clause]
[groupby_clause]
[having_clause]
[orderby_clause]
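For example, a minimal select executed through the EntityManager (the Customer entity is an illustrative assumption):

List<Customer> customers = em.createQuery(
        "select c from Customer c where c.name like :name", Customer.class)
    .setParameter("name", "A%")
    .getResultList();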
The attribute_list is analogous to the column specification in the SQL INSERT statement.
For entities involved in mapped inheritance, only attributes directly defined on the named entity can be
used in the attribute_list. Superclass properties are not allowed and subclass properties do not
make sense. In other words, INSERT statements are inherently non-polymorphic.
WARNING
The select_statement can be any valid HQL select query, with the caveat that
the return types must match the types expected by the insert. Currently, this is
checked during query compilation rather than allowing the check to relegate to the
database. This can cause problems with Hibernate Types that are equivalent as
opposed to equal. For example, this might cause mismatch issues between an
attribute mapped as an org.hibernate.type.DateType and an attribute
defined as a org.hibernate.type.TimestampType, even though the database
might not make a distinction or might be able to handle the conversion.
For the id attribute, the insert statement gives you two options. You can either explicitly specify the id
property in the attribute_list, in which case its value is taken from the corresponding select
expression, or omit it from the attribute_list in which case a generated value is used. This latter
option is only available when using id generators that operate "in the database"; attempting to use this
option with any "in memory" type generators will cause an exception during parsing.
For optimistic locking attributes, the insert statement again gives you two options. You can either specify
the attribute in the attribute_list in which case its value is taken from the corresponding select
expressions, or omit it from the attribute_list in which case the seed value defined by the
corresponding org.hibernate.type.VersionType is used.
select distinct c
from Customer c
left join c.orders o
with o.value > 5000.00
The important distinction is that in the generated SQL the conditions of the with clause become part of the on clause, whereas in the other queries in this section the HQL/JPQL conditions become part of the where clause. The distinction in this specific example is probably not that significant. The with clause is sometimes necessary in more complicated queries.
Expressions allowed in the order-by clause include:
state fields
component/embeddable attributes
identification variables declared in the select clause for any of the previous expression types
HQL does not mandate that all values referenced in the order-by clause must be named in the select
clause, but it is required by JPQL. Applications desiring database portability should be aware that not all
databases support referencing values in the order-by clause that are not referenced in the select clause.
Individual expressions in the order-by can be qualified with either ASC (ascending) or DESC (descending)
to indicate the desired ordering direction.
WARNING
Using DML may violate the object/relational mapping and may affect object state. Object state stays in memory, and DML operations do not update in-memory objects to reflect the changes made to the underlying database. In-memory data must be used with care if DML is used.
NOTE
The result of execution of an UPDATE or DELETE statement is the number of rows that are actually affected (updated or deleted).
The int value returned by the Query.executeUpdate() method indicates the number of entities
within the database that were affected by the operation.
Internally, the database might use multiple SQL statements to execute the operation in response to a
DML Update or Delete request. This might be because of relationships that exist between tables and the
join tables that may need to be updated or deleted.
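Consider a bulk delete of companies by name, a minimal sketch consistent with the discussion that follows (the Session variable s and the oldName parameter are assumptions):

String hqlDelete = "delete Company c where c.name = :oldName";
// deletedEntries counts all affected rows, including rows in join tables
int deletedEntries = s.createQuery( hqlDelete )
        .setParameter( "oldName", oldName )
        .executeUpdate();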
For example, issuing a delete statement (as in the example above) may actually result in deletes being executed against not just the Company table for companies that are named with oldName, but also against joined tables. Thus, a Company table in a bidirectional ManyToMany relationship with an Employee table would lose rows from the corresponding join table Company_Employee as a result of the successful execution of the previous example.
The int deletedEntries value above will contain a count of all the rows affected due to this
operation, including the rows in the join tables.
The pseudo-syntax for INSERT statements is: INSERT INTO EntityName properties_list
select_statement.
NOTE
Only the INSERT INTO … SELECT … form is supported; not the INSERT INTO …
VALUES … form.
String hqlInsert = "insert into Account (id, name) select c.id, c.name from Customer c where ...";
int createdEntities = s.createQuery( hqlInsert )
        .executeUpdate();
tx.commit();
session.close();
If you do not supply the value for the id attribute via the SELECT statement, an identifier is generated
for you, as long as the underlying database supports auto-generated keys. The return value of this bulk
insert operation is the number of entries actually created in the database.
select c
from Customer c
join c.orders o
join o.lineItems l
join l.product p
where o.status = 'pending'
and p.status = 'backorder'
// alternate syntax
select c
from Customer c,
in(c.orders) o,
in(o.lineItems) l
join l.product p
where o.status = 'pending'
and p.status = 'backorder'
In the example, the identification variable o actually refers to the object model type Order which is the
type of the elements of the Customer#orders association.
The example also shows the alternate syntax for specifying collection association joins using the IN
syntax. Both forms are equivalent. Which form an application chooses to use is simply a matter of taste.
Expression Description
KEY Valid only for Maps. Refers to the map's key. If the key is itself an entity, it can be further navigated.
ENTRY Valid only for Maps. Refers to the Map's logical java.util.Map.Entry tuple (the combination of its key and value). ENTRY is only valid as a terminal path and only valid in the select clause.
// select all the image file paths (the map value) for Product#123
select i
from Product p
join p.images i
where p.id = 123
// same as above
select value(i)
from Product p
join p.images i
where p.id = 123
// select all the image names (the map key) for Product#123
select key(i)
from Product p
join p.images i
where p.id = 123
// select all the image names and file paths (the 'Map.Entry') for Product#123
select entry(i)
from Product p
join p.images i
where p.id = 123
// total the value of the initial line items for all orders for a customer
select sum( li.amount )
from Customer c
join c.orders o
join o.lineItems li
where c.id = 123
and index(li) = 1
Application developers can also supply their own set of functions. This would usually represent either custom SQL functions or aliases for snippets of SQL. Such function declarations are made by using the addSqlFunction method of org.hibernate.cfg.Configuration.
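A minimal sketch of such a declaration (the soundex function, and its availability in the target database, are assumptions):

import org.hibernate.cfg.Configuration;
import org.hibernate.dialect.function.StandardSQLFunction;
import org.hibernate.type.StandardBasicTypes;

Configuration cfg = new Configuration();
// Register the SQL function under the HQL name "soundex" with a string return type.
cfg.addSqlFunction( "soundex",
        new StandardSQLFunction( "soundex", StandardBasicTypes.STRING ) );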
So rather than dealing with the Object[] here we are wrapping the values in a type-safe java object that
will be returned as the results of the query. The class reference must be fully qualified and it must have a
matching constructor.
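A sketch of the dynamic instantiation form (the example.Family class and the entity model are illustrative):

select new example.Family( mother, mate, offspr )
from DomesticCat as mother
    join mother.mate as mate
    left join mother.kittens as offspr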
The class here need not be mapped. If it does represent an entity, the resulting instances are returned in
the NEW state (not managed!).
This is the part JPQL supports as well. HQL supports additional "dynamic instantiation" features. First,
the query can specify to return a List rather than an Object[] for scalar results:
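For example (entity and attribute names are illustrative):

select new list( p.name, p.nickName )
from Person p

Second, HQL supports wrapping the scalar results in a Map:

select new map( p.name as personName, p.nickName as nickName )
from Person p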
The results from this query will be a List<Map<String,Object>> as opposed to a List<Object[]>. The keys
of the map are defined by the aliases given to the select expressions.
HQL Predicates
Null Predicate
Check a value for null. Can be applied to basic attribute references, entity references and
parameters. HQL additionally allows it to be applied to component/embeddable types.
Like Predicate
Performs a like comparison on string values. The syntax is:
like_expression ::=
string_expression
[NOT] LIKE pattern_value
[ESCAPE escape_character]
The semantics follow that of the SQL like expression. The pattern_value is the pattern to
attempt to match in the string_expression. Just like SQL, pattern_value can use _
(underscore) and % (percent) as wildcards. The meanings are the same. The _ matches any
single character. The % matches any number of characters.
The optional escape_character is used to specify an escape character used to escape the
special meaning of _ and % in the pattern_value. This is useful when needing to search on
patterns including either _ or %.
select p
from Person p
where p.name like '%Schmidt'
select p
from Person p
where p.name not like 'Jingleheimmer%'
Between Predicate
Analogous to the SQL BETWEEN expression. Performs an evaluation that a value is within the range of two other values. All the operands should have comparable types.
select p
from Customer c
join c.paymentHistory p
where c.id = 123
and index(p) between 0 and 9
select c
from Customer c
where c.president.dateOfBirth
between {d '1945-01-01'}
and {d '1965-01-01'}
select o
from Order o
where o.total between 500 and 5000
select p
from Person p
where p.name between 'A' and 'E'
IN Predicate
The IN predicate performs a check that a particular value is in a list of values. Its syntax is:
in_expression ::= single_valued_expression [NOT] IN single_valued_list
"state fields", which is its term for simple attributes. Specifically this excludes association
and component/embedded attributes.
The list of values can come from a number of different sources. In the
constructor_expression and collection_valued_input_parameter, the list of
values must not be empty; it must contain at least one value.
In Predicate Examples
select p
from Payment p
where type(p) in (CreditCardPayment, WireTransferPayment)
select c
from Customer c
where c.hqAddress.state in ('TX', 'OK', 'LA', 'NM')
select c
from Customer c
where c.hqAddress.state in ?
select c
from Customer c
where c.hqAddress.state in (
select dm.state
from DeliveryMetadata dm
where dm.salesTax is not null
)
// row value constructor syntax
select c
from Customer c
where (c.firstName, c.lastName) in (
    ('John','Doe'),
    ('Jane','Doe')
)
// numeric comparison
select c
from Customer c
where c.chiefExecutive.age < 30
// string comparison
select c
from Customer c
where c.name = 'Acme'
// datetime comparison
select c
from Customer c
where c.inceptionDate < {d '2000-01-01'}
// enum comparison
select c
from Customer c
where c.chiefExecutive.gender = com.acme.Gender.MALE
// boolean comparison
select c
from Customer c
where c.sendEmail = true
Comparisons can also involve subquery qualifiers: ALL, ANY, SOME. SOME and ANY are synonymous.
The ALL qualifier resolves to true if the comparison is true for all of the values in the result of the
subquery. It resolves to false if the subquery result is empty.
The ANY/SOME qualifier resolves to true if the comparison is true for some of (at least one of) the values
in the result of the subquery. It resolves to false if the subquery result is empty.
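An illustrative HQL query using the ALL qualifier (the entity model is assumed):

// select employees who earn more than every employee they manage
select e
from Employee e
where e.salary > all (
    select se.salary
    from Employee se
    where se.manager = e
)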
@org.hibernate.service.spi.InjectService
Any method on the service implementation class accepting a single parameter and annotated with
@InjectService is considered requesting injection of another service.
By default the type of the method parameter is expected to be the service role to be injected. If the parameter type is different from the service role, the serviceRole attribute of @InjectService should be used to explicitly name the role.
By default injected services are considered required, that is, startup will fail if a named dependent service is missing. If the service to be injected is optional, the required attribute of @InjectService should be declared as false (the default is true).
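A minimal sketch of this push approach (the MyService interface and the injected JdbcServices dependency are illustrative):

import org.hibernate.engine.jdbc.spi.JdbcServices;
import org.hibernate.service.spi.InjectService;

public class MyServiceImpl implements MyService {

    private JdbcServices jdbcServices;

    // A single-parameter method annotated with @InjectService: Hibernate
    // injects the JdbcServices service during service initialization.
    @InjectService
    public void setJdbcServices(JdbcServices jdbcServices) {
        this.jdbcServices = jdbcServices;
    }
}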
org.hibernate.service.spi.ServiceRegistryAwareService
The second approach is a pull approach where the service implements the optional service interface
org.hibernate.service.spi.ServiceRegistryAwareService which declares a single
injectServices method.
During startup, Hibernate will inject the org.hibernate.service.ServiceRegistry itself into
services which implement this interface. The service can then use the ServiceRegistry reference
to locate any additional services it needs.
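A sketch of the pull approach (the MyService interface and the looked-up service are illustrative):

import org.hibernate.engine.jdbc.spi.JdbcServices;
import org.hibernate.service.spi.ServiceRegistryAwareService;
import org.hibernate.service.spi.ServiceRegistryImplementor;

public class MyServiceImpl implements MyService, ServiceRegistryAwareService {

    private ServiceRegistryImplementor serviceRegistry;

    // Hibernate calls this method during startup, handing the service the
    // registry so it can locate any additional services it needs.
    @Override
    public void injectServices(ServiceRegistryImplementor serviceRegistry) {
        this.serviceRegistry = serviceRegistry;
    }

    private JdbcServices jdbcServices() {
        return serviceRegistry.getService( JdbcServices.class );
    }
}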
The central service API, aside from the services themselves, is the org.hibernate.service.ServiceRegistry
interface. The main purpose of a service registry is to hold, manage and provide access to services.
Service registries are hierarchical. Services in one registry can depend on and utilize services in that
same registry as well as any parent registries.
ServiceRegistryBuilder registryBuilder =
new ServiceRegistryBuilder( bootstrapServiceRegistry );
ServiceRegistry serviceRegistry =
registryBuilder.buildServiceRegistry();
Either approach is valid for extending a registry, such as adding new service roles, and overriding
services, such as replacing service implementations.
ServiceRegistryBuilder registryBuilder =
new ServiceRegistryBuilder(bootstrapServiceRegistry);
registryBuilder.addService(JdbcServices.class, new MyCustomJdbcService());
ServiceRegistry serviceRegistry = registryBuilder.buildServiceRegistry();
public class MyCustomJdbcService implements JdbcServices {

    @Override
    public ConnectionProvider getConnectionProvider() {
        return null;
    }

    @Override
    public Dialect getDialect() {
        return null;
    }

    @Override
    public SqlStatementLogger getSqlStatementLogger() {
        return null;
    }

    @Override
    public SqlExceptionHelper getSqlExceptionHelper() {
        return null;
    }

    @Override
    public ExtractedDatabaseMetaData getExtractedMetaDataSupport() {
        return null;
    }

    @Override
    public LobCreator getLobCreator(LobCreationContext lobCreationContext) {
        return null;
    }

    @Override
    public ResultSetWrapper getResultSetWrapper() {
        return null;
    }
}
The bootstrap registry holds services that absolutely have to be available for most things to work. The main service here is the ClassLoaderService, which is a perfect example: even resolving configuration files needs access to class loading services, that is, resource lookups. This is the root registry; in normal use it has no parent.
Using BootstrapServiceRegistryBuilder
BootstrapServiceRegistry bootstrapServiceRegistry =
    new BootstrapServiceRegistryBuilder()
        // pass in org.hibernate.integrator.spi.Integrator instances which are not
        // auto-discovered (for whatever reason) but which should be included
        .with( anExplicitIntegrator )
        // pass in a class loader that Hibernate should use to load application classes
        .with( anExplicitClassLoaderForApplicationClasses )
        // pass in a class loader that Hibernate should use to load resources
        .with( anExplicitClassLoaderForResources )
        // see BootstrapServiceRegistryBuilder for the rest of the available methods
        ...
        // finally, build the bootstrap registry with all the above options
        .build();
org.hibernate.service.classloading.spi.ClassLoaderService
Hibernate needs to interact with class loaders. However, the manner in which Hibernate, or any
library, should interact with class loaders varies based on the runtime environment that is hosting the
application. Application servers, OSGi containers, and other modular class loading systems impose
very specific class loading requirements. This service provides Hibernate an abstraction from this
environmental complexity. And just as importantly, it does so in a single-swappable-component
manner.
In terms of interacting with a class loader, Hibernate needs the following capabilities:
the ability to locate resources, such as properties files and XML files
NOTE
Currently, the ability to load application classes and the ability to load
integration classes are combined into a single load class capability on the
service. That may change in a later release.
org.hibernate.integrator.spi.IntegratorService
Applications, add-ons and other modules need to integrate with Hibernate. The previous approach
required a component, usually an application, to coordinate the registration of each individual module.
This registration was conducted on behalf of each module’s integrator.
This service focuses on the discovery aspect. It leverages the standard Java
java.util.ServiceLoader capability provided by the
org.hibernate.service.classloading.spi.ClassLoaderService in order to discover
implementations of the org.hibernate.integrator.spi.Integrator contract.
The file /META-INF/services/org.hibernate.integrator.spi.Integrator is used by the java.util.ServiceLoader mechanism. It lists, one per line, the fully qualified names of classes which implement the org.hibernate.integrator.spi.Integrator interface.
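For example, a descriptor with a single implementation (the class name is illustrative):

# META-INF/services/org.hibernate.integrator.spi.Integrator
com.example.MyIntegrator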
The difference is a matter of timing in when they need to be initiated: generally they need access to the org.hibernate.SessionFactory to be initiated. This special registry is org.hibernate.service.spi.SessionFactoryServiceRegistry.
org.hibernate.event.service.spi.EventListenerRegistry
Description
Service for managing event listeners.
Initiator
org.hibernate.event.service.internal.EventListenerServiceInitiator
Implementations
org.hibernate.event.service.internal.EventListenerRegistryImpl
12.9.8. Integrators
The org.hibernate.integrator.spi.Integrator is intended to provide a simple means for allowing developers to hook into the process of building a functioning SessionFactory. The org.hibernate.integrator.spi.Integrator interface defines two methods of interest: integrate and disintegrate.
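A minimal sketch of an implementation against the Hibernate 5 SPI (the listener registration is illustrative):

import org.hibernate.boot.Metadata;
import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.event.service.spi.EventListenerRegistry;
import org.hibernate.integrator.spi.Integrator;
import org.hibernate.service.spi.SessionFactoryServiceRegistry;

public class MyIntegrator implements Integrator {

    @Override
    public void integrate(Metadata metadata,
                          SessionFactoryImplementor sessionFactory,
                          SessionFactoryServiceRegistry serviceRegistry) {
        // look up the EventListenerRegistry and register custom listeners
        EventListenerRegistry eventListenerRegistry =
                serviceRegistry.getService( EventListenerRegistry.class );
        // register listeners or a duplication strategy here, for example:
    }

    @Override
    public void disintegrate(SessionFactoryImplementor sessionFactory,
                             SessionFactoryServiceRegistry serviceRegistry) {
        // release any resources acquired in integrate()
    }
}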
eventListenerRegistry.addDuplicationStrategy(myDuplicationStrategy);
12.10. ENVERS
Each time a change is made to the class, an entry is added to the audit table. The entry contains the
changes to the class, and is given a revision number. This means that changes can be rolled back, or
previous revisions can be viewed.
Auditing strategies define how audit information is persisted, queried and stored. There are currently two
audit strategies available with Hibernate Envers:
Default audit strategy
This strategy persists the audit data together with a start revision. For each row that is inserted, updated or deleted in an audited table, one or more rows are inserted in the audit tables, along with the start revision of its validity.
Rows in the audit tables are never updated after insertion. Queries of audit information use subqueries to select the applicable rows in the audit tables, which are slow and difficult to index.
Validity audit strategy
This strategy stores the start revision, as well as the end revision, of the audit information. For each row that is inserted, updated or deleted in an audited table, one or more rows are inserted in the audit tables, along with the start revision of its validity.
At the same time, the end revision field of the previous audit rows (if available) is set to this revision. Queries on the audit information can then use between start and end revision, instead of subqueries. This means that persisting audit information is a little slower because of the extra updates, but retrieving audit information is a lot faster.
For more information on auditing, refer to About Auditing Persistent Classes. To set the auditing strategy for the application, refer to Set the Auditing Strategy.
<property name="org.hibernate.envers.audit_strategy"
value="org.hibernate.envers.strategy.DefaultAuditStrategy"/>
<property name="org.hibernate.envers.audit_strategy"
value="org.hibernate.envers.strategy.ValidityAuditStrategy"/>
1. Configure the available auditing parameters to suit the deployment: Configure Envers Parameters.
4. Apply the @Audited annotation to each field or property to be audited, or apply it once to the
whole class.
import org.hibernate.envers.Audited;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.GeneratedValue;
import javax.persistence.Column;
@Entity
public class Person {
@Id
@GeneratedValue
private int id;
@Audited
private String name;
@ManyToOne
@Audited
private Address address;
}
import org.hibernate.envers.Audited;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.GeneratedValue;
import javax.persistence.Column;
@Entity
@Audited
public class Person {
@Id
@GeneratedValue
private int id;
@ManyToOne
private Address address;
}
Once the JPA entity has been configured for auditing, a table with the _AUD suffix (for example, Person_AUD) will be created to store the historical changes.
12.10.5. Configuration
JBoss EAP uses entity auditing, through Hibernate Envers, to track the historical changes of a persistent
class.
2. Add, remove or configure Envers properties as required. For a list of available properties, refer to Envers Configuration Properties.
<persistence-unit name="mypc">
<description>Persistence Unit.</description>
    <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
    <shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
    <properties>
        <property name="hibernate.hbm2ddl.auto" value="create-drop" />
        <property name="hibernate.show_sql" value="true" />
        <property name="hibernate.cache.use_second_level_cache" value="true" />
        <property name="hibernate.cache.use_query_cache" value="true" />
        <property name="hibernate.generate_statistics" value="true" />
        <property name="org.hibernate.envers.versionsTableSuffix" value="_V" />
        <property name="org.hibernate.envers.revisionFieldName" value="ver_rev" />
    </properties>
</persistence-unit>
Hibernate Envers persists audit data in reaction to various Hibernate events, using a series of event listeners. These listeners are registered automatically if the Envers jar is in the class path. The relevant events are:
onPostInsert
onPostUpdate
onPostDelete
onPreUpdateCollection
onPreRemoveCollection
onPostRecreateCollection
2. Subclass each event listener to be overridden. Place the conditional auditing logic in the
subclass, and call the super method if auditing should be performed.
org.hibernate.envers.default_schema
Default: null (same as normal tables). The default schema name used for audit tables. Can be overridden using the @AuditTable(schema="…") annotation. If not present, the schema will be the same as the schema of the normal tables.

org.hibernate.envers.default_catalog
Default: null (same as normal tables). The default catalog name that should be used for audit tables. Can be overridden using the @AuditTable(catalog="…") annotation. If not present, the catalog will be the same as the catalog of the normal tables.

org.hibernate.envers.audit_strategy_validity_end_rev_field_name
Default: REVEND. The column name that will hold the end revision number in audit entities. This property is only valid if the validity audit strategy is used.
NOTE
Queries on the audited data will be, in many cases, much slower than corresponding
queries on live data, as they involve correlated subselects.
Constraints can then be specified, using the AuditEntity factory class. The query below only selects
entities where the name property is equal to John:
query.add(AuditEntity.property("name").eq("John"));
The queries below only select entities that are related to a given entity:
query.add(AuditEntity.property("address").eq(relatedEntityInstance));
// or
query.add(AuditEntity.relatedId("address").eq(relatedEntityId));
The results can then be ordered, limited, and have aggregations and projections (except grouping) set.
The example below is a full query.
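A sketch of such a query, using the Envers AuditReader query API (the revision number, ordering property, and related id values are illustrative):

List personsAtAddress = getAuditReader().createQuery()
        .forEntitiesAtRevision( Person.class, 12 )
        .addOrder( AuditEntity.property( "surname" ).desc() )
        .add( AuditEntity.relatedId( "address" ).eq( addressId ) )
        .setFirstResult( 4 )
        .setMaxResults( 2 )
        .getResultList();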
Constraints can be added to this query in the same way as the previous example. There are additional
possibilities for this query:
AuditEntity.revisionNumber()
Specify constraints, projections and order on the revision number in which the audited entity was
modified.
AuditEntity.revisionProperty(propertyName)
Specify constraints, projections and order on a property of the revision entity, corresponding to the
revision in which the audited entity was modified.
AuditEntity.revisionType()
Provides access to the type of the revision (ADD, MOD, DEL).
The query results can then be adjusted as necessary. The query below selects the smallest revision
number at which the entity of the MyEntity class, with the entityId ID has changed, after revision
number 42:
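A sketch of that query, following the AuditReader API (the getAuditReader() helper is assumed to return an org.hibernate.envers.AuditReader):

Number revision = (Number) getAuditReader().createQuery()
        .forRevisionsOfEntity( MyEntity.class, false, true )
        .setProjection( AuditEntity.revisionNumber().min() )
        .add( AuditEntity.id().eq( entityId ) )
        .add( AuditEntity.revisionNumber().gt( 42 ) )
        .getSingleResult();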
Queries for revisions can also minimize/maximize a property. The query below selects the revision at
which the value of the actualDate for a given entity was larger than a given value, but as small as
possible:
Number revision = (Number) getAuditReader().createQuery()
        .forRevisionsOfEntity( MyEntity.class, false, true )
        .setProjection( AuditEntity.revisionNumber().min() )
        .add( AuditEntity.property( "actualDate" ).minimize()
                .add( AuditEntity.property( "actualDate" ).ge( givenDate ) )
                .add( AuditEntity.id().eq( givenEntityId ) ) )
        .getSingleResult();
The minimize() and maximize() methods return a criteria, to which constraints can be added, which
must be met by the entities with the maximized/minimized properties.
There are two boolean parameters passed when creating the query: selectEntitiesOnly and selectDeletedEntities.
The hasChanged condition can be combined with additional criteria. The query below will return a
horizontal slice for MyEntity at the time the revisionNumber was generated. It will be limited to the
revisions that modified prop1, but not prop2.
The result set will also contain revisions with numbers lower than the revisionNumber. This means that
this query cannot be read as "Return all MyEntities changed in revisionNumber with prop1 modified
and prop2 untouched."
The query below shows how this result can be returned, using the
forEntitiesModifiedAtRevision query:
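A sketch of that query (revisionNumber is the revision of interest):

AuditQuery query = getAuditReader().createQuery()
        .forEntitiesModifiedAtRevision( MyEntity.class, revisionNumber )
        .add( AuditEntity.property( "prop1" ).hasChanged() )
        .add( AuditEntity.property( "prop2" ).hasNotChanged() );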
There are a number of other queries that are also accessible from
org.hibernate.envers.CrossTypeRevisionChangesReader:
List<Object> findEntities(Number)
Returns snapshots of all audited entities changed (added, updated and removed) in a given revision. Executes n+1 SQL queries, where n is the number of different entity classes modified within the specified revision.

List<Object> findEntities(Number, RevisionType)
Returns snapshots of all audited entities changed (added, updated or removed) in a given revision, filtered by modification type. Executes n+1 SQL queries, where n is the number of different entity classes modified within the specified revision.

Map<RevisionType, List<Object>> findEntitiesGroupByRevisionType(Number)
Returns a map containing lists of entity snapshots grouped by modification operation (for example, addition, update and removal). Executes 3n+1 SQL queries, where n is the number of different entity classes modified within the specified revision.
There are two ways to configure batch fetching: per-class level or per-collection level.
Per-Class Level
When Hibernate loads data on a per-class level, it requires the batch size of the association to
pre-load when queried. For example, consider that at runtime you have 30 instances of a car
object loaded in session. Each car object belongs to an owner object. If you were to iterate
through all the car objects and request their owners, with lazy loading, Hibernate will issue 30
select statements - one for each owner. This is a performance bottleneck.
You can instead tell Hibernate to pre-load the data for the next batch of owners before they have been sought via a query. When an owner object has been queried, Hibernate will query many more of these objects in the same SELECT statement.
The number of owner objects to query in advance depends upon the batch-size parameter specified at configuration time, as in the sketch below:
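A sketch of such a mapping, assuming annotation-based configuration (the Owner entity is taken from the discussion above):

import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.BatchSize;

// batch-size of 10: Hibernate initializes up to 10 uninitialized Owner
// proxies with a single SELECT statement.
@Entity
@BatchSize(size = 10)
public class Owner {

    @Id
    private Long id;
}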
This tells Hibernate to query at least 10 more owner objects in expectation of them being
needed in the near future. When a user queries the owner of car A, the owner of car B may
already have been loaded as part of batch loading. When the user actually needs the owner of
car B, instead of going to the database (and issuing a SELECT statement), the value can be
retrieved from the current session.
In addition to the batch-size parameter, Hibernate 4.2.0 introduced a new configuration item to improve batch loading performance. The configuration item is called Batch Fetch Style and is specified by the hibernate.batch_fetch_style parameter. Three different batch fetch styles are supported: LEGACY, PADDED and DYNAMIC. To specify which style to use, use org.hibernate.cfg.AvailableSettings#BATCH_FETCH_STYLE.
LEGACY: In the legacy style of loading, a set of pre-built batch sizes based on
ArrayHelper.getBatchSizes(int) are utilized. Batches are loaded using the next-
smaller pre-built batch size from the number of existing batchable identifiers.
Continuing with the above example, with a batch-size setting of 30, the pre-built batch
sizes would be [30, 15, 10, 9, 8, 7, .., 1]. An attempt to batch load 29 identifiers would result
in batches of 15, 10, and 4. There will be 3 corresponding SQL queries, each loading 15, 10
and 4 owners from the database.
PADDED - Padded is similar to LEGACY style of batch loading. It still utilizes pre-built batch
sizes, but uses the next-bigger batch size and pads the extra identifier placeholders.
As with the example above, if 30 owner objects are to be initialized, there will only be one
query executed against the database.
However, if 29 owner objects are to be initialized, Hibernate will still execute only 1 SQL
select statement of batch size 30, with the extra space padded with a repeated identifier.
DYNAMIC - While still conforming to batch-size restrictions, this style of batch loading dynamically builds its SQL SELECT statement using the actual number of objects to be loaded.
For example, for 30 owner objects, and a maximum batch size of 30, a call to retrieve 30 owner objects will result in one SQL SELECT statement. A call to retrieve 35 will result in two SQL statements, of batch sizes 30 and 5 respectively. Hibernate will dynamically alter the second SQL statement to keep it at 5, the required number, while still remaining under the restriction of 30 as the batch-size. This is different from the PADDED version, as the second SQL statement will not get padded, and unlike the LEGACY style, there is no fixed size for the second SQL statement - the second SQL is created dynamically.
For a query of less than 30 identifiers, this style will dynamically load only the number of identifiers requested.
Per-Collection Level
Hibernate can also batch load collections honoring the batch fetch size and styles as listed in
the per-class section above.
To reverse the example used in the previous section, consider that you need to load all the car objects owned by each owner object. If 10 owner objects are loaded in the current session, iterating through all owners will generate 10 SELECT statements, one for every call to the getCars() method. If you enable batch fetching for the cars collection in the mapping of Owner, Hibernate can pre-fetch these collections, as shown below.
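A sketch of such a mapping, assuming annotation-based configuration (the Owner entity follows the example above; the Car entity is assumed to be mapped elsewhere):

import java.util.HashSet;
import java.util.Set;

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.OneToMany;

import org.hibernate.annotations.BatchSize;

@Entity
public class Owner {

    @Id
    private Long id;

    // batch-size of 5: Hibernate initializes up to 5 owners' cars
    // collections per SELECT statement.
    @OneToMany
    @BatchSize(size = 5)
    private Set<Car> cars = new HashSet<Car>();

    public Set<Car> getCars() {
        return cars;
    }
}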
Thus, with a batch-size of 5 and using legacy batch style to load 10 collections, Hibernate will
execute two SELECT statements, each retrieving 5 collections.
Hibernate maintains two types of caches. The primary cache (also called the first-level cache) is
mandatory. This cache is associated with the current session and all requests must pass through it. The
secondary cache (also called the second-level cache) is optional, and is only consulted after the primary
cache has been consulted first.
Data is stored in the second-level cache by first disassembling it into a state array. This array is deep
copied, and that deep copy is put into the cache. The reverse is done for reading from the cache. This
works well for data that changes (mutable data), but is inefficient for immutable data.
Deep copying data is an expensive operation in terms of memory usage and processing speed. For large
data sets, memory and processing speed become a performance-limiting factor. Hibernate allows you to
specify that immutable data be referenced rather than copied. Instead of copying entire data sets,
Hibernate can now store the reference to the data in the cache.
CHAPTER 13. HIBERNATE SEARCH
NOTE
The prior release of JBoss EAP included Hibernate 4.2 and Hibernate Search 4.6. JBoss
EAP 7 includes Hibernate 5 and Hibernate Search 5.5.
Hibernate Search 5.5 works with Java 7 and now builds upon Lucene 5.3.x. If you are using any native Lucene APIs, make sure to align with this version.
13.1.2. Overview
Hibernate Search consists of an indexing component as well as an index search component, both of which are backed by Apache Lucene. Each time an entity is inserted, updated or removed from the database, Hibernate Search keeps track of this event through the Hibernate event system and schedules an index update. All these updates are handled without having to interact with the Apache Lucene APIs directly.
Instead, interaction with the underlying Lucene indexes is handled via an IndexManager. By default
there is a one-to-one relationship between IndexManager and Lucene index. The IndexManager
abstracts the specific index configuration, including the selected back end, reader strategy and the
DirectoryProvider.
Once the index is created, you can search for entities and return lists of managed entities instead of
dealing with the underlying Lucene infrastructure. The same persistence context is shared between
Hibernate and Hibernate Search. The FullTextSession class is built on top of the Hibernate
Session class so that the application code can use the unified org.hibernate.Query or
javax.persistence.Query APIs exactly the same way an HQL, JPA-QL, or native query would.
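A sketch of a full-text query through this unified API (the Book entity and the title field are illustrative):

import java.util.List;

import org.hibernate.search.FullTextSession;
import org.hibernate.search.Search;

FullTextSession fullTextSession = Search.getFullTextSession( session );
// Build a Lucene query with the Hibernate Search query DSL.
org.apache.lucene.search.Query luceneQuery = fullTextSession.getSearchFactory()
        .buildQueryBuilder().forEntity( Book.class ).get()
        .keyword().onField( "title" ).matching( "refactoring" )
        .createQuery();
// Wrap it so it can be executed like any other org.hibernate.Query.
org.hibernate.Query fullTextQuery =
        fullTextSession.createFullTextQuery( luceneQuery, Book.class );
List result = fullTextQuery.list();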
Transactional batching mode is recommended for all operations, whether or not they are JDBC-based.
NOTE
It is recommended, for both your database and Hibernate Search, to execute your
operations in a transaction, whether it is JDBC or JTA.
Apache Lucene, which is part of the Hibernate Search infrastructure, has the concept of a Directory for
storage of indexes. Hibernate Search handles the initialization and configuration of a Lucene Directory
instance via a Directory Provider.
The directory_provider property specifies the directory provider to be used to store the indexes.
The default file system directory provider is filesystem, which uses the local file system to store
indexes.
For better efficiency, interactions are batched and generally applied once the context ends. Outside a
transaction, the index update operation is executed right after the actual database operation. In the case
of an ongoing transaction, the index update operation is scheduled for the transaction commit phase and
discarded in case of transaction rollback. A worker may be configured with a specific batch size limit,
after which indexing occurs regardless of the context.
There are two immediate benefits to this method of handling index updates:
Performance: Lucene indexing works better when operations are executed in batch.
ACIDity: The work executed has the same scoping as the one executed by the database
transaction and is executed if and only if the transaction is committed. This is not ACID in the
strict sense, but ACID behavior is rarely useful for full text search indexes since they can be
rebuilt from the source at any time.
The two batch modes, no scope vs transactional, are the equivalent of autocommit versus transactional behavior. From a performance perspective, the transactional mode is recommended. The scoping choice is made transparently: Hibernate Search detects the presence of a transaction and adjusts the scoping.
Hibernate Search uses various back ends to process batches of work. The back end is not limited to the configuration option default.worker.backend. This property specifies an implementation of the BackendQueueProcessor interface, which is part of a back-end configuration. Additional settings are required to set up a back end, for example the JMS back end.
13.1.5.2. Lucene
In the Lucene mode, all index updates for a node are executed by the same node to the Lucene
directories using the directory providers. Use this mode in a non-clustered environment or in clustered
environments with a shared directory store.
Lucene mode targets non-clustered or clustered applications where the directory manages the locking
strategy. The primary advantage of Lucene mode is simplicity and immediate visibility of changes in
Lucene queries. The Near Real Time (NRT) back end is an alternative back end for non-clustered and
non-shared index configurations.
13.1.5.3. JMS
Index updates for a node are sent to a JMS queue. A unique reader processes the queue and updates the master index. The master index is subsequently replicated regularly to slave copies, to establish the master and slave pattern. The master is solely responsible for updating the Lucene index; the slaves accept read and write operations but process read operations on their local index copies.
This mode targets clustered environments where throughput is critical and index update delays are acceptable. The JMS provider ensures reliability and uses the slaves to change the local index copies.
Using the shared strategy, Hibernate Search shares the same IndexReader for a given Lucene index
across multiple queries and threads provided that the IndexReader remains updated. If the
IndexReader is not updated, a new one is opened and provided. Each IndexReader is made of
several SegmentReaders. The shared strategy reopens segments that have been modified or created
after the last opening and shares the already loaded segments from the previous instance. This is the
default strategy.
Using the not-shared strategy, a Lucene IndexReader opens every time a query executes. Opening and starting up an IndexReader is an expensive operation. As a result, opening an IndexReader for each query execution is not an efficient strategy.
13.2. CONFIGURATION
If you are using Hibernate directly, settings such as the DirectoryProvider must be set in the
configuration file, either hibernate.properties or hibernate.cfg.xml. If you are using Hibernate via JPA, the
configuration file is persistence.xml.
directory-based: the default implementation which uses the Lucene Directory abstraction
to manage index files.
near-real-time: avoids flushing writes to disk at each commit. This index manager is also
Directory based, but uses Lucene’s near real-time, NRT, functionality.
To specify an IndexManager other than the default, specify the following property:
hibernate.search.[default|<indexname>].indexmanager = near-real-time
13.2.2.1. Directory-based
The directory-based implementation is the default IndexManager, which uses the Lucene Directory abstraction to manage index files.
13.2.2.2. Near Real Time
The NRTIndexManager is an extension of the default IndexManager and leverages the Lucene NRT, Near Real Time, feature for low latency index writes. However, it ignores configuration settings for alternative back ends other than lucene and acquires exclusive write locks on the Directory.
The IndexWriter does not flush every change to the disk to provide low latency. Queries can read the
updated states from the unflushed index writer buffers. However, this means that if the IndexWriter is
killed or the application crashes, updates can be lost so the indexes must be rebuilt.
The Near Real Time configuration is recommended for non-clustered websites with limited data due to
the mentioned disadvantages and because a master node can be individually configured for improved
performance as well.
13.2.2.3. Custom
Specify a fully qualified class name for the custom implementation to set up a customized IndexManager. Set up a no-argument constructor for the implementation as follows:
[default|<indexname>].indexmanager = my.corp.myapp.CustomIndexManager
The custom index manager implementation does not require the same components as the default implementations. For example, it can delegate to a remote indexing service which does not expose a Directory interface.
Each indexed entity is associated with a Lucene index (except in the case where multiple entities share the same index). The name of the index is given by the index property of the @Indexed annotation. If the index property is not specified, the fully qualified name of the indexed class will be used as the name (recommended).
The DirectoryProvider and any additional options can be configured by using the prefix hibernate.search.<indexname>. The name default (hibernate.search.default) is reserved and can be used to define properties which apply to all indexes. Configuring Directory Providers shows how hibernate.search.default.directory_provider is used to set the default directory provider to be the filesystem one. hibernate.search.default.indexBase then sets the default base directory for the indexes. As a result, the index for the entity Status is created in /usr/lucene/indexes/org.hibernate.example.Status.
The index for the Rule entity, however, is using an in-memory directory, because the default directory
provider for this entity is overridden by the property
hibernate.search.Rules.directory_provider.
Finally the Action entity uses a custom directory provider CustomDirectoryProvider specified via
hibernate.search.Actions.directory_provider.
package org.hibernate.example;
@Indexed
public class Status { ... }
@Indexed(index="Rules")
238
CHAPTER 13. HIBERNATE SEARCH
@Indexed(index="Actions")
public class Action { ... }
hibernate.search.default.directory_provider = filesystem
hibernate.search.default.indexBase = /usr/lucene/indexes
hibernate.search.Rules.directory_provider = ram
hibernate.search.Actions.directory_provider = com.acme.hibernate.CustomDirectoryProvider
NOTE
Using the described configuration scheme you can easily define common rules like the
directory provider and base directory, and override those defaults later on a per index
basis.
ram
Memory based directory. Takes no properties.
filesystem
File system based directory. The directory used will be <indexBase>/<indexName>.
filesystem-master
File system based directory. Like filesystem. It also copies the index to a source directory (aka
copy directory) on a regular basis.
The recommended value for the refresh period is (at least) 50% higher than the time taken to copy the information (default 3600 seconds - 60 minutes).
Note that the copy is based on an incremental copy mechanism reducing the average copy time.
DirectoryProvider typically used on the master node in a JMS back end cluster.
The buffer_size_on_copy optimum depends on your operating system and available RAM; most
people reported good results using values between 16 and 64MB.
source: source directory suffix (defaults to @Indexed.index). The actual source directory name is <sourceBase>/<source>.
refresh: refresh period in seconds (the copy will take place every refresh seconds). If a copy
is still in progress when the following refresh period elapses, the second copy operation will
be skipped.
filesystem-slave
File system based directory. Like filesystem, but retrieves a master version (source) on a regular
basis. To avoid locking and inconsistent search results, 2 local copies are kept.
The recommended value for the refresh period is (at least) 50% higher than the time taken to copy the information (default 3600 seconds - 60 minutes).
Note that the copy is based on an incremental copy mechanism reducing the average copy time. If a
copy is still in progress when refresh period elapses, the second copy operation will be skipped.
The buffer_size_on_copy optimum depends on your operating system and available RAM; most
people reported good results using values between 16 and 64MB.
source: Source directory suffix (defaults to @Indexed.index). The actual source directory name is <sourceBase>/<source>.
refresh: refresh period in seconds (the copy will take place every refresh seconds).
retry_initialize_period: optional; set an integer value in seconds to enable the retry initialize feature. If the slave cannot find the master index, it will keep trying in the background until it is found, without preventing the application from starting: full-text queries performed before the index is initialized are not blocked but will return empty results. When the option is not enabled or is explicitly set to zero, failing to find the master index will cause an exception instead of scheduling a retry timer. To prevent the application from starting with an invalid index but still control an initialization timeout, see retry_marker_lookup instead.
NOTE
If the built-in directory providers do not fit your needs, you can write your own directory provider by implementing the org.hibernate.search.store.DirectoryProvider interface. In this case, pass the fully qualified class name of your provider into the directory_provider property. You can pass any additional properties using the prefix hibernate.search.<indexname>.
Use the worker configuration to refine how Hibernate Search interacts with Lucene. Several architectural components and possible extension points are available for this configuration.
First there is a Worker. An implementation of the Worker interface is responsible for receiving all entity changes, queuing them by context and applying them once a context ends. The most intuitive context, especially in connection with ORM, is the transaction. For this reason Hibernate Search will by default use the TransactionalWorker to scope all changes per transaction. One can, however, imagine a scenario where the context depends for example on the number of entity changes or some other application (lifecycle) events.
Once a context ends it is time to prepare and apply the index changes. This can be done synchronously or asynchronously from within a new thread. Synchronous updates have the advantage that the index is at all times in sync with the database. Asynchronous updates, on the other hand, can help to minimize the user response time. The drawback is potential discrepancies between database and index states.
NOTE
The following options can be set differently on each index: they need the indexName prefix, or use default to set the default value for all indexes.
hibernate.search.<indexName>.worker.thread_pool.size
The back end can apply updates from the same transaction context (or batch) in parallel, using a thread pool. The default value is 1. You can experiment with larger values if you have many operations per transaction.
So far all work is done within the same Virtual Machine (VM), no matter which execution mode. The total
amount of work has not changed for the single VM. Luckily there is a better approach, namely
delegation. It is possible to send the indexing work to a different server by configuring
hibernate.search.default.worker.backend. Again this option can be configured differently for
each index.
hibernate.search.<indexName>.worker.jms.connection_factory
Mandatory for the JMS back end. Defines the JNDI name to look up the JMS connection factory from (/ConnectionFactory by default in Red Hat JBoss Enterprise Application Platform).

hibernate.search.<indexName>.worker.jms.queue
Mandatory for the JMS back end. Defines the JNDI name to look up the JMS queue from. The queue will be used to post work messages.
WARNING
As you may have noticed, some of the shown properties are correlated, which means that not all combinations of property values make sense. In fact, you can end up with a non-functional configuration. This is especially true if you provide your own implementations of some of the shown interfaces. Make sure to study the existing code before you write your own Worker or BackendQueueProcessor implementation.
This section describes in greater detail how to configure the Master/Slave Hibernate Search architecture.
Every index update operation is sent to a JMS queue. Index querying operations are executed on a local
index copy.
## DirectoryProvider
# (remote) master location
hibernate.search.default.sourceBase = /mnt/mastervolume/lucenedirs/mastercopy

## Back-end configuration
hibernate.search.default.worker.backend = jms
hibernate.search.default.worker.jms.connection_factory = /ConnectionFactory
hibernate.search.default.worker.jms.queue = queue/hibernatesearch
# optional JNDI configuration (check your JMS provider for more information)
Every index update operation is taken from the JMS queue and executed; the master index is copied on a regular basis.
## DirectoryProvider
# (remote) master location where information is copied to
hibernate.search.default.sourceBase = /mnt/mastervolume/lucenedirs/mastercopy

## Back-end configuration
# Back end is the default for Lucene
In addition to the Hibernate Search framework configuration, a message driven bean has to be written and set up to process the index work queue through JMS.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName="destinationType", propertyValue="javax.jms.Queue"),
    @ActivationConfigProperty(propertyName="destination", propertyValue="queue/hibernatesearch"),
    @ActivationConfigProperty(propertyName="DLQMaxResent", propertyValue="1")
})
public class MDBSearchController extends AbstractJMSHibernateSearchController
        implements MessageListener {

    @PersistenceContext EntityManager em;
}
This example inherits from the abstract JMS controller class available in the Hibernate Search source code and implements a Java EE MDB. This implementation is given as an example and can be adjusted to make use of non-Java EE message driven beans.
Hibernate Search allows you to tune the Lucene indexing performance by specifying a set of parameters which are passed through to the underlying Lucene IndexWriter, such as mergeFactor, maxMergeDocs, and maxBufferedDocs. Specify these parameters either as default values applying to all indexes, on a per-index basis, or even per shard.
There are several low level IndexWriter settings which can be tuned for different use cases. These
parameters are grouped by the indexwriter keyword:
hibernate.search.[default|<indexname>].indexwriter.<parameter_name>
If no value is set for an indexwriter value in a specific shard configuration, Hibernate Search checks the index section, then the default section.
The configuration in the following table will result in these settings applied on the second shard of the
Animal index:
max_merge_docs = 10
merge_factor = 20
ram_buffer_size = 64MB
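A sketch of the referenced configuration (mirroring the per-shard example shown with the default. prefix later in this chapter; the Animals index name and shard number are taken from that example):

hibernate.search.default.Animals.2.indexwriter.max_merge_docs = 10
hibernate.search.default.Animals.2.indexwriter.merge_factor = 20
hibernate.search.default.indexwriter.ram_buffer_size = 64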
The default for all values is to leave them at Lucene’s own default. The values listed in List of indexing
performance and behavior properties depend for this reason on the version of Lucene you are using. The
values shown are relative to version 2.4.
NOTE
Previous versions of Hibernate Search had the notion of batch and transaction
properties. This is no longer the case as the back end will always perform work using the
same settings.
hibernate.search.[default|<indexname>].exclusive_index_use
Set to true when no other process will need to write to the same index. This enables Hibernate Search to work in exclusive mode on the index and improve performance when writing changes to the index. Default: true (improved performance, releases locks only at shutdown).
See also org.apache.lucene.index.LogDocMergePolicy.maxMergeSize.
Applied to org.apache.lucene.index.LogDocMergePolicy.maxMergeSizeForOptimize.
WARNING
The blackhole back end is not meant to be used in production, only as a tool to
identify indexing bottlenecks.
There are several low level IndexWriter settings which can be tuned for different use cases. These
parameters are grouped by the indexwriter keyword:
default.<indexname>.indexwriter.<parameter_name>
If no value is set for indexwriter in a shard configuration, Hibernate Search looks at the index section
and then at the default section.
The following configuration will result in these settings being applied on the second shard of the Animal
index:
default.Animals.2.indexwriter.max_merge_docs = 10
default.Animals.2.indexwriter.merge_factor = 20
default.Animals.2.indexwriter.term_index_interval = default
default.indexwriter.max_merge_docs = 100
default.indexwriter.ram_buffer_size = 64
max_merge_docs = 10
merge_factor = 20
ram_buffer_size = 64MB
The Lucene default values are the default setting for Hibernate Search. Therefore, the values listed in
the following table depend on the version of Lucene being used. The values shown are relative to version
2.4. For more information about Lucene indexing performance, see the Lucene documentation.
NOTE
The back end will always perform work using the same settings.
default.<indexname>.exclusive_index_use
Set to true when no other process will need to write to the same index. This enables Hibernate Search to work in exclusive mode on the index and improve performance when writing changes to the index. Default: true (improved performance, releases locks only at shutdown).
See also org.apache.lucene.index.LogDocMergePolicy.minMergeSize.
See also org.apache.lucene.index.LogDocMergePolicy.maxMergeSize.
Applied to org.apache.lucene.index.LogDocMergePolicy.maxMergeSizeForOptimize.
Applied to org.apache.lucene.index.LogMergePolicy.calibrateSizeByDeletes.
When the architecture permits it, keep default.exclusive_index_use=true for improved index
writing efficiency.
When tuning indexing speed the recommended approach is to focus first on optimizing the object
loading, and then use the timings you achieve as a baseline to tune the indexing process. Set the
blackhole as worker back end and start your indexing routines. This back end does not disable
Hibernate Search. It generates the required change sets to the index, but discards them instead of
flushing them to the index. In contrast to setting the hibernate.search.indexing_strategy to
manual, using blackhole will possibly load more data from the database because associated entities
are re-indexed as well.
hibernate.search.[default|<indexname>].worker.backend blackhole
WARNING
The blackhole back end is not to be used in production, only as a diagnostic tool
to identify indexing bottlenecks.
The following indexwriter parameters control segment merging:
merge_max_size
merge_max_optimize_size
merge_calibrate_by_deletes
Set the max_size for merge operations to less than half of the hard limit segment size, as merging
segments combines two segments into one larger segment.
A new segment may initially be a larger size than expected, however a segment is never created
significantly larger than the ram_buffer_size. This threshold is checked as an estimate.
Some locking strategies require a filesystem level lock, and may be used on RAM-based indexes. When using this strategy the indexBase configuration option must be specified to point to a filesystem location in which to store the lock marker files.
simple
native
single
none
native - org.apache.lucene.store.NativeFSLockFactory: as does simple, this also marks the usage of the index by creating a marker file, but it uses native OS file locks so that even if the JVM is terminated the locks will be cleaned up.

single - org.apache.lucene.store.SingleInstanceLockFactory: this LockFactory does not use a file marker but is a Java object lock held in memory; therefore it is possible to use it only when you are sure the index is not going to be shared by any other process.
hibernate.search.default.locking_strategy = simple
hibernate.search.Animals.locking_strategy = native
hibernate.search.Books.locking_strategy = org.custom.components.MyLockingFactory
hibernate.search.lucene_version = LUCENE_30
The configured SearchFactory is global and affects all Lucene APIs that contain the relevant
parameter. If Lucene is used and Hibernate Search is bypassed, apply the same value to it for consistent
results.
The configured Lucene version applies, among others, to:
Indexing
Searching
Analyzer
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-search-orm</artifactId>
<version>5.5.1.Final-redhat-1</version>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-search-orm</artifactId>
<scope>provided</scope>
</dependency>
</dependencies>
Example: Entities Book and Author Before Adding Hibernate Search Specific Annotations
package example;
...
@Entity
public class Book {

    @Id
    @GeneratedValue
    private Integer id;

    private String title;

    private String subtitle;

    @ManyToMany
    private Set<Author> authors = new HashSet<Author>();

    private Date publicationDate;

    public Book() {}

    // standard getters/setters follow
    ...
}

package example;
...
@Entity
public class Author {

    @Id
    @GeneratedValue
    private Integer id;

    private String name;

    public Author() {}

    // standard getters/setters follow
    ...
}
To achieve this you have to add a few annotations to the Book and Author classes. The first annotation, @Indexed, marks Book as indexable. By design Hibernate Search stores an untokenized ID in the index to ensure index uniqueness for a given entity. @DocumentId marks the property to use for this purpose and is in most cases the same as the database primary key. The @DocumentId annotation is optional in the case where an @Id annotation exists.
Next the fields you want to make searchable must be marked as such. In this example, start with title
and subtitle and annotate both with @Field. The parameter index=Index.YES will ensure that the
text will be indexed, while analyze=Analyze.YES ensures that the text will be analyzed using the
default Lucene analyzer. Usually, analyzing means chunking a sentence into individual words and
potentially excluding common words like 'a' or ‘the’. We will talk more about analyzers a little later on.
The third parameter we specify within @Field, store=Store.NO, ensures that the actual data will not
be stored in the index. Whether this data is stored in the index or not has nothing to do with the ability to
search for it. From Lucene’s perspective it is not necessary to keep the data once the index is created.
The benefit of storing it is the ability to retrieve it via projections.
Without projections, Hibernate Search will by default execute a Lucene query in order to find the database identifiers of the entities matching the query criteria, and use these identifiers to retrieve managed objects from the database. The decision for or against projection has to be made on a case-by-case basis. The default behavior is recommended since it returns managed objects, whereas projections only return object arrays. Note that index=Index.YES, analyze=Analyze.YES and store=Store.NO are the default values for these parameters and could be omitted.
Another annotation not yet discussed is @DateBridge. This annotation is one of the built-in field bridges
in Hibernate Search. The Lucene index is purely string based. For this reason Hibernate Search must
convert the data types of the indexed fields to strings and vice-versa. A range of predefined bridges are
provided, including the DateBridge which will convert a java.util.Date into a String with the specified
resolution. For more details see Bridges.
This leaves us with @IndexedEmbedded. This annotation is used to index associated entities
(@ManyToMany, @*ToOne, @Embedded and @ElementCollection) as part of the owning entity. This
is needed since a Lucene index document is a flat data structure which does not know anything about
object relations. To ensure that the authors' names will be searchable you have to ensure that the names are indexed as part of the book itself. On top of @IndexedEmbedded you will also have to mark all fields of the associated entity you want to have included in the index with @Field. For more details see Embedded and Associated Objects.
These settings should be sufficient for now. For more details on entity mapping see Mapping an Entity.
Example: Entities Book and Author After Adding Hibernate Search Annotations

package example;
...
@Entity
@Indexed
public class Book {

    @Id
    @GeneratedValue
    private Integer id;

    @Field(index=Index.YES, analyze=Analyze.YES, store=Store.NO)
    private String title;

    @Field(index=Index.YES, analyze=Analyze.YES, store=Store.NO)
    private String subtitle;

    @Field(index=Index.YES, analyze=Analyze.NO, store=Store.YES)
    @DateBridge(resolution=Resolution.DAY)
    private Date publicationDate;

    @IndexedEmbedded
    @ManyToMany
    private Set<Author> authors = new HashSet<Author>();

    public Book() {
    }

    // standard getters/setters follow
    ...
}

package example;
...
@Entity
public class Author {

    @Id
    @GeneratedValue
    private Integer id;

    @Field
    private String name;

    public Author() {
    }

    // standard getters/setters follow
    ...
}
13.3.4. Indexing
Hibernate Search will transparently index every entity persisted, updated or removed through Hibernate
Core. However, you have to create an initial Lucene index for the data already present in your database.
Once you have added the above properties and annotations it is time to trigger an initial batch index of
your books. You can achieve this by using one of the following code snippets:
FullTextSession fullTextSession =
org.hibernate.search.Search.getFullTextSession(session);
fullTextSession.createIndexer().startAndWait();
EntityManager em = entityManagerFactory.createEntityManager();
FullTextEntityManager fullTextEntityManager =
org.hibernate.search.jpa.Search.getFullTextEntityManager(em);
fullTextEntityManager.createIndexer().startAndWait();
After executing the above code, you should be able to see a Lucene index under /var/lucene/indexes/example.Book. Go ahead and inspect this index with Luke. It will help you to understand how Hibernate Search works.
13.3.5. Searching
To execute a search, create a Lucene query using either the Lucene API or the Hibernate Search query
DSL. Wrap the query in an org.hibernate.Query to get the required functionality from the Hibernate API.
The following code prepares a query against the indexed fields. Executing the code returns a list of
Books.
// wrap the Lucene query in an org.hibernate.Query
org.hibernate.Query hibQuery =
    fullTextSession.createFullTextQuery( luceneQuery, Book.class );

// execute search
List result = hibQuery.list();

tx.commit();
session.close();
EntityManager em = entityManagerFactory.createEntityManager();
FullTextEntityManager fullTextEntityManager =
    org.hibernate.search.jpa.Search.getFullTextEntityManager(em);
em.getTransaction().begin();

// wrap the Lucene query in a javax.persistence.Query
javax.persistence.Query persistenceQuery =
    fullTextEntityManager.createFullTextQuery( luceneQuery, Book.class );

// execute search
List result = persistenceQuery.getResultList();

em.getTransaction().commit();
em.close();
13.3.6. Analyzer
Assuming that the title of an indexed book entity is Refactoring: Improving the Design of
Existing Code and that hits are required for the following queries: refactor, refactors,
refactored, and refactoring. Select an analyzer class in Lucene that applies word stemming when
indexing and searching. Hibernate Search offers several ways to configure the analyzer (see Default
Analyzer and Analyzer by Class for more information):
Set the analyzer property in the configuration file. The specified class becomes the default analyzer.
Set the @Analyzer annotation at the entity, field, or property level. You can either specify the fully qualified classname of the analyzer to use, or reference an analyzer defined by the @AnalyzerDef annotation. The Solr analyzer framework with its factories is utilized for the latter option. For more information about factory classes, see the Solr JavaDoc or read the corresponding section on the Solr Wiki (http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters).
If using the Solr framework, use the tokenizer with an arbitrary number of filters.
Example: Using @AnalyzerDef and the Solr Framework to Define and Use an Analyzer
@Entity
@Indexed
@AnalyzerDef(
    name = "customanalyzer",
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
        @TokenFilterDef(factory = LowerCaseFilterFactory.class),
        @TokenFilterDef(factory = SnowballPorterFilterFactory.class,
            params = { @Parameter(name = "language", value = "English") })
    })
public class Book implements Serializable {

    @Field
    @Analyzer(definition = "customanalyzer")
    private String title;

    @Field
    @Analyzer(definition = "customanalyzer")
    private String subtitle;

    @IndexedEmbedded
    private Set<Author> authors = new HashSet<Author>();

    public Book() {
    }
    ...
}
Use @AnalyzerDef to define an analyzer, then apply it to entities and properties using @Analyzer. In the
example, the customanalyzer is defined but not applied on the entity. The analyzer is only applied to
the title and subtitle properties. An analyzer definition is global. Define the analyzer for an entity
and reuse the definition for other entities as required.
Let us start with the most commonly used annotations for mapping an entity.
The Lucene-based Query API uses the following common annotations to map entities:
@Indexed
@Field
@NumericField
@Id
13.4.1.2. @Indexed
Foremost we must declare a persistent class as indexable. This is done by annotating the class with
@Indexed (all entities not annotated with @Indexed will be ignored by the indexing process):
@Entity
@Indexed
public class Essay {
...
}
You can optionally specify the index attribute of the @Indexed annotation to change the default name of
the index.
13.4.1.3. @Field
For each property (or attribute) of your entity, you have the ability to describe how it will be indexed. The
default (no annotation present) means that the property is ignored by the indexing process.
NOTE
Prior to Hibernate Search 5, numeric field encoding was only chosen if explicitly
requested via @NumericField. As of Hibernate Search 5 this encoding is automatically
chosen for numeric types. To avoid numeric encoding you can explicitly specify a non
numeric field bridge via @Field.bridge or @FieldBridge. The package
org.hibernate.search.bridge.builtin contains a set of bridges which encode
numbers as strings, for example
org.hibernate.search.bridge.builtin.IntegerBridge.
@Field declares a property as indexed and allows you to configure several aspects of the indexing process by setting one or more of the following attributes:
name: describes under which name the property should be stored in the Lucene Document. The default value is the property name (following the JavaBeans convention).
store: describes whether or not the property is stored in the Lucene index. You can store the value (Store.YES, consuming more space in the index but allowing projection), store it in a compressed way (Store.COMPRESS, which consumes more CPU), or avoid any storage (Store.NO, the default value). When a property is stored, you can retrieve its original value from the Lucene Document. This is not related to whether the element is indexed or not.
index: describes whether the property is indexed or not. The different values are Index.NO (no indexing, that is, it cannot be found by a query) and Index.YES (the element gets indexed and is searchable). The default value is Index.YES. Index.NO can be useful for cases where a property is not required to be searchable, but should be available for projection.
analyze: describes whether the property is analyzed (Analyze.YES) or not (Analyze.NO). The default value is Analyze.YES.
NOTE
Whether or not you want to analyze a property depends on whether you wish to search the element as is, or by the words it contains. It makes sense to analyze a text field, but probably not a date field.
norms: describes whether index time boosting information should be stored (Norms.YES) or not (Norms.NO). Not storing it can save a considerable amount of memory, but there will not be any index time boosting information available. The default value is Norms.YES.
termVector: describes collections of term-frequency pairs. This attribute enables the storing of
the term vectors within the documents during indexing. The default value is TermVector.NO.
The different values of this attribute are:
Value Definition
TermVector.WITH_OFFSETS Store the term vector and token offset information. This is the same as TermVector.YES plus it contains the starting and ending offset position information for the terms.
TermVector.WITH_POSITIONS Store the term vector and token position information. This is the same as TermVector.YES plus it contains the ordinal positions of each occurrence of a term in a document.
TermVector.WITH_POSITION_OFFSETS Store the term vector, token position and offset information. This is a combination of the YES, WITH_OFFSETS and WITH_POSITIONS.
indexNullAs: by default null values are ignored and not indexed. However, using indexNullAs you can specify a string which will be inserted as a token for the null value. By default this value is set to Field.DO_NOT_INDEX_NULL, indicating that null values should not be indexed. You can set this value to Field.DEFAULT_NULL_TOKEN to indicate that a default null token should be used. This default null token can be specified in the configuration using hibernate.search.default_null_token. If this property is not set and you specify Field.DEFAULT_NULL_TOKEN, the string "null" will be used as default.
NOTE
When the indexNullAs parameter is used it is important to use the same token
in the search query to search for null values. It is also advisable to use this
feature only with un-analyzed fields (Analyze.NO).
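For illustration, a hedged sketch combining several of these attributes on a single property (the field and property names are invented for this example):

@Field(name = "abstract",
       index = Index.YES,
       analyze = Analyze.NO,
       store = Store.YES,
       norms = Norms.NO,
       indexNullAs = Field.DEFAULT_NULL_TOKEN)
private String summary;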
13.4.1.4. @NumericField
There is a companion annotation to @Field called @NumericField that can be specified in the same scope as @Field or @DocumentId. It can be specified for Integer, Long, Float, and Double properties. At index time the value will be indexed using a Trie structure. When a property is indexed as a numeric field, it enables efficient range queries and sorting, orders of magnitude faster than doing the same query on standard @Field properties. The @NumericField annotation accepts the following parameters:
Value Definition
forField (Optional) Specify the name of the related @Field that will be indexed as numeric. It is only mandatory when the property contains more than one @Field declaration.
precisionStep (Optional) Change the way that the Trie structure is stored in the index. Smaller precisionSteps lead to more disk space usage, and faster range and sort queries. Larger values lead to less space used, and range query performance closer to the range query in normal @Fields. The default value is 4.
@NumericField supports only Double, Long, Integer and Float. It is not possible to take any advantage from similar functionality in Lucene for the other numeric types, so the remaining types should use the string encoding via the default or custom TwoWayFieldBridge.
It is possible to use a custom NumericFieldBridge assuming you can deal with the approximation during
type transformation:
public class BigDecimalNumericFieldBridge extends NumericFieldBridge {

    private static final BigDecimal storeFactor = BigDecimal.valueOf(100);

    @Override
    public void set(String name, Object value, Document document,
            LuceneOptions luceneOptions) {
        if ( value != null ) {
            BigDecimal decimalValue = (BigDecimal) value;
            Long indexedValue = Long.valueOf( decimalValue.multiply(
                storeFactor ).longValue() );
            luceneOptions.addNumericFieldToDocument( name, indexedValue,
                document );
        }
    }

    @Override
    public Object get(String name, Document document) {
        String fromLucene = document.get( name );
        BigDecimal storedBigDecimal = new BigDecimal( fromLucene );
        return storedBigDecimal.divide( storeFactor );
    }
}
13.4.1.5. @Id
Finally, the id (identifier) property of an entity is a special property used by Hibernate Search to ensure
index uniqueness of a given entity. By design, an id must be stored and must not be tokenized. To mark
a property as an index identifier, use the @DocumentId annotation. If you are using JPA and you have
specified @Id you can omit @DocumentId. The chosen entity identifier will also be used as the document
identifier.
@Entity
@Indexed
public class Essay {
    ...

    @Id
    @DocumentId
    public Long getId() { return id; }

    @Field(name="Abstract", store=Store.YES)
    public String getSummary() { return summary; }

    @Lob
    @Field
    public String getText() { return text; }

    @Field
    @NumericField(precisionStep = 6)
    public float getGrade() { return grade; }
}
The example above defines an index with four fields: id, Abstract, text and grade. Note that by default the field name is not capitalized, following the JavaBean specification. The grade field is annotated as numeric with a slightly larger precision step than the default.
Sometimes you need to map a property multiple times per index, with slightly different indexing
strategies. For example, sorting a query by field requires the field to be un-analyzed. To search by words
on this property and still sort it, it needs to be indexed - once analyzed and once un-analyzed. @Fields
allows you to achieve this goal.
@Entity
@Indexed(index = "Book" )
public class Book {
@Fields( {
@Field,
@Field(name = "summary_forSort", analyze = Analyze.NO, store
= Store.YES)
} )
public String getSummary() {
return summary;
}
...
}
In this example the field summary is indexed twice, once as summary in a tokenized way, and once as
summary_forSort in an untokenized way.
Associated objects as well as embedded objects can be indexed as part of the root entity index. This is useful if you expect to search a given entity based on properties of associated objects. In the following example the aim is to return places where the associated city is Atlanta (in the Lucene query parser language, this translates into address.city:Atlanta). The place fields will be indexed in the Place index. The Place index documents will also contain the fields address.id, address.street, and address.city, which you will be able to query.
@Entity
@Indexed
public class Place {
    @Id
    @GeneratedValue
    @DocumentId
    private Long id;

    @Field
    private String name;

    @OneToOne(cascade = {CascadeType.PERSIST, CascadeType.REMOVE})
    @IndexedEmbedded
    private Address address;
    ...
}
@Entity
public class Address {
@Id
@GeneratedValue
private Long id;
@Field
private String street;
@Field
private String city;
@ContainedIn
@OneToMany(mappedBy="address")
private Set<Place> places;
...
}
Because the data is denormalized in the Lucene index when using the @IndexedEmbedded technique, Hibernate Search must be aware of any change in the Place object and any change in the Address object to keep the index up to date. To ensure the Lucene document is updated when its Address changes, mark the other side of the bidirectional relationship with @ContainedIn.
The following example extends Place and Address with an embedded Owner, using the prefix attribute:

@Entity
@Indexed
public class Place {
    @Id
    @GeneratedValue
    @DocumentId
    private Long id;

    @Field
    private String name;

    @OneToOne(cascade = {CascadeType.PERSIST, CascadeType.REMOVE})
    @IndexedEmbedded
    private Address address;
    ...
}

@Entity
public class Address {
    @Id
    @GeneratedValue
    private Long id;

    @Field
    private String street;

    @Field
    private String city;

    @IndexedEmbedded(depth = 1, prefix = "ownedBy_")
    private Owner ownedBy;

    @ContainedIn
    @OneToMany(mappedBy="address")
    private Set<Place> places;
    ...
}

@Embeddable
public class Owner {
    @Field
    private String name;
    ...
}
Any @*ToMany, @*ToOne and @Embedded attribute can be annotated with @IndexedEmbedded. The
attributes of the associated class will then be added to the main entity index. The index will contain the
following fields:
id
name
address.street
address.city
address.ownedBy_name
The default prefix is propertyName., following the traditional object navigation convention. You can
override it using the prefix attribute as it is shown on the ownedBy property.
NOTE
The depth property is necessary when the object graph contains a cyclic dependency of classes (not instances), for example, if Owner points to Place. Hibernate Search will stop including indexed embedded attributes after reaching the expected depth (or when the object graph boundaries are reached). A class having a self reference is an example of cyclic dependency. In our example, because depth is set to 1, any @IndexedEmbedded attribute in Owner (if any) will be ignored.
Using @IndexedEmbedded for object associations allows you to express queries (using Lucene’s query
syntax) such as:
Return places where name contains JBoss and where address city is Atlanta. In Lucene query
this would be:
+name:jboss +address.city:atlanta
Return places where name contains JBoss and where owner’s name contain Joe. In Lucene
query this would be
+name:jboss +address.ownedBy_name:joe
This behavior mimics the relational join operation in a more efficient way (at the cost of data duplication). Remember that, out of the box, Lucene indexes have no notion of association; the join operation does not exist. It might help to keep the relational model normalized while benefiting from the full text index speed and feature richness.
NOTE
An associated object can itself (but does not have to) be @Indexed
When @IndexedEmbedded points to an entity, the association has to be bidirectional and the other side has to be annotated with @ContainedIn (as seen in the previous example). If not, Hibernate Search has no way to update the root index when the associated entity is updated (in our example, a Place index document has to be updated when the associated Address instance is updated).
Sometimes, the object type annotated by @IndexedEmbedded is not the object type targeted by
Hibernate and Hibernate Search. This is especially the case when interfaces are used in lieu of their
implementation. For this reason you can override the object type targeted by Hibernate Search using the
targetElement parameter.
@Entity
@Indexed
public class Address {
    @Id
    @GeneratedValue
    @DocumentId
    private Long id;

    @Field
    private String street;

    @IndexedEmbedded(depth = 1, prefix = "ownedBy_", targetElement = Owner.class)
    @Target(Owner.class)
    private Person ownedBy;
    ...
}

@Embeddable
public class Owner implements Person { ... }
The @IndexedEmbedded annotation also provides an attribute includePaths, which can be used as an alternative to depth, or be combined with it.
When using only depth all indexed fields of the embedded type will be added recursively at the same
depth. This makes it harder to select only a specific path without adding all other fields as well, which
might not be needed.
To avoid unnecessarily loading and indexing entities you can specify exactly which paths are needed. A
typical application might need different depths for different paths, or in other words it might need to
specify paths explicitly, as shown in the example below:
@Entity
@Indexed
public class Person {

    @Id
    public int getId() {
        return id;
    }

    @Field
    public String getName() {
        return name;
    }

    @Field
    public String getSurname() {
        return surname;
    }

    @OneToMany
    @IndexedEmbedded(includePaths = { "name" })
    public Set<Person> getParents() {
        return parents;
    }

    @ContainedIn
    @ManyToOne
    public Person getChild() {
        return child;
    }
    ...
}
Using a mapping as in the example above, you would be able to search on a Person by name and/or surname, and/or the name of the parent. It will not index the surname of the parent, so searching on parents' surnames will not be possible, but it speeds up indexing, saves space and improves overall performance.
The @IndexedEmbedded includePaths attribute will include the specified paths in addition to what you would index normally specifying a limited value for depth. When using includePaths and leaving depth undefined, the behavior is equivalent to setting depth=0: only the included paths are indexed.
@Entity
@Indexed
public class Human {
@Id
public int getId() {
return id;
}
@Field
public String getName() {
return name;
}
@Field
public String getSurname() {
return surname;
}
@OneToMany
@IndexedEmbedded(depth = 2, includePaths = { "parents.parents.name" })
public Set<Human> getParents() {
return parents;
}
@ContainedIn
@ManyToOne
public Human getChild() {
return child;
}
...
}
In the example above, every human will have its name and surname attributes indexed. The name and surname of parents will also be indexed, recursively up to the second level because of the depth attribute. It will be possible to search by name or surname of the person directly, of his parents, or of his grandparents. Beyond the second level, one more level will be indexed in addition, but only the name, not the surname.
Having explicit control of the indexed paths might be easier if you are designing your application by
defining the needed queries first, as at that point you might know exactly which fields you need, and
which other fields are unnecessary to implement your use case.
13.4.2. Boosting
Lucene has the notion of boosting which allows you to give certain documents or fields more or less
importance than others. Lucene differentiates between index and search time boosting. The following
sections show you how you can achieve index time boosting using Hibernate Search.
To define a static boost value for an indexed class or property you can use the @Boost annotation. You
can use this annotation within @Field or specify it directly on method or class level.
@Entity
@Indexed
@Boost(1.7f)
public class Essay {
    ...

    @Id
    @DocumentId
    public Long getId() { return id; }

    @Field(name="Abstract", store=Store.YES, boost=@Boost(2f))
    @Boost(1.5f)
    public String getSummary() { return summary; }

    @Lob
    @Field(boost=@Boost(1.2f))
    public String getText() { return text; }

    @Field
    public String getISBN() { return isbn; }
}
In the example above, Essay's probability to reach the top of the search list will be multiplied by 1.7. The summary field will be 3.0 times (2 * 1.5, because @Field.boost and @Boost on a property are cumulative) more important than the isbn field. The text field will be 1.2 times more important than the isbn field. Note that this explanation is wrong in strictest terms, but it is simple and close enough to reality for all practical purposes.
The @Boost annotation used in Static Index Time Boosting defines a static boost factor which is
independent of the state of the indexed entity at runtime. However, there are use cases in which the
boost factor may depend on the actual state of the entity. In this case you can use the @DynamicBoost
annotation together with an accompanying custom BoostStrategy.
@Entity
@Indexed
@DynamicBoost(impl = VIPBoostStrategy.class)
public class Person {
private PersonType type;
// ....
}
In the example above, a dynamic boost is defined at class level, specifying VIPBoostStrategy as the implementation of the BoostStrategy interface to be used at indexing time. You can place the @DynamicBoost either at class or field level. Depending on the placement of the annotation, either the whole entity is passed to the defineBoost method or just the annotated field/property value. It is up to you to cast the passed object to the correct type. In the example all indexed values of a VIP person would be twice as important as the values of a normal person.
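A possible implementation of such a strategy (a sketch; it assumes Person exposes a getType() method and a PersonType.VIP enum value):

public class VIPBoostStrategy implements BoostStrategy {
    public float defineBoost(Object value) {
        // the whole entity is passed in because the annotation is at class level
        Person person = (Person) value;
        if ( person.getType() == PersonType.VIP ) {
            return 2.0f;
        }
        else {
            return 1.0f;
        }
    }
}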
NOTE
Of course you can mix and match @Boost and @DynamicBoost annotations in your entity. All defined
boost factors are cumulative.
13.4.3. Analysis
Analysis is the process of converting text into single terms (words) and can be considered as one of
the key features of a full-text search engine. Lucene uses the concept of Analyzers to control this
process. In the following section we cover the multiple ways Hibernate Search offers to configure the
analyzers.
The default analyzer class used to index tokenized fields is configurable through the
hibernate.search.analyzer property. The default value for this property is
org.apache.lucene.analysis.standard.StandardAnalyzer.
You can also define the analyzer class per entity, property and even per @Field (useful when multiple
fields are indexed from a single property).
@Entity
@Indexed
@Analyzer(impl = EntityAnalyzer.class)
public class MyEntity {

    @Id
    @GeneratedValue
    @DocumentId
    private Integer id;

    @Field
    private String name;

    @Field
    @Analyzer(impl = PropertyAnalyzer.class)
    private String summary;

    @Field(analyzer = @Analyzer(impl = FieldAnalyzer.class))
    private String body;
    ...
}
In this example, EntityAnalyzer is used to index all tokenized properties (for example, name), except summary and body, which are indexed with PropertyAnalyzer and FieldAnalyzer respectively.
WARNING
Mixing different analyzers in the same entity is most of the time a bad practice. It
makes query building more complex and results less predictable (for the novice),
especially if you are using a QueryParser (which uses the same analyzer for the
whole query). As a rule of thumb, for any given field the same analyzer should be
used for indexing and querying.
Analyzers can become quite complex to deal with. For this reason Hibernate Search introduces the notion of analyzer definitions. An analyzer definition can be reused by many @Analyzer declarations and is composed of:
a list of char filters: each char filter is responsible for pre-processing input characters before tokenization. Char filters can add, change, or remove characters; one common usage is character normalization.
a tokenizer: responsible for tokenizing the input stream into individual words.
a list of filters: each filter is responsible for removing, modifying, or sometimes even adding words into the stream provided by the tokenizer.
This separation of tasks - a list of char filters, and a tokenizer followed by a list of filters - allows for easy reuse of each individual component and lets you build your customized analyzer in a very flexible way (like Lego). Generally speaking, the char filters do some pre-processing on the character input, then the Tokenizer starts the tokenizing process by turning the character input into tokens which are then further processed by the TokenFilters. Hibernate Search supports this infrastructure by utilizing the Solr analyzer framework.
Let us review the concrete example below. First a char filter is defined by its factory. In our example, a mapping char filter is used, which will replace characters in the input based on the rules specified in the mapping file. Next a tokenizer is defined. This example uses the standard tokenizer. Last but not least, a list of filters is defined by their factories. In our example, the StopFilter filter is built reading the dedicated words property file. The filter is also expected to ignore case.
@AnalyzerDef(name="customanalyzer",
charFilters = {
@CharFilterDef(factory = MappingCharFilterFactory.class, params = {
@Parameter(name = "mapping",
value = "org/hibernate/search/test/analyzer/solr/mapping-
chars.properties")
})
},
tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
filters = {
@TokenFilterDef(factory = ISOLatin1AccentFilterFactory.class),
@TokenFilterDef(factory = LowerCaseFilterFactory.class),
@TokenFilterDef(factory = StopFilterFactory.class, params = {
@Parameter(name="words",
value=
"org/hibernate/search/test/analyzer/solr/stoplist.properties" ),
@Parameter(name="ignoreCase", value="true")
})
})
public class Team {
...
}
NOTE
Filters and char filters are applied in the order they are defined in the @AnalyzerDef
annotation. Order matters!
Some tokenizers, token filters or char filters load resources like a configuration or metadata file. This is
the case for the stop filter and the synonym filter. If the resource charset is not using the VM default, you
can explicitly specify it by adding a resource_charset parameter.
@AnalyzerDef(name="customanalyzer",
charFilters = {
@CharFilterDef(factory = MappingCharFilterFactory.class, params = {
@Parameter(name = "mapping",
value = "org/hibernate/search/test/analyzer/solr/mapping-
chars.properties")
})
},
tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
filters = {
@TokenFilterDef(factory = ISOLatin1AccentFilterFactory.class),
@TokenFilterDef(factory = LowerCaseFilterFactory.class),
@TokenFilterDef(factory = StopFilterFactory.class, params = {
@Parameter(name="words",
value=
"org/hibernate/search/test/analyzer/solr/stoplist.properties" ),
@Parameter(name="resource_charset", value = "UTF-16BE"),
@Parameter(name="ignoreCase", value="true")
})
})
public class Team {
...
}
Once defined, an analyzer definition can be reused by an @Analyzer declaration as seen in the
following example.
@Entity
@Indexed
@AnalyzerDef(name="customanalyzer", ... )
public class Team {
@Id
@DocumentId
@GeneratedValue
private Integer id;
@Field
private String name;
@Field
private String location;
@Field
@Analyzer(definition = "customanalyzer")
private String description;
}
Analyzer instances declared by @AnalyzerDef are also available by their name in the SearchFactory
which is quite useful when building queries.
Analyzer analyzer =
fullTextSession.getSearchFactory().getAnalyzer("customanalyzer");
Fields in queries must be analyzed with the same analyzer used to index the field so that they speak a
common "language": the same tokens are reused between the query and the indexing process. This rule
has some exceptions but is true most of the time. Respect it unless you know what you are doing.
Solr and Lucene come with many useful default char filters, tokenizers, and filters. You can find a complete list of char filter factories, tokenizer factories and filter factories at http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters.
So far all the introduced ways to specify an analyzer were static. However, there are use cases where it is useful to select an analyzer depending on the current state of the entity to be indexed, for example in multilingual applications. For a BlogEntry class, for example, the analyzer could depend on the language property of the entry. Depending on this property the correct language-specific stemmer should be chosen to index the actual text.
To enable this dynamic analyzer selection Hibernate Search introduces the AnalyzerDiscriminator annotation. The following example demonstrates the usage of this annotation.
@Entity
@Indexed
@AnalyzerDefs({
@AnalyzerDef(name = "en",
tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
filters = {
@TokenFilterDef(factory = LowerCaseFilterFactory.class),
@TokenFilterDef(factory = EnglishPorterFilterFactory.class
)
}),
@AnalyzerDef(name = "de",
tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
filters = {
@TokenFilterDef(factory = LowerCaseFilterFactory.class),
@TokenFilterDef(factory = GermanStemFilterFactory.class)
})
})
public class BlogEntry {

@Id
@GeneratedValue
@DocumentId
private Integer id;
@Field
@AnalyzerDiscriminator(impl = LanguageDiscriminator.class)
private String language;
@Field
private String text;
// standard getter/setter
...
}
The prerequisite for using @AnalyzerDiscriminator is that all analyzers which are going to be used
dynamically are predefined via @AnalyzerDef definitions. If this is the case, one can place the
@AnalyzerDiscriminator annotation either on the class or on a specific property of the entity for
which to dynamically select an analyzer. Via the impl parameter of the AnalyzerDiscriminator you
specify a concrete implementation of the Discriminator interface. It is up to you to provide an
implementation for this interface. The only method you have to implement is
getAnalyzerDefinitionName() which gets called for each field added to the Lucene document.
The entity which is getting indexed is also passed to the interface method. The value parameter is only
set if the AnalyzerDiscriminator is placed on property level instead of class level. In this case the
value represents the current value of this property.
An implementation of the Discriminator interface has to return the name of an existing analyzer definition
or null if the default analyzer should not be overridden. The example above assumes that the language
parameter is either 'de' or 'en' which matches the specified names in the @AnalyzerDefs.
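A minimal sketch of such a discriminator, assuming the language property directly holds the analyzer definition name ("en" or "de"):

public class LanguageDiscriminator implements Discriminator {
    public String getAnalyzerDefinitionName(Object value, Object entity, String field) {
        // value is the current value of the language property
        if ( value == null || !( entity instanceof BlogEntry ) ) {
            return null; // fall back to the default analyzer
        }
        return (String) value; // "en" or "de", matching the @AnalyzerDef names
    }
}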
Retrieving an analyzer can be useful when multiple analyzers have been used in a domain model, in order to benefit from stemming or phonetic approximation, etc. In this case, use the same analyzers to build your query. Alternatively, use the Hibernate Search query DSL, which selects the correct analyzer automatically.
Whether you are using the Lucene programmatic API or the Lucene query parser, you can retrieve the
scoped analyzer for a given entity. A scoped analyzer is an analyzer which applies the right analyzers
depending on the field indexed. Remember, multiple analyzers can be defined on a given entity each one
working on an individual field. A scoped analyzer unifies all these analyzers into a context-aware
analyzer. While the theory seems a bit complex, using the right analyzer in a query is very easy.
NOTE
When you use programmatic mapping for a child entity, you can only see the fields
defined by the child entity. Fields or methods inherited from a parent entity (annotated
with @MappedSuperclass) are not configurable. To configure properties inherited from a
parent entity, either override the property in the child entity or create a programmatic
mapping for the parent entity. This mimics the usage of annotations where you cannot
annotate a field or method of a parent entity unless it is redefined in the child entity.
org.apache.lucene.queryparser.classic.QueryParser parser = new QueryParser(
    "title",
    fullTextSession.getSearchFactory().getAnalyzer( Song.class )
);

org.apache.lucene.search.Query luceneQuery =
    parser.parse( "title:sky OR title_stemmed:diamond" );
org.hibernate.Query fullTextQuery =
    fullTextSession.createFullTextQuery( luceneQuery, Song.class );
In the example above, the song title is indexed in two fields: the standard analyzer is used in the field
title and a stemming analyzer is used in the field title_stemmed. By using the analyzer provided
by the search factory, the query uses the appropriate analyzer depending on the field targeted.
NOTE
You can also retrieve analyzers defined via @AnalyzerDef by their definition name using
searchFactory.getAnalyzer(String).
13.4.4. Bridges
When discussing the basic mapping for an entity one important fact was so far disregarded. In Lucene all index fields have to be represented as strings. All entity properties annotated with @Field have to be converted to strings to be indexed. The reason we have not mentioned it so far is that, for most of your properties, Hibernate Search does the translation job for you thanks to a set of built-in bridges. However, in some cases you need more fine-grained control over the translation process.
Hibernate Search comes bundled with a set of built-in bridges between a Java property type and its full
text representation.
null
By default null elements are not indexed. Lucene does not support null elements. However, in some situations it can be useful to insert a custom token representing the null value. See the description of the indexNullAs parameter of @Field for more information.
java.lang.String
Strings are indexed as is.
short, Short, int, Integer, long, Long, float, Float, double, Double, BigInteger, BigDecimal
Numbers are converted into their string representation. Note that numbers cannot be compared by Lucene (that is, used in ranged queries) out of the box: they have to be padded.
NOTE
Using a Range query has drawbacks; an alternative approach is to use a Filter query, which will filter the result query to the appropriate range. Hibernate Search also supports the use of a custom StringBridge as described in Custom Bridges.
java.util.Date
Dates are stored as yyyyMMddHHmmssSSS in GMT time (200611072203012 for Nov 7th of 2006
4:03PM and 12ms EST). You should not really bother with the internal format. What is important is
that when using a TermRangeQuery, you should know that the dates have to be expressed in GMT
time.
Usually, storing the date up to the millisecond is not necessary. @DateBridge defines the
appropriate resolution you are willing to store in the index
(@DateBridge(resolution=Resolution.DAY)). The date pattern will then be truncated
accordingly.
@Entity
@Indexed
public class Meeting {
    @Field(analyze=Analyze.NO)
    @DateBridge(resolution=Resolution.MINUTE)
    private Date date;
    ...
}

IMPORTANT
The default Date bridge uses Lucene’s DateTools to convert from and to String. This
means that all dates are expressed in GMT time. If your requirements are to store dates in
a fixed time zone you have to implement a custom date bridge. Make sure you understand the requirements of your applications regarding date indexing and searching.
java.net.URI, java.net.URL
URI and URL are converted to their string representation.
Sometimes, the built-in bridges of Hibernate Search do not cover some of your property types, or the
String representation used by the bridge does not meet your requirements. The following paragraphs
describe several solutions to this problem.
13.4.4.2.1. StringBridge
The simplest custom solution is to give Hibernate Search an implementation of your expected Object to String bridge. To do so you need to implement the org.hibernate.search.bridge.StringBridge interface. All implementations have to be thread-safe as they are used concurrently.

/**
 * Padding Integer bridge.
 * All numbers will be padded with 0 to match 5 digits
 *
 * @author Emmanuel Bernard
 */
public class PaddedIntegerBridge implements StringBridge {

    private int PADDING = 5;

    public String objectToString(Object object) {
        String rawInteger = ( (Integer) object ).toString();
        if ( rawInteger.length() > PADDING )
            throw new IllegalArgumentException( "Try to pad on a number too big" );
        StringBuilder paddedInteger = new StringBuilder();
        for ( int padIndex = rawInteger.length(); padIndex < PADDING; padIndex++ )
            paddedInteger.append( '0' );
        return paddedInteger.append( rawInteger ).toString();
    }
}
Given the string bridge defined in the previous example, any property or field can use this bridge thanks
to the @FieldBridge annotation:
@FieldBridge(impl = PaddedIntegerBridge.class)
private Integer length;
Parameters can also be passed to the bridge implementation, making it more flexible. In the following example the bridge implements the ParameterizedBridge interface, and parameters are passed through the @FieldBridge annotation (see the sketch after the example).
//property
@FieldBridge(impl = PaddedIntegerBridge.class,
params = @Parameter(name="padding", value="10")
)
private Integer length;
All implementations have to be thread-safe, but the parameters are set during initialization and no
special care is required at this stage.
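A sketch of the bridge side of this contract, extending the padding example above (the Map-based setParameterValues signature follows the org.hibernate.search.bridge.ParameterizedBridge interface):

public class PaddedIntegerBridge implements StringBridge, ParameterizedBridge {

    public static final String PADDING_PROPERTY = "padding";
    private int padding = 5; // default

    // called once at initialization with the @Parameter values
    public void setParameterValues(Map<String, String> parameters) {
        String padding = parameters.get( PADDING_PROPERTY );
        if ( padding != null ) {
            this.padding = Integer.parseInt( padding );
        }
    }

    public String objectToString(Object object) {
        // pad the integer up to the configured number of digits
        String rawInteger = ( (Integer) object ).toString();
        if ( rawInteger.length() > padding ) {
            throw new IllegalArgumentException( "Number too big to be padded" );
        }
        StringBuilder paddedInteger = new StringBuilder();
        for ( int padIndex = rawInteger.length(); padIndex < padding; padIndex++ ) {
            paddedInteger.append( '0' );
        }
        return paddedInteger.append( rawInteger ).toString();
    }
}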
An example is a bridge that deals with enums in a custom fashion but needs to access the actual enum
type. Any bridge implementing AppliedOnTypeAwareBridge will get the type the bridge is applied on
injected. Like parameters, the type injected needs no particular care with regard to thread-safety.
If you expect to use your bridge implementation on an id property (that is, annotated with @DocumentId
), you need to use a slightly extended version of StringBridge named TwoWayStringBridge.
Hibernate Search needs to read the string representation of the identifier and generate the object out of
it. There is no difference in the way the @FieldBridge annotation is used.
//id property
@DocumentId
@FieldBridge(impl = PaddedIntegerBridge.class,
             params = @Parameter(name="padding", value="10") )
private Integer id;
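A minimal sketch of the two-way variant, reusing the padding logic from the previous examples (the stringToObject method inverts objectToString):

public class PaddedIntegerBridge implements TwoWayStringBridge, ParameterizedBridge {

    // setParameterValues and objectToString as in the previous example
    ...

    // read the identifier back from its index representation
    public Object stringToObject(String stringValue) {
        return Integer.valueOf( stringValue );
    }
}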
IMPORTANT
It is important for the two-way process to be idempotent (that is, object = stringToObject(objectToString(object))).
13.4.4.2.5. FieldBridge
Some use cases require more than a simple object to string translation when mapping a property to a Lucene index. To give you the greatest possible flexibility you can also implement a bridge as a FieldBridge. This interface gives you a property value and lets you map it the way you want in your Lucene Document. You can, for example, store a property in two different document fields. The interface is very similar in its concept to the Hibernate UserTypes.
/**
 * Store the date in 3 different fields - year, month, day - to ease Range Query per
 * year, month or day (eg get all the elements of December for the last 5 years).
 * @author Emmanuel Bernard
 */
public class DateSplitBridge implements FieldBridge {
    private final static TimeZone GMT = TimeZone.getTimeZone("GMT");

    public void set(String name, Object value, Document document,
            LuceneOptions luceneOptions) {
        Date date = (Date) value;
        Calendar cal = GregorianCalendar.getInstance(GMT);
        cal.setTime(date);
        int year = cal.get(Calendar.YEAR);
        int month = cal.get(Calendar.MONTH) + 1;
        int day = cal.get(Calendar.DAY_OF_MONTH);

        // set year, month and day as separate index fields
        luceneOptions.addFieldToDocument( name + ".year", String.valueOf( year ), document );
        luceneOptions.addFieldToDocument( name + ".month", String.valueOf( month ), document );
        luceneOptions.addFieldToDocument( name + ".day", String.valueOf( day ), document );
    }
}
//property
@FieldBridge(impl = DateSplitBridge.class)
private Date date;
In the example above, the fields are not added directly to the Document. Instead the addition is delegated to the LuceneOptions helper; this helper will apply the options you have selected on @Field, like Store or TermVector, or apply the chosen @Boost value. It is especially useful to encapsulate the complexity of COMPRESS implementations. Even though it is recommended to delegate to LuceneOptions to add fields to the Document, nothing stops you from editing the Document directly and ignoring the LuceneOptions in case you need to.
NOTE
Classes like LuceneOptions are created to shield your application from changes in
Lucene API and simplify your code. Use them if you can, but if you need more flexibility
you are not required to.
13.4.4.2.6. ClassBridge
It is sometimes useful to combine more than one property of a given entity and index this combination in
a specific way into the Lucene index. The @ClassBridge and @ClassBridges annotations can be
defined at the class level, as opposed to the property level. In this case the custom field bridge
implementation receives the entity instance as the value parameter instead of a particular property.
Though not shown in the following example, @ClassBridge supports the termVector attribute discussed in section Basic Mapping.
@Entity
@Indexed
@ClassBridge(name="branchnetwork",
             store=Store.YES,
             impl = CatFieldsClassBridge.class,
             params = @Parameter( name="sepChar", value=" " ) )
public class Department {
    private int id;
    private String network;
    private String branchHead;
    private String branch;
    private Integer maxEmployees;
    ...
}
In this example, the particular CatFieldsClassBridge is applied to the department instance; the field bridge then concatenates both branch and network and indexes the concatenation.
13.5. QUERYING
Hibernate Search can execute Lucene queries and retrieve domain objects managed by a Hibernate session. The search provides the power of Lucene without leaving the Hibernate paradigm, giving another dimension to the Hibernate classic search mechanisms (HQL, Criteria query, native SQL query).
To execute a search:
Create a FullTextSession
Create a Lucene query using either the Hibernate Search query DSL (recommended) or the Lucene Query API
Wrap the Lucene query in an org.hibernate.Query
Execute the search by calling, for example, list() or scroll()
To access the querying facilities, use a FullTextSession. This Search-specific session wraps a regular org.hibernate.Session in order to provide query and indexing capabilities.
Use the FullTextSession to build a full-text query using either the Hibernate Search query DSL or the native Lucene query.
Use the following code when using the Hibernate Search query DSL:
final QueryBuilder b =
fullTextSession.getSearchFactory().buildQueryBuilder().forEntity(
Myth.class ).get();
org.apache.lucene.search.Query luceneQuery =
b.keyword()
.onField("history").boostedTo(3)
.matching("storm")
.createQuery();
As an alternative, write the Lucene query using either the Lucene query parser or the Lucene
programmatic API.
org.hibernate.Query fullTextQuery =
fullTextSession.createFullTextQuery(luceneQuery);
List result = fullTextQuery.list(); //return a list of managed objects
A Hibernate query built on the Lucene query is an org.hibernate.Query. This query remains in the same paradigm as other Hibernate query facilities, such as HQL (Hibernate Query Language), Native, and Criteria. Use methods such as list(), uniqueResult(), iterate() and scroll() with the query.
The same extensions are available with the Hibernate Java Persistence APIs:
EntityManager em = entityManagerFactory.createEntityManager();
FullTextEntityManager fullTextEntityManager =
org.hibernate.search.jpa.Search.getFullTextEntityManager(em);
...
final QueryBuilder b = fullTextEntityManager.getSearchFactory()
.buildQueryBuilder().forEntity( Myth.class ).get();
org.apache.lucene.search.Query luceneQuery =
b.keyword()
.onField("history").boostedTo(3)
.matching("storm")
.createQuery();
javax.persistence.Query fullTextQuery =
fullTextEntityManager.createFullTextQuery( luceneQuery );
NOTE
In these examples, the Hibernate API has been used. The same examples can also be
written with the Java Persistence API by adjusting the way the FullTextQuery is
retrieved.
With the Lucene API, use either the query parser (simple queries) or the Lucene programmatic API
(complex queries). Building a Lucene query is out of scope for the Hibernate Search documentation. For
details, see the online Lucene documentation or a copy of Lucene in Action or Hibernate Search in
Action.
The Lucene programmatic API enables full-text queries. However, when using the Lucene programmatic API, the parameters must be converted to their string equivalent and the correct analyzer must also be applied to the right field. An ngram analyzer, for example, uses several ngrams as the tokens for a given word and should be searched as such. It is recommended to use the QueryBuilder for this task.
The Hibernate Search query API is fluent, with the following key characteristics:
Method names are in English. As a result, API operations can be read and understood as a
series of English phrases and instructions.
It uses IDE autocompletion, which offers possible completions for the current input prefix and allows the user to choose the right option.
To use the API, first create a query builder that is attached to a given indexed entity type. This QueryBuilder knows what analyzer to use and what field bridge to apply. Several QueryBuilders (one for each entity type involved in the root of your query) can be created. The QueryBuilder is derived from the SearchFactory.
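For example (a sketch using the Myth entity from this section):

QueryBuilder mythQB = searchFactory.buildQueryBuilder().forEntity( Myth.class ).get();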
The analyzer used for a given field or fields can also be overridden.
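For example, assuming an analyzer definition named stem_analyzer_definition has been declared:

QueryBuilder mythQB = searchFactory.buildQueryBuilder()
    .forEntity( Myth.class )
    .overridesForField( "history", "stem_analyzer_definition" )
    .get();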
The query builder is now used to build Lucene queries. Customized queries generated using Lucene's query parser, or Query objects assembled using the Lucene programmatic API, can be used together with the Hibernate Search DSL.
Query luceneQuery =
mythQB.keyword().onField("history").matching("storm").createQuery();
Parameter Description
onField() Use this parameter to specify in which Lucene field to search the word.
matching() Use this parameter to specify the match for the search string.
The value "storm" is passed through the history FieldBridge. This is useful when numbers or
dates are involved.
The field bridge value is then passed to the analyzer used to index the field history. This
ensures that the query uses the same term transformation than the indexing (lower case, ngram,
stemming and so on). If the analyzing process generates several terms for a given word, a
boolean query is used with the SHOULD logic (roughly an OR logic).
@Indexed
public class Myth {
    @Field(analyze = Analyze.NO)
    @DateBridge(resolution = Resolution.YEAR)
    public Date getCreationDate() { return creationDate; }
    public void setCreationDate(Date creationDate) { this.creationDate = creationDate; }
    private Date creationDate;
    ...
}
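With this mapping, a keyword query can pass the Date object directly to matching(); the field bridge performs the conversion (a sketch, assuming a birthdate variable):

Date birthdate = ...;
Query luceneQuery = mythQB.keyword()
    .onField( "creationDate" )
    .matching( birthdate )
    .createQuery();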
NOTE
In plain Lucene, the Date object had to be converted to its string representation (in this
case the year)
This conversion works for any object, provided that the FieldBridge has an objectToString method (and
all built-in FieldBridge implementations do).
The next example searches a field that uses ngram analyzers. An ngram analyzer indexes successions of ngrams of words, which helps to avoid user typos. For example, the 3-grams of the word hibernate are hib, ibe, ber, ern, rna, nat, ate.
@AnalyzerDef(name = "ngram",
tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class ),
filters = {
@TokenFilterDef(factory = StandardFilterFactory.class),
@TokenFilterDef(factory = LowerCaseFilterFactory.class),
@TokenFilterDef(factory = StopFilterFactory.class),
@TokenFilterDef(factory = NGramFilterFactory.class,
params = {
@Parameter(name = "minGramSize", value = "3"),
@Parameter(name = "maxGramSize", value = "3") } )
}
)
public class Myth {
    @Field(analyzer = @Analyzer(definition = "ngram"))
    private String name;
    ...
}

Query luceneQuery = mythQB.keyword()
    .onField( "name" )
    .matching( "Sisiphus" )
    .createQuery();
The matching word "Sisiphus" will be lower-cased and then split into 3-grams: sis, isi, sip, iph, phu, hus. Each of these ngrams will be part of the query. The user is then able to find the Sysiphus myth (with a y). All of that is done transparently for the user.
NOTE
If the user does not want a specific field to use the field bridge or the analyzer then the
ignoreAnalyzer() or ignoreFieldBridge() functions can be called.
To search for multiple possible words in the same field, add them all in the matching clause.
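For example, to match documents containing storm or lightning in their history:

Query luceneQuery =
    mythQB.keyword().onField( "history" ).matching( "storm lightning" ).createQuery();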
To search the same word on multiple fields, use the onFields method.
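For example:

Query luceneQuery = mythQB.keyword()
    .onFields( "history", "description", "name" )
    .matching( "storm" )
    .createQuery();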
Sometimes, one field should be treated differently from another field even if searching the same term. Use the andField() method for that.
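For example, boosting the name field while searching the same term on all three fields:

Query luceneQuery = mythQB.keyword()
    .onField( "history" )
    .andField( "name" )
      .boostedTo( 5 )
    .andField( "description" )
    .matching( "storm" )
    .createQuery();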
To execute a fuzzy query (based on the Levenshtein distance algorithm), start with a keyword query
and add the fuzzy flag.
The threshold is the limit above which two terms are considered matching. It is a decimal between 0 and 1 and the default value is 0.5. The prefixLength is the length of the prefix ignored by the "fuzzyness". While the default value is 0, a nonzero value is recommended for indexes containing a huge number of distinct terms.
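A sketch of a fuzzy query using these settings:

Query luceneQuery = mythQB.keyword()
    .fuzzy()
      .withThreshold( 0.8f )
      .withPrefixLength( 1 )
    .onField( "history" )
    .matching( "starm" )
    .createQuery();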
Wildcard queries are useful in circumstances where only part of the word is known. The ? represents a
single character and * represents multiple characters. Note that for performance purposes, it is
recommended that the query does not start with either ? or *.
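For example:

Query luceneQuery = mythQB.keyword()
    .wildcard()
    .onField( "history" )
    .matching( "sto*" )
    .createQuery();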
NOTE
Wildcard queries do not apply the analyzer on the matching terms. The risk of * or ?
being mangled is too high.
So far we have been looking for words or sets of words; the user can also search exact or approximate sentences. Use phrase() to do so.
Approximate sentences can be searched by adding a slop factor. The slop factor represents the number
of other words permitted in the sentence: this works like a within or near operator.
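For example, an exact phrase search, and an approximate one using a slop factor:

Query exactQuery = mythQB.phrase()
    .onField( "history" )
    .sentence( "Thou shalt not kill" )
    .createQuery();

Query approximateQuery = mythQB.phrase()
    .withSlop( 3 )
    .onField( "history" )
    .sentence( "Thou kill" )
    .createQuery();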
A range query searches for a value in between given boundaries (included or not) or for a value below or
above a given boundary (included or not).
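For example, to find myths starred between 0 (included) and 3 (excluded):

Query luceneQuery = mythQB.range()
    .onField( "starred" )
    .from( 0 ).to( 3 ).excludeLimit()
    .createQuery();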
Queries can be aggregated (combined) to create more complex queries. The following aggregation
operators are available:
SHOULD: the query should contain the matching elements of the subquery.
MUST: the query must contain the matching elements of the subquery.
MUST NOT: the query must not contain the matching elements of the subquery.
The subqueries can be any Lucene query including a boolean query itself.
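For example, to find popular myths (starred above 4) that are not urban (a sketch reusing the mythQB builder):

Query luceneQuery = mythQB.bool()
    .must( mythQB.keyword().onField( "description" ).matching( "urban" ).createQuery() )
      .not()
    .must( mythQB.range().onField( "starred" ).above( 4 ).createQuery() )
    .createQuery();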
The Hibernate Search query DSL is an easy to use and easy to read query API. Because it accepts and produces Lucene queries, you can incorporate query types not yet supported by the DSL.
The following is a summary of query options for query types and fields:
boostedTo (on query type and on field) boosts the whole query or the specific field to a given
factor.
withConstantScore (on query) returns all results that match the query and have a constant score equal to the boost.
ignoreAnalyzer (on field) ignores the analyzer when processing this field.
ignoreFieldBridge (on field) ignores field bridge when processing this field.
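A sketch combining several of these options in one boolean query:

Query luceneQuery = mythQB.bool()
    .should( mythQB.keyword().onField( "description" ).matching( "urban" ).createQuery() )
    .should( mythQB.keyword()
        .onField( "name" )
          .boostedTo( 3 )
          .ignoreAnalyzer()
        .matching( "urban" ).createQuery() )
    .must( mythQB.range()
          .boostedTo( 5 )
          .withConstantScore()
        .onField( "starred" )
        .above( 4 ).createQuery() )
    .createQuery();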
13.5.1.10.1. Generality
After building the Lucene query, wrap it within a Hibernate query. The query searches all indexed
entities and returns all types of indexed classes unless explicitly configured not to do so.
fullTextQuery = fullTextSession
.createFullTextQuery( luceneQuery, Customer.class );
// or
fullTextQuery = fullTextSession
.createFullTextQuery( luceneQuery, Item.class, Actor.class );
The first query returns only the matching Customers. The second query returns matching Items and Actors. The type restriction is polymorphic. As a result, if the two subclasses Salesman and Customer of the base class Person should be returned, specify Person.class to filter based on result types.
13.5.1.10.2. Pagination
To avoid performance degradation, it is recommended to restrict the number of returned objects per
query. A user navigating from one page to another page is a very common use case. The way to define
pagination is similar to defining pagination in a plain HQL or Criteria query.
org.hibernate.Query fullTextQuery =
fullTextSession.createFullTextQuery( luceneQuery, Customer.class );
fullTextQuery.setFirstResult(15); //start from the 15th element
fullTextQuery.setMaxResults(10); //return 10 elements
NOTE
It is still possible to get the total number of matching elements regardless of the pagination
via fulltextQuery.getResultSize().
13.5.1.10.3. Sorting
Apache Lucene contains a flexible and powerful result sorting mechanism. The default sorting is by
relevance and is appropriate for a large variety of use cases. The sorting mechanism can be changed to
sort by other properties using the Lucene Sort object to apply a Lucene sorting strategy.
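A sketch of a custom sort (assuming the title field was indexed untokenized for sorting):

org.hibernate.search.FullTextQuery query =
    s.createFullTextQuery( luceneQuery, Book.class );
org.apache.lucene.search.Sort sort =
    new Sort( new SortField( "title", SortField.Type.STRING ) );
query.setSort( sort );
List results = query.list();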
NOTE
Fields used for sorting must not be tokenized. For more information about tokenizing, see
@Field.
Hibernate Search loads objects using a single query if the return types are restricted to one class. Hibernate Search is restricted by the static fetching strategy defined in the domain model. It can be useful to refine the fetching strategy for a specific use case as follows:
Criteria criteria =
s.createCriteria( Book.class ).setFetchMode( "authors", FetchMode.JOIN
);
s.createFullTextQuery( luceneQuery ).setCriteriaQuery( criteria );
In this example, the query will return all Books matching the LuceneQuery. The authors collection will be
loaded from the same query using an SQL outer join.
In a criteria query definition, the type is guessed based on the provided criteria query. As a result, it is
not necessary to restrict the return entity types.
IMPORTANT
The fetch mode is the only adjustable property. Do not use a restriction (a where clause) on the Criteria query, because getResultSize() throws a SearchException if used in conjunction with a Criteria query with restriction.
13.5.1.10.5. Projection
In some cases, only a small subset of the properties is required. Use Hibernate Search to return a subset
of properties as follows:
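A minimal projection sketch follows; the Book entity and the projected field names are illustrative assumptions:

org.hibernate.search.FullTextQuery query =
    s.createFullTextQuery( luceneQuery, Book.class );
query.setProjection( "id", "summary", "body", "mainAuthor.name" );
List results = query.list();
Object[] firstResult = (Object[]) results.get(0);
Integer id = (Integer) firstResult[0];
String summary = (String) firstResult[1];
String body = (String) firstResult[2];
String authorName = (String) firstResult[3];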
Hibernate Search extracts properties from the Lucene index, converts them to their object representation, and returns a list of Object[]. Projections prevent a time-consuming database round-trip. However, they have the following constraints:
NOTE
Only the simple properties of the indexed entity or its embedded associations can be projected.
Therefore a whole embedded entity cannot be projected.
Projection does not work on collections or maps which are indexed via @IndexedEmbedded
Lucene provides metadata information about query results. Use projection constants to retrieve the
metadata.
org.hibernate.search.FullTextQuery query =
    s.createFullTextQuery( luceneQuery, Book.class );
query.setProjection( FullTextQuery.SCORE, FullTextQuery.THIS, "mainAuthor.name" );
List results = query.list();
Object[] firstResult = (Object[]) results.get(0);
float score = (Float) firstResult[0];
Book book = (Book) firstResult[1];
String authorName = (String) firstResult[2];
FullTextQuery.THIS: returns the initialized and managed entity (as a non projected query would
have done).
FullTextQuery.SCORE: returns the document score in the query. Scores are handy to compare one result against another for a given query, but are useless when comparing the results of different queries.
By default, Hibernate Search uses the most appropriate strategy to initialize entities matching the full text query. It executes one (or several) queries to retrieve the required entities. This approach minimizes the number of database trips when few of the retrieved entities are present in the persistence context (the session) or the second-level cache.
If entities are present in the second level cache, force Hibernate Search to look into the cache before
retrieving a database object.
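A sketch of this lookup configuration follows, assuming a User entity and an open session; ObjectLookupMethod.SECOND_LEVEL_CACHE checks the second-level cache before the database:

FullTextQuery query = session.createFullTextQuery( luceneQuery, User.class );
query.initializeObjectsWith(
    ObjectLookupMethod.SECOND_LEVEL_CACHE,
    DatabaseRetrievalMethod.QUERY
);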
ObjectLookupMethod defines the strategy to check if an object is easily accessible (without fetching it from the database). To make this lookup effective, you must also:
Enable the second-level cache for the relevant entity. This is done using annotations such as
@Cacheable.
Enable second-level cache read access for either Session, EntityManager or Query. Use
CacheMode.NORMAL in Hibernate native APIs or CacheRetrieveMode.USE in Java
Persistence APIs.
Customize how objects are loaded from the database using DatabaseRetrievalMethod as follows:
QUERY (default) uses a set of queries to load several objects in each batch. This approach is
recommended.
You can limit the time a query takes. If a query uses more than the defined amount of time, a QueryTimeoutException is raised (org.hibernate.QueryTimeoutException or javax.persistence.QueryTimeoutException, depending on the programmatic API).
To define the limit when using the native Hibernate APIs, use one of the following approaches:
//the Book entity and the five-second limit are illustrative;
//TimeUnit is java.util.concurrent.TimeUnit
FullTextQuery query = fullTextSession.createFullTextQuery( luceneQuery, Book.class );
query.setTimeout( 5, TimeUnit.SECONDS ); //define the timeout
try {
    query.list();
}
catch (org.hibernate.QueryTimeoutException e) {
    //do something, too slow
}
getResultSize(), iterate() and scroll() honor the timeout, but only until the end of the method call. As a result, the methods of Iterable or ScrollableResults ignore the timeout. Additionally, explain() does not honor this timeout period. This method is used for debugging and to check the reasons for slow performance of a query.
The following is the standard way to limit execution time using the Java Persistence API (JPA):
//define the timeout in milliseconds via the standard JPA query hint
query.setHint( "javax.persistence.query.timeout", 450 );
try {
    query.getResultList();
}
catch (javax.persistence.QueryTimeoutException e) {
    //do something, too slow
}
IMPORTANT
The example code does not guarantee that the query stops at the specified results
amount.
If you expect a reasonable number of results (for example using pagination) and expect to work on all of them, list() or uniqueResult() are recommended. list() works best if the entity batch-size is set up properly. Note that Hibernate Search has to process all Lucene Hits elements (within the pagination) when using list(), uniqueResult() and iterate().
If you wish to minimize Lucene document loading, scroll() is more appropriate. Do not forget to close
the ScrollableResults object when you are done, since it keeps Lucene resources. If you expect to use
scroll, but wish to load objects in batch, you can use query.setFetchSize(). When an object is
accessed, and if not already loaded, Hibernate Search will load the next fetchSize objects in one
pass.
13.5.2.2. ResultSize
It is sometimes useful to know the total number of matching documents, for example:
to provide a total search results feature, as provided by Google searches. For example, "1-10 of
about 888,000,000 results"
to implement a multi-step search engine that adds approximation if the restricted query returns
zero or not enough results
Of course it would be too costly to retrieve all the matching documents. Hibernate Search allows you to
retrieve the total number of matching documents regardless of the pagination parameters. Even more
interesting, you can retrieve the number of matching elements without triggering a single object load.
org.hibernate.search.FullTextQuery query =
    s.createFullTextQuery( luceneQuery, Book.class );
//return the number of matching books without loading a single one
assert 3245 == query.getResultSize();
org.hibernate.search.FullTextQuery query =
    s.createFullTextQuery( luceneQuery, Book.class );
query.setMaxResults(10);
List results = query.list();
//return the total number of matching books regardless of pagination
assert 3245 == query.getResultSize();
NOTE
Like Google, the number of results is an approximation if the index is not fully up-to-date with the database (an asynchronous cluster, for example).
13.5.2.3. ResultTransformer
Projection results are returned as Object arrays. If the data structure used for the object does not match
the requirements of the application, apply a ResultTransformer. The ResultTransformer builds the
required data structure after the query execution.
org.hibernate.search.FullTextQuery query =
    s.createFullTextQuery( luceneQuery, Book.class );
query.setProjection( "title", "mainAuthor.name" );
//build BookView instances from the projected fields
//(BookView and its aliases are illustrative)
query.setResultTransformer( new StaticAliasToBeanResultTransformer( BookView.class, "title", "author" ) );
List<BookView> results = (List<BookView>) query.list();
13.5.2.4. Understanding Results
If the results of a query are not what you expected, the Luke tool is useful in understanding the outcome. However, Hibernate Search also gives you access to the Lucene Explanation object for a given result (in a given query). This class is considered fairly advanced for Lucene users, but can provide a good understanding of the scoring of an object. You have two ways to access the Explanation object for a given result:
Use the fullTextQuery.explain(int) method
Use projection
The first approach takes a document ID as a parameter and returns the Explanation object. The document ID can be retrieved using projection and the FullTextQuery.DOCUMENT_ID constant.
WARNING
The Document ID is unrelated to the entity ID. Be careful not to confuse these
concepts.
In the second approach you project the Explanation object using the FullTextQuery.EXPLANATION
constant.
Use the Explanation object only when required as it is roughly as expensive as running the Lucene query
again.
13.5.2.5. Filters
Apache Lucene has a powerful feature that allows you to filter query results according to a custom filtering process. This is a very powerful way to apply additional data restrictions, especially since filters can be cached and reused. Use cases include:
security
temporal data, for example, view only last month's data
population filter, for example, search limited to a given category
Hibernate Search pushes the concept further by introducing the notion of parameterizable named filters
which are transparently cached. For people familiar with the notion of Hibernate Core filters, the API is
very similar:
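A sketch of enabling named filters on a query follows; the Driver entity and the parameter value are illustrative, and the filter names match the declarations shown below:

fullTextQuery = s.createFullTextQuery( query, Driver.class );
fullTextQuery.enableFullTextFilter( "bestDriver" );
fullTextQuery.enableFullTextFilter( "security" ).setParameter( "login", "andre" );
fullTextQuery.list(); //returns only best drivers where andre has credentials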
In this example we enabled two filters on top of the query. You can enable (or disable) as many filters as
you like.
Declaring filters is done through the @FullTextFilterDef annotation. This annotation can be on any
@Indexed entity regardless of the query the filter is later applied to. This implies that filter definitions are
global and their names must be unique. A SearchException is thrown in case two different
@FullTextFilterDef annotations with the same name are defined. Each named filter has to specify its
actual filter implementation.
@FullTextFilterDefs( {
@FullTextFilterDef(name = "bestDriver", impl =
BestDriversFilter.class),
@FullTextFilterDef(name = "security", impl =
SecurityFilterFactory.class)
})
public class Driver { ... }
BestDriversFilter is an example of a simple Lucene filter which reduces the result set to drivers whose
score is 5. In this example the specified filter implements the org.apache.lucene.search.Filter
directly and contains a no-arg constructor.
If your Filter creation requires additional steps or if the filter you want to use does not have a no-arg
constructor, you can use the factory pattern:
public class BestDriversFilterFactory {

    @Factory
    public Filter getFilter() {
        //some additional steps to cache the filter results per IndexReader
        Filter bestDriversFilter = new BestDriversFilter();
        return new CachingWrapperFilter(bestDriversFilter);
    }
}
Hibernate Search will look for a @Factory annotated method and use it to build the filter instance. The
factory must have a no-arg constructor.
Named filters come in handy where parameters have to be passed to the filter. For example a security
filter might want to know which security level you want to apply:
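A sketch of passing such a parameter at query time follows; the level value is illustrative and matches the setLevel setter shown in the implementation below:

fullTextQuery = s.createFullTextQuery( query, Driver.class );
fullTextQuery.enableFullTextFilter( "security" ).setParameter( "level", 5 );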
Each parameter name should have an associated setter on either the filter or filter factory of the targeted named filter definition.

Example: Using parameters in the actual filter implementation
public class SecurityFilterFactory {
    private Integer level;

    /**
     * injected parameter
     */
    public void setLevel(Integer level) {
        this.level = level;
    }

    @Key
    public FilterKey getKey() {
        StandardFilterKey key = new StandardFilterKey();
        key.addParameter( level );
        return key;
    }

    @Factory
    public Filter getFilter() {
        Query query = new TermQuery( new Term("level", level.toString() ) );
        return new CachingWrapperFilter( new QueryWrapperFilter(query) );
    }
}
Note the method annotated @Key returns a FilterKey object. The returned object has a special contract:
the key object must implement equals()/hashCode() so that two keys are equal if and only if the given Filter types are the same and the set of parameters is the same. In other words, two filter keys are equal if and only if the filters from which the keys are generated can be interchanged. The key object is used as a key in the cache mechanism.
In most cases, using the StandardFilterKey implementation will be good enough. It delegates the equals()/hashCode() implementation to each of the parameters' equals and hashCode methods.
As mentioned before, the defined filters are cached by default, and the cache uses a combination of hard and soft references to allow disposal of memory when needed. The hard reference cache keeps track of the most recently used filters and transforms the least used ones to SoftReferences when needed. Once the limit of the hard reference cache is reached, additional filters are cached as SoftReferences. To adjust the size of the hard reference cache, use hibernate.search.filter.cache_strategy.size (defaults to 128). For advanced use of filter caching, implement your own FilterCachingStrategy. The classname is defined by hibernate.search.filter.cache_strategy.
This filter caching mechanism should not be confused with caching the actual filter results. In Lucene it is common practice to wrap filters in a CachingWrapperFilter. The wrapper will cache the DocIdSet returned from the getDocIdSet(IndexReader reader) method to avoid expensive recomputation. It is important to mention that the computed DocIdSet is only cachable for the same IndexReader instance, because the reader effectively represents the state of the index at the moment it was opened. The document list cannot change within an opened IndexReader. A different/new IndexReader instance, however, works potentially on a different set of Documents (either from a different index or simply because the index has changed), hence the cached DocIdSet has to be recomputed.
Hibernate Search also helps with this aspect of caching. By default, the cache flag of @FullTextFilterDef is set to FilterCacheModeType.INSTANCE_AND_DOCIDSETRESULTS, which will automatically cache the filter instance as well as wrap the specified filter around a Hibernate specific implementation of CachingWrapperFilter. In contrast to Lucene’s version of this class, SoftReferences are used together with a hard reference count (see discussion about filter cache). The hard reference count can be adjusted using hibernate.search.filter.cache_docidresults.size (defaults to 5).
The wrapping behaviour can be controlled using the @FullTextFilterDef.cache parameter. There
are three different values for this parameter:
Value Definition

FilterCacheModeType.NONE: No filter instance and no result is cached by Hibernate Search. For every filter call, a new filter instance is created.

FilterCacheModeType.INSTANCE_ONLY: The filter instance is cached and reused across concurrent Filter.getDocIdSet() calls. DocIdSet results are not cached.

FilterCacheModeType.INSTANCE_AND_DOCIDSETRESULTS: Both the filter instance and the DocIdSet results are cached. This is the default value.
Last but not least - why should filters be cached? There are two areas where filter caching shines:
the system does not update the targeted entity index often (in other words, the IndexReader is
reused a lot)
the Filter’s DocIdSet is expensive to compute (compared to the time spent to execute the query)
In a sharded environment it is possible to execute queries on a subset of the available shards. This can
be done in two steps:
1. Create a sharding strategy that selects a subset of IndexManagers depending on a filter configuration.
    return getIndexManagersForAllShards();
}

/**
 * Optimization; don't search ALL shards and union the results; in this case, we
 * can be certain that all the data for a particular customer Filter is in a single
 * shard; simply return that shard by customerID.
 */
public IndexManager[] getIndexManagersForQuery(FullTextFilterImplementor[] filters) {
    FullTextFilter filter = getCustomerFilter(filters, "customer");
    if (filter == null) {
        return getIndexManagersForAllShards();
    }
    else {
        return new IndexManager[] {
            indexManagers[Integer.parseInt(filter.getParameter("customerID").toString())]
        };
    }
}
In this example, if the filter named customer is present, only the shard dedicated to this customer is queried; otherwise, all shards are returned. A given sharding strategy can react to one or more filters, depending on their parameters.
The second step is to activate the filter at query time. While the filter can be a regular filter (as defined in Filters), which also filters Lucene results after the query, you can make use of a special filter that will only be passed to the sharding strategy (and is otherwise ignored).
To use this feature, specify the ShardSensitiveOnlyFilter class when declaring your filter.
@Indexed
@FullTextFilterDef(name="customer", impl=ShardSensitiveOnlyFilter.class)
public class Customer {
...
}
Note that by using the ShardSensitiveOnlyFilter, you do not have to implement any Lucene filter. Using
filters and sharding strategy reacting to these filters is recommended to speed up queries in a sharded
environment.
13.5.3. Faceting
Faceted search is a technique which allows the results of a query to be divided into multiple categories.
This categorization includes the calculation of hit counts for each category and the ability to further
restrict search results based on these facets (categories). Consider, for example, a book search that results in fifteen hits displayed on the main part of the page. The navigation bar on the left, however, shows the category Computers & Internet with its subcategories Programming, Computer Science, Databases, Software, Web Development, Networking and Home Computing. For each of these subcategories, the number of books is shown that match the main search criteria and belong to the respective subcategory. This division of the category Computers & Internet is one concrete search facet. Another one is, for example, the average customer review.
In Hibernate Search, the classes QueryBuilder and FullTextQuery are the entry point into the faceting
API. The former creates faceting requests and the latter accesses the FacetManager. The
FacetManager applies faceting requests on a query and selects facets that are added to an existing
query to refine search results. The examples use the entity Cd as shown in the example below:
Example: Entity Cd

@Indexed
public class Cd {

    @Fields( {
        @Field,
        @Field(name = "name_un_analyzed", analyze = Analyze.NO)
    })
    private String name;

    @Field(analyze = Analyze.NO)
    @NumericField
    private int price;

    @Field(analyze = Analyze.NO)
    @DateBridge(resolution = Resolution.YEAR)
    private Date releaseYear;

    @Field(analyze = Analyze.NO)
    private String label;

    // setter/getter
    ...
}
NOTE
Prior to Hibernate Search 5.2, there was no need to explicitly use a @Facet annotation. In
Hibernate Search 5.2 it became necessary in order to use Lucene’s native faceting API.
The first step towards a faceted search is to create the FacetingRequest. Currently two types of faceting requests are supported: discrete faceting and range faceting. In the case of a discrete faceting request, you specify on which index field you want to facet (categorize) and which faceting options to apply. An example of a discrete faceting request can be seen in the following example:
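A sketch of such a request follows, using the Cd entity from above; the request name and option values are illustrative:

QueryBuilder builder = fullTextSession.getSearchFactory()
    .buildQueryBuilder()
    .forEntity( Cd.class )
    .get();
FacetingRequest labelFacetingRequest = builder.facet()
    .name( "labelFaceting" )
    .onField( "label" )
    .discrete()
    .orderedBy( FacetSortOrder.COUNT_DESC )
    .includeZeroCounts( false )
    .maxFacetCount( 1 )
    .createFacetingRequest();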
When executing this faceting request, a Facet instance will be created for each discrete value of the indexed field label. The Facet instance will record the actual field value, including how often this particular field value occurs within the original query results. orderedBy, includeZeroCounts and maxFacetCount are optional parameters which can be applied on any faceting request. orderedBy allows you to specify in which order the created facets will be returned. The default is FacetSortOrder.COUNT_DESC, but you can also sort on the field value or the order in which ranges were specified. includeZeroCount determines whether facets with a count of 0 will be included in the result (by default they are), and maxFacetCount allows you to limit the maximum number of facets returned.
NOTE
At the moment there are several preconditions an indexed field has to meet in order to
apply faceting on it. The indexed property must be of type String, Date or a subtype of
Number and null values should be avoided. Furthermore the property has to be indexed
with Analyze.NO and in case of a numeric property @NumericField needs to be
specified.
The creation of a range faceting request is quite similar, except that you have to specify ranges for the field values you are faceting on. A range faceting request can be seen below, where three different price ranges are specified. below and above can only be specified once, but you can specify as many from - to ranges as you want. For each range boundary you can also specify via excludeLimit whether it is included in the range or not.
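A sketch of a range faceting request on the price field follows; the boundary values are illustrative and builder is the QueryBuilder from the previous sketch:

FacetingRequest priceFacetingRequest = builder.facet()
    .name( "priceFaceting" )
    .onField( "price" )
    .range()
    .below( 1000 )
    .from( 1001 ).to( 1500 )
    .above( 1500 ).excludeLimit()
    .createFacetingRequest();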
A faceting request is applied to a query via the FacetManager class which can be retrieved via the
FullTextQuery class.
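A sketch of applying a faceting request follows, reusing priceFacetingRequest and builder from the sketches above; the match-all Lucene query is illustrative:

// create a match-all Lucene query (for example)
Query luceneQuery = builder.all().createQuery();
FullTextQuery fullTextQuery =
    fullTextSession.createFullTextQuery( luceneQuery, Cd.class );

// retrieve the facet manager and apply the faceting request
FacetManager facetManager = fullTextQuery.getFacetManager();
facetManager.enableFaceting( priceFacetingRequest );

// execute the query and retrieve the facets
List<Cd> cds = fullTextQuery.list();
List<Facet> facets = facetManager.getFacets( "priceFaceting" );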
You can enable as many faceting requests as you like and retrieve them afterwards via getFacets()
specifying the faceting request name. There is also a disableFaceting() method which allows you to
disable a faceting request by specifying its name.
Last but not least, you can apply any of the returned Facets as additional criteria on your original query in
order to implement a "drill-down" functionality. For this purpose FacetSelection can be utilized.
FacetSelections are available via the FacetManager and allow you to select a facet as query criteria
(selectFacets), remove a facet restriction (deselectFacets), remove all facet restrictions
(clearSelectedFacets) and retrieve all currently selected facets (getSelectedFacets). The following
snippet shows an example.
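A drill-down sketch follows, reusing the facet manager and facets from the previous sketch; selecting the first returned facet is illustrative:

FacetSelection facetSelection = facetManager.getFacetGroup( "priceFaceting" );
// add the first returned facet as a restriction on the original query
facetSelection.selectFacets( facets.get( 0 ) );
List<Cd> filteredCds = fullTextQuery.list();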
Query performance depends on several criteria:
The number of objects loaded: use pagination (always) or index projection (if needed).
The way Hibernate Search interacts with the Lucene readers: defines the appropriate reader strategy.
Caching frequently extracted values from the index: see Caching Index Values: FieldCache.
Caching Index Values: FieldCache
The primary function of a Lucene index is to identify matches to your queries. After the query is performed, the results must be analyzed to extract useful information. Hibernate Search would typically need to extract the Class type and the primary key.
Extracting the needed values from the index has a performance cost, which in some cases might be very
low and not noticeable, but in some other cases might be a good candidate for caching.
The requirements depend on the kind of Projections being used, as in some cases the Class type is not
needed as it can be inferred from the query context or other means.
Using the @CacheFromIndex annotation you can experiment with different kinds of caching of the main
metadata fields required by Hibernate Search:
@Indexed
@CacheFromIndex( { CLASS, ID } )
public class Essay {
...
CLASS: Hibernate Search will use a Lucene FieldCache to improve performance of the Class
type extraction from the index.
This value is enabled by default, and is what Hibernate Search will apply if you do not specify the
@CacheFromIndex annotation.
ID: Extracting the primary identifier will use a cache. This likely provides the best performing queries, but will consume much more memory, which in turn might reduce performance.
NOTE
Measure the performance and memory consumption impact after warmup (executing
some queries). Performance may improve by enabling Field Caches but this is not always
the case.
Memory usage: these caches can be quite memory hungry. Typically the CLASS cache has
lower requirements than the ID cache.
Index warmup: when using field caches, the first query on a new index or segment will be slower
than when you do not have caching enabled.
With some queries the class type will not be needed at all; in that case, even if you enabled the CLASS field cache, this cache might not be used. For example, if you are targeting a single class, all returned values will obviously be of that type (this is evaluated at each Query execution).
For the ID FieldCache to be used, the ids of targeted entities must use a TwoWayFieldBridge (as all built-in bridges do), and all types being loaded in a specific query must use the same fieldname for the id and have ids of the same type (this is evaluated at each Query execution).
All these manual indexing methods affect the Lucene index only; no changes are applied to the database.
Directly add an object or instance to the index using FullTextSession.index(T entity). The index is updated at transaction commit, because Hibernate Search applies changes to the index during the commit.
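A sketch of manual indexing inside a transaction follows; the Customer entity and the customers collection are illustrative:

FullTextSession fullTextSession = Search.getFullTextSession( session );
Transaction tx = fullTextSession.beginTransaction();
for ( Customer customer : customers ) {
    fullTextSession.index( customer );
}
tx.commit(); //index changes are applied at commit time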
Use a MassIndexer to add all instances for a type (or for all indexed types). See Using a MassIndexer for
more information.
The purging operation permits the removal of a single entity or all entities of a given type from a Lucene
index without physically removing them from the database. This operation is performed using the
FullTextSession.
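A purge sketch follows; the Customer entity and the ids collection are illustrative:

FullTextSession fullTextSession = Search.getFullTextSession( session );
Transaction tx = fullTextSession.beginTransaction();
for ( Long id : ids ) {
    fullTextSession.purge( Customer.class, id );
}
// or remove all Customer entries from the index:
// fullTextSession.purgeAll( Customer.class );
tx.commit(); //index is updated at commit time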
NOTE
All manual indexing methods (index, purge, and purgeAll) only affect the index, not the
database, nevertheless they are transactional and as such they will not be applied until the
transaction is successfully committed, or you make use of flushToIndexes.
Changing the entity mapping in the indexer may require the entire index to be updated. For example, if
an existing field is to be indexed using a different analyzer, the index will need to be rebuilt for affected
types.
Additionally, if the database is replaced by restoring from a backup or being imported from a legacy system, the index will need to be rebuilt from existing data. Hibernate Search provides two main strategies:
Use FullTextSession.flushToIndexes() periodically, while using FullTextSession.index() on all entities.
Use a MassIndexer.
This strategy consists of removing the existing index and then adding all entities back to the index using
FullTextSession.purgeAll() and FullTextSession.index(), however there are some
memory and efficiency constraints. For maximum efficiency Hibernate Search batches index operations
and executes them at commit time. If you expect to index a lot of data you need to be careful about
memory consumption since all documents are kept in a queue until the transaction commit. You can
potentially face an OutOfMemoryException if you do not empty the queue periodically; to do this use
fullTextSession.flushToIndexes(). Every time fullTextSession.flushToIndexes() is
called (or if the transaction is committed), the batch queue is processed, applying all index changes. Be
aware that, once flushed, the changes cannot be rolled back.
fullTextSession.setFlushMode(FlushMode.MANUAL);
fullTextSession.setCacheMode(CacheMode.IGNORE);
transaction = fullTextSession.beginTransaction();
//Scrollable results will avoid loading too many objects in memory
ScrollableResults results = fullTextSession.createCriteria( Email.class )
    .setFetchSize(BATCH_SIZE)
    .scroll( ScrollMode.FORWARD_ONLY );
int index = 0;
while( results.next() ) {
    index++;
    fullTextSession.index( results.get(0) ); //index each element
    if (index % BATCH_SIZE == 0) {
        fullTextSession.flushToIndexes(); //apply changes to indexes
        fullTextSession.clear(); //free memory since the queue is processed
    }
}
transaction.commit();
NOTE
Use a batch size that guarantees that your application will not run out of memory: with a larger batch size, objects are fetched faster from the database, but more memory is needed.
Hibernate Search’s MassIndexer uses several parallel threads to rebuild the index. You can optionally select which entities need to be reloaded or have it reindex all entities. This approach is optimized for best performance, but requires setting the application to maintenance mode. Querying the index is not recommended when a MassIndexer is busy.
fullTextSession.createIndexer().startAndWait();
This will rebuild the index, deleting it and then reloading all entities from the database. Although it is
simple to use, some tweaking is recommended to speed up the process.
fullTextSession
    .createIndexer( User.class )
    .batchSizeToLoadObjects( 25 )
    .cacheMode( CacheMode.NORMAL )
    .threadsToLoadObjects( 12 )
    .idFetchSize( 150 )
    .progressMonitor( monitor ) //a MassIndexerProgressMonitor implementation
    .startAndWait();
This will rebuild the index of all User instances (and subtypes), and will create 12 parallel threads to load
the User instances using batches of 25 objects per query. These same 12 threads will also need to
process indexed embedded relations and custom FieldBridges or ClassBridges to output a
Lucene document. The threads trigger lazy loading of additional attributes during the conversion
process. Because of this, a high number of threads working in parallel is required. The number of threads
working on actual index writing is defined by the back-end configuration of each index.
NOTE
The ideal number of threads to achieve best performance is highly dependent on your overall architecture, database design and data values. All internal thread groups have meaningful names, so they should be easily identified with most diagnostic tools, including thread dumps.
NOTE
Other parameters which affect indexing time and memory consumption are:
hibernate.search.[default|<indexname>].exclusive_index_use
hibernate.search.[default|<indexname>].indexwriter.max_buffered_docs
hibernate.search.[default|<indexname>].indexwriter.max_merge_docs
hibernate.search.[default|<indexname>].indexwriter.merge_factor
hibernate.search.[default|<indexname>].indexwriter.merge_min_size
hibernate.search.[default|<indexname>].indexwriter.merge_max_size
hibernate.search.[default|<indexname>].indexwriter.merge_max_optimize_size
hibernate.search.[default|<indexname>].indexwriter.merge_calibrate_by_deletes
hibernate.search.[default|<indexname>].indexwriter.ram_buffer_size
hibernate.search.[default|<indexname>].indexwriter.term_index_interval
Previous versions also had a max_field_length setting, but this was removed from Lucene; it is possible to obtain a similar effect by using a LimitTokenCountAnalyzer.
All .indexwriter parameters are Lucene specific and Hibernate Search passes these parameters
through.
The MassIndexer uses a forward only scrollable result to iterate on the primary keys to be loaded, but
MySQL’s JDBC driver will load all values in memory. To avoid this "optimization" set idFetchSize to
Integer.MIN_VALUE.
Optimizing the Lucene index speeds up searches, but has no effect on the index update performance.
Searches can be performed during an optimization process, however they will be slower than expected.
All index updates are on hold during the optimization. It is therefore recommended to schedule
optimization:
MassIndexer (see Using a MassIndexer) optimizes indexes by default at the start and at the end of
processing. Use MassIndexer.optimizeAfterPurge and MassIndexer.optimizeOnFinish to
change this default behavior.
The configuration for automatic index optimization can be defined either globally or per index:
hibernate.search.default.optimizer.operation_limit.max = 1000
hibernate.search.default.optimizer.transaction_limit.max = 100
hibernate.search.Animal.optimizer.transaction_limit.max = 50
hibernate.search.default.optimizer.implementation =
com.acme.worlddomination.SmartOptimizer
hibernate.search.default.optimizer.SomeOption = CustomConfigurationValue
hibernate.search.humans.optimizer.implementation = default
The keyword default can be used to select the Hibernate Search default implementation; all properties
after the .optimizer key separator will be passed to the implementation’s initialize method at start.
FullTextSession fullTextSession = Search.getFullTextSession(regularSession);
SearchFactory searchFactory = fullTextSession.getSearchFactory();
searchFactory.optimize(Order.class);
// or
searchFactory.optimize();
The first example optimizes the Lucene index holding Orders and the second optimizes all indexes.
NOTE
searchFactory.optimize() has no effect on a JMS back end. You must apply the
optimize operation on the Master node.
Apache Lucene has a few parameters to influence how optimization is performed. Hibernate Search exposes those parameters:

hibernate.search.[default|<indexname>].indexwriter.max_buffered_docs
hibernate.search.[default|<indexname>].indexwriter.max_merge_docs
hibernate.search.[default|<indexname>].indexwriter.merge_factor
hibernate.search.[default|<indexname>].indexwriter.ram_buffer_size
hibernate.search.[default|<indexname>].indexwriter.term_index_interval
FullTextSession fullTextSession = Search.getFullTextSession(regularSession);
SearchFactory searchFactory = fullTextSession.getSearchFactory();
IndexReader reader = searchFactory.getIndexReaderAccessor().open(Order.class);
try {
    //perform read-only operations on the reader
}
finally {
    searchFactory.getIndexReaderAccessor().close(reader);
}
In this example the SearchFactory determines which indexes are needed to query this entity (considering
a Sharding strategy). Using the configured ReaderProvider on each index, it returns a compound
IndexReader on top of all involved indexes. Because this IndexReader is shared amongst several
clients, you must adhere to the following rules:
Do not use this IndexReader for modification operations (it is a read-only IndexReader, and any such attempt will result in an exception).
Aside from those rules, you can use the IndexReader freely, especially to perform native Lucene queries. Using the shared IndexReaders will make most queries more efficient than opening one directly from, for example, the filesystem.
As an alternative to the method open(Class… types) you can use open(String… indexNames), allowing
you to pass in one or more index names. Using this strategy you can also select a subset of the indexes
for any indexed type if sharding is used.
IndexReader reader =
searchFactory.getIndexReaderAccessor().open("Products.1", "Products.3");
If you know your index is represented as a Directory and need to access it, you can get a reference to the Directory via the IndexManager. Cast the IndexManager to a DirectoryBasedIndexManager and then use getDirectoryProvider().getDirectory() to get a reference to the underlying Directory. This is not recommended; use the IndexReader instead.
In some cases it can be useful to split (shard) the indexed data of a given entity into several Lucene indexes, for example when:
A single index is so large that index update times are slowing the application down.
A typical search will only hit a subset of the index, such as when data is naturally segmented by
customer, region or application.
By default sharding is not enabled unless the number of shards is configured. To do this use the
hibernate.search.<indexName>.sharding_strategy.nbr_of_shards property.
hibernate.search.<indexName>.sharding_strategy.nbr_of_shards = 5
The IndexShardingStrategy is responsible for splitting the data into sub-indexes. The default sharding strategy splits the data according to the hash value of the ID string representation (generated by the FieldBridge). This ensures fairly balanced sharding. You can replace the default strategy by implementing a custom IndexShardingStrategy. To use your custom strategy, you have to set the hibernate.search.<indexName>.sharding_strategy property.
hibernate.search.<indexName>.sharding_strategy =
my.shardingstrategy.Implementation
The IndexShardingStrategy property also allows for optimizing searches by selecting which shard to run
the query against. By activating a filter a sharding strategy can select a subset of the shards used to
answer a query (IndexShardingStrategy.getIndexManagersForQuery) and thus speed up the query
execution.
Each shard has an independent IndexManager and so can be configured to use a different directory
provider and back-end configuration. The IndexManager index names for the Animal entity in the
example below are Animal.0 to Animal.4. In other words, each shard has the name of its owning
index followed by . (dot) and its index number.
hibernate.search.default.indexBase = /usr/lucene/indexes
hibernate.search.Animal.sharding_strategy.nbr_of_shards = 5
hibernate.search.Animal.directory_provider = filesystem
hibernate.search.Animal.0.indexName = Animal00
hibernate.search.Animal.3.indexBase = /usr/lucene/sharded
hibernate.search.Animal.3.indexName = Animal03
In the example above, the configuration uses the default id string hashing strategy and shards the Animal index into five sub-indexes. All sub-indexes are filesystem instances, and the directory where each sub-index is stored is as follows:
Animal.0: /usr/lucene/indexes/Animal00
Animal.1: /usr/lucene/indexes/Animal.1
Animal.2: /usr/lucene/indexes/Animal.2
Animal.3: /usr/lucene/sharded/Animal03
Animal.4: /usr/lucene/indexes/Animal.4
When implementing an IndexShardingStrategy, any field can be used to determine the sharding selection. Consider that to handle deletions, purge and purgeAll operations, the implementation might need to return one or more indexes without being able to read all the field values or the primary identifier. If the information is not enough to pick a single index, all indexes should be returned, so that the delete operation will be propagated to all indexes potentially containing the documents to be deleted.
Factor Description
tf(t in d) Term frequency factor for the term (t) in the document (d).
coord(q,d) Score factor based on how many of the query terms are found in the
specified document.
It is beyond the scope of this manual to explain this formula in more detail. Please refer to Similarity’s
Javadocs for more information.
First you can set the default similarity by specifying the fully qualified classname of your Similarity implementation using the property hibernate.search.similarity. The default value is org.apache.lucene.search.DefaultSimilarity.
You can also override the similarity used for a specific index by setting the similarity property
hibernate.search.default.similarity = my.custom.Similarity
Finally you can override the default similarity on class level using the @Similarity annotation.
@Entity
@Indexed
@Similarity(impl = DummySimilarity.class)
public class Book {
...
}
As an example, let us assume it is not important how often a term appears in a document. Documents
with a single occurrence of the term should be scored the same as documents with multiple occurrences.
In this case your custom implementation of the method tf(float freq) should return 1.0.
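A sketch of such an implementation follows, matching the DummySimilarity class referenced above; the package of DefaultSimilarity assumes the Lucene 5 layout:

import org.apache.lucene.search.similarities.DefaultSimilarity;

public class DummySimilarity extends DefaultSimilarity {
    @Override
    public float tf( float freq ) {
        // score single and repeated occurrences of a term identically
        return 1.0f;
    }
}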
WARNING
When two entities share the same index they must declare the same Similarity
implementation. Classes in the same class hierarchy always share the index, so it is
not allowed to override the Similarity implementation in a subtype.
Likewise, it does not make sense to define the similarity via the index setting and the
class-level setting as they would conflict. Such a configuration will be rejected.
Hibernate Search handles errors that occur during indexing with an error handler; by default, errors are logged, which corresponds to the following configuration:

hibernate.search.error_handler = log

The default exception handling occurs for both synchronous and asynchronous indexing. Hibernate Search provides an easy mechanism to override the default error handling implementation.
In order to provide your own implementation you must implement the ErrorHandler interface, which
provides the handle(ErrorContext context) method. ErrorContext provides a reference to the
primary LuceneWork instance, the underlying exception and any subsequent LuceneWork instances
that could not be processed due to the primary exception.
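A minimal sketch of such an implementation follows; in Hibernate Search 5.x the interface also declares a handleException method, and the handling bodies here are placeholders:

public class CustomerErrorHandler implements ErrorHandler {

    @Override
    public void handle( ErrorContext context ) {
        LuceneWork primaryFailure = context.getOperationAtFault();
        Throwable exception = context.getThrowable();
        // handle the failed index operation(s), for example by logging them
    }

    @Override
    public void handleException( String errorMsg, Throwable exception ) {
        // handle errors that are not tied to a specific index operation
    }
}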
To register this error handler with Hibernate Search you must declare the fully qualified classname of
your ErrorHandler implementation in the configuration properties:
hibernate.search.error_handler = CustomerErrorHandler
Disable Indexing
To disable Hibernate Search indexing, change the indexing_strategy configuration option to
manual, then restart JBoss EAP.
hibernate.search.indexing_strategy = manual
To disable Hibernate Search completely, set the autoregister_listeners configuration option to false, then restart JBoss EAP.

hibernate.search.autoregister_listeners = false
13.9. MONITORING
Hibernate Search offers access to a Statistics object via SearchFactory.getStatistics(). It
allows you, for example, to determine which classes are indexed and how many entities are in the index.
This information is always available. However, by specifying the
hibernate.search.generate_statistics property in your configuration you can also collect total
and average Lucene query and object loading timings.
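For example, the statistics collection can be enabled with the following property:

hibernate.search.generate_statistics = true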
Monitoring Indexing
If the mass indexer API is used, you can monitor indexing progress using the
IndexingProgressMonitorMBean bean. The bean is only bound to JMX while indexing is in
progress.
NOTE
JMX beans can be accessed remotely using JConsole by setting the system property
com.sun.management.jmxremote to true.
CHAPTER 14. BEAN VALIDATION
Hibernate Validator is the JBoss EAP implementation of Bean Validation. It is also the reference
implementation of the JSR.
JBoss EAP is 100% compliant with JSR 349 Bean Validation 1.1 specification. Hibernate Validator also
provides additional features to the specification.
To get started with Bean Validation, see the bean-validation quickstart that ships with JBoss EAP.
For information about how to download and run the quickstarts, see Using the Quickstart Examples.
The built-in validation constraints for Hibernate Validator are listed here: Hibernate Validator Constraints.
NOTE
The @Valid annotation is a part of the Bean Validation specification, even though it is located in the javax.validation package rather than javax.validation.constraints.
@NotEmpty: Checks that the string is not null nor empty, or that the collection is not null nor empty. Hibernate metadata impact: columns are not null for String.
Creating a Bean Validation custom constraint requires that you create a constraint annotation and
implement a constraint validator. The following abbreviated code examples are taken from the bean-
validation-custom-constraint quickstart that ships with JBoss EAP. See that quickstart for a
complete working example.
The following example shows the personAddress field of entity Person is validated using a set of
custom constraints defined in the class AddressValidator.
package org.jboss.as.quickstarts.bean_validation_custom_constraint;

@Entity
@Table(name = "person")
public class Person implements Serializable {

    @Id
    @GeneratedValue
    @Column(name = "person_id")
    private Long personId;

    @NotNull
    @Size(min = 4)
    private String firstName;

    @NotNull
    @Size(min = 4)
    private String lastName;

    // The custom @Address constraint validates the associated address
    @NotNull
    @OneToOne(mappedBy = "person", cascade = CascadeType.ALL)
    @Valid
    @Address
    private PersonAddress personAddress;

    public Person() {
    }

    public Person(String firstName, String lastName, PersonAddress address) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.personAddress = address;
    }
}
package org.jboss.as.quickstarts.bean_validation_custom_constraint;
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.validation.Constraint;
import javax.validation.Payload;
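The declaration of the @Address annotation itself is abbreviated from the quickstart; the message text and targets below are illustrative, while the message, groups, and payload attributes are required of every constraint annotation by the Bean Validation specification:

@Target({ ElementType.FIELD, ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy = { AddressValidator.class })
@Documented
public @interface Address {
    String message() default "Address is not valid";

    Class<?>[] groups() default {};

    Class<? extends Payload>[] payload() default {};
}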
package org.jboss.as.quickstarts.bean_validation_custom_constraint;
import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.OneToOne;
import javax.persistence.PrimaryKeyJoinColumn;
import javax.persistence.Table;
@Entity
@Table(name = "person_address")
public class PersonAddress implements Serializable {
@Id
@Column(name = "person_id", unique = true, nullable = false)
@GeneratedValue(strategy = GenerationType.SEQUENCE)
private Long personId;
@OneToOne
@PrimaryKeyJoinColumn
private Person person;
public PersonAddress() {
    }
    ...
Having defined the annotation, you need to create a constraint validator that is able to validate elements
with an @Address annotation. To do so, implement the interface ConstraintValidator as shown
below:
package org.jboss.as.quickstarts.bean_validation_custom_constraint;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;
import org.jboss.as.quickstarts.bean_validation_custom_constraint.PersonAddress;

public class AddressValidator implements ConstraintValidator<Address, PersonAddress> {

    public void initialize(Address constraintAnnotation) {
    }
    /**
     * 1. A null address is handled by the @NotNull constraint on the @Address.
     * 2. The address should have all the data values specified.
     * 3. Pin code in the address should be of at least 6 characters.
     * 4. The country in the address should be of at least 4 characters.
     */
    public boolean isValid(PersonAddress value, ConstraintValidatorContext context) {
        if (value == null) {
            return true;
        }

        if (value.getCity().isEmpty() || value.getCountry().isEmpty()
            || value.getLocality().isEmpty() || value.getPinCode().isEmpty()
            || value.getState().isEmpty() || value.getStreetAddress().isEmpty()) {
            return false;
        }

        if (value.getPinCode().length() < 6) {
            return false;
        }

        if (value.getCountry().length() < 4) {
            return false;
        }

        return true;
    }
}
<validation-config
    xmlns="http://jboss.org/xml/ns/javax/validation/configuration"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://jboss.org/xml/ns/javax/validation/configuration">

    <default-provider>
        org.hibernate.validator.HibernateValidator
    </default-provider>
    <message-interpolator>
        org.hibernate.validator.messageinterpolation.ResourceBundleMessageInterpolator
    </message-interpolator>
    <constraint-validator-factory>
        org.hibernate.validator.engine.ConstraintValidatorFactoryImpl
    </constraint-validator-factory>

    <constraint-mapping>
        /constraints-example.xml
    </constraint-mapping>

    <property name="prop1">value1</property>
    <property name="prop2">value2</property>
</validation-config>
The default-provider node allows you to choose the bean validation provider. This is useful if there is more than one provider on the classpath. The message-interpolator and constraint-validator-factory properties are used to customize the implementations used for the MessageInterpolator and ConstraintValidatorFactory interfaces, which are defined in the javax.validation package. The constraint-mapping element lists additional XML files containing the actual constraint configuration.
CHAPTER 15. CREATING WEBSOCKET APPLICATIONS
A connection is first established between client and server as an HTTP connection. The client then requests a WebSocket connection using the Upgrade header. All communications are then full-duplex over the same TCP/IP connection, with minimal data overhead. Because each message does not include unnecessary HTTP header content, WebSocket communications require less bandwidth. The result is a low-latency communications path suited to applications that require real-time responsiveness.
The JBoss EAP WebSocket implementation provides full dependency injection support for server
endpoints, however, it does not provide CDI services for client endpoints.
A Java client or a WebSocket enabled HTML client. You can verify HTML client browser support
at this location: http://caniuse.com/#feat=websockets
connect(): This function creates the WebSocket connection passing the WebSocket URI.
The resource location matches the resource defined in the server endpoint class. This
function also intercepts and handles the WebSocket onopen, onmessage, onerror, and
onclose events.
sendMessage(): This function gets the name entered in the form, creates a message, and
sends it using a WebSocket.send() command.
displayMessage(): This function sets the display message on the page to the value
returned by the WebSocket endpoint method.
<html>
<head>
    <title>WebSocket: Say Hello</title>
    <link rel="stylesheet" type="text/css" href="resources/css/hello.css" />
    <script type="text/javascript">
        var websocket = null;

        function connect() {
            var wsURI = 'ws://' + window.location.host + '/jboss-websocket-hello/websocket/helloName';
            websocket = new WebSocket(wsURI);
            websocket.onopen = function() {
                displayStatus('Open');
                document.getElementById('sayHello').disabled = false;
                displayMessage('Connection is now open. Type a name and click Say Hello to send a message.');
            };
            websocket.onmessage = function(event) {
                // log the event
                displayMessage('The response was received! ' + event.data, 'success');
            };
            websocket.onerror = function(event) {
                // log the event
                displayMessage('Error! ' + event.data, 'error');
            };
            websocket.onclose = function() {
                displayStatus('Closed');
                displayMessage('The connection was closed or timed out. Please click the Open Connection button to reconnect.');
                document.getElementById('sayHello').disabled = true;
            };
        }

        function disconnect() {
            if (websocket !== null) {
                websocket.close();
                websocket = null;
            }
            message.setAttribute("class", "message");
            message.value = 'WebSocket closed.';
            // log the event
        }

        function sendMessage() {
            if (websocket !== null) {
                var content = document.getElementById('name').value;
                websocket.send(content);
            } else {
                displayMessage('WebSocket connection is not established. Please click the Open Connection button.', 'error');
            }
        }

        function displayMessage(data, style) {
            var message = document.getElementById('hellomessage');
            message.setAttribute("class", style);
            message.value = data;
        }

        function displayStatus(status) {
            var currentStatus = document.getElementById('currentstatus');
            currentStatus.value = status;
        }
    </script>
</head>
<body>
    <div>
        <h1>Welcome to Red Hat JBoss Enterprise Application Platform!</h1>
        <div>This is a simple example of a WebSocket implementation.</div>
        <div id="connect-container">
            <div>
                <fieldset>
                    <legend>Connect or disconnect using websocket :</legend>
                    <input type="button" id="connect" onclick="connect();" value="Open Connection" />
                    <input type="button" id="disconnect" onclick="disconnect();" value="Close Connection" />
                </fieldset>
            </div>
            <div>
                <fieldset>
                    <legend>Type your name below, then click the `Say Hello` button :</legend>
                    <input id="name" type="text" size="40" style="width: 40%"/>
                    <input type="button" id="sayHello" onclick="sendMessage();" value="Say Hello" disabled="disabled"/>
                </fieldset>
            </div>
            <div>Current WebSocket Connection Status: <output id="currentstatus" class="message">Closed</output></div>
            <div>
                <output id="hellomessage" />
            </div>
        </div>
    </div>
</body>
</html>
Annotated Endpoint: The endpoint class uses annotations to interact with the
WebSocket events. It is simpler to code than the programmatic endpoint.
The code example below uses the annotated endpoint approach and handles the onOpen, onMessage, and onClose events.
package org.jboss.as.quickstarts.websocket_hello;

import javax.websocket.CloseReason;
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/websocket/helloName")
public class HelloName {

    @OnMessage
    public String sayHello(String name) {
        System.out.println("Say hello to '" + name + "'");
        return ("Hello " + name);
    }

    @OnOpen
    public void helloOnOpen(Session session) {
        System.out.println("WebSocket opened: " + session.getId());
    }

    @OnClose
    public void helloOnClose(CloseReason reason) {
        System.out.println("WebSocket connection closed with CloseCode: " + reason.getCloseCode());
    }
}
The WebSocket API dependency is declared in the project POM file as follows:

<dependency>
<groupId>org.jboss.spec.javax.websocket</groupId>
<artifactId>jboss-websocket-api_1.0_spec</artifactId>
<version>1.0.0.Final</version>
<scope>provided</scope>
</dependency>
The quickstarts that ship with JBoss EAP include additional WebSocket client and endpoint code
examples.
CHAPTER 16. JAVA AUTHORIZATION CONTRACT FOR CONTAINERS (JACC)
JBoss EAP implements support for JACC within the security functionality of the security subsystem. To enable JACC authorization for a web application, declare a JACC-enabled security domain in its jboss-web.xml, for example:

<jboss-web>
<security-domain>jacc</security-domain>
<use-jboss-authorization>true</use-jboss-authorization>
</jboss-web>
EJB method permissions are declared in the ejb-jar.xml assembly descriptor, for example:

<ejb-jar>
<assembly-descriptor>
<method-permission>
<description>The employee and temp-employee roles may access any
method of the EmployeeService bean </description>
<role-name>employee</role-name>
<role-name>temp-employee</role-name>
<method>
<ejb-name>EmployeeService</ejb-name>
<method-name>*</method-name>
</method>
</method-permission>
</assembly-descriptor>
</ejb-jar>
You can also constrain the authentication and authorization mechanisms for an EJB by using a security
domain, just as you can do for a web application. Security domains are declared in the jboss-
ejb3.xml descriptor, in the <security> child element. In addition to the security domain, you can also
specify the <run-as-principal>, which changes the principal that the EJB runs as.
<ejb-jar>
<assembly-descriptor>
<security>
<ejb-name>*</ejb-name>
<security-domain>myDomain</security-domain>
<run-as-principal>myPrincipal</run-as-principal>
</security>
</assembly-descriptor>
</ejb-jar>
CHAPTER 17. JAVA AUTHENTICATION SPI FOR CONTAINERS (JASPI)
To authenticate against a JASPI provider, add the <authentication-jaspi> element to the relevant security domain. The configuration is similar to a standard authentication module, but login module elements are enclosed in a login-module-stack element:

<authentication-jaspi>
<login-module-stack name="...">
<login-module code="..." flag="...">
<module-option name="..." value="..."/>
</login-module>
</login-module-stack>
<auth-module code="..." login-module-stack-ref="...">
<module-option name="..." value="..."/>
</auth-module>
</authentication-jaspi>
The login module itself is configured the same way as a standard authentication module.
The web-based management console does not expose the configuration of JASPI authentication
modules. You must stop the JBoss EAP running instance completely before adding the configuration
directly to /domain/configuration/domain.xml or
/standalone/configuration/standalone.xml.
CHAPTER 18. JAVA BATCH APPLICATION DEVELOPMENT
To configure your application to use batch processing on JBoss EAP, you must specify the required
dependencies. Additional JBoss EAP features for batch processing include Job Specification Language
(JSL) inheritance, and batch property injections.
<dependencies>
<dependency>
<groupId>org.jboss.spec.javax.batch</groupId>
<artifactId>jboss-batch-api_1.0_spec</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>javax.enterprise</groupId>
<artifactId>cdi-api</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.jboss.spec.javax.annotation</groupId>
<artifactId>jboss-annotations-api_1.2_spec</artifactId>
<scope>provided</scope>
</dependency>
Example: Inherit Step and Flow Within the Same Job XML File
Parent elements (step, flow, etc.) are marked with the attribute abstract="true" to exclude them
from direct execution. Child elements contain a parent attribute, which points to the parent element.
<job id="..." xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
<!-- abstract step and flow -->
<step id="step0" abstract="true">
<batchlet ref="batchlet0"/>
</step>
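A concrete child element within the same file then references the parent through its parent attribute; in this sketch the child step id is illustrative, and the element closes the job:

    <!-- concrete step reusing the abstract parent's batchlet -->
    <step id="step1" parent="step0"/>
</job>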
When a child element inherits from a parent element in a different job XML file, the child element must specify:
a jsl-name attribute, which specifies the job XML file name (without the .xml extension) containing the parent element, and
a parent attribute, which points to the parent element in the job XML file specified by jsl-name.
Parent elements are marked with the attribute abstract="true" to exclude them from direct execution.
chunk-child.xml
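The chunk-child.xml content is abbreviated in this guide; a minimal sketch, with job and step ids assumed for illustration, could look as follows (the chunk-parent.xml fragment below is likewise shown starting from its inheritable chunk sub-elements):

<job id="chunk-child" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
    <step id="chunk-child-step">
        <!-- inherit the chunk named "chunk-parent" from the file chunk-parent.xml -->
        <chunk parent="chunk-parent" jsl-name="chunk-parent"/>
    </step>
</job>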
chunk-parent.xml
<checkpoint-algorithm ref="parent">
<properties>
<property name="parent" value="parent"></property>
</properties>
</checkpoint-algorithm>
<skippable-exception-classes>
<include class="java.lang.Exception"></include>
<exclude class="java.io.IOException"></exclude>
</skippable-exception-classes>
<retryable-exception-classes>
<include class="java.lang.Exception"></include>
<exclude class="java.io.IOException"></exclude>
</retryable-exception-classes>
<no-rollback-exception-classes>
<include class="java.lang.Exception"></include>
<exclude class="java.io.IOException"></exclude>
</no-rollback-exception-classes>
</chunk>
</step>
</job>
Batch properties can be injected into fields of any of the following Java types:
java.lang.String
java.lang.StringBuilder
java.lang.StringBuffer
boolean, Boolean
int, Integer
double, Double
long, Long
char, Character
float, Float
short, Short
byte, Byte
java.math.BigInteger
java.math.BigDecimal
java.net.URL
java.net.URI
java.io.File
java.util.jar.JarFile
java.util.zip.ZipFile
java.util.Date
java.lang.Class
java.net.Inet4Address
java.net.Inet6Address
java.util.logging.Logger
java.util.regex.Pattern
javax.management.ObjectName
The following array types are also supported; a comma-separated list of values in the job XML file is converted to the corresponding array:
java.lang.String[]
boolean[], Boolean[]
int[], Integer[]
double[], Double[]
long[], Long[]
char[], Character[]
float[], Float[]
short[], Short[]
byte[], Byte[]
java.math.BigInteger[]
java.math.BigDecimal[]
java.net.URL[]
java.net.URI[]
java.io.File[]
java.util.jar.JarFile[]
java.util.zip.ZipFile[]
java.util.Date[]
java.lang.Class[]
<batchlet ref="myBatchlet">
<properties>
<property name="number" value="10"/>
</properties>
</batchlet>
Artifact Class
import java.math.BigDecimal;
import java.math.BigInteger;

import javax.batch.api.AbstractBatchlet;
import javax.batch.api.BatchProperty;
import javax.inject.Inject;
import javax.inject.Named;

@Named
public class MyBatchlet extends AbstractBatchlet {
    @Inject
    @BatchProperty
    int number; // Field name is the same as the batch property name.

    @Inject
    @BatchProperty(name = "number") // Use the name attribute to locate the batch property.
    long asLong; // Inject it as a specific data type.

    @Inject
    @BatchProperty(name = "number")
    Double asDouble;

    @Inject
    @BatchProperty(name = "number")
    private String asString;

    @Inject
    @BatchProperty(name = "number")
    BigInteger asBigInteger;

    @Inject
    @BatchProperty(name = "number")
    BigDecimal asBigDecimal;

    @Override
    public String process() {
        // Minimal implementation so the class compiles; real work goes here.
        return "COMPLETED";
    }
}
<batchlet ref="myBatchlet">
<properties>
<property name="weekDays" value="1,2,3,4,5,6,7"/>
</properties>
</batchlet>
Artifact Class
import java.math.BigDecimal;
import java.math.BigInteger;
import java.util.List;
import java.util.Set;

import javax.batch.api.AbstractBatchlet;
import javax.batch.api.BatchProperty;
import javax.inject.Inject;
import javax.inject.Named;

@Named
public class MyBatchlet extends AbstractBatchlet {
    @Inject
    @BatchProperty
    int[] weekDays; // Array name is the same as the batch property name.

    @Inject
    @BatchProperty(name = "weekDays") // Use the name attribute to locate the batch property.
    Integer[] asIntegers; // Inject it as a specific array type.

    @Inject
    @BatchProperty(name = "weekDays")
    String[] asStrings;

    @Inject
    @BatchProperty(name = "weekDays")
    byte[] asBytes;

    @Inject
    @BatchProperty(name = "weekDays")
    BigInteger[] asBigIntegers;

    @Inject
    @BatchProperty(name = "weekDays")
    BigDecimal[] asBigDecimals;

    @Inject
    @BatchProperty(name = "weekDays")
    List asList;

    @Inject
    @BatchProperty(name = "weekDays")
    List<String> asListString;

    @Inject
    @BatchProperty(name = "weekDays")
    Set asSet;

    @Inject
    @BatchProperty(name = "weekDays")
    Set<String> asSetString;

    @Override
    public String process() {
        // Minimal implementation so the class compiles; real work goes here.
        return "COMPLETED";
    }
}
<batchlet ref="myBatchlet">
<properties>
<property name="myClass" value="org.jberet.support.io.Person"/>
</properties>
</batchlet>
Artifact Class
@Named
public class MyBatchlet extends AbstractBatchlet {
@Inject
@BatchProperty
private Class myClass;
}
If a batch property is not assigned a value in the job XML file, the target field keeps the default value declared in the artifact class:
Artifact Class
/**
 * Comment character. If the commentChar batch property is not specified
 * in the job XML file, use the default value '#'.
 */
@Inject
@BatchProperty
private char commentChar = '#';
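Once the properties are defined in the job XML file, the job itself can be started through the standard JobOperator API. The following is a minimal sketch; the job XML name myJob and the inputFile parameter are illustrative:
import java.util.Properties;

import javax.batch.operations.JobOperator;
import javax.batch.runtime.BatchRuntime;

public class JobStarter {

    public long startJob() {
        // Obtain the container's JobOperator implementation.
        JobOperator jobOperator = BatchRuntime.getJobOperator();

        // Optional job parameters; these are separate from the <properties>
        // declared on individual elements in the job XML file.
        Properties params = new Properties();
        params.setProperty("inputFile", "/tmp/input.txt");

        // Starts META-INF/batch-jobs/myJob.xml and returns the execution id.
        return jobOperator.start("myJob", params);
    }
}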
APPENDIX A. REFERENCE MATERIAL
AccessControlListHandler
Class Name: io.undertow.server.handlers.AccessControlListHandler
Name: access-control
Handler that can accept or reject a request based on an attribute of the remote peer.
AccessLogHandler
Class Name: io.undertow.server.handlers.accesslog.AccessLogHandler
Name: access-log
Access log handler. This handler will generate access log messages based on the provided format string
and pass these messages into the provided AccessLogReceiver.
This handler can log any attribute that is provided via the ExchangeAttribute mechanism.
Pattern Description
%a Remote IP address
%A Local IP address
%H Request protocol
%m Request method
%p Local port
common %h %l %u %t "%r" %s %b
There is also support to write information from the cookie, incoming header, or the session.
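As an illustration of how this handler is assembled, the following minimal sketch wires an AccessLogHandler into an embedded Undertow server using the common format alias; the listener address and the trivial terminal handler are arbitrary choices for the example:
import io.undertow.Undertow;
import io.undertow.server.handlers.accesslog.AccessLogHandler;
import io.undertow.server.handlers.accesslog.JBossLoggingAccessLogReceiver;

public class AccessLogExample {

    public static void main(String[] args) {
        AccessLogHandler accessLog = new AccessLogHandler(
                // Terminal handler that the access log wraps.
                exchange -> exchange.getResponseSender().send("Hello"),
                // Receiver that forwards each log line to JBoss Logging.
                new JBossLoggingAccessLogReceiver(),
                // Format string: the common alias shown in the table above.
                "common",
                AccessLogExample.class.getClassLoader());

        Undertow.builder()
                .addHttpListener(8080, "localhost")
                .setHandler(accessLog)
                .build()
                .start();
    }
}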
AllowedMethodsHandler
Handler that whitelists certain HTTP methods. Only requests with a method in the allowed methods set
will be allowed to continue.
Name: allowed-methods
BlockingHandler
An HttpHandler that initiates a blocking request. If the thread is currently running in the I/O thread it will
be dispatched.
Name: blocking
ByteRangeHandler
Handler for range requests. This is a generic handler that can handle range requests to any resource of
a fixed content length, for example, any resource where the content-length header has been set.
This is not necessarily the most efficient way to handle range requests, as the full content will be
generated and then discarded. At present this handler can only handle simple, single range requests. If
multiple ranges are requested the Range header will be ignored.
Name: byte-range
CanonicalPathHandler
This handler transforms a relative path to a canonical path.
Name: canonical-path
DisableCacheHandler
Handler that disables response caching by browsers and proxies.
Name: disable-cache
DisallowedMethodsHandler
Handler that blacklists certain HTTP methods.
Name: disallowed-methods
EncodingHandler
This handler serves as the basis for content encoding implementations. Encoding handlers are added as delegates to this handler, with a specified server-side priority.
The q value will be used to determine the correct handler. If a request comes in with no q value then the
server will pick the handler with the highest priority as the encoding to use.
If no handler matches then the identity encoding is assumed. If the identity encoding has been
specifically disallowed due to a q value of 0 then the handler will set the response code 406 (Not
Acceptable) and return.
Name: compress
FileErrorPageHandler
Handler that serves up a file from disk to use as an error page. This handler does not serve up any response codes by default; you must configure the response codes it responds to.
Name: error-file
HttpTraceHandler
A handler that handles HTTP trace requests.
Name: trace
IPAddressAccessControlHandler
Handler that can accept or reject a request based on the IP address of the remote peer.
Name: ip-access-control
JDBCLogHandler
Class Name: io.undertow.server.handlers.JDBCLogHandler
Name: jdbc-access-log
Name Description
userField Username.
timestampField Timestamp.
virtualHostField VirtualHost.
methodField Method.
queryField Query.
statusField Status.
bytesField Bytes.
refererField Referrer.
userAgentField UserAgent.
LearningPushHandler
Handler that builds up a cache of resources that a browser requests, and uses server push to push them
when supported.
Name: learning-push
LocalNameResolvingHandler
A handler that performs a DNS lookup to resolve a local address. An unresolved local address may be created when a front-end server has sent an X-Forwarded-Host header or AJP is in use.
Name: resolve-local-name
PathSeparatorHandler
A handler that translates non-slash separator characters in the URL into a slash. In general this will translate backslash into slash on Windows systems.
Name: path-separator
PeerNameResolvingHandler
A handler that performs reverse DNS lookup to resolve a peer address.
Name: resolve-peer-name
ProxyPeerAddressHandler
Handler that sets the peer address to the value of the X-Forwarded-For header. This should only be used behind a proxy that always sets this header; otherwise it is possible for an attacker to forge their peer address.
Name: proxy-peer-address
RedirectHandler
A redirect handler that redirects to the specified location via a 302 redirect. The location is specified as
an exchange attribute string.
Name: redirect
RequestBufferingHandler
Handler that will buffer all request data.
Name: buffer-request
RequestDumpingHandler
Handler that dumps an exchange to a log.
Name: dump-request
RequestLimitingHandler
A handler which limits the maximum number of concurrent requests. Requests beyond the limit will block
until the previous request is complete.
Name: request-limit
ResourceHandler
A handler for serving resources.
Name: resource
ResponseRateLimitingHandler
Handler that limits the download rate to a set number of bytes per unit of time.
Name: response-rate-limit
SetHeaderHandler
A handler that sets a fixed response header.
Name: header
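To show the handler in context, the following minimal embedded-Undertow sketch wraps a terminal handler with a SetHeaderHandler; the header name and value are arbitrary choices for the example:
import io.undertow.Undertow;
import io.undertow.server.handlers.SetHeaderHandler;

public class HeaderExample {

    public static void main(String[] args) {
        // Sets a fixed X-Frame-Options header on every response.
        SetHeaderHandler headerHandler = new SetHeaderHandler(
                exchange -> exchange.getResponseSender().send("ok"),
                "X-Frame-Options",
                "DENY");

        Undertow.builder()
                .addHttpListener(8080, "localhost")
                .setHandler(headerHandler)
                .build()
                .start();
    }
}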
SSLHeaderHandler
Handler that sets SSL information on the connection based on the following headers:
SSL_CLIENT_CERT
SSL_CIPHER
SSL_SESSION_ID
If this handler is present in the chain it will always override the SSL session information, even if these
headers are not present.
This handler MUST only be used on servers that are behind a reverse proxy, where the reverse proxy
has been configured to always set these headers for EVERY request (or strip existing headers with
these names if no SSL information is present). Otherwise it may be possible for a malicious client to
spoof an SSL connection.
Name: ssl-headers
StuckThreadDetectionHandler
This handler detects requests that take a long time to process, which might indicate that the thread processing them is stuck.
Name: stuck-thread-detector
Name Description
URLDecodingHandler
A handler that will decode the URL and query parameters to the specified charset. If you are using this
handler you must set the UndertowOptions.DECODE_URL parameter to false.
This is not as efficient as using the parser's built-in UTF-8 decoder. Unless you need to decode to something other than UTF-8, you should rely on the parser's decoding instead.
Name: url-decoding
Hibernate Properties
Property Description
hibernate.format_sql Boolean. Pretty-print the SQL in the log and console.
hibernate.max_fetch_depth Sets a maximum depth for the outer join fetch tree for single-ended associations (one-to-one, many-to-one). A 0 disables default outer join fetching. The recommended value is between 0 and 3.
hibernate.default_entity_mode Sets a default mode for entity representation for all sessions opened from this SessionFactory. Values include: dynamic-map, dom4j, pojo.
hibernate.implicit_naming_strategy Specifies the ImplicitNamingStrategy class that determines how database object names are derived when they are not given explicitly. Values include:
default - ImplicitNamingStrategyJpaCompliantImpl
jpa - ImplicitNamingStrategyJpaCompliantImpl
legacy-jpa - ImplicitNamingStrategyLegacyJpaImpl
legacy-hbm - ImplicitNamingStrategyLegacyHbmImpl
component-path - ImplicitNamingStrategyComponentPathImpl
hibernate.physical_naming_strategy Pluggable strategy contract for applying physical naming rules for database object names. Specifies the PhysicalNamingStrategy class to be used. PhysicalNamingStrategyStandardImpl is used by default. hibernate.physical_naming_strategy can also be used to configure a custom class that implements PhysicalNamingStrategy.
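These properties are typically set on the persistence unit. A minimal sketch in persistence.xml; the unit name and chosen values are illustrative:
<persistence-unit name="example-pu">
    <properties>
        <!-- Pretty-print SQL and cap the outer join fetch depth. -->
        <property name="hibernate.format_sql" value="true"/>
        <property name="hibernate.max_fetch_depth" value="3"/>
    </properties>
</persistence-unit>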
RDBMS Dialect
DB2 org.hibernate.dialect.DB2Dialect
Firebird org.hibernate.dialect.FirebirdDialect
FrontBase org.hibernate.dialect.FrontbaseDialect
H2 Database org.hibernate.dialect.H2Dialect
HypersonicSQL org.hibernate.dialect.HSQLDialect
Informix org.hibernate.dialect.InformixDialect
Ingres org.hibernate.dialect.IngresDialect
Interbase org.hibernate.dialect.InterbaseDialect
MariaDB 10 org.hibernate.dialect.MySQL57InnoDBDialect
MySQL5 org.hibernate.dialect.MySQL5Dialect
MySQL5.7 org.hibernate.dialect.MySQL57InnoDBDialect
Oracle 9i org.hibernate.dialect.Oracle9iDialect
Pointbase org.hibernate.dialect.PointbaseDialect
PostgreSQL org.hibernate.dialect.PostgreSQLDialect
Progress org.hibernate.dialect.ProgressDialect
SAP DB org.hibernate.dialect.SAPDBDialect
Sybase org.hibernate.dialect.SybaseASE15Dialect
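The dialect is selected with the hibernate.dialect property, for example in persistence.xml; the persistence unit name here is illustrative:
<persistence-unit name="example-pu">
    <properties>
        <!-- Choose the dialect matching the target database from the table above. -->
        <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
    </properties>
</persistence-unit>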