Sun GlassFish Enterprise Server 2.1 Performance Tuning Guide
Sun Microsystems, Inc. 4150 Network Circle Santa Clara, CA 95054 U.S.A.
Part No: 820-4343-10, January 2009
Sun Microsystems, Inc. has intellectual property rights relating to technology embodied in the product that is described in this document. In particular, and without limitation, these intellectual property rights may include one or more U.S. patents or pending patent applications in the U.S. and in other countries.

U.S. Government Rights Commercial software. Government users are subject to the Sun Microsystems, Inc. standard license agreement and applicable provisions of the FAR and its supplements.

This distribution may include materials developed by third parties. Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark in the U.S. and other countries, exclusively licensed through X/Open Company, Ltd.

Sun, Sun Microsystems, the Sun logo, the Solaris logo, the Java Coffee Cup logo, docs.sun.com, OpenSolaris, Java, and Solaris are trademarks or registered trademarks of Sun Microsystems, Inc. or its subsidiaries in the U.S. and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the U.S. and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc.

The OPEN LOOK and Sun™ Graphical User Interface was developed by Sun Microsystems, Inc. for its users and licensees. Sun acknowledges the pioneering efforts of Xerox in researching and developing the concept of visual or graphical user interfaces for the computer industry. Sun holds a non-exclusive license from Xerox to the Xerox Graphical User Interface, which license also covers Sun's licensees who implement OPEN LOOK GUIs and otherwise comply with Sun's written license agreements.

Products covered by and information contained in this publication are controlled by U.S. Export Control laws and may be subject to the export or import laws in other countries. Nuclear, missile, chemical or biological weapons or nuclear maritime end uses or end users, whether direct or indirect, are strictly prohibited. Export or reexport to countries subject to U.S. embargo or to entities identified on U.S. export exclusion lists, including, but not limited to, the denied persons and specially designated nationals lists, is strictly prohibited.

DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.

Copyright 2009 Sun Microsystems, Inc. 4150 Network Circle, Santa Clara, CA 95054 U.S.A. All rights reserved.
Contents
Preface

1  Overview of Enterprise Server Performance Tuning
    Process Overview
        Performance Tuning Sequence
    Understanding Operational Requirements
        Application Architecture
        Security Requirements
        Hardware Resources
        Administration
    General Tuning Concepts
        Capacity Planning
        User Expectations
    Further Information

2  Tuning Your Application
    Java Programming Guidelines
        Avoid Serialization and Deserialization
    Java Server Page and Servlet Tuning
        Suggested Coding Practices
    EJB Performance Tuning
        Goals
        Monitoring EJB Components
        General Guidelines
        Using Local and Remote Interfaces
        Improving Performance of EJB Transactions
        Using Special Techniques
        Tuning Tips for Specific Types of EJB Components
        JDBC and Database Access
        Tuning Message-Driven Beans

3  Tuning the Enterprise Server
    Deployment Settings
        Disable Auto-deployment
        Use Pre-compiled JavaServer Pages
        Disable Dynamic Application Reloading
    Logger Settings
        General Settings
        Log Levels
    Web Container Settings
        Session Properties: Session Timeout
        Manager Properties: Reap Interval
        Disable Dynamic JSP Reloading
    EJB Container Settings
        Monitoring the EJB Container
        Tuning the EJB Container
    Java Message Service Settings
    Transaction Service Settings
        Monitoring the Transaction Service
        Tuning the Transaction Service
    HTTP Service Settings
        Monitoring the HTTP Service
        Connection Queue
        Tuning the HTTP Service
        Tuning HTTP Listener Settings
    ORB Settings
        Overview
        How a Client Connects to the ORB
        Monitoring the ORB
        Tuning the ORB
        Thread Pool Sizing
        Examining IIOP Messages
        Improving ORB Performance with Java Serialization
    Thread Pool Settings
        Tuning Thread Pools (Unix/Linux only)
    Resources
        JDBC Connection Pool Settings
        Connector Connection Pool Settings

4  Tuning the Java Runtime System
    Java Virtual Machine Settings
    Managing Memory and Garbage Collection
        Tuning the Garbage Collector
        Tracing Garbage Collection
        Other Garbage Collector Settings
        Tuning the Java Heap
        Rebasing DLLs on Windows
    Further Information

5  Tuning the Operating System and Platform
    Server Scaling
        Processors
        Memory
        Disk Space
        Networking
    Solaris 10 Platform-Specific Tuning Information
    Tuning for the Solaris OS
        Tuning Parameters
        File Descriptor Setting
    Linux Configuration
    Tuning for Solaris on x86
        File Descriptors
        IP Stack Settings
    Tuning for Linux platforms
        File Descriptors
        Virtual Memory
        Network Interface
        Disk I/O Settings
        TCP/IP Settings
    Tuning UltraSPARC T1-Based Systems
        Tuning Operating System and TCP Settings
        Disk Configuration
        Network Configuration
        Start Options

6  Tuning for High-Availability
    Tuning HADB
        Disk Use
        Memory Allocation
        Performance
        Operating System Configuration
    Tuning the Enterprise Server for High-Availability
        Tuning Session Persistence Frequency
        Session Persistence Scope
        Session Size
        Checkpointing Stateful Session Beans
        Configuring the JDBC Connection Pool
    Configuring the Load Balancer
        Enabling the Health Checker

Figures
    FIGURE 1-1

Tables
    Performance Tuning Roadmap
    Factors That Affect Performance
    Bean Type Pooling or Caching
    EJB Cache and Pool Settings
    Tunable ORB Settings
    Connection Pool Sizing
    Maximum Address Space Per Process
    Tuning Parameters for Solaris
    Tuning 64-bit Systems for Performance Benchmarking

Examples
    EXAMPLE 4-1
    EXAMPLE 4-2
Preface
The Performance Tuning Guide describes how to get the best performance with Enterprise Server. This preface contains information about and conventions for the entire Sun GlassFish™ Enterprise Server documentation set.
The Enterprise Server documentation set includes the following books:

Documentation Center: Enterprise Server documentation topics organized by task and subject.
Release Notes: Late-breaking information about the software and the documentation. Includes a comprehensive, table-based summary of the supported hardware, operating system, Java™ Development Kit (JDK™), and database drivers.
Quick Start Guide: How to get started with the Enterprise Server product.
Installation Guide: Installing the software and its components.
Application Deployment Guide: Deployment of applications and application components to the Enterprise Server. Includes information about deployment descriptors.
Developer's Guide: Creating and implementing Java Platform, Enterprise Edition (Java EE platform) applications intended to run on the Enterprise Server that follow the open Java standards model for Java EE components and APIs. Includes information about developer tools, security, debugging, and creating lifecycle modules.
Java EE 5 Tutorial: Using Java EE 5 platform technologies and APIs to develop Java EE applications.
Java WSIT Tutorial: Developing web applications using the Web Service Interoperability Technologies (WSIT). Describes how, when, and why to use the WSIT technologies and the features and options that each technology supports.
Administration Guide: System administration for the Enterprise Server, including configuration, monitoring, security, resource management, and web services management.
High Availability Administration Guide: Setting up clusters, working with node agents, and using load balancers.
Administration Reference: Editing the Enterprise Server configuration file, domain.xml.
Performance Tuning Guide: Tuning the Enterprise Server to improve performance.
Reference Manual: Utility commands available with the Enterprise Server; written in man page style. Includes the asadmin command line interface.
The following placeholders represent default paths and directories:

as-install: The Enterprise Server installation directory. Solaris™ and Linux installations, non-root user: user's-home-directory/SUNWappserver. Solaris and Linux installations, root user: /opt/SUNWappserver. Windows, all installations: SystemDrive:\Sun\AppServer.
domain-root-dir: Represents the directory containing all domains.
domain-dir: Represents the directory for a domain. In configuration files, you might see domain-dir represented as follows: ${com.sun.aas.instanceRoot}
instance-dir: Represents the directory for a server instance.
samples-dir: Represents the directory containing sample applications.
docs-dir: Represents the directory containing documentation.
Typographic Conventions
The following table describes the typographic changes that are used in this book.
TABLE P-3 Typographic Conventions

AaBbCc123 (monospace): The names of commands, files, and directories, and onscreen computer output. Examples: Edit your .login file. Use ls -a to list all files. machine_name% you have mail.
AaBbCc123 (bold monospace): What you type, contrasted with onscreen computer output. Example: machine_name% su Password:
AaBbCc123 (italic): A placeholder to be replaced with a real name or value. Example: The command to remove a file is rm filename.
AaBbCc123 (italic): Book titles, new terms, and terms to be emphasized (note that some emphasized items appear bold online). Examples: Read Chapter 6 in the User's Guide. A cache is a copy that is stored locally. Do not save the file.
Symbol Conventions
The following table explains symbols that might be used in this book.
TABLE P-4 Symbol Conventions

[ ]: Contains optional arguments and command options. Example: ls [-l] (the -l option is not required).
{ | }: Contains a set of choices for a required command option. Example: -d {y|n} (the -d option requires that you use either the y argument or the n argument).
${ }: Indicates a variable reference. Example: ${com.sun.javaRoot} references the value of the com.sun.javaRoot variable.
-: Joins simultaneous multiple keystrokes. Example: Control-A (press the Control key while you press the A key).
+: Joins consecutive multiple keystrokes. Example: Ctrl+A+N (press the Control key, release it, and then press the subsequent keys).
->: Indicates menu item selection in a graphical user interface. Example: File -> New -> Templates (from the File menu, choose New; from the New submenu, choose Templates).
Third-Party Web Site References

Sun is not responsible for the availability of third-party web sites mentioned in this document. Sun does not endorse and is not responsible or liable for any content, advertising, products, or other materials that are available on or through such sites or resources. Sun will not be responsible or liable for any actual or alleged damage or loss caused or alleged to be caused by or in connection with use of or reliance on any such content, goods, or services that are available on or through such sites or resources.
C H A P T E R   1
Overview of Enterprise Server Performance Tuning
You can significantly improve performance of the Sun GlassFish Enterprise Server and of applications deployed to it by adjusting a few deployment and server configuration settings. However, it is important to understand the environment and performance goals. An optimal configuration for a production environment might not be optimal for a development environment. This chapter discusses the following topics:
- Process Overview
- Understanding Operational Requirements
- General Tuning Concepts
- Further Information
Process Overview
The following table outlines the overall administration process, and shows where performance tuning fits in the sequence.
TABLE 1-1 Performance Tuning Roadmap

1. Design: Decide on the high-availability topology and set up the Application Server and, if you are using HADB for session persistence, high-availability database (HADB) systems. (Deployment Planning Guide)
2. Capacity Planning: Make sure the systems have sufficient resources to perform well. (Deployment Planning Guide)
3. Installation: If you are using HADB for session persistence, ensure that the HADB software is installed. (Installation Guide)
4. Deployment: Install and run your applications. Familiarize yourself with how to configure and administer the Enterprise Server. (Application Deployment Guide, Administration Guide)
5. Tuning: Tune the applications, the Enterprise Server, the Java runtime system, the operating system and platform, and the high availability features. (Chapter 2, Tuning Your Application; Chapter 3, Tuning the Enterprise Server; Chapter 4, Tuning the Java Runtime System; Chapter 5, Tuning the Operating System and Platform; Chapter 6, Tuning for High-Availability)

Performance Tuning Sequence

Tune performance in the following sequence:

1. Tune your application, described in Chapter 2, Tuning Your Application
2. Tune the server, described in Chapter 3, Tuning the Enterprise Server
3. Tune the high availability database, described in Chapter 6, Tuning for High-Availability
4. Tune the Java runtime system, described in Chapter 4, Tuning the Java Runtime System
5. Tune the operating system, described in Chapter 5, Tuning the Operating System and Platform

Understanding Operational Requirements
Application Architecture
The Java EE Application model, as shown in the following figure, is very flexible; allowing the application architect to split application logic functionally into many tiers. The presentation layer is typically implemented using servlets and JSP technology and executes in the web container.
FIGURE 1-1 Java EE application model: client-side presentation (browser: pure HTML, Java applet), server-side presentation (web server: JSP), and server-side business logic (EJB components).
Moderately complex enterprise applications can be developed entirely using servlets and JSP technology. More complex business applications often use Enterprise JavaBeans (EJB) components. The Application Server integrates the web and EJB containers in a single process. Local access to EJB components from servlets is very efficient. However, some application deployments may require EJB components to execute in a separate process; and be accessible from standalone client applications as well as servlets. Based on the application architecture, the server administrator can employ the Application Server in multiple tiers, or simply host both the presentation and business logic on a single tier. It is important to understand the application architecture before designing a new Application Server deployment, and when deploying a new business application to an existing application server deployment.
Security Requirements
Most business applications require security. This section discusses security considerations and decisions.
Encryption
For security reasons, sensitive user inputs and application output must be encrypted. Most business-oriented web applications encrypt all or some of the communication flow between the browser and Application Server. Online shopping applications encrypt traffic when the user is completing a purchase or supplying private data. Portal applications such as news and media typically do not employ encryption. Secure Sockets Layer (SSL) is the most common security framework, and is supported by many browsers and application servers. The Application Server supports SSL 2.0 and 3.0 and contains software support for various cipher suites. It also supports integration of hardware encryption cards for even higher performance. Security considerations, particularly when using the integrated software encryption, will impact hardware sizing and capacity planning. Consider the following when assessing the encryption needs for a deployment:
- What is the nature of the applications with respect to security? Do they encrypt all or only a part of the application inputs and output? What percentage of the information needs to be securely transmitted?
- Are the applications going to be deployed on an application server that is directly connected to the Internet?
- Will a web server exist in a demilitarized zone (DMZ) separate from the application server tier and backend enterprise systems? A DMZ-style deployment is recommended for high security. It is also useful when the application has a significant amount of static text and image content and some business logic that executes on the Application Server, behind the most secure firewall. Application Server provides secure reverse proxy plugins to enable integration with popular web servers. The Application Server can also be deployed and used as a web server in a DMZ.
- Is encryption required between the web servers in the DMZ and application servers in the next tier? The reverse proxy plugins supplied with Application Server support SSL encryption between the web server and application server tier. If SSL is enabled, hardware capacity planning must take into account the encryption policy and mechanisms.
- If software encryption is to be employed: What is the expected performance overhead for every tier in the system, given the security requirements? What are the performance and throughput characteristics of various choices?
For information on how to encrypt the communication between web servers and Application Server, please refer to Chapter 9, Configuring Security, in Sun GlassFish Enterprise Server 2.1 Administration Guide.
Hardware Resources
The type and quantity of hardware resources available greatly influence performance tuning and site planning. The Application Server provides excellent vertical scalability. It can scale to efficiently utilize multiple high-performance CPUs, using just one application server process. A smaller number of application server instances makes maintenance easier and administration less expensive. Also, deploying several related applications on fewer application servers can improve performance, due to better data locality, and reuse of cached data between co-located applications. Such servers must also contain large amounts of memory, disk space, and network capacity to cope with increased load. The Application Server can also be deployed on large farms of relatively modest hardware units. Business applications can be partitioned across various server instances. Using one or more external load balancers can efficiently spread user access across all the application server instances. A horizontal scaling approach may improve availability, lower hardware costs and is suitable for some types of applications. However, this approach requires administration of more application server instances and hardware nodes.
Administration
A single Application Server installation on a server can encompass multiple instances. A group of one or more instances that are administered by a single Administration Server is called a domain. Grouping server instances into domains permits different people to independently administer the groups. You can use a single-instance domain to create a sandbox for a particular developer and environment. In this scenario, each developer administers his or her own application server, without interfering with other application server domains. A small development group may choose to create multiple instances in a shared administrative domain for collaborative development. In a deployment environment, an administrator can create domains based on application and business function. For example, internal Human Resources applications may be hosted on one or more servers in one Administrative domain, while external customer applications are hosted on several administrative domains in a server farm. The Application Server supports virtual server capability for web applications. For example, a web application hosting service provider can host different URL domains on a single Application Server process for efficient administration. For detailed information on administration, see Sun GlassFish Enterprise Server 2.1 Administration Guide.
General Tuning Concepts

The following table describes some key concepts that affect performance tuning, and how they are measured in practice. The leftmost column describes the general concept, the second column gives the practical ramifications of the concept, the third column describes the measurements, and the rightmost column describes the value sources.
TABLE 1-2 Factors That Affect Performance

User load: Measured in Transactions Per Minute (TPM) or Web Interactions Per Second (WIPS). Estimated as (max. number of concurrent users) * (expected response time) / (time between clicks). Example: (100 users * 2 sec) / 10 sec = 20.

Application scalability (transaction rate measured on one CPU): Measured in TPM or WIPS from a benchmark of the application. Requires tuning as described in this guide. Perform at each tier and iterate if necessary. Stop here if this meets performance requirements.

Vertical scalability (increase in performance from additional CPUs): Based on curve fitting from a benchmark. Perform tests while gradually increasing the number of CPUs and identify the knee of the curve, where additional CPUs provide uneconomical gains in performance.

Horizontal scalability (increase in performance from additional servers): Use a well-tuned single application server instance, as in the previous step. Measure how much each additional server instance and hardware node improves performance.

Safety margin: If the system must cope with failures, size the system to meet performance requirements assuming that one or more application server instances are not functional.

Excess capacity for unexpected peaks: It is desirable to operate a server at less than its benchmarked peak, for some safety margin. 80% system capacity utilization at peak loads may work for most installations. Measure your deployment under real and simulated peak loads.
Capacity Planning
The previous discussion guides you towards defining a deployment architecture. However, you determine the actual size of the deployment by a process called capacity planning. Capacity planning enables you to predict:
- The performance capacity of a particular hardware configuration.
- The hardware resources required to sustain specified application load and performance.
You can estimate these values through careful performance benchmarking, using an application with realistic data sets and workloads.
To Determine Capacity
1. Determine performance on a single CPU.
First determine the largest load that a single processor can sustain. You can obtain this figure by measuring the performance of the application on a single-processor machine. Either leverage the performance numbers of an existing application with similar processing characteristics or, ideally, use the actual application and workload in a testing environment. Make sure that the application and data resources are tiered exactly as they would be in the final deployment.

2. Determine vertical scalability.
Determine how much additional performance you gain when you add processors. That is, you are indirectly measuring the amount of shared resource contention that occurs on the server for a specific workload. Either obtain this information based on additional load testing of the application on a multiprocessor system, or leverage existing information from a similar application that has already been load tested. Running a series of performance tests on one to eight CPUs, in incremental steps, generally provides a sense of the vertical scalability characteristics of the system. Be sure to properly tune the application, Application Server, backend database resources, and operating system so that they do not skew the results.

3. Determine horizontal scalability.
If sufficiently powerful hardware resources are available, a single hardware node may meet the performance requirements. However, for better availability, you can cluster two or more systems. Employing external load balancers and workload simulation, determine the performance benefits of replicating one well-tuned application server node, as determined in step 2.
User Expectations
Application end-users generally have some performance expectations. Often you can numerically quantify them. To ensure that customer needs are met, you must understand these expectations clearly, and use them in capacity planning. Consider the following questions regarding performance expectations:
- What do users expect the average response times to be for various interactions with the application? What are the most frequent interactions? Are there any extremely time-critical interactions?
- What is the length of each transaction, including think time? In many cases, you may need to perform empirical user studies to get good estimates.
- What are the anticipated steady-state and peak user loads? Are there any particular times of the day, week, or year when you observe or expect to observe load peaks? While there may be several million registered customers for an online business, at any one time only a fraction of them are logged in and performing business transactions. A common mistake during capacity planning is to use the total size of the customer population as the basis and not the average and peak numbers for concurrent users. The number of concurrent users also may exhibit patterns over time.
- What is the average and peak amount of data transferred per request? This value is also application-specific. Good estimates for content size, combined with other usage patterns, will help you anticipate network capacity needs.
- What is the expected growth in user load over the next year? Planning ahead for the future will help avoid crisis situations and system downtimes for upgrades.
Further Information
- For more information on Java performance, see Java Performance Documentation and Java Performance BluePrints.
- For details on optimizing EJB components, see Seven Rules for Optimizing Entity Beans.
- For details on profiling, see Profiling Tools in Sun GlassFish Enterprise Server 2.1 Developer's Guide.
- For more details on the domain.xml file, see Sun GlassFish Enterprise Server 2.1 Administration Reference.
C H A P T E R   2
Tuning Your Application
This chapter provides information on tuning applications for maximum performance. A complete guide to writing high performance Java and Java EE applications is beyond the scope of this document. This chapter discusses the following topics:
- Java Programming Guidelines
- Java Server Page and Servlet Tuning
- EJB Performance Tuning
Therefore, copying is inherently expensive and overusing it can reduce performance significantly.
Avoid Finalizers
Adding finalizers to code makes the garbage collector more expensive and unpredictable. The virtual machine does not guarantee the time at which finalizers are run. Finalizers may not always be executed before the program exits. Releasing critical resources in finalize() methods may lead to unpredictable application behavior.
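As a minimal sketch of the alternative, release resources through an explicit close() method called from a finally block rather than from finalize(); the class and file names below are illustrative only.

    import java.io.Closeable;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.Writer;

    // Illustrative resource holder: cleanup happens in close(), not finalize().
    public class ReportWriter implements Closeable {
        private final Writer out;

        public ReportWriter(String path) throws IOException {
            out = new FileWriter(path);
        }

        public void append(String line) throws IOException {
            out.write(line);
            out.write('\n');
        }

        // Callers release the resource deterministically.
        public void close() throws IOException {
            out.close();
        }
    }

A caller wraps use of such a class in try/finally so that the underlying file handle is released even when an exception is thrown, instead of waiting for an eventual finalizer run.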
JAX-RPC provides a mapping of certain MIME types to Java types, and any MIME type can be mapped to a javax.activation.DataHandler.

As a result, you can send an attachment (for example, a .gif image or an XML document) as a SOAP attachment to an RPC-style web service by using the Java type mappings. When passing in any of the mandated Java type mappings (appropriate for the attachment's MIME type) as an argument for the web service, the JAX-RPC runtime handles these as SOAP attachments. For example, to send out an image/gif attachment, use java.awt.Image, or create a DataHandler wrapper over your image. The advantages of using the wrapper are:

- Reduced coding: You can reuse generic attachment code to handle the attachments because the DataHandler determines the content type of the contained data automatically. This feature is especially useful when using a document style service. Since the content type is known at runtime, there is no need to make calls to attachment.setContent(stringContent, "image/gif"), for example.
- Improved performance: Informal tests have shown that using DataHandler wrappers doubles throughput for image/gif MIME types, and improves it by approximately 1.5 times for text/xml or java.awt.Image for image/* types.
General Guidelines
Follow these general guidelines to increase performance of the presentation tier:
- Minimize Java synchronization in servlets.
- Don't use the single thread model for servlets.
- Use the servlet's init() method to perform expensive one-time initialization.
- Avoid using System.out.println() calls.
- Create sessions sparingly. Session creation is not free. If a session is not required, do not create one.
- Use javax.servlet.http.HttpSession.invalidate() to release sessions when they are no longer needed (see the sketch after this list).
- Keep session size small, to reduce response times. If possible, keep session size below seven KB.
- Use the directive <%@ page session="false" %> in JSP files to prevent the Enterprise Server from automatically creating sessions when they are not necessary.
- Avoid large object graphs in an HttpSession. They force serialization and add computational overhead. Generally, do not store large objects as HttpSession variables.
- Don't cache transaction data in an HttpSession. Access to data in an HttpSession is not transactional. Do not use it as a cache of transactional data, which is better kept in the database and accessed using entity beans. Transactions will roll back upon failures to their original state. However, stale and inaccurate data may remain in HttpSession objects. The Enterprise Server provides read-only bean-managed persistence entity beans for cached access to read-only data.
- To improve class loading time, avoid having excessive directories in the server CLASSPATH. Put application-related classes into JAR files.
- HTTP response times are dependent on how the keep-alive subsystem and the HTTP server are tuned in general. For more information, see HTTP Service Settings in Chapter 3.
- Cache servlet results when possible. For more information, see Chapter 8, Developing Web and SIP Applications, in Sun GlassFish Enterprise Server 2.1 Developer's Guide.
- If an application does not contain any EJB components, deploy the application as a WAR file, not an EAR file.
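The following sketch shows the session-related guidelines above in servlet form: the session is only looked up, never created as a side effect, and it is invalidated as soon as it is no longer needed. The servlet and page names are hypothetical.

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    public class LogoutServlet extends HttpServlet {
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            HttpSession session = request.getSession(false); // do not create a session here
            if (session != null) {
                session.invalidate(); // release the session so the container can reclaim it
            }
            response.sendRedirect("index.jsp");
        }
    }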
Optimize SSL
Optimize SSL by using routines in the appropriate operating system library for concurrent access to heap space. The library to use depends on the version of the Solaris™ Operating System (Solaris OS) that you are using. To ensure that you use the correct library, set the LD_PRELOAD environment variable to specify the correct library file. For more information, see the following table.
Solaris OS Version / Library / Setting of LD_PRELOAD Environment Variable:

- Solaris 10: libumem(3LIB); set LD_PRELOAD to /usr/lib/libumem.so
- Solaris 9: libmtmalloc(3LIB); set LD_PRELOAD to /usr/lib/libmtmalloc.so
To set the LD_PRELOAD environment variable, edit the entry for this environment variable in the startserv script. The startserv script is located in the bin directory of your domain. The exact syntax to define an environment variable depends on the shell that you are using.
EJB Performance Tuning

This section covers the following topics:

- Goals
- Monitoring EJB Components
- General Guidelines
- Using Local and Remote Interfaces
- Improving Performance of EJB Transactions
- Using Special Techniques
- Tuning Tips for Specific Types of EJB Components
- JDBC and Database Access
- Tuning Message-Driven Beans
Goals
The goals of EJB performance tuning are:
- Increased speed: Cache as many beans in the EJB caches as possible to increase speed (equivalently, decrease response time). Caching eliminates CPU-intensive operations. However, since memory is finite, as the caches become larger, housekeeping for them (including garbage collection) takes longer.
- Decreased memory consumption: Beans in the pools or caches consume memory from the Java virtual machine heap. Very large pools and caches degrade performance because they require longer and more frequent garbage collection cycles.
- Improved functional properties: Functional properties such as user time-out, commit options, security, and transaction options are mostly related to the functionality and configuration of the application. Generally, do not compromise functionality for performance. In some cases, you might be forced to make a trade-off decision between functionality and performance. This section offers suggestions in such cases.
Monitoring EJB Components

The monitoring command below gives the bean cache statistics for a stateful session bean:

asadmin get --user admin --host e4800-241-a --port 4848 -m specjcmp.application.SPECjAppServer.ejb-module.
supplier_jar.stateful-session-bean.BuyerSes.bean-cache.*
The monitoring command below gives the bean pool statistics for an entity bean:
asadmin get --user admin --host e4800-241-a --port 4848 -m specjcmp.application.SPECjAppServer.ejb-module.
supplier_jar.entity-bean.ItemEnt.bean-pool.*
idle-timeout-in-seconds = 0
steady-pool-size = 0
total-beans-destroyed = 0
num-threads-waiting = 0
num-beans-in-pool = 54
max-pool-size = 2147483647
pool-resize-quantity = 0
total-beans-created = 255
The monitoring command below gives the bean pool statistics for a stateless bean.
asadmin get --user admin --host e4800-241-a --port 4848 -m test.application.testEjbMon.ejb-module.slsb.stateless-session-bean.slsb.bean-pool.*
idle-timeout-in-seconds = 200
steady-pool-size = 32
total-beans-destroyed = 12
num-threads-waiting = 0
num-beans-in-pool = 4
max-pool-size = 1024
pool-resize-quantity = 12
total-beans-created = 42
Tuning the bean involves charting the behavior of the cache and pool for the bean in question over a period of time. If too many passivations are happening and the JVM heap remains fairly small, then the max-cache-size or the cache-idle-timeout-in-seconds can be increased. If garbage collection is happening too frequently, and the pool size is growing, but the cache hit rate is small, then the pool-idle-timeout-in-seconds can be reduced to destroy the instances.
Note Specifying a max-pool-size of zero (0) means that the pool is unbounded. The pooled
beans remain in memory unless they are removed by specifying a small interval for pool-idle-timeout-in-seconds. For production systems, specifying the pool as unbounded is NOT recommended.
To monitor an individual EJB component, use the asadmin get command with the fully qualified name of the object to monitor:

asadmin get -m monitorableObject.*

where monitorableObject is a fully-qualified identifier from the hierarchy of objects that can be monitored, shown below.
serverInstance.application.applicationName.ejb-module.moduleName
    .stateless-session-bean.beanName
        .bean-pool
        .bean-method.methodName
    .stateful-session-bean.beanName
        .bean-cache
        .bean-method.methodName
    .entity-bean.beanName
        .bean-cache
        .bean-pool
        .bean-method.methodName
    .message-driven-bean.beanName
        .bean-pool
        .bean-method.methodName (methodName = onMessage)
The possible identifiers are the same as for ejb-module. For example, to get statistics for a method in an entity bean, use this command:
asadmin get -m serverInstance.application.appName.ejb-module.moduleName .entity-bean.beanName.bean-method.methodName.*
To find the possible objects (applications, modules, beans, and methods) and object attributes that can be monitored, use the Admin Console. For more information, see Chapter 18, Monitoring Components and Services, in Sun GlassFish Enterprise Server 2.1 Administration Guide. Alternatively, use the asadmin list command. For more information, see list(1). For statistics on stateful session bean passivations, use this command:
asadmin get -m serverInstance.application.appName.ejb-module.moduleName .stateful-session-bean.beanName.bean-cache.*
From the attribute values that are returned, note in particular num-passivations, num-passivation-errors, and num-passivation-success.
General Guidelines
The following guidelines can improve performance of EJB components. Keep in mind that decomposing an application into many EJB components creates overhead and can degrade performance. EJB components are not simply Java objects. They are components with semantics for remote call interfaces, security, and transactions, as well as properties and methods.
Use Caching
Caching can greatly improve performance when used wisely. For example:
- Cache EJB references: To avoid a JNDI lookup for every request, cache EJB references in servlets.
- Cache home interfaces: Since repeated lookups to a home interface can be expensive, cache references to EJBHomes in the init() methods of servlets (see the sketch after this list).
- Cache EJB resources: Use setSessionContext() or ejbCreate() to cache bean resources. This is again an example of using bean lifecycle methods to perform application actions only once where possible. Remember to release acquired resources in the ejbRemove() method.
- Explicitly call remove(): Allow stateful session EJB components to be removed from the container cache by explicitly calling the remove() method in the client.
- Tune the entity EJB component's pool size: Entity beans use both the EJB pool and cache settings. Tune the entity EJB component's pool size to minimize the creation and destruction of beans. Populating the pool with a non-zero steady size beforehand is useful for getting better response for initial requests.
- Cache bean-specific resources: Use the setEntityContext() method to cache bean-specific resources and release them using the unsetEntityContext() method.
- Load related data efficiently for container-managed relationships (CMRs). For more information, see Pre-fetching Container Managed Relationship (CMR) Beans.
- Identify read-only beans: Configure read-only entity beans for read-only operations. For more information, see Read-Only Entity Beans.
- The application predates the EJB 2.0 specification and was written without any local interfaces.
- There are bean-to-bean calls and the client beans are written without making any co-location assumptions about the called beans.
For these cases, the Enterprise Server provides a pass-by-reference option that clients can use to pass arguments by reference to the remote interface of a co-located EJB component. You can specify the pass-by-reference option for an entire application or a single EJB component. When specified at the application level, all beans in the application use pass-by-reference semantics when passing arguments to their remote interfaces. When specified at the bean level, all calls to the remote interface of the bean use pass-by-reference semantics. See Value Added Features in Sun GlassFish Enterprise Server 2.1 Developer's Guide for more details about the pass-by-reference flag. To specify that an EJB component will use pass-by-reference semantics, use the following tag in the sun-ejb-jar.xml deployment descriptor:
<pass-by-reference>true</pass-by-reference>

This avoids copying arguments when the EJB component's methods are invoked and avoids copying results when methods return. However, problems will arise if the data is modified by another source during the invocation.
Use XA-capable data sources only when two or more data sources are going to be involved in a transaction. If a database participates in some distributed transactions, but mostly in local or single database transactions, it is advisable to register two separate JDBC resources and use the appropriate resource in the application.
Version Consistency
Note The technique in this section applies only to the EJB 2.1 architecture. In the EJB 3.0 architecture, use the Java Persistence API.
Use version consistency to improve performance while protecting the integrity of data in the database. Since the application server can use multiple copies of an EJB component simultaneously, an EJB component's state can potentially become corrupted through simultaneous access. The standard way of preventing corruption is to lock the database row associated with a particular bean. This prevents the bean from being accessed by two simultaneous transactions and thus protects data. However, it also decreases performance, since it effectively serializes all EJB access. Version consistency is another approach to protecting EJB data integrity. To use version consistency, you specify a column in the database to use as a version number. The EJB lifecycle then proceeds like this:
The first time the bean is used, the ejbLoad() method loads the bean as normal, including loading the version number from the database. The ejbStore() method checks the version number in the database versus its value when the EJB component was loaded.
If the version number has been modified, it means that there has been simultaneous access to the EJB component and ejbStore() throws a ConcurrentModificationException. Otherwise, ejbStore() stores the data and completes as normal.
The ejbStore() method performs this validation at the end of the transaction regardless of whether any data in the bean was modified. Subsequent uses of the bean behave similarly, except that the ejbLoad() method loads its initial data (including the version number) from an internal cache. This saves a trip to the database. When the ejbStore() method is called, the version number is checked to ensure that the correct data was used in the transaction. Version consistency is advantageous when you have EJB components that are rarely modified, because it allows two transactions to use the same EJB component at the same time. Because neither transaction modifies the data, the version number is unchanged at the end of both transactions, and both succeed. But now the transactions can run in parallel. If two transactions occasionally modify the same EJB component, one will succeed and one will fail and can be retried using the new values, which can still be faster than serializing all access to the EJB component if the retries are infrequent enough (though now your application logic has to be prepared to perform the retry operation). To use version consistency, the database schema for a particular table must include a column where the version can be stored. You then specify that table in the sun-cmp-mapping.xml deployment descriptor for a particular bean:
<entity-mapping> <cmp-field-mapping>
In addition, you must establish a trigger on the database to automatically update the version column when data in the specified table is modified. The Application Server requires such a trigger to use version consistency. Having such a trigger also ensures that external applications that modify the EJB data will not conflict with EJB transactions in progress. For example, the following DDL illustrates how to create a trigger for the Order table:
CREATE TRIGGER OrderTrigger
    BEFORE UPDATE ON OrderTable
    FOR EACH ROW
    WHEN (new.VC_VERSION_NUMBER = old.VC_VERSION_NUMBER)
DECLARE
BEGIN
    :NEW.VC_VERSION_NUMBER := :OLD.VC_VERSION_NUMBER + 1;
END;
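Conceptually, the check the container performs at ejbStore() time amounts to the comparison sketched below; this is illustrative only and is not the container's actual implementation:

import java.util.ConcurrentModificationException;

public class VersionCheckSketch {

    // cachedVersion is the VC_VERSION_NUMBER value loaded with the bean;
    // currentVersion is the value read back from the database at store time.
    public void checkVersion(long cachedVersion, long currentVersion) {
        if (currentVersion != cachedVersion) {
            // Another transaction (or an external application) updated the row,
            // so the trigger bumped the version and this bean's state is stale.
            throw new ConcurrentModificationException(
                "Row was modified by a concurrent transaction");
        }
        // Otherwise the store proceeds, and the trigger increments the version.
    }
}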
Request Partitioning
Request partitioning enables you to assign a request priority to an EJB component. This gives you the flexibility to make certain EJB components execute with higher priorities than others. An EJB component that has a request priority assigned to it will have its requests (services) executed within an assigned thread pool. By assigning a thread pool to its execution, the EJB component can execute independently of other pending requests. In short, request partitioning enables you to meet service-level agreements that have differing levels of priority assigned to different services. Request partitioning applies only to remote EJB components (those that implement a remote interface). Local EJB components are executed in their calling thread (for example, when a servlet calls a local bean, the local bean invocation occurs on the servlet's thread).
Configure additional thread pools for EJB execution using the Admin Console, then add the additional thread pool IDs to the Application Server's ORB. You can do this by editing the domain.xml file or through the Admin Console.
For example, enable threadpools named priority-1 and priority-2 to the <orb> element as follows:
<orb max-connections="1024" message-fragment-size="1024" use-thread-pool-ids="thread-pool-1,priority-1,priority-2">
Include the thread pool ID in the use-thread-pool-id element of the EJB component's sun-ejb-jar.xml deployment descriptor. For example, the following sun-ejb-jar.xml deployment descriptor assigns an EJB component named TheGreeter to the thread pool named priority-1:
<sun-ejb-jar>
    <enterprise-beans>
        <unique-id>1</unique-id>
        <ejb>
            <ejb-name>TheGreeter</ejb-name>
            <jndi-name>greeter</jndi-name>
            <use-thread-pool-id>priority-1</use-thread-pool-id>
        </ejb>
    </enterprise-beans>
</sun-ejb-jar>
Entity Beans on page 42
Stateful Session Beans on page 42
Stateless Session Beans on page 43
Read-Only Entity Beans on page 43
Pre-fetching Container Managed Relationship (CMR) Beans on page 44
Entity Beans
Depending on the usage of a particular entity bean, tune max-cache-size so that beans that are used less often (for example, an order that is created and never used after the transaction is over) are cached in smaller numbers, and beans that are used frequently (for example, an item in the inventory that gets referenced very often) are cached in larger numbers.
If the maximum cache size is too small (relative to the steady load of users), beans would be frequently passivated and activated, causing a negative impact on response times due to CPU-intensive serialization and deserialization as well as disk I/O. Another important variable for tuning is cache-idle-timeout-in-seconds: at periodic intervals of cache-idle-timeout-in-seconds, all the beans in the cache that have not been accessed for more than cache-idle-timeout-in-seconds are passivated. Similar to an HTTP session time-out, the bean is removed after it has not been accessed for removal-timeout-in-seconds. Passivated beans are stored on disk in serialized form. A large number of passivated beans could not only mean many files on the disk system, but also slower response times, as the session state has to be de-serialized before the invocation.
Read-Only Entity Beans
Read-only entity beans are appropriate when:
The database rows represented by the bean do not change.
The application can tolerate using out-of-date values for the bean.
For example, an application might use a read-only bean to represent a list of best-seller books. Although the list might change occasionally in the database (say, from another bean entirely), the change need not be reflected immediately in an application. The ejbLoad() method of a read-only bean is handled differently for CMP and BMP beans. For CMP beans, the EJB container calls ejbLoad() only once to load the data from the database; subsequent uses of the bean just copy that data. For BMP beans, the EJB container calls ejbLoad() the first time a bean is used in a transaction. Subsequent uses of that bean within the transaction use the same values. The container calls ejbLoad() for a BMP bean that doesn't run within a transaction every time the bean is used. Therefore, read-only BMP beans still make a number of calls to the database. To create a read-only bean, add the following to the EJB deployment descriptor sun-ejb-jar.xml:
<is-read-only-bean>true</is-read-only-bean> <refresh-period-in-seconds>600</refresh-period-in-seconds>
Refresh period
An important parameter for tuning read-only beans is the refresh period, represented by the deployment descriptor element refresh-period-in-seconds. For CMP beans, the first access to a bean loads the bean's state. The first access after the refresh period reloads the data from the database. All subsequent uses of the bean use the newly refreshed data (until another refresh period elapses). For BMP beans, an ejbLoad() method within an existing transaction uses the cached data unless the refresh period has expired (in which case, the container calls ejbLoad() again). This parameter enables the EJB component to periodically refresh its snapshot of the database values it represents. If the refresh period is less than or equal to 0, the bean is never refreshed from the database (the default behavior if no refresh period is given).
For example, you have this relationship defined in the ejb-jar.xml file:
<relationships>
    <ejb-relation>
        <description>Order-OrderLine</description>
        <ejb-relation-name>Order-OrderLine</ejb-relation-name>
        <ejb-relationship-role>
            <ejb-relationship-role-name>
                Order-has-N-OrderLines
            </ejb-relationship-role-name>
            <multiplicity>One</multiplicity>
            <relationship-role-source>
                <ejb-name>OrderEJB</ejb-name>
            </relationship-role-source>
            <cmr-field>
                <cmr-field-name>orderLines</cmr-field-name>
                <cmr-field-type>java.util.Collection</cmr-field-type>
            </cmr-field>
        </ejb-relationship-role>
    </ejb-relation>
</relationships>
When a particular Order is loaded, you can load its related OrderLines by adding this to the sun-cmp-mapping.xml file for the application:
<entity-mapping>
    <ejb-name>Order</ejb-name>
    <table-name>...</table-name>
    <cmp-field-mapping>...</cmp-field-mapping>
    <cmr-field-mapping>
        <cmr-field-name>orderLines</cmr-field-name>
        <column-pair>
            <column-name>OrderTable.OrderID</column-name>
            <column-name>OrderLineTable.OrderLine_OrderID</column-name>
        </column-pair>
        <fetched-with>
            <default/>
        </fetched-with>
    </cmr-field-mapping>
</entity-mapping>
Now when an Order is retrieved, the CMP engine issues SQL to retrieve all related OrderLines with a SELECT statement that has the following WHERE clause:
OrderTable.OrderID = OrderLineTable.OrderLine_OrderID
Pre-fetching generally improves performance because it reduces the number of database accesses. However, if the business logic often uses Orders without referencing their OrderLines, then this can carry a performance penalty: the system has spent the effort to pre-fetch OrderLines that are not actually needed. Disabling pre-fetching for specific finder methods often avoids that penalty. For example, consider an order bean that has two finder methods: a findByPrimaryKey method that uses the orderlines, and a findByCustomerId method that returns only order information and hence doesn't use the orderlines. If you've enabled CMR pre-fetching for the orderlines, both finder methods will pre-fetch the orderlines. However, you can prevent pre-fetching for the findByCustomerId method by including this information in the sun-ejb-jar.xml descriptor:
<ejb>
    <ejb-name>OrderBean</ejb-name>
    ...
    <cmp>
        <prefetch-disabled>
            <query-method>
                <method-name>findByCustomerId</method-name>
            </query-method>
        </prefetch-disabled>
    </cmp>
</ejb>
Close Connections
To ensure that connections are returned to the pool, always close the connections after use.
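A minimal sketch of the usual pattern follows; the JNDI name jdbc/MyPool is hypothetical, and the try/finally block guarantees the connection goes back to the pool even if the work throws:

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class CloseConnectionExample {

    public void doWork() throws NamingException, SQLException {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/MyPool");   // hypothetical JNDI name
        Connection conn = null;
        try {
            conn = ds.getConnection();
            // ... use the connection ...
        } finally {
            if (conn != null) {
                conn.close();   // returns the connection to the pool
            }
        }
    }
}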
Reduce the database transaction isolation level when appropriate. Reduced isolation levels reduce work in the database tier, and could lead to better application performance. However, this must be done after carefully analyzing the database table usage patterns. Set the database transaction isolation level with the Admin Console on the Resources > JDBC > Connection Pools > PoolName page. For more information on tuning JDBC connection pools, see JDBC Connection Pool Settings on page 77 .
Use getConnection()
JMS connections are served from a connection pool. This means that calling getConnection() on a Queue connection factory is fast.
Caution: Prior to version 8.1, it was possible to reuse a connection with a servlet or EJB component. That is, the servlet could call getConnection() in its init() method and then continually call getSession() for each servlet invocation. If you use JMS within a global transaction, that no longer works: applications can only call getSession() once for each connection. After that, the connection must be closed (which doesn't actually close the connection; it merely returns it to the pool). This is a general feature of portable Java EE 1.4 applications; the Sun Java System Application Server enforces that restriction where previous (Java EE 1.3-based) application servers did not.
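Within a global transaction, the portable pattern is therefore to obtain the connection, create a single session, do the work, and close the connection on every invocation. A minimal sketch using the standard JMS API; the JNDI names jms/MyQCF and jms/MyQueue are hypothetical:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class JmsSendExample {

    public void send(String text) throws JMSException, NamingException {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyQCF");   // hypothetical
        Queue queue = (Queue) ctx.lookup("jms/MyQueue");                      // hypothetical
        Connection conn = null;
        try {
            conn = cf.createConnection();   // fast: served from the connection pool
            // One session per connection; the acknowledge mode is ignored inside
            // a container-managed global transaction.
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage(text));
        } finally {
            if (conn != null) {
                conn.close();   // returns the connection to the pool
            }
        }
    }
}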
Chapter 3 Tuning the Enterprise Server
This chapter describes some ways to tune the Enterprise Server for optimum performance, including the following topics:
Deployment Settings on page 49
Logger Settings on page 50
Web Container Settings on page 51
EJB Container Settings on page 53
Java Message Service Settings on page 58
Transaction Service Settings on page 58
HTTP Service Settings on page 60
ORB Settings on page 70
Thread Pool Settings on page 76
Resources:
JDBC Connection Pool Settings on page 77
Connector Connection Pool Settings on page 80
Deployment Settings
Deployment settings can have significant impact on performance. Follow these guidelines when configuring deployment settings for best performance:
Disable Auto-deployment on page 50
Use Pre-compiled JavaServer Pages on page 50
Disable Dynamic Application Reloading on page 50
Disable Auto-deployment
Enabling auto-deployment adversely affects performance, though it is a convenience in a development environment. For a production system, disable auto-deployment to optimize performance. If auto-deployment is enabled, then the Reload Poll Interval setting can have a significant performance impact. Disable auto-deployment with the Admin Console under Stand-Alone Instances > server (Admin Server) on the Advanced/Applications Configuration tab.
Logger Settings
The Application Server writes log messages and exception stack trace output to the log file in the logs directory of the instance, appserver-root/domains/domain-name/logs. Naturally, the volume of log activity can impact server performance, particularly in benchmarking situations.
General Settings
In general, writing to the system log slows down performance slightly, and increased disk access (increasing the log level, decreasing the file rotation limit or time limit) also slows down the application. Also, make sure that any custom log handler doesn't log to a slow device such as a network file system, since this can adversely affect performance.
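On the application side, a common complement (illustrative, not a recommendation from this guide) is to guard expensive log message construction so that a production log level such as WARNING costs almost nothing on the request path:

import java.util.logging.Level;
import java.util.logging.Logger;

public class LogGuardExample {

    private static final Logger logger = Logger.getLogger(LogGuardExample.class.getName());

    public void handleRequest(Object request) {
        // The string concatenation only happens when FINE is actually enabled.
        if (logger.isLoggable(Level.FINE)) {
            logger.fine("Handling request: " + request);
        }
    }
}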
Log Levels
Set the log level for the server and its subsystems on the Admin Console Logger Settings page, Log Levels tab. The page enables you to specify the default log level for the server (labeled Root), the default log level for javax.enterprise.system subsystems (labeled Server) such as the EJB Container, MDB Container, Web Container, Classloader, JNDI naming system, and Security, and the level for each individual subsystem. Log levels vary from FINEST, which provides maximum log information, through SEVERE, which logs only events that interfere with normal program execution. The default log level is INFO. The individual subsystem log level overrides the Server setting, which in turn overrides the Root setting. For example, the MDB container can produce log messages at a different level than the server default. To get more debug messages, set the log level to FINE, FINER, or FINEST. For best performance under normal conditions, set the log level to WARNING. Under benchmarking conditions, it is often appropriate to set the log level to SEVERE.
Web Container Settings
Session Properties: Session Timeout on page 51
Manager Properties: Reap Interval on page 52
Disable Dynamic JSP Reloading on page 52
Bidding starts at $5; in 60 seconds the value recorded will be $8 (three 20-second intervals). During the next 40 seconds, the client starts incrementing the price. The value the client sees is $10. During the client's 20-second rest, the Application Server stops and starts in 10 seconds. As a result, the latest value recorded at the 60-second interval ($8) is loaded into the session. The client clicks again expecting to see $11, but instead sees $9, which is incorrect. So, to avoid data inconsistencies, take into account the expected behavior of the application when adjusting the reap interval.
                         Stateless Session    Stateful Session    Entity
Is the bean pooled?      Yes                  No                  Yes
Is the bean cached?      No                   Yes                 Yes
The difference between a pooled bean and a cached bean is that pooled beans are all equivalent and indistinguishable from one another. Cached beans, on the contrary, contain conversational state in the case of stateful session beans, and are associated with a primary key in the case of entity beans. Entity beans are removed from the pool and added to the cache on ejbActivate() and removed from the cache and added to the pool on ejbPassivate(). ejbActivate() is called by the container when a needed entity bean is not in the cache. ejbPassivate() is called by the container when the cache grows beyond its configured limits.
Note If you develop and deploy your EJB components using Sun Java Studio, then you need to edit the individual bean descriptor settings for bean pool and bean cache. These settings might not be suitable for production-level deployment.
Initial and Minimum Pool Size: the initial and minimum number of beans maintained in the pool. Valid values are from 0 to MAX_INTEGER, and the default value is 8. The corresponding EJB deployment descriptor attribute is steady-pool-size. Set this property to a number greater than zero for a moderately loaded system. Having a value greater than zero ensures that there is always a pooled instance to process an incoming request.
Maximum Pool Size: the maximum number of beans that can be created to satisfy client requests. Valid values are from zero to MAX_INTEGER, and the default is 32. A value of zero means that the size of the pool is unbounded. The potential implication is that the JVM heap will be filled with objects in the pool. The corresponding EJB deployment descriptor attribute is max-pool-size. Set this property to be representative of the anticipated high load of the system. A very large pool wastes memory and can slow down the system. A very small pool is also inefficient due to contention.
Pool Resize Quantity: the number of beans to be created or deleted when the pool is serviced by the server. Valid values are from zero to MAX_INTEGER, and the default is 16. The corresponding EJB deployment descriptor attribute is resize-quantity. Be sure to re-calibrate the pool resize quantity when you change the maximum pool size, to maintain an equilibrium. Generally, a larger maximum pool size should have a larger pool resize quantity.
Pool Idle Timeout: the maximum time that a stateless session bean, entity bean, or message-driven bean is allowed to be idle in the pool. After this time, the bean is destroyed if it is a stateless session bean or a message-driven bean. This is a hint to the server. The default value is 600 seconds. The corresponding EJB deployment descriptor attribute is pool-idle-timeout-in-seconds. If there are more beans in the pool than the steady pool size, the pool drains back to the initial and minimum pool size, in steps of the pool resize quantity, at an interval specified by the pool idle timeout. If the resize quantity is too small and the idle timeout large, the pool will not drain back to the steady size quickly enough.
The memory consumed by all the beans affects the heap available in the Java Virtual Machine. Increasing the number of objects in the cache, and the memory they consume, means longer, and possibly more frequent, garbage collection. The application server might run out of memory unless the heap is carefully tuned for peak loads.
Keeping in mind how your application uses stateful session beans and entity beans, and the amount of traffic your server handles, tune the EJB cache size and time-out settings to minimize the number of activations and passivations.
Max Cache Size: the maximum number of beans in the cache. Make this setting greater than one. The default value is 512. A value of zero indicates the cache is unbounded, which means the size of the cache is governed by Cache Idle Timeout and Cache Resize Quantity. The corresponding EJB deployment descriptor attribute is max-cache-size.
Cache Resize Quantity: the number of beans to be created or deleted when the cache is serviced by the server. Valid values are from zero to MAX_INTEGER, and the default is 16. The corresponding EJB deployment descriptor attribute is resize-quantity.
Removal Timeout: the amount of time that a stateful session bean remains passivated (idle in the backup store). If a bean was not accessed within this interval of time, it is removed from the backup store and is no longer accessible to the client. The default value is 60 minutes. The corresponding EJB deployment descriptor attribute is removal-timeout-in-seconds.
Victim Selection Policy: the algorithm used to remove objects from the cache. The corresponding EJB deployment descriptor attribute is victim-selection-policy. Choices are:
NRU (not recently used). This is the default, and is actually a pseudo-random selection policy.
FIFO (first in, first out)
LRU (least recently used)
Cache Idle Timeout: the maximum time that a stateful session bean or entity bean is allowed to be idle in the cache. After this time, the bean is passivated to the backup store. The default value is 600 seconds. The corresponding EJB deployment descriptor attribute is cache-idle-timeout-in-seconds.
Refresh Period: the rate at which a read-only bean is refreshed from the data source. Zero (0) means that the bean is never refreshed. The default is 600 seconds. The corresponding EJB deployment descriptor attribute is refresh-period-in-seconds. Note: this setting does not have a custom field in the Admin Console. To set it, use the Add Property button in the Additional Properties section.
[TABLE 3-2, EJB cache and pool settings by type of bean: the table indicates which of the cache settings (cache-resize-quantity, max-cache-size, cache-idle-timeout-in-seconds, removal-timeout-in-seconds, victim-selection-policy, refresh-period-in-seconds) and pool settings (steady-pool-size, pool-resize-quantity, max-pool-size, pool-idle-timeout-in-seconds) apply to each type of bean; the original table layout is not reproduced here.]
Commit Option
The commit option controls the action taken by the EJB container when an EJB component completes a transaction. The commit option has a significant impact on performance. There are two possible values for the commit option:
Commit option B: When a transaction completes, the bean is kept in the cache and retains its identity. The next invocation for the same primary key can use the cached instance. The EJB container will call the bean's ejbLoad() method before the method invocation to synchronize with the database. Commit option C: When a transaction completes, the EJB container calls the bean's ejbPassivate() method, the bean is disassociated from its primary key and returned to the free pool. The next invocation for the same primary key will have to get a free bean from the pool, set the PrimaryKey on this instance, and then call ejbActivate() on the instance. Again, the EJB container will call the bean's ejbLoad() before the method invocation to synchronize with the database.
Option B avoids ejbActivate() and ejbPassivate() calls. So, in most cases it performs better than option C since it avoids some overhead in acquiring and releasing objects back to the pool. However, there are some cases where option C can provide better performance. If the beans in the cache are rarely reused and beans are constantly added to the cache, then it makes no sense to cache beans. When option C is used, the container puts beans back into the pool (instead of caching them) after method invocation or on transaction completion. This option reuses instances better and reduces the number of live objects in the JVM, speeding garbage collection.
If the cache hits are much higher than cache misses, then option B is an appropriate choice. You might still have to change the max-cache-size and cache-resize-quantity to get the best result. If the cache hits are too low and cache misses are very high, then the application is not reusing the bean instances, and increasing the cache size (using max-cache-size) will not help (assuming that the access pattern remains the same). In this case you might use commit option C. If there is no great difference between cache hits and cache misses, then tune max-cache-size, and probably cache-idle-timeout-in-seconds.
Monitor the transaction service with the Admin Console at Standalone Instances > server-name (Monitor | Monitor), selecting transaction-service from the View drop-down, or with the corresponding asadmin monitoring command. The following statistics are available:
total-tx-completed: Completed transactions.
total-tx-rolled-back: Total rolled back transactions.
total-tx-inflight: Total inflight (active) transactions.
isFrozen: Whether transaction system is frozen (true or false).
inflight-tx: List of inflight (active) transactions.
Name: disable-distributed-transaction-logging
Value: true
You can also set this property with asadmin, for example:
asadmin set server1.transaction-service.disable-distributed-transaction-logging=true
Setting this attribute to true disables transaction logging, which can improve performance. Setting it to false (the default), makes the transaction service write transactional activity to transaction logs so that transactions can be recovered. If Recover on Restart is checked, this property is ignored. Set this property to true only if performance is more important than transaction recovery.
When Recover on Restart is true, the server will always perform transaction logging, regardless of the Disable Distributed Transaction Logging attribute. If Recover on Restart is false, then:
If Disable Distributed Transaction Logging is false (the default), then the server will write transaction logs.
If Disable Distributed Transaction Logging is true, then the server will not write transaction logs. Not writing transaction logs gives approximately a twenty percent improvement in performance, but at the cost of not being able to recover from any interrupted transactions. The performance benefit applies to transaction-intensive tests; gains in real applications may be less.
Keypoint Interval
The keypoint interval determines how often entries for completed transactions are removed from the log file. Keypointing prevents a process log from growing indefinitely. Frequent keypointing is detrimental to performance. The default value of the Keypoint Interval is 2048, which is sufficient in most cases.
HTTP Service Settings
Monitoring the HTTP Service on page 60
Tuning the HTTP Service on page 64
Tuning HTTP Listener Settings on page 69
With asadmin, use the following command to list the monitoring parameters available:
list --user admin --port 4848 -m server-instance-name.http-service.*
where server-instance-name is the name of the server instance. Use the following command to get the values:
get --user admin --port 4848 -m server.http-service.parameter-name.*
where parameter-name is the name of the parameter to monitor. Statistics collection is enabled by default. Disable it by adding the following property to domain.xml and restart the server:
<property name="statsProfilingEnabled" value="false" />
Disabling statistics collection will increase performance. You can also view monitoring statistics with the Admin Console. The information is divided into the following categories:
DNS Cache Information (dns) on page 61
File Cache Information (file-cache) on page 63
Keep Alive (keep-alive) on page 63
Enabled
If the DNS cache is disabled, the rest of this section is not displayed. By default, the DNS cache is off. Enable DNS caching with the Admin Console by setting the DNS value to Perform DNS lookups on clients accessing the server.
HitRatio
The hit ratio is the number of cache hits divided by the number of cache lookups. This setting is not tunable.
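For example (illustrative numbers, not measurements from this guide): 45,000 cache hits out of 50,000 lookups gives a hit ratio of 45,000 / 50,000 = 0.9, meaning only ten percent of lookups had to go to the resolver.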
Note: If you turn off DNS lookups on your server, host name restrictions will not work, and host names will not appear in your log files; IP addresses appear instead.
Enabled
If asynchronous DNS is disabled, the rest of this section will not be displayed.
NameLookups
The number of name lookups (DNS name to IP address) that have been done since the server was started. This setting is not tunable.
AddrLookups
The number of address lookups (IP address to DNS name) that have been done since the server was started. This setting is not tunable.
LookupsInProgress
The current number of lookups in progress.
Number of Hits on Cached File Content
Number of Cache Entries
Number of Hits on Cached File Info
Heap Space Used for Cache
Number of Misses on Cached File Content
Cache Lookup Misses
Number of Misses on Cached File Info
Max Age of a Cache Entry: The maximum age displays the maximum age of a valid cache entry.
Max Number of Cache Entries
Max Number of Open Entries
Is File Cache Enabled?: If the cache is disabled, the other statistics are not displayed. The cache is enabled by default.
Maximum Memory Map to be Used for Cache
Memory Map Used for Cache
Cache Lookup Hits
Open Cache Entries: The number of current cache entries and the maximum number of cache entries are both displayed. A single cache entry represents a single URI. This is a tunable setting.
Maximum Heap Space to be Used for Cache
Connections Terminated Due to Client Connection Timed Out
Max Connection Allowed in Keep-alive
Number of Hits
Connections in Keep-alive Mode
Connections not Handed to Keep-alive Thread Due to too Many Persistent Connections
The Time in Seconds Before Idle Connections are Closed
Connections Closed Due to Max Keep-alive Being Exceeded
Connection Queue
Total Connections Queued: Total connections queued is the total number of times a connection has been queued. This includes newly accepted connections and connections from the keep-alive system. Average Queuing Delay: Average queueing delay is the average amount of time a connection spends in the connection queue. This represents the delay between when a request connection is accepted by the server, and a request processing thread (also known as a session) begins servicing the request.
Access Log on page 64
Request Processing on page 64
Keep Alive on page 66
HTTP Protocol on page 67
HTTP File Cache on page 67
Access Log
When performing benchmarking, ensure that access logging is disabled. To disable access logging, in the HTTP Service click Add Property and add the property that disables it. If access logging is enabled, the following settings affect performance:
Rotation (enabled/disabled). Enable rotation to ensure that the logs don't run out of disk space.
Rotation Policy: time-based or size-based. Size-based is the default.
Rotation Interval.
Request Processing
On the Request Processing tab of the HTTP Service page, tune the following HTTP request processing settings:
Thread Count
The Thread Count parameter specifies the maximum number of simultaneous requests the server can handle. The default value is 5. When the server has reached the limit of request threads, it defers processing new requests until the number of active requests drops below the maximum amount. Increasing this value will reduce HTTP response latency times. In practice, clients frequently connect to the server and then do not complete their requests. In these cases, the server waits a length of time specified by the Request Timeout parameter. Also, some sites do heavyweight transactions that take minutes to complete. Both of these factors add to the maximum simultaneous requests that are required. If your site is processing many requests that take many seconds, you might need to increase the number of maximum simultaneous requests. Adjust the thread count value based on your load and the length of time for an average request. In general, increase this number if you have idle CPU time and requests that are pending; decrease it if the CPU becomes overloaded. If you have many HTTP 1.0 clients (or HTTP 1.1 clients that disconnect frequently), adjust the timeout value to reduce the time a connection is kept open. Suitable Request Thread Count values range from 100 to 500, depending on the load. If your system has extra CPU cycles, keep incrementally increasing the thread count and monitor performance after each incremental increase. When performance saturates (stops improving), stop increasing the thread count.
Request Timeout
The Request Timeout property specifies the number of seconds the server waits between accepting a connection to a client and receiving information from it. The default setting is 30 seconds. Under most circumstances, changing this setting is unnecessary. Setting it to less than the default 30 seconds frees up threads sooner, but may also disconnect users with slower connections.
Buffer Length
The size (in bytes) of the buffer used by each of the request processing threads for reading the request data from the client. Adjust the value based on the actual request size and observe the impact on performance. In most cases the default should suffice. If the request size is large, increase this parameter.
Keep Alive
Both HTTP 1.0 and HTTP 1.1 support the ability to send multiple requests across a single HTTP session. A server can receive hundreds of new HTTP requests per second. If every request were allowed to keep the connection open indefinitely, the server could become overloaded with connections. On UNIX/Linux systems, this could easily lead to a file table overflow. The Application Server's Keep Alive system addresses this problem. A waiting keep-alive connection has completed processing the previous request, and is waiting for a new request to arrive on the same connection. The server maintains a counter for the maximum number of waiting keep-alive connections. If the server has more than the maximum waiting connections open when a new connection waits for a keep-alive request, the server closes the oldest connection. This algorithm limits the number of open waiting keep-alive connections. If your system has extra CPU cycles, incrementally increase the keep-alive settings and monitor performance after each increase. When performance saturates (stops improving), stop increasing the settings. The following HTTP keep-alive settings affect performance:
Thread Count
Max Connections
Time Out
Keep Alive Query Mean Time
Keep Alive Query Max Sleep Time
Max Connections
Max Connections controls the number of requests that a particular client can make over a keep-alive connection. The range is any positive integer, and the default is 256. Adjust this value based on the number of requests a typical client makes in your application. For best performance specify quite a large number, allowing clients to make many requests. The number of connections specified by Max Connections is divided equally among the keep alive threads. If Max Connections is not equally divisible by Thread Count, the server can allow slightly more than Max Connections simultaneous keep alive connections.
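For example (illustrative arithmetic): with Max Connections set to 256 and five keep-alive threads, an even split is 256 / 5 = 51.2 connections per thread; if each thread rounds its share up to 52, the server could allow up to 5 x 52 = 260 waiting keep-alive connections, slightly more than the configured 256.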
Time Out
Time Out determines the maximum time (in seconds) that the server holds open an HTTP keep-alive connection. A client can keep a connection to the server open so that multiple requests to one server can be serviced by a single network connection. Since the number of open connections that the server can handle is limited, a high number of open connections will prevent new clients from connecting. The default time out value is 30 seconds. Thus, by default, the server will close the connection if it is idle for more than 30 seconds. The maximum value for this parameter is 300 seconds (5 minutes). The proper value for this parameter depends upon how much time is expected to elapse between requests from a given client. For example, if clients are expected to make requests frequently, set the parameter to a higher value; if clients are expected to make requests rarely, set it to a lower value.
HTTP Protocol
The only HTTP Protocol attribute that significantly affects performance is DNS Lookup Enabled.
Max Age
This parameter controls how long cached information is used after a file has been cached. An entry older than the maximum age is replaced by a new entry for the same file. If your web site's content changes infrequently, increase this value for improved performance. Set the maximum age by entering or changing the value in the Maximum Age field of the File Cache Configuration page in the web-based Admin Console for the HTTP server node and selecting the File Caching tab. Set the maximum age based on whether the content is updated (existing files are modified) on a regular schedule or not. For example, if content is updated four times a day at regular intervals, you could set the maximum age to 21600 seconds (6 hours). Otherwise, consider setting the maximum age to the longest time you are willing to serve the previous version of a content file after the file has been modified.
File Transmission
When File Transmission is enabled, the server caches open file descriptors for files in the file cache, rather than the file contents. Also, the distinction normally made between small, medium, and large files no longer applies, since only the open file descriptor is being cached. By default, File Transmission is enabled on Windows and disabled on UNIX. On UNIX, only enable File Transmission for platforms that have the requisite native OS support: HP-UX and AIX. Don't enable it for other UNIX/Linux platforms.
Network Address
For machines with only one network interface card (NIC), set the network address to the IP address of the machine (for example, 192.18.80.23 instead of the default 0.0.0.0). If you specify an IP address other than 0.0.0.0, the server makes one less system call per connection. Specify an IP address other than 0.0.0.0 for the best possible performance. If the server has multiple NICs, create a separate listener for each NIC.
Acceptor Threads
The Acceptor Threads setting specifies how many threads you want in accept mode on a listen socket at any time. It is a good practice to set this to less than or equal to the number of CPUs in your system. In the Enterprise Server, acceptor threads on an HTTP Listener accept connections and put them onto a connection queue. Session threads then pick up connections from the queue and service the requests. The server posts more session threads if required at the end of the request. The policy for adding new threads is based on the connection queue state:
Each time a new connection is returned, the number of connections waiting in the queue (the backlog of connections) is compared to the number of session threads already created. If it is greater than the number of threads, more threads are scheduled to be added the next time a request completes. The previous backlog is tracked, so that n threads are added (n is the HTTP Service's Thread Increment parameter) until one of the following is true:
The number of threads increases over time.
The increase is greater than n.
The number of session threads minus the backlog is less than n.
To avoid creating too many threads when the backlog increases suddenly (such as the startup of benchmark loads), the server makes the decision whether more threads are needed only once every 16 or 32 connections, based on how many session threads already exist.
ORB Settings
The Enterprise Server includes a high performance and scalable CORBA Object Request Broker (ORB). The ORB is the foundation of the EJB Container on the server.
Overview
The ORB is primarily used by EJB components via:
RMI/IIOP path from an application client (or rich client) using the application client container.
RMI/IIOP path from another Enterprise Server instance's ORB.
RMI/IIOP path from another vendor's ORB.
In-process path from the Web Container or MDB (message-driven beans) container.
When a server instance makes a connection to another server instance's ORB, the first instance acts as a client ORB. SSL over IIOP uses a fast optimized transport with high-performance native implementations of cryptography algorithms. It is important to remember that EJB local interfaces do not use the ORB. Using a local interface passes all arguments by reference and does not require copying any objects.
Connection Statistics
The following statistics are gathered on ORB connections:
total-inbound-connections: Total inbound connections to the ORB.
total-outbound-connections: Total outbound connections from the ORB.
Thread Pools
The following statistics are gathered on ORB thread pools:
thread-pool-size: Number of threads in the ORB thread pool.
waiting-thread-count: Number of thread pool threads waiting for work to arrive.
[TABLE 3-3 (continued), ORB usage paths: for each path, including the in-process path, the table lists the parts of the ORB exercised (communication infrastructure, thread pool) and the tunable parameters steady-thread-pool-size, max-thread-pool-size, and idle-thread-timeout-in-seconds; the original table layout is not reproduced here.]
Thread Pool ID: Name of the thread pool to use.
Max Message Fragment Size: Messages larger than this number of bytes will be fragmented. In CORBA GIOP v1.2, a Request, Reply, LocateRequest, or LocateReply message can be broken into multiple fragments. The first message is a regular Request or Reply message with the more-fragments bit in the flags field set to true. If inter-ORB messages are for the most part larger than the default size (1024 bytes), increase the fragment size to decrease latencies on the network.
Total Connections: Maximum number of incoming connections at any time, on all listeners. Protects the server state by allowing a finite number of connections. This value equals the maximum number of threads that will actively read from the connection.
IIOP Client Authentication Required (true/false)
Thus, even when the ORB is not used for remote calls (via RMI/IIOP), set the size of the thread pool to facilitate cleaning up the EJB pools and caches. Set ORB thread pool attributes under Configurations > config-name > Thread Pools > thread-pool-ID, where thread-pool-ID is the thread pool ID selected for the ORB. Thread pools have the following attributes that affect performance.
Minimum Pool Size: The minimum number of threads in the ORB thread pool. Set to the average number of threads needed at a steady (RMI/IIOP) load.
Maximum Pool Size: The maximum number of threads in the ORB thread pool.
Idle Timeout: Number of seconds to wait before removing an idle thread from the pool. Allows shrinking of the thread pool.
Number of Work Queues
In particular, the maximum pool size is important to performance. For more information, see Thread Pool Sizing on page 74.
When using the context factory (com.sun.appserv.naming.S1ASCtxFactory), you can specify the number of connections to open to the server from the client ORB with the property com.sun.appserv.iiop.orbconnections. The default value is one. Using more than one connection may improve throughput for network-intense applications. The configuration changes are specified on the client ORB(s) by adding the following jvm-options:
-Djava.naming.factory.initial=com.sun.appserv.naming.S1ASCtxFactory -Dcom.sun.appserv.iiop.orbconnections=value
Load Balancing
For information on how to configure RMI/IIOP for multiple application server instances in a cluster, see Chapter 9, RMI-IIOP Load Balancing and Failover, in Sun GlassFish Enterprise Server 2.1 High Availability Administration Guide. When tuning the client ORB for load balancing and connections, consider the number of connections opened on the server ORB. Start with a low number of connections and then increase it to observe any performance benefits. A connection to the server translates to an ORB thread reading actively from the connection (these threads are not pooled, but exist for the lifetime of the connection).
++++++++++++++++++++++++++++++
Message(Thread[ORB Client-side Reader, conn to 192.18.80.118:1050,5,main]): createFromStream: type is 4 <
MessageBase(Thread[ORB Client-side Reader, conn to 192.18.80.118:1050,5,main]): Message GIOP version: 1.2
MessageBase(Thread[ORB Client-side Reader, conn to 192.18.80.118:1050,5,main]): ORB Max GIOP Version: 1.2
Message(Thread[ORB Client-side Reader, conn to 192.18.80.118:1050,5,main]): createFromStream: message construction complete.
com.sun.corba.ee.internal.iiop.MessageMediator(Thread[ORB Client-side Reader, conn to 192.18.80.118:1050,5,main]): Received message:
----- Input Buffer -----
Current index: 0
Total length : 340
47 49 4f 50 01 02 00 04 0 0 00 01 48 00 00 00 05 GIOP.......H....
Note The flag -Dcom.sun.CORBA.ORBdebug=giop generates many debug messages in the logs. This is used only when you suspect message fragmentation.
In the sample output above, the createFromStream type is shown as 4. This implies that the message is a fragment of a bigger message. To avoid fragmented messages, increase the fragment size. Larger fragments mean that messages are sent as one unit rather than as fragments, saving the overhead of multiple messages and the corresponding processing at the receiving end to piece the messages together. If most messages sent in the application are fragmented, increasing the fragment size is likely to improve efficiency. On the other hand, if only a few messages are fragmented, it might be more efficient to use a smaller fragment size that requires smaller buffers for writing messages.
In the tree component, expand the Configurations node.
Expand the desired node.
Select the JVM Settings node.
In the JVM Settings page, choose the JVM Options tab.
Click Add JVM Option, and enter the following value:
-Dcom.sun.CORBA.encoding.ORBEnableJavaSerialization=true
Resources
is not offered in a Unix/Linux user interface. However, it is possible to edit the OS-scheduled thread pools and add new thread pools, if needed, using the Admin Console.
Resources
JDBC Connection Pool Settings on page 77
Connector Connection Pool Settings on page 80
numConnFailedValidation (count): Number of connections that failed validation.
numConnUsed (range): Number of connections that have been used.
numConnFree (count): Number of free connections in the pool.
numConnTimedOut (bounded range): Number of connections in the pool that have timed out.
Timeout Settings on page 78
Isolation Level Settings on page 79
Connection Validation Settings on page 79
Upper limit of the size of the pool.
Number of connections to be removed when the idle timeout expires. Connections that have idled for longer than the timeout are candidates for removal. When the pool size reaches the initial and minimum pool size, removal of connections stops.
The following table summarizes pros and cons to consider when sizing connection pools.
TABLE 3-4 Connection Pool Sizing
Small connection pool: May not have enough connections to satisfy requests. Requests may spend more time in the queue.
Large connection pool: More connections to fulfill requests. Requests will spend less (or no) time in the queue.
Timeout Settings
There are two timeout settings:
Max Wait Time: Amount of time the caller (the code requesting a connection) will wait before getting a connection timeout. The default is 60 seconds. A value of zero forces the caller to wait indefinitely. To improve performance, set Max Wait Time to zero (0). This essentially blocks the caller thread until a connection becomes available. It also relieves the server of the task of tracking the elapsed wait time for each request, which increases performance.
Idle Timeout: Maximum time in seconds that a connection can remain idle in the pool. After this time, the pool can close this connection. This property does not control connection timeouts on the database server. Keep this timeout shorter than the database server timeout (if such timeouts are configured on the database) to prevent the accumulation of unusable connections in the Enterprise Server. For best performance, set Idle Timeout to zero (0) seconds, so that idle connections will not be removed. This ensures that there is normally no penalty in creating new connections and disables the idle monitor thread. However, there is a risk that the database server will reset a connection that is unused for too long.
Transaction Isolation Level: specifies the transaction isolation level of the pooled database connections. If this parameter is unspecified, the pool uses the default isolation level provided by the JDBC Driver. Isolation Level Guaranteed: Guarantees that every connection obtained from the pool has the isolation specified by the Transaction Isolation Level parameter. Applicable only when the Transaction Isolation Level is specified. The default value is true. This setting can have some performance impact on some JDBC drivers. Set to false when certain that the application does not change the isolation level before returning the connection.
Avoid specifying Transaction Isolation Level. If that is not possible, consider setting Isolation Level Guaranteed to false and make sure applications do not programmatically alter the connection's isolation level. If you must specify an isolation level, specify the best-performing level possible. The isolation levels listed from best performance to worst are:
1. READ_UNCOMMITTED
2. READ_COMMITTED
3. REPEATABLE_READ
4. SERIALIZABLE
Choose the isolation level that provides the best performance, yet still meets the concurrency and consistency needs of the application.
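If you need to confirm which isolation level the pooled connections actually carry, for example before deciding whether Isolation Level Guaranteed can safely be set to false, a plain JDBC check is enough. A minimal sketch; the JNDI name jdbc/MyPool is hypothetical:

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class IsolationCheckExample {

    public int currentIsolation() throws NamingException, SQLException {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/MyPool");   // hypothetical JNDI name
        Connection conn = null;
        try {
            conn = ds.getConnection();
            // Returns one of the java.sql.Connection TRANSACTION_* constants,
            // for example Connection.TRANSACTION_READ_COMMITTED.
            return conn.getTransactionIsolation();
        } finally {
            if (conn != null) {
                conn.close();
            }
        }
    }
}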
If true, the pool validates connections (checks to find out if they are usable) before providing them to an application. If possible, keep the default value, false. Requiring connection validation forces the server to apply the validation algorithm every time the pool returns a connection, which adds overhead to the latency of getConnection(). If the database connectivity is reliable, you can omit validation.
auto-commit: attempt to perform an auto-commit on the connection.
metadata: attempt to get metadata from the connection.
table: perform a query on a specified table. Must also set Table Name. You may have to use this method if the JDBC driver caches calls to setAutoCommit() and getMetaData().
Table Name: the table to query when the Validation Method is table.
Whether to close all connections in the pool if a single validation check fails. The default is false. One attempt will be made to re-establish failed connections.
Transaction Support
You may be able to improve performance by overriding the default transaction support specified for each connector connection pool. For example, consider a case where an Enterprise Information System (EIS) has a connection factory that supports local transactions with better performance than global transactions. If a resource from this EIS needs to be mixed with a resource coming from another resource manager, the default behavior forces the use of XA transactions, leading to lower performance. However, by changing the EIS's connector connection pool to use LocalTransaction transaction support and leveraging the Last Agent Optimization feature previously described, you could leverage the better-performing EIS LocalTransaction implementation. For more information on LAO, see Configure JDBC Resources as One-Phase Commit Resources on page 39. In the Admin Console, specify transaction support when you create a new connector connection pool, and when you edit a connector connection pool at Resources > Connectors > Connector Connection Pools. You can also set transaction support using asadmin; for example, a connector connection pool named TESTPOOL could be created with transaction-support set to LocalTransaction.
Chapter 4 Tuning the Java Runtime System
Java Virtual Machine Settings on page 83
Managing Memory and Garbage Collection on page 84
Further Information on page 91
The client VM is tuned for reducing start-up time and memory footprint. Invoke it by using the -client JVM command-line option. The server VM is designed for maximum program execution speed. Invoke it by using the -server JVM command-line option.
By default, the Application Server uses the JVM setting appropriate to the purpose:
Developer Profile, targeted at application developers, uses the -client JVM flag to optimize startup performance and conserve memory resources. Enterprise Profile, targeted at production deployments, uses the default JVM startup mode. By default, Application Server uses the client Hotspot VM. However, if a server VM is needed, it can be specified by creating a <jvm-option> named -server.
You can override the default by changing the JVM settings in the Admin Console under Configurations > config-name > JVM Settings (JVM Options). For more information on server-class machine detection in J2SE 5.0, see Server-Class Machine Detection . For more information on JVMs, see JavaTM Virtual Machines.
Goals on page 32
Tracing Garbage Collection on page 86
Other Garbage Collector Settings on page 86
Tuning the Java Heap on page 87
Rebasing DLLs on Windows on page 89
Further Information on page 91
When the new generation fills up, it triggers a minor collection in which the surviving objects are moved to the old generation. When the old generation fills up, it triggers a major collection which involves the entire object heap. Both HotSpot and Solaris JDK use thread local object allocation pools for lock-free, fast, and scalable object allocation. So, custom object pooling is not often required. Consider pooling only if object construction cost is high and significantly affects execution profiles.
Make sure that the system is not using 100 percent of its CPU.
Configure HADB timeouts, as described in the Administration Guide.
Configure the CMS collector in the server instance. To do this, add the following JVM options:
-XX:+UseConcMarkSweepGC -XX:SoftRefLRUPolicyMSPerMB=1
Additional Information
Use the jvmstat utility to monitor HotSpot garbage collection (see Further Information on page 91). For detailed information on tuning the garbage collector, see Tuning Garbage Collection with the 5.0 Java Virtual Machine.
On each line, the first number is the combined size of live objects before GC, the second number is the size of live objects after GC, and the number in parentheses is the total available space, which is the total heap minus one of the survivor spaces. The final figure is the amount of time that the GC took. This example shows three minor collections and one major collection. In the first GC, 50650 KB of objects existed before collection and 21808 KB of objects after collection, which means that 28842 KB of objects were dead and collected. The total heap size is 76868 KB. The collection process required 0.0478645 seconds. Other useful monitoring options include:
-XX:+PrintGCDetails for more detailed logging information
-Xloggc:file to save the information in a log file
Although applications can explicitly invoke GC with the System.gc() method, doing so is a bad idea since it forces major collections and inhibits scalability on large systems. It is best to disable explicit GC by using the flag -XX:+DisableExplicitGC. The Enterprise Server uses RMI in the Administration module for monitoring. Garbage cannot be collected in RMI-based distributed applications without occasional local collections, so RMI forces a periodic full collection. Control the frequency of these collections with the property sun.rmi.dgc.client.gcInterval. For example, java -Dsun.rmi.dgc.client.gcInterval=3600000 specifies explicit collection once per hour instead of the default rate of once per minute. To specify the attributes for the Java virtual machine, use the Admin Console and set the property under config-name > JVM settings (JVM options).
Guidelines for Java Heap Sizing on page 87
Heap Tuning Parameters on page 88
[Table of maximum heap sizes by operating system (Redhat Linux 32-bit, Redhat Linux 64-bit, Windows 98/2000/NT/Me/XP, Solaris x86 32-bit, Solaris 32-bit, Solaris 64-bit); the corresponding maximum address space and heap values did not survive extraction.]
Maximum heap space is always smaller than the maximum address space per process, because the process also needs space for stack, libraries, and so on. To determine the maximum heap space that can be allocated, use a profiling tool to examine the way memory is used. Gauge the maximum stack space the process uses and the amount of memory taken up by libraries and other
memory structures. The difference between the maximum address space and the total of those values is the amount of memory that can be allocated to the heap. You can improve performance by increasing your heap size or using a different garbage collector. In general, for long-running server applications, use the J2SE throughput collector on machines with multiple processors (-XX:+AggressiveHeap) and as large a heap as you can fit in the free memory of your machine.
The -Xms and -Xmx parameters define the minimum and maximum heap sizes, respectively. Since GC occurs when the generations fill up, throughput is inversely proportional to the amount of memory available. By default, the JVM grows or shrinks the heap at each GC to try to keep the proportion of free space to live objects at each collection within a specific range. This range is set as a percentage by the parameters -XX:MinHeapFreeRatio=minimum and -XX:MaxHeapFreeRatio=maximum, and the total size is bounded by -Xms and -Xmx. Set the values of -Xms and -Xmx equal to each other for a fixed heap size.

When the heap grows or shrinks, the JVM must recalculate the old and new generation sizes to maintain a predefined NewRatio. The NewSize and MaxNewSize parameters control the new generation's minimum and maximum size. Regulate the new generation size by setting these parameters equal. The bigger the young generation, the less often minor collections occur. The size of the young generation relative to the old generation is controlled by NewRatio. For example, setting -XX:NewRatio=3 means that the ratio between the young and old generation is 1:3; the combined size of eden and the survivor spaces will be one fourth of the heap.

By default, the Enterprise Server is invoked with the Java HotSpot Server JVM. The default NewRatio for the Server JVM is 2: the old generation occupies 2/3 of the heap while the new generation occupies 1/3. The larger new generation can accommodate many more short-lived objects, decreasing the need for slow major collections. The old generation is still sufficiently large to hold many long-lived objects. To size the Java heap:
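As an illustration only (the sizes below are assumptions, not recommendations from this guide), a fixed 2 GB heap with the young generation pinned at 512 MB could be expressed as:

-Xms2048m -Xmx2048m -XX:NewSize=512m -XX:MaxNewSize=512m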
Decide the total amount of memory you can afford for the JVM. Accordingly, graph your own performance metric against young generation sizes to find the best setting.
Make plenty of memory available to the young generation. The default is calculated from NewRatio and the -Xmx setting.
Larger eden or young generation spaces increase the spacing between full GCs, but young space collections could take a proportionally longer time. In general, keep the eden size between one fourth and one third the maximum heap size.
The old generation must be larger than the new generation.
This is an example heap configuration used by the Enterprise Server on Solaris for large applications:
-Xms3584m -Xmx3584m -verbose:gc -Dsun.rmi.dgc.client.gcInterval=3600000
To prevent load address collisions, set preferred base addresses with the rebase utility that comes with Visual Studio and the Platform SDK. Use the rebase utility to reassign the base addresses of the Application Server DLLs to prevent relocations at load time and increase the available process memory for the Java heap. There are a few Application Server DLLs that have non-default base addresses that can cause collisions. For example:
The nspr libraries have a preferred address of 0x30000000. The icu libraries have the address of 0x4A?00000.
Move these libraries near the system DLLs (msvcrt.dll is at 0x78000000) to increase the available maximum contiguous address space substantially. Since rebasing can be done on any DLL, rebase the DLLs after installing the Application Server.
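As a sketch of the rebase invocation (the DLL names and target base address below are illustrative, not the actual Application Server file names):

rebase -b 0x76000000 nspr4.dll plc4.dll plds4.dll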
Windows 2000
Visual Studio and the Microsoft Framework SDK rebase utility
Use the Dependency Walker utility to make sure the DLLs were rebased correctly. For more information, see the Dependency Walker website.
Increase the size of the Java heap, and set the JVM option accordingly on the JVM Settings page in the Admin Console.
Restart the Application Server.
See Also
For more information on rebasing, see the MSDN documentation for the rebase utility.
Further Information
For more information on tuning the JVM, see:
Java HotSpot VM Options
Frequently Asked Questions About the Java HotSpot Virtual Machine
Performance Documentation for the Java HotSpot VM
Java performance web page
Monitoring and Management for the Java Platform (J2SE 5.0)
The jvmstat monitoring utility
C H A P T E R   5

Tuning the Operating System and Platform
This chapter discusses tuning the operating system (OS) for optimum performance. It discusses the following topics:
Server Scaling on page 93
Solaris 10 Platform-Specific Tuning Information on page 95
Tuning for the Solaris OS on page 95
Linux Configuration on page 97
Tuning for Solaris on x86 on page 98
Tuning for Linux platforms on page 100
Tuning UltraSPARC T1-Based Systems on page 103
Server Scaling
This section provides recommendations for scaling the server for optimal performance, for the following server subsystems:
Processors
The Enterprise Server automatically takes advantage of multiple CPUs. In general, the effectiveness of multiple CPUs varies with the operating system and the workload, but more processors will generally improve dynamic content performance. Static content involves mostly input/output (I/O) rather than CPU activity. If the server is tuned properly, increasing primary memory will increase its content caching and thus increase
the relative amount of time it spends in I/O versus CPU activity. Studies have shown that doubling the number of CPUs increases servlet performance by 50 to 80 percent.
Memory
See the section Hardware and Software Requirements in the Sun Java System Application Server Release Notes for specific memory recommendations for each supported operating system.
Disk Space
It is best to have enough disk space for the OS, document tree, and log files. In most cases 2GB total is sufficient. Put the OS, swap/paging file, Enterprise Server logs, and document tree each on separate hard drives. This way, if the log files fill up the log drive, the OS does not suffer. Also, it's easy to tell if the OS paging file is causing drive activity, for example. OS vendors generally provide specific recommendations for how much swap or paging space to allocate. Based on Sun testing, Enterprise Server performs best with swap space equal to RAM, plus enough to map the document tree.
Networking
To determine the bandwidth the application needs, determine the following values:
The number of peak concurrent users (Npeak) the server needs to handle.
The average request size on your site, r. The average request can include multiple documents. When in doubt, use the home page and all its associated files and graphics.
The length of time, t, the average user will be willing to wait for a document at peak utilization.
Then, the bandwidth required is: Npeak x r / t. For example, to support a peak of 50 users with an average document size of 24 Kbytes, and transferring each document in an average of 5 seconds, requires 240 Kbytes per second (1920 Kbit/s). So the site needs two T1 lines (each 1544 Kbit/s). This bandwidth also allows some overhead for growth. The server's network interface card must support more than the WAN to which it is connected. For example, if you have up to three T1 lines, you can get by with a 10BaseT interface. Up to a T3 line (45 Mbit/s), you can use 100BaseT. But if you have more than 50 Mbit/s of WAN bandwidth, consider configuring multiple 100BaseT interfaces, or look at Gigabit Ethernet technology.
Tuning Parameters
Tuning Solaris TCP/IP settings benefits programs that open and close many sockets. Since the Enterprise Server operates with a small fixed set of connections, the performance gain might not be significant. The following table shows Solaris tuning parameters that affect performance and scalability benchmarking. These values are examples of how to tune your system for best performance.
TABLE 5-1   Tuning Parameters for Solaris

rlim_fd_max (/etc/system). Default: 65536; tuned value: 65536. Limit of process open file descriptors. Set to account for expected load (for associated sockets, files, and pipes if any).

rlim_fd_cur (/etc/system). Default: 1024; tuned value: 8192.

sq_max_size (/etc/system). Default: 2; tuned value: 0. Controls streams driver queue size; setting to 0 makes it infinite so the performance runs won't be hit by lack of buffer space. Set on clients too. Note that setting sq_max_size to 0 might not be optimal for production systems with high network traffic.

tcp_close_wait_interval (ndd /dev/tcp). Default: 240000; tuned value: 60000. Set on clients too.

tcp_time_wait_interval (ndd /dev/tcp). Default: 240000; tuned value: 60000. Set on clients too.
TABLE 5-1   Tuning Parameters for Solaris (Continued)

tcp_conn_req_max_q (ndd /dev/tcp). Tuned value: 1024.
tcp_conn_req_max_q0 (ndd /dev/tcp). Tuned value: 4096.
tcp_ip_abort_interval (ndd /dev/tcp). Tuned value: 60000.
tcp_keepalive_interval (ndd /dev/tcp). Tuned value: 900000. For high traffic web sites, lower this value.
tcp_rexmit_interval_initial (ndd /dev/tcp). Tuned value: 3000. If retransmission is greater than 30-40%, you should increase this value.
tcp_rexmit_interval_max (ndd /dev/tcp). Tuned value: 10000.
tcp_rexmit_interval_min (ndd /dev/tcp). Tuned value: 3000.
tcp_smallest_anon_port (ndd /dev/tcp). Tuned value: 1024. Set on clients too.
tcp_slow_start_initial (ndd /dev/tcp). Tuned value: 2. Slightly faster transmission of small amounts of data.
tcp_xmit_hiwat (ndd /dev/tcp). Tuned value: 32768. Size of transmit buffer.
tcp_recv_hiwat (ndd /dev/tcp). Tuned value: 32768. Size of receive buffer.
tcp_conn_hash_size (/etc/system). Tuned value: 8192. Size of connection hash table. See Sizing the Connection Hash Table on page 96.
Sizing the Connection Hash Table

The connection hash table size does not limit the number of connections, but too small a value can cause connection hashing to take longer. The default size is 512. To make lookups more efficient, set the value to half of the number of concurrent TCP connections that are expected on the server. You can set this value only in /etc/system, and it becomes effective at boot time. Use the following command to get the current number of TCP connections.
netstat -nP tcp|wc -l
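For example, the corresponding /etc/system entry, using the tuned value from the table above (the same line also appears in the x86 settings later in this chapter), is:

set tcp:tcp_conn_hash_size=8192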
Once the above hard limit is set, increase the value of this property explicitly (up to this limit) using the following command:
ulimit -n 8192
For example, with the default ulimit of 64, a simple test driver can support only 25 concurrent clients, but with ulimit set to 8192, the same test driver can support 120 concurrent clients. The test driver spawned multiple threads, each of which performed a JNDI lookup and repeatedly called the same business method with a think (delay) time of 500 ms between business method calls, exchanging data of about 100 KB. These settings apply to RMI/IIOP clients on the Solaris OS.
Linux Configuration
The following parameters must be added to the /etc/rc.d/rc.local file that gets executed during system start-up.
#max file count updated, ~256 descriptors per 4Mb.
#Specify the number of file descriptors based on the amount of system RAM.
echo "65536" > /proc/sys/fs/file-max
#inode-max: 3-4 times the file-max
#(file not present)
#echo "262144" > /proc/sys/fs/inode-max
#make more local ports available
echo 1024 25000 > /proc/sys/net/ipv4/ip_local_port_range
#increase the memory available for socket buffers
echo 262143 > /proc/sys/net/core/rmem_max
echo 262143 > /proc/sys/net/core/rmem_default
#above configuration for 2.4.X kernels
echo 4096 131072 262143 > /proc/sys/net/ipv4/tcp_rmem
echo 4096 131072 262143 > /proc/sys/net/ipv4/tcp_wmem
#disable "RFC2018 TCP Selective Acknowledgements" and "RFC1323 TCP timestamps"
echo 0 > /proc/sys/net/ipv4/tcp_sack
echo 0 > /proc/sys/net/ipv4/tcp_timestamps
#double the maximum amount of memory allocated to shm at runtime
echo "67108864" > /proc/sys/kernel/shmmax
#improve the Linux virtual memory (VM) subsystem
echo "100 1200 128 512 15 5000 500 1884 2" > /proc/sys/vm/bdflush
#we also do a sysctl
sysctl -p /etc/sysctl.conf
Additionally, create an /etc/sysctl.conf file and append the following values to it:
#Disables packet forwarding
net.ipv4.ip_forward = 0
#Enables source route verification
net.ipv4.conf.default.rp_filter = 1
#Disables the magic-sysrq key
kernel.sysrq = 0
fs.file-max = 65536
vm.bdflush = 100 1200 128 512 15 5000 500 1884 2
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_max = 262143
net.core.rmem_default = 262143
net.ipv4.tcp_rmem = 4096 131072 262143
net.ipv4.tcp_wmem = 4096 131072 262143
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0
kernel.shmmax = 67108864

For further information on tuning the Solaris system, see the Solaris Tunable Parameters Reference Manual.
Tuning for Solaris on x86
Some of the values depend on the system resources available. After making any changes to /etc/system, reboot the machines.
File Descriptors
Add (or edit) the following lines in the /etc/system file:
set rlim_fd_max=65536
set rlim_fd_cur=65536
set sq_max_size=0
set tcp:tcp_conn_hash_size=8192
set autoup=60
set pcisch:pci_stream_buf_enable=0
IP Stack Settings
Add (or edit) the following lines in the /etc/system file:
set ip:tcp_squeue_wput=1
set ip:tcp_squeue_close=1
set ip:ip_squeue_bind=1
set ip:ip_squeue_worker_wait=10
set ip:ip_squeue_profile=0
These settings tune the IP stack. To preserve the changes to the file between system reboots, place the following changes to the default TCP variables in a startup script that gets executed when the system reboots:
ndd -set /dev/tcp tcp_time_wait_interval 60000
ndd -set /dev/tcp tcp_conn_req_max_q 16384
ndd -set /dev/tcp tcp_conn_req_max_q0 16384
ndd -set /dev/tcp tcp_ip_abort_interval 60000
ndd -set /dev/tcp tcp_keepalive_interval 7200000
ndd -set /dev/tcp tcp_rexmit_interval_initial 4000
ndd -set /dev/tcp tcp_rexmit_interval_min 3000
ndd -set /dev/tcp tcp_rexmit_interval_max 10000
ndd -set /dev/tcp tcp_smallest_anon_port 32768
ndd -set /dev/tcp tcp_slow_start_initial 2
ndd -set /dev/tcp tcp_xmit_hiwat 32768
ndd -set /dev/tcp tcp_recv_hiwat 32768
Tuning for Linux platforms

File Descriptors on page 100
Virtual Memory on page 101
Network Interface on page 102
Disk I/O Settings on page 102
TCP/IP Settings on page 102
File Descriptors
You may need to increase the number of file descriptors from the default. Having a higher number of file descriptors ensures that the server can open sockets under high load and not abort requests coming in from clients. Start by checking system limits for file descriptors with this command:
cat /proc/sys/fs/file-max
8192
The current limit shown is 8192. To increase it to 65535, use the following command (as root):
echo "65535" > /proc/sys/fs/file-max
To make this value survive a system reboot, add it to /etc/sysctl.conf and specify the maximum number of open files permitted:
fs.file-max = 65535
Note: The parameter is not proc.sys.fs.file-max, as one might expect. To list the available parameters that can be modified using sysctl:
sysctl -a
To check and modify limits per shell, use the following command:
limit
cputime         unlimited
filesize        unlimited
datasize        unlimited
stacksize       8192 kbytes
coredumpsize    0 kbytes
memoryuse       unlimited
descriptors     1024
memorylocked    unlimited
maxproc         8146
openfiles       1024
The openfiles and descriptors entries show a limit of 1024. To increase the limit to 65535 for all users, edit /etc/security/limits.conf as root, and modify or add the nofile (number of files) entries:
*    soft    nofile    65535
*    hard    nofile    65535
The character * is a wildcard that identifies all users. You could also specify a user ID instead. Then edit /etc/pam.d/login and add the line:
session required /lib/security/pam_limits.so
On Red Hat, you also need to edit /etc/pam.d/sshd and add the following line:
session required /lib/security/pam_limits.so
On many systems, this procedure will be sufficient. Log in as a regular user and try it before doing the remaining steps. The remaining steps might not be required, depending on how pluggable authentication modules (PAM) and secure shell (SSH) are configured.
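After logging back in, a quick way to confirm that the new limit is in effect (a suggested check, not part of the original procedure) is:

ulimit -n
65535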
Virtual Memory
To change virtual memory settings, add the following to /etc/rc.local:
echo 100 1200 128 512 15 5000 500 1884 2 > /proc/sys/vm/bdflush
For more information, view the man pages for bdflush. For HADB settings, refer to Chapter 6, Tuning for High-Availability.
Network Interface
To ensure that the network interface is operating in full duplex mode, add the following entry into /etc/rc.local:
mii-tool -F 100baseTx-FD eth0
Disk I/O Settings

Check the speed again using the hdparm command. Given that DMA is not enabled by default, the transfer rate might have improved considerably. In order to do this at every reboot, add the /sbin/hdparm -d1 /dev/hdX line to /etc/conf.d/local.start, /etc/init.d/rc.local, or whatever the startup script is called. For information on SCSI disks, see: System Tuning for Linux Servers SCSI.
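The speed test and DMA toggle referred to above use hdparm; a sketch, assuming the first IDE disk is /dev/hda (the device name is illustrative):

/sbin/hdparm -t /dev/hda     # measure the current read throughput
/sbin/hdparm -d1 /dev/hda    # enable DMA for the drive
/sbin/hdparm -t /dev/hda     # measure again and compare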
TCP/IP Settings
To tune the TCP/IP settings
4. Reboot the system.
5. Use this command to increase the size of the transmit buffer:
tcp_recv_hiwat ndd /dev/tcp 8129 32768
Tuning UltraSPARC T1-Based Systems
TABLE 5-2   Tuning Parameters for UltraSPARC T1-Based Systems

rlim_fd_max (/etc/system). Default: 65536; tuned value: 260000. Process open file descriptors limit; should account for the expected load (for the associated sockets, files, and pipes if any).
hires_tick (/etc/system). Tuned value: 1.
sq_max_size (/etc/system). Default: 2; tuned value: 0. Controls streams driver queue size; setting to 0 makes it infinite so the performance runs won't be hit by lack of buffer space. Set on clients too. Note that setting sq_max_size to 0 might not be optimal for production systems with high network traffic.
ip:ip_squeue_bind (/etc/system). Tuned value: 0.
ip:ip_squeue_fanout (/etc/system). Tuned value: 1.
ipge:ipge_taskq_disable (/etc/system). Tuned value: 0.
ipge:ipge_tx_ring_size (/etc/system). Tuned value: 2048.
ipge:ipge_srv_fifo_depth (/etc/system). Tuned value: 2048.
ipge:ipge_bcopy_thresh (/etc/system). Tuned value: 384.
ipge:ipge_dvma_thresh (/etc/system). Tuned value: 384.
ipge:ipge_tx_syncq (/etc/system). Tuned value: 1.
tcp_conn_req_max_q (ndd /dev/tcp). Default: 128; tuned value: 3000.
tcp_conn_req_max_q0 (ndd /dev/tcp). Default: 1024; tuned value: 3000.
tcp_max_buf (ndd /dev/tcp). Tuned value: 4194304.
tcp_cwnd_max (ndd /dev/tcp). Tuned value: 2097152.
tcp_xmit_hiwat (ndd /dev/tcp). Default: 8129; tuned value: 400000. To increase the transmit buffer.
tcp_recv_hiwat (ndd /dev/tcp). Default: 8129; tuned value: 400000. To increase the receive buffer.
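For example, the ndd-scoped rows in the table are applied at run time with the same syntax as the startup script shown earlier in this chapter:

ndd -set /dev/tcp tcp_xmit_hiwat 400000
ndd -set /dev/tcp tcp_recv_hiwat 400000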
Disk Configuration
If HTTP access is logged, follow these guidelines for the disk:
Write access logs on faster disks or attached storage.
If running multiple instances, move the logs for each instance onto separate disks as much as possible.
Enable the disk read/write cache. Note that if you enable write cache on the disk, some writes might be lost if the disk fails.
Consider mounting the disks with the following options, which might yield better disk performance: nologging, directio, noatime.
Network Configuration
If more than one network interface card is used, make sure the network interrupts are not all going to the same core. Run the following script to disable interrupts:
allpsr=`/usr/sbin/psrinfo | grep -v off-line | awk '{ print $1 }'`
set $allpsr
numpsr=$#
while [ $numpsr -gt 0 ];
do
    shift
    numpsr=`expr $numpsr - 1`
    tmp=1
    while [ $tmp -ne 4 ];
    do
        /usr/sbin/psradm -i $1
        shift
        numpsr=`expr $numpsr - 1`
        tmp=`expr $tmp + 1`
    done
done
Start Options
In some situations, performance can be improved by using large page sizes. The start options to use depend on your processor architecture. The following examples show the options to start the 32bit Enterprise Server and the 64bit Enterprise Server with 4Mbyte pages.
To start the 32bit Enterprise Server with 4Mbyte pages, use the following options:
LD_PRELOAD_32=/usr/lib/mpss.so.1 ; export LD_PRELOAD_32; export MPSSHEAP=4M; ./bin/startserv; unset LD_PRELOAD_32; unset MPSSHEAP
To start the 64bit Enterprise Server with 4Mbyte pages, use the following options:
LD_PRELOAD_64=/usr/lib/64/mpss.so.1; export LD_PRELOAD_64; export MPSSHEAP=4M; ./bin/startserv; unset LD_PRELOAD_64; unset MPSSHEAP
C H A P T E R   6

Tuning for High-Availability
Tuning HADB on page 107
Tuning the Enterprise Server for High-Availability on page 116
Configuring the Load Balancer on page 120
Tuning HADB
The Application Server uses the high-availability database (HADB) to store persistent session state data. To optimize performance, tune the HADB according to the load of the Enterprise Server. The data volume, transaction frequency, and size of each transaction can affect the performance of the HADB, and consequently the performance of Enterprise Server. This section discusses the following topics:
Disk Use on page 107
Memory Allocation on page 109
Performance on page 110
Operating System Configuration on page 116
Disk Use
This section discusses how to calculate HADB data device size and explains the use of separate disks for multiple data devices.
If the database runs out of device space, the HADB returns error codes 4593 or 4592 to the Enterprise Server.
Note See Sun Java System Application Server Error Message Reference for more information
on these error messages. HADB also writes these error messages to history files. In this case, HADB blocks any client requests to insert or update data. However, it will accept delete operations.

HADB stores session states as binary data. It serializes the session state and stores it as a BLOB (binary large object). It splits each BLOB into chunks of approximately 7KB each and stores each chunk as a database row (in this context, a row is synonymous with a tuple or record) in pages of 16KB. There is a small memory overhead for each row (approximately 30 bytes). With the most compact allocation of rows (BLOB chunks), two rows are stored in a page. Internal fragmentation can result in each page containing only one row. On average, 50% of each page contains user data.

For availability in case of node failure, HADB always replicates user data. An HADB node stores its own data, plus a copy of the data from its mirror node. Hence, all data is stored twice. Since 50% of the space on a node is user data (on average), and each node is mirrored, the data devices must have space for at least four times the volume of the user data.

In the case of data refragmentation, HADB keeps both the old and the new versions of a table while the refragmentation operation is running. All application requests are performed on the old table while the new table is being created. Assuming that the database is primarily used for one huge table containing BLOB data for session states, this means the device space requirement must be multiplied by another factor of two. Consequently, if you add nodes to a running database, and want to refragment the data to use all nodes, you must have eight times the volume of user data available. Additionally, you must account for the device space that HADB reserves for its internal use (four times the LogBufferSize). HADB uses this disk space for temporary storage of the log buffer during high load conditions.
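As a rough worked example (the 1 GB of user data and the 48 MB LogBufferSize are illustrative assumptions, not values from this guide):

User session data:                        1 GB
Minimum device space (4 x user data):     4 GB
With refragmentation (8 x user data):     8 GB
Internal reserve (4 x LogBufferSize):     4 x 48 MB = 192 MB in addition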
This command restarts all the nodes, one by one, to apply the change. For more information on using this command, see Configuring HADB in Sun GlassFish Enterprise Server 2.1 High Availability Administration Guide.
Note hadbm does not add data devices to a running database instance.
To avoid this problem, keep the HADB executable files and the history files on physical disks different from those of the data devices.
Memory Allocation
It is essential to allocate sufficient memory for HADB, especially when it is co-located with other processes. The HADB Node Supervisor Process (NSUP) tracks the time elapsed since the last time it performed monitoring. If the time exceeds a specified maximum (2500 ms, by default), NSUP restarts the node. The situation is likely when there are other processes in the system that compete for memory, causing swapping and multiple page faults. When the blocked node restarts, all active transactions on that node are aborted. If Enterprise Server throughput slows and requests abort or time out, make sure that swapping is not the cause. To monitor swapping activity on Unix systems, use this command:
vmstat -S
In addition, look for this message in the HADB history files. It is written when the HADB node is restarted, where M is greater than N:
Process blocked for .M. sec, max block time is .N. sec
Performance
For best performance, all HADB processes (clu_xxx_srv) must fit in physical memory. They should not be paged or swapped. The same applies for shared memory segments in use. You can configure the size of some of the shared memory segments. If these segments are too small, performance suffers, and user transactions are delayed or even aborted. If the segments are too large, then the physical memory is wasted. You can configure the following parameters:
DataBufferPoolSize on page 110
LogBufferSize on page 111
InternalLogbufferSize on page 112
NumberOfLocks on page 113
Timeouts on page 115
DataBufferPoolSize
The HADB stores data on data devices, which are allocated on disks. The data must be in the main memory before it can be processed. The HADB node allocates a portion of shared memory for this purpose. If the allocated database buffer is small compared to the data being processed, then disk I/O will waste significant processing capacity. In a system with write-intensive operations (for example, frequently updated session states), the database buffer must be big enough that the processing capacity used for disk I/O does not hamper request processing. The database buffer is similar to a cache in a file system. For good performance, the cache must be used as much as possible, so there is no need to wait for a disk read operation. The best performance is when the entire database contents fits in the database buffer. However, in most cases, this is not feasible. Aim to have the working set of the client applications in the buffer. Also monitor the disk I/O. If HADB performs many disk read operations, this means that the database is low on buffer space. The database buffer is partitioned into blocks of size 16KB, the same block size used on the disk. HADB schedules multiple blocks for reading and writing in one I/O operation. Use the hadbm deviceinfo command to monitor disk use. For example, hadbm deviceinfo --details will produce output similar to this:
NodeNo   TotalSize   FreeSize   Usage
0        512         504        1%
1        512         504        1%
FreeSize: free size in MB.
Usage: percent used.

Use the hadbm resourceinfo command to monitor resource usage. For example, the following command displays data buffer pool information:
%hadbm resourceinfo --databuf
NodeNo   Avail   Free   Access
0        32      0      205910260
1        32      0      218908192
Avail: Size of the buffer, in Mbytes.
Free: Free size, when the data volume is larger than the buffer. (The entire buffer is used at all times.)
Access: Number of times blocks have been accessed in the buffer.
Misses: Number of block requests that missed the cache (the user had to wait for a disk read).
Copy-on-write: Number of times a block has been modified while it is being written to disk.

For a well-tuned system, the number of misses (and hence the number of reads) must be very small compared to the number of writes. The example numbers above show a miss rate of about 4% (200 million accesses and 8 million misses). The acceptability of these figures depends on the client application requirements.
Tuning DataBufferPoolSize
To change the size of the database buffer, use the following command:
hadbm set DataBufferPoolSize
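As a sketch, a complete invocation might look like the following (the name=value form and the 512 MB figure are assumptions for illustration, not values taken from this guide):

hadbm set DataBufferPoolSize=512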
This command restarts all the nodes, one by one, for the change to take effect. For more information on using this command, see Configuring HADB in Sun GlassFish Enterprise Server 2.1 High Availability Administration Guide.
LogBufferSize
Before it executes them, HADB logs all operations that modify the database, such as inserting, deleting, updating, or reading data. It places log records describing the operations in a portion of shared memory referred to as the (tuple) log buffer. HADB uses these log records for undoing operations when transactions are aborted, for recovery in case of node crash, and for replication between mirror nodes.
The log records remain in the buffer until they are processed locally and shipped to the mirror node. The log records are kept until the outcome (commit or abort) of the transaction is certain. If the HADB node runs low on tuple log, the user transactions are delayed, and possibly timed out.
Tuning LogBufferSize
Begin with the default value. Look for HIGH LOAD informational messages in the history files. All the relevant messages will contain tuple log or simply log, and a description of the internal resource contention that occurred. Under normal operation the log is reported as 70 to 80% full, because space reclamation is lazy. HADB requires as much data in the log as possible, to recover from a possible node crash. Use the following command to display information on log buffer size and use:
hadbm resourceinfo --logbuf
Node No.: The node number.
Avail: Size of the buffer, in megabytes.
Free Size: Free size, in MB, when the data volume is larger than the buffer. The entire buffer is used at all times.

Change the size of the log buffer with the following command:
hadbm set LogbufferSize
This command restarts all the nodes, one by one, for the change to take effect. For more information on using this command, see Configuring HADB in Sun GlassFish Enterprise Server 2.1 High Availability Administration Guide.
InternalLogbufferSize
The node internal log (nilog) contains information about physical (as opposed to logical, row level) operations at the local node. For example, it provides information on disk block allocations and deallocations, and B-tree block splits. This buffer is maintained in shared memory, and is also checkpointed to disk (to a separate log device) at regular intervals. The page size of this buffer and the associated data device is 4096 bytes.
Large BLOBs necessarily allocate many disk blocks, and thus create a high load on the node internal log. This is normally not a problem, since each entry in the nilog is small.
Tuning InternalLogbufferSize
Begin with the default value. Look out for HIGH LOAD informational messages in the history files. The relevant messages contain nilog, and a description of the internal resource contention that occurred. Use the following command to display node internal log buffer information:
hadbm resourceinfo --nilogbuf
To change the size of the nilog buffer, use the following command:
hadbm set InternalLogbufferSize
The hadbm restarts all the nodes, one by one, for the change to take effect. For more information on using this command, see Configuring HADB in Sun GlassFish Enterprise Server 2.1 High Availability Administration Guide.
Note If the size of the nilog buffer is changed, the associated log device (located in the same directory as the data devices) also changes. The size of the internal log buffer must be equal to the size of the internal log device. The command hadbm set InternalLogBufferSize ensures this requirement. It stops a node, increases the InternalLogBufferSize, reinitializes the internal log device, and brings up the node. This sequence is performed on all nodes.
NumberOfLocks
Each row level operation requires a lock in the database. Locks are held until a transaction commits or rolls back. Locks are set at the row (BLOB chunk) level, which means that a large session state requires many locks. Locks are needed for both primary and mirror node operations. Hence, a BLOB operation allocates the same number of locks on two HADB nodes. When a table refragmentation is performed, HADB needs extra lock resources. Thus, ordinary user transactions can only acquire half of the locks allocated. If the HADB node has no lock objects available, errors are written to the log file.
To calculate the number of locks needed, you need to know:
The number of concurrent users that request session data to be stored in HADB (one session record per user).
The maximum size of the session BLOB.
The persistence scope (maximum session data size in the case of session/modified session, and maximum number of attributes in the case of modified attribute). This requires setAttribute() to be called every time the session data is modified.

If x is the maximum number of concurrent users (that is, x session data records are present in the HADB), and y is the session size (for session/modified session) or attribute size (for modified attribute), then the number of records written to HADB is:
xy/7000 + 2x
Record operations such as insert, delete, update, and read use one lock per record.
Note Locks are held for both primary records and hot-standby records. Hence, for insert, update and delete operations a transaction will need twice as many locks as the number of records. Read operations need locks only on the primary records. During refragmentation and creation of secondary indices, log records for the involved table are also sent to the fragment replicas being created. In that case, a transaction needs four times as many locks as the number of involved records. (Assuming all queries are for the affected table.)
Summary
If refragmentation is performed, the number of locks to be configured is:
Nlocks = 4x (y/7000 + 2) = 2xy/3500 + 8x
Otherwise, the number of locks to be configured is:
Nlocks = 2x (y/7000 + 2) = xy/3500 + 4x
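A quick worked example with illustrative numbers, x = 1000 concurrent users and y = 10500 bytes of session data per user:

Without refragmentation: Nlocks = 2 x 1000 x (10500/7000 + 2) = 2000 x 3.5 = 7000 locks
With refragmentation:    Nlocks = 4 x 1000 x (10500/7000 + 2) = 4000 x 3.5 = 14000 locks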
Tuning NumberOfLocks
Start with the default value. Look for exceptions with the indicated error codes in the Enterprise Server log files. Remember that under normal operations (no ongoing refragmentation) only half of the locks might be acquired by the client application. To get information on allocated locks and locks in use, use the following command:

hadbm resourceinfo --locks
For example, the output displayed by this command might look something like this:
Node No.   Waits
0          na
1          na
Avail: Number of locks available.
Free: Number of locks in use.
Waits: Number of transactions that have waited for a lock; na (not applicable) if all locks are available.

To change the number of locks, use the following command:
hadbm set NumberOfLocks
The hadbm restarts all the nodes, one by one, for the change to take effect. For more information on using this command, see Configuring HADB in Sun GlassFish Enterprise Server 2.1 High Availability Administration Guide.
Timeouts
This section describes some of the timeout values that affect performance.
response-timeout-in-seconds: The time for which the load balancer plug-in will wait for a response before it declares an instance dead and fails over to the next instance in the cluster. Make this value large enough to accommodate the maximum latency for a request from the server instance under the worst (high load) conditions.
health checker interval-in-seconds: Determines how frequently the load balancer pings the instance to see if it is healthy. The default value is 30 seconds. If response-timeout-in-seconds is optimally tuned, and the server doesn't have too much traffic, then the default value works well.
health checker timeout-in-seconds: How long the load balancer waits after pinging an instance. The default value is 100 seconds. The combination of the health checker's interval-in-seconds and timeout-in-seconds values determines how much additional traffic goes from the load balancer plug-in to the server instances.
For more information on configuring the load balancer plug-in, see Configuring the HTTP Load Balancer in Sun GlassFish Enterprise Server 2.1 High Availability Administration Guide.
HADB timeouts
The sql_client timeout value may affect performance.
This can occur either while starting the database, or during run time. To correct this error, configure semaphore settings. Additionally, you may need to configure shared memory settings. Also, adding nodes can affect the required settings for shared memory and semaphores. For more information, see Configuring Shared Memory and Semaphores in Sun GlassFish Enterprise Server 2.1 High Availability Administration Guide.
Tuning the Enterprise Server for High-Availability

Tuning Session Persistence Frequency on page 117
Session Persistence Scope on page 118
Session Size on page 118
Checkpointing Stateful Session Beans on page 119
Configuring the JDBC Connection Pool on page 119
Descriptor configuration in the web application
To ensure highly available web applications with persistent session data, the high availability database (HADB) provides a backend store to save HTTP session data. However, there is an overhead involved in saving and reading the data back from HADB. Understanding the different schemes of session persistence and their impact on performance and availability will help you make decisions in configuring Enterprise Server for high availability. In general, maintain twice as many HADB nodes as there are application server instances. Every application server instance requires two HADB nodes.
Tuning Session Persistence Frequency

The persistence frequency can be one of the following:
web-method
time-based
All else being equal, time-based persistence frequency provides better performance but less availability than web-method persistence frequency. This is because the session state is written to the persistent store (HADB) at the time interval specified by the reap interval (default is 60 seconds). If the server instance fails within that interval, the session state will lose any updates since the last time the session information was written to HADB.
Web-method
With web-method persistence frequency, the server writes the HTTP session state to HADB before it responds to each client request. This can have an impact on response time that depends on the size of the data being persisted. Use this mode of persistence frequency for applications where availability is critical and some performance degradation is acceptable. For more information on web-method persistence frequency, see Configuring Availability for the Web Container in Sun GlassFish Enterprise Server 2.1 High Availability Administration Guide.
Time-based
With time-based persistence frequency, the server stores session information to the persistence store at a constant interval, called the reap interval. You specify the reap interval under Configurations > config-name > Web Container (Manager Properties), where config-name is the name of the configuration. By default, the reap interval is 60 seconds. Every time the reap interval elapses, a special thread wakes up, iterates over all the sessions in memory, and saves the session data. In general, time-based persistence frequency will yield better performance than web-method, since the servers responses to clients are not held back by saving session information to the HADB. Use this mode of persistence frequency when performance is more important than availability.
Session Persistence Scope

session
With the session persistence scope, the server writes the entire session data to HADB, regardless of whether it has been modified. This mode ensures that the session data in the backend store is always current, but it degrades performance, since all the session data is persisted for every request.
modified-session
With the modified-session persistence scope, the server examines the state of the HTTP session. If and only if the data has been modified, the server saves the session data to HADB. This mode yields better performance than session mode, because calls to HADB to persist data occur only when the session is modified.
modified-attribute
With the modified-attribute persistence scope, the server saves only those session attributes that have been modified. For this to work, there must be no cross-references between attributes, and the application must use setAttribute() and getAttribute() to manipulate HTTP session data. Applications written this way can take advantage of this persistence scope to obtain better performance.
Session Size
It is critical to be aware of the impact of HTTP session size on performance. Performance has an inverse relationship with the size of the session data that needs to be persisted. Session data is stored in HADB in a serialized manner. There is an overhead in serializing the data and inserting it as a BLOB and also deserializing it for retrieval. Tests have shown that for a session size up to 24KB, performance remains unchanged. When the session size exceeds 100KB, and the same back-end store is used for the same number of connections, throughput drops by 90%.
Pay careful attention to the HTTP session size. If you are creating large HTTP session objects, calculate the HADB requirements as discussed in Tuning HADB on page 107.
Checkpointing Stateful Session Beans

Checkpointing can be configured at the following levels:
For the entire server instance or EJB container
For the entire application
For a specific EJB module
Per method in an individual EJB module

For best performance, specify checkpointing only for methods that alter the bean state significantly, by adding the <checkpointed-methods> tag in the sun-ejb-jar.xml file.
Configuring the JDBC Connection Pool

Initial and Minimum Pool Size: Minimum and initial number of connections maintained in the pool (default is 8)
Maximum Pool Size: Maximum number of connections that can be created to satisfy client requests (default is 32)
Pool Resize Quantity: Number of connections to be removed when the idle timeout timer expires
Idle Timeout: Maximum time (seconds) that a connection can remain idle in the pool (default is 300)
Max Wait Time: Amount of time (milliseconds) the caller waits before a connection timeout is sent
For optimal performance, use a pool with eight to 16 connections per node. For example, if you have four nodes configured, then the steady-pool size must be set to 32 and the maximum pool size must be 64. Adjust the Idle Timeout and Pool Resize Quantity values based on monitoring statistics. For the best performance, use the following settings:
Connection Validation: Required
Validation Method: metadata
Transaction Isolation Level: repeatable-read
To add a property, click the Add Property button, then specify the property name and value, and click Save. For more information on configuring the JDBC connection pool, see Tuning JDBC Connection Pools on page 77.
Configuring the Load Balancer

url: Specifies the listener's URL that the load balancer checks to determine its state of health.
interval-in-seconds: Specifies the interval at which health checks of instances occur. The default is 30 seconds.
timeout-in-seconds: Specifies the timeout interval within which a response must be obtained for a listener to be considered healthy. The default is 10 seconds.
If the typical response from the server takes n seconds and under peak load takes m seconds, then set the timeout-in-seconds property to m + n, as follows:
<health-checker url="http://hostname.domain:port" interval-in-seconds="n" timeout-in-seconds="m+n"/>
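For instance, with a typical response time of n = 5 seconds, a worst-case peak of m = 30 seconds (illustrative figures), and the interval left at its default of 30 seconds, the element becomes:

<health-checker url="http://hostname.domain:port" interval-in-seconds="30" timeout-in-seconds="35"/>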
Index
A
Acceptor Threads, 69 access log, 64 AddrLookups, 62 application architecture, 19 scalability, 24 tuning, 27 arrays, 27 authentication, 21 authorization, 21 automatic recovery, 60 Average Queuing Delay, 64
B
B commit option, 57 bandwidth, 94 benchmarking, tuning Solaris for, 104 best practices, 27 Buffer Length, HTTP Service, 66
caching (Continued) servlet results, 31 capacity planning, 24 checkpointing, 43, 119 class variables, shared, 30 Client ORB Properties, 73-74 Close All Connections On Any Failure, JDBC Connection Pool, 80 CMS collector, 85 coding guidelines, 27-29 commit options, 57-58 Common Data Representation (CDR), 75 configuration tips, 31 connection hash table, 96 Connection Validation Required, JDBC Connection Pool, 80 Connection Validation Settings, JDBC Connection Pool, 79-80 connector connection pools, 80 constants, 28 container-managed relationship, 44 container-managed transactions, 38 context factory, 73
C
C commit option, 57 cacheDatabaseMetaData, 120 CacheEntries, 61 caching EJB components, 53-54 message-driven beans, 48
D
data device size, 107 database buffer, 110 DataBufferPoolSize, 110-111 demilitarized zone (DMZ), 22
deployment settings, 49 tips, 31 deserialization, 27-29 disabling network interrupts, 105 disk configuration, 105 disk I/O performance, 102 disk space, 94 distributed transaction logging, disabling, 59 DNS cache, 61-62 DNS lookups, 62, 67 dynamic reloading, disabling, 50
G
Garbage Collector, 84-85 generational object memory, 84
H
HADB, 107 data device size, 107 database buffer, 110 history files, 108 JDBC connection pool, 119 locks, 113 memory, 109 timeouts, 115 hardware resources, 22 Hash Init Size, HTTP file cache, 68 hash table, connection, 96 health checker, 120 high-availability database, 107 hires_tick, 104 history files, HADB, 108 HitRatio, 62 HotSpot, 85 HTTP access logged, 105 HTTP file cache, 67-68 Hash Init Size, 68 Max Age, 68 Max Files Count, 68 Small/Medium File Size, 68 HTTP listener settings, 69 HTTP protocol, 67 HTTP Service, 60 Buffer Length, 66 Initial Thread Count, 65 keep-alive settings, 66 monitoring, 60 Request Timeout, 65 Thread Count, 65
E
EJB components cache tuning, 35-36, 36, 55-56 commit options, 57-58 monitoring individual, 34-35 performance of types, 35 pool tuning, 36, 54-55 stubs, using, 36 transactions, 38-39 EJB container, 53-58 cache settings, 55-56 caching vs pooling, 53-54 monitoring, 32, 53 pool settings, 54-55 tuning, 32, 53-58 eliminateRedundantEndTransaction, 120 encryption, 21-22 entity beans, 42 expectations, 25-26
F
file cache, 63, 67-68 file descriptors, 99, 100 File Size Limit, HTTP file cacheHTTP file cache, File Size Limit, 68 File Transmission, HTTP file cacheHTTP file cache, File Transmission, 68 final, methods, 28
I
idle timeout EJB cache, 56 EJB pool, 55 IIOP Client Authentication Required, 72 IIOP messages, 74-75 Initial Thread Count, HTTP Service, 65 InternalLogbufferSize, 112-113 ip:ip_squeue_bind, 104 ip:ip_squeue_fanout, 104 IP stack, 99 ipge:ipge_bcopy_thresh, 104 ipge:ipge_srv_fifo_depth, 104 ipge:ipge_taskq_disable, 104 ipge:ipge_tx_ring_size, 104 ipge:ipge_tx_syncq, 104
K
keep-alive max connections, 66 settings, 66 statistics, 63 timeout, 67
L
last agent optimization (LAO), 39 Lighweight Directory Access Protocol (LDAP), 21 Linux, 100 load balancer, 120 locks, HADB, 113 log level, 51 LogBufferSize, 108, 111-112 logger settings, 50-51 LookupsInProgress, 62
J
Java coding guidelines, 27-29 Java Heap, 87-89 Java serialization, 75-76 Java Virtual Machine (JVM), 83 JAX-RPC, 29 JDBC Connection Pool, 77 Close All Connections On Any Failure, 80 Connection Validation Required, 80 Connection Validation Settings, 79-80 HADB, 119 Table Name, 80 Validation Method, 80 JDBC resources, 39 tips, 46-47 JMS connections, 48 local vs remote service, 58 tips, 47-48
M
Max Age, HTTP file cache, 68 max-cache-size, 56 Max Files Count, HTTP file cache, 68 Max Message Fragment Size, ORB, 72 max-pool-size, 54 MaxNewSize, 88 memory, 94, 109 message-driven beans, 47 monitoring EJB container, 32 file cache, 63 HTTP service, 60 JDBC connection pools, 77 ORB, 70-71
N
NameLookups, 62 Network Address, 69 network configuration, 105 network interface, 102 network interrupts, disabling, 105 NewRatio, 88 NewSize, 88 Node Supervisor Process (NSUP), 109 null, assigning, 28 NumberOfLocks, 113-115
R
read-only beans, 43-44 refresh period, 44, 56 reap interval, 52 recover on restart, 60 refresh period read-only beans, 44, 56 remote vs local interfaces, 37 removal selection policy, 56 removal timeout, 56 request processing settings, 64 Request Timeout, HTTP Service, 65 resize quantity EJB cache, 56 EJB pool, 54 restart recovery, 60 rlim_fd_cur, 95 rlim_fd_max, 95, 104
O
open files, 97, 101 operating system, tuning, 93-106 operational requirements, 19-23 ORB, 70-76 Client properties, 73-74 IIOP Client Authentication Required, 72 Max Message Fragment Size, 72 monitoring, 70-71 Thread Pool ID, 72 thread pools, 71 Total Connections, 72 tuning, 71
S
safety margins, 24 Secure Sockets Layer, 21 security considerations, 21 security manager, 31 semaphores, 116 separate disks, 107, 109 multiple data devices, 107 serialization, 27-29, 75-76 server tuning, 49 servlets, 29 results caching, 31 tuning, 29-31 session persistence frequency, 117 persistence scope, 118 size, 118 state, storing, 107
P
page sizes, 105-106 pass-by-reference, 37-38 pass-by-value, 37 pauses, 86 persistence frequency, 117 persistence scope, 118 pool size, message-driven bean, 47 pre-compiled JSP files, 50 pre-fetching EJB components, 44
session (Continued) timeout, 51 Small/Medium File Size, HTTP file cache, 68 SOAP attachments, 29 Solaris JDK, 85 TCP/IP settings, 95 tuning for performance benchmarking, 104 version 9, 31 sq_max_size, 95, 104 SSL, 21 start options, 105-106 stateful session beans, 42-43, 119 stateless session beans, 43 storing persistent session state, 107 StringBuffer, 27-28 Strings, 27-28 -sun.rmi.dgc.client.gcInterval, 87 Survivor Ratio Sizing, 89 synchronizing code, 29 System.gc(), 87
T
Table Name, JDBC Connection Pool, 80 tcp_close_wait_interval, 95 tcp_conn_hash_size, 96 tcp_conn_req_max_q, 96, 104 tcp_conn_req_max_q0, 96, 104 tcp_cwnd_max, 104 tcp_ip_abort_interval, 96, 104 TCP/IP settings, 95, 102-103 tcp_keepalive_interval, 96 tcp_recv_hiwat, 96, 104 tcp_rexmit_interval_initial, 96 tcp_rexmit_interval_max, 96 tcp_rexmit_interval_min, 96 tcp_slow_start_initial, 96 tcp_smallest_anon_port, 96 tcp_time_wait_interval, 95 tcp_xmit_hiwat, 96, 104 Thread Count, HTTP Service, 65 Thread Pool ID, ORB, 72
thread pool sizing, 74 statistics, 71 tuning, 76 throughput, 86 timeouts, HADB, 115 Total Connections, ORB, 72 Total Connections Queued, 64 transactions connector connection pools, 80 EJB components, 38-39 EJB transaction attributes, 39 isolation level, 46-47 management for CMT, 79 monitoring, 58 tuning, 59 tuning applications, 27 EJB cache, 55-56 EJB pool, 54-55 JDBC connection pools, 77-80 Solaris TCP/IP settings, 95 the server, 49 thread pools, 76
U
ulimit, 97 user load, 24
V
Validation Method, JDBC Connection Pool, 80 variables, assigning null to, 28 victim-selection-policy, 56 virtual memory, 101
W
web container, 51
X
x86, 98 XA-capable data sources, 38-39 -Xms, 88 -Xmx, 88 -XX +DisableExplicitGC, 87 MaxHeapFreeRatio, 88 MaxPermSize, 86 MinHeapFreeRatio, 88