AIX Tuning For IHS and WebSphere
27 August 2009
Agenda
This presentation will discuss AIX tuning as it relates to the best performance of the
primary components of a WebSphere infrastructure: the IBM HTTP Server and the
WebSphere Application Server versions 6.0, 6.1, and 7.0.
This presentation will discuss only enough of the architecture and internal workings of
the IBM HTTP Server and WebSphere Application Server to put the AIX tuning in
context.
Rarely is a performance problem the result of a single component. “There is always
a next bottleneck”.
The good news: most of the AIX defaults work well with IHS and WebSphere App Server.
One size never fits all. Although the standard defaults may provide satisfactory service
levels in 95% of cases, there will always be 5% of installations where tuning is required to
achieve satisfactory service levels for users.
Consider the worst case scenario. Anticipate events that will stress the environment. These
may include peak times of day, week, month or year, and special events such as marketing
campaigns or sporting events.
Both IBM HTTP Server and WebSphere Application Server are highly multi-threaded and
can benefit significantly from Simultaneous Multi-Threading (SMT): up to 40% increased
throughput on Power5 systems and 50% increased throughput on Power6.
Note: Early implementations of Java – prior to Java V1.2 – used “green threads”. These
threads were emulated threads that all ran on a single OS thread. Some individuals still
have the misconception that Java runs on a single OS thread and therefore disable SMT.
This has not been true since Java V1.2 was introduced in 1999.
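A minimal sketch of checking and enabling SMT with the smtctl command (the available modes depend on the Power hardware and AIX level):
# Show whether SMT is enabled and how many hardware threads per core are in use
smtctl
# Enable SMT for the partition (see the smtctl man page for the -w now and
# -w boot options that control when the change takes effect)
smtctl -m on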
tcp_finwait2 – The time to wait before closing a connection in the FIN_WAIT_2 state
tcp_keepinit – The TCP initial connection timeout (allows failover to take place quickly)
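For illustration, both tunables are set with the no command. The values below are examples only, not recommendations from this presentation, and like most AIX TCP timers they are expressed in half-seconds:
# Display the current values
no -o tcp_finwait2
no -o tcp_keepinit
# Set now and persist across reboots (-p); example values only
no -p -o tcp_finwait2=120   # 60 seconds in FIN_WAIT_2 before the socket is closed
no -p -o tcp_keepinit=40    # 20 second initial connection timeout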
Set the default Hard limits to unlimited.
In /etc/security/limits: core_hard = -1
cpu_hard = -1
data_hard = -1
fsize_hard = -1
stack_hard = -1
nofiles_hard = -1
Set the default Soft limits to unlimited for AT LEAST the WebSphere userid(s)
In /etc/security/limits: core = -1
cpu = -1
data = -1
fsize = -1
stack = -1
nofiles = -1
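For illustration, a stanza for a hypothetical WebSphere userid (wasadmin is an assumed name) in /etc/security/limits; the same values could go in the default stanza or be set with chuser, and they take effect at the user's next login:
wasadmin:
	core = -1
	cpu = -1
	data = -1
	fsize = -1
	stack = -1
	nofiles = -1
	core_hard = -1
	cpu_hard = -1
	data_hard = -1
	fsize_hard = -1
	stack_hard = -1
	nofiles_hard = -1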
I highly recommend that network interfaces be manually configured for the correct speed to
match their peer network components and configured for Full Duplex. Network interfaces
running in Half Duplex mode will severely impact server throughput.
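An illustrative sketch using ent0 (attribute names and the set of supported media_speed values vary by adapter, so check the lsattr output first):
# Show the speed/duplex the adapter is actually running at
entstat -d ent0 | grep -i "media speed"
# List the media_speed values this adapter supports
lsattr -R -l ent0 -a media_speed
# Force 100 Mbps Full Duplex to match a manually configured switch port;
# -P defers the change until the device is reconfigured or the system reboots
chdev -l ent0 -a media_speed=100_Full_Duplex -P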
Cipherspecs ARE NOT an AIX tunable; however, if SSL connections are heavily used, the
cipherspecs used can SEVERELY impact CPU utilization. The following information is
provided for your awareness in the event that you experience high CPU utilization with
moderate to high numbers of connections.
When SSL connections are established the client and server negotiate the cipherspec to use
for traffic encryption. The server will select the strongest cipher that is supported by both the
client and the server. By default, triple-DES (3DES) is the strongest cipher supported by the
IBM HTTP Server. Since 3DES is almost universally supported by client browsers, if it is
supported on the server then it will be the cipher chosen.
Unless 3DES cipher strength is required and CPU utilization is not a concern, I recommend
removing 3DES from the server's list of available ciphers.
See: “IBM HTTP Server Performance Tuning” in the references for a more complete
discussion of ciphers.
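A hedged httpd.conf sketch: when explicit SSLCipherSpec directives are present, IHS normally offers only the listed ciphers, so leaving the 3DES cipherspec out of the list removes it. The cipher names and short codes vary by IHS release; see the tuning guide referenced above.
<VirtualHost *:443>
  SSLEnable
  # Offer only these ciphers; the 3DES cipherspec
  # (SSL_RSA_WITH_3DES_EDE_CBC_SHA) is intentionally omitted
  SSLCipherSpec SSL_RSA_WITH_RC4_128_SHA
  SSLCipherSpec TLS_RSA_WITH_AES_128_CBC_SHA
</VirtualHost>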
somaxconn
Default: 1024
Recommend: 4096 or larger
The actual size of the IHS connection backlog is determined by the IHS default or the
ListenBacklog directive in the IHS configuration file. During peaks or bursts of inbound Web
traffic we want to queue as much of the traffic at the Web server as possible rather than
flood the downstream servers with too much traffic. If somaxconn is set to a reasonably
large number then IHS can easily be reconfigured to use a larger connection request
(ListenBacklog) queue. Otherwise, somaxconn can only be altered temporarily and a reboot
will be required to increase the maximum queue size permanently.
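A minimal sketch combining the two settings, using the recommended value above:
# AIX: raise the kernel cap on listen queues, effective now and after reboot (-p)
no -p -o somaxconn=4096
# httpd.conf: request a matching connection request queue from IHS
ListenBacklog 4096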
Default: 0
Recommend: 1
Apart from the tuning discussed thus far, which is common to both IHS and
WebSphere Application Server, the most important consideration for WebSphere
performance is memory. Java performance, and therefore WebSphere performance, is
critically dependent on JVM memory being resident and not paged.
Java users tend to think that the Java Heap equates to JVM memory. As the diagram on
the previous page showed, there is significant memory used by the JVM outside of the Java
Heap. When planning a WebSphere infrastructure it is important to account for all of the
memory required by all JVMs on a server, plus other system support processes, to
ensure that memory is not overcommitted, which would force JVM paging to occur.
The following data is taken from a live production WebSphere system on AIX:
Actual Memory Resident Set Size (RSS): 550MB (determined using ps aux)
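To illustrate, JVM residency and paging can be checked with standard AIX tools (substitute the real process id):
# The RSS column (in KB) shows each JVM's resident set size
ps aux | grep java
# Real-memory and paging-space detail for a single JVM process
svmon -P <pid>
# Sustained non-zero pi/po (page-in/page-out) values while the JVMs are busy
# indicate that memory is overcommitted
vmstat 5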
somaxconn (on the WebSphere Application Server nodes)
Default: 1024
Recommend: 128
The actual size of the WebSphere connection request queue is determined by the TCP
Transport Channel default (511) or the setting of the TCP Transport Channel custom
property listenBacklog. Reducing the size of the connection request queue improves
failover in the event of an application server failure. This of course assumes that there is
more than one application server to which inbound requests can fail over.
Since listenBacklog is a custom property it is often (in fact usually) overlooked, and the
connection request queue therefore defaults to a size of 511. Setting somaxconn at the
OS level limits the maximum connection request queue size regardless of the default or
explicit setting of listenBacklog.
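A sketch of the OS-side cap on each application server node; the listenBacklog custom property itself is configured on the TCP transport channel in the WebSphere configuration, not in AIX:
# Cap the connection request queue for listening sockets on this node,
# effective now and after reboot (-p)
no -p -o somaxconn=128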
References
Contact Information
Colin Henderson
Executive IT Architect
Software Services for WebSphere
IBM Software Group