
Dekrinssoft

WebSphere Application Server Administration


Network Deployment package 6.0 & 7.0

Prepared By Rajashekhara Reddy K

DEC 2012-13
IBM WEBSPHERE APPLICATION SERVER ADMINISTRATION
17-Dec-12
Application: An application is a collection of resources like servlets, jsp’s, EJB’s, jar(Java
Archive), xml files, images, property files etc all together build to have a desired output.

Server: A server provides runtime environment for the applications. It takes the responsibility
to receive the end user’s request to identify the resources, to execute that resource and to
generate a response.

Java has 3 flavors: J2SE, J2EE, and J2ME.

By using J2SE we can develop only desktop applications. If we need any web-based
applications we have to depend on J2EE components like servlets, JSPs, or EJBs.

- If an application contains any of the J2EE components we require additional software
(a server) to run that application.
- By using WAS we can run both .war and .ear files.

WebSphere Application Server Versions (approximate usage in companies)

6.0 – 10-20%
6.1 – 20-40%
7.0 – 50-80%
8.0 – 0-5%
8.5 – latest
WebSphere 6.0

3 Types of packages

1) WebSphere application server – Express package.


2) WebSphere application server – Base Package.
3) WebSphere application server – Network Deployment Package.

Package: A package is a complete setup with its own terminology.

Example: Windows XP is a package.
- Express and Base packages provide a stand-alone environment. Here we can't achieve
high availability, single point of administration, workload management, etc.
- With the Network Deployment package (distributed environment) we can achieve high availability,
single point of administration, workload management, and clustering, so that we can minimize
the downtime or outage of the application.

Network Deployment Package: distributed or clustered server environment

Express Package - Rational Web Developer (Eclipse, MyEclipse, etc.)
Base Package - Rational Application Developer
Network Deployment Package

WAS Installation Prerequisites:

WAS 6.0 - Base Package

Prerequisites
- 1 GB RAM.
- 2GB Hard Disk.
- OS - Windows, Linux.

Types of Installation
GUI Mode and Silent Mode

GUI MODE:
Go to
C:\Documents and Settings\Administrator\Desktop\6.0 Base\WAS
Click install.exe
Click Next ….

Accept the license and click next


Click Next

Installation Directory for

HP – UNIX, LINUX, Sun Solaris

/opt/IBM/websphere/appserver

For AIX
/usr/IBM/websphere/appserver

For Windows:

C:\IBM_Base_Dec_GUI\websphere\appserver

Next.. Select Full Installation

Click Next

Click Next
Click Finish..

After installation, a First Steps dialogue box opens.

In that we have the options below.
Working with installation directories

/opt/IBM/websphere/appserver (Linux)
<WAS-ROOT> - This is called the root directory.
    bin
<profile-home> - This is the important one for maintaining multiple servers.
    bin
Basic Operations on a Server

Command to start a server

<profile-home>/bin> ./startServer.sh <server-name>                        (Linux)
C:\IBM_BASE_GUI\WebSphere\AppServer\bin>startserver.bat server1           (Windows)

Command to stop a server

<profile-home>/bin> ./stopServer.sh <server-name>                         (Linux)
C:\IBM_BASE_GUI\WebSphere\AppServer\bin>stopserver.bat server1            (Windows)

Command to check the server status

<profile-home>/bin> ./serverStatus.sh <server-name>                       (Linux)
C:\IBM_BASE_GUI\WebSphere\AppServer\bin>serverstatus.bat server1          (Windows)
<profile-home>/bin> ./serverStatus.sh -all                                (Linux)
C:\IBM_BASE_GUI\WebSphere\AppServer\bin>serverstatus.bat -all             (Windows)

URL to access WAS admin console

http://<host-name>:<admin console port number>/ibm/console


http://<host-name>:<admin console portnumber>/admin
http://<ipaddress of that host>:<portnumber>/ibm/console
http://<ipaddress of the host>:<portnumber>/admin

If the browser is on the same box we can use localhost as well.

http://localhost:<admin console port number>/ibm/console

The default admin console port number is 9060.

Where can I find the port numbers in the WAS installation directory?

<profile-home>
  config
    cells
      <cell-name>
        nodes
          <node-name>
            serverindex.xml
Path:
C:\IBM_BASE_GUI\WebSphere\AppServer\profiles\default\config\cells\RajashekharNode01Cell
\nodes\RajashekharNode01\serverindex.xml
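A quick way to pull the port assignments out of that file on Windows (a sketch; WC_adminhost and WC_defaulthost are the standard endpoint names to look for):

C:\IBM_BASE_GUI\WebSphere\AppServer\profiles\default\config\cells\RajashekharNode01Cell\nodes\RajashekharNode01>findstr /i "endPointName port" serverindex.xml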

Profile:
The WAS installation process allows for two main actions. The first being the base
binaries which are the core executables and the second being a profile. The base binaries are
the product mechanics, made up of executables and shell scripts, and can be shared by one or
many profiles. Profiles exist so data can be partitioned away from the underlying core. Simply
put, a profile contains an Application Server. When an Application Server is running, the server
process may read and write data to underlying configuration files and logs. So, by using profiles,
transient data is kept away from the base product. This allows us to have more than one profile
using the same base binaries and also allows us to remove profiles without affecting other
profiles. Another reason for separating the base binaries is so that we can upgrade the product
with maintenance and fix packs without having to reconfigure our Application Server profiles.
Types of profile creation:
1) By using profile creation wizard (6.0) or
By using profile management tool (from 6.1 onwards).
2) By using command line.
3) By using silent mode.
Profile creation using profile creation wizard:

- Go to the WAS root/bin directory; there we have the profile creator,
pctwindows.exe. Execute that setup file.

Profile creation wizard
Click Next

Profile name
Default name: AppSrv01 (optionally we can change the name)
Click Next
Profile directory
Default path given by the system, or we can change it
Click Next

Node name & host name


Node name: RajashekharNode02
Host name: system Name (Rajashekhar)
Click Next
Port value assignment
Click Next

Windows service definition


Run the application server process as a Windows service (by default it is
checked; we have to uncheck it)
Log on as a local system account
Log on as a specified user account (provide user ID and password)
Startup Type: Manual
Click Next
Click Next
Click Finish.

WAS Directories Hierarchy:


WAS Components:

Steps to install WAS through full silent mode:

Before installing WAS through silent mode we have to follow the points below.
1) Identify the response file location.
2) Take a backup of that response file.
3) Customize the response file parameters according to your environment.
4) Save the changes and execute that response file by using the command below from the command
prompt:

./install.exe -options <path of the response file> -silent

5) If the installation is successful we will get a message called INSTCONFSUCCESS.

Explanation of the above points:

1) Response file location path (it will reside in your WAS software folder):
C:\Websphere_6.0_Base\WAS\responsefile.base
2) Take a backup of the response file to another location (copy it and rename it to response.txt;
this preserves the original file as a backup in case we need it again later):
C:\Documents and Settings\Administrator\Desktop\response.txt
3) Customize the response file (open it and edit the parameters according to your
requirements).
# InstallShield Options File
No need to change anything in this section.
# License Acceptance (By default it is “false” we have to set it to “true”)
-W silentInstallLicenseAcceptance.value="true"

# Incremental Install
In this section no need to change anything for fresh installation.

# IBM WebSphere Application Server, V6.0 Install Location


-P wasProductBean.installLocation="C:\Program Files\IBM\WebSphere\AppServer"

We have to change the default path to below path (optional)


-P wasProductBean.installLocation="C:\IBM_Base_Silent\WebSphere\AppServer"

# IBM WebSphere Application Server - Base, V6.0 UPGRADE from Base Trial or
Express

No need to perform anything in this section

# Setup type
#
# This value is required for the installation. Do not change this!
#

-W setuptypepanelInstallWizardBean.selectedSetupTypeId="Custom"

This is a mandatory value; do not change it.

# Port value assignment

Leave the defaults for the first server install.

# Node name
-W nodehostandcellnamepanelInstallWizardBean.nodeName="YOUR_NODE_NAME"
-W nodehostandcellnamepanelInstallWizardBean.nodeName="App_Node01"

In this section we have to specify the node name

# Host name
-W nodehostandcellnamepanelInstallWizardBean.hostName="YOUR_HOST_NAME"

In this section we have to give the system name


-W nodehostandcellnamepanelInstallWizardBean.hostName="Rajashekhar"

# Cell name
# -W setcellnameinglobalconstantsInstallWizardBean.value="YOUR_CELL_NAME"

In this section we have to change the cell name


# -W setcellnameinglobalconstantsInstallWizardBean.value="Mycell01"
# Run WebSphere Application Server as a Windows service

-W winservicepanelInstallWizardBean.winServiceQuery="true"
By default it is "true"; to skip running WAS as a Windows service we set it to "false":

-W winservicepanelInstallWizardBean.winServiceQuery="false"

# Specify account type of the service. Legal values are:


#
# localsystem - Indicates that you choose to use Local System account.
# specifieduser - Indicates that you choose to use specified user account.
#

-W winservicepanelInstallWizardBean.accountType="localsystem"

In this section if we choose local system no need to give user name and password. If we choose
specified user we have to give the user name and password.

# Specify startup type of the service. Legal values are:


#
# automatic - Indicates that you choose to use automatic startup type
# manual - Indicates that you choose to use manual startup type
# disabled - Indicates that you choose to use disabled startup type
#

-W winservicepanelInstallWizardBean.startupType="manual"

In this section there is no need to change anything; leave it at the default.

# Specify your user name and password. Your user name

-W winservicepanelInstallWizardBean.userName="YOUR_USER_NAME"
-W winservicepanelInstallWizardBean.password="YOUR_PASSWORD"

If the account type is localsystem we do not need to pass the user ID and password; for specifieduser, set the above values.

4) Save the changes to the response.txt and execute through command prompt.
C:\Documents and Settings\Administrator\Desktop\6.0 Base\WAS>install.exe -options
"C:\Documents and Settings\Administrator\Desktop\response.txt" -silent

How do you verify whether the installation started in silent mode?

Go to the %temp% directory, open log.txt, and we will get a message like the one below.
INSTCONFSUCCESS: Post-installation configuration is successful.

Note: After creating the WAS root binaries, the profile directory will be created.
In the response.txt we have to set the license acceptance value to true. Otherwise the installation
will fail and give a message like INSTCONFFAILED: Accept the license.

Necessary parameters to change in responsefile.base:


-W silentInstallLicenseAcceptance.value="true"
-P wasProductBean.installLocation="C:\IBM_Base_Silent\WebSphere\AppServer"
-W defaultprofileportspanelInstallWizardBean.WC_defaulthost="9080"
-W defaultprofileportspanelInstallWizardBean.WC_adminhost="9060"
-W defaultprofileportspanelInstallWizardBean.WC_defaulthost_secure="9444"
-W defaultprofileportspanelInstallWizardBean.WC_adminhost_secure="9044"
-W defaultprofileportspanelInstallWizardBean.BOOTSTRAP_ADDRESS="2810"
-W defaultprofileportspanelInstallWizardBean.SOAP_CONNECTOR_ADDRESS="8881"
-W nodehostandcellnamepanelInstallWizardBean.nodeName="App_Node1"
-W nodehostandcellnamepanelInstallWizardBean.hostName="localhost"
-W winservicepanelInstallWizardBean.winServiceQuery="true"
-W winservicepanelInstallWizardBean.accountType="localsystem"
-W winservicepanelInstallWizardBean.startupType="manual"
Profile creation through command prompt (Silent Mode):

Go to either the WAS root bin or the profile bin directory and locate
wasprofile.bat.
Copy that path and paste it into the command prompt.

-listProfiles – use this command to see how many profiles exist under the profiles home directory
(here the P is uppercase).
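For example (the exact output format is indicative):

C:\IBM_Base_Silent\WebSphere\AppServer\bin>wasprofile.bat -listProfiles
[default, Appserv01]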

If we don't know how to create a profile and which values to pass, see the screenshot
below and go through those options.

Here is the full command to enter in command prompt to create a profile


C:\IBM_Base_Silent\WebSphere\AppServer\bin>wasprofile.bat -create -profileName
Appserv01 -profilePath "C:\IBM_Base_Silent\WebSphere\AppServer\profiles\Appserv01"
-templatePath "C:\IBM_Base_Silent\WebSphere\AppServer\profileTemplates\default"
-nodeName App_Node02 -cellName Mycell02 -hostName rajasekhar-pc -startingPort 2000

Note: In WAS 6.0 the port values will not auto-increment. From WAS 6.1 onwards they
increment through the command line as well.
After successful creation we will get a message:
INSTCONFSUCCESS: Success: The profile now exists.

Profile:
A profile is an environment which contains a server, an admin console, a node, and some
dependent and independent XML and configuration files. With the help of a profile we can do the
admin activities on the server and applications.

Profile Hierarchy:

- In the case of the Base and Express packages we can create only one type of profile, the
application server profile.
- In the case of the ND package we can create 3 types of profiles:
1) Deployment manager profile.
2) Application server profile.
3) Custom profile.
- In the case of the Base and Express packages we have only one profile template, the "default"
template (used for application server profiles).
- In the case of the ND package we have 3 types of profile templates:
1) dmgr – used for the deployment manager profile.
2) default – used for the application server profile.
3) managed – used for the custom profile.
Network Deployment Package Installation:
Go to
C:\Documents and Settings\Administrator\Desktop\Websphere_6.0_ND\ndv60_appsrvotherstuffHL\C587UML_ndv60_appsrvotherstuff\WAS
Click install.exe

Click Next

Accept the license and click next


Click Next

Click next
Click Next

Click Next.
Click Next

Click Ignore
Uncheck launch the profile creation wizard. Click Next

Click Next
Click Finish.
Creating Application server Profile Using Profile Creation Wizard:

Here by default we will get a server called server1, admin console and a node. It is not
possible to create additional servers unless its node is federated to the dmgr.

Go to WAS root bin


Profile Creator
Pctwindows.exe (C:\IBM_ND_6.0\WebSphere\AppServer\bin\pctwindows.exe)

Click Next
Here select which profile to create. By default Application server profile
Choose Application server profile.
Click Next

Click Next

Click Next
Note: In case of a hostname mismatch the installation will succeed, but at the time of starting the server
we will get an exception (an InvocationTargetException).
Node: A node is a logical group of servers; it is a logical grouping that typically corresponds to one physical machine.
Click Next
Click Next
Uncheck Run the application server process as a windows service.

Click Next
Click Next
Click Finish
Creating Deployment manager profile:
It contains an admin console, a node, and the dmgr process. By default it won't contain any server
to deploy and run applications.
Using the dmgr we can create any number of servers under one profile.
Click profile creation wizard from first step console.

Click Next
Here we have to select create deployment manager profile.

Click Next
Click Next

Click Next
Click Next

Click Next
Uncheck Run the Application Server process as a Windows service
Click Next
Click Next
Click Finish
Launch the First Steps console. There you will find only one difference compared to the other installations:
the option to start the deployment manager.
----- To start/stop the dmgr use startmanager.bat / stopmanager.bat

To stop the dmgr process

- Log in to the admin console of the dmgr; there we can see a lot of changes when compared to the base
package admin console.
Base package admin console (Application server profile)

- Log in to the dmgr admin console and do some R&D on it.

http://localhost:9062/ibm/console

Creating Custom Profile:


---- Using profile creation wizard

Click Next
Profile Type select Custom profile

Click Next

Click Next
Click Next

Click Next
Click Next

Click Next
Click Finish
Federation
Federation is the process of adding a node from either an application server profile or a custom profile to
the deployment manager profile. Once the node of either an application server profile or a custom
profile is available to the dmgr, we can create any number of servers under that node and we can
manage them from the same dmgr console.
------ We can add a node by using the command:
./addNode.sh <host-name of dmgr> <soap connector port of dmgr> -includeapps
-username <user name> -password <password> (If global security is enabled we have to
give the user name and password.)
------ At the time of federation a process called the node agent is created to communicate
between the dmgr profile and the federated profile.
------ A node which has a node agent is called a managed node.
------ A node which does not have a node agent is called an unmanaged node.
------ After federation and configuration, if the node agent is down there is no impact on the
federated server profiles and applications, but we can't do administration activities through
the dmgr console.
------ If the dmgr is down there is also no impact on the federated server profiles
and applications; we can access those applications normally.
------ Even when we stop both the dmgr and the node agent we can still access the server and applications.
Go to application server under server tab on left side pane.

Click New

Then you will get an error message; see the screenshot below.

------ The first time we log in to the dmgr console there are no nodes that support
servers in the configuration. For that we have to add a node from either an application server
profile or a custom profile to the deployment manager profile. Once the node of either
profile is available to the dmgr we can create any number of servers
under that node and manage them from the same dmgr console.
Federation Process
Federation (adding the node from AppSrv01 to Dmgr01)
---- Go to <profiles>/AppSrv01/bin
---- Go to the CMD prompt and execute the command:
<AppSrv01-profile>/bin>addnode.bat <hostname of dmgr> <SOAP connector port of dmgr> -includeapps
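A concrete sketch of that command (the host name is this guide's machine and 8879 is the default dmgr SOAP connector port; adjust both to your environment):

C:\IBM_ND_6.0\WebSphere\AppServer\profiles\AppSrv01\bin>addnode.bat rajasekhar-pc 8879 -includeapps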

After successful federation we can add any number of servers under App_Node01.
Adding servers in dmgr console.
Go to dmgr admin console. Go to servers and click Application servers

Click New

Type Server name


Click Next
Click Next

Click Next

Click Next
Click finish

Click save changes


Check synchronize changes with node.
Click save.
Go to Servers and Application servers in the left pane.

You will see the server3


Select server3 and click start button.

Go to CMD prompt check the status of the servers


Go to profiles/Appsrv01/bin
To stop the dmgr process, run the following command in the CMD prompt.

To start the dmgr process, run the following command in the CMD prompt.

Federate Custom Profile:


------ Before federating the custom profile we have to start the deployment manager
Jump to Custom01 profile path
Execute the command to add a node to dmgr
------ Login to the dmgr console

Go to server and application servers

Click on Application servers

Click New
From the drop down select Custom_Node01
Enter server name: server4

Click Next
Click Next

Click Next

Click Finish
Click save changes

Check Synchronize changes with Nodes.


Click save changes.
Select server4 and click start

Through the command line we can start/stop server4.

---- To start/stop the node agent use:

startnode.bat / stopnode.bat
---- Synchronization problem (if proper synchronization hasn't happened), use the command below:

profiles/AppSrv01/bin>syncnode.bat <hostname of dmgr> <soap connector port of dmgr>

Note: In this case we have to stop the node agent first and then do the synchronization.
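A minimal sketch of the stop/sync/start sequence, assuming the AppSrv01 profile path used in this guide and the default dmgr SOAP port 8879:

C:\IBM_ND_6.0\WebSphere\AppServer\profiles\AppSrv01\bin>stopnode.bat
C:\IBM_ND_6.0\WebSphere\AppServer\profiles\AppSrv01\bin>syncnode.bat rajasekhar-pc 8879
C:\IBM_ND_6.0\WebSphere\AppServer\profiles\AppSrv01\bin>startnode.bat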

Uninstallation:

---- Go to WAS root and _uninst folder


C:\IBM_ND_6.0\WebSphere\AppServer\_uninst
Click uninstall.exe
---- If we uninstall directly by deleting files or removing from the Control Panel it doesn't uninstall
properly, and it will throw errors during a fresh installation.

C:\windows\vpd.properties ---- this properties file is updated at the time of installation and
uninstallation.

Uninstallation by using silent Mode

By using below command in CMD prompt.
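The screenshot with the command is not included here; assuming the uninstaller accepts the same -silent flag as the installer, the command would look like:

C:\IBM_ND_6.0\WebSphere\AppServer\_uninst>uninstall.exe -silent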

Deleting Profiles using CMD prompt:

Deleting all Profiles at a time:
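The screenshots for these two operations are not included; as a sketch using the wasprofile tool and the paths from earlier examples:

C:\IBM_ND_6.0\WebSphere\AppServer\bin>wasprofile.bat -delete -profileName AppSrv01
C:\IBM_ND_6.0\WebSphere\AppServer\bin>wasprofile.bat -deleteAll

(After deleting a profile, its directory under profiles may still need to be removed manually.)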

---- We can find port no’s in


C:\IBM_ND_6.0\WebSphere\AppServer\profiles\AppSrv01\config\cells\dmgr_cell01\nodes\Cus
tom_Node01\serverindex.xml
One more method to find ports:
C:\IBM_ND_6.0\WebSphere\AppServer\profiles\Dmgr01\logs\portdef.props
--- Note that changes made in portdef.props have no effect; we have to make the modification
in serverindex.xml for it to take effect.

Security
Security is an important part of any application server configuration. In this chapter,
we will cover securing the WebSphere Application Server's administrative console and how to
configure different types of repositories containing the users and groups of authorized users
who are given different levels of access to administer a WebSphere server.

Global Security
During the installation process of WebSphere Application Server, we opted not to turn on
global security and thus we did not have to supply a password to log in to the Administrative
console. We logged in using the username wasadmin and we were not prompted for a
password. The truth of the matter is that we could have actually used any name as the console
was not authenticating us at all. To protect our WAS from unauthorized access, we need to turn
on global security.
WAS supports the following user registries:
1) Local OS user registry.
2) Custom user registry. (WAS 6.0)
3) LDAP user registry.
4) Federated repository. (WAS 6.1 onwards)
Steps to configure global security by using local os registry:
1) Create user accounts in your local os.
2) Assign passwords for that account.
3) Login to the admin console and expand security.
4) Select secure administration, applications and infrastructure option.
5) Select security configuration wizard.
6) Select local os option to configure with local os registry.
7) Provide user id and password.
8) Under LTPA authentication mechanism, confirm the password once again.
9) Enable administrative security check box.
10) Select local operating system under available realm definitions.
11) Save the changes and restart the server.
12) Now access the admin console using http://<host-name>:9045/ibm/console (We have
to check in dmgr01 server index.xml for secure port).
13) Provide user name and password to login admin console.

Process:
For first two steps go to Administrative tools select Computer management
Select local users and computers
Select Users
Go to Actions menu and select New User
Provide user name: test123
Password: test123

Click Create and then Close.


Select test123 right click and select properties from the user list.

Select Member of, select users and click Add


Type “Administrators” in the names to select box or click Advanced and select Find Now and
select Administrators, click ok.

Click Check names.


It will show below box.
Click ok
Click apply
Click ok
----- Log off the current user and log in as test123 (just to check whether it is working correctly
or not).
----- Do the same for the admin, config, operator, and monitor accounts.
----- Login to dmgr admin console.
Go to security
Select global security
Select Local OS under user registries in right pane.
Provide server user id: test123
Provide server password: test123

Click Apply. Click ok.


Select and collapse Authentication in that select Authentication Mechanism in right pane.
Select LTPA
Confirm the password: test123
Click save changes.
Click ok.
Click save changes.
Then security.xml will be updated with the new changes.
Check Synchronize changes with Nodes.
Click save.

------- Enable Global Security check box.


Select Local OS (single, stand-alone server or sysplex and root administrator only) under Active User
Registry.

Click Apply. Click OK.


And save the changes.
Click Save
Check the Synchronize changes with Nodes.

------ Log out the Admin console


------ Stop dmgr using below command.

Start dmgr using below command


By using stopmanager.bat/startmanager.bat we can stop/start the dmgr server; at that point security.xml is
updated under the dmgr profile.
------ Login to the Admin Console using the below URL.
https://localhost:9045/ibm/console/logon.jsp

By default test123 is having Administrator role.

------There are 4 types of roles.


1) Administrator.
2) Configurator.
3) Operator. (In WAS 6.0)
4) Monitor.
----- Earlier we created 4 user accounts with names corresponding to admin, config, operator, and
monitor.
----- We have to add those users in the console (logged in as the test123 administrator) and assign a role to each of the created user
accounts.
----- Log in to dmgr admin console.
----- Select System Administration under that collapse Console settings in that select console users in
left pane.

In the above box click Add button.


Here we have to give User Name: admin
Role: Administrator
Click Apply. Click OK.
Click Save changes.

Check Synchronize changes with Nodes.

Click save.
----- Do same for the config, operator and monitor user accounts.
----- For the changes made above to take effect we have to restart the dmgr profile. With security enabled we
can't stop the dmgr directly; at the time of stopping the dmgr profile we have to provide a user name and password
(the user name and password of a dmgr console administrator).

----- For the error log we can see in below path.


C:\IBM_ND_6.0\WebSphere\AppServer\profiles\Dmgr01\logs\dmgr\stopserver.log

Observe the screenshot below. Here we gave the user name and password of a dmgr administrator;
only then will it stop.

---- Again we have to start the dmgr server profile.

---- Now we have to log in to each user account through the dmgr console.
The admin user can do anything.
----- Config user

This user can configure servers and applications but does not have the authority to stop and
start servers.

---- Operator user account

This user can only start/stop the servers.

---- Monitor user account

This user can only monitor whether the applications and servers are up and running; he can't change anything.
Export: Backs up the application. No need to stop the server.
Export DDL: Backs up only the database queries (DDL) of the application.
Enable Security by using Custom User Registry:

Steps to follow to create custom user registry:

1) Create two files: a) users.registry and b) groups.registry.


2) Add user accounts information under users.registry file.
3) Add group’s information under groups.registry.
4) Login to dmgr console and expand Security.
5) Select the secure administration, applications and infrastructure option.
6) Select security configuration wizard and select custom user registry option.
7) Create two variables usersFile and groupsFile.
8) Provide the absolute path of users.registry and groups.registry as a value for those
variables.
9) Enable administrative security check box and select custom registry under available
realm definitions.
10) Save the changes and restart server.
11) Login to the dmgr admin console by using http://rajasekhar-pc:9045/ibm/console

Process:
---- Open Notepad and save two files as "users.registry" and "groups.registry".

---- In users.registry we have to type the content below:

#<userid>:<password>:<uid>:<groupIDs>:<displayname>
wasadmin:password:101:200:wasadmin

---- In the groups.registry file we have to type the content below:

#<groupID>:<gid>:<members>:<displayname>
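A sample group entry following the format above (the group name, gid, and display name here are assumptions for illustration):

admins:200:wasadmin:Administrative group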

----- Login to dmgr console expands Security and select Global Security option.
---- Select User Registries under that select Custom User Registry.

Click Custom.
Provide Server User ID: wasadmin
Server User Password: password
Click Ok.

Click Save.
Check Synchronize changes with Nodes.
Click Save. (Security.xml file will update under dmgr).
---- Go to Global Security.
Select User Registry under that select Custom and select Custom Properties.

Click Custom

Select Custom Properties under Additional Properties.


Click New.
Provide Name: usersFile
UsersFile path: C:\Documents and Settings\Administrator\Desktop\Registries\users.registry

Click Ok.

Do the same for groups.registry


Click Ok.

Click save changes.

Click Save
---- Go to Global Security
Under Authentication collapse Authentication Mechanisms under this select LTPA.

Provide Password and confirm password: password


Click Ok.

Click Save

Click save.
-----Under Active User Registry select Custom Registry.
Click Apply. Click Ok.

Click Save.
Click Save.
---- Logout from dmgr admin console.
---- Restart the dmgr server.
------ Log out the Admin console
------ Stop dmgr using below command.

Start dmgr using below command

By using stopmanager.bat/startmanager.bat we can stop/start the dmgr server; at that point security.xml is
updated under the dmgr profile.
------ Login to the Admin Console using the below URL.
https://localhost:9045/ibm/console/logon.jsp
Click Log in.

Disable Global Security


--- Login to dmgr console
User ID: wasadmin
Pwd: password
--- Go to Security and select Global security.
Uncheck the Enable global security.

Click apply. Click Ok.

Click Save changes.

Check Synchronize changes with Nodes. Click save.


---- Stop dmgr by using below command.
---- Start the dmgr server.

Steps to configure Database or Data source and jdbc providers:


1) Identify the jar files location.
2) Configure jar files with WebSphere variables.
3) Create jdbc providers.
4) Create Data sources.
5) Use Test connection. If it connects to the database we will get "Test connection successful".
---- DB2 default username: db2admin
DB2 default port: 50000
DB2 default database: SAMPLE
Jar file required for DB2: db2jcc.jar
---- Oracle XE default user name: sys or system
XE default port: 1521
XE default database: XE
Jar file required for XE: ojdbc14.jar
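For reference, the JDBC URLs used with these defaults take the standard forms (the host name is an example):

Oracle: jdbc:oracle:thin:@rajasekhar-pc:1521:XE
DB2:    jdbc:db2://rajasekhar-pc:50000/SAMPLE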

Process:
.jar: A .jar is a Java archive; it contains a collection of class files.

(Diagram: App1 and App2 on the server use connection objects to reach the database.)
---- The server creates connection objects to interact with the database. It does this through JDBC drivers;
the database vendor provides the jar files.
---- First install Oracle 10g XE.
Click Next
Accept the License.
Click next.

Click next.
Password: admin
Click next.

Click Install.
Click Finish.
---- jar file location of Oracle 10g XE
C:\oraclexe\app\oracle\product\10.2.0\server\jdbc\lib\ojdbc14.jar.

---- Configure Oracle ojdbc14.jar with WebSphere


Login to dmgr admin console
Select Environment

Click WebSphere Variables.


Check ORACLE_JDBC_DRIVER_PATH
Click on that.
Provide the path of Oracle JDBC Driver.
C:\oraclexe\app\oracle\product\10.2.0\server\jdbc\lib

Click Apply.

Click Save changes.

Variables.xml file will update.


Click save.
---- Go to dmgr select Resources under that select JDBC Providers.

Click New.
Provide the details
Select database type: Oracle
Select provider type: Oracle JDBC Driver
Select the implementation type: Connection pool data source.

Click Next.
Click Apply.
Click Ok

Click save changes.

Click save changes.


---- Go to Resources and jdbc providers.

Click on Oracle JDBC Driver.


Select Data sources under Additional Properties.

Click on new.
Provide Data Source Name.
Select data store helper class as oracle 10g data store helper.
Provide the url

Click Apply. Click Ok.


Click save changes.

Click save.
---- Go to Resources select Oracle JDBC Driver
Under Additional Properties select Data Sources.

Select OracleDS DataSource.


Under Related Items select J2EE connector Architecture (J2C) Authentication Data Entries.

Click New.
Provide below details.
Alias: Oracle_details
Provide user name: System
Password: admin.

Click Apply.

Click Save changes


Click save.
--- Go to Resources, Jdbc providers.

Click on Oracle JDBC Provider.


Select Data Source under Additional Properties.

Click OracleDS.
---- Map the Alias Under Component-managed authentication alias
Click Apply.

Click Save Changes.

Click save.

-----Testing the connection.


Go to Resources, select JDBC providers under that select Oracle JDBC Driver.
Select Data Sources under Additional Properties.
Check OracleDS Datasource.

Click on Test Connection.


Connection successful.

DB2 Installation and configure JDBC with WebSphere Application Server


--- Execute DB2 setup.exe

From install a product

Click Install New


Click Next.
Accept the license.

Click Next
Select Typical.
Click Next.

Click Next.
Click Next.
Provide Password: admin

Click Next.
Click Next

Click Finish.
Click Finish.
---- Checking the DB2 installation & Sample db.
---- Start – All Programs – Default – General Administration Tools – Control Center
---- DB2 jar file location.
C:\IBM_DB2\DB2\SQLLIB\java\db2jcc.jar (db2jcc_license_cu.jar is also important).

Configuring Db2 jar file with WebSphere

---- Go to Environment
Click WebSphere Variables.

For each and every variable we have to provide the DB2 jar file location.

Check and click DB2UNIVERSAL_JDBC_DRIVER_NATIVEPATH


Provide the DB2jcc.jar file location.
C:\IBM_DB2\DB2\SQLLIB\java
Click Apply.

Click Save Changes.

Click Save.
---- Do the same for remaining DB2 Drivers.
---- Go to Resources
Click JDBC Providers.

Click New.
Provide the information.
Database type: DB2
Provider Type: DB2 universal jdbc driver provider.
Implementation Type: connection pool Data Source.
Click next.

Click Apply.
Click Save the changes.

Click Save.

Click on DB2 Universal JDBC Provider.


Click on Data Sources under Additional Properties.

Click New.
Name: DB2DS
JNDI name: jdbc/DB2DS
Data store helper class: DB2 Universal data store helper.
Provide DB name; SAMPLE
Server Name: rajasekhar-pc
Port: 50000 (Default)

Click Apply.

Click Save.

Click Save.
Click DB2DS

Click J2EE Connector Architecture (J2C) authentication data entries under Related Items.

Click New.
Provide Alias: DB2_det
User ID: db2admin
Password: admin
Click Apply.

Click Save.

Click Save.
---- Map the Alias
---- Go to Resources
Click JDBC Providers.

Click on DB2 Universal JDBC Driver Provider.

Click on Data Sources under Additional Properties.

Click DB2DS.
Component-managed authentication alias under that select dmgr_node1/DB2_det
Click Apply.

Click Save.

Click Save.
Check DB2DS and
Click on Test Connection.

Connection successful.
WebSphere Application Server 7.0 Network Deployment
Installation:
----Go to C:\Documents and Settings\Administrator\Desktop\Websphere_7.0\WAS\install.exe
Click install.exe

Click Next.
Accept the License.

Click Next.
Click Next.
No need to check anything.

Click Next.
Click Next.
Select None.
Note: From WAS ND 6.1 onwards one more profile type is added:
the Cell profile, i.e. Dmgr01 plus AppSrv01
(the AppSrv01 profile is automatically federated to the dmgr).

Click Next.
Click Yes.
Uncheck Create a repository for Centralized Installation Managers.(New feature in 7.0).

Click Next.

Click Next.
Uncheck create a new WebSphere Application Server Profile using the Profile Management
Tool.
Click Finish.
---- Compared with 6.0, in 7.0 three more profile types are added.
Profile Creation Using Profile Management Tool:
---- Go to C:\IBM_ND_7.0\WebSphere\AppServer\bin\ProfileManagement\pmt.bat
(Or)
Using Profile Management Tool.

Click Launch Profile Management Tool.


Click On create.
Select Cell (deployment manager and a federated application server).

Click Next.
Select Advanced Profile Creation Radio button.

Click Next.
Click Next.

Click Next.
Click Next.
Uncheck Enable administrative security.
Click Next.

Click Next.
Click Next.
Click Next.
Click Next.
Uncheck Run deployment manager process as a Windows service.
Startup type: Manual.

Click next.
Uncheck Create a Web server definition.

Click Next.

Click Create.
Click Finish.
Q) How can you know whether federation has been done without logging in to the dmgr console?
Ans: First we have to check the server status of the dmgr.

---- Login to the dmgr admin console using the URL http://rajasekhar-pc:9066/ibm/console

---- But the server status is unavailable.


There is a problem with synchronization. For that we need to synchronize the node with the
dmgr using the command below.

--- Check the server status of AppSrv01 profile.

---- Start the node on AppSrv01 Profile.

---- Start server on AppSrv01 Profile.

---- Login to dmgr console and check whether the server has started or not.
Creating Custom Profile using Profile management tool:
---Go to Profile Management Tool.

Click Launch Profile Management Tool.

Click on Create.
Select Custom Profile.

Click Next.
Click Next.

Click Next.
Click Next.
If we check Federate this node later, we have to add the node to dmgr using below command
after successful completion of Custom profile creation.
----

Click Next.
Click Next.
Click Next.
Click Next.
Click Create.

Click Finish.
Federation in WAS 7.0
---- Login to dmgr console.
http://rajasekhar-pc:9066/ibm/console/login.do
---- Go to Servers select WebSphere Application Servers under server type.
Click on New.
In select Node drop down list select app_Node01 (ND 7.0.0.0).
Server Name: server3.

Click Next.

Click Next.
Click Next.

Click Finish.
Click on Review.
Check Synchronize changes with Nodes.

Click save.
Click Ok.
---- Start the server3.
Adding the server under Custom Node.
---- Select servers under that select server types in that select WebSphere Application Servers.

Click on New.
Under select node drop down list select Custom_Node01 (ND 7.0.0.0)
Server name: server4

Click Next.

Click Next.

Click Next.
Click Finish.

Click on Review.
Check Synchronize changes with Nodes.

Click Save.
Click Ok.
Check the server4. Click start.
Global Security in WAS 7.0 ND
Steps to configure global security by using local os registry:
1) Create user accounts in your local os.
2) Assign passwords for that account.
3) Login to the admin console and expand security.
4) Select secure administration, applications and infrastructure option.
5) Select security configuration wizard.
6) Select local os option to configure with local os registry.
7) Provide user id and password.
8) Under LTPA authentication mechanism, confirm the password once again.
9) Enable administrative security check box.
10) Select local operating system under available realm definitions.
11) Save the changes and restart the server.
12) Now access the admin console using http://<host-name>:9045/ibm/console (We have
to check in dmgr01 server index.xml for secure port).
13) Provide user name and password to login admin console.
Process:
---- Go to dmgr console.
--- Select Global Security under Security.
Click on Security Configuration Wizard.
Uncheck Enable application Security.

Click Next.
Select Local Operating System radio button.
Click Next.
Primary Administrative Name: test123.

Click Next.

Click Finish.
Click Review.
Check Synchronize Changes with Nodes.

Click Save.

Click Ok.
---- Click LTPA under Authentication.
Password Confirmation: test123.

Click Apply.
Click Review.
Check Synchronize changes with Nodes.

Click Save.

Click Ok.
---Check Enable administrative security.
Select Local Operating System Registry under Available realm definitions.
Click Apply.

Click on Review.
Check Synchronize changes with Nodes.
Click Save.

Click Ok.
---- Logout dmgr console
Stop the dmgr console.

Start the dmgr.

----- Login to dmgr console using the URL https://rajasekhar-pc:9049/ibm/console/logon.jsp


----- Providing roles to other users of dmgr admin console.

There are 8 types of roles in WAS 7.0 ND.


1) Admin Security Manager.
2) Administrator.
3) Auditor.
4) Configurator.
5) Deployer.
6) ISC Admins.
7) Monitor.
8) Operator.
---- Go to Administrative User Roles under Users and Groups.

Click On Add.
Click on Search.

Select Administrator Role for RAJASEKHAR-PC\\admin user.


Click on Add button.

Click Ok.
Click on Review.
Check synchronize changes with Nodes (admin-authz.xml will update under dmgr cell).

Click Save.

Click Ok.
---- Do the same for remaining users for config, operator, and monitor.

----- Logout the dmgr console.


--- Stop the dmgr.

Click Ok.
---- Start the dmgr.

---- Login to each user account. Login to the admin user account and check the roles.
The admin user has full control rights on the dmgr console, like creating new servers and applications, starting or
stopping servers and applications, etc.
---- Login to the config user account.

This user can configure new servers and applications but does not have the authority to stop and
start servers and applications.
---- Login to the operator account.

This user can only start/stop and restart the servers and applications.


---- Login to the monitor account.

This user can only monitor whether the applications and servers are up and running; he can't change anything.
Export: Backs up the application. No need to stop the server.
Export DDL: Backs up only the database queries (DDL) of the application.
Custom User Registry:
Steps to follow to create custom user registry:

1) Create two files: a) users.registry and b) groups.registry.


2) Add user accounts information under users.registry file.
3) Add group’s information under groups.registry.
4) Login to dmgr console and expand Security.
5) Select the secure administration, applications and infrastructure option.
6) Select security configuration wizard and select custom user registry option.
7) Create two variables usersFile and groupsFile.
8) Provide the absolute path of users.registry and groups.registry as a value for those
variables.
9) Enable administrative security check box and select custom registry under available
realm definitions.
10) Save the changes and restart server.
11) Login to the dmgr admin console by using http://rajasekhar-pc:9045/ibm/console

Process:
---- Go to Global Security under security.
Click Security Configuration Wizard.

Click Next.
Select standalone custom registry.
Click Next.
Provide Primary administrative user name: wasadmin
usersFile Path: C:\Registries\users.registry
groupsFile Path: C:\Registries\groups.registry

Click Next.
Click Finish.

Click Review.
Check Synchronize changes with Nodes.

Click Save.
Click Ok.
---- Go to Global Security under Security. Select LTPA under Authentication.

In Cross-cell single sign-on (Confirm the password: password)


Click Apply.

Click Review.
Check Synchronize changes with Nodes.
Click Save.

Click Ok.
---- Select Global Security under Security.
Select Standalone custom registry under Available realm definitions.

Click Apply.
Click Review.
Check Synchronize changes with Nodes.

Click Save.

Click Ok.
----- Logout dmgr and stop and start the dmgr.
Click Ok.

---- Login to dmgr console as a wasadmin user.


----- Disable global Security. Uncheck Enable Administrative Security.

Click Apply.

Click Review.
Check Synchronize changes with Nodes.
Click Save.
Click Ok.
----- Logout dmgr console
Stop/start the dmgr.
Configure LDAP Using Sun One Directory Server 5.2 on
WAS 7.0

Click Next.
Click Accept License.

For the above Fully Qualified Computer Name we have to add an entry in the system's hosts file, like
"Rajashekhar.domain.com" (the name is the user's choice, but a domain suffix such as domain.com is a must).
The system's hosts file is available under this path:
C:\Windows\System32\drivers\etc\hosts (edit this hosts file). See the screenshot below for the
hosts file.
Click Next.

Click Next.
Click Next.

Click Next.
Click Create Directory.

Click Next.

Click Next.
Click Next.

Click Next.
Enter password: $3kReT4wD.
Click Next.
Enter Password: #$8Yk$-%&

Click Next.
Click Next.

Click Install Now.


Click Next.
Click Close.
User ID: cn=Directory Manager
Password: #$8Yk$-%&

Click Ok.
The Sun ONE Server Console is displayed.
Navigate through the tree in the left-hand pane to find the system hosting your Directory
Server and click on it to display its general properties.
Click on Open.
Double-click the name of your Directory Server in the tree or click the Open button. The
Directory Server Console for managing this Directory Server instance is displayed.

Go to Directory tab.
Click on user.
Click ok.
Click on Edit with Generic Editor.
Note the uid=rkomma,dc=domain,dc=com

----- After user creation in LDAP, start the deployment manager and the app node.
---Login to dmgr console.

Select Global Security.


Click on Security Configuration Wizard.
Uncheck Enable application security.

Click Next.
Select standalone LDAP registry.

Click Next.
Base distinguished name:
Specifies the base distinguished name of the directory service, which indicates the starting
point for Lightweight Directory Access Protocol (LDAP) searches in the directory service. For
example, ou=Rochester, o=IBM, c=us.

Bind distinguished name:


Specifies the distinguished name for the application server, which is used to bind to the
directory service.

Primary administrative user name: rkomma


Type of LDAP server: sun Java System Directory Server.
Host: Rajashekhar.domain.com
Port: 3891
Base distinguished name: dc=domain,dc=com
Bind distinguished name: uid=rkomma,dc=domain,dc=com
Bind password: password.
Click Next.

Click Finish.
Select LTPA

Password: password
Confirm password: password

Click Apply.
Click ok.
Check Enable administrative security.
Under available realm definitions: Standalone LDAP registry.

Click Apply.
Click Review.

Click Save.

Click ok.
------ Logout the dmgr console and stop and start the dmgr manager.
----- Login to the dmgr console with secure port no.

Click Log in.

In the above screenshot you can observe "Welcome rkomma". The user credentials came from the
LDAP server.
How do we configure WAS to stop an application server without prompting for a password even though
security is enabled?
1. First go to this path:
C:\IBM_ND_6.0\WebSphere\AppServer\profiles\AppSrv01\properties\soap.client.props
com.ibm.SOAP.securityEnabled=false
(here change false to true)
com.ibm.SOAP.securityEnabled=true
com.ibm.SOAP.loginUserid=test123
com.ibm.SOAP.loginPassword=password
Save the file and go to the path below:
C:\IBM_ND_6.0\WebSphere\AppServer\bin>PropFilePasswordEncoder.bat <full path of
soap.client.props, whether in Dmgr01 or AppSrv01> <com.ibm.SOAP.loginPassword>

C:\IBM_ND_6.0\WebSphere\AppServer\bin>PropFilePasswordEncoder.bat
C:\IBM_ND_6.0\WebSphere\AppServer\profiles\AppSrv01\properties\soap.client.props com.ibm.SOAP.loginPassword
Oracle JDBC and Data source configuration
---- Go to Resources and select JDBC provider.

From the scope selection drop-down list select


Node=app_node01

Click New.
Select Database type: oracle
Provider type: oracle JDBC provider
Implementation type: Connection pool data source
Name: Oracle JDBC Driver.
Click Next.
Provide the ojdbc6.jar path location. (By default Oracle XE doesn't ship ojdbc6.jar; we have to
download that jar file and place it in the Oracle JDBC library directory.)

Click Next.
Click Finish.

Click Review.
Check synchronize changes with Nodes.

Click save.
Click Ok.

Click Oracle JDBC Driver (or) Go to Resources and select Data sources under JDBC.
From Scope selection drop-down list select
Node = app_Node01

Click New.
Provide Data source name: OracleDS
JNDI name: jdbc/OracleDS

Click Next.
Select Oracle JDBC Driver under select an existing JDBC provider radio button

Click Next.
Provide url: jdbc:oracle:thin:@rajasekhar-pc:1521:XE
Data store helper class name: Oracle 10g data store helper
Check CMP.

Click Next.

Click Next.
Click Finish.

Click Review.
Check synchronize changes with Nodes.

Click Save.
Click Ok

Click the OracleDS.


Select JAAS - J2c authentication data under Related items.
Select New.
Alias = Oracle_det
User ID = system
Password = admin

Click Ok

Click Review.
Check Synchronize changes with Nodes.
Click save.

Click Ok.
---- Go to Data sources. Select OracleDS.
Select Oracle_det under component managed authentication alias.

Click Apply.

Click Review.
Check Synchronize changes with Nodes.
Click Save.

Click Ok.
Check OracleDS

Click on Test connection.


The first time you click Test connection it will throw an error.

This is because synchronization has not happened; to fix it we have to stop the node on the AppSrv01 profile, do the
synchronization, and start the node.
Open CMD follow the commands in order.

Click Ok for all 3 Steps (Stop, sync and start).


---- Go to Data Sources under Resources.
Check OracleDS and Click Test connection.

Connection Successful.

DB2 JDBC and Data Source Configuration:


---- Go to JDBC providers under Resources.

In scope selection drop-down list select


Node = app_Node01

Click New.
Provide Database type: DB2
Provider type: DB2 Universal JDBC Driver Provider.
Implementation type: Connection pool data source
Name: It will take default after provide above details (DB2 Universal JDBC Driver Provider).
Click Next.
Provide the db2jcc.jar file location.

Click Next.

Click Finish.
Click Review.
Check synchronize changes with Nodes.

Click Save.

Click Ok.
--- Go to Resources select Data Sources under JDBC.
Click New.
Provide Data source name = DB2DS
JNDI name = jdbc/DB2DS
Click Next.
Select DB2 Universal JDBC Driver Provider under select an existing JDBC provider radio button.

Click Next.
Provide Driver type = 4 (Default)
Database name = SAMPLE.
Server name = rajasekhar-pc
Port number = 50000 (Default)
Check CMP (Default it checked)
Click Next.

Click Next.
Click Finish.

Click Review.
Check Synchronize changes with Nodes.

Click Save.
Click Ok.

Click DB2DS.
Under Related items select JAAS – J2c Authentication data.
Click New.
Provide Alias = DB2_det
User ID = db2admin
Password = admin

Click Apply.

Click Review.
Check Synchronize changes with Nodes.

Click Save.

Click Ok.
---- Go to Resources. Select Data Sources under JDBC.
Click on DB2DS.
Select dmgr_Node01/DB2_det under Component-managed authentication alias.

Click Apply.
Click Review.
Check Synchronize changes with Nodes.

Click Save.

Click ok.
Click on Test Connection.
The first time you click Test Connection it will throw an error,
because synchronization has not happened. To fix it we have to stop the node on the AppSrv01 profile, do the
synchronization, and start the node.

Click ok.
Synch Command.

Click Ok.
Start App_srv01 Node.

----- Go to Resources. Select Data Sources under JDBC.

Click Test Connection.


Connection successful.
JDBC Providers:
A JDBC provider indicates what type of database we are using and which implementation mechanism
(XA or connection pool data source) we are using.

Implementation Types:
1) Connection pool data source - single-phase commit.
2) XA data source - two-phase commit.

Connection Pool:
A connection pool contains predefined connection objects; the server won't create a new connection object
every time, but reuses connection objects from the pool according to the connection pool parameters.

Connection pool Properties:

Connection Timeout:
Specify the interval, in seconds, after which a connection request times out and a
ConnectionWaitTimeoutException is thrown. This action can occur when the pool is at its maximum (Max
Connections) and all of the connections are in use by other applications for the duration of the wait. For
example, if Connection Timeout is set to 300 and the maximum number of connections is reached, the
Pool Manager waits for 300 seconds for an available physical connection. If a physical connection is not
available within this time, the Pool Manager throws a ConnectionWaitTimeoutException.
Min Connections:
Specify the minimum number of physical connections to be maintained. Until this number is
reached, the pool maintenance thread does not discard any physical connections. However, no attempt is
made to bring the number of connections up to this number. For example, if Min Connections is set to 3,
and one physical connection is created, that connection is not discarded by the Unused Timeout thread.
By the same token, the thread does not automatically create two additional physical connections to reach
the Min Connections setting.

Tip: Set Min Connections to zero (0) if the following conditions are true:
- You have a firewall between the application server and database server.
- Your systems are not busy 24x7.
Max Connections:
Specify the maximum number of physical connections that can be created in this pool.
These connections are the physical connections to the database. After this number is reached, no new
physical connections are created and the requester waits until a physical connection that is currently in
use is returned to the pool or a ConnectionWaitTimeoutException is thrown. For example, if Max
Connections is set to 5, and there are five physical connections in use, the Pool Manager waits for the
amount of time specified in Connection Timeout for a physical connection to become free. If, after that
time, there are still no free connections, the Pool Manager throws a ConnectionWaitTimeoutException to
the application.
Unused Timeout:
Specify the interval in seconds after which an unused or idle connection is discarded.
Tips:
- Set the Unused Timeout value higher than the Reap Timeout value for optimal performance.
Unused physical connections are only discarded if the current number of connections not in use
exceeds the Min Connections setting.
- Make sure that the database server's timeout for connections exceeds the Unused Timeout
property specified here. Long-lived connections are normal and desirable for performance.
For example, if the unused timeout value is set to 120, and the pool maintenance thread is enabled (Reap
Time is not 0), any physical connection that remains unused for two minutes is discarded. Note that
accuracy of this timeout and performance are affected by the Reap Time value.
Aged Timeout:
Specify the interval in seconds before a physical connection is discarded, regardless of
recent usage activity.
Setting Aged Timeout to 0 allows active physical connections to remain in the pool indefinitely. For
example, if the Aged Timeout value is set to 1200 and the Reap Time value is not 0, any physical
connection that remains in existence for 1200 seconds (20 minutes) is discarded from the pool. Note that
accuracy of this timeout and performance are affected by the Reap Time value.
Tip: Set the Aged Timeout value higher than the Reap Timeout value for optimal performance.
Reap Timeout:
Specify the interval, in seconds, between runs of the pool maintenance thread. For example, if
Reap Time is set to 60, the pool maintenance thread runs every 60 seconds. The Reap Time interval
affects the accuracy of the Unused Timeout and Aged Timeout settings. The smaller the interval you set,
the greater the accuracy. When the pool maintenance thread runs, it discards any connections that are
unused for longer than the time value specified in Unused Timeout, until it reaches the number of
connections specified in Min Connections. The pool maintenance thread also discards any connections
that remain active longer than the time value specified in Aged Timeout.
Tip: If the pool maintenance thread is enabled, set the Reap Time value less than the values of Unused
Timeout and Aged Timeout. The Reap Time interval also affects performance. Smaller intervals mean
that the pool maintenance thread runs more often and degrades performance.

Purge Policy:
Specify how to purge connections when a stale connection or fatal connection error is detected.
Valid values are EntirePool and FailingConnectionOnly. If you choose EntirePool, all physical connections
in the pool are destroyed when a stale connection is detected. If you choose FailingConnectionOnly, the
pool attempts to destroy only the stale connection. The other connections remain in the pool. Final
destruction of connections that are in use at the time of the error might be delayed. However, those
connections are never returned to the pool.
Tip: Many applications do not handle a StaleConnectionException in the code. Test
and ensure that your applications can handle them.

---- Go to Resources. Select Data Sources under JDBC.


Click on OracleDS or DB2DS.

Click on Connection pool properties.
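The same connection pool settings can also be changed with wsadmin instead of the console. A minimal
Jython sketch, assuming a data source named OracleDS and example values; adjust the names and
numbers to your own environment:

wsadmin> ds = AdminConfig.getid('/DataSource:OracleDS/')          # example data source name
wsadmin> pool = AdminConfig.showAttribute(ds, 'connectionPool')   # the ConnectionPool child object
wsadmin> AdminConfig.modify(pool, [['maxConnections', '10'], ['unusedTimeout', '1800'], ['reapTime', '180'], ['purgePolicy', 'EntirePool']])
wsadmin> AdminConfig.save()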


Application Deployment:
We can deploy an application in 4 different ways.
1) Adminconsole.
2) Scripting: wsadmin with Jacl (Java Command Language, the default in WAS 6.0) or Jython (used
from WAS 6.1 onwards).
3) Application Server Tool Kit.
4) Rapid Deployment (or) Hot deployment.

Deployment in WAS 6.0 versions:


---- Go to WAS root.
C:\IBM_ND_6.0\WebSphere\AppServer\installableApps – Here you can find some installable
applications. (Some of the applications with an extension .ear [enterprise archive] and some
applications with an extension .war [web archive]).

Steps to Deploy an Application:


Login to dmgr console.
Expand Applications. Select Enterprise Applications.
Select install and browse where the application (.ear or .war) is available. If it is a .war file
provide the context root.
Specify the application parameters: application name, installation location, and the target server
or cluster.
Map data sources and EJB references to their JNDI names and specify the virtual host.
Save the changes and start the application.
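The same installation can also be scripted with wsadmin instead of the console wizard. A Jython
sketch, where the EAR path, application name, node and server names are only examples:

wsadmin> AdminApp.install('C:/IBM_ND_6.0/WebSphere/AppServer/installableApps/PlantsByWebSphere.ear', ['-appname', 'PlantsByWebSphere', '-node', 'AppSrv01Node', '-server', 'server1', '-usedefaultbindings'])
wsadmin> AdminConfig.save()
# start the application through the ApplicationManager MBean of the target server
wsadmin> appMgr = AdminControl.queryNames('type=ApplicationManager,process=server1,*')
wsadmin> AdminControl.invoke(appMgr, 'startApplication', 'PlantsByWebSphere')

For a .war file, also pass '-contextroot' in the options list.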

Process:
---- Take a backup of the application.
Go to Enterprise applications under Applications.
Check the PlantsByWebSphere

Click on Export.

Click on PlantsByWebsphere to download .ear file.

Click on save.

Click Back.
Click Save changes.

Click save.

------ Check the PlantsByWebsphere Application.

Click Uninstall.

Click Ok.
Click Save Changes.

Click Save.
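The backup (export) and the uninstall can also be done from wsadmin. A Jython sketch, assuming the
application name PlantsByWebSphere and an example backup path:

wsadmin> AdminApp.export('PlantsByWebSphere', 'C:/backup/PlantsByWebSphere.ear')   # backup of the EAR
wsadmin> AdminApp.uninstall('PlantsByWebSphere')
wsadmin> AdminConfig.save()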

------ Select Enterprise Applications or Install New Applications.

Specify the path of .ear or .war module to upload and install.

Click Next.
Click Next.

Click Continue.
Click Next.

Click Apply.

Click Next.
Click Next.
Click Next.
In Step 5 No need to change any thing.
Click Next.
In step 6 No need to change any thing.

Click Next.
In step 7 No need to change any thing.

Click Next.
In Step 8 No need to change any thing.

Click Next.
Click Continue.
In step 9 Select Default_host under Virtual host.

Click Next.
In Step 10 No need to change any thing.

Click Next.
Click Finish.

Click Save to Master Configuration.


Check Synchronize changes with Nodes.

Click Save.
Click Save.
---- If the target server is not running, we have to start the server.
----- Go to Enterprise applications.
Check PlantsByWebsphere.

Click Start.
---- To know under which server PlantsByWebSphere is running:
---- Click on PlantsByWebSphere.

Click Target Mappings.


Steps to access an Application:
1) Identify the context root of the application.
2) Identify under which server that application is running.
3) Identify host name of that application.
4) Identify the port no for that host under that server.
5) Make sure that the port no is registered under Virtual host aliases.
Process:
1)--- Go to Enterprise Applications.

Click on PlantsByWebSphere.

Select View Deployment Descriptor.


Below screen shot you can find the context root.

----- My Deployed application was installed under this path.


C:\IBM_ND_6.0\WebSphere\AppServer\profiles\AppSrv01\installedApps\dmgr_cell01
Note:
C:\IBM_ND_6.0\WebSphere\AppServer\profiles\AppSrv01\installedApps\dmgr_cell01\
PlantsByWebSphere.ear\ META-INF\application.xml
--- When you browse the application.xml you can find the context root of the application.
2)--- To know under which server PlantsByWebSphere is running:
---- Click on PlantsByWebSphere.

Click Target Mappings.

3) ---- Select Enterprise Applications.

Click on PlantsByWebSphere.
Select Map Virtual hosts for Web modules under Additional Properties.
Click ok.

Click Save.
Check Synchronize changes with Nodes.

Click Save.

4) --- Go to Environment.

Select Virtual Hosts.


Click on default_host.

Click on Host Aliases under Additional Properties.


Default_host port no: 9080
Default_host secure port: 9443

Note: In WAS 7.0 there is no need to create a new port for the applications, but in WAS 6.0 we
have to create a new port (host alias) entry.
--- Creating new port.
Click on New.
Click Apply.

Click Save.

Click Save.
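The host alias can also be added with wsadmin instead of the console. A Jython sketch, assuming the
default_host virtual host and port 9081 as an example:

wsadmin> vh = AdminConfig.getid('/VirtualHost:default_host/')
wsadmin> AdminConfig.create('HostAlias', vh, [['hostname', '*'], ['port', '9081']])
wsadmin> AdminConfig.save()

Regenerate the plug-in configuration afterwards if the application is accessed through the web server.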

---- Log in to PlantsByWebSphere by using this URL: http://localhost:9080/PlantsByWebSphere


Application Deployment in WAS 7.0
---- Go to Application, Application Types there select WebSphere Enterprise applications.

Click On Install.

Click Next.

Click Next.
In Step 1 No need to change any thing.

Click Next.
In Clusters and Servers select the server on which you want to deploy the application.

Click Apply. Click Next.


Click Finish.

Click Review.
Check Synchronize changes with Nodes.

Click Save.
Click Ok.
---- Start Server2
----- To know under which server the application is running.
Go to WebSphere Enterprise Applications.
Click on PlantsByWebSphere.

Click on Target Specific application status.


Steps to access an Application WAS 7.0:
1) Identify the context root of the application.
2) Identify under which server that application is running.
3) Identify host name of that application.
4) Identify the port no for that host under that server.
5) Make sure that the port no is registered under Virtual host aliases.
Process:
1)---- Go to WebSphere Enterprise Applications.

Click on PlantsByWebSphere.

Click on View Deployment Descriptor.


Note: ----- My Deployed application was installed under this path.
C:\IBM_ND_7.0\WebSphere\AppServer\profiles\AppSrv01\installedApps\dmgrcell01

And to know the context root of the application go to application.xml file.

C:\IBM_ND_7.0\WebSphere\AppServer\profiles\AppSrv01\installedApps\dmgrcell01
\PlantsByWebSphere.ear\META-INF\application.xml.

Where can I find the log of particular application installation?

C:\IBM_ND_7.0\WebSphere\AppServer\profiles\Dmgr01\logs\dmgr\systemout.log.

2) ---- Go to WebSphere Enterprise Applications.

Click on PlantsByWebSphere.

Click on Target specific application status.


3) ---- Go to WebSphere Enterprise Applications.
Select PlantsByWebSphere.

Click on Virtual Hosts.

Click Ok.

Click Review.
Click Save.

Click Ok.

4)---- Go to WebSphere Application Servers.

Select Server2.
Expand Ports. WC_defaulthost:9085

5)---- Go to Environment.

Select Virtual Hosts.

Click on default_host.
Click on Host Aliases.
Here you can see the PlantsByWebSphere application port no of the corresponding dedicated
server.

----- Access the application by this URL: http://rajasekhar-pc:9085/PlantsByWebSphere


(And) http://rajasekhar-pc:9084/snoop (Default Application URL).
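If you prefer the command line, wsadmin (Jython) can confirm the installed applications and their
mappings; the task names below are standard AdminApp tasks, and the application name is an example:

wsadmin> print AdminApp.list()                                                  # all installed applications
wsadmin> print AdminApp.view('PlantsByWebSphere', ['-MapModulesToServers'])     # target server or cluster
wsadmin> print AdminApp.view('PlantsByWebSphere', ['-MapWebModToVH'])           # virtual host mapping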
Application Request Flow:

1) Whenever user makes a request for example www.abc.com, initially request will go to
DNS. DNS forwards that request to load balancer.
2) Load balancer forwards that request to web server. We are using IHS as a web server.
3) If a request is static request, i.e. if a request is looking for any HTML pages or images etc,
web server itself generates the response.
4) If a request is dynamic request, web server routes that request to corresponding
appserver with the help of plug-ins.
5) At the time of web server startup, it loads httpd.conf file into the server, it contains path
of the plugin-cfg.xml file.
6) Plugin-cfg.xml file contains complete information about application server environment.
So, that web server forwards the request to the corresponding appserver.
7) Appserver contains mainly two containers. A) Web Container B) EJB Container.
8) Web container is responsible to execute web resources like servlets, jsp’s, html etc.
9) EJB container is responsible to execute EJB resources like session beans, entity beans
and message driven beans.
10) If a request is looking for web resources like servlets, jsp’s, web container itself
generates the response.
11) If a request is looking for EJB resources session beans, entity beans and message driven
beans that request will be forwarded to EJB container through JNDI, RMI, and IIOP
protocol.
12) If a request requires any database interaction that request will be forwarded to
connection pool, based on the connection pool properties it attains a connection object,
once the transaction was completed that connection object will be back to the pool.
13) Finally response will be forwarded from web container to web server and web server
forwards that response to the end user.

Installing IHS on WAS 6.0:


----- Go to WebSphere folder. Open the IHS Folder click on install.exe.

Click Ok.

Click Next.
Click Next.

Click Next.
Click Next.

Click Next.
Click Next.

Click Next.
Click Next.
Finish.
----- Start the Apache server monitor using the command below.
C:\HTTPServer_6.0\bin>apachemonitor.exe
----- Starting/stopping the Apache web server:
C:\HTTPServer_6.0\bin>Apache.exe -k start
C:\HTTPServer_6.0\bin>Apache.exe -k stop

WAS 6.0 Plug-ins Installation:

Click Next.
Click Next.

Click Next.
Click Next.

Click Next.
Click Next.

Click Next.
Click Next.
Click Next.

Click Next.
Click Next.
Click Next.

Click Finish.

Note: In the WAS 6.0 ND package, before installing the plug-ins take a backup of the httpd.conf file
from the HTTP Server installation folder (C:\HTTPServer_6.0\conf\httpd.conf). Keep it as a reference
so you can see what changes are made. Before installing the plug-in there is no
connection between the web server and WebSphere Application Server.

---- After installing the plug-ins, have a look at the httpd.conf file: the plug-in installer updates it
with a WebSpherePluginConfig entry pointing to the path below.
C:\HTTPServer_6.0\Plugins\config\webserver1\plugin-cfg.xml.
We have to overwrite the above path by generating the new plug-in config file by using
GenPluginCfg.bat. It will generate a server plug-in configuration file for all of servers in the cell
dmgr_cell01.

Steps to access an application through web server:


----- Generate a plugin-cfg.xml file by using the GenPluginCfg.bat command. This batch file is
available under the Dmgr01 profile's bin directory; run it from the command prompt (see the
screen shot above for reference).
It will generate a plugin-cfg.xml file under this path.
C:\IBM_ND_6.0\WebSphere\AppServer\profiles\Dmgr01\config\cells\plugin-cfg.xml.

----- Open the httpd.conf file from the HTTP Server installation (conf) directory. (The path below is
the one present before generating the new plug-in configuration file.)

----- In httpd.conf, replace that path with the path of the newly generated plugin-cfg.xml file.

--- After completion of all above process we have to restart the dmgr and HTTP web server.
---- Access the applications using the URL http://rajasekhar-pc:80/PlantsByWebSphere
and access the default application at http://rajasekhar-pc:80/snoop .
Q) Where will you find the application access log?
Ans: Go to C:\HTTPServer_6.0\logs\access.log. From this path you can find the access entries.

Q) Where will you find the HTTP web server's running process id?


C:\HTTPServer_6.0\logs\httpd.pid
Installing IHS on WAS 7.0:
---- Go to WebSphere software folder. Form HIS folder execute install.exe

Click Next.

Click Next.
Click Next.

Click Next.
Click Next.

Click Next.
Click Next.

Click Next.
Click Next.
WAS 7.0 Plug-ins Installation:

Click Next.

Click Next.
Click Next.

Click Next.
Click Next.

Click Next.
Click Next.

Click Next.
Click Next.

---- Before installing the plug-in there is already an entry for the WebSphere plug-in configuration
in the httpd.conf file.

The dialog box above warns that the HTTP Server configuration file already contains
plug-in entries. If you proceed, this configuration file (httpd.conf) will be updated with the
new plugin-cfg.xml file location.

Click Ok.
Click Next.

Click Next.
Click Next.

Click Next.
Click Next.
Click Finish.

Note: In the WAS 7.0 ND package, before installing the plug-ins take a backup of the httpd.conf file
from the HTTP Server installation folder (C:\HTTPServer_7.0\conf\httpd.conf). Keep it as a reference
so you can see what changes are made. Before installing the plug-in there is no
connection between the web server and WebSphere Application Server.

---- After installing the plug-ins, have a look at the httpd.conf file: the plug-in installer updates it
with a WebSpherePluginConfig entry pointing to the path below.
C:\HTTPServer_7.0\Plugins\config\webserver1\plugin-cfg.xml.

We have to overwrite the above path by generating the new plug-in config file by using
GenPluginCfg.bat. It will generate a server plug-in configuration file for all of servers in the cell
dmgr_cell01.
Steps to access an application through web server:
----- Generate a plugin-cfg.xml file by using the GenPluginCfg.bat command. This batch file is
available under the Dmgr01 profile's bin directory; run it from the command prompt (see the
screen shot above for reference).
It will generate a plugin-cfg.xml file under this path.
C:\IBM_ND_7.0\WebSphere\AppServer\profiles\Dmgr01\config\cells\plugin-cfg.xml.

----- Open the httpd.conf file from the HTTP Server installation (conf) directory. (The path below is
the one present before generating the new plug-in configuration file.)

----- In httpd.conf, replace that path with the path of the newly generated plugin-cfg.xml file.

--- After completion of all above process we have to restart the dmgr and HTTP web server.
---- Access the applications using the URL http://rajasekhar-pc:80/PlantsByWebSphere
and access the default application at http://rajasekhar-pc:80/snoop .

Q) Where will you find the application access log?


Ans: Go to C:\HTTPServer_7.0\logs\access.log. From this path you can find the access entries.

Q) Where will you find the HTTP web server's running process id?


C:\HTTPServer_7.0\logs\httpd.pid
Virtual Host:
If you want to access an application with multiple host names or domain names, we have
to configure a virtual host with a document root and directory index in the httpd.conf file.

NameVirtualHost Plants
<VirtualHost Plants>
ServerName Plants
ServerAlias Server2
DocumentRoot
"C:\IBM_ND_6.0\WebSphere\AppServer\profiles\AppSrv01\installedApps\
dmgr_cell01\PlantsByWebSphere.ear\PlantsByWebSphere.war"
DirectoryIndex index.html
</VirtualHost>

NameVirtualHost Snoop1,abc
<VirtualHost Snoop1,abc>
ServerName Snoop1
ServerAlias Server1
DocumentRoot
"C:\IBM_ND_6.0\WebSphere\AppServer\profiles\AppSrv01\installedApps\
dmgr_cell01\DefaultApplication.ear\DefaultWebApplication.war"
DirectoryIndex index.html
</VirtualHost>

The first time you start the web server you will get the error message shown below. It is resolved
by adding an entry for the application's virtual host name in the hosts file of the operating system (Linux).

Now access the application through Below URL.


Accessing an application with multiple port numbers:
---- Login to dmgr console.
---- Go to Application Servers.

Select server2

Click on Ports under Communications.

Click New.
Click Apply.

Click Save.

Click Save.
---- Now you can observe myport in server2's port list.
----- Select server2

Click on server2.

Expand Web Container Settings.


Select Web Container transport chains.

Click New.

Click Next.
Click Next.

Click Finish.

Click Save.
Click Save.
---- Go to Environment.

Select Virtual Hosts.

Click on default_host.

Click on Host Aliases.


Click New.

Click Apply.

Click Save.
Click Save.
---- Restart the server2 (On which server you have deployed your application).

---- http://localhost:2000/PlantsByWebSphere/
Use this URL you can login to that application.

Cluster creation in WAS 6.0


A cluster is a group of servers. It will be used to achieve work load management and high
availability. There are 2 types of clusters.
1) Vertical Cluster.
2) Horizontal Cluster.

Vertical Cluster:
Here we are creating all the cluster members under same box. If any one of the cluster
member is down request will be routed to other cluster members. If machine completely get
crashes there is high impact on end users even though it is a clustered environment.

(Diagram: a vertical cluster on Host A with members Cl_s1, Cl_s2 and Cl_s3.)

Horizontal Cluster:
Here the cluster members are created on different boxes. If any one cluster member is down, or
even if a machine crashes completely, end users can still access the application from a cluster
member running on another host. So here we can minimize the outage of the application.

 A cluster member cannot be a member of multiple clusters. In a horizontal cluster the
members are created under multiple nodes.
Vertical Cluster Creation Process in WAS 6.0:

 Expand servers. Select Clusters.

Click New.
Click Next

Click Apply.
Click Next.

Click Finish.

Click Save.
Check Synchronize changes with Nodes.

Click Save.
 Expand Servers. Select Clusters.

Click on Cluster01.

Under Additional Properties select Cluster Members.


Deploying Applications on Cluster:
--- Go to Applications. Select Install new Application.

Specify the path of .ear or .war module to upload and install.

Click Next.
Click Next.

Click Continue.
Click Next.

Click Apply.

Click Next.
Click Next.
Click Next.
In Step 5 No need to change any thing.
Click Next.
In step 6 No need to change any thing.

Click Next.
In step 7 No need to change any thing.

Click Next.
In Step 8 No need to change any thing.

Click Next.
Click Continue.
In step 9 Select Default_host under Virtual host.

Click Next.
In Step 10 No need to change any thing.

Click Next.
Click Finish.

Click Save to Master Configuration.


Check Synchronize changes with Nodes.

Click Save.
Click Save.
---- When you start the cluster after deploying applications, if any error occurs we have to sync
the changes (stop the AppSrv01 node agent, do the sync, and start the node agent again).
----- Go to Enterprise applications.
Check PlantsByWebsphere.
 Expand Servers. Select Clusters.
Check Cluster01

Click Start.

Click on Cluster01.
Click on Cluster Members.
Start Custom Node under custom profile.
Check Cl_S1 Click Start.
Accessing the Application in Cluster environment through Web Server.

 First we have to check in plugin-cfg.xml that the configuration entries were added correctly
after the cluster configuration. When a cluster is configured, entries such as the ones below are added.

 If the details are not added correctly to plugin-cfg.xml after cluster creation, we have
to generate a new plugin-cfg.xml.
 Copy and replace the old plugin-cfg.xml path in the httpd.conf file.
 Restart the HTTP web server.
---- Access the snoop application and keep refreshing the browser. There you can observe the
application being served by the cluster members (Server1 and Cl_s1) based on the weight of each
cluster member.

Q) How can I change a cluster member's weight without restarting?


Ans: Yes, we can change the LoadBalanceWeight directly in plugin-cfg.xml. The plug-in reloads
plugin-cfg.xml periodically (every 60 seconds by default, controlled by the RefreshInterval setting),
so the change is picked up without a restart.
Cluster creation in WAS 7.0
---- Expand servers. Select Clusters.

Select WebSphere application server clusters.

Select New.
Cluster name: Cluster01

Click Next.
Click Next.
Click on Add Member.

Click Next.
Click Finish.

Click Review.
Click Save.

Click Ok.
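The same cluster and its members can also be created with wsadmin using AdminTask. A Jython
sketch with example node and member names; parameter syntax can vary slightly by release, so
verify it with AdminTask.help('createCluster') and AdminTask.help('createClusterMember'):

wsadmin> AdminTask.createCluster('[-clusterConfig [-clusterName Cluster01 -preferLocal true]]')
wsadmin> AdminTask.createClusterMember('[-clusterName Cluster01 -memberConfig [-memberNode AppSrv01Node -memberName Cl_s1 -memberWeight 2]]')
wsadmin> AdminConfig.save()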
----- Deploying Application on Cluster.
----- Expand Applications. Click on New Application.

Click on New Enterprise Application.

Click Next.
Click Next.
In Step 1 No need to change anything.

Click Next.
In Clusters and Servers select the server on which you want to deploy the application.

Click Apply. Click Next.


Click Finish.

Click Review.
Check Synchronize changes with Nodes.

Click save.
Click Ok.
----- Go to Applications

Select WebSphere enterprise applications.


----- Start Cluster01. (The first time you may get an error; in that case stop the application node
agent, do the sync, start the node agent again, and then start the cluster.)

Cluster01 is partially started. (Because we have to start all the cluster members, then Cluster01
is in fully started state).
Click on Cluster01.

Click on Cluster members under Additional Properties.


(Before starting the app node and custom node these servers will in stop state. We have to
start those servers through command line or through dmgr console).
Accessing the Application in Cluster environment through Web Server.

 First we have to check in plugin-cfg.xml that the configuration entries were added correctly
after the cluster configuration. When a cluster is configured, entries such as the ones below are added.

 If the details are not added correctly to plugin-cfg.xml after cluster creation, we have
to generate a new plugin-cfg.xml.
 Copy and replace the old plugin-cfg.xml path in the httpd.conf file.
 Restart the HTTP web server.
---- Access the snoop application and keep refreshing the browser. There you can observe the
application being served by the cluster members (Cl_s1, Cl_s2, and Cl_s3) based on the weight
of each cluster member.
Update:
If an application is already installed and we want to add or replace a single Java class or JSP file,
we use the Update option instead of uninstalling and reinstalling the application.
Rollout Update:
In a clustered environment there is no need to stop all the cluster members to apply a change.
Rollout Update stops one cluster member, updates it, and starts it again, then does the same for the
next member, and so on, so the application remains available during the update.
Class Loaders
 Class loaders are part of the java virtual machine (JVM) code and are responsible for
finding and loading classes (WebSphere java classes and User application classes). We
have different types of class loaders.
 Class loaders affect the packaging of applications and the run time behaviors of
packaged applications deployed on the application servers.
 WebSphere application server provides several class loader hierarchy and options to
allow more flexible packaging of your applications.
 During server startup, the class loader hierarchy will create.
 Class loaders have a parent class loader – except the root class loader.
 In the class loader hierarchy, a request to load a class can go from a child class
loader to a parent class loader but never from a parent class loader to child class
loader.
 If a class is not found by a specific class loader or any of its parent class loaders,
a ClassNotFoundException results.

Delegation Mode:
Each class loader has a delegation (search) mode that may or may not be configurable.
 When searching for a class, a class loader can search the parent class loader before it
looks inside its own loader or it can look at its own loader before searching the parent
class loader.
 Delegation algorithm does not apply to native libraries (*.dll, *.so, etc).
Delegation values are:
 PARENT_FIRST – Delegate the search to the parent class loader FIRST before attempting
to load the class from the local class loader.
 PARENT_LAST – First attempt to load classes from the local class path before delegating
the class loading to the parent class loader.
 Allows an application class loader to override and provide its own version of a class that
exists in the parent class loader.
 The JVM will cache the class on a successful load and will associate the class with its specific
class loader.
Java class loading allows for a search mode that lets a class loader search its own class path
before requesting a parent class loader or search via the parent class loader before its local
class path. These searches, or delegation modes, are PARENT_FIRST and PARENT_LAST
respectively.
Class Loader Hierarchy:

Class loaders are organized in a hierarchy. This means that a child class loader can
delegate class finding and loading to its parent, should it fail to load a class.
1) JVM Class Loader
2) WebSphere Extension Class Loader.
3) WebSphere Server Class Loader.
4) Application Module Class Loader.
5) Web Module Class Loader.

JVM Class Loader:


The root of the hierarchy is occupied by the JVM class loader and its Bootstrap class loader
that loads the JVM extension classes. The JVM class loader loads JVM classes, the JVM
extension classes and the classes defined in the class path environment variable.

WebSphere Extension Class Loader:


This loads all WebSphere Application Server Classes, resource adapters, and other classes.

WAS Application Class Loader:


This loads classes from the WebSphere Application Server lib/app directory. In
V4, this was used to specify classes shared by all applications. Beginning with V5, the shared
library function provides a better option to share classes across one or more applications.
Therefore, this class loader is provided mainly for backward compatibility.
WebSphere Server Class Loader:
This loads shared libraries that are defined at the server level and can be accessed by all
applications.
Shared Libraries:
Shared libraries provide a way to use common java or native code and share across one or
more J2EE applications running with in the server.
Example: Dependency (“utility”) JARs, and native libraries.
Benefits:
 Shared libraries support versioning of application artifacts.
 Allows deployment of application artifacts without repackaging and reinstalling the EAR
or WAR files.
 Shared libraries are defined by the administrator and associated with one or more
applications.
 This is called an “application-associated” shared library, compared to a “server-
associated” shared library loaded by the “server” class loader.

In WebSphere Application Server V4 if you required JARs to be shared by more than one
application, you would put them in the Install_Root/lib/app directory. The drawback to this
is that the JARs in this directory are exposed to all the applications. In WebSphere
application Server V5 and V6, shared libraries provide a better mechanism where only
applications that need the JARs are exposed to them. Other applications are not affected
by the shared libraries. The administrator defines the shared libraries by assigning a name
and specifying the file or directories that contain the code to be shared. These defined
shared libraries can then be associated with one or more applications running within the
Application Server.
The advantage of using an application-scoped shared library, is the capability of using
different versions of common application artifacts by different applications. A unique shared
library can be defined for each version of a particular artifact, for example, a utility JAR.
If a native library loaded by shared library requires a second native library, then it must
not be specified in the shared library path. Rather, it must be specified in the JVM native library
path and will be loaded by the JVM class loader.
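The shared library definition itself can be created with wsadmin as well. A Jython sketch at cell
scope; the cell name, library name and JAR path are examples, and the library still has to be
associated with a server or application class loader afterwards:

wsadmin> cell = AdminConfig.getid('/Cell:dmgr_cell01/')
wsadmin> AdminConfig.create('Library', cell, [['name', 'MyUtilsLib'], ['classPath', 'C:/shared/libs/utils.jar']])
wsadmin> AdminConfig.save()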

Native Libraries:
 Native libraries are non-java code used by java via the Java Native Interface (JNI). They are
platform specific files, for example: “.dll” in windows, “.SO” and “.a” in unix.
 Java applications use the System.loadLibrary(libName) method to load a native library. It
is loaded at the time of the System.loadLibrary(...) call.
 The JVM uses the caller’s class loader to load the native library. If that fails, it then uses the
JVM system class loader.
 If both failed to load, an UnsatisfiedLinkError will result.
 Native libraries are located on the native library paths of the JVM class loader and the
WebSphere application server (Extensions, server, and application module) class loaders.
Native libraries are loaded by Java using the System load library method. They are loaded
on demand, when needed. Native libraries could be located in the JVM class loader or one
of the WebSphere Application Server class loaders; namely, the Extension, Server, or the
Application module class loader.
WebSphere Application Server Extensions, Server, and Application module class loaders
define a local native library path, similar to the java.library.path supported by the JVM class
loader.

Application Module Class Loader:


It loads the J2EE applications.

Web Module Class Loader:


It loads web module related class files.

Class Loader Policies:


 If the class loader policy is single, only one application class loader will be created for all
the applications.

Shown here is an example hierarchy of an Application Class loader policy of Single and a
Web module class loader policy of Application for all the J2EE applications. Using these class
loader options sacrifices the isolation and dynamic reloading features.
Shown here is an example hierarchy of an Application Class loader policy of Single and a
Web module class loader policy of Module for all the J2EE applications.
With these policies, the Web module class loader is loaded by its own separate class
loader. This is the default J2EE class loader mode.
Shown here is an example hierarchy of the Application Class loader policy of Single and
Web module class loader policy of Module for some of the J2EE applications and
Application for the remaining applications.
With these policies, the Web module class loader for Module policy option is loaded by its own
separate class loader. For a Web module class loader policy of Application, they are loaded by
the Application Class loader.
Shown here is an example hierarchy of an Application Class loader policy of Multiple.
Each J2EE Application is loaded by its own class loader. Additionally, with the Web
Module class loader policy of Application, the Web modules of those applications are loaded by
the same class loader as the rest of the J2EE application classes.

Shown here is an example hierarchy of an Application Class loader policy of Multiple.


Each J2EE application is loaded by its own class loader. However, note that the Web
module class loader policy is Module. As a result, each Web module is loaded by a
separate class loader, lower in the class loader hierarchy.
This is the default class loader configuration of WebSphere Application Server V6.
Shown here is an example hierarchy of an Application Class loader policy of Multiple.
Each J2EE Application is loaded by its own class loader. However, some of the
Applications have Web Module class loader policy is Module. As a result, for those
applications, the Web module is loaded by a separate class loader, lower in the class loader
hierarchy.
For the remaining applications that have a Web module class loader policy of
Application, those Web module classes are loaded by the same class loader that loads the J2EE
application.
The example shown here has the Application class loader policy at the server level set to
Single.
As a result, the classes for both the applications are loaded by the Single class loader.
However, the Web module class loader policy defines how the Web modules will be loaded.

For Application 1, the Web module class loader policy is Module. As a result, each Web module
of application 1 will have its own separate class loader, as shown by the WAR1 Web module.
For Application 2, the Web module class loader policy is Application. As a result, all Web
modules of application 2 will be loaded by the same class loader that loaded Application 2 classes,
as shown by the WAR2 Web module.

The example shown here has the Application class loader policy at the server level set to
Multiple.
As a result, the classes for both the applications are loaded by their own separate class
loader, as shown by the class loaders of Application 1 and 2.
However, the Web module class loader policy defines how the Web modules will be
loaded.
For Application 1, the Web module class loader policy is Module. As a result, each Web module
of application 1 will have its own separate class loader, as shown by the WAR1 Web module.
For Application 2, the Web module class loader policy is Application. As a result, all
Web modules of application 2 will be loaded by the same class loader that loaded Application 2
classes, as shown by the WAR2 Web module.
Delegation or search mode is defined at the Application class loader and at the Web
module class loader levels. Based on the different delegation modes, the table shows the
search order path. At each level, if the delegation mode is PARENT FIRST, the search
goes to the parent. If the delegation mode is PARENT_LAST, the current class loader is searched
before the parent. These delegation modes help when an application requires
classes loaded from its own class loader rather than having it loaded by WebSphere
supplied classes. This provides flexibility.
The default mode is the PARENT_FIRST for both the application class loader and the Web
module class loader.

Class loader policies at a glance:


 If the class loader policy is Single, only one application class loader will be created for all the
applications.
 If the class loader policy is Multiple, an individual application class loader will be created for
each application.
 If the WAR class loader policy is Application, no separate Web module class loader is created,
and the web module related jar files are also loaded by the application class loader itself.
 If the WAR class loader policy is Module, a Web module class loader is created for that
application and it loads the web module related jar files.

---- By default the Multiple application class loader policy and the Module WAR class loader policy are used.
Apply Patches
Release:
This is the term used by WebSphere development and support for a major version of
WebSphere. The first two digits of the product version number identify the release.
Ex: 6.0, 6.1, 7.0, 8.0, and 8.5.
Refresh Pack:
This is the term used to identify an update to the product version which typically contains
feature additions and changes. The third digit of a version no identifies a refresh pack.
Ex: 6.0.0, 6.0.1, 6.1.0, and 7.0.0
FIX Pack:
This is the term used to describe a product update which includes defect fixes. The version
no’s fourth digit identifies a fix pack.
Ex: 6.0.1.0, 7.0.0.25 etc.

Fix (or) Interim Fix (or) E Fix:


This indicates a temporary or emergency product update focused on a specific defect. This
type of update used to be referred as an emergency fix or e fix or interim fix.

Steps to apply a Refresh pack or a Fix pack:


 Check the current version of WAS by using Versioninfo.sh.
 Stop all websphere processes that use the installation.
 Take a backup by using backupConfig.sh.
 Copy update installer directory to WAS root.
 Execute update executable file.
Process:
Check the current version of WAS by using Versioninfo.sh.
Stop all websphere processes that use the installation.
Take a backup by using backupconfig.sh.
Backup file will create under
Was root - bin
C:\IBM_ND_6.0\WebSphere\Appserver\bin>backupconfig.bat
C:\IBM_ND_7.0\WebSphere\Appserver\bin>backupconfig.bat
Copy the update installer directory to the WAS root.
Execute update executable file.
---- Copy the update installer to the WAS root.
---- Silent installation WAS 6.0 (response file under the update installer folder):
-W maintenance.package="<path of the maintenance package directory>"
-W maintenance.location="<WAS root directory>" (C:/IBM_ND_6.0/WebSphere/AppServer)
----- Checking the log of fix pack installation

<was-root>/logs/update


Trouble Shooting
Types of Logs:

1) JVM Logs - <profile-home>/logs/<process-name> ------ (1)


a) SystemOut.log: standard output stream messages. (System.out.println(" "))
b) SystemErr.log: standard error stream messages. (System.err.println(" "))
2) Native logs ------------ (1)
a) native_stdout.log: native (non-Java) code output, e.g. from .dll, .so, .exe modules.
b) native_stderr.log: garbage collector information (when verbose GC is enabled).
3) Trace logs--------------(1)
a) Trace logs. Detailed information about an activity.
4) Active Logs/service logs ------ <profile-home>/logs -----------(2)
a) activity.log. For entire profile there will be only one activity log. We have to
use log analyzer to read the activity.log
5) Command line logs.
a) Startserver.log. (1)
b) Stopserver.log.
c) addNode.log. -------- (2) --- created at the time of federation; only one
is created for each profile.
6) Installation logs.
a) Logs.txt. look in %temp% Dir or <was-root>/logs
7) Fix pack logs. <was-root>/logs/update
a) Updatelog.txt.
8) Profile creation logs. ----It will create at the time of profile creation.
a) <profile-name>_create.log. ---
C:\IBM_ND_6.0\WebSphere\AppServer\logs\wasprofile (Up to V6.0)
Ex: wasprofile_create_Appsrv01_log.
C:\IBM_ND_7.0\WebSphere\AppServer\logs\manageprofiles (From V6.1
onwards).
Ex: Appsrv01_create.log
9) IHS logs. <IHS-root>/logs
a) access.log: shows whether a request is reaching the web server or not.
b) error.log: web server errors.
c) admin_error.log: errors of the IHS administration server (used when the web server is on a
remote box).
http_plugin.log --- C:\HTTPServer_7.0\Plugins\logs\webserver1 -- here we can check plug-in related
issues.
JVM Logs:
 Systemout.log and systemerr.log are called as JVM logs.
 For each and every process there will be JVM logs.
 Systemout.log contains standard print stream messages like what are the services that
have started, what are the applications that have started, services initialization
messages etc.
 Systemerr.log file contains error stream messages, it contain error messages like port
conflicts, create listener failed, exception errors etc.
Native Logs:
 native_stdout.log contains native (non-Java) code output, i.e. from .dll, .so, .exe modules etc.
 native_stderr.log contains garbage collector information; this log file will be populated once
we enable verbose garbage collection (GC).
 Enabling verbose garbage collector.
----- Go to dmgr console.
Select servers. Select websphere application servers.

Click on any server name.


Expand Java and process management.

Select Process definition.


Select Java Virtual Machine under Additional Properties.

Click apply. Click ok.

Trace Logs:
It contains detailed information about each and every activity. It is easy to pin point a
failure when we enable tracing.

Activity log/Service log:


There will be only one activity log file for the entire profile and by default it is in binary format.
This log file contains all the processes information under that node. We have to use log analyzer
to read this activity log file and by using log analyzer we can compare with symptom databases
for the known problems.

Logging Utilities:
1) Websphere Tracing.
2) Collector Tool.
3) Log Analyzer.
4) FFDC (First Failure Data Capture).
5) Thread Dumps.
6) Heap Dumps.
Websphere Tracing:
 Tracing provides detailed information about the execution of websphere (was-root)
components including application servers, clients and other processes in the
environment.
 Trace files show the time and sequence of methods called by websphere base classes.
We can use these files to pointing a failure.
 The default types of tracing and logging levels are all, finest, debug, error, warn, info,
fatal, config, audit, detail, and off.
 When we set logging level as “all” it enables all the logging for that component.
 When we set logging level as “off” it disables all the loggings for that component.
Collector Tool:
It generates information about the WebSphere installation and packages it in a Java archive (JAR)
file. This JAR file can be sent to IBM technical support to assist in determining and analyzing the
problem. We can run the collector tool by using the collector.sh command (collector.bat on Windows).
---- Setting the collector tool class path:
---- Go to My Computer properties.
Select Advanced tab.
Select Environment Variables.
Under system variables add the below entry.
Variable Name: PATH
Variable Value: C:\IBM_ND_7.0\WebSphere\AppServer\bin
---- Execute the command collector.bat through cmd prompt.
---- At the end of execution process one jar file will create. We have to send that file to IBM
technical support.

Log Analyzer:
 By default activity log files are in binary format. We have to use log analyzer to read that
activity.log file. It merges all the data and displays the entries.
 Based on its symptom db the tool analyzes and interprets the events or error conditions in
the log entries to diagnose the problem.
 We can open a log analyzer by using a command (in was 6.0 only).
Waslogbr.sh
FFDC (First Failure Data Capture):
 This tool preserves the information generated from a processing failure and runs control
to the affected engines. The tool saves the capture data in a log file to analyze the
problem.
 FFDC runs in the background and collects events and errors that are occurring during
websphere application server runtime. By default FFDC log file will be created under
<profile-home>/logs/FFDC.
 The main use of this FFDC logs is to get support from IBM (when you request a PMR
(problem management record) request to IBM to analyze the log file).
Thread Dump:

 Thread dumps are the snapshots of a JVM at a given time. Thread dump contain info
about the threads and it is used mostly to debug hung threads.
 We can generate a thread dump by using a command.
Kill -3 <pid>
 It will generate a thread dump file called
Javacore.<timestamp>.pid.<dump no>.txt file under profile home directory.
 Alternatively we can generate a thread dump by using a command
$AdminControl invoke $jvm dumpThreads.

After setting environment parameters.


wsadmin> set jvm [$AdminControl completeObjectName type=JVM,process=server1,*]
 We can analyze the thread dumps by using Thread Analyzer.

Heap Dump:
It contains information about java objects like size of the object and relation between
the object and references for that object etc. Heap dumps are mostly useful in debugging
memory leaks.
 We can generate a heap dump by using a command
$AdminControl invoke $jvm generateHeapDump

After setting the parameters


wsadmin> set jvm [$AdminControl completeObjectName type=JVM,process=server1,*]
 Alternatively we can generate a heap dump by using a command
Kill -3 processid
 If the parameters
IBM_HEAPDUMP – True
IBM_HEAP_DUMP – True are configured.
----- Go to dmgr console.
Select servers. Select WebSphere application servers.

Click on any server name.


Expand Java and process management.

Select Process definition.

Select Environment Entries under Additional Properties.

Click New.
Click ok.

Click Review. Click Save. Click ok.

Restart the server.
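Both the thread dump and the heap dump can also be triggered with the Jython syntax of wsadmin
(the Jacl form is shown above). A sketch assuming a server named server1:

wsadmin> jvm = AdminControl.completeObjectName('type=JVM,process=server1,*')
wsadmin> AdminControl.invoke(jvm, 'dumpThreads')        # writes a javacore*.txt file under the profile home
wsadmin> AdminControl.invoke(jvm, 'generateHeapDump')   # writes a heapdump (IBM JVM only)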


Q) What happens if the min and max heap sizes are equal?
Ans: Heap parameters influence the behavior of garbage collection. If we increase the heap
size, more objects can be accommodated.
 A larger heap takes longer to fill, so the application runs longer before a garbage collection
occurs.
 The JVM has thresholds it uses to manage JVM storage; whenever a threshold is reached, the
garbage collector is invoked to free up unused objects, so garbage collection can cause
significant degradation of Java performance. In the majority of cases we should set the
maximum heap size to a value higher than the initial heap size.
 This allows the JVM to operate efficiently during normal steady state periods.
 Ultimately large heap size affects response time and performance because of delay in
the garbage collector.
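The initial and maximum heap sizes are attributes of the server JVM configuration, so they can also
be set with wsadmin. A Jython sketch with example values in MB:

wsadmin> server = AdminConfig.getid('/Node:AppSrv01Node/Server:server1/')
wsadmin> jvm = AdminConfig.list('JavaVirtualMachine', server)
wsadmin> AdminConfig.modify(jvm, [['initialHeapSize', '512'], ['maximumHeapSize', '1024']])
wsadmin> AdminConfig.save()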
Performance Monitoring Infrastructure (PMI):
It collects data from a running application server and externalizes to PMI clients like Tivoli
Performance Viewer.
Steps to configure PMI:
 Login to dmgr admin console.
 Expand Monitoring and Tuning.

Select Performance Monitoring Infrastructure (PMI).

Select the server which you want to enable PMI.


Click Apply. Click ok.
Click save changes.
 A file called server.xml file will be updated corresponding to that server. Now this data
can be used by Tivoli Performance Viewer.
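PMI can also be enabled through wsadmin by modifying the PMIService object of the server. A
Jython sketch assuming server1 and the extended statistic set; attribute names can vary slightly
by release:

wsadmin> server = AdminConfig.getid('/Node:AppSrv01Node/Server:server1/')
wsadmin> pmi = AdminConfig.list('PMIService', server)
wsadmin> AdminConfig.modify(pmi, [['enable', 'true'], ['statisticSet', 'extended']])
wsadmin> AdminConfig.save()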
Request Metrics: It allows you to monitor transactions through the different components in
WebSphere Application Server.
 It allows you to see the amount of time spent executing the whole request and also the
time spent executing the request in each of the supported WAS components.
 Request metrics supports components such as the HTTP server plug-in, web container, web
services container, EJB container, JDBC calls and the Java messaging service.
Tivoli Performance Viewer: It collects PMI data and allows users to visualize specific PMI counters
in graph or table form, and we can also record performance data to a performance log file.
Compatibility Matrix of WAS
This table is derived from IBM Information Center: Specifications and API documentation

Version   Release date      End of support   JDK   JavaEE   Servlet     JSP                EJB
3.5       1998 ?            30 Nov 2003      1.2   ?        2.1 & 2.2   0.91 & 1.0 & 1.1   1.0
4.0       June 2001 ?       30 April 2005    1.3   1.2      2.2         1.1                1.1
5.0       November 2002 ?   30 Sept 2006     1.3   1.3      2.3         1.2                2.0
5.1       November 2003 ?   30 Sept 2008     1.4   1.3      2.3         1.2                2.0
6.0       late 2004         30 Sept 2010     1.4   1.4      2.4         2.0                2.1
6.1       May 2006          ?                1.5   1.4      2.4         2.0                2.1
7.0       Sept 2008         ?                1.6   1.5      2.5         2.1                3.0

Core Group:
A core group is a component on the high availability manager (HA). By default a core group
called default core group will be created for each cell in the WAS environment.
 A core group must contain at least one deployment manager.
 Every process should be a member of at least one core group.
 All members of a cluster must belong to the same core group.
Session Management:
Web application need a mechanism to hold the users state information over a period of time.
HTTP protocol does not recognize or maintain users state or users information.
 HTTP treats each and every request as a discrete and independent transaction. It won't
remember previous transactions or conversation details about the users.
 Java servlet specification provides a mechanism for servlet app's to maintain users
session information or users state information. This mechanism is known as session.
 We can enable session tracking in the application level in four different ways.
1) URL Rewriting.
2) Hidden form fields.
3) Cookies.
4) Session Objects.
 Session management in WAS can be defined at the following levels.

1) Application server level:


This is the default level whenever we configure session management under this level, it will be
applicable for all the applications within that server.
2) Application Level: It is applicable to all web modules within that application.
3) Web module level: It is applicable to that web module only.
 To configure session management at the server level, select the particular server which you
want to configure.
 Select session management under container settings there we can select the property
either cookies or URL rewriting or SSLID tracking (Deprecated).
 By default it use cookies.
 Most of the applications will choose cookie support to pass the users identifier between
the server and the end user.
 Here WAS generates a unique session id for each user and returns this id to the users
browser with a cookie.
 A cookie consists of information embedded as a part of the headers in the html stream
passed between the server and the browser.
 The browser holds the cookie and returns it to the server whenever the user makes a
subsequent request.
 By default WebSphere defines session cookies, which are destroyed when the browser is
closed. This cookie holds a session identifier. The remaining user's session information
resides at the server side itself.
 We can configure cookie settings under session management. By default the cookie
name is JSESSIONID, and there are also cookie domain and cookie path settings. The cookie
domain restricts the hosts to which the browser will send the cookie back.
 For example, if you specify a domain name, the browser will only send session
cookies back to hosts in that domain. With the cookie path we can restrict the path, to
keep the cookie from being sent to certain URLs on that server.
 The cookie maximum age parameter indicates the amount of time the cookie will live in the
client browser.

Session Affinity:
In a clustered environment, any HTTP requests associated with an HTTP session must be routed
to the same Web application in the same JVM. This ensures that all of the HTTP requests are
processed with a consistent view of the user’s HTTP session. The exception to this rule is when
the cluster member fails
or has to be shut down.
WebSphere assures that session affinity is maintained in the following way: Each server ID is
appended to the session ID. When an HTTP session is created, its ID is passed back to the
browser as part of a cookie or URL encoding. When the browser makes further requests, the
cookie or URL encoding will be sent back to the Web server. The Web server plug-in examines
the HTTP session ID in the cookie or URL encoding, extracts the unique ID of the cluster
member handling the session, and forwards the request.

This situation can be seen in Figure 12-3, where the session ID from the HTTP header,
request.getHeader(“Cookie”), is displayed along with the session ID from session.getId(). The
application server ID is appended to the session ID from the HTTP header. The first four
characters of HTTP header session ID are the cache identifier that determines the validity of
cache entries.
The JSESSIONID cookie can be divided into these parts: cache ID, session ID, separator, clone ID,
and partition ID. JSESSION ID will include a partition ID instead of a clone ID when memory-to-
memory replication in peer-to-peer mode is selected. Typically, the partition ID is a long
numeric number.

Table 12-1 shows their mappings based on the example in Figure 12-3. A clone
ID is an ID of a cluster member.

The application server ID can be seen in the Web server plug-in configuration
file, plug-in-cfg.xml file, as shown in Example 12-4.
Example 12-4 Server ID from plugin-cfg.xml file
<?xml version="1.0" encoding="ISO-8859-1"?><!--HTTP server plugin
config file for the cell ITSOCell generated on 2004.10.15 at 07:21:03
PM BST-->
<Config>
......
<ServerCluster Name="MyCluster">
<Server CloneID="vuel491u" LoadBalanceWeight="2"
Name="NodeA_server1">
<Transport Hostname="wan" Port="9080" Protocol="http"/>
<Transport Hostname="wan" Port="9443" Protocol="https">
......
</Config>
Note: Session affinity can still be broken if the cluster member handling the request fails. To
avoid losing session data, use persistent session management. In persistent sessions mode,
cache ID and server ID will change in the cookie when there is a failover or when the session is
read from the persistent store, so do not rely on the value of the session cookie remaining the
same for a given session.

Issues:
Issue with an application module: We are able to access the application, but whenever we select
certain modules to view data, we are not getting the data.
 This is an incident ticket, and it is a sev1 ticket. To troubleshoot this issue I looked into the
JVM logs, because I found an entry in the access.log file and the web server was up and running,
so there was no issue with the web server. In the JVM log SystemErr.log I found "could not
invoke service method on servlet DetailServlet" and the exception I noticed was
java.lang.IllegalStateException.
 This exception can be caused by errors in the code or by invalid session objects at the time of
response generation. There were no changes in the code, because we had deployed that
application a week earlier and were able to access it after deployment. I suspected a problem
with session objects, because along with the IllegalStateException the log showed the internal
session object with an id.
 To resolve this issue I logged in to the admin console and, after getting approval, increased the
session timeout value for that application under Additional Properties and restarted the
application. After that we did not face this kind of issue from that module again. While resolving
it I went through the HttpSession interface life cycle and understood how new session objects
are created and when the invalidate method is called; after getting a clear understanding of the
HttpSession life cycle I realized that the session timeout value had to be increased, and the issue
got resolved.
Synchronization:
There are two types of synchronizations in WebSphere.
1) Partial or Normal Synchronization.
2) Full Synchronization.
 Auto sync is usually a partial sync, but it performs a full sync the first time it runs after the
node agent is started.
 Whenever a user clicks the Synchronize option in the admin console, a partial (normal) sync is
performed. A partial sync cannot be done with syncNode.sh.
 Whenever the syncNode.sh command is issued, a full sync is performed.
 Sync uses a cache mechanism called an epoch, which the node agent and the dmgr use to
determine which folders in the master repository have changed since the last sync.
 Whenever the next sync operation is invoked, only the folders in the cache are compared. If the
epochs of a folder in the cache are different, that folder is considered modified and is checked
during the sync process.
 The refreshRepositoryEpoch JMX call cleans up the cache, which requires the next sync
operation to check all folders in the repository, resulting in a full sync operation.
 In case of a full sync, all folders are checked, including files which were modified manually;
the updated files are detected and pushed to the corresponding locations.
 In case of a partial (auto) sync, the manually changed files are not pushed to the node
repository, because the dmgr is not aware of those changes and does not put the folders which
contain the modified files into the cache.
 A partial sync therefore does not detect such updated files and folders, whereas a full sync
checks all folders and files, including manually modified ones, and keeps the data accurate.
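A synchronization can also be requested from wsadmin through the NodeSync MBean of a running
node agent (syncNode.sh, in contrast, requires the node agent to be stopped). A Jython sketch with
an example node name:

wsadmin> nodeSync = AdminControl.completeObjectName('type=NodeSync,node=AppSrv01Node,*')
wsadmin> AdminControl.invoke(nodeSync, 'sync')          # returns 'true' when the node repository is in sync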
HTTP 404 ERROR
1. This error may occur due to a problem in the web server configuration, an issue with the plug-ins or
virtual host configuration, and also if the application server or the application is not running.

2. To resolve this issue , first check the web server is responding or not, we can check it by using

“ http://<HOSTNAME of web server>:< web server port no: 80> “

3. If the web server is up and running, we will get a welcome page and here we can confirm that there is
no issue with web server.

4. If we didn’t get a response there may be issue with the web server and we need to look into that “log
files”

5. Check the status of the application server and the application which we are trying to access , if this
server is not running ,we have to start the server and if the application is not started , we have to start
the application.

6. If the failure occurs in the application server, the SystemOut.log file contains information regarding
the issue; if there are no messages regarding the failure, the requests may not be reaching the
application server.

7. If the customer gives incorrect “URL” at the time also there is a chance of getting 404 error code in
the “access.log” file and also due to the unavailability of the component which is required to access that
resource.

8. It is better to make a note of the URL at that time , when the error is displayed.

9. There are some cases , we get an error msg, ”JSP ERROR:FAILED TO FIND A RESOURCE” with a code
“JSPG0036E” at that time we can get a 404 status code, this may occur due to page is not available in
the server and also due to the web server and plug-in configuration with the app. server.

10. We have to look into the access.log or error.log file which are under web server logs ,in the
access.log file we have got an entry as follows:-

REMOTEHOST_<TIMESTAMP>”REQUEST” “STATUS CODE” BYTES

11. In the error.log file we have the log entries with the format <TIMESTAMP>[ERROR] [CLIENT
REMOTEHOST] error message.

12. For example if the access.log file contains 404 STATUS CODE and error.log file contains “FILE
DOESNOT EXIST WITH FILEPATH” Based on that error msg “file does not exist”. This is because ,the file
does not exist in the web server and also verify “URL” is correct and make sure that file is available in
the proper location.
13. If there are no error message in error.log file and if access.log file contains 404 STATUS CODE , this
may be the problem with the web server plug-in or with the application server . It indicates that web
server did not consider this as a request that it should handle .

14. Make sure that Plugin-cfg.xml file is generated after the changes to the environment. If it is not
generated check the timestamp and generate a new plugin-cfg.xml file and make sure that we are able
to access with the new updates.

15. Sometimes it is necessary to look into “URL” pattern of the deployment descriptor i.e web.xml file
which is inside your application to access with proper URL.

16. Verify the virtual host configuration under Environment > Virtual Hosts > Host Aliases; edit the host
name or port number if necessary and generate a new plugin-cfg.xml file.

17. Access your application directly from the application server by using the application server port; this
confirms whether the issue is with the application server or with the web server and plug-ins.

18. This is the way to resolve a 404 issue. It is easier to identify whether the problem is occurring in
the web server, the plug-ins, or the application server if we start debugging from the web server level.
HTTP 500 ERROR CODE OCCURRENCE AND RESOLUTION


1. It indicates that an internal server problem occurred while serving the requested page.
This may be due to errors in servlets, JSPs, or JSF, and also to session errors.
2. JSP processing errors can be caused by application coding errors or by a runtime
error in the JSP processor, which runs inside the web container.
3. If there are any processing errors, we get messages like
JSPG0076E: MISSING REQUIRED ATTRIBUTE PAGE,
JSPG0049E: <JSP-NAME> FAILED TO COMPLETE.
4. Messages with codes such as JSFG0001E, JSFG002E, JSFG003E, etc. indicate problems such as
being unable to invoke a write method, unable to find a write method, or
unable to set a property.
5. If we get messages like SRVE0239I or SRVE0240I, it indicates that the JSP processor
started normally.
6. If the JSP processor starts normally, the problem is likely with the application
code, and we have to look into the corresponding JSP pages.
7. If a JSP page contains invalid JSP syntax, it will not be processed by the
JSP processor, and in that case we also get a 500 error code.
8. For example, if there is a syntax error we will get a message like “MISSING
REQUIRED ATTRIBUTE PAGE FOR JSP”.
9. Sometimes we get an error message like “JSP FAILED TO COMPLETE: AN ERROR
OCCURRED” at a particular line (the exception is a JSP core exception).
10. Whenever the application tries to use an invalidated session object, we get an
exception called IllegalStateException; this is a common cause of 500 errors.
11. We can find IllegalStateException errors in the SystemOut.log file, for
example “SRVE0068E: could not invoke <method-name> on a particular servlet; the
exception thrown is java.lang.IllegalStateException”.
12. Based on the error messages or exceptions in the log files, whether they point to the
JSP processor, the application code, or session errors, we can debug 500
errors.

Garbage Collection:
It is a mechanism provided by the JVM to reclaim heap space from Java objects. In C or C++ the
programmer has to take responsibility for memory management, whereas in Java memory
management is taken care of by the garbage collector, so a Java developer can focus more time
on business logic.

 Garbage collection in Java is carried out by a daemon thread called the garbage collector.


 Before removing an object from memory, the garbage collection thread invokes the finalize() method of that
object.
 If there is no memory space left in the heap for creating new objects, the JVM throws an
OutOfMemoryError.
 An object becomes eligible for garbage collection if it is not reachable from any live
thread or reference. In other words, if all the references to an object are null, that object is eligible for
garbage collection.
 In IBM SDK 5.0 there are 4 types of garbage collection policies.
1) Optimal Throughput.
2) Optimal avg pause.
3) Gencon.
4) Subpool.

The default policy is Optimal Throughput.

Throughput:

It defines the amount of data processed by an application.

Pause Time:

It is the duration for which garbage collection pauses (holds) all application threads in order to collect the
heap.
Optimal Throughput:

This is the default GC policy. It is typically used for applications where throughput is more important than
short GC pauses.

The application is stopped each time garbage is collected.

Optimal Avg Pause:

We can choose optimal average pause when shorter GC pauses matter; some of the garbage collection is
performed concurrently, so the application is paused only for shorter periods.

Gen Con (Generation Concurrent):

It handles short-lived objects differently from long-lived objects. We can use this policy if the
application creates many short-lived objects and needs shorter pause times while still producing good throughput.

Subpool or subpooling:

It is more suitable for large multiprocessor machines, and it is advisable on SMP machines with 16 or
more processors. This policy is only available on IBM pSeries and zSeries.

Applications that need to scale on large machines can benefit from this policy.

Verbose GC is a mechanism that logs information about garbage collection, such as when GC runs and how much
heap space is reclaimed.
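
As a hedged sketch for the IBM JVM (the flags go under Application servers > server_name > Process definition > Java Virtual Machine > Generic JVM arguments), the GC policy and verbose GC logging could be enabled like this:

-Xgcpolicy:gencon -verbose:gc
# -Xgcpolicy accepts optthruput (the default), optavgpause, gencon or subpool on IBM SDK 5.0.
# -verbose:gc writes garbage collection details to native_stderr.log so pause times and freed heap can be reviewed.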

Installing enterprise applications using wsadmin scripting

Use the AdminApp object or the AdminApplication script library to install an application to the application
server run time. You can install an enterprise archive file (EAR), Web archive (WAR) file, servlet archive
(SAR), or Java archive (JAR) file.

Before you begin

On a network deployment installation, verify that the deployment manager is running before you install an
application. Use the startManager command utility to start the deployment manager.

There are two ways to complete this task. Complete the steps in this topic to use the AdminApp object to
install enterprise applications. Alternatively, you can use the scripts in the AdminApplication script library
to install, uninstall, and administer your application configurations.

The scripting library provides a set of procedures to automate the most common administration functions.
You can run each script procedure individually, or combine several procedures to quickly develop new
scripts.
About this task

Use this topic to install an application from an enterprise archive file (EAR), a Web archive (WAR) file, a
servlet archive (SAR), or a Java archive (JAR) file. The archive file must end
in .ear, .jar, .sar or .war for the wsadmin tool to complete the installation. The wsadmin tool uses
these extensions to determine the archive type. The wsadmin tool automatically wraps WAR and JAR
files as an EAR file.
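
For instance (a hedged sketch with placeholder names), a stand-alone WAR file can be installed by supplying a context root, which the wsadmin tool needs when it wraps the WAR as an EAR:

wsadmin>AdminApp.install('/tmp/myapp.war', '[-appname myapp -contextroot /myapp -server server1]')
wsadmin>AdminConfig.save()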

Best practice: Use the most recent product version of the wsadmin tool when installing
applications to mixed-version environments to ensure that the most recent wsadmin options and
commands are available.

Procedure

1. Start the wsadmin scripting tool.

2. Determine which options to use to install the application in your configuration.

For example, if your configuration consists of a node, a cell, and a server, you can specify that
information when you enter the install command. Review the list of valid options for
the install and installInteractive commands in the Options for the AdminApp object install,
installInteractive, edit, editInteractive, update, and updateInteractive commands using wsadmin
scripting topic to locate the correct syntax for the -node, -cell, and -server options. For this
configuration, use the following command examples:

Using Jython:
AdminApp.install('location_of_ear.ear','[-node nodeName -cell cellName
-server serverName]')
Using Jacl:
$AdminApp install "location_of_ear.ear" {-node nodeName -cell cellName
-server serverName}

You can also obtain a list of supported options for an enterprise archive (EAR) file using
the options command, for example:

Using Jython:
print AdminApp.options()
Using Jacl:
$AdminApp options

You can set or update a configuration value using options in batch mode. To identify which
configuration object is to be set or updated, the values of read only fields are used to find the
corresponding configuration object. All the values of read only fields have to match with an
existing configuration object, otherwise the command fails.

You can use pattern matching to simplify the task of supplying required values for certain
complex options. Pattern matching only applies to fields that are required or read only.

3. Choose to use the install or installInteractive command to install the application.

You can install the application in batch mode, using the install command, or you can install the
application in interactive mode using the installInteractive command. Interactive mode prompts
you through a series of tasks to provide information. Both the install command and
the installInteractive command support the set of options you chose to use for your installation in
the previous step.

4. Install the application. For this example, only the server option is used with the install command,
where the value of the server option is serv2. Customize
your install or installInteractive command with the options you chose based on your
configuration.

o Using the install command to install the application in batch mode:


 For a network deployment installation only, the following command uses the EAR
file and the command option information to install the application on a cluster:
 Using Jython string:

AdminApp.install('c:/MyStuff/application1.ear', '[-cluster cluster1]')

 Using Jython list:

AdminApp.install('c:/MyStuff/application1.ear', ['-cluster', 'cluster1'])

 Using Jacl:

$AdminApp install "c:/MyStuff/application1.ear" {-cluster cluster1}

Table 1. install cluster command elements. Run the install command with the -cluster option.

$                          is a Jacl operator for substituting a variable name with its value
AdminApp                   is an object allowing application objects to be managed
install                    is an AdminApp command
MyStuff/application1.ear   is the name of the application to install
cluster                    is an installation option
cluster1                   is the value of the cluster option, which will be the cluster name

o Use the installInteractive command to install the application using interactive mode. The
following command changes the application information by prompting you through a
series of installation tasks:
 Using Jython:

AdminApp.installInteractive('c:/MyStuff/application1.ear')
 Using Jacl:

$AdminApp installInteractive "c:/MyStuff/application1.ear"

Table 2. installInteractive command elements. Run the installInteractive command with the name of the
application to install.

$                          is a Jacl operator for substituting a variable name with its value
AdminApp                   is an object allowing application objects to be managed
installInteractive         is an AdminApp command
MyStuff/application1.ear   is the name of the application to install

5. Save the configuration changes.

Use the following command example to save your configuration changes:

AdminConfig.save()

6. In a network deployment environment only, synchronize the node.

Dmgr/bin>./wsadmin.sh –lang jython

AdminApp.install('/root/Desktop/DefaultApplication.ear','[-node
app_node -cell appCell01 -server server1]')

print AdminApp.options()

AdminConfig.save()
Dmgr/bin>wsadmin –lang jython

AdminApp.export('DefaultApplication.ear', '/home/DefaultApplication.ear', '[-exportToLocal]')
Uninstalling enterprise applications using the
wsadmin scripting tool
You can use the AdminApp object or the AdminApplication script library to uninstall applications.

Procedure

1. Start the wsadmin scripting tool.


[root@localhost bin]# ./wsadmin.sh -lang jython
2. Uninstall the application:
Using Jython:
AdminApp.uninstall('application1')
Example

wsadmin>AdminApp.uninstall('DefaultApplication.ear')
ADMA5017I: Uninstallation of DefaultApplication.ear started.
ADMA5106I: Application DefaultApplication.ear uninstalled successfully.
''
3. Save the configuration changes.
Use the following command example to save your configuration changes:

wsadmin>AdminConfig.save()
4. In a network deployment environment only, synchronize the node.
wsadmin>AdminNodeManagement.syncActiveNodes()
Maximum heap size?
For a 32-bit JVM the maximum heap size is 2 GB, and for a 64-bit JVM the maximum heap size is practically unlimited.

What is heap memory?


Heap memory is the storage space in a JVM for the objects created at run time. The default initial heap
size is 50 MB and the default maximum heap size is 256 MB.
If an out of memory exception occurs, how do we handle it?
By increasing the heap memory size (after confirming that the application is not simply leaking memory).

Increasing the WebSphere Application Server Node Agent Heap Size?


You might need to increase the WebSphere® Application Server Node Agent heap size.

Procedure
1. Open the Integrated Solutions Console.
2. On the left-hand side, expand the System Administration heading and click Node
agents.
3. Click the name of the node agent that you want to modify.
4. Under Server Infrastructure, expand the Java and Process Management heading and
click Process Definition.
5. Under Additional properties, select Java Virtual Machine.
6. In the Maximum Heap Size text box, specify the new maximum heap size.
7. Click OK and save.
8. Restart the node agent for the changes to take effect.

Increasing the WebSphere Application Server Heap Size?

You might need to increase the WebSphere® Application Server heap size.
Procedure
1. Open the Integrated Solutions Console.
2. On the left-hand side, expand the Servers heading and click Application servers.
3. Click the name of the WebSphere Application Server you want to modify.
4. Under Server Infrastructure, expand the Java and Process Management heading and
click Process Definition.
5. Under Additional properties, select Java Virtual Machine.
6. In the Maximum Heap Size text box, specify the new maximum heap size.
7. Click OK and save.
8. Restart the application server for the changes to take effect.
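
The same change can also be scripted with wsadmin; a minimal sketch, assuming a server named server1 (the heap value is in MB):

wsadmin>jvmId = AdminConfig.list('JavaVirtualMachine', AdminConfig.getid('/Server:server1/'))
wsadmin>AdminConfig.modify(jvmId, [['maximumHeapSize', '1024']])
wsadmin>AdminConfig.save()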


If we give heap size value same for both min and max then what are the
advantages and what are the disadvantages?

The Java heap parameters influence the behaviour of garbage collection. Increasing the heap size
supports more object creation. Because a large heap takes longer to fill, the application runs longer
before a garbage collection occurs. However, a larger heap also takes longer to compact and causes
garbage collection to take longer.

The JVM has thresholds it uses to manage the JVM's storage. When the thresholds are reached,
the garbage collector gets invoked to free up unused storage. Therefore, garbage collection can cause
significant degradation of Java performance. Before changing the initial and maximum heap sizes, you
should consider the following information:
In the majority of cases you should set the maximum JVM heap size to a value higher than the
initial JVM heap size. This allows the JVM to operate efficiently during normal, steady-state periods
within the confines of the initial heap, but also to operate effectively during periods of high transaction
volume by expanding the heap up to the maximum JVM heap size.

In some rare cases where absolute optimal performance is required you might want to specify
the same value for both the initial and maximum heap size. This will eliminate some overhead that
occurs when the JVM needs to expand or contract the size of the JVM heap. Make sure the region is
large enough to hold the specified JVM heap.

Beware of making the Initial Heap Size too large. While a large heap size initially improves
performance by delaying garbage collection, a large heap size ultimately affects response time when
garbage collection eventually kicks in because the collection process takes more time.
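
For illustration only (the values are placeholders), fixing the heap at a single size simply means giving the same value to both console fields, which corresponds to the -Xms and -Xmx options passed to the JVM:

Initial heap size:  1024
Maximum heap size:  1024
# roughly equivalent to passing -Xms1024m -Xmx1024m as generic JVM arguments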

rotatelogs - Piped logging program to rotate Apache logs


rotatelogs is a simple program for use in conjunction with Apache's piped logfile feature. It
supports rotation based on a time interval or maximum size of the log.

Synopsis:
rotatelogs [ -l ] [ -f ] logfile rotationtime|filesizeM [ offset ]

Options:
-l
Causes the use of local time rather than GMT as the base for the interval or
for strftime(3) formatting with size-based rotation. Note that using -l in an
environment which changes the GMT offset (such as for BST or DST) can lead to
unpredictable results!
-f
Causes the logfile to be opened immediately, as soon as rotatelogs starts,
instead of waiting for the first logfile entry to be read (for non-busy sites, there may
be a substantial delay between when the server is started and when the first
request is handled, meaning that the associated logfile does not "exist" until then,
which causes problems from some automated logging tools). Available in version
2.2.9 and later.
logfile
The path plus basename of the logfile. If logfile includes any '%' characters, it is
treated as a format string for strftime(3). Otherwise, the suffix .nnnnnnnnnn is
automatically added and is the time in seconds. Both formats compute the start
time from the beginning of the current period. For example, if a rotation time of
86400 is specified, the hour, minute, and second fields created from
the strftime(3) format will all be zero, referring to the beginning of the current
24-hour period (midnight).

When using strftime(3) filename formatting, be sure the log file format has
enough granularity to produce a different file name each time the logs are rotated.
Otherwise rotation will overwrite the same file instead of starting a new one. For
example, if logfile was /var/logs/errorlog.%Y-%m-%d with log rotation at 5
megabytes, but 5 megabytes was reached twice in the same day, the same log file
name would be produced and log rotation would keep writing to the same file.

rotationtime
The time between log file rotations in seconds. The rotation occurs at the beginning
of this interval. For example, if the rotation time is 3600, the log file will be rotated
at the beginning of every hour; if the rotation time is 86400, the log file will be
rotated every night at midnight. (If no data is logged during an interval, no file will
be created.)
filesizeM
The maximum file size in megabytes followed by the letter M to specify size rather
than time.
offset
The number of minutes offset from UTC. If omitted, zero is assumed and UTC is
used. For example, to use local time in the zone UTC -5 hours, specify a value of
-300 for this argument. In most cases, -l should be used instead of specifying an
offset.

Examples:
CustomLog "|bin/rotatelogs /var/logs/logfile 86400" common

This creates the files /var/logs/logfile.nnnn where nnnn is the system time at which the log nominally
starts (this time will always be a multiple of the rotation time, so you can synchronize cron scripts
with it). At the end of each rotation time (here after 24 hours) a new log is started.

CustomLog "|bin/rotatelogs -l /var/logs/logfile.%Y.%m.%d 86400" common

This creates the files /var/logs/logfile.yyyy.mm.dd where yyyy is the year, mm is the month, and dd is
the day of the month. Logging will switch to a new file every day at midnight, local time.

CustomLog "|bin/rotatelogs /var/logs/logfile 5M" common

This configuration will rotate the logfile whenever it reaches a size of 5 megabytes.

ErrorLog "|bin/rotatelogs /var/logs/errorlog.%Y-%m-%d-%H_%M_%S 5M"


This configuration will rotate the error logfile whenever it reaches a size of 5 megabytes, and the
suffix to the logfile name will be created of the form errorlog.YYYY-mm-dd-HH_MM_SS.

On Windows:
CustomLog "|D:/viewstore/snapshot/ralberts/apache/bin/rotatelogs.exe
D:/viewstore/snapshot/ralberts/apache/logs/access.%Y-%m-%d-%H_%M_%S.log 60" common

Portability:
The following logfile format string substitutions should be supported by all strftime(3) implementations,
see the strftime(3) man page for library-specific extensions.

%A full weekday name (localized)


%a 3-character weekday name (localized)
%B full month name (localized)
%b 3-character month name (localized)
%c date and time (localized)
%d 2-digit day of month
%H 2-digit hour (24 hour clock)
%I 2-digit hour (12 hour clock)
%j 3-digit day of year
%M 2-digit minute
%m 2-digit month
%p am/pm of 12 hour clock (localized)
%S 2-digit second
%U 2-digit week of year (Sunday first day of week)
%W 2-digit week of year (Monday first day of week)
%w 1-digit weekday (Sunday first day of week)
%X time (localized)
%x date (localized)
%Y 4-digit year
%y 2-digit year
%Z time zone name
%% literal `%'
http://httpd.apache.org/docs/2.2/programs/rotatelogs.html#examples

When I tried it on PuTTY:


[root@localhost bin]# ./rotatelogs
Usage: ./rotatelogs [-l] <logfile> <rotation time in seconds> [offset minutes from UTC] or <rotation size in
megabytes>
Add this:
TransferLog "|./rotatelogs /some/where 86400"
or
TransferLog "|./rotatelogs /some/where 5M"
to httpd.conf. The generated name will be /some/where.nnnn where nnnn is the system time at which
the log nominally starts (N.B. if using a rotation time, the time will always be a multiple of the rotation
time, so you can synchronize cron scripts with it). At the end of each rotation time, or when the file size is
reached, a new log is started.

[root@localhost bin]# ./rotatelogs /var/logs/logfile 5


1 Previous file handle doesn't exists /var/logs/logfile.1360897290
[root@localhost bin]# ./rotatelogs /tmp/vimala/logfile 5M
^C
[root@localhost bin]# ./rotatelogs /tmp/vimala/logfile 12
^C
[root@localhost bin]#
The log file is created in /tmp/vimala/;
logfile.1360897512 is the log file that was created.
