WAS Admin rajasekhar
DEC 2012-13
IBM WEBSPHERE APPLICATION SERVER ADMINISTRATION
17-Dec-12
Application: An application is a collection of resources such as servlets, JSPs, EJBs, JARs (Java
Archives), XML files, images, and property files, built together to produce a desired output.
Server: A server provides the runtime environment for applications. It is responsible for
receiving the end user's request, identifying the resource, executing that resource, and
generating a response.
Using J2SE we can develop only desktop applications. For web-based
applications we have to depend on J2EE components such as servlets, JSPs, and EJBs.
3 types of packages: Base, Express, and Network Deployment (ND)
Prerequisites
- 1 GB RAM.
- 2GB Hard Disk.
- OS - Windows, Linux.
Types of Installation
GUI Mode and Silent Mode
GUI MODE:
Go to
C:\Documents and Settings\Administrator\Desktop\6.0 Base\WAS
Click install.exe
Click Next ….
For Linux:
/opt/IBM/WebSphere/AppServer
For AIX:
/usr/IBM/WebSphere/AppServer
For Windows:
C:\IBM_Base_Dec_GUI\websphere\appserver
Click Next
Click Next
Click Finish..
/opt/IBM/WebSphere/AppServer (Linux)
<WAS-ROOT> - the root (installation) directory; it contains a bin directory.
<profile-home> - the profile directory; it is the important one for maintaining multiple servers, and it also contains a bin directory.
Basic Operations on Server (Linux)
http://<localhost>:<adminconsoleportnumber>/ibm/console
<profile-home>
  config
    cells
      <cell-name>
        nodes
          <node-name>
            serverindex.xml
Path:
C:\IBM_BASE_GUI\WebSphere\AppServer\profiles\default\config\cells\RajashekharNode01Cell
\nodes\RajashekharNode01\serverindex.xml
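The port assignments live in serverindex.xml. As a quick illustration, the sketch below greps the port attributes out of such a file; a mock fragment is created here (endpoint names and port numbers are illustrative), and in practice you would point SERVERINDEX at the real file under your profile's config tree.

```shell
# Mock serverindex.xml fragment; replace with the real path, e.g.
# <profile-home>/config/cells/<cell>/nodes/<node>/serverindex.xml
SERVERINDEX=$(mktemp)
cat > "$SERVERINDEX" <<'EOF'
<serverEntries serverName="server1">
  <specialEndpoints endPointName="WC_adminhost"><endPoint host="*" port="9060"/></specialEndpoints>
  <specialEndpoints endPointName="WC_adminhost_secure"><endPoint host="*" port="9043"/></specialEndpoints>
</serverEntries>
EOF
# Pull out every port="NNNN" assignment.
PORTS=$(grep -o 'port="[0-9]*"' "$SERVERINDEX")
echo "$PORTS"
rm -f "$SERVERINDEX"
```

This is how you find, for example, the admin console port to use in the console URL above.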
Profile:
The WAS installation process allows for two main actions. The first being the base
binaries which are the core executables and the second being a profile. The base binaries are
the product mechanics made up of executables and shell scripts, and can be shared by one or
many profiles. Profiles exist so data can be partitioned away from the underlying core. Simply
put, a profile contains an Application Server. When an Application Server is running, the server
process may read and write data to underlying configuration files and logs. So, by using profiles,
transient data is kept away from the base product. This allows us to have more than one profile
using the same base binaries and also allows us to remove profiles without affecting other
profiles. Another reason for separating the base binaries is so that we can upgrade the product
with maintenance and fix packs without having to reconfigure our Application Server profiles.
Types of profile creation:
1) By using profile creation wizard (6.0) or
By using profile management tool (from 6.1 onwards).
2) By using command line.
3) By using silent mode.
Profile creation using profile creation wizard:
Profile name
Default name: AppSrv01 (optional; we can change the name)
Click Next
Profile directory
Default path by system or we can change
Click Next
Before installing WAS through silent mode we have to follow the points below.
1) Identify the response file location.
2) Take a backup of that response file.
3) Customize the response file parameters according to your environment.
4) Save the changes and execute the installer with that response file from the command
prompt.
1) Response file location path (It will reside in your WAS software folder)
C:\Websphere_6.0_Base\WAS\responsefile.base
2) Take a backup of the response file to another location (copy it and rename it to response.txt;
this preserves the original file as a backup in case we need it again later).
C:\Documents and Settings\Administrator\Desktop\response.txt
3) Customize the response file (open it and edit the parameters according to your
requirements).
# InstallShield Options File
No need to change anything in this section.
# License Acceptance (By default it is “false” we have to set it to “true”)
-W silentInstallLicenseAcceptance.value="true"
# Incremental Install
In this section no need to change anything for fresh installation.
# IBM WebSphere Application Server - Base, V6.0 UPGRADE from Base Trial or
Express
# Setup type
#
# This value is required for the installation. Do not change this!
#
-W setuptypepanelInstallWizardBean.selectedSetupTypeId="Custom"
# Node name
# -W nodehostandcellnamepanelInstallWizardBean.nodeName="YOUR_NODE_NAME"
-W nodehostandcellnamepanelInstallWizardBean.nodeName="App_Node01"
# Host name
-W nodehostandcellnamepanelInstallWizardBean.hostName="YOUR_HOST_NAME"
# Cell name
# -W setcellnameinglobalconstantsInstallWizardBean.value="YOUR_CELL_NAME"
-W winservicepanelInstallWizardBean.winServiceQuery="true"
By default it is "true"; set it to "false" if you do not want WAS registered as a Windows service.
-W winservicepanelInstallWizardBean.winServiceQuery="false"
-W winservicepanelInstallWizardBean.accountType="localsystem"
In this section if we choose local system no need to give user name and password. If we choose
specified user we have to give the user name and password.
-W winservicepanelInstallWizardBean.startupType="manual"
-W winservicepanelInstallWizardBean.userName="YOUR_USER_NAME"
-W winservicepanelInstallWizardBean.password="YOUR_PASSWORD"
Uncomment the userName and password values only when the account type is a specified user; for localsystem we do not need to pass the user ID and password.
4) Save the changes to the response.txt and execute through command prompt.
C:\Documents and Settings\Administrator\Desktop\6.0 Base\WAS>install.exe -options
"C:\Documents and Settings\Administrator\Desktop\response.txt" -silent
Where do you verify whether the installation has started in a silent-mode installation?
Go to the %temp% directory and open log.txt; on success we will see a message like the one below.
INSTCONFSUCCESS: Post-installation configuration is successful.
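A minimal sketch of that check, using a mock log file (the real file is log.txt under %temp%; the success string is the one quoted above):

```shell
# Mock log file standing in for %TEMP%\log.txt.
LOG=$(mktemp)
echo "INSTCONFSUCCESS: Post-installation configuration is successful." > "$LOG"
# Success marker check: INSTCONFSUCCESS means the silent install completed.
if grep -q INSTCONFSUCCESS "$LOG"; then RESULT="install OK"; else RESULT="install FAILED"; fi
echo "$RESULT"
rm -f "$LOG"
```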
Note: After the WAS root binaries are created, the profile directory is created.
In response.txt we have to set the license-acceptance value to true. Otherwise the
installation fails and gives a message like INSTCONFFAILED: Accept the license.
Go to either installation location — the WAS root bin or the profile bin — and select
wasprofile.bat.
Copy that path and paste it in the command prompt.
-listProfiles – use this command to see how many profiles exist under the profiles home directory
(note that the P is uppercase).
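The invocation looks like the sketch below. The install root is a hypothetical path, and on 6.1+ the equivalent tool is manageprofiles rather than wasprofile; the command is only echoed here so the sketch runs anywhere.

```shell
# Hypothetical install root -- substitute your own <WAS-ROOT>.
WAS_HOME="/opt/IBM/WebSphere/AppServer"
# wasprofile (6.0); from 6.1 onwards use manageprofiles instead.
LIST_CMD="$WAS_HOME/bin/wasprofile.sh -listProfiles"
echo "$LIST_CMD"
```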
If we don't know how to create a profile or which values to pass, see the screenshot
below and go through those options.
Note: In WAS 6.0, port values are not incremented automatically for profiles created from the
command line; from WAS 6.1 onwards they are incremented through the command line as well.
After successful creation we will get a message:
INSTCONFSUCCESS: Success: The profile now exists.
Profile:
A profile is an environment that contains a server, an admin console, a node, and some
dependent and some independent XML and config files. With the help of a profile we can
perform admin activities on the server and its applications.
Profile Hierarchy:
In the case of the Base and Express packages we can create only one type of profile, the
application server profile.
In case of ND package we can create 3 types of profiles.
1) Deployment manager profile.
2) Application server profile.
3) Custom profile.
In case of base and express packages we have only one profile template that is “default”
template (Used for application server profiles).
In the case of the ND package we have 3 types of profile templates.
1) Dmgr – (used for Deployment manager).
2) Default - (Used for Application server profile).
3) Managed - (Used for Custom profile).
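Each template maps to a -templatePath argument of the profile creation command. The sketch below builds the three variants (profile names and the install root are hypothetical; the commands are echoed, not executed):

```shell
WAS_HOME="/opt/IBM/WebSphere/AppServer"   # hypothetical install root
# dmgr -> deployment manager, default -> application server, managed -> custom
CREATE_CMDS=$(for TEMPLATE in dmgr default managed; do
  echo "$WAS_HOME/bin/wasprofile.sh -create" \
       "-profileName ${TEMPLATE}Profile01" \
       "-profilePath $WAS_HOME/profiles/${TEMPLATE}Profile01" \
       "-templatePath $WAS_HOME/profileTemplates/$TEMPLATE"
done)
echo "$CREATE_CMDS"
```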
Network Deployment Package Installation:
Go to C:\Documents and Settings\Administrator\Desktop\Websphere_6.0_ND\ndv60_appsrvotherstuffHL\C587UML_ndv60_appsrvotherstuff\WAS
Click install.exe
Click Next
Click next
Click Next
Click Next.
Click Next
Click Ignore
Uncheck launch the profile creation wizard. Click Next
Click Next
Click Finish.
Creating Application server Profile Using Profile Creation Wizard:
Here by default we get a server called server1, an admin console, and a node. It is not
possible to create additional servers unless the node is federated to the dmgr.
Click Next
Here select which profile to create. By default Application server profile
Choose Application server profile.
Click Next
Click Next
Click Next
Note: In case of a hostname mismatch the installation will succeed, but at the time of starting
the server we will get an exception, called Invoke Target Exception.
Node: A node is a logical grouping of servers; the node itself is a logical entity, while the machine that hosts it is the physical one.
Click Next
Click Next
Uncheck Run the application server process as a windows service.
Click Next
Click Next
Click Finish
Creating Deployment manager profile:
It contains an admin console, a node, and the dmgr process. By default it does not contain any
server to deploy and run applications.
Using the dmgr we can create any number of servers under one profile.
Click profile creation wizard from first step console.
Click Next
Here we have to select create deployment manager profile.
Click Next
Click Next
Click Next
Click Next
Click Next
Uncheck Run the Application Server process as a Windows service
Click Next
Click Next
Click Finish
Launch the First Steps console. There you will find only one difference from the other installations:
that is, Start the deployment manager.
----- To start/stop the dmgr use startManager.bat / stopManager.bat.
Log in to the admin console of the dmgr; there we can see a lot of changes compared to the base
package admin console.
Base package admin console (Application server profile)
Click Next
Profile Type select Custom profile
Click Next
Click Next
Click Next
Click Next
Click Next
Click Next
Click Finish
Federation
It is the process of adding a node from either an application server profile or a custom profile to
the deployment manager profile. Once the node of either profile is available to the dmgr, we
can create any number of servers under that node and manage them from the same dmgr
console.
------ We can add a node by using a command:
./addNode.sh <dmgr-host-name> <dmgr-SOAP-connector-port> -includeapps
-username <user-name> -password <password>
(The username and password are required only if global security is enabled.)
------ At the time of federation a process called the node agent is created to communicate
between the dmgr profile and the federated profile.
------ A node that has a node agent is called a managed node.
------ A node that does not have a node agent is called an unmanaged node.
------ After federation and configuration, if the node agent is down there is no impact on the
federated server profiles and applications, but we cannot perform administration activities
through the dmgr console.
------ Likewise, if the dmgr is down there is no impact on the federated server profiles and
applications; we can access those applications normally.
------ Even when we stop both the dmgr and the node agent, we can still access the server and
its applications.
Go to application server under server tab on left side pane.
Click New
Then you will get an error message; see the screenshot below.
------ The first time we log in to the dmgr console there are no nodes that support
servers in the configuration. We have to add a node from either an application server
profile or a custom profile to the deployment manager profile. Once that node is available
to the dmgr, we can create any number of servers under it and manage them from the same
dmgr console.
Federation Process
Federation(Adding Node from Appsrv01 to Dmgr01)
---- Go to <profiles>/AppSrv01/bin
---- Open a CMD prompt and execute the command:
<AppServer/bin>addNode.bat <dmgr-hostname> <dmgr-SOAP-connector-port> -includeapps
After successful federation we can add any number of servers under App_Node01.
Adding servers in dmgr console.
Go to dmgr admin console. Go to servers and click Application servers
Click New
Click Next
Click Next
Click finish
To start the dmgr process, run the following command in a CMD prompt.
Click New
From the drop down select Custom_Node01
Enter server name: server4
Click Next
Click Next
Click Next
Click Finish
Click save changes
Note: In this case we have to stop the node and perform synchronization.
Uninstallation:
C:\windows\vpd.properties ---- this properties file is updated at installation and
uninstallation time.
Security
Security is an important part of any application server configuration. In this chapter,
we will cover securing the WebSphere Application Server's administrative console and how to
configure different types of repositories containing the users and groups of authorized users
who are given different levels of access to administer a WebSphere server.
Global Security
During the installation process of WebSphere Application Server, we opted not to turn on
global security and thus we did not have to supply a password to log in to the Administrative
console. We logged in using the username wasadmin and we were not prompted for a
password. The truth of the matter is that we could have actually used any name as the console
was not authenticating us at all. To protect our WAS from unauthorized access, we need to turn
on global security.
1) Local os user registry.
2) Custom user registry. (WAS 6.0 version)
3) LDAP user registry.
4) Federated repository. (WAS 6.1 onwards)
Steps to configure global security by using local os registry:
1) Create user accounts in your local os.
2) Assign passwords for those accounts.
3) Login to the admin console and expand security.
4) Select secure administration, applications and infrastructure option.
5) Select security configuration wizard.
6) Select local os option to configure with local os registry.
7) Provide user id and password.
8) Under LTPA authentication mechanism, confirm the password once again.
9) Enable administrative security check box.
10) Select local operating system under available realm definitions.
11) Save the changes and restart the server.
12) Now access the admin console using http://<host-name>:9045/ibm/console (we have
to check serverindex.xml under the dmgr profile for the secure port).
13) Provide user name and password to login admin console.
Process:
For first two steps go to Administrative tools select Computer management
Select local users and computers
Select Users
Go to Actions menu and select New User
Provide user name: test123
Password: test123
Click save.
----- Do the same for the config, operator, and monitor user accounts.
----- For the changes we made to take effect, we have to stop and restart the dmgr. We can no
longer stop the dmgr profile directly; we have to provide a user ID and password. At the time of
stopping the dmgr profile we must give the username and password of the dmgr console's
administrative user.
Observe the screenshot below: we supplied the username and password of the dmgr profile's
administrative user, and only then will it stop.
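With administrative security on, stopping the dmgr needs credentials on the command line. A sketch follows (the profile path, username, and password are hypothetical; the command is echoed rather than run):

```shell
# Hypothetical dmgr profile bin directory -- substitute your own.
DMGR_BIN="/opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin"
# -username/-password are required once administrative security is enabled.
STOP_CMD="$DMGR_BIN/stopManager.sh -username test123 -password test123"
echo "$STOP_CMD"
```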
This user is responsible for configuring servers and applications but does not have the authority
to stop and start servers.
Process:
---- Open notepad and save it as “users.registry” and “groups.registry”
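If the custom registry is the FileRegistrySample shipped with WAS, the two files use a colon-separated layout. The entries below are illustrative only (user names, passwords, and IDs are made up); check the format expected by your own registry class.

```
# users.registry  -- name:password:uid:gids:display name
wasadmin:password:101:501:WAS Administrator
test123:test123:102:501:Test User

# groups.registry -- name:gid:users:display name
admins:501:wasadmin,test123:Administrative group
```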
----- Login to dmgr console expands Security and select Global Security option.
---- Select User Registries under that select Custom User Registry.
Click Custom.
Provide Server User ID: wasadmin
Server User Password: password
Click Ok.
Click Save.
Check Synchronize changes with Nodes.
Click Save. (Security.xml file will update under dmgr).
---- Go to Global Security.
Select User Registry under that select Custom and select Custom Properties.
Click Custom
Click Ok.
Click Save
---- Go to Global Security
Under Authentication, expand Authentication Mechanisms and select LTPA.
Click Save
Click save.
-----Under Active User Registry select Custom Registry.
Click Apply. Click Ok.
Click Save.
Click Save.
---- Logout from dmgr admin console.
---- Restart the dmgr server.
------ Log out the Admin console
------ Stop dmgr using below command.
By using stopManager.bat / startManager.bat we can stop/start the dmgr server; the security.xml
file is updated under the dmgr profile.
------ Login to the Admin Console using the below URL.
https://localhost:9045/ibm/console/logon.jsp
Click Log in.
Process:
.jar: .jar means Java archive; it contains a collection of class files.
[Diagram: App1 and App2 obtain connection objects from the server to reach the database.]
---- The server creates connection objects to interact with the DB. It does this through JDBC
drivers; the database vendor provides the jar files.
---- First install Oracle 10g XE.
Click Next
Accept the License.
Click next.
Click next.
Password: admin
Click next.
Click Install.
Click Finish.
---- jar file location of Oracle 10g XE
C:\oraclexe\app\oracle\product\10.2.0\server\jdbc\lib\ojdbc14.jar.
Click Apply.
Click New.
Provide the details
Select database type: Oracle
Select provider type: Oracle JDBC Driver
Select the implementation type: Connection pool data source.
Click Next.
Click Apply.
Click Ok
Click on new.
Provide Data Source Name.
Select data store helper class as oracle 10g data store helper.
Provide the url
Click save.
---- Go to Resources select Oracle JDBC Driver
Under Additional Properties select Data Sources.
Click New.
Provide below details.
Alias: Oracle_details
Provide user name: System
Password: admin.
Click Apply.
Click OracleDS.
---- Map the Alias Under Component-managed authentication alias
Click Apply.
Click save.
Click Next
Select Typical.
Click Next.
Click Next.
Click Next.
Provide Password: admin
Click Next.
Click Next
Click Finish.
Click Finish.
---- Checking the DB2 installation & Sample db.
---- Start – All Programs – Default – General Administration Tools – Control Center
---- DB2 jar file location.
C:\IBM_DB2\DB2\SQLLIB\java\db2jcc.jar (db2jcc_license_cu.jar is also important).
---- Go to Environment
Click WebSphere Variables.
For each and every variable we have to provide the DB2 jar file location.
Click Save.
---- Do the same for remaining DB2 Drivers.
---- Go to Resources
Click JDBC Providers.
Click New.
Provide the information.
Database type: DB2
Provider Type: DB2 universal jdbc driver provider.
Implementation Type: connection pool Data Source.
Click next.
Click Apply.
Click Save the changes.
Click Save.
Click New.
Name: DB2DS
JNDI name: jdbc/DB2DS
Data store helper class: DB2 Universal data store helper.
Provide DB name: SAMPLE
Server Name: rajasekhar-pc
Port: 50000 (Default)
Click Apply.
Click Save.
Click Save.
Click DB2DS
Click J2EE Connector Architecture (J2C) authentication data entries under Related Items.
Click New.
Provide Alias: DB2_det
User ID: db2admin
Password: admin
Click Apply.
Click Save.
Click Save.
---- Map the Alias
---- Go to Resources
Click JDBC Providers.
Click DB2DS.
Component-managed authentication alias under that select dmgr_node1/DB2_det
Click Apply.
Click Save.
Click Save.
Check DB2DS and
Click on Test Connection.
Connection successful.
WebSphere Application Server 7.0 Network Deployment
Installation:
----Go to C:\Documents and Settings\Administrator\Desktop\Websphere_7.0\WAS\install.exe
Click install.exe
Click Next.
Accept the License.
Click Next.
Click Next.
No need to check anything.
Click Next.
Click Next.
Select None.
Note: From WAS ND 6.1 onwards one more profile option is added:
the Cell profile, which creates a Dmgr01 and an AppSrv01 profile together
(the AppSrv01 profile is automatically federated to the dmgr).
Click Next.
Click Yes.
Uncheck Create a repository for Centralized Installation Manager (a new feature in 7.0).
Click Next.
Click Next.
Uncheck create a new WebSphere Application Server Profile using the Profile Management
Tool.
Click Finish.
---- Compared with 6.0, three more profile types are added in 7.0.
Profile Creation Using Profile Management Tool:
---- Go to C:\IBM_ND_7.0\WebSphere\AppServer\bin\ProfileManagement\pmt.bat
(Or)
Using Profile Management Tool.
Click Next.
Select Advanced Profile Creation Radio button.
Click Next.
Click Next.
Click Next.
Click Next.
Uncheck Enable administrative security.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Uncheck Run deployment manager process as a Windows service.
Startup type: Manual.
Click next.
Uncheck Create a Web server definition.
Click Next.
Click Create.
Click Finish.
Q) How could you know whether the federation was done without logging in to the dmgr console?
Ans: First we have to check the server status of the dmgr.
---- Then log in to the dmgr console and check whether the server has started or not.
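The command-line status check can be done with serverStatus from the dmgr profile's bin directory. A sketch (the path and credentials are hypothetical; the command is echoed, not executed):

```shell
# Hypothetical dmgr profile bin directory -- substitute your own.
DMGR_BIN="/opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin"
# -all reports every server in the profile; credentials are needed if security is on.
STATUS_CMD="$DMGR_BIN/serverStatus.sh -all -username test123 -password test123"
echo "$STATUS_CMD"
```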
Creating Custom Profile using Profile management tool:
---Go to Profile Management Tool.
Click on Create.
Select Custom Profile.
Click Next.
Click Next.
Click Next.
Click Next.
If we check Federate this node later, we have to add the node to the dmgr using the addNode
command after the custom profile creation completes successfully.
----
Click Next.
Click Next.
Click Next.
Click Next.
Click Create.
Click Finish.
Federation in WAS 7.0
---- Login to dmgr console.
http://rajasekhar-pc:9066/ibm/console/login.do
---- Go to Servers select WebSphere Application Servers under server type.
Click on New.
In select Node drop down list select app_Node01 (ND 7.0.0.0).
Server Name: server3.
Click Next.
Click Next.
Click Next.
Click Finish.
Click on Review.
Check Synchronize changes with Nodes.
Click save.
Click Ok.
---- Start the server3.
Adding the server under Custom Node.
---- Select servers under that select server types in that select WebSphere Application Servers.
Click on New.
Under select node drop down list select Custom_Node01 (ND 7.0.0.0)
Server name: server4
Click Next.
Click Next.
Click Next.
Click Finish.
Click on Review.
Check Synchronize changes with Nodes.
Click Save.
Click Ok.
Check the server4. Click start.
Global Security in WAS 7.0 ND
Steps to configure global security by using local os registry:
1) Create user accounts in your local os.
2) Assign passwords for those accounts.
3) Login to the admin console and expand security.
4) Select secure administration, applications and infrastructure option.
5) Select security configuration wizard.
6) Select local os option to configure with local os registry.
7) Provide user id and password.
8) Under LTPA authentication mechanism, confirm the password once again.
9) Enable administrative security check box.
10) Select local operating system under available realm definitions.
11) Save the changes and restart the server.
12) Now access the admin console using http://<host-name>:9045/ibm/console (we have
to check serverindex.xml under the dmgr profile for the secure port).
13) Provide user name and password to login admin console.
Process:
---- Go to dmgr console.
--- Select Global Security under Security.
Click on Security Configuration Wizard.
Uncheck Enable application Security.
Click Next.
Select Local Operating System radio button.
Click Next.
Primary Administrative Name: test123.
Click Next.
Click Finish.
Click Review.
Check Synchronize Changes with Nodes.
Click Save.
Click Ok.
---- Click LTPA under Authentication.
Password Confirmation: test123.
Click Apply.
Click Review.
Check Synchronize changes with Nodes.
Click Save.
Click Ok.
---Check Enable administrative security.
Select Local Operating System Registry under Available realm definitions.
Click Apply.
Click on Review.
Check Synchronize changes with Nodes.
Click Save.
Click Ok.
---- Logout dmgr console
Stop the dmgr console.
Click On Add.
Click on Search.
Click Ok.
Click on Review.
Check synchronize changes with Nodes (admin-authz.xml will update under dmgr cell).
Click Save.
Click Ok.
---- Do the same for remaining users for config, operator, and monitor.
Click Ok.
---- Start the dmgr.
---- Log in to each user account. First log in to the admin user account and check the roles.
This user has full control over the dmgr console: creating new servers and applications, starting
or stopping servers and applications, and so on.
---- Log in to the config user account.
This user can configure new servers and applications but does not have the authority to stop
and start servers and applications.
---- Log in to the operator account.
These users can only monitor whether the applications and servers are up and running; they cannot change anything.
Export: backs up the application; there is no need to stop the server.
Export DDL: backs up only the DDL (queries) of the application.
Custom User Registry:
Steps to follow to create custom user registry:
Process:
---- Go to Global Security under security.
Click Security Configuration Wizard.
Click Next.
Select standalone custom registry.
Click Next.
Provide Primary administrative user name: wasadmin
usersFile Path: C:\Registries\users.registry
groupsFile Path: C:\Registries\groups.registry
Click Next.
Click Finish.
Click Review.
Check Synchronize changes with Nodes.
Click Save.
Click Ok.
---- Go to Global Security under Security. Select LTPA under Authentication.
Click Review.
Check Synchronize changes with Nodes.
Click Save.
Click Ok.
---- Select Global Security under Security.
Select Standalone custom registry under Available realm definitions.
Click Apply.
Click Review.
Check Synchronize changes with Nodes.
Click Save.
Click Ok.
----- Logout dmgr and stop and start the dmgr.
Click Ok.
Click Apply.
Click Review.
Check Synchronize changes with Nodes.
Click Save.
Click Ok.
----- Logout dmgr console
Stop/start the dmgr.
Configure LDAP Using Sun One Directory Server 5.2 on
WAS 7.0
Click Next.
Click Accept License.
For the above Fully Qualified Computer Name we have to add an entry to the system's hosts file,
like "Rajashekhar.domain.com" (the host part is the user's choice, but a domain suffix such as
domain.com is required).
The system's hosts file is available at this path:
C:\Windows\System32\drivers\etc\hosts (edit this file). See the screenshot below for the hosts file.
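The entry itself is a plain line mapping an address to the fully qualified name. This fragment is illustrative; replace 127.0.0.1 with the machine's real IP if the directory server is not local.

```
127.0.0.1    Rajashekhar.domain.com    Rajashekhar
```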
Click Next.
Click Next.
Click Next.
Click Next.
Click Create Directory.
Click Next.
Click Next.
Click Next.
Click Next.
Enter password: $3kReT4wD.
Click Next.
Enter Password: #$8Yk$-%&
Click Next.
Click Next.
Click Ok.
The Sun ONE Server Console is displayed.
Navigate through the tree in the left-hand pane to find the system hosting your Directory
Server and click on it to display its general properties.
Click on Open.
Double-click the name of your Directory Server in the tree or click the Open button. The
Directory Server Console for managing this Directory Server instance is displayed.
Go to Directory tab.
Click on user.
Click ok.
Click on Edit with Generic Editor.
Note the uid=rkomma,dc=domain,dc=com
----- After creating the user in LDAP, start the dmgr and the app node.
---Login to dmgr console.
Click Next.
Select standalone LDAP registry.
Click Next.
Base distinguished name:
Specifies the base distinguished name of the directory service, which indicates the starting
point for Lightweight Directory Access Protocol (LDAP) searches in the directory service. For
example, ou=Rochester, o=IBM, c=us.
Click Finish.
Select LTPA
Password: password
Confirm password: password
Click Apply.
Click ok.
Check Enable administrative security.
Under available realm definitions: Standalone LDAP registry.
Click Apply.
Click Review.
Click Save.
Click ok.
------ Logout the dmgr console and stop and start the dmgr manager.
----- Login to the dmgr console with secure port no.
In the above screenshot you can observe "welcome rkomma"; the user credentials came from the
LDAP server.
How to configure WAS to stop application server without prompting for password even though
security is enabled?
1. First go to this path
C:\IBM_ND_6.0\WebSphere\AppServer\profiles\AppSrv01\properties\soap.client.props.
com.ibm.SOAP.securityEnabled=false
(here make false as true)
com.ibm.SOAP.securityEnabled=true
com.ibm.SOAP.loginUserid=test123
com.ibm.SOAP.loginPassword=password
save the file and go to below path
C:\IBM_ND_6.0\WebSphere\AppServer\bin>PropFilePasswordEncoder.bat <full path
of soap.client.props whether in DMGR or APPSRV01><com.ibm.SOAP.loginPassword>
C:\IBM_ND_6.0\WebSphere\AppServer\bin>PropFilePasswordEncoder.bat
C:\IBM_ND_6.0\WebSphere\AppServer\profiles\AppSrv01\properties\soap.client.pr
ops com.ibm.SOAP.loginPassword
Oracle JDBC and Data source configuration
---- Go to Resources and select JDBC provider.
Click New.
Select Database type: oracle
Provider type: oracle JDBC provider
Implementation type: Connection pool data source
Name: Oracle JDBC Driver.
Click Next.
Provide the ojdbc6.jar path location. (By default Oracle XE does not ship ojdbc6.jar; we have to
download that jar file and place it in the Oracle JDBC library directory.)
Click Next.
Click Finish.
Click Review.
Check synchronize changes with Nodes.
Click save.
Click Ok.
Click Oracle JDBC Driver (or) Go to Resources and select Data sources under JDBC.
From Scope selection drop-down list select
Node = app_Node01
Click New.
Provide Data source name: OracleDS
JNDI name: jdbc/OracleDS
Click Next.
Select Oracle JDBC Driver under select an existing JDBC provider radio button
Click Next.
Provide url: jdbc:oracle:thin:@rajasekhar-pc:1521:XE
Data store helper class name: Oracle 10g data store helper
Check CMP.
Click Next.
Click Next.
Click Finish.
Click Review.
Check synchronize changes with Nodes.
Click Save.
Click Ok
Click Ok
Click Review.
Check Synchronize changes with Nodes.
Click save.
Click Ok.
---- Go to Data sources. Select OracleDS.
Select Oracle_det under component managed authentication alias.
Click Apply.
Click Review.
Check Synchronize changes with Nodes.
Click Save.
Click Ok.
Check OracleDS
Synchronization has not happened yet; to fix this we have to stop the node of the AppSrv01
profile, perform synchronization, and start the node again.
Open CMD and run the commands in order.
Connection Successful.
Click New.
Provide Database type: DB2
Provider type: DB2 Universal JDBC Driver Provider.
Implementation type: Connection pool data source
Name: It will take default after provide above details (DB2 Universal JDBC Driver Provider).
Click Next.
Provide the db2jcc.jar file location.
Click Next.
Click Finish.
Click Review.
Check synchronize changes with Nodes.
Click Save.
Click Ok.
--- Go to Resources select Data Sources under JDBC.
Click New.
Provide Data source name = DB2DS
JNDI name = jdbc/DB2DS
Click Next.
Select DB2 Universal JDBC Driver Provider under select an existing JDBC provider radio button.
Click Next.
Provide Driver type = 4 (Default)
Database name = SAMPLE.
Server name = rajasekhar-pc
Port number = 50000 (Default)
Check CMP (Default it checked)
Click Next.
Click Next.
Click Finish.
Click Review.
Check Synchronize changes with Nodes.
Click Save.
Click Ok.
Click DB2DS.
Under Related items select JAAS – J2c Authentication data.
Click New.
Provide Alias = DB2_det
User ID = db2admin
Password = admin
Click Apply.
Click Review.
Check Synchronize changes with Nodes.
Click Save.
Click Ok.
---- Go to Resources. Select Data Sources under JDBC.
Click on DB2DS.
Select dmgr_Node01/DB2_det under Component-managed authentication alias.
Click Apply.
Click Review.
Check Synchronize changes with Nodes.
Click Save.
Click ok.
Click on Test Connection.
The first time you click Test connection it will throw an error,
because synchronization has not happened; we have to stop the node of the AppSrv01 profile,
perform synchronization, and start the node again.
Click ok.
Synch Command.
Click Ok.
Start App_srv01 Node.
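The stop/sync/start sequence above can be sketched as follows (the profile path, dmgr host, SOAP port, and credentials are hypothetical; the commands are echoed rather than executed):

```shell
# Hypothetical federated node's profile bin directory -- substitute your own.
NODE_BIN="/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin"
# Stop the node agent, sync the node with the dmgr, then start the agent again.
SYNC_CMDS="$NODE_BIN/stopNode.sh -username test123 -password test123
$NODE_BIN/syncNode.sh dmgr-host 8879 -username test123 -password test123
$NODE_BIN/startNode.sh"
echo "$SYNC_CMDS"
```

syncNode takes the dmgr host and its SOAP connector port (8879 here is only the common default; check your dmgr's serverindex.xml).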
Implementation Types:
1) Connection pool Data Source - Single phase commit.
2) XA Data Source – Two phase commit.
Connection Pool:
It contains predefined connection objects; the server does not create a new connection object
every time but reuses objects from the pool, governed by the connection pool parameters.
Connection Timeout:
Specify the interval, in seconds, after which a connection request times out and a
ConnectionWaitTimeoutException is thrown. This action can occur when the pool is at its maximum (Max
Connections) and all of the connections are in use by other applications for the duration of the wait. For
example, if Connection Timeout is set to 300 and the maximum number of connections is reached, the
Pool Manager waits for 300 seconds for an available physical connection. If a physical connection is not
available within this time, the Pool Manager throws a ConnectionWaitTimeoutException.
Min Connections:
Specify the minimum number of physical connections to be maintained. Until this number is
reached, the pool maintenance thread does not discard any physical connections. However, no attempt is
made to bring the number of connections up to this number. For example, if Min Connections is set to 3,
and one physical connection is created, that connection is not discarded by the Unused Timeout thread.
By the same token, the thread does not automatically create two additional physical connections to reach
the Min Connections setting.
Tip: Set Min Connections to zero (0) if the following conditions are true:
You have a firewall between the application server and database server.
Your systems are not busy 24x7.
Max Connections:
Specify the maximum number of physical connections that can be created in this pool.
These connections are the physical connections to the database. After this number is reached, no new
physical connections are created and the requester waits until a physical connection that is currently in
use is returned to the pool or a ConnectionWaitTimeoutException is thrown. For example, if Max
Connections is set to 5, and there are five physical connections in use, the Pool Manager waits for the
amount of time specified in Connection Timeout for a physical connection to become free. If, after that
time, there are still no free connections, the Pool Manager throws a ConnectionWaitTimeoutException to
the application.
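The interaction between Max Connections and Connection Timeout described above can be sketched as a tiny simulation. This is illustrative Python only, not WebSphere's actual pool manager; `PoolTimeoutError` stands in for `ConnectionWaitTimeoutException`, and the real pool blocks for the Connection Timeout interval before failing, where this sketch fails immediately to show the rule.

```python
class PoolTimeoutError(Exception):
    """Stands in for WebSphere's ConnectionWaitTimeoutException."""

class SimplePool:
    def __init__(self, max_connections):
        self.max_connections = max_connections
        self.in_use = 0

    def get_connection(self):
        # At Max Connections the real pool waits up to Connection Timeout
        # seconds for a returned connection; here we fail immediately.
        if self.in_use >= self.max_connections:
            raise PoolTimeoutError("pool at Max Connections, caller must wait")
        self.in_use += 1
        return "conn-%d" % self.in_use

    def release(self):
        self.in_use -= 1

pool = SimplePool(max_connections=2)
pool.get_connection()
pool.get_connection()
try:
    pool.get_connection()          # third request: pool is exhausted
except PoolTimeoutError as e:
    print("rejected:", e)
pool.release()                     # a connection returns to the pool
print(pool.get_connection())       # now succeeds: conn-2
```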
Unused Timeout:
Specify the interval in seconds after which an unused or idle connection is discarded.
Tips:
Set the Unused Timeout value higher than the Reap Timeout value for optimal performance.
Unused physical connections are only discarded if the current number of connections not in use
exceeds the Min Connections setting.
Make sure that the database server’s timeout for connections exceeds the Unused timeout
property specified here. Long lived connections are normal and desirable for performance.
For example, if the unused timeout value is set to 120, and the pool maintenance thread is enabled (Reap
Time is not 0), any physical connection that remains unused for two minutes is discarded. Note that
accuracy of this timeout and performance are affected by the Reap Time value.
Aged Timeout:
Specify the interval in seconds before a physical connection is discarded, regardless of
recent usage activity.
Setting Aged Timeout to 0 allows active physical connections to remain in the pool indefinitely. For
example, if the Aged Timeout value is set to 1200 and the Reap Time value is not 0, any physical
connection that remains in existence for 1200 seconds (20 minutes) is discarded from the pool. Note that
accuracy of this timeout and performance are affected by the Reap Time value.
Tip: Set the Aged Timeout value higher than the Reap Timeout value for optimal performance.
Reap Time:
Specify the interval, in seconds, between runs of the pool maintenance thread. For example, if
Reap Time is set to 60, the pool maintenance thread runs every 60 seconds. The Reap Time interval
affects the accuracy of the Unused Timeout and Aged Timeout settings. The smaller the interval you set,
the greater the accuracy. When the pool maintenance thread runs, it discards any connections that are
unused for longer than the time value specified in Unused Timeout, until it reaches the number of
connections specified in Min Connections. The pool maintenance thread also discards any connections
that remain active longer than the time value specified in Aged Timeout.
Tip: If the pool maintenance thread is enabled, set the Reap Time value less than the values of Unused
Timeout and Aged Timeout. The Reap Time interval also affects performance. Smaller intervals mean
that the pool maintenance thread runs more often and degrades performance.
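One run of the pool maintenance thread, combining the parameters above, can be sketched as follows. This is an illustrative sketch only (WebSphere's real maintenance thread is internal); connections are modeled as plain dicts with `created`/`last_used` timestamps in seconds.

```python
def reap(connections, now, min_connections, unused_timeout, aged_timeout):
    # Aged Timeout: discard regardless of activity (0 disables aging).
    survivors = [c for c in connections
                 if not aged_timeout or now - c["created"] < aged_timeout]
    remaining = len(survivors)
    kept = []
    for c in survivors:
        idle = now - c["last_used"]
        # Unused Timeout: discard idle connections, but never shrink
        # the pool below Min Connections.
        if not c["in_use"] and idle >= unused_timeout and remaining > min_connections:
            remaining -= 1
        else:
            kept.append(c)
    return kept

pool = [
    {"created": 0, "last_used": 900, "in_use": True},   # active
    {"created": 0, "last_used": 100, "in_use": False},  # idle 900s
    {"created": 0, "last_used": 50,  "in_use": False},  # idle 950s
    {"created": 0, "last_used": 950, "in_use": False},  # idle 50s
]
kept = reap(pool, now=1000, min_connections=2, unused_timeout=120, aged_timeout=0)
print(len(kept))   # 2: the active connection and the recently used one
```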
Purge Policy:
Specify how to purge connections when a stale connection or fatal connection error is detected.
Valid values are EntirePool and FailingConnectionOnly. If you choose EntirePool, all physical connections
in the pool are destroyed when a stale connection is detected. If you choose FailingConnectionOnly, the
pool attempts to destroy only the stale connection. The other connections remain in the pool. Final
destruction of connections that are in use at the time of the error might be delayed. However, those
connections are never returned to the pool.
Tip: Many applications do not handle a StaleConnectionException in the code. Test
and ensure that your applications can handle them.
Process:
---- Take a backup of the application.
Go to Enterprise applications under Applications.
Check the PlantsByWebSphere
Click on Export.
Click on save.
Click Back.
Click Save changes.
Click save.
Click Uninstall.
Click Ok.
Click Save Changes.
Click Save.
Click Next.
Click Next.
Click Continue.
Click Next.
Click Apply.
Click Next.
Click Next.
Click Next.
In Step 5, no changes are needed.
Click Next.
In Step 6, no changes are needed.
Click Next.
In Step 7, no changes are needed.
Click Next.
In Step 8, no changes are needed.
Click Next.
Click Continue.
In Step 9, select default_host under Virtual host.
Click Next.
In Step 10, no changes are needed.
Click Next.
Click Finish.
Click Save.
Click Save.
---- If the server is in stopped mode, we have to start the server.
----- Go to Enterprise applications.
Check PlantsByWebsphere.
Click Start.
---- To know under which server the PlantsByWebSphere application is running:
---- Click on PlantsByWebSphere.
Click on PlantsByWebSphere.
Click on PlantsByWebSphere.
Select Map Virtual hosts for Web modules under Additional Properties.
Click ok.
Click Save.
Check Synchronize changes with Nodes.
Click Save.
4) --- Go to Environment.
Note: In WAS 7.0 there is no need to create a new port for the applications, but in WAS 6.0 we
have to create a new port.
--- Creating new port.
Click on New.
Click Apply.
Click Save.
Click Save.
Click On Install.
Click Next.
Click Next.
In Step 1, no changes are needed.
Click Next.
In Clusters and Servers, select the server on which you want to deploy the application.
Click Review.
Check Synchronize changes with Nodes.
Click Save.
Click Ok.
---- Start Server2
----- To know under which server the application is running.
Go to WebSphere Enterprise Applications.
Click on PlantsByWebSphere.
Click on PlantsByWebSphere.
C:\IBM_ND_7.0\WebSphere\AppServer\profiles\AppSrv01\installedApps\dmgrcell01
\PlantsByWebSphere.ear\META-INF\application.xml.
C:\IBM_ND_7.0\WebSphere\AppServer\profiles\Dmgr01\logs\dmgr\systemout.log.
Click on PlantsByWebSphere.
Click Ok.
Click Review.
Click Save.
Click Ok.
Select Server2.
Expand Ports. WC_defaulthost:9085
5)---- Go to Environment.
Click on default_host.
Click on Host Aliases.
Here you can see the port number of the dedicated server that serves the PlantsByWebSphere
application.
1) Whenever a user makes a request, for example www.abc.com, the request initially goes to
the DNS. The DNS forwards that request to the load balancer.
2) The load balancer forwards the request to the web server. We are using IHS as the web server.
3) If it is a static request, i.e. the request is looking for HTML pages, images, etc., the
web server itself generates the response.
4) If it is a dynamic request, the web server routes the request to the corresponding
appserver with the help of the plug-in.
5) At startup, the web server loads the httpd.conf file, which contains the path
of the plugin-cfg.xml file.
6) The plugin-cfg.xml file contains complete information about the application server
environment, so the web server can forward the request to the corresponding appserver.
7) An appserver contains mainly two containers: A) Web Container B) EJB Container.
8) The web container is responsible for executing web resources like servlets, JSPs, HTML, etc.
9) The EJB container is responsible for executing EJB resources like session beans, entity beans,
and message-driven beans.
10) If a request is looking for web resources like servlets or JSPs, the web container itself
generates the response.
11) If a request is looking for EJB resources (session beans, entity beans, message-driven
beans), the request is forwarded to the EJB container via JNDI lookup over the RMI/IIOP
protocol.
12) If a request requires any database interaction, it is forwarded to the
connection pool; based on the connection pool properties it obtains a connection object,
and once the transaction is completed that connection object goes back to the pool.
13) Finally the response is forwarded from the web container to the web server, and the web
server forwards the response to the end user.
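The static/dynamic routing rule in steps 3 and 4 can be sketched as below. This is illustrative only: the real plug-in matches URIs against UriGroup entries in plugin-cfg.xml, and the context roots here are assumed examples, not read from any configuration.

```python
# Assumed example context roots for the deployed applications.
APP_CONTEXT_ROOTS = ["/PlantsByWebSphere", "/snoop"]

def route(uri):
    for root in APP_CONTEXT_ROOTS:
        if uri.startswith(root):
            return "application server"  # dynamic: servlet/JSP/EJB request
    return "web server"                  # static: HTML pages, images, etc.

print(route("/index.html"))              # web server
print(route("/PlantsByWebSphere/shop"))  # application server
```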
Click Ok.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Finish.
----- Start the Apache Server using below commands.
C:\HTTPServer_6.0\bin>apachemonitor.exe
----- Starting/stopping the Apache Web Server.
C:\HTTPServer_6.0\bin>Apache.exe –k start
Apache.exe –k stop.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Finish.
Note: In the WAS 6.0 ND package, before installing the plug-ins take a backup of the httpd.conf
file from the HTTP Server installation folder (C:\HTTPServer_6.0\conf\httpd.conf), just to
compare and better understand what changes are going to happen. Before installing the plug-in
there is no connection between the web server and WebSphere Application Server.
---- After installing the plug-ins, have a look at the httpd.conf file, because the plug-in
installation updates it with the WebSphere plug-in configuration path below.
C:\HTTPServer_6.0\Plugins\config\webserver1\plugin-cfg.xml.
We have to overwrite the above path by generating a new plug-in configuration file using
GenPluginCfg.bat. It generates a server plug-in configuration file for all of the servers in the
cell dmgr_cell01.
----- Open the httpd.conf file from the HTTP Server installation directory's conf folder. (The
path below is from before generating the new plug-in configuration file.)
----- In the httpd.conf file in the HTTP Server installation directory, replace the old plug-in
path with the newly generated plug-in path.
--- After completing all of the above, restart the dmgr and the HTTP web server.
---- Access the application using the URL http://rajasekhar-pc:80/PlantsByWebSphere,
and access the default application using the URL http://rajasekhar-pc:80/snoop .
Q) Where will you find the application access log?
Ans: Go to C:\HTTPServer_6.0\logs\access.log. From this path you can find the access entries.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
WAS 7.0 Plug-ins Installation:
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
---- Before installing the plug-in there is already an entry for the WebSphere plug-in
configuration in the httpd.conf file. The above dialog box is warning that the HTTP Server
configuration file already has plug-in entries. If you proceed, this configuration file
(httpd.conf) will be updated with the new plugin-cfg.xml file location.
Click Ok.
Click Next.
Click Next.
Click Next.
Click Next.
Click Next.
Click Finish.
Note: In the WAS 7.0 ND package, before installing the plug-ins take a backup of the httpd.conf
file from the HTTP Server installation folder (C:\HTTPServer_7.0\conf\httpd.conf), just to
compare and better understand what changes are going to happen. Before installing the plug-in
there is no connection between the web server and WebSphere Application Server.
---- After installing the plug-ins, have a look at the httpd.conf file, because the plug-in
installation updates it with the WebSphere plug-in configuration path below.
C:\HTTPServer_7.0\Plugins\config\webserver1\plugin-cfg.xml.
We have to overwrite the above path by generating a new plug-in configuration file using
GenPluginCfg.bat. It generates a server plug-in configuration file for all of the servers in the
cell dmgr_cell01.
Steps to access an application through web server:
----- Generate a plugin-cfg.xml file using the GenPluginCfg.bat command. This batch file is
available under the Dmgr01 profile home's bin directory; run it from the command prompt.
It generates a plugin-cfg.xml file under this path:
C:\IBM_ND_7.0\WebSphere\AppServer\profiles\Dmgr01\config\cells\plugin-cfg.xml.
----- Open the httpd.conf file from the HTTP Server installation directory's conf folder. (The
path below is from before generating the new plug-in configuration file.)
----- In the httpd.conf file in the HTTP Server installation directory, replace the old plug-in
path with the newly generated plug-in path.
--- After completing all of the above, restart the dmgr and the HTTP web server.
---- Access the application using the URL http://rajasekhar-pc:80/PlantsByWebSphere,
and access the default application using the URL http://rajasekhar-pc:80/snoop .
NameVirtualHost Plants
<VirtualHost Plants>
ServerName Plants
ServerAlias Server2
DocumentRoot
"C:\IBM_ND_6.0\WebSphere\AppServer\profiles\AppSrv01\installedApps\
dmgr_cell01\PlantsByWebSphere.ear\PlantsByWebSphere.war"
DirectoryIndex index.html
</VirtualHost>
NameVirtualHost Snoop1,abc
<VirtualHost Snoop1,abc>
ServerName Snoop1
ServerAlias Server1
DocumentRoot
"C:\IBM_ND_6.0\WebSphere\AppServer\profiles\AppSrv01\installedApps\
dmgr_cell01\DefaultApplication.ear\DefaultWebApplication.war"
DirectoryIndex index.html
</VirtualHost>
The first time you start the web server you will get an error message. Resolve it by adding an
entry for the application's virtual host name to the hosts file (on Linux, /etc/hosts).
Select server2
Click New.
Click Apply.
Click Save.
Click Save.
---- Now you can observe myport in server2's ports list.
----- Select server2
Click on server2.
Click New.
Click Next.
Click Next.
Click Finish.
Click Save.
Click Save.
---- Go to Environment.
Click on default_host.
Click Apply.
Click Save.
Click Save.
---- Restart the server2 (On which server you have deployed your application).
---- http://localhost:2000/PlantsByWebSphere/
Use this URL you can login to that application.
Vertical Cluster:
Here we create all the cluster members on the same box. If any one cluster member is down,
requests are routed to the other cluster members. But if the machine crashes completely, there is
a high impact on end users even though it is a clustered environment.
[Diagram: a cluster on Host A containing members Cl_s1, Cl_s2, and Cl_s3]
Horizontal Cluster:
Here we create the cluster members on different boxes. If any one cluster member is down, or
even if a machine crashes completely, end users can still access the application from another
cluster member running on another host. So here we can minimize the outage of the application.
A cluster member cannot be a member of multiple clusters. A horizontal cluster's members are
created under multiple nodes.
Vertical Cluster Creation Process in WAS 6.0:
Click New.
Click Next
Click Apply.
Click Next.
Click Finish.
Click Save.
Check Synchronize changes with Nodes.
Click Save.
Expand Servers. Select Clusters.
Click on Cluster01.
Click Next.
Click Next.
Click Continue.
Click Next.
Click Apply.
Click Next.
Click Next.
Click Next.
In Step 5, no changes are needed.
Click Next.
In Step 6, no changes are needed.
Click Next.
In Step 7, no changes are needed.
Click Next.
In Step 8, no changes are needed.
Click Next.
Click Continue.
In Step 9, select default_host under Virtual host.
Click Next.
In Step 10, no changes are needed.
Click Next.
Click Finish.
Click Save.
Click Save.
---- When you start the cluster after deploying applications, if any error occurs we have to sync
the changes (stop the AppSrv01 node, do the sync, and start the AppSrv01 node).
----- Go to Enterprise applications.
Check PlantsByWebsphere.
Expand Servers. Select Clusters.
Check Cluster01
Click Start.
Click on Cluster01.
Click on Cluster Members.
Start Custom Node under custom profile.
Check Cl_S1 Click Start.
Accessing the Application in Cluster environment through Web Server.
First check that the configuration entries were correctly added to plugin-cfg.xml after the
cluster configuration. When we configure a cluster, entries such as a ServerCluster element are added.
If the details were not correctly added to plugin-cfg.xml after cluster creation, we have
to generate a new plugin-cfg.xml.
Copy and replace the old plugincfg.xml path in httpd.conf file.
Restart the HTTP web server.
---- Access the snoop application and keep refreshing the browser. You can observe the
application being served from the cluster members (Server1 and Cl_s1) based on the weight of each
cluster member.
Select New.
Cluster name: Cluster01
Click Next.
Click Next.
Click on Add Member.
Click Next.
Click Finish.
Click Review.
Click Save.
Click Ok.
----- Deploying Application on Cluster.
----- Expand Applications. Click on New Application.
Click Next.
Click Next.
In Step 1, no changes are needed.
Click Next.
In Clusters and Servers, select the server on which you want to deploy the application.
Click Review.
Check Synchronize changes with Nodes.
Click save.
Click Ok.
----- Go to Applications
Cluster01 is partially started. (It shows as fully started only after all of the cluster members
are started.)
Click on Cluster01.
First check that the configuration entries were correctly added to plugin-cfg.xml after the
cluster configuration. When we configure a cluster, entries such as a ServerCluster element are added.
If the details were not correctly added to plugin-cfg.xml after cluster creation, we have
to generate a new plugin-cfg.xml.
Copy and replace the old plugincfg.xml path in httpd.conf file.
Restart the HTTP web server.
---- Access the snoop application and keep refreshing the browser. You can observe the
application being served from the cluster members (Cl_s1, Cl_s2, and Cl_s3) based on the weight
of each cluster member.
Update:
After an application is installed, if we want to add a Java or JSP file without uninstalling the
application, we use Update.
Rollout Update:
In a clustered environment, when there are changes to an application deployed on a cluster, there
is no need to stop all the members to apply them. Rollout Update stops one cluster member,
applies the update, and starts that member again; then it stops the next cluster member, and so on.
Class Loaders
Class loaders are part of the Java virtual machine (JVM) code and are responsible for
finding and loading classes (WebSphere Java classes and user application classes). There
are different types of class loaders.
Class loaders affect the packaging of applications and the run time behaviors of
packaged applications deployed on the application servers.
WebSphere Application Server provides several class loader hierarchies and options to
allow more flexible packaging of your applications.
During server startup, the class loader hierarchy is created.
Every class loader has a parent class loader, except the root class loader.
In the class loader hierarchy, a request to load a class can go from a child class
loader to a parent class loader, but never from a parent class loader to a child class
loader.
If a class is not found by a specific class loader or any of its parent class loaders,
a ClassNotFoundException results.
Delegation Mode:
Each class loader has a delegation (Search) mode that may or may not be configurable.
When searching for a class, a class loader can search the parent class loader before it
looks inside its own loader or it can look at its own loader before searching the parent
class loader.
Delegation algorithm does not apply to native libraries (*.dll, *.so, etc).
Delegation values are:
PARENT_FIRST – Delegate the search to the parent class loader FIRST before attempting
to load the class from the local class loader.
PARENT_LAST – First attempt to load classes from the local class path before delegating
the class loading to the parent class loader.
Allows an application class loader to override and provide its own version of a class that
exists in the parent class loader.
The JVM will cache the class on a successful load and will associate the class with its specific
class loader.
Java class loading allows for a search mode that lets a class loader search its own class path
before requesting a parent class loader or search via the parent class loader before its local
class path. These searches, or delegation modes, are PARENT_FIRST and PARENT_LAST
respectively.
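The PARENT_FIRST vs PARENT_LAST search order described above can be sketched with toy class loaders. This is a sketch of the concept only, not the JVM's or WebSphere's implementation; the loader names and the util.Log class are made-up examples.

```python
class Loader:
    def __init__(self, name, classes, parent=None, mode="PARENT_FIRST"):
        self.name, self.classes = name, classes
        self.parent, self.mode = parent, mode

    def load(self, cls):
        if self.mode == "PARENT_FIRST" and self.parent:
            found = self.parent.load(cls)    # delegate to the parent first
            if found:
                return found
        if cls in self.classes:
            return "%s from %s" % (cls, self.name)
        if self.mode == "PARENT_LAST" and self.parent:
            return self.parent.load(cls)     # only now ask the parent
        return None  # would surface as a ClassNotFoundException

# Both the server loader and the application loader provide util.Log.
server = Loader("server", {"util.Log"})
app_first = Loader("app", {"util.Log"}, parent=server, mode="PARENT_FIRST")
app_last = Loader("app", {"util.Log"}, parent=server, mode="PARENT_LAST")
print(app_first.load("util.Log"))  # util.Log from server (parent wins)
print(app_last.load("util.Log"))   # util.Log from app (local override)
```

This mirrors the point above: PARENT_LAST lets an application provide its own version of a class that also exists in the parent class loader.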
Class Loader Hierarchy:
Class loaders are organized in a hierarchy. This means that a child class loader can
delegate class finding and loading to its parent, should it fail to load a class.
1) JVM Class Loader
2) WebSphere Extension Class Loader.
3) WebSphere Server Class Loader.
4) Application Module Class Loader.
5) Web Module Class Loader.
In WebSphere Application Server V4, if you required JARs to be shared by more than one
application, you would put them in the Install_Root/lib/app directory. The drawback to this
is that the JARs in this directory are exposed to all the applications. In WebSphere
Application Server V5 and V6, shared libraries provide a better mechanism where only
applications that need the JARs are exposed to them. Other applications are not affected
by the shared libraries. The administrator defines the shared libraries by assigning a name
and specifying the file or directories that contain the code to be shared. These defined
shared libraries can then be associated with one or more applications running within the
Application Server.
The advantage of using an application-scoped shared library is the capability of using
different versions of common application artifacts by different applications. A unique shared
library can be defined for each version of a particular artifact, for example, a utility JAR.
If a native library loaded by a shared library requires a second native library, the second
library must not be specified in the shared library path. Rather, it must be specified in the
JVM native library path, and it will be loaded by the JVM class loader.
Native Libraries:
Native libraries are non-Java code used by Java via the Java Native Interface (JNI). They are
platform-specific files, for example ".dll" on Windows, ".so" and ".a" on UNIX.
Java applications use the System.loadLibrary(libname) method to load a native library. It
is loaded at the time of the System.loadLibrary(...) call.
The JVM uses the caller's class loader to load the native library. If that fails, it then uses the
JVM system class loader.
If both fail to load it, an UnsatisfiedLinkError results.
Native libraries are located on the native library paths of the JVM class loader and the
WebSphere application server (Extensions, server, and application module) class loaders.
Native libraries are loaded by Java using the System load library method. They are loaded
on demand, when needed. Native libraries could be located in the JVM class loader or one
of the WebSphere Application Server class loaders; namely, the Extension, Server, or the
Application module class loader.
WebSphere Application Server Extensions, Server, and Application module class loaders
define a local native library path, similar to the java.library.path supported by the JVM class
loader.
Shown here is an example hierarchy of an Application Class loader policy of Single and a
Web module class loader policy of Application for all the J2EE applications. Using these class
loader options sacrifices the isolation and dynamic reloading features.
Shown here is an example hierarchy of an Application Class loader policy of Single and a
Web module class loader policy of Module for all the J2EE applications.
With these policies, the Web module class loader is loaded by its own separate class
loader. This is the default J2EE class loader mode.
Shown here is an example hierarchy of the Application Class loader policy of Single and
Web module class loader policy of Module for some of the J2EE applications and
Application for the remaining applications.
With these policies, the Web module class loader for Module policy option is loaded by its own
separate class loader. For a Web module class loader policy of Application, they are loaded by
the Application Class loader.
Shown here is an example hierarchy of an Application Class loader policy of Multiple.
Each J2EE Application is loaded by its own class loader. Additionally, with the Web
Module class loader policy of Application, the Web modules of those applications are loaded by
the same class loader as the rest of the J2EE application classes.
For Application 1, the Web module class loader policy is Module. As a result, each Web module
of application 1 will have its own separate class loader, as shown by the WAR1 Web module.
For Application 2, the Web module class loader policy is Application. As a result, all Web
modules of application 2 will be loaded by the same class loader that loaded Application 2 classes,
as shown by the WAR2 Web module.
The example shown here has the Application class loader policy at the server level set to
Multiple.
As a result, the classes for both the applications are loaded by their own separate class
loader, as shown by the class loaders of Application 1 and 2.
However, the Web module class loader policy defines how the Web modules will be
loaded.
For Application 1, the Web module class loader policy is Module. As a result, each Web module
of application 1 will have its own separate class loader, as shown by the WAR1 Web module.
For Application 2, the Web module class loader policy is Application. As a result, all
Web modules of application 2 will be loaded by the same class loader that loaded the Application 2
classes, as shown by the WAR2 Web module.
Delegation or search mode is defined at the Application class loader and at the Web
module class loader levels. Based on the different delegation modes, the table shows the
search order path. At each level, if the delegation mode is PARENT FIRST, the search
goes to the parent. If the delegation mode is PARENT_LAST, the current class loader is searched
before the parent. These delegation modes help when an application requires
classes loaded from its own class loader rather than having it loaded by WebSphere
supplied classes. This provides flexibility.
The default mode is the PARENT_FIRST for both the application class loader and the Web
module class loader.
---- By default, the Multiple (application) and Module (Web module) class loader policies are used.
Apply Patches
Release:
This is the term used by WebSphere development and support for a major version of
WebSphere. The first two digits of the product version number identify the release.
Ex: 6.0, 6.1, 7.0, 8.0, and 8.5.
Refresh Pack:
This is the term used to identify an update to the product version which typically contains
feature additions and changes. The third digit of the version number identifies a refresh pack.
Ex: 6.0.0, 6.0.1, 6.1.0, and 7.0.0
FIX Pack:
This is the term used to describe a product update which includes defect fixes. The fourth
digit of the version number identifies a fix pack.
Ex: 6.0.1.0, 7.0.0.25 etc.
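The version-digit convention above (digits one and two name the release, digit three the refresh pack, digit four the fix pack) can be sketched as a small parser. Illustrative only; `parse_version` is not an IBM tool.

```python
def parse_version(version):
    # Split "7.0.0.25" into release / refresh pack / fix pack levels.
    parts = version.split(".")
    return {
        "release": ".".join(parts[:2]),
        "refresh_pack": ".".join(parts[:3]) if len(parts) >= 3 else None,
        "fix_pack": ".".join(parts[:4]) if len(parts) >= 4 else None,
    }

info = parse_version("7.0.0.25")
print(info["release"])       # 7.0
print(info["refresh_pack"])  # 7.0.0
print(info["fix_pack"])      # 7.0.0.25
```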
Trace Logs:
They contain detailed information about each and every activity. It is easy to pinpoint a
failure when tracing is enabled.
Logging Utilities:
1) Websphere Tracing.
2) Collector Tool.
3) Log Analyzer.
4) FFDC (First Failure Data Capture).
5) Thread Dumps.
6) Heap Dumps.
Websphere Tracing:
Tracing provides detailed information about the execution of WebSphere components,
including application servers, clients, and other processes in the environment.
Trace files show the time and sequence of methods called by WebSphere base classes.
We can use these files to pinpoint a failure.
The available tracing and logging levels are: all, finest, debug, error, warn, info,
fatal, config, audit, detail, and off.
Setting the logging level to "all" enables all logging for that component.
Setting the logging level to "off" disables all logging for that component.
Collector Tool:
It generates information about the WebSphere installation and packages it in a Java archive
(JAR) file. This JAR file can be sent to IBM technical support to assist in determining and
analyzing the problem. We run the collector tool using the command collector.sh (collector.bat on Windows).
---- Setting the collector tool class path:
---- Go to My Computer properties.
Select Advanced tab.
Select Environment Variables.
Under system variables add the below entry.
Variable Name: PATH
Variable Value: C:\IBM_ND_7.0\WebSphere\AppServer\bin
---- Execute the command collector.bat through cmd prompt.
---- At the end of execution process one jar file will create. We have to send that file to IBM
technical support.
Log Analyzer:
By default, activity log files are in binary format. We have to use Log Analyzer to read the
activity.log file. It merges all the data and displays the entries.
Based on its symptom database, the tool analyzes and interprets the events or error conditions in
the log entries to diagnose the problem.
We can open Log Analyzer by using a command (in WAS 6.0 only):
waslogbr.sh
FFDC (First Failure Data Capture):
This tool preserves the information generated from a processing failure and returns control
to the affected engines. The tool saves the captured data in a log file for analyzing the
problem.
FFDC runs in the background and collects events and errors that occur during
WebSphere Application Server runtime. By default, FFDC log files are created under
<profile-home>/logs/ffdc.
The main use of these FFDC logs is to get support from IBM (when you raise a PMR
(Problem Management Record) with IBM, they analyze the log files).
Thread Dump:
Thread dumps are snapshots of a JVM at a given time. A thread dump contains information
about the threads and is used mostly to debug hung threads.
We can generate a thread dump by using the command
kill -3 <pid>
It generates a thread dump file named
javacore.<timestamp>.<pid>.<dump number>.txt under the profile home directory.
Alternatively we can generate a thread dump by using a command
$AdminControl invoke $jvm dumpThreads.
Heap Dump:
It contains information about Java objects, such as the size of each object, the relations
between objects, and the references to each object. Heap dumps are mostly useful in debugging
memory leaks.
We can generate a heap dump by using a command
$AdminControl invoke $jvm generateHeapdump
Click New.
Click ok.
Session Affinity:
In a clustered environment, any HTTP requests associated with an HTTP session must be routed
to the same Web application in the same JVM. This ensures that all of the HTTP requests are
processed with a consistent view of the user's HTTP session. The exception to this rule is when
the cluster member fails or has to be shut down.
WebSphere assures that session affinity is maintained in the following way: Each server ID is
appended to the session ID. When an HTTP session is created, its ID is passed back to the
browser as part of a cookie or URL encoding. When the browser makes further requests, the
cookie or URL encoding will be sent back to the Web server. The Web server plug-in examines
the HTTP session ID in the cookie or URL encoding, extracts the unique ID of the cluster
member handling the session, and forwards the request.
This situation can be seen in Figure 12-3, where the session ID from the HTTP header,
request.getHeader(“Cookie”), is displayed along with the session ID from session.getId(). The
application server ID is appended to the session ID from the HTTP header. The first four
characters of HTTP header session ID are the cache identifier that determines the validity of
cache entries.
The JSESSIONID cookie can be divided into these parts: cache ID, session ID, separator, clone ID,
and partition ID. The JSESSIONID includes a partition ID instead of a clone ID when memory-to-
memory replication in peer-to-peer mode is selected. Typically, the partition ID is a long
numeric value.
Table 12-1 shows their mappings based on the example in Figure 12-3. A clone
ID is an ID of a cluster member.
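The cookie layout described above (the first four characters are the cache ID, then the session ID, a ":" separator, then the clone ID of the cluster member) can be sketched as a small parser. The cookie value below is a made-up example following that layout, not a captured session; only the clone ID vuel491u is taken from Example 12-4.

```python
def split_jsessionid(value):
    # Cache ID: first four characters of the cookie value.
    cache_id = value[:4]
    # Session ID up to the ":" separator, clone ID after it.
    session_id, _, clone_id = value[4:].partition(":")
    return cache_id, session_id, clone_id

cache, session, clone = split_jsessionid("0000CBTHI2TN5cpCSUabcdefghi:vuel491u")
print(cache)  # 0000
print(clone)  # vuel491u
```

This is how the Web server plug-in can extract the clone ID and forward the request to the cluster member that owns the session.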
The application server ID can be seen in the Web server plug-in configuration
file, plugin-cfg.xml, as shown in Example 12-4.
Example 12-4 Server ID from plugin-cfg.xml file
<?xml version="1.0" encoding="ISO-8859-1"?><!--HTTP server plugin
config file for the cell ITSOCell generated on 2004.10.15 at 07:21:03
PM BST-->
<Config>
......
<ServerCluster Name="MyCluster">
<Server CloneID="vuel491u" LoadBalanceWeight="2"
Name="NodeA_server1">
<Transport Hostname="wan" Port="9080" Protocol="http"/>
<Transport Hostname="wan" Port="9443" Protocol="https">
......
</Config>
Note: Session affinity can still be broken if the cluster member handling the request fails. To
avoid losing session data, use persistent session management. In persistent sessions mode,
cache ID and server ID will change in the cookie when there is a failover or when the session is
read from the persistent store, so do not rely on the value of the session cookie remaining the
same for a given session.
Issues:
Issue with an application module: We are able to access the application, but whenever we select
modules to view the data, no data is returned.

This was a sev1 incident ticket. To troubleshoot, I first checked the web server: it was up and
running, and I found a corresponding entry in the access.log file, so the requests were reaching
the application server and the web server was not the problem. In the JVM logs (SystemErr.log) I
found the message "could not invoke service method on servlet DetailServlet" together with a
java.lang.IllegalStateException.

This exception can be caused by errors in the code or by invalid session objects at the time of
response generation. There were no recent code changes: the application had been deployed a
week earlier and was accessible after deployment. I therefore suspected a problem with session
objects, because along with the IllegalStateException the log showed a session object with its
internal ID.

To resolve the issue, after getting approval I logged in to the admin console, increased the
session timeout value for that application under Additional Properties, and restarted the
application. We never saw this issue from that module again. While resolving it I also went
through the HttpSession interface life cycle and understood how new session objects are created
and how the invalidate method is called; with that clear understanding it was evident that
increasing the session timeout value was the fix.
Synchronization:
There are two types of synchronizations in WebSphere.
1) Partial or Normal Synchronization.
2) Full Synchronization.
Auto sync is usually a partial sync, but auto sync performs a full sync the first time a
sync happens after the node agent is started.
When a user clicks the Synchronize option in the admin console, that is a partial (normal) sync.
We cannot do a partial sync by using syncNode.sh: whenever we issue the syncNode.sh command,
it is a full sync.
Sync uses a cache mechanism called an epoch, which is used by the node agent and the dmgr to
determine which files in the master repository have changed since the last sync.
When the next sync operation is invoked, only the folders in the cache are compared. If the
epochs for a folder in the cache are different, the folder is considered modified and is
checked in the sync process.
The refreshRepositoryEpoch JMX operation cleans up the cache, which forces the next sync
operation to check all folders in the repository, resulting in a full sync.
In a full sync, all folders are checked, including files that were modified manually; the
updated files are detected and pushed to the corresponding location in the node repository.
In a partial (auto) sync, manually changed files are not pushed to the node repository, because
the dmgr is not aware of those changes and does not put the folders containing the modified
files into the cache; a partial sync therefore does not know about those files and will not
check them.
A full sync checks all folders and files, including manually modified ones, and so maintains
accurate data.
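The epoch comparison can be pictured with a toy model in Python. The dictionaries below are illustrative only; they are not WebSphere's internal data structures.

```python
def folders_to_check(master_epochs, node_epochs, full_sync=False):
    """Return the folders a sync operation will examine.

    master_epochs / node_epochs map a folder path to an epoch value.
    A partial sync only examines folders whose epochs differ;
    a full sync examines every folder in the master repository.
    """
    if full_sync:
        return sorted(master_epochs)
    return sorted(folder for folder, epoch in master_epochs.items()
                  if node_epochs.get(folder) != epoch)

# Hypothetical repository state: one folder changed since the last sync.
master = {"cells/myCell": 7, "cells/myCell/nodes/node01": 4}
node = {"cells/myCell": 7, "cells/myCell/nodes/node01": 3}
changed = folders_to_check(master, node)          # only the differing folder
everything = folders_to_check(master, node, True) # full sync: all folders
```

A manual edit on the node that the dmgr never recorded produces no epoch difference, which is why a partial sync misses it and a full sync does not.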
HTTP 404 ERROR
1. This error may occur due to a problem in the web server configuration, an issue with the plug-in, the
virtual host configuration, or because the application server or the application is not running.
2. To resolve this issue, first check whether the web server is responding; we can check it by requesting
the web server's default page from a browser.
3. If the web server is up and running, we get the welcome page, which confirms there is no issue with
the web server.
4. If we do not get a response, there may be an issue with the web server and we need to look into its
log files.
5. Check the status of the application server and of the application we are trying to access. If the
server is not running, start the server; if the application is not started, start the application.
6. If the failure occurs in the application server, the SystemOut.log file contains information about the
issue. If there are no messages about the failure, the requests may not be reaching the application
server.
7. A 404 status code can also appear in the access.log file when the customer gives an incorrect URL, or
when a component required to access the resource is unavailable.
8. It is better to make a note of the URL at the time the error is displayed.
9. In some cases we get the error message "JSP Error: Failed to find a resource" with code JSPG0036E
along with a 404 status code. This can occur because the page is not available on the server, or because
of the web server and plug-in configuration with the application server.
10. Look into the access.log and error.log files under the web server logs directory.
11. In the error.log file the entries have the format <TIMESTAMP> [error] [client REMOTEHOST]
error message.
12. For example, if the access.log file contains a 404 status code and the error.log file contains
"File does not exist: <filepath>", the file does not exist in the web server. Verify that the URL is
correct and make sure the file is available in the proper location.
13. If there is no error message in the error.log file but the access.log file contains a 404 status
code, the problem may be with the web server plug-in or with the application server: it indicates the
web server did not consider this a request that it should handle itself.
14. Make sure the plugin-cfg.xml file was regenerated after changes to the environment. If it was not,
check its timestamp, generate a new plugin-cfg.xml file, and verify that we can access the application
with the new updates.
15. Sometimes it is necessary to look into the URL patterns in the deployment descriptor (the web.xml
file inside the application) to access it with the proper URL.
16. Verify the virtual host configuration under Environment > Virtual Hosts > Host Aliases; edit the host
name or port number if necessary and regenerate the plugin-cfg.xml file.
17. Access the application directly from the application server by using the application server port; this
confirms whether the issue is with the application server or with the web server and plug-in.
18. This is the way to resolve a 404 issue. If we start debugging from the web server level, it is easy to
identify where the problem is occurring: in the web server, the plug-in, or the application server.
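Step 12's check of the access.log can be partially automated. The sketch below is a rough illustration (the log line and the suggested next step are hypothetical, not from any IBM tool): it pulls the status code and URL out of a Combined Log Format entry.

```python
import re

# Request/status portion of a Combined Log Format entry, e.g.:
# 127.0.0.1 - - [17/Dec/2012:10:00:00 +0000] "GET /snoop HTTP/1.1" 404 210
LOG_RE = re.compile(r'"(?P<method>\S+) (?P<url>\S+)[^"]*" (?P<status>\d{3})')

def triage_404(line):
    """Suggest a next debugging step for one access.log entry."""
    m = LOG_RE.search(line)
    if m is None:
        return "unparseable entry"
    if m.group("status") != "404":
        return "status %s: not a 404" % m.group("status")
    return ("404 on %s: verify the URL, then check error.log and "
            "plugin-cfg.xml" % m.group("url"))

hint = triage_404('127.0.0.1 - - [17/Dec/2012:10:00:00 +0000] '
                  '"GET /snoop HTTP/1.1" 404 210')
```

Scanning the whole access.log this way quickly shows which URLs are returning 404 and therefore where to focus in steps 12-17.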
Garbage Collection:
It is a mechanism provided by the JVM to reclaim heap space from unreachable Java objects. In C or
C++ the programmer has to take responsibility for memory management, whereas in Java memory
management is taken care of by the garbage collector, so a Java developer can focus more time on
business logic.
Throughput:
It is the percentage of total elapsed time that the application spends doing useful work, as opposed
to being paused for garbage collection.
Pause Time:
It is the duration of time for which garbage collection has paused (held) all application threads in
order to collect the heap.
Optimal Throughput (optthruput):
This is the default GC policy. It is typically used for applications where throughput is more
important than short GC pauses.
Optimal Average Pause (optavgpause):
We can choose this policy to trade some throughput for shorter GC pauses: part of the garbage
collection is performed concurrently, so the application is paused only for shorter periods.
Generational Concurrent (gencon):
It handles short-lived objects differently from long-lived objects. We can use this policy if the
application has many short-lived objects and needs short pause times together with good throughput.
Subpool (subpooling):
It is more suitable for large multiprocessor machines, and it is advisable on SMP machines with 16 or
more processors. This policy is only available on IBM pSeries and zSeries. Applications that need to
scale on large machines can benefit from this policy.
Verbose GC is a mechanism that logs information about garbage collection: when GC is called and
which objects are removed.
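On an IBM JVM, verbose GC is typically enabled through generic JVM arguments (set under Process Definition > Java Virtual Machine in the admin console; the log path below is only an example):

```
-verbose:gc
-Xverbosegclog:/tmp/verbosegc.log
```

With -Xverbosegclog the GC records go to the named file instead of native_stderr.log, which keeps the server logs clean while tuning.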
On a network deployment installation, verify that the deployment manager is running before you install an
application. Use the startManager command utility to start the deployment manager.
There are two ways to complete this task. Complete the steps in this topic to use the AdminApp object to
install enterprise applications. Alternatively, you can use the scripts in the AdminApplication script library
to install, uninstall, and administer your application configurations.
The scripting library provides a set of procedures to automate the most common administration functions.
You can run each script procedure individually, or combine several procedures to quickly develop new
scripts.
About this task
Use this topic to install an application from an enterprise archive file (EAR), a Web archive (WAR) file, a
servlet archive (SAR), or a Java archive (JAR) file. The archive file must end
in .ear, .jar, .sar or .war for the wsadmin tool to complete the installation. The wsadmin tool uses
these extensions to determine the archive type. The wsadmin tool automatically wraps WAR and JAR
files as an EAR file.
Best practice: Use the most recent product version of the wsadmin tool when installing
applications to mixed-version environments to ensure that the most recent wsadmin options and
commands are available.
Procedure
For example, if your configuration consists of a node, a cell, and a server, you can specify that
information when you enter the install command. Review the list of valid options for
the install and installinteractive commands in the Options for the AdminApp object install,
installInteractive, edit, editInteractive, update, and updateInteractive commands using wsadmin
scripting topic to locate the correct syntax for the -node, -cell, and -server options. For this
configuration, use the following command examples:
Using Jython:
AdminApp.install('location_of_ear.ear','[-node nodeName -cell cellName
-server serverName]')
Using Jacl:
$AdminApp install "location_of_ear.ear" {-node nodeName -cell cellName
-server serverName}
You can also obtain a list of supported options for an enterprise archive (EAR) file using
the options command, for example:
Using Jython:
print AdminApp.options()
Using Jacl:
$AdminApp options
You can set or update a configuration value using options in batch mode. To identify which
configuration object is to be set or updated, the values of read only fields are used to find the
corresponding configuration object. All the values of read only fields have to match with an
existing configuration object, otherwise the command fails.
You can use pattern matching to simplify the task of supplying required values for certain
complex options. Pattern matching only applies to fields that are required or read only.
You can install the application in batch mode, using the install command, or you can install the
application in interactive mode using the installInteractive command. Interactive mode prompts
you through a series of tasks to provide information. Both the install command and
the installInteractive command support the set of options you chose to use for your installation in
the previous step.
4. Install the application. For this example, only the server option is used with the install command,
where the value of the server option is serv2. Customize
your install or installInteractive command with the options you chose based on your
configuration.
Table 1. install cluster command elements. Run the install command with the -cluster option.
Using Jython (options as a single string, or as a list):
AdminApp.install('c:/MyStuff/application1.ear', '[-cluster cluster1]')
AdminApp.install('c:/MyStuff/application1.ear', ['-cluster', 'cluster1'])
Using Jacl:
$AdminApp install "c:/MyStuff/application1.ear" {-cluster cluster1}
Table 2. installInteractive command elements. Run the installInteractive command with the name of the
application to install.
Using Jython:
AdminApp.installInteractive('c:/MyStuff/application1.ear')
Using Jacl:
$AdminApp installinteractive "c:/MyStuff/application1.ear"
AdminConfig.save()
AdminApp.install('/root/Desktop/DefaultApplication.ear','[-node
app_node -cell appCell01 -server server1]')
print AdminApp.options()
AdminConfig.save()
Dmgr/bin>wsadmin -lang jython
Procedure
wsadmin>AdminApp.uninstall('DefaultApplication.ear')
ADMA5017I: Uninstallation of DefaultApplication.ear started.
ADMA5106I: Application DefaultApplication.ear uninstalled successfully.
''
3.Save the configuration changes.
Use the following command example to save your configuration changes:
wsadmin>AdminConfig.save()
4. In a network deployment environment only, synchronize the node.
wsadmin>AdminNodeManagement.syncActiveNodes()
Maximum heap size?
For a 32-bit JVM the maximum heap size is limited to about 2 GB (less in practice, depending on the
operating system); for a 64-bit JVM the maximum heap size is effectively unlimited, bounded only by
available memory.
You might need to increase the deployment manager heap size.
Procedure
1. Open the Integrated Solutions Console.
2. On the left-hand side, expand the System Administration heading and click Deployment
manager.
3. Under Server Infrastructure, expand the Java and Process Management heading and
click Process Definition.
4. Under Additional properties, select Java Virtual Machine.
5. In the Maximum Heap Size text box, specify the new maximum heap size.
6. Click OK and save.
7. Restart the deployment manager for the changes to take effect.
You might need to increase the WebSphere® Application Server heap size.
Procedure
1. Open the Integrated Solutions Console.
2. On the left-hand side, expand the Servers heading and click Application servers.
3. Click the name of the WebSphere Application Server you want to modify.
4. Under Server Infrastructure, expand the Java and Process Management heading and
click Process Definition.
5. Under Additional properties, select Java Virtual Machine.
6. In the Maximum Heap Size text box, specify the new maximum heap size.
7. Click OK and save.
8. Restart the application server for the changes to take effect.
You might need to increase the WebSphere® Application Server Node Agent heap size.
Procedure
1. Open the Integrated Solutions Console.
2. On the left-hand side, expand the System Administration heading and click Node agents.
3. Click the name of the node agent that you want to modify.
4. Under Server Infrastructure, expand the Java and Process Management heading and
click Process Definition.
5. Under Additional properties, select Java Virtual Machine.
6. In the Maximum Heap Size text box, specify the new maximum heap size.
7. Click OK and save.
8. Restart the node agent for the changes to take effect.
If we give heap size value same for both min and max then what are the
advantages and what are the disadvantages?
The Java heap parameters influence the behaviour of garbage collection. Increasing the heap size
supports more object creation. Because a large heap takes longer to fill, the application runs longer
before a garbage collection occurs. However, a larger heap also takes longer to compact and causes
garbage collection to take longer.
The JVM has thresholds it uses to manage the JVM's storage. When the thresholds are reached,
the garbage collector gets invoked to free up unused storage. Therefore, garbage collection can cause
significant degradation of Java performance. Before changing the initial and maximum heap sizes, you
should consider the following information:
In the majority of cases you should set the maximum JVM heap size to a value higher than the
initial JVM heap size. This allows for the JVM to operate efficiently during normal, steady state periods
within the confines of the initial heap but also to operate effectively during periods of high transaction
volume by expanding the heap up to the maximum JVM heap size.
In some rare cases where absolute optimal performance is required you might want to specify
the same value for both the initial and maximum heap size. This will eliminate some overhead that
occurs when the JVM needs to expand or contract the size of the JVM heap. Make sure the region is
large enough to hold the specified JVM heap.
Beware of making the Initial Heap Size too large. While a large heap size initially improves
performance by delaying garbage collection, a large heap size ultimately affects response time when
garbage collection eventually kicks in because the collection process takes more time.
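In generic JVM argument terms, the two approaches above look like this (the sizes are only examples):

```
-Xms512m  -Xmx1024m   # usual case: initial below maximum, heap can grow under load
-Xms1024m -Xmx1024m   # fixed heap: no expand/contract overhead, but no headroom
```

-Xms sets the initial heap size and -Xmx the maximum; equal values give the rare fixed-heap configuration discussed above.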
Synopsis:
rotatelogs [ -l ] [ -f ] logfile rotationtime|filesizeM [ offset ]
Options:
-l
Causes the use of local time rather than GMT as the base for the interval or
for strftime(3) formatting with size-based rotation. Note that using -l in an
environment which changes the GMT offset (such as for BST or DST) can lead to
unpredictable results!
-f
Causes the logfile to be opened immediately, as soon as rotatelogs starts,
instead of waiting for the first logfile entry to be read (for non-busy sites, there may
be a substantial delay between when the server is started and when the first
request is handled, meaning that the associated logfile does not "exist" until then,
which causes problems for some automated logging tools). Available in version
2.2.9 and later.
logfile
The path plus basename of the logfile. If logfile includes any '%' characters, it is
treated as a format string for strftime(3). Otherwise, the suffix .nnnnnnnnnn is
automatically added and is the time in seconds. Both formats compute the start
time from the beginning of the current period. For example, if a rotation time of
86400 is specified, the hour, minute, and second fields created from
the strftime(3) format will all be zero, referring to the beginning of the current
24-hour period (midnight).
When using strftime(3) filename formatting, be sure the log file format has
enough granularity to produce a different file name each time the logs are rotated.
Otherwise rotation will overwrite the same file instead of starting a new one. For
example, if logfile was /var/logs/errorlog.%Y-%m-%d with log rotation at 5
megabytes, but 5 megabytes was reached twice in the same day, the same log file
name would be produced and log rotation would keep writing to the same file.
rotationtime
The time between log file rotations in seconds. The rotation occurs at the beginning
of this interval. For example, if the rotation time is 3600, the log file will be rotated
at the beginning of every hour; if the rotation time is 86400, the log file will be
rotated every night at midnight. (If no data is logged during an interval, no file will
be created.)
filesizeM
The maximum file size in megabytes followed by the letter M to specify size rather
than time.
offset
The number of minutes offset from UTC. If omitted, zero is assumed and UTC is
used. For example, to use local time in the zone UTC -5 hours, specify a value of
-300 for this argument. In most cases, -l should be used instead of specifying an
offset.
Examples:
CustomLog "|bin/rotatelogs /var/logs/logfile 86400" common
This creates the files /var/logs/logfile.nnnn where nnnn is the system time at which the log nominally
starts (this time will always be a multiple of the rotation time, so you can synchronize cron scripts
with it). At the end of each rotation time (here after 24 hours) a new log is started.
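The nominal start time can be reproduced in Python; this is a sketch of the arithmetic only, not of rotatelogs itself: the suffix is the current time rounded down to a multiple of the rotation time.

```python
def rotation_suffix(now, rotationtime):
    """Start of the current rotation period, in seconds since the epoch.

    Both arguments are in seconds. The result is always a multiple of
    rotationtime, which is why cron scripts can predict the log file name.
    """
    return now - (now % rotationtime)

# With hourly rotation (3600 s), any time inside an hour maps to its start.
suffix = rotation_suffix(7300, 3600)   # -> 7200
```

The `.nnnnnnnnnn` appended to the logfile name is exactly this value for the rotation period in which the log was opened.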
CustomLog "|bin/rotatelogs -l /var/logs/logfile.%Y.%m.%d 86400" common
This creates the files /var/logs/logfile.yyyy.mm.dd where yyyy is the year, mm is the month, and dd is
the day of the month. Logging will switch to a new file every day at midnight, local time.
CustomLog "|bin/rotatelogs /var/logs/logfile 5M" common
This configuration will rotate the logfile whenever it reaches a size of 5 megabytes.
On Windows:
CustomLog "|D:/viewstore/snapshot/ralberts/apache/bin/rotatelogs.exe
D:/viewstore/snapshot/ralberts/apache/logs/access.%Y-%m-%d-%H_%M_%S.log 60" common
Portability:
The following logfile format string substitutions should be supported by all strftime(3) implementations;
see the strftime(3) man page for library-specific extensions.