Agility Eservices Performance Test Summary Report for Internet-Based Testing
Version 1.0
Submitted To
Agility Logistics
Submitted By
www.AppLabs.com
22-01-2010
Venkateswarlu Gunduboyina
Agility E Services is a product of Agility Logistics. The GIL business group offers an integrated portfolio of logistics solutions supported by a comprehensive network of warehousing facilities, transportation and freight management services worldwide. DGS, our government contracting business group, leverages our global logistics network and our track record of exceptional service to provide comprehensive logistics solutions to the defence and government sector.
Agility Logistics operates several applications, which are categorized based on criticality and business requirements.
AppLabs performed a proof of concept (POC) test on the UAT staging server to demonstrate its effectiveness in reporting client-side and server-side observations, identifying bottlenecks and providing appropriate recommendations where applicable.
AppLabs tested the Agility E Services application to observe its performance and identify any critical areas causing response time delays. The Agility E Services application has various modules deployed on an IIS web server with SQL Server as the database backend.
The objective of the test was to generate a 750 concurrent user load on the Agility E Services application over the Internet, with the specified transaction scenarios and think times, and to observe the client-side and server-side performance. The Agility team gave the application walkthrough on 28th December 2009.
Scripting of the defined scenarios was started using Silk Performer 2008 R2 once the test plan was approved. Initially, a few users and a small data set were loaded for scripting purposes; later, all the data necessary for the load test was loaded. Five machines were allocated for the console and load generation at AppLabs for Internet-based testing. Silk Performer was installed on all the machines, and the 1000-user license was installed on the machine used as the console.
A scripting framework was developed initially for all scripts, covering PreOrder Booking, Order Booking, Cargo Arrival, Prepare Stuffing Plan, Stuffing Plan Execution, Container Land Movement, Update MBL Details and Departure Confirmation. A few application functionality issues were identified and escalated to the Agility team; they were fixed immediately and the application was made ready for scripting.
Server-side scripts for Windows were given to the data centers to deploy and to validate the logs for the application and database servers. The logs were validated, and the appropriate scripts were copied and made ready to generate the server-side metrics during the main test. A 100-user smoke test was performed to validate the scripts, the server logs and the load generators' capability.
Application server resource utilization was normal, with CPU processor time averaging around 50% even before the test started. A few spikes were noticed during the test, and CPU utilization reached a maximum of 100% at one point. Application server memory remained stable: no paging activity was noticed and memory was well managed by the application server. Minimal disk resources were utilized during the test.
Database server CPU peaked at 10% and stayed consistently around 9% until the end of the test. Memory was lightly utilized and disk usage was optimal.
MOSS server CPU peaked at 9.1% and averaged 1.9%. Memory and disk utilization were also optimal. However, ASP.NET requests were getting queued, which caused high response times for end users.
IIS server CPU spiked to 100% and averaged 25%. Memory and disk utilization were optimal.
The application was responsive up to the 400-user load level. Whenever the load increased above 400 users, application responsiveness decreased and the application returned a larger number of Internal Server Errors and redirections.
Module | Step | Error Description | # of Errors | Reason for Error
Hits per second (HPS) depicts the total number of hits generated by each request. HPS follows the same pattern as throughput unless there are abnormal errors or the same page/object is downloaded a very large number of times.
From the graph above, it can be observed that hits increase as the user load increases and then taper off slowly, with the usual spikes.
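The load tool reports this metric directly; purely to make the definition concrete, the following is a minimal sketch that derives hits per second by bucketing raw request timestamps (the log format is an assumption, not the tool's output):

    from collections import Counter

    def hits_per_second(request_timestamps):
        """Bucket request timestamps (seconds since test start) into
        1-second intervals and count the hits in each bucket."""
        buckets = Counter(int(ts) for ts in request_timestamps)
        duration = int(max(request_timestamps)) + 1
        # One value per elapsed second, zero-filled for idle seconds.
        return [buckets.get(sec, 0) for sec in range(duration)]

    # Example: six requests over ~3 seconds -> [2, 3, 1]
    print(hits_per_second([0.1, 0.9, 1.2, 1.5, 1.8, 2.4]))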
There were a total of 1,685 successful transactions (excluding the login and logout transactions) and 458 errors. The following tables show the individual module-wise breakup of transactions.
Create PreOrder:
OrderBooking:
There are three types of timers: page timers, step timers and transaction timers.
Page timers represent the timers for the individual pages inside every step.
For example: Get Activity Item is considered a step, and the following pages are considered part of it.
1. Selecting item
2. Selecting date
Step timers are the timers defined for each step of the scenario.
For example: the Preorder Booking activity is considered a scenario, and the following steps are considered part of it.
1. Login
2. Create PreOrderRequest
3. Enter Details
4. Submit
5. Logout
Transaction timers are the timers for entire transaction scenarios (a sketch of this timer nesting follows the list below). They are:
1. Preorder booking
2. Orderbooking
3. Cargo Arrival
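These three timer levels nest inside one another. As an illustration only (the class and names below are hypothetical, not Silk Performer APIs), the hierarchy can be modelled as nested timers:

    import time

    class Timer:
        """A named timer that can contain child timers, mirroring the
        transaction -> step -> page hierarchy described above."""
        def __init__(self, name):
            self.name, self.children, self.elapsed = name, [], 0.0

        def child(self, name):
            t = Timer(name)
            self.children.append(t)
            return t

        def __enter__(self):
            self._start = time.perf_counter()
            return self

        def __exit__(self, *exc):
            self.elapsed = time.perf_counter() - self._start

    # Hypothetical usage for the Preorder Booking scenario:
    with Timer("Preorder Booking") as txn:                 # transaction timer
        with txn.child("Create PreOrderRequest") as step:  # step timer
            with step.child("Selecting item"):             # page timer
                pass  # the page request would be issued here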
The following graph represents the response time statistics observed for the Agility E Services Activity Sheet scenario.
The following tables represent the response time statistics for the step and transaction timers observed on the client side at the 200, 400 and 750 user loads respectively.
400 Users Steady Period           Avg       Min       Max
Overall Response Time             456.403   4.657     2,202.20
AES_CA_S01_Login                  810.834   138.422   2,202.20
AES_CA_S02_ClickInstance          270.608   88.172    886.516
AES_CA_S03_ReceiveCargoArrivals   564.545   207.656   1,168.70
AES_CA_S04_SubmitCargoDetails     243.944   125.625   329.032
AES_CA_S05_ForcedSignout          282.135   98.422    906.062
AES_CA_S05_Signout                182.066   4.657     392.14
3.3 Throughput:
The following graph represents the Network Throughput statistics observed on the client side.
[Figure: CPU Statistics vs. Load Size over elapsed time (hh:mm)]
Description:
CPU statistics provide the percentage of time the processor spends on user applications and system processing (privileged time).
Observations:
The MOSS WFE (web front end) server was stable during the test. The highest CPU utilization observed on this server was 9.1%, and the average CPU utilization was below 1.9%.
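These figures come from the standard Windows performance counters. As an assumed, tool-agnostic way to take a comparable sample (using the third-party psutil package, not the tooling used in this test):

    import psutil  # third-party: pip install psutil

    samples = []
    for _ in range(12):  # ~1 minute of samples at a 5-second interval
        t = psutil.cpu_times_percent(interval=5)
        # On Windows, 'user' maps to user applications and
        # 'system' to privileged (kernel) time.
        samples.append((t.user, t.system))

    avg_user = sum(u for u, _ in samples) / len(samples)
    avg_priv = sum(s for _, s in samples) / len(samples)
    print(f"avg %User Time: {avg_user:.1f}  avg %Privileged Time: {avg_priv:.1f}")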
[Figure: Memory Statistics vs. Load Size over elapsed time (hh:mm); series: Available Memory (MB), Total Memory (MB)]
Description:
Memory statistics provide the available memory during the test, mapped against the load size.
Observation:
The maximum memory available during the test was 2041 MB and the minimum was 1024 MB. The total memory on the machine is 4096 MB.
Description:
Average disk queue length is the average number of requests waiting for disk activity.
Observation:
The highest value observed for average disk queue length was below 0.02. Considering the overall duration of the test, disk performance was stable.
[Figure: AGILITY-MOSS-RequestsInApplicationQueue vs. Load Size over elapsed time (h:mm)]
Description:
Requests in Application Queue is the number of ASP.NET requests waiting in the application's request queue for a worker thread.
Observation:
We observed that hundreds of requests were getting queued in the MOSS server request queue. The reason for so many requests getting queued, such as a shortage of worker threads to service the queued requests, should be identified.
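One way to capture this queue depth for root-cause analysis is Windows' built-in typeperf utility against the standard ASP.NET counters; a minimal wrapper sketch (the output file name and sampling schedule are illustrative):

    import subprocess

    # Sample the ASP.NET queue counters every 5 seconds for ~1 hour,
    # writing a CSV that can later be correlated with the load profile.
    counters = [
        r"\ASP.NET\Requests Queued",
        r"\ASP.NET Applications(__Total__)\Requests In Application Queue",
    ]
    subprocess.run(
        ["typeperf", *counters, "-si", "5", "-sc", "720", "-o", "aspnet_queue.csv"],
        check=True,
    )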
[Figure: Connections Statistics vs. Load Size over elapsed time (hh:mm)]
Description:
Connection statistics give the details of the TCP connections established with the MOSS server during the test.
Observation:
The connections established were stable during the test. The highest value of Connections Active was 2265.
[Figure: CPU Statistics vs. Load Size over elapsed time (hh:mm)]
Description:
CPU statistics provide the percentage of time the processor spends on user applications and system processing (privileged time).
Observations:
The BizTalk server was stable during the test. The highest CPU utilization observed on the BizTalk server was 9.66%, and the average CPU utilization was below 2.31%.
[Figure: Memory Statistics vs. Load Size over elapsed time (hh:mm)]
Description:
Memory statistics provide the available memory during the test, mapped against the load size.
Observation:
Memory utilization was stable during the test. The minimum available memory was 344 MB and the maximum was 512 MB. The total memory on the machine is 4096 MB.
Description:
Average disk queue length is the average number of requests waiting for disk activity.
Observation:
The highest value observed for average disk queue length was below 2.06. Considering the overall duration of the test, disk performance was stable.
[Figure: Throughput Statistics vs. Load Size over elapsed time (hh:mm); series: Bytes Received (Mbps), Bytes Sent (Mbps), Throughput (Mbps)]
Description:
Throughput is the total number of bits sent/received per second during the test.
Observation:
The average throughput was around 0.5 Mbps and the highest throughput was 1.9 Mbps.
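As a sanity check on the units, the Mbps figures follow directly from the raw bytes-per-second counters; a minimal conversion sketch:

    def mbps(bytes_per_second):
        """Convert a bytes/sec counter value to megabits per second."""
        return bytes_per_second * 8 / 1_000_000

    # Example: ~62,500 bytes/sec on the wire corresponds to the
    # 0.5 Mbps average observed above.
    print(mbps(62_500))  # 0.5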
TCP Statistics:
The following graph represents the Network TCP statistics observed on the SQL server.
[Figure: Connections Statistics vs. Load Size over elapsed time (hh:mm)]
Description:
Connection statistics give the details of the TCP connections established with the BizTalk server during the test.
Observation:
The connections established were stable during the test. The highest value of Connections Active was 1150.
5 Scope of Testing
The scope of this project is to test the application deployed on the IIS server. The objective is to test the performance of the application with 750 users and to monitor the web and database servers.
6 Test Approach
The following approach was adopted:
A think time of 10 seconds was implemented between the steps of each scenario, and a pacing of 60 seconds between iterations.
If any error occurred, the virtual user exited the current transaction and logged in again to create a new session, as sketched below.
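The actual scripts were written in Silk Performer's BDL; purely to illustrate the pacing and error-handling rules above, the following is a minimal Python sketch (login, TransactionError and the step callables are hypothetical stand-ins, not the project's code):

    import time

    STEP_THINK_TIME = 10   # seconds of think time between steps
    ITERATION_PACING = 60  # seconds between iterations

    class TransactionError(Exception):
        """Raised by a step when the application returns an error."""

    def login():
        # Hypothetical stand-in for the scripted login step.
        return {"user": "vuser"}

    def virtual_user(steps, iterations):
        session = login()
        for _ in range(iterations):
            for step in steps:
                try:
                    step(session)              # execute one scripted step
                except TransactionError:
                    # On error: exit the current transaction and re-login
                    # to create a new session, per the approach above.
                    session = login()
                    break
                time.sleep(STEP_THINK_TIME)
            time.sleep(ITERATION_PACING)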
7 Recommendations
MOSS server:
The ASP.NET request queuing issue needs to be resolved; it can likely be addressed by tuning the availability of worker threads. All possible reasons for the sudden spikes in the request queue should be identified.
The high number of HTTP 500 errors should be resolved. Please check the event and extended logs on the MOSS server.
IIS Server:
Tune the IIS server to reduce CPU consumption and bring down the number of failures.
Tune the object-level performance issues to reduce the queue length on the IIS server.
Savvion server:
Server errors:
Too many HTTP 500 Internal Server Errors were observed. Please review the event logs on the MOSS server, identify the root cause of the performance issues and fix them to reduce the server errors.
Data loading issues were observed at run time on the INBOX. Please verify the connections between the application and database layers so that data is loaded into the INBOX.
An issue with signing out of the application was observed when the server was at peak load. Please tune the MOSS server and fix this.
Too many redirections from the MOSS web front end server were observed at peak load.