LoadRunner PDF
1.2 Execute
Execution will be done in multiple phases. It consists of various types of testing like
Baseline Testing
This testing is done to ensure that both the application's functionality and the environment setup for performance testing are proper. This will be done with 10 to 20 users.
Performance Tests
This includes running the scripts with average and maximum load with a specified number of transactions. We will be simulating the real-time environment under different load conditions using different scenarios, think time, pace time, and numbers of users.
Benchmark Tests
These are designed to measure and compare the performance of each machine type, environment, or build of the application in an ideal situation. These tests are run after the system undergoes scalability testing to understand the performance impact of different architectures.
Analysis and Tuning
During the performance testing we will be capturing all the details related to the
system like Response time and System Resources for identifying the major
bottlenecks of the system. After the bottlenecks are identified we have to tune the
system to improve the overall performance.
Performance Optimization
This aims to improve the performance of the system, as measured by improved end-user response time or by a reduction in the overall required hardware infrastructure.
Load tests are performance tests, which are focused on determining or validating
performance characteristics of the product under test when subjected to workload
models and load volumes anticipated during production operations.
What are the benefits?
• How many users can the application handle before “bad stuff” happens?
• How much data can my database/file server handle?
• Are the network components adequate?
• Many users requesting a certain page at the same time or using the site
simultaneously.
• Increase the number of users and keep the data constant.
• Memory leaks
• Disk I/O (thrashing)
• Slow return to steady state
Capacity testing is related to stress testing. It determines your server's ultimate
failure point. You perform capacity testing in conjunction with capacity planning.
You use capacity planning to plan for future growth, such as an increased user base
or increased volume of data.
VuGen
VuGen is used to generate and record scripts, and to insert transactions and rendezvous points. It works by monitoring the communication between the application and the server.
Controller
You use the Load Runner Controller to manage and maintain your scenarios. Using
the Controller, you control all the Vusers in a scenario from a single workstation.
Load generator
When you execute a scenario, the Controller distributes each Vuser in the
scenario to a load generator. The load generator is the machine that executes the
Vuser script, enabling the Vuser to emulate the actions of a human user.
Analysis
Vuser scripts include functions that measure and record system performance during
load-testing sessions. During a scenario run, you can monitor the network and server
resources. Following a scenario run, you can view performance analysis data in
reports and graphs.
Load Runner works by creating virtual users who take the place of real users
operating client software, such as Internet Explorer sending requests using the
HTTP protocol to IIS or Apache web servers. Requests from many virtual user clients
are generated by "Load Generators" in order to create a load on various servers
under test. These load generator agents are started and stopped by Mercury's
"Controller" program. The Controller controls load test runs based on "Scenarios"
invoking compiled "Scripts" and associated "Run-time Settings". Scripts are crafted
using Mercury's "Virtual user script Generator" (named "VuGen"). It generates
C-language script code to be executed by virtual users by capturing network traffic
between Internet application clients and servers. With Java clients, VuGen captures
calls by hooking within the client JVM. During runs, the Controller monitors the
status of each machine. At the end of each run, the Controller combines its
monitoring logs with logs obtained from load generators, and makes them available
to the "Analysis" program, which can then create run result reports and graphs for
Microsoft Word, Crystal Reports, or an HTML webpage browser. Each HTML report
page generated by Analysis includes a link to results in a text file, which Microsoft
Excel can open to perform additional analysis.
2.6 Some of the most commonly used terms in LoadRunner
Scenarios
Using Load Runner, you divide your application performance testing requirements
into scenarios. A scenario defines the events that occur during each testing session.
Thus, for example, a scenario defines and controls the number of users to emulate,
the actions that they perform, and the machines on which they run their emulations.
Transactions
Measures the time it takes for the server to respond to specified Vuser requests.
Rendezvous points
You insert Rendezvous points into Vuser scripts to emulate heavy user load on the
server. Rendezvous points instruct Vusers to wait during test execution for multiple
Vusers to arrive at a certain point, in order that they may simultaneously perform a
task. For example, to emulate peak load on the bank server, you can insert a
rendezvous point instructing 100 Vusers to deposit cash into their accounts at the
same time.
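A minimal sketch of how a rendezvous point might look inside the Action section of a web Vuser script (the rendezvous name, transaction name, and URL are illustrative assumptions, not taken from this document):

Action()
{
    /* all Vusers wait here until the Controller releases them together */
    lr_rendezvous("deposit_cash");

    lr_start_transaction("Deposit_Cash");
    web_url("deposit",
            "URL=http://bank.example.com/deposit",
            "Mode=HTML",
            LAST);
    lr_end_transaction("Deposit_Cash", LR_AUTO);

    return 0;
}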
Scripting
Scripts represent recorded user actions issued by a browser to a web application
during a web session. They are created by passing HTTP traffic through a proxy
server then encoding the recorded data, which can be edited later for use in creating
different scenarios.
Key Features
• Record and play back.
• Ability to recognize web page components (tables, links, dropdown menus,
radio buttons).
Scenario creation
Ability to define custom load scenarios, including the number of Vusers, the scripts being executed, the speed of the end-user connection, the browser type, and the ramp-up profile. In some instances, scenarios can be modified “on the fly” to create “what-if” scenarios.
Key features:
• Vuser creation and support.
• Weighting Vusers
• Adjust Vusers access speed.
• Ability to combine scripts to create scenarios.
Once you determine the current capacity, you can decide if resources need to be
increased to support additional users.
Section 4.1:
For the Protocol and Port Mapping tabs, we are using default settings.
Section 4.2: This tab in recording options is one of the most important and decisive
ones.
Selecting a Recording Level
VuGen lets you specify what information to record and which functions to use when
generating a Load Runner script by selecting a recording level. The recording level
you select depends on your needs and environment. The available levels are HTML-
based (context sensitive) script, and URL-based script.
Follow these guidelines in deciding which recording level to choose.
• For browser applications without JavaScript, use the HTML-based level.
• For non-browser applications, use the URL-based level.
The HTML-based script level generates a separate step for each HTML user
action. The steps are also intuitive, but they do not reflect true emulation of
the JavaScript code.
The URL-based script mode option instructs VuGen to record all browser requests and resources sent from the server as a result of the user's actions. It automatically records every HTTP resource as URL steps (web_url statements). For normal browser recordings, it is not recommended to use the URL-based mode, since it is more prone to correlation-related issues. If, however, you are recording content such as applets or non-browser applications, this mode is ideal. URL-based scripts are not as intuitive as HTML-based scripts, since all actions are recorded as web_url steps instead of web_link, web_image, and so on. In HTML recording, in order to maintain context, VuGen looks through the previous page to make sure the current operation is available. This adds extra overhead because it is performed by a background utility called the runtime parser. Although the parser uses extra overhead, it does save us work in correlation and page checks. This is because if a link/request is not available, the runtime parser will see that and throw an error telling us that the request is not there (the old "Requested Form Not Found" error).
Section 4.3:
Record think time: (single-protocol only) Think time emulates the time that a real user waits between actions. The Run-Time Settings control how the Vuser uses the recorded think time when running the script. To record user think time, select Record think time.
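For illustration, a recorded think-time call typically appears between two steps like this (the 8-second value and URLs are illustrative assumptions; the Run-Time Settings control whether and how the recorded value is replayed):

web_url("home", "URL=http://www.example.com/", "Mode=HTML", LAST);

lr_think_time(8);    /* pause emulating the user reading the page */

web_url("login_page", "URL=http://www.example.com/login", "Mode=HTML", LAST);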
Reset Context for Each Action: (Web, Oracle NCA only) this setting, enabled by
default, tells VuGen to reset all HTTP contexts between actions. Resetting contexts
allows the Vuser to more accurately emulate a new user beginning a browsing
session. This option resets the HTML context, so that a contextless function is always
recorded at the beginning of the action. It also clears the cache and resets the user names and passwords.
Full trace recording log: (single-protocol only) This setting creates a trace log
during recording. This log is used internally by Mercury Interactive Customer Support
and is disabled by default.
Save snapshot resources locally: This option instructs VuGen to save a local copy
of the snapshot resources during record and replay. This feature lets VuGen create
snapshots more accurately and display them quicker.
Generate web_reg_find functions for page titles: (Web, Oracle NCA only)
This option enables the generation of web_reg_find functions for all HTML page titles.
VuGen adds the string from the page's title tag and uses it as an argument for
web_reg_find.
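A sketch of the kind of step this option generates (the page title and URL below are illustrative assumptions, not taken from this document):

/* registered before the step it verifies; the step fails if the text is missing */
web_reg_find("Text=Welcome to the Bank", LAST);

web_url("login_page",
        "URL=http://bank.example.com/login",
        "Mode=HTML",
        LAST);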
Add comment to script for HTTP errors while recording: This option adds a
comment to the script for each HTTP request error. An error request is defined as
one that generated a server response value of 400 or greater during recording.
Section 4.4:
VuGen's correlation engine allows you to automatically correlate dynamic data during
your recording session using one of the following mechanisms:
Built-in Correlation
The Built-in correlation detects and correlates dynamic data for supported application
servers. Most servers have clear syntax rules, or contexts, that they use when
creating links and referrals.
You begin the process of developing a Vuser script by recording a basic script. Load
Runner provides you with a number of tools for recording Vuser scripts. You enhance
the basic script by adding control-flow structures, and by inserting transactions and
rendezvous points into the script.
You then configure the run-time settings. The run-time settings include iteration, log,
and timing information, and define how the Vuser will behave when it executes the
Vuser script. To verify that the script runs correctly, you run it in stand-alone mode.
When your script runs correctly, you incorporate it into a Load Runner scenario.
When you run multiple iterations of a Vuser script, only the Actions sections
of the script are repeated—the vuser_init and vuser_end sections are not repeated.
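As a rough sketch, the three sections look like this (in VuGen they live in separate files within the script; the URLs are illustrative assumptions). Only Action() is repeated across iterations:

vuser_init()
{
    web_url("login", "URL=http://www.example.com/login", "Mode=HTML", LAST);
    return 0;
}

Action()
{
    web_url("browse", "URL=http://www.example.com/catalog", "Mode=HTML", LAST);
    return 0;
}

vuser_end()
{
    web_url("logout", "URL=http://www.example.com/logout", "Mode=HTML", LAST);
    return 0;
}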
2. Select File > New or click the New button. The New Virtual User dialog box opens.
5. Click Options to set the recording mode, browser, proxy, and additional recording
options.
6. Type a Web site address (URL) in the URL box, or select one from the list. This is
where you will start recording the script.
7. From the Record into Action list, select the action into which you want to begin
recording, or create a new action.
To create a new action, click the New button. The Create new action dialog box
opens.
Type a name for the new action in the Action name box, or accept the default name,
and click OK. When you create a new action, VuGen adds it to the Actions list in the
skeleton Web Vuser script.
8. Click OK to launch the Web browser and start recording. The floating recording
toolbar appears.
9. Navigate through your Web site by clicking hypertext and hyper graphic links, and
submitting forms. Each link you click adds an Action icon to the Web Vuser script.
Each form you submit adds a Submit Form icon to the Vuser script.
10. After performing all the required user processes, click the Stop Recording button
on the floating recording toolbar. VuGen closes the browser and restores the VuGen
main window.
By default, your recorded script appears in the tree view. If your script appears in
the text-based script view, select View > Tree View to switch to the tree view.
11. Select File > Save, or click the Save button to save the Vuser script. Specify a
file name and location in the Save Test dialog box, and click Save.
The tree view of a Vuser script is composed of icons. Each icon represents an action
of the Vuser or a step in the Web Vuser script. The icons are divided into four
categories:
➤ Action Icons
➤ Control Icons
➤ Service Icons
➤ Web Check Icons
Service Icons
A Service icon represents a step that does not make any changes in the Web
application context. Rather, service steps perform customization tasks such as
setting proxies, providing authorization information, and issuing customized headers.
Service steps in a Vuser script override any run-time settings that are set for the
script.
Within a Vuser script, you can mark an unlimited number of transactions. You insert
transaction statements into your script either while recording or after the recording
session.
During a scenario execution, the Controller measures the time it takes to perform
each transaction. After a scenario run, you use LoadRunner’s graphs and reports to
analyze the server’s performance.
2. Click the arrow in the Transaction Name box to display a list of open transactions.
Select the transaction to close.
3. Select the transaction status from the Transaction Status list. You can manually
set the status of the transaction, or you can allow LoadRunner to detect it
automatically.
• To manually set the status, you perform a manual check within the code of
your script, evaluating the return code of a function. For the "succeed" return
code, set the status to LR_PASS. For the "fail" return code, set the status to
LR_FAIL.
• To instruct LoadRunner to automatically detect the status, specify LR_AUTO.
LoadRunner returns the detected status to the Controller.
4. Click OK to accept the transaction name and status. VuGen inserts an
lr_end_transaction("Transaction Name", <Transaction Status>); statement into the script.
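A minimal sketch of a complete transaction, showing both the manual status check described above and the automatic alternative (the transaction name and URL are illustrative assumptions):

Action()
{
    int rc;

    lr_start_transaction("Load_Home_Page");
    rc = web_url("home", "URL=http://www.example.com/", "Mode=HTML", LAST);

    if (rc == LR_PASS)
        lr_end_transaction("Load_Home_Page", LR_PASS);  /* manual status from the return code */
    else
        lr_end_transaction("Load_Home_Page", LR_FAIL);

    /* or simply let LoadRunner detect the status automatically:        */
    /* lr_end_transaction("Load_Home_Page", LR_AUTO);                   */

    return 0;
}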
Replay Log
The Output window's Replay Log displays messages that describe the actions of the
Vuser as it runs. This information tells you how the script will run when executed in a
scenario, session step, or profile.
When script execution is complete, you examine the messages in the Replay Log to
see whether your script ran without errors. Various colors of text are used in the
Replay Log.
o Black: Standard output messages
o Red: Standard error messages
o Green: Literal strings that appear between quotation marks (e.g. URLs)
o Blue: Transaction Information (starting, ending, status and duration)
o Orange: The beginning and ending of iterations.
Recording Log
To view a log of the messages that were issued during recording, click the
Recording Log tab. You can set the level of detail for this log in the
Advanced tab of the Recording options.
Generation Log
To view a summary of the script's settings used for generating the code, select the
Generation Log tab. This view shows the recorder version, the recording option
values, and other additional information.
Correlation Results
Choose Vuser > Scan for Correlations or click the Find Correlations button.
VuGen scans the script for dynamic values that need to be correlated and displays
them in the Correlation Results tab.
A detailed description of the different options on the logging window is given below:
Enable Logging
This option enables automatic logging during replay—VuGen writes log messages
that you can view in the Execution log.
Log Options
The Log run-time settings allow you to adjust the logging level depending on your
development stage. You can indicate when to send log messages to the log: Send
messages only when an error occurs or Always send messages. During development,
you can enable all logging. Once you debug your script and verify that it is
functional, you can enable logging for errors only.
Setting the Log Detail Level
You can specify the type of information that is logged, or you can disable logging altogether. Logging is usually disabled when running scripts in the Controller, as it results in unnecessary usage of system resources and can skew the performance results.
Standard Log: Creates a standard log of functions and messages sent during script
execution to use for debugging. Disable this option for large load testing scenarios.
Extended Log: Creates an extended log, including warnings and other messages.
Disable this option for large load testing scenarios. If logging is disabled or the level is set to Extended, adding the script to a scenario does not affect its log settings.
You can specify which additional information should be added to the extended log
using the Extended log options:
• Parameter substitution: Select this option to log all parameters assigned to the
script along with their values.
• Data returned by server: Select this option to log all of the data returned by the
server.
• Advanced trace: Select this option to log all of the functions and messages sent
by the Vuser during the session. This option is useful when you debug a Vuser
script.
Error Handling
Continue on Error: This setting instructs Vusers to continue script execution when
an error occurs. This option is disabled by default.
Fail open transactions on lr_error_message: This option instructs VuGen to
mark all transactions in which an lr_error_message function was issued, as Failed.
Generate Snapshot on Error: This option generates a snapshot when an error
occurs. You can see the snapshot by viewing the Vuser Log and double clicking on
the line at which the error occurred.
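As a sketch of how lr_error_message interacts with these settings: with "Fail open transactions on lr_error_message" enabled, the open transaction below would be marked Failed as soon as the error message is issued. The check, boundary text, and names are illustrative assumptions:

lr_start_transaction("Login");

web_reg_find("Text=Welcome", "SaveCount=welcome_count", LAST);
web_url("login", "URL=http://www.example.com/login?user=jdoe", "Mode=HTML", LAST);

if (atoi(lr_eval_string("{welcome_count}")) == 0) {
    lr_error_message("Welcome text not found after login");
    lr_end_transaction("Login", LR_FAIL);
} else {
    lr_end_transaction("Login", LR_PASS);
}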
Multithreading
The primary advantage of a multithreaded environment is the ability to run more Vusers per load generator. Only thread-safe protocols should be run as threads.
The Controller uses a driver program (e.g., mdrv.exe, and r3vuser.exe) to run
your Vusers. If you run each Vuser as a process, then the same driver program is
launched (and loaded) into the memory again and again for every instance of the
Vuser. Loading the same driver program into memory uses up large amounts of
RAM (random access memory) and other system resources. This limits the
numbers of Vusers that can be run on any load generator. Alternatively, if you
run each Vuser as a thread, the Controller launches only one instance of the
driver program (e.g., mdrv.exe), for every 50 Vusers (by default). This driver
process/program launches several Vusers, each Vuser running as a thread. These
threaded Vusers share segments of the memory of the parent driver process.
This eliminates the need to reload the driver program/process multiple times and saves a great deal of memory, thereby enabling more Vusers to be run on a single load generator.
Automatic Transactions
You can instruct LoadRunner to handle every step or action in a Vuser script as a
transaction. This is called using automatic transactions. LoadRunner assigns the
step or action name as the name of the transaction. By default, automatic
transactions per action are enabled.
Preferences
Image and Text Checks
The Enable image and text checks option allows the Vuser to perform verification
checks during replay by executing the verification functions: web_find or
web_image_check. This option only applies to statements recorded in HTML-based
mode. Vusers running with verification checks use more memory than Vusers that do not perform checks. (This option is disabled by default.)
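A sketch of the two verification functions (the text and image attributes are illustrative assumptions; both steps work only against pages recorded in HTML-based mode):

web_url("home", "URL=http://www.example.com/", "Mode=HTML", LAST);

/* executed against the page downloaded by the previous step */
web_find("Check welcome text", "What=Welcome", LAST);
web_image_check("Check logo", "Alt=Company logo", LAST);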
Non-critical item errors as warnings: This option returns a warning status for a
function which failed on an item that is not critical for load testing, such as an image
or Java applet that failed to download. This option is enabled by default. If you want
a certain warning to be considered an error and fail your test, you can disable this
option.
Save a local copy of all snapshot resources during replay: Instructs VuGen to
save the snapshot resources to files on the local machine. This feature lets the Run-
Time viewer create snapshots more accurately and display them quicker.
DNS caching: Instructs the Vuser to save a host's IP addresses to a cache after
resolving its value from the Domain Name Server. This saves time in subsequent
calls to the same server. In situations where the IP address changes, as with
certain load balancing techniques, be sure to disable this option to prevent Vuser
from using the value in the cache. (enabled by default)
HTTP version: Specifies which HTTP version to use: version 1.0 or 1.1. This information is included in the HTTP request header whenever a Vuser sends a request to a Web server. The default is HTTP 1.1.
Keep-Alive HTTP connections: Keep-alive is a term used for an HTTP
extension that allows persistent or continuous connections. These long-lived HTTP
sessions allow multiple requests to be sent over the same TCP connection. This
improves the performance of the Web server and clients.
The keep-alive option works only with Web servers that support keep-alive
connections. This setting specifies that all Vusers that run the Vuser script have
keep-alive HTTP connections enabled. (Yes by default)
Step timeout caused by resources is a warning: Issues a warning instead of
an error when a timeout occurs due to a resource that did not load within the
timeout interval. For non-resources, VuGen always issues an error. (No by
default)
Network buffer size: When running multiple Vusers from the Controller, every Vuser uses its own network buffer.
These are the "Select next row" options for data-file parameters:
Sequential: each Vuser (in a multi-user scenario) will traverse the whole list individually.
Random: the value will be randomly chosen.
Unique: all users traverse the list together and a unique value (row) is assigned to each Vuser (unless the list is exhausted).
8.5 Parameter types
• Date/Time
• File
• Group Name
• Iteration Number
• Load Generator Name
• Random Number
• Table
• Unique Number
• User Defined Function
• Vuser ID
We should be cautious while working with big data files in Notepad, as columns do not automatically line up. It is better to save parameter files as tab-delimited files and then open them in Excel to edit and maintain them. When you are done modifying the file, save it as 'Text (tab delimited)' and overwrite the script's DAT file. Please refer to the Mercury support site (http://support.mercury.com) for more details, and refer to these threads:
23501, 23416, 15494, 15318, 15821, 12725, 19703, 12747 and 18876
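For illustration, once a parameter has been defined (here a hypothetical file parameter named username), the braces notation is substituted at run time wherever it appears in the script:

web_url("profile",
        "URL=http://www.example.com/profile?user={username}",
        "Mode=HTML",
        LAST);

lr_output_message("Current user: %s", lr_eval_string("{username}"));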
As a rule of thumb, any value that changes every time you connect to the server is a candidate for correlation. A correlated script sends the server the value captured from the server's response at run time instead of the hard-coded recorded value.
Function: web_reg_save_param
Description: This is the latest function, with some extra features for more powerful usage. The syntax is as follows:
web_reg_save_param("Parameter Name", <List of Attributes>, LAST);
Each of these parameters is a pointer to a string. That means that if they are entered as literal text, they need to be enclosed in quotes, with each parameter separated by a comma. The supported attributes include LB (left boundary), RB (right boundary), RelFrameID, ORD, Search, SaveOffset, SaveLen, and Convert. These attributes can appear in any order, because each attribute name is written inside its own string. Detailed information about each attribute can be found in the online function reference.
A list of all three functions, along with documentation and examples, can be found in the online documentation. From VuGen, go to Help → Function Reference → Contents → Web and Wireless Vuser Functions → Correlation Functions.
A hypothetical example
You are logging onto a website. When you send the server your user name and
password, it replies with a Session ID that is good for that session. The Session ID
needs to be correlated for replay. You need to capture this value during replay to
use in the script in place of the hard-coded value.
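A minimal sketch of what the correlated script might look like for this hypothetical Session ID (the boundaries, parameter name, form fields, and URLs are illustrative assumptions):

/* registered before the login request; captures the Session ID from the response */
web_reg_save_param("SessionID",
                   "LB=SessionId=",
                   "RB=&",
                   "ORD=1",
                   LAST);

web_submit_form("login",
                ITEMDATA,
                "Name=username", "Value=jdoe", ENDITEM,
                "Name=password", "Value=secret", ENDITEM,
                LAST);

/* later requests send the captured value instead of the recorded, hard-coded one */
web_url("account",
        "URL=http://bank.example.com/account?SessionId={SessionID}",
        "Mode=HTML",
        LAST);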
There are two easy ways to identify the values and to correlate them:
1. Correlation with the auto-correlation engine, and
2. Manual correlation.
Correlation mechanism: Rules Correlation
Rules Correlation operates during recording. There are two types of Rules Correlation:
• Built-in Correlation
The Built-in Correlation detects and correlates dynamic data for
supported application servers. If you are recording a session with a
supported application server, you can use one of the existing rules. An
application server may have more than one rule. You can enable or
disable a specific rule by selecting or clearing the checkbox adjacent to
the rule. VuGen displays the rule definitions in the right pane.
• User-defined Rules Correlation
User-defined Rules Correlation allows you to define your own
correlation rules before you record a session. The rules include
information such as the boundaries of the dynamic data you want to
correlate and other specifications about the match such as binary, case
matching, and the instance number.
Correlation mechanism: Correlation Studio
Correlation Studio operates after replay, which means that you need to run the script at least once. After replay, the Studio automatically tries to find matching values from the server's responses and the client's requests, and estimates the need for correlation. The Studio is usually used to initially find places to correlate, in unfamiliar environments.
2. Record a script.
Start a recording. The Web recorder will automatically correlate the dynamic
values that match the correlation rule defined in step 1. After the recording,
assuming there are dynamic values being detected and correlated, you will see
something similar to the following in your script:
Tips: Scan Action for Correlation window does not come up after replay
Tips: How to manually start the ‘Scan Action for Correlation’ feature
3. During the scan, the Correlation Studio will automatically compare the record and
replay response for dynamic values and report the differences to you. You can
see the dynamic values that are detected on the lower panel, under the
“Correlation Results” tab.
4. Look through the differences reported by the Correlation Studio. Correlate the values where necessary. To make the correlation process easy, you can choose the option to correlate them either one at a time (Correlate) or all at once (Correlate All).
Note: Correlation Studio often reports all the dynamic values, including those that you do not need to correlate. Because of this, Mercury recommends using the Correlate option and looking at the values to correlate one at a time.
When you correlate a value using this mechanism, VuGen inserts a
web_reg_save_param function and a comment into your script indicating that
a correlation was done for the parameter. It also indicates the original value.
If the logon process (step 1) fails on the first run and halts the replay, step 2 and later steps of the example will not be executed. In such cases, the Correlation Studio will NOT be able to report the dynamic values of step 2 and later. To handle this, repeat the correlation process. The steps needed are similar to the following:
1. Run the script once.
2. Identify and correlate the values needed to allow logon process.
3. Rerun the script.
4. Identify and correlate the values needed to retrieve employee information.
5. Run the script again.
6. Identify and correlate the values needed to update employee information.
There are cases where the Correlation Studio may not detect some dynamic values. You may need to perform manual correlation as described in the following chapter.
Tips: Why does Correlation Studio fail to detect all the dynamic values.
d. Go to the Recording Log (for Single Protocol) or Generation Log (for Multiple
Protocol) and place your cursor at the top. Press Control-F (CTRL+F) to bring
up the Find window.
• Paste the value copied from step 2c and search downward. You are looking
for the first occurrence of this value in the Recording Log or Generation Log.
• If you do not find the value, verify you are looking in the correct script’s
Recording Log or Generation Log. Remember you have two almost identical
scripts here.
• If you find the value, scroll up in the log, and make sure the value was sent as part of a response from the server. The first header you come across while scrolling up should be prefaced by a receiving-response comment. This indicates that the server sent the value to the client. If the value first appears as part of a sending request, then the value originated on the client side and does not need to be correlated, but rather parameterized. That is a different topic altogether. The response will have a comment before it that looks like this:
Now, to identify the place to put the function, you need to replay the script once with the extended log enabled. To do this,
• Go to the Vuser menu and select “Run-Time Settings.”
• Go to the General: Log settings.
• Select “Enable Logging,” “Always send messages,” “Extended log,” and all the options under extended log.
Note: In choosing a right boundary, make sure you choose enough static text to
specify the end of the value. If the boundary you specify appears in the value that
you are trying to capture, then you will not capture the whole value.
At this point, you are ready to run the script and test whether it works, or whether it needs further correlation work.
5. Recap
That was a lot of looking through the logs and checking of values. Let’s just recap
what you have done. You have identified a value that you think needs to be
correlated. You then identified in the script where to place the statement that would
ultimately capture and save the value into a parameter. You then placed the
statement, and gave the text strings that appear on either side of the value that you
are looking for so that it can be found.
The flow of logic is as follows: the correlation function tells the replay engine what to look for in the next set of replies from the server. The replay engine makes a request to the server. The server replies. The replay engine looks through the replies for the left and right boundaries. If it finds them, whatever is in between is then saved to a parameter with the name specified.
Remember, the parameter cannot have a value until after the next statement is executed. The correlation statement only tells the replay what to look for; it does not assign a value to the parameter. Assignment of a value to the parameter does not happen until the next request is sent to the server and its reply is examined.
(The original document shows a garbled example of boundary strings at this point; the recoverable pieces are a left boundary of "Value=\r\n\"", a right boundary of "\"", and a captured value of "7898756".)
9.11 Error: "File no longer available" when launching the WinDiff tool
The Compare tool in VuGen makes use of a standard utility called WinDiff. WinDiff
has certain limitations, in that it cannot handle directories and files with spaces. To
resolve the issue, move the script directory to a new path without any spaces, and try to keep the names short. Alternatively, open WinDiff manually, and open up the
scripts via the menu options. Simply compare the action1.c sections of the scripts in
question.
9.12 Error: “No match found for the requested parameter 'ParameterName'.
If the data you want to save exceeds xx bytes, use
web_set_max_html_param_len to increase the parameter size."
By default, the web_reg_save_param function issues the above error if the
boundaries cannot be found. If you need to use this function for other purposes, and
would like to avoid errors, use the "Notfound=warning" attribute so that the
replay will just issue a warning.
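A sketch combining both remedies (the parameter name, boundaries, and length are illustrative assumptions):

web_set_max_html_param_len("1024");       /* default maximum is 256 bytes */

web_reg_save_param("OrderToken",
                   "LB=token=\"",
                   "RB=\"",
                   "Notfound=warning",    /* issue a warning instead of an error */
                   LAST);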
Now, the above error actually suggests two possible causes:
1. The boundaries defined cannot be found, and
2. The data you want to save exceeds 256 bytes.
The advice given is a recommendation that you should take into consideration. Was the value you were trying to capture more than 256 characters long? In the above example, it was only 40 characters long. Have a look at the Recording Log (for Single Protocol) or Generation Log (for Multiple Protocol) and see how long the value is.
Example:
You can choose whether to correlate the value or to ignore it. You can also set the future behavior for correlating it as needed.
“Scan Action for Correlation” window does not come up after replay
This window may not come up if you have disabled it before. To re-enable this
feature:
1. Go to Tools → General options → Correlation tab.
2. Enable the option for “Show Scan for correlations popup after replay of Vuser.”
9.15 Why does Correlation Studio fail to detect all the dynamic values
The following are possible causes for Correlation Studio failing to detect all the dynamic values, and suggestions for overcoming them.
2. The dynamic value is generated by the client and not by the server. For example,
you have a client side JavaScript that generates some important dynamic values.
Since the Web replay does not execute client side scripting, you will need to add
your custom code in the script to deal with this.
3. The replay response’s HTML has a different format than the recordings. For this,
you will need to perform manual correlation.
The Customer Support website has a video for download that goes over correlation.
You can get it from http://support.mercury.com. After logging in, go to Downloads
→ Browse. Select the Mercury Interactive downloads radio button, choose
“LoadRunner” from the product selection drop-down box, and click on the “Retrieve”
button. Under “Training,” select the “LoadRunner Web Script Correlation Training”
link.
Since most applications have a login and password to enter, a temporary session ID will be generated, which requires correlation in order to replay the scripts. Please refer to the Mercury support site (http://support.mercury.com) for more details, and refer to these threads: 11806, 18587, 31968, 18587, 15543, 14470, 9583, 24264, 17241, and 12725
b. The Single Protocol Web recorder obtains the proxy setting from the
HKEY_CURRENT_USER registry hive. Verify the following to make sure
that it is set up correctly:
i. Go to Start → Run, and enter regedit to open the registry editor.
ii. Navigate to HKLM\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings.
iii. Look for the non-default value ProxySettingsPerUser.
iv. If the ProxySettingsPerUser value exists and has a value data of 0 (zero), set it to 1. It is okay if you do not have that registry value. As long as ProxySettingsPerUser is not set to zero, proxy settings will be obtained from the HKEY_CURRENT_USER registry hive.
8. To make a permanent change to the default settings in VuGen: (Reference: Michael Warner, ID 2412 on the Support Site)
To keep from having to change the run-time settings every time that you create a
new script, here is an easy way to save your own default settings.
1. Create a new script
2. Make the setting changes that you need to make.
3. Save the script
4. Now, copy the default.cfg file out of the directory of the script file that you just
created to the Program Files\Mercury Interactive\LoadRunner\template\{dir}.
The {dir} is whatever directory that you are creating your scripts from. For
example, a Web/HTML vuser directory is the \qtweb\ directory. You can
also change the init.c, end.c, or action.c if you seem to always have to make
changes to these files every time you create a new script.
If you receive a timeout error such as the following, it may be the site's server or
your browser setting.
So that you do not wait endlessly for the server to come back with data when the
server has a problem, Internet Explorer imposes a time-out limit for the server to
return data. (Five minutes for versions 4.0 and 4.01)
In Internet Explorer 4.01 Service Pack 1 or later (IE 5, 5.5, 6), you can change the timeout by adding a ReceiveTimeout DWORD value in the registry key HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings, with a data value of (number of seconds)*1000. To set a 2-minute timeout duration, set the ReceiveTimeout data value to 2*60*1000 = 120000. Restart your computer.
13.1 Scenario:
A scenario defines the events that occur during each testing session.
For example, a scenario defines and controls the number of users to emulate, the actions that they perform, and the machines on which they run their emulations.
There are two methods for creating a scenario:
• Manual Scenario.
• Goal-Oriented Scenario.
• Click the New Scenario button. The New Scenario dialog opens with the list of Vuser scripts; select the default option, Manual Scenario, as the scenario type.
2. In the Group Name box, enter a name for the Vuser group.
3. From the Vuser Quantity box, select the number of Vusers that you want to create
in the group.
4. Select a load generator from the Load Generator Name list. The Load Generator
list contains all load generators that you previously added to the scenario.
5. To use a load generator that does not appear, select Add from the Load Generator
Name list. The Add Load Generator dialog box opens: Type the name of the load
generator in the Name box. In the Platform box, select the type of platform on which
the load generator is running. By default, LoadRunner stores temporary files on the
load generator during scenario execution, in a temporary directory specified by the
load generator’s TEMP or TMP environment variables. To override this default for a
specific load generator, type a location in the Temporary Directory box.
6. Click OK to close the Add Group dialog box. The new group's properties appear in the Scenario Groups window.
2. From the Group Name box, select the name of the Vuser group.
3. From the Quantity to add box, select the number of Vusers that you want to add
to the group.
4. Select a load generator from the Load Generator Name list.
5. Select a script from the script list. The script list contains all scripts that you
previously added to the scenario.
6. Click OK to close the Add Vusers dialog box. The new Vuser’s properties appear in
the Vusers dialog box.
• Select the Manual Scenario option and also check the “Use the percentage mode” check box.
• Select the required script/scripts by clicking the Add button.
• Click OK to add the script to the scenario.
You can also limit the time duration of a scenario. You specify the number of minutes
a scenario should be in the running state. When a scenario reaches its time
limitation, it finishes.
Delaying the Start of a Scenario
For both manual and goal-oriented scenarios, you can instruct LoadRunner to start running the scenario at a later point in time. You can specify either the number of minutes you want LoadRunner to wait from the time a Run command is issued, or the specific time at which you want the scenario to begin.
1. Select Scenario > Start Time. The Scenario Start dialog box opens, with the default option, Without Delay, selected.
The available properties are Duration, Initializing, Ramp Up, and Ramp Down.
Initializing: the number of Vusers to initialize to the ready state during the specified time interval. This emulates a logon procedure usually done once a day.
Ramp Up: the pace at which Vusers start running this script (i.e., the transition from the Ready to the Run state). This usually emulates actions that are repeated multiple times.
Duration: the amount of time the current group participates in the scenario.
Ramp Down: the pace at which Vusers are terminated. This setting is only available if the group is set to run for a fixed duration.
To set the scheduling options for a scenario:
1. Select the Schedule By Scenario option.
4. To set the ramp up for the group click the Ramp Up tab.
6. To determine how to stop the Vuser group, click the Ramp Down tab.
You can also manipulate individual Vusers within the Vuser groups you have defined
by selecting a group and clicking the Vusers button. The Vusers dialog box appears,
with a list of the ID, Status, Script, Load Generator, and Elapsed Time (since the
beginning of the scenario) for each of the Vusers in the group.
Note that you can detach the Scenario Status window from the Run view, thereby
enlarging the Scenario Groups window.
While the scenario runs, the Vusers and load generators send error, notification,
warning, debug, and batch messages to the Controller. You can view these messages
in the Output window (View > Show Output).
15.0 IP Spoofing
If the hardware configuration under test balances load across a “farm” of several (web or database) servers, you can influence how routers distribute work among those servers by making each Vuser use a unique IP address instead of the same IP address as its host machine. To use this “IP spoofing” to emulate a more realistic pattern of IP addresses:
• On each Vuser host machine, use the IP Wizard's “new setting” dialog to enter the IP address of the web server and the IP addresses to assign to Vusers in its stead. Individual addresses or, in a closed network, a range of addresses (from any IP class) can be specified.
• Reboot to refresh the server's routing table (done at startup).
• On the Controller's “Scenario” menu, select “Enable IP Spoofer”.
Monitor options: global sampling rate, error handling, debugging, and the
frequency settings.
Graph properties: refresh rate, display type, graph time for the x-axis, and the y-
axis scale.
Measurement settings: line color, scale of the y-axis, and whether to show or hide
the line.
In the following example, the graph is shown with the Don’t Show and Clock Time
options:
• Graph Time: The Graph Time settings indicate the scale for a graph’s x-axis
when it is time-based. A graph can show 60 or 3600 seconds of activity. To
see the graph in greater detail, decrease the graph time. To view the
performance over a longer period of time, increase the graph time. The
available graph times are: Whole scenario, 60, 180, 600, and 3600 seconds.
• Display Type: You can specify whether LoadRunner displays the Network
Delay Time graph as a line, pie, or area graph. By default, the graph is
displayed as a line graph.
• Y-Axis Style: You can instruct LoadRunner to display graphs using the default y-axis scale, or you can specify a different y-axis scale. Click Automatic if you want LoadRunner to use the default y-axis values.
5. Select a value from the Graph Time box. The graph time is the time in seconds
displayed by the x-axis.
6. For the Network Delay Time graph, select a graph style (Line, Pie, or Area) from
the Display Type box.
7. Select a maximum or minimum value for the y-axis, or choose Automatic to view
graphs using the default y-axis scale.
8. Click OK to save your settings and close the Graph Configuration dialog box.
In the following example, the same graph is displayed with a scale of 1 and 10.
Un-indexed tables: all tables should have at least one primary key.
Tile:
Displays the graphs one above the other, using a common X-axis.
Correlate:
Displays the graphs plotted against each other.
To view the summary data, choose Tools > Options, and select the Result
Collection tab. Select Display summary data while generating complete data if
you want the Analysis to process the complete data graphs while you view the
summary data, or select Generate summary data only if you do not want
LoadRunner to process the complete Analysis data.
Note: The following graphs are not available when viewing summary data only:
Rendezvous, Data Point (Sum), Web Page Breakdown, Network Monitor, and Error
graphs.
For example, the Hits per Second graph is displayed using different granularities.
The y-axis represents the number of hits per second within the granularity interval.
For a granularity of 1, the y-axis shows the number of hits per second for each one-
second period of the scenario.
For a granularity of 5, the y-axis shows the number of hits per second for every five-
second period of the scenario.
In the above graphs, the same scenario results are displayed in a granularity of 1, 5,
and 10. The lower the granularity, the more detailed the results. For example, using
a low granularity as in the upper graph, you see the intervals in which no hits
occurred. It is useful to use a higher granularity to study the overall Vuser behavior
throughout the scenario. By viewing the same graph with a higher granularity, you
can easily see that overall, there was an average of approximately 1 hit per second.
You can view the actual raw data collected during test execution for the active
graph. The Raw Data view is not available for all graphs. Viewing the raw data
can be especially useful for:
To display a report:
1. Open the desired Analysis session file (.lra extension), or LoadRunner result file
(.lrr extension), if it is not already open.
2. From the Reports menu choose a report. The report is generated and displayed.
You can display multiple copies of the same report.
Duration: the duration of the transaction, in the format hours:minutes:seconds.milliseconds. This value includes think time, but does not include wasted time.
Think time: the Vuser’s think time delay during the transaction.
Wasted time: the LoadRunner internal processing time not attributed to the
transaction time or think time. (Primarily RTE Vusers)
Results: the final transaction status, either Pass or Fail.
The 90th percentile is a measure of statistical distribution, not unlike the median. The median is the middle value: the value for which 50% of the values were bigger and 50% smaller. The 90th percentile tells you the value for which 90% of the data points are smaller and 10% are bigger.
Each value is counted in a range of values. For example, 5 can be counted in a range of 4.95 to 5.05, and 7.2 in a range of 7.15 to 7.25. The 90th percentile is taken from the first range for which the number of transactions in it and before it is >= 0.9 * (number of values).
This difference in the methods can lead to different 90% values. Again, both
methods lead to correct values as defined by the 90th percentile. However, the
algorithm to calculate these figures has changed in Load Runner 7 and above.
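As a minimal sketch (not LoadRunner's exact algorithm), the simple sort-and-index way of reading a 90th percentile from a set of response times looks like this; the sample values are illustrative:

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

int main(void)
{
    /* illustrative response times in seconds */
    double t[] = { 1.2, 0.8, 2.5, 1.9, 1.1, 3.0, 0.9, 1.4, 2.1, 1.7 };
    int n = sizeof t / sizeof t[0];
    int idx;

    qsort(t, n, sizeof t[0], cmp);

    /* smallest index such that at least 90% of the values are at or below it:
       idx = ceil(0.9 * n) - 1, computed with integer arithmetic */
    idx = (9 * n + 9) / 10 - 1;
    if (idx < 0) idx = 0;

    printf("90th percentile: %.2f s\n", t[idx]);
    return 0;
}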
As we look through the data, we can begin dissecting the problem. Response time exceeded the goal at about 20+ minutes into the test. What else was happening then?
• Click the “+” <New Graph> icon on the toolbar menu. The Open a New
Graph window opens.
• Select the graph that you need.
• Click the Open Graph button to open the graph.
• Only graphs which contain data are selectable.
Drilldown
Setting Filters
• Filters let you display only a specific transaction status, transaction name, group,
Vuser, or other condition in the Summary Report.
• You can set filters globally or for the selected graph.
• Select criteria and values for each filter condition that you want to employ.
• As load increases, the throughput and hits graphs should reflect the increase.
• A fall-off in throughput under increasing load could indicate a problem with
network saturation.
The first step in isolating the issue in a specific subsystem is to understand how the
network performed during the test.
By looking at these three graphs, we see that Running Vusers, Hits per Second, and
Throughput graphs all correlate. As an analogy, think of a busy highway as your
network. The throughput can be represented by the ability of the highway to handle
the traffic. At the highway’s saturation point, the cars begin to bottleneck, and the
number of cars on the road flattens out to the maximum.
A bandwidth issue occurs if throughput flattens out while the number of Vusers continues to increase. The number of Hits per Second, in turn, is like a parking lot (the web server): each car is trying to find a spot, and when the lot is full, the cars are forced to drive around until a spot opens up.
If Hits per Second flattens out as Vusers increase, there is likely a web server connection issue. Since there is a direct correlation between Vusers, Hits, and Throughput, it does not seem likely that we have a network issue. Let's validate that the network was “clean”.
Right-click on any graph and select AUTO CORRELATE (or from the toolbar menu
select VIEW→ AUTO CORRELATE). This will open the Auto Correlate window.
The selected measurement is displayed in the graph.
1. To specify the graphs you want to correlate with a selected measurement and the
type of graph output to be displayed, click the Correlation Options tab.
2. In the SELECT GRAPHS FOR CORRELATION section, choose the graphs whose
measurements you want to correlate with your selected measurement:
In the DATA INTERVAL section, select one of the following two options:
Automatic: Instructs the Analysis to use an automatic value, in order to calculate
the interval between correlation measurement polls.
Correlate data based on X second intervals: Enter the number of seconds you
want the Analysis to wait between correlation measurement polls.
In the OUTPUT section, select one of the following two options:
• Show the X most closely correlated measurements: Specify the number
of most closely correlated measurements you want the Analysis to display.
On the Internet, companies whose Web sites get a great deal of traffic usually use
load balancing. For load balancing Web traffic, there are several approaches. For
Web serving, one approach is to route each request in turn to a different server host
address in a domain name system (DNS) table, round-robin fashion. Usually, if two
servers are used to balance a workload, a third server is needed to determine which
server to assign the work to. Since load balancing requires multiple servers, it is
usually combined with fail over and backup services. In some approaches, the
servers are distributed over different geographic locations.
A memory leak is the gradual loss of available computer memory when a program
(an application or part of the operating system) repeatedly fails to return memory
that it has obtained for temporary use. As a result, the available memory for that
application or that part of the operating system becomes exhausted and the program
can no longer function. For a program that is frequently opened or called or that runs
continuously, even a very small memory leak can eventually cause the program or
the system to terminate. A memory leak is the result of a program bug.
Some operating systems provide memory leak detection so that a problem can be
detected before an application or the operating system crashes. Some program
development tools also provide automatic "housekeeping" for the developer. It is
always the best programming practice to return memory and any temporary file to
the operating system after the program no longer needs it.
20.2 Bottleneck
A bottleneck, in a communications context, is a point in the enterprise where the
flow of data is impaired or stopped entirely. Effectively, there isn't enough data
handling capacity to handle the current volume of traffic. A bottleneck can occur in
the user network or storage fabric or within servers where there is excessive
contention for internal server resources, such as CPU processing power, memory, or
I/O (input/output). As a result, data flow slows down to the speed of the slowest
point in the data path. This slowdown affects application performance, especially for databases and other heavy transactional applications, and can even cause some
applications to crash.
100 Continue
A status code of 100 indicates that (usually the first) part of a request has been
received without any problems, and that the rest of the request should now be sent.
HTTP 1.1 is just one type of protocol for transferring data on the web, and a status
code of 101 indicates that the server is changing to the protocol it defines in the
"Upgrade" header it returns to the client. For example, when requesting a page, a
browser might receive a status code of 101, followed by an "Upgrade" header
showing that the server is changing to a different version of HTTP.
Successful
200 OK
The 200-status code is by far the most common returned. It means, simply, that the
request was received and understood and is being processed.
201 Created
A 201-status code indicates that a request was successful and as a result, a resource
has been created (for example a new page).
The status code 202 indicates that the server has received and understood the request, and that it has been accepted for processing, although it may not be processed immediately.
A 203-status code means that the request was received and understood, and that
information sent back about the response is from a third party, rather than the
original server. This is virtually identical in meaning to a 200-status code.
204 No Content
The 204-status code means that the request was received and understood, but that
there is no need to send any data back.
The 205-status code is a request from the server to the client to reset the document
from which the original request was sent. For example, if a user fills out a form, and
submits it, a status code of 205 means the server is asking the browser to clear the
form.
Redirection
The 300-status code indicates that a resource has moved. The response will also
include a list of locations from which the user agent can select the most appropriate.
A status code of 301 tells a client that the resource they asked for has permanently
moved to a new location. The response should also include this location. It tells the
client to use the new URL the next time it wants to fetch the same resource.
302 Found
A status code of 302 tells a client that the resource they asked for has temporarily
moved to a new location. The response should also include this location. It tells the
client that it should carry on using the same URL to access this resource.
A 303-status code indicates that the response to the request can be found at the
specified URL, and should be retrieved from there. It does not mean that something
has moved - it is simply specifying the address at which the response to the request
can be found.
The 304-status code is sent in response to a request (for a document) that asked for
the document only if it was newer than the one the client already had. Normally,
when a document is cached, the date it was cached is stored. The next time the
document is viewed, the client asks the server if the document has changed. If not,
the client just reloads the document from the cache.
A 305-status code tells the client that the requested resource has to be reached
through a proxy, which will be specified in the response.
307 is the status code that is sent when a document is temporarily available at a
different URL, which is also returned. There is very little difference between a 302-
status code and a 307-status code. 307 was created as another, less ambiguous,
version of the 302-status code.
A status code of 400 indicates that the server did not understand the request due to
bad syntax.
401 Unauthorized
A 401-status code indicates that before a resource can be accessed, the client must
be authorized by the server.
The 402-status code is not currently in use, being listed as "reserved for future use".
403 Forbidden
A 403-status code indicates that the client cannot access the requested resource.
That might mean that the wrong username and password were sent in the request,
or that the permissions on the server do not allow what was being asked.
The best known of them all, the 404-status code indicates that the requested
resource was not found at the URL given, and the server has no idea how long for.
A 405-status code is returned when the client has tried to use a request method that
the server does not allow. Request methods that are allowed should be sent with the
response (common request methods are POST and GET).
The 406-status code means that, although the server understood and processed the request, the response is of a form the client cannot understand. A client sends, as part of a request, headers indicating what types of data it can use, and a 406 error is returned when the response is of a type not in that list.
407 Proxy Authentication Required
The 407-status code is very similar to the 401-status code, and means that the
proxy must authorize the client before the request can proceed.
A 408-status code means that the client did not produce a request quickly enough. A
server is set to only wait a certain amount of time for responses from clients, and a
408 status code indicates that time has passed.
409 Conflict
A 409-status code indicates that the server was unable to complete the request,
often because a file would need to be edited, created or deleted, and that operation
could not be carried out at the time.
410 Gone
A 410-status code is the 404's lesser-known cousin. It indicates that a resource has
permanently gone (a 404 status code gives no indication whether a resource is gone
permanently or temporarily), and no new address is known for it.
The 411-status code occurs when a server refuses to process a request because a
content length was not specified.
A 412-status code indicates that one of the conditions the request was made under
has failed.
The 413-status code indicates that the request was larger than the server is able to
handle, either due to physical constraints or to settings. Usually, this occurs when a
file sent via the POST method of a form is larger than the server allows.
The 414-status code indicates the URL requested by the client was longer than it can
process.
A server returns a 415-status code to indicate that part of the request was in an
unsupported format.
A 416-status code indicates that the server was unable to fulfill the request. This
may be, for example, because the client asked for the 800th-900th bytes of a
document, but the document was only 200 bytes long.
The 417-status code means that the server was unable to properly complete the
request. One of the headers sent to the server, the "Expect" header indicated an
expectation the server could not meet.
Server Error
A 500 status code (all too often seen by Perl programmers) indicates that the server
encountered something it didn't expect and was unable to complete the request.
The 501-status code indicates that the server does not support all that is needed for
the request to be completed.
A 503-status code is most often seen on extremely busy servers, and it indicates
that the server was unable to complete the request due to a server overload.
A 504-status code is returned when a server acting as a proxy has waited too long
for a response from a server further upstream.
A 505-status code is returned when the HTTP version indicated in the request is not
supported. The response should indicate which HTTP versions are supported.
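As a rough illustration of how a client might react to these status code classes, the short Python sketch below uses the standard urllib module; the URL shown is only a placeholder, not something taken from this document.

import urllib.request
import urllib.error

def fetch(url):
    try:
        with urllib.request.urlopen(url) as resp:
            # Successful (2xx) responses end up here; common redirects
            # such as 301 and 302 are followed automatically.
            print(resp.status, resp.reason)   # e.g. 200 OK
            return resp.read()
    except urllib.error.HTTPError as e:
        # 4xx client errors and 5xx server errors raise HTTPError.
        if 400 <= e.code < 500:
            print("Client error:", e.code, e.reason)
        elif e.code >= 500:
            print("Server error:", e.code, e.reason)
        else:
            print("Unexpected status:", e.code, e.reason)
    except urllib.error.URLError as e:
        print("Could not reach the server:", e.reason)

fetch("http://www.example.com/")   # placeholder URL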
Commands
a) vmstat Command
b) iostat Command
c) netstat Command
d) lockstat Command
e) mpstat Command
VMSTAT Command
Purpose:
It reports Virtual memory statistics.
Syntax:
vmstat [ -f ] [ -i ] [ -s ] [ PhysicalVolume ... ] [ Interval [ Count ] ]
Description
The vmstat command reports statistics about kernel threads, virtual memory, disks,
traps and CPU activity. Reports generated by the vmstat command can be used to
balance system load activity. These system-wide statistics (among all processors)
are calculated as averages for values expressed as percentages, and as sums
otherwise.
If the vmstat command is invoked without flags, the report contains a summary of
the virtual memory activity since system startup. If the -f flag is specified, the
vmstat command reports the number of forks since system startup. The
PhysicalVolume parameter specifies the name of the physical volume.
The Interval parameter specifies the amount of time in seconds between each report.
The first report contains statistics for the time since system startup. Subsequent
reports contain statistics collected during the interval since the previous report. If the
Interval parameter is not specified, the vmstat command generates a single report
and then exits. The Count parameter can only be specified with the Interval
parameter. If the Count parameter is specified, its value determines the number of
reports generated and the number of seconds apart. If the Interval parameter is
specified without the Count parameter, reports are generated continuously.
The kernel maintains statistics for kernel threads, paging, and interrupt activity,
which the vmstat command accesses through the use of the knlist subroutine and
the /dev/kmem pseudo-device driver. Device drivers maintain the disk
input/output statistics. For disks, the average transfer rate is determined by using
the active time and number of transfers information. The percent active time is
computed from the amount of time the drive is busy during the report.
The following example of a report generated by the vmstat command contains the
column headings and their description:
kthr: kernel thread state changes per second over the sampling interval.
Memory: information about the usage of virtual and real memory. Virtual pages are
considered active if they have been accessed. A page is 4096 bytes.
Page: information about page faults and paging activity. These are averaged over
the interval and given in units per second.
Faults: trap and interrupt rate averages per second over the sampling interval.
in: device interrupts.
sy: system calls.
CPU: breakdown of percentage usage of CPU time.
us: user time.
sy: system time.
wa: CPU idle time during which the system had pending disk input/output.
Disk: Provides the number of transfers per second to the specified physical volumes
that occurred in the sample interval. The PhysicalVolume parameter can be used to
specify one to four names. Transfer statistics are given for each specified drive in the
order specified. This count represents requests to the physical device. It does not
imply an amount of data that was read or written. Several logical requests can be
combined into one physical request.
Flags
Note: Both the -f and -s flags can be entered on the command line, but the
system will only accept the first flag specified and override the second flag.
-i Displays the number of interrupts taken by each device since system startup.
-s Writes to standard output the contents of the sum structure, which contains an
absolute count of paging events since system initialization.
Examples
1. To display a summary of the statistics since boot, enter:
vmstat
2. To display five summaries at two-second intervals, enter:
vmstat 2 5
The first summary contains statistics for the time since boot.
3. To display a summary of the statistics since boot including statistics for logical
disks scdisk13 and scdisk14, enter:
vmstat scdisk13 scdisk14
4. To display fork statistics since system startup, enter:
vmstat -f
5. To display the count of various paging events since system startup, enter:
vmstat -s
iostat Command
The iostat command is used for monitoring system input/output device loading by
observing the time the physical disks are active in relation to their average transfer
rates. The iostat command generates reports that can be used to change system
configuration to better balance the input/output load between physical disks.
The first report generated by the iostat command provides statistics concerning the
time since the system was booted. Each subsequent report covers the time since the
previous report. All statistics are reported each time the iostat command is run. The
report consists of a tty and CPU header row followed by a row of tty and CPU
statistics. On multiprocessor systems, CPU statistics are calculated system-wide as
averages among all processors. A disks header row is displayed followed by a line of
statistics for each disk that is configured. If the PhysicalVolume parameter is
specified, only those names specified are displayed.
The Interval parameter specifies the amount of time in seconds between each report.
The first report contains statistics for the time since system startup (boot). Each
subsequent report contains statistics collected during the interval since the previous
report. The Count parameter can be specified in conjunction with the Interval
parameter. If the Count parameter is specified, the value of count determines the
number of reports generated at Interval seconds apart. If the Interval parameter is
specified without the Count parameter, the iostat command generates reports
continuously.
Note: Some system resource is consumed in maintaining disk I/O history for
the iostat command. Use the sysconfig subroutine, or the System
Management Interface Tool (SMIT) to stop history accounting.
Reports
The iostat command generates two types of reports, the tty and CPU Utilization
report and the Disk Utilization report.
The first report generated by the iostat command is the tty and CPU Utilization
Report. For multiprocessor systems, the CPU values are global averages among all
processors. Also, the I/O wait state is defined system-wide and not per processor.
The report has the following format:
Column Description
tin Shows the total number of characters read by the system for all ttys.
tout Shows the total number of characters written by the system to all ttys.
% user Shows the percentage of CPU utilization that occurred while executing at
the user level (application).
% sys Shows the percentage of CPU utilization that occurred while executing at
the system level (kernel).
% iowait Shows the percentage of time that the CPU or CPUs were idle during which
the system had an outstanding disk I/O request. This value may be slightly
inflated if several processors are idling at the same time, an unusual
occurrence.
This information is updated at regular intervals by the kernel (typically sixty times
per second). The tty report provides a collective account of characters per second
received from all terminals on the system as well as the collective count of
characters output per second to all terminals on the system.
The second report generated by the iostat command is the Disk Utilization Report.
The disk report provides statistics on a per physical disk basis. The report has a
format similar to the following:
% tm_act Indicates the percentage of time the physical disk was active (bandwidth
utilization for the drive).
Kbps Indicates the amount of data transferred (read or written) to the drive in
KB per second.
tps Indicates the number of transfers per second that were issued to the
physical disk. A transfer is an I/O request to the physical disk. Multiple
logical requests can be combined into a single I/O request to the disk. A
transfer is of indeterminate size.
For large system configurations where a large number of disks are configured, the
system can be configured to avoid collecting physical disk input/output statistics
when the iostat command is not executing. If the system is configured in the above
manner, the first Disk report displays the message Disk History Since Boot Not
Available.
Flags
-d The -d option is exclusive of the -t option and displays only the disk utilization
report.
-t The -t option is exclusive of the -d option and displays only the tty and cpu usage
reports.
Examples
1. To display a single history since boot report for all tty, CPU, and Disks, enter:
iostat
2. To display a continuous disk report at two second intervals for the disk with
the logical name disk1, enter:
iostat -d disk1 2
3. To display six reports at two second intervals for the disk with the logical
name disk1, enter:
iostat disk1 2 6
4. To display six reports at two second intervals for all disks, enter:
iostat -d 2 6
5. To display six reports at two second intervals for three disks named disk1,
disk2, disk3, enter:
iostat disk1 disk2 disk3 2 6
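During a performance test these reports are usually collected from a script rather than typed by hand. The Python sketch below simply captures one iostat run for later analysis; it assumes a Unix load generator with iostat available on the PATH.

import subprocess
import time

def capture_iostat(interval=2, count=6):
    # Run "iostat -d <interval> <count>" and return its text output.
    result = subprocess.run(
        ["iostat", "-d", str(interval), str(count)],
        capture_output=True, text=True, check=True)
    return result.stdout

snapshot = capture_iostat()
with open("iostat_%d.log" % int(time.time()), "w") as f:
    f.write(snapshot)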
netstat Command
When you invoke netstat with the -r flag, it displays the kernel routing table in the
way we've been doing with route. On vstout, it produces:
# netstat -nr
Kernel IP routing table
Destination   Gateway      Genmask          Flags  MSS  Window  irtt  Iface
127.0.0.1     *            255.255.255.255  UH     0    0       0     lo
172.16.1.0    *            255.255.255.0    U      0    0       0     eth0
172.16.2.0    172.16.1.1   255.255.255.0    UG     0    0       0     eth0
The -n option makes netstat print addresses as dotted quad IP numbers rather than
the symbolic host and network names. This option is especially useful when you want
to avoid address lookups over the network (e.g., to a DNS or NIS server).
The second column of netstat's output shows the gateway to which the routing
entry points. If no gateway is used, an asterisk is printed instead. The third column
shows the “generality” of the route, i.e., the network mask for this route. When
given an IP address to find a suitable route for, the kernel steps through each of the
routing table entries, taking the bitwise AND of the address and the genmask before
comparing it to the target of the route.
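The matching rule described above can be sketched in a few lines of Python. The route entries mirror the netstat -nr output shown earlier; the destination address used at the end is just an example, and this is only an illustration of the bitwise AND comparison, not how the kernel actually implements it.

import ipaddress

# (target network, genmask, interface) as in the table above
routes = [
    ("127.0.0.1",  "255.255.255.255", "lo"),
    ("172.16.1.0", "255.255.255.0",   "eth0"),
    ("172.16.2.0", "255.255.255.0",   "eth0"),
]

def find_route(dest):
    d = int(ipaddress.IPv4Address(dest))
    for target, genmask, iface in routes:
        mask = int(ipaddress.IPv4Address(genmask))
        # AND the destination with the genmask, then compare to the target.
        if (d & mask) == int(ipaddress.IPv4Address(target)):
            return iface
    return None   # a real table would have a default route to catch this

print(find_route("172.16.1.7"))   # example destination -> eth0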
The fourth column displays the following flags that describe the route:
H Only a single host can be reached through the route. For example, this is the
case for the loopback entry 127.0.0.1.
M This route is set if the table entry was modified by an ICMP redirect message.
! The route is a reject route and datagrams will be dropped.
The next three columns show the MSS, Window and irtt that will be applied to TCP
connections established via this route. The MSS is the Maximum Segment Size and is
the size of the largest datagram the kernel will construct for transmission via this
route. The Window is the maximum amount of data the system will accept in a single
burst from a remote host. The acronym irtt stands for “initial round trip time.” The
TCP protocol ensures that data is reliably delivered between hosts by retransmitting
a datagram if it has been lost. The TCP protocol keeps a running count of how long it
takes for a datagram to be delivered to the remote end, and an acknowledgement to
be received, so that it knows how long to wait before assuming a datagram needs to
be retransmitted; this is called the round-trip time. The initial round-trip time is
the value that the TCP protocol will use when a connection is first established. For
most network types, the default value is okay, but for some slow networks, notably
certain types of amateur packet radio networks, the time is too short and causes
unnecessary retransmission. The irtt value can be set using the route command.
Values of zero in these fields mean that the default is being used.
Finally, the last field displays the network interface that this route will use.
When invoked with the -i flag, netstat displays statistics for the network interfaces
currently configured. If the -a option is also given, it prints all interfaces present in
the kernel, not only those that have been configured currently. On vstout, the output
from netstat will look like this:
# netstat -i
Kernel Interface table
Iface MTU  Met RX-OK  RX-ERR RX-DRP RX-OVR TX-OK  TX-ERR TX-DRP TX-OVR Flags
lo    0    0   3185   0      0      0      3185   0      0      0      BLRU
eth0  1500 0   972633 17     20     120    628711 217    0      0      BRU
The MTU and Met fields show the current MTU and metric values for that interface.
The RX and TX columns show how many packets have been received or transmitted
error-free (RX-OK/TX-OK) or damaged (RX-ERR/TX-ERR); how many were dropped
(RX-DRP/TX-DRP); and how many were lost because of an overrun (RX-OVR/TX-
OVR).
The last column shows the flags that have been set for this interface. These
characters are one-character versions of the long flag names that are printed when
you display the interface configuration with ifconfig:
netstat supports a set of options to display active or passive sockets. The options
-t, -u, -w, and -x show active TCP, UDP, RAW, or Unix socket connections. If you
provide the -a flag in addition, sockets that are waiting for a connection (i.e.,
listening) are displayed as well. This display will give you a list of all servers that are
currently running on your system. Invoking netstat -ta on vlager produces this
output:
$ netstat -ta
Active Internet Connections
Proto Recv-Q Send-Q Local Address Foreign Address (State)
tcp 0 0 *:domain *:* LISTEN
tcp 0 0 *:time *:* LISTEN
tcp 0 0 *:smtp *:* LISTEN
tcp 0 0 vlager:smtp vstout:1040 ESTABLISHED
tcp 0 0 *:telnet *:* LISTEN
tcp 0 0 localhost:1046 vbardolino:telnet ESTABLISHED
tcp 0 0 *:chargen *:* LISTEN
tcp 0 0 *:daytime *:* LISTEN
tcp 0 0 *:discard *:* LISTEN
tcp 0 0 *:echo *:* LISTEN
tcp 0 0 *:shell *:* LISTEN
tcp 0 0 *:login *:* LISTEN
This output shows most servers simply waiting for an incoming connection. However,
the fourth line shows an incoming SMTP connection from vstout, and the sixth line
tells you there is an outgoing telnet connection to vbardolino.
Using the -a flag by itself will display all sockets from all families.
Notes
You can tell whether a connection is outgoing from the port numbers. The port
number shown for the calling host will always be a simple integer. On the host being
called, a well-known service port will be in use for which netstat uses the symbolic
name such as smtp, found in /etc/services.
lockstat
Description
The lockstat command reports statistics about contention in the operating system
among simple and complex kernel locks. Reports generated by the lockstat
command can be used to ensure that system performance is not being reduced by
excessive lock contention.
The lockstat command generates a report for each kernel lock, which meets all the
specified conditions. If no condition values are specified, default conditions are used.
The reports give information about the number of lock requests for each lock. A lock
request is a lock operation (such as taking or upgrading a lock), which in some cases
cannot be satisfied immediately. A lock request, which cannot be satisfied at once, is
said to block. A blocked request will either spin (repeatedly execute instructions
which do nothing) or sleep (allowing another thread to execute).
The column headings in the lockstat command listing have the following meanings:
%Block The ratio of blocking lock requests to total lock requests. A block occurs
whenever the lock cannot be taken immediately.
%Sleep The percentage of lock requests that cause the calling thread to sleep.
Flags
-b BlockRatio Specifies a block ratio. A lock must have a block ratio that is higher
than the specified BlockRatio parameter in order to be listed. The
default value of the BlockRatio parameter is five percent.
-n CheckCount Specifies the number of locks that are to be checked. The lockstat
command sorts locks according to lock activity. The CheckCount
parameter determines how many of the most active locks will be
subject to further checking. Limiting the number of locks that are
checked maximizes system performance, particularly if the Count
parameter is used to run the lockstat command repeatedly. By
default, the 40 most active locks are checked.
Interval Specifies the amount of time in seconds between each report. Each report
contains statistics collected during the interval since the previous report. If
the Interval parameter is not specified, the lockstat command generates a
single report covering an interval of one second and then exits.
Count Determines the number of reports generated. The Count parameter can only
be specified with the Interval parameter. If the Interval parameter is
specified without the Count parameter, reports are continuously generated.
A Count parameter of 0 is not allowed.
Examples
1. To generate a single lock statistics report, enter:
lockstat
2. To generate 100 lock statistic reports at one second intervals, displaying only
those locks which are more than 50 percent as active as the most active lock,
enter:
lockstat -p 50 1 100
mpstat Command
The interval parameter specifies the amount of time in seconds between each
report. A value of 0 indicates that processors statistics are to be reported for the
time since system startup (boot). The count parameter can be specified in
conjunction with the interval parameter if this one is not set to zero. The value of
count determines the number of reports generated at interval seconds apart. If the
interval parameter is specified without the count parameter, the mpstat command
generates reports continuously.
Reports
The report generated by the mpstat command has the following format:
CPU
Processor number. The keyword all indicates that statistics are calculated as
averages among all processors.
%user
Show the percentage of CPU utilization that occurred while executing at the user
level (application).
%nice
Show the percentage of CPU utilization that occurred while executing at the user
level with nice priority.
%system
Show the percentage of CPU utilization that occurred while executing at the system
level (kernel). Note that this does not include the time spent servicing interrupts or
softirqs.
%iowait
Show the percentage of time that the CPU or CPUs were idle during which the
system had an outstanding disk I/O request.
%irq
Show the percentage of time spent by the CPU or CPUs to service interrupts.
%soft
Show the percentage of time spent by the CPU or CPUs to service softirqs. A
softirq (software interrupt) is one of up to 32 enumerated software interrupts
which can run on multiple CPUs at once.
%idle
Show the percentage of time that the CPU or CPUs were idle and the system did not
have an outstanding disk I/O request.
intr/s
Show the total number of interrupts received per second by the CPU or CPUs.
Options
-P cpu | ALL
Indicate the processor number for which statistics are to be reported. cpu is the
processor number. Note that processor 0 is the first processor. The ALL keyword
indicates that statistics are to be reported for all processors.
EXAMPLES
mpstat 2 5
Display five reports of global statistics among all processors at two-second intervals.
mpstat -P ALL 2 5
Display five reports of statistics for all processors at two-second intervals.
BUGS
/proc file system must be mounted for the mpstat command to work.
Only a few activities are reported by the Linux kernel for each processor.
FILES
/proc contains various files with system statistics.
References:
1) Mercury Support site (http://support.mercury.com) knowledge base and
discussion forum.
2) LoadRunner 8.1 and 9.0 Manual.
3) http://www.wilsonmar.com/1loadrun.htm
4) Load Runner Yahoo group
http://groups.yahoo.com/group/LoadRunner/
5) Advanced LoadRunner Yahoo Group
http://groups.yahoo.com/group/Advanced-LoadRunner/
Application Layer
The application layer defines how certain services operate and how they can be used.
Examples are the FTP service for transferring files, HTTP for serving Web pages and
SMTP for e-mail.
These services are defined in a rather abstract manner. Two parties, called the client
and the server, set up a connection over which they exchange messages in
accordance with a specific protocol. The client starts the protocol by requesting the
service. Often the next step is for the server to authenticate the client, for example
by asking for a password or by executing a public-key based protocol.
Taking e-mail as an example, the protocol in question is called the Simple Mail
Transfer Protocol (SMTP). The client and the server set up an SMTP connection over
which they exchange identifying information. The client then tells who the message
is from and who the intended recipient is. The server then indicates whether it
accepts or refuses the message (for example if it's spam or the intended recipient is
unknown). If the message is accepted, the client sends the actual content of the
message and the server stores it in the right mailbox.
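The same exchange can be driven from Python's standard smtplib module. The sketch below is only an illustration; the mail server name and the addresses are placeholders.

import smtplib

msg = "Subject: Test\r\n\r\nHello, this is the message body."
with smtplib.SMTP("mail.example.com", 25) as server:   # placeholder host
    server.ehlo()                       # exchange identifying information
    server.sendmail("[email protected]",   # who the message is from
                    "[email protected]",     # the intended recipient
                    msg)                # the actual content of the message
# sendmail() raises SMTPRecipientsRefused if the server refuses the message.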
Transport Layer
On the Internet, the transport layer is realized by two protocols. The first is the
Transmission Control Protocol (TCP) and the second is the User Datagram Protocol
(UDP). Both break up a message that an application wants to send into packets and
attempt to deliver those packets to the intended recipient. At the recipient's side,
both take the payload from the received packets and pass those to the application
layer.
The main difference between TCP and UDP is that TCP is reliable and UDP is not. TCP
will collect incoming packets, put them in the right order and thereby reassemble the
original message. If necessary, TCP requests retransmission of lost or damaged
packets. UDP merely takes each incoming packet and delivers the payload (the
original message) to the application layer. Any errors or out-of-order data should be
taken care of by the application.
UDP is much faster than TCP, and so is mainly used for applications like audio and
video streaming, where the occasional error is less important than getting all the
data there at the right time. More generally, UDP is designed for applications that do
not require the packets to be in any specific order. Because of this, UDP is
sometimes called a "connection-less" protocol.
Taking the example of e-mail again, the e-mail client and server communicate over a
reliable TCP connection. The server listens on a certain port (port 25) until a
connection request arrives from the client. The server acknowledges the request, and
a TCP connection is established. Using this connection the client and server can
exchange data.
The content of this data is not really relevant at this level: that's the responsibility of
the application layer. The e-mail message and all the other information exchanged at
that SMTP application layer are merely payload, data that needs to be transported.
Hence the name transport layer.
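From the transport layer's point of view, a connection is simply a reliable byte stream to a port. The Python sketch below (the host name is a placeholder) opens a TCP connection to port 25 and exchanges a few raw bytes; what those bytes mean is the application layer's business.

import socket

with socket.create_connection(("mail.example.com", 25), timeout=10) as sock:
    greeting = sock.recv(1024)          # bytes delivered reliably and in order
    print(greeting.decode("ascii", errors="replace"))
    sock.sendall(b"QUIT\r\n")           # payload handed to TCP for transport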
Network Layer
The network layer is responsible for transmitting and routing data packets over the
network. The Internet uses the Internet Protocol or IP as its network layer. Each
node on the network has an address, which of course is called the IP address. Data is
sent as IP packets.
When the client sends its TCP connection request, the network layer puts the request
in a number of packets and transmits each of them to the server. Each packet can
take a different route, and some of the packets may get lost along the way. If they
all make it, the transport layer at the server is able to reconstruct the request, and it
will prepare a response confirming that a TCP connection has been set up. This
response is sent back again in a number of IP packets that will hopefully make it to
the client.
The Internet Protocol basically assumes all computers are part of one very large
"web" of nodes that can all pass packets to other nodes. There's always a route from
one node to another, even if sometimes a very large number of intermediate nodes
get involved. The link layer is what makes this assumption true.
Link Layer
The link layer provides a network connection between hosts on a particular local
network, as well as interconnection between such local networks. The e-mail client
runs on a personal computer in someone's home network, which is set up using the
Ethernet protocol. The link layer now is that Ethernet network. The IP packets that
this computer transmits are added as payload to Ethernet packets (called "frames")
that are transmitted over the local network to the ADSL modem that connects the
local network to the provider.
Physical Layer
The lowest layer is the physical layer, which defines how the cables, network cards,
wireless transmitters and other hardware connect computers to networks and
networks to the rest of the Internet. Examples of physical layer networks are
Ethernet, WiFi, Token Ring and Fiber Data Distributed Interface (FDDI). Note that
many of these technologies also have their own link layer protocol. Often link and
physical layer are closely related.
The physical layer provides the means to transfer the actual bits from one computer
to another. In an Ethernet network (a link layer protocol), a computer is connected
by plugging a network cable into its Ethernet card, and then plugging the other end
of that cable into a router or switch. The physical layer specifies how bits of data are
sent over that cable: how do the electrical currents or the pulses the card sends get
turned back into the data for the higher level layers. For wireless networks, this
works exactly the same, except of course there is no cable.
Each layer relies on the layer below it for the actual transmission of data, adding or
providing specific functionality for its own intended purpose. The link layer relies on a
network cable over which Ethernet packets can be sent. The network layer uses
these Ethernet packets to transport IP packets, and adds the ability to route the
packets across networks. The transport layer relies on IP packets to create and
establish the TCP connection, or to transport UDP packets.
At every layer certain messages are exchanged. Each message at a particular level
contains as payload all or part of a message that a higher layer wants to send. This
is called data encapsulation.
For example, a Web browser (an application) needs to send a request for a Web
page to a server. This request is passed on to the transport layer, which sets up a
connection to port 80 of the server and transmits a TCP message containing the
request. The server responds with a TCP message containing the response.
Embedded in the response is the Web page itself. The TCP layer strips off the
response and passes the payload, the Web page, to the browser which then renders
it.
The TCP request and response are both transmitted by the IP layer. The TCP layer
breaks them up into parts that get put in different IP packets. A sequence number is
added to each part, allowing the receiving TCP layer to re-assemble the parts and
thereby recover the actual message. If IP packets are received out of order, the
receiving TCP layer can re-order them. Any missing packets can also be detected.
The receiving TCP layer will then request retransmission.
The IP packets are then put into Ethernet frames or other link-layer messages. If
necessary the IP packets are again divided up into parts, each of which is put into
different link-layer messages. The receiving link layer then re-assembles the IP
packet and passes it to the network layer.
At the physical level, the Ethernet frames are turned into a series of ones and zeroes
in the form of electrical currents or pulses that are transmitted over the network
cable or through the air.
The TCP/IP protocols are the core protocols that make up the Internet. The packet-
based design of TCP/IP has made the Internet very resilient.
The term 'TCP/IP' is both a general name for the Internet architecture as well as an
abbreviation of the two most important protocols in that architecture. The Internet is
based on packet-based data transmission: data is divided into packets that are
transported individually from computer to computer and from network to network.
Each computer on the network has its own IP address that is used to deliver these
packets.
TCP/IP architecture (figure): the Transport layer contains TCP, UDP, ICMP and OSPF;
the Internet layer contains IP and ARP.
TCP/IP networks are the most common type of network today. With such a network, a
number of computers or nodes can communicate with each other. An important
aspect of this communication is routing: getting data packets from one node to
another, in particular from one node on one network to another node on another
network.
Nodes, hubs and switches
The most common type of network (especially in the home) is the Ethernet network
shown in figure 1, where all nodes are connected to a central device. In its simplest
form this central node is called a hub.
Basically, a hub is a box with lots of connections (sockets) for Ethernet cables. The
hub repeats all messages it receives to all connected nodes, and these nodes filter
out only the messages that are intended for them. This filtering takes place at the
Ethernet level: incoming messages carry the Ethernet network address of the
intended recipient.
A problem with this approach is that hubs generate a lot of traffic, especially on
larger networks. Most of this traffic is wasted, since it is intended for only one node
but it is sent to all nodes on the network.
A commonly used solution today is a switch. A switch still connects all nodes to each
other, like a hub, but is more intelligent in which messages are passed on to which
node. A switch examines incoming Ethernet messages to see which node is the
intended recipient, and then directly (and only) passes the messages to that node.
This way other nodes do not unnecessarily receive all traffic.
Since switches are more expensive than hubs, a low-traffic part of the network could be
set up using a hub, with the more high-traffic nodes being interconnected to the switch.
The hub segment is then connected to the switch as well, as shown in figure 2.
A large network can be divided into multiple parts which are called segments. Each
segment can use its own network protocol, security rules, and firewalls and so on.
Nodes on different segments cannot directly communicate with each other. To make
this possible, a bridge is added between the segments, as shown in figure 3.
The bridge lets packets pass that are destined for a host on the other side. This seems to
turn the two segments into one big network again, but there is an important difference.
Data packets generated on one segment and intended for that same segment are not
passed to the other segment. This saves on data transmission on the network as a whole.
Routers and routing
The above examples all presented a single network at the Internet Protocol level.
Even when the network is segmented, all nodes are still able to communicate with
each other. To connect networks, a router or gateway is used.
A router is connected to two different networks and passes packets between them,
as shown in figure 4 to the right. In a typical home network, the router provides the
connection between the network and the Internet.
A gateway is the same as a router, except that it also translates between one network
system or protocol and another. Network Address Translation (NAT), for example, uses a
NAT gateway to connect a private network to the Internet.
Figure 5 below illustrates how routers (and behind them, entire networks) may be
connected. There are now multiple routes from the node at the left to the node at
the right. Since routers transmit IP packets, and IP packets are all independent of
one another, each packet can travel along a different route to its destination.
The TCP protocol that runs in the transport layer above does not notice this,
although a user may notice if suddenly the connection seems faster or slower. That
could be caused by packets now following a different route that is faster or slower
than the old one.
Security of routing
Routing data packets in this way is very efficient, but not very secure. Every router
in between the source and the destination can examine every packet that comes
through. This enables systems like Carnivore, for example, to examine almost all
Internet traffic. Using encrypted Internet transmissions avoids this. Onion routing
with systems like Tor avoids even this risk, although such systems are much slower
than traditional routing systems.
On Internet Protocol (IP) networks such as the Internet itself, data is sent in packets.
Each packet carries the addresses of the source and the destinations. These
addresses on IP networks are then of course called IP addresses. Every node
(computer) on an IP network needs to have its own IP address.
As humans usually prefer to use names, applications such as Web browsers will need
to translate those names (using DNS) into IP addresses before they can
communicate with the host in question.
IP version 4 addresses
IP version 4 (IPv4) is the main version of the Internet Protocol. This version is
currently used by almost all IP networks. An IP version 4 address is a 32-bit
number that is typically written as four decimal numbers separated by periods. An
example is "192.168.1.3".
IP version 6 addresses
IP version 6 (IPv6) addresses were introduced because the old IP version 4 addresses
were in danger of running out. An IP version 6 address is a 128-bit number that is
typically written as eight groups of four hexadecimal digits. The groups are separated by
colons. An example is "2001:610:113b:50a1: 136".
Class-based IP addresses
Originally, when the Internet Protocol was first defined, IP (version 4) addresses
were handed out to organizations in blocks. There are three classes of blocks: Class
A, Class B and Class C. The higher the class, the larger the number of IP addresses
in the block.
The class an IP address belongs to follows from the first decimal number: Class A
addresses have numbers between 1 and 127, Class B is between 128 and 191, and
Class C is between 192 and 223. There are also Class D (224 to 239) and Class E
(240 to 255), but these are rarely used in practice.
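Because the class follows directly from the first decimal number, it is easy to compute. The small Python sketch below simply applies the ranges listed above; the addresses used are examples.

def address_class(ip):
    first = int(ip.split(".")[0])   # the first decimal number of the address
    if 1 <= first <= 127:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D"
    return "E"

print(address_class("131.155.1.1"))   # -> B
print(address_class("192.168.1.3"))   # -> C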
The organization is itself responsible for dividing the IP addresses in its assigned
block to nodes in its IP network. For example, the Eindhoven University of
Technology has been assigned the Class B block of 131.155, and so can use any IP
address between 131.155.0.1 and 131.155.255.254. The ".0.0" and ".255.255"
addresses are reserved.
It's clear that this method of dividing up the IP address blocks quickly runs out of
addresses. A Class B and especially a Class A block gives an enormous amount of
addresses to one organization, which probably does not need all of them. However the
remaining unneeded parts of the block cannot be reassigned to someone else.
Classless IP addresses
To allow a more fine-grained way of handing out addresses, today most IP version 4
address blocks are handed out as subnets. This approach avoids the class-based
division and its coarse-grained distribution of IP addresses. For IPv6, classless
assignment is the only way to obtain blocks of IP version 6 addresses.
A block of IP addresses can be divided into smaller, more manageable groups that
can each be assigned to different organizations. And even within one organization
different subnets can be set up for different networks within the organization.
For example every building could be given its own subnet, or the sales, marketing
and R&D departments could be given their own respective subnets. The subnets or
local networks can then be managed separately, for example with their own firewalls
or separate connections to the Internet.
Subnet masks
Subnets are defined by means of a subnet mask that specifies which parts of an IP
address belong to the group, the network, and which parts make up the individual
node's address. This requires comparing the IP address and the subnet mask (or just
net mask for short) in their binary forms. If a bit of the subnet mask is '1', the
corresponding bit in the IP address belongs to the group (subnet). If the subnet bit is
'0', the corresponding IP address bit is part of the individual address.
For example, consider the IP address "192.168.100.1". This is part of the
"192.168" block. The standard subnet mask in this block is "255.255.0.0", which
means there are roughly 65,000 individual IP addresses in this network (these range
from 192.168.0.1 to 192.168.255.254). To split this up, the network administrator can
define the subnet mask "255.255.255.0", which allows the creation of 256 networks
with up to 254 hosts in each network. More flexibility can be obtained with more
creative choices of subnets.
Default network
With the subnet mask "255.255.255.0", the network address of the host
"192.168.100.1" is "192.168.100.0". The zero at the end indicates that this is a
network address. This is why individual hosts can only have IP addresses ending in 1
or higher.
The same address can belong to a different network by changing the subnet mask.
For example, when the subnet mask is "255.255.240.0" instead, the network
address becomes "192.168.96.0".
This choice of netmask allows 4,094 hosts on a single network, although only 16 of
these networks can be created within this block of IP addresses. The fact that this
network has the number '96' in its third decimal is misleading: it's actually the sixth
network (decimal six in binary is 0110), but because of the four zeroes after the
"0110" in the notation above, the third decimal in the IP address becomes "96".
Of course an administrator can use different subnets within one block. This way one
network can have 254 nodes and another can have 4,094 hosts. The netmasks should be
chosen carefully so that they do not overlap, of course.
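Python's standard ipaddress module can be used to check this arithmetic. The sketch below applies the two netmasks discussed above to the same host address.

import ipaddress

host = "192.168.100.1"
for mask in ("255.255.255.0", "255.255.240.0"):
    # strict=False because the host part of the address is non-zero
    net = ipaddress.IPv4Network((host, mask), strict=False)
    print(mask, "->", net.network_address,
          "with", net.num_addresses - 2, "usable hosts")
# 255.255.255.0 -> 192.168.100.0 with 254 usable hosts
# 255.255.240.0 -> 192.168.96.0 with 4094 usable hosts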
Reserved addresses
Not everyone is interested in building IP-based networks where each node needs an
address on the worldwide Internet. Three special ranges (blocks) of IP addresses
have been reserved for use in local networks. These "private ranges" or "private
addresses" are:
10.0.0.0 through 10.255.255.255
172.16.0.0 through 172.31.255.255
192.168.0.0 through 192.168.255.255
127.0.0.0 through 127.255.255.255
These addresses can always be used in local networks that do not directly connect to
the Internet. In fact they are not even supposed to connect to the Internet, and any
node that receives messages from outside its own network with one of these
addresses as the sender will discard such messages right away.
The reserved range of 127.0.0.0 is intended for use on a single node. Addresses in
this range are called loopback addresses. Only applications on the same node can
send packets to these addresses. This makes it possible, for example, to run a
Webserver from the address 127.0.0.1 so changes to a Website can be tested from
the Web designer's computer. Other people can never access that Webserver.
These reserved addresses are often used in conjunction with the Network Address
Translation (NAT) scheme, sometimes also called "IP Masquerading" or "Network
Masquerading". This means the private addresses are mapped to a single public IP
address so the nodes with these private addresses can still access the Internet. This
way no public IP address block needs to be allocated.
The Network Address Translation (NAT) scheme is used to "hide" local networks from
the public Internet. Essentially, all communication from that local network appears to
come from a single node, the NAT gateway. The NAT gateway forwards requests
from other nodes in that network, and also passes on responses from outside to the
right node on the internal network.
As shown in the figure, three computers on a local network have private addresses
10.0.0.1, 10.0.0.2 and 10.0.0.3. They are connected with a hub which in turn
connects to a NAT gateway. If any of these computers contacts the server on the
right via the Internet, the server always sees the NAT gateway's IP address (here
171.67.2.3).
The NAT gateway is both connected to the local network and to the Internet. It
receives the IP packets that are intended for outside nodes. It registers from which
local node those packets came, and then replaces the source address with its own,
public IP address before forwarding the packets to the real destination.
Incoming packets are also received by the NAT gateway, which determines the real
intended destination, replaces the destination address (its own IP address) with the
real, local IP address and forwards the packets onto the local network.
This on-the-fly address replacement is possible because the NAT gateway examines
the IP packets for the TCP or UDP ports mentioned in those packets. In TCP a
connection is established between a port on the client and a port on the server. The
gateway registers both the local source IP address and the source port. It then
replaces the source port number with a new port number it chooses itself. Incoming
packets carrying that new port number as their destination can then be mapped back
to the original local address and port.
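The bookkeeping described above can be pictured with a small table that maps a public port chosen by the gateway back to the local address and port. The Python sketch below is a toy illustration only, not a real NAT implementation; the gateway and local addresses come from the figure, while the port numbers are invented.

PUBLIC_IP = "171.67.2.3"     # the NAT gateway's public address from the figure
nat_table = {}               # public port -> (local ip, local port)
next_port = 40000            # arbitrary starting point for chosen ports

def outgoing(local_ip, local_port):
    # Pick a fresh public port and remember which local node it stands for.
    global next_port
    public_port = next_port
    next_port += 1
    nat_table[public_port] = (local_ip, local_port)
    return PUBLIC_IP, public_port        # what the outside server sees

def incoming(public_port):
    # Map a reply back to the node that started the connection.
    return nat_table[public_port]

src = outgoing("10.0.0.2", 51515)        # 51515 is a hypothetical local port
print(src)                               # ('171.67.2.3', 40000)
print(incoming(40000))                   # ('10.0.0.2', 51515)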
A problem with NAT is that some application-level protocols do not work well, as they
rely on the IP address provided by the real source. Since that is often a private and
thus unreachable address, the communication will fail.
IP address assignment
Dynamic assignment is today most commonly done with the Dynamic Host
Configuration Protocol (DHCP). When trying to connect to a network, the computer
sends a request for an IP address. The DHCP server receives the request and assigns
or "leases" the address to that computer. A special "lease" message, containing the
IP address together with netmask and other network configuration information, is
then sent to that computer. The lease is valid for a certain period of time, after which
the computer will request a new one.
The address in a lease is taken from a certain block, called the pool in DHCP
terminology. The address that is sent in such a lease can be the same every time,
although this is not required.
Many Internet users are familiar with the even higher layer application protocols that
use TCP/IP to get to the Internet. These include the World Wide Web's Hypertext
Transfer Protocol (HTTP), the File Transfer Protocol (FTP), Telnet, which lets
you logon to remote computers, and the Simple Mail Transfer Protocol (SMTP). These
and other protocols are often packaged together with TCP/IP as a "suite."
Protocols related to TCP/IP include the User Datagram Protocol (UDP), which is used
instead of TCP for special purposes. Other protocols are used by network host
computers for exchanging router information. These include the Internet Control
Message Protocol (ICMP), the Interior Gateway Protocol (IGP), the Exterior Gateway
Protocol (EGP), and the Border Gateway Protocol (BGP).
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|          Source Port          |       Destination Port        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|            Length             |           Checksum            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                          Data ...
+-+-+-+-+-+-+-+-+-+-+-+-+-+-
FIGURE 9. UDP datagram format.
Source Port: Identifies the UDP port being used by the sender of the datagram; use
of this field is optional in UDP and may be set to 0.
Checksum: Provides bit error detection for the UDP datagram. The checksum field
covers the UDP datagram header and data, as well as a 96-bit pseudo header that
contains the source and destination IP addresses, the protocol number and the UDP
length.
UDP provides two services not provided by the IP layer. It provides port numbers to
help distinguish different user requests and, optionally, a checksum capability to
verify that the data arrived intact.
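The "send and hope" nature of UDP is easy to see with Python's standard socket module. The sketch below sends a single datagram over the loopback interface and reads it back; the address 127.0.0.1 and port 9999 are arbitrary choices for the example.

import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", 9999))   # no connection, no ordering

data, addr = receiver.recvfrom(1024)           # source port chosen by the OS
print(data, "from", addr)

sender.close()
receiver.close()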
HTTP concepts include (as the Hypertext part of the name implies) the idea that files
can contain references to other files whose selection will elicit additional transfer
requests. Any Web server machine contains, in addition to the Web page files it can
serve, an HTTP daemon, a program that is designed to wait for HTTP requests and
handle them when they arrive. Your Web browser is an HTTP client, sending requests
to server machines. When the browser user enters file requests by either "opening" a
Web file (typing in a Uniform Resource Locator or URL) or clicking on a hypertext
link, the browser builds an HTTP request and sends it to the Internet Protocol
address (IP address) indicated by the URL. The HTTP daemon in the destination
server machine receives the request and sends back the requested file or files
associated with the request. (A Web page often consists of more than one file.)
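The request the browser builds is plain text, so it can also be written by hand. The Python sketch below (the host name is a placeholder) sends a minimal GET request over a TCP socket and prints the status line of the daemon's reply.

import socket

request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)
with socket.create_connection(("www.example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):    # the daemon's reply: headers plus file
        response += chunk
print(response.split(b"\r\n", 1)[0])   # e.g. b'HTTP/1.1 200 OK'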
The abbreviation URL stands for Uniform Resource Locator. It is a simple way of
indicating the address of a certain resource, and because of its simple format it can
be parsed without a special program.
In a document with a given URL , it is possible to give the URL of another document
relative to the URL of the current document. This relative URL is usually much
shorter than the full URL.
Structure of an URL
http://me:[email protected]:81/users/galactus/file.html
protocol name: http
username (optional): me
password (optional): mypassword
hostname of server, or IP address: www.mydomain.com
port number: 81
local URL part: /users/galactus/file.html
In most cases, the username, password and port number are omitted. It is also
possible that the local URL part ends in a slash, in which case it is called a directory
URL.
If the local URL starts with a slash, it is called an absolute local URL, otherwise it is
called a relative (local) URL.
What are relative URLs?
Put simply, it's an URL which needs some processing before it is valid. It is a local URL,
from which certain information is left out. Often this means some directory names have
been left off, or the special sequence ../ is being used.
The "relative" comes from the fact that the URL is only valid relative to the URL of
the current resource.
As said, a relative URL needs the URL of the current resource to be interpreted
correctly. With some simple manipulations, the relative URL is transformed into an
absolute URL, which is then fetched as usual.
A relative URL is always a local URL. The first part is therefore always the same as
that of the current URL. The relative URL is then turned into an absolute local URL
with the following simple steps:
1. Omit the filename of the current absolute local URL, if it's not a directory URL.
2. For every ../ at the beginning of the relative URL, chop off one directory name
from the current directory URL.
In these examples, we assume that the full URL of the current document is
http://www.foo.com/users/galactus/index.html
As you can see in the last example, it is quite possible that ../ ends you in a totally
different directory on the server.
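The same two steps can be tried out with Python's standard urllib.parse.urljoin function; the relative URLs used here (pics/photo.gif and so on) are made-up examples, while the base URL is the current document mentioned in the text.

from urllib.parse import urljoin

base = "http://www.foo.com/users/galactus/index.html"
print(urljoin(base, "pics/photo.gif"))   # http://www.foo.com/users/galactus/pics/photo.gif
print(urljoin(base, "../index.html"))    # http://www.foo.com/users/index.html
print(urljoin(base, "/top.html"))        # absolute local URL: http://www.foo.com/top.html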
FTP, the File Transfer Protocol, documented in RFC 959, is one of oldest Internet
protocols still in widespread use. FTP is implemented using the TCP Protocol.
As shown in the following diagram, FTP uses separate command and data
connections. The Protocol Interpreter (PI) implements the FTP protocol itself, while
the Data Transfer Process (DTP) actually performs data transfer. The FTP protocol
and the data transfer use entirely separate TCP sessions.
(Figure: the FTP model from RFC 959. The user interface talks to the user Protocol
Interpreter (user PI); the user PI exchanges FTP commands and replies with the server
PI over the control connection, while the server DTP and user DTP move the file data
between the two file systems over the separate data connection.)
File Transfer Protocol (FTP), a standard Internet protocol, is the simplest way to
exchange files between computers on the Internet. Like the Hypertext Transfer
Protocol (HTTP), which transfers displayable Web pages and related files, and the
Simple Mail Transfer Protocol (SMTP), which transfers e-mail, FTP is an application
protocol that uses the Internet's TCP/IP protocols. FTP is commonly used to transfer
Web page files from their creator to the computer that acts as their server for
everyone on the Internet. It's also commonly used to download programs and other
files to your computer from other servers.
As a user, you can use FTP with a simple command line interface (for example, from
the Windows MS-DOS Prompt window) or with a commercial program that offers a
graphical user interface. Your Web browser can also make FTP requests to download
programs you select from a Web page. Using FTP, you can also update (delete,
rename, move, and copy) files at a server. You need to log on to an FTP server.
However, publicly available files are easily accessed using anonymous FTP.
Basic FTP support is usually provided as part of a suite of programs that come with
TCP/IP. However, any FTP client program with a graphical user interface usually
must be downloaded from the company that makes it
Very briefly, FTP (File Transfer Protocol) is used to transfer programs or other
information from one computer to another. This simple tool will let you do many
things: download software, upload your own web pages, transfer information
between your home and work machines, and more. You don't need to learn a lot of
confusing commands, either. As is so often true with computers, the right tool makes
the job much easier
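Python ships with a simple FTP client in the standard ftplib module. The sketch below performs an anonymous login, lists a directory and downloads one file; the host and file names are placeholders.

from ftplib import FTP

with FTP("ftp.example.com") as ftp:      # placeholder host
    ftp.login()                          # anonymous login
    ftp.retrlines("LIST")                # list the current directory
    with open("readme.txt", "wb") as f:  # placeholder file name
        ftp.retrbinary("RETR readme.txt", f.write)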
Ping: A utility that allows a user at one system to determine the status of other
hosts and the latency in getting a message to that host. Uses ICMP Echo messages.
The hierarchical structure of domain names is best understood if the domain name is
read from right-to-left. Internet hosts names end with a top-level domain name.
World-wide generic top-level domains (TLDs) include:
.edu: Educational institutions; largely limited to 4-year colleges and universities from
about 1994 to 2001, but also includes some community colleges (administered by
EDUCAUSE)
TOR or Tor is an abbreviation for The Onion Router. As the Tor homepage puts it
"Tor is a network of virtual tunnels that allows people and groups to improve their
privacy and security on the Internet." Tor uses so-called onion routing to defend
against privacy attacks. Onion routing relies on multiple layers of security that are
removed (like onion skin) one by one as a message is routed through the Tor
network.
To explain what onion routing is, I will elaborate on Tor, as this is the leading
software utilising onion routing. I will first give a short introduction to what routing
and routing protocols are, thereafter which technologies are used to achieve enhanced
communication anonymity and, subsequently, how Tor makes use of these techniques
and in which way it differs from (traditional) onion routing.
Introduction to routing
Routing is most effective if one or more routing protocols are in use. A routing
protocol is in essence a set of rules used by nodes to determine the most
appropriate paths into which they should forward packets towards their intended
destinations. A routing protocol specifies amongst others how intermediary nodes
report changes in the network and share this information with the other nodes in the
network.
I will use a simplified example to clarify these definitions however the principle is the
same irrespective of the scale of the network. If Alice would like to connect to Bob,
she sends a request to her ISP to find out which route should be used to make a
connection and her provider will respond with the routing information.
After Alice's computer received the route to be used the request will be split into
smaller data packets and sent to the first node which in turn sends the packet to the
consecutive nodes to eventually reach Bob.
Even though routing is being used on an everyday basis and its use is widespread, it
does have a disadvantage. Public networks like the Internet are very vulnerable to
traffic analysis because, e.g. packet headers identify the IP addresses of the
recipient(s) and the packet routes can rather easily be tracked.
Digital mixing
If Alice wants to send a message to Bob, without a third person being able to find
out who the sender or recipient is, she would encrypt her message three times with
the aid of public key cryptography. She would then send her message to a proxy
server who would remove the first layer of encryption and send it to a second proxy
server through the use of permutation. This second server would then decrypt and
also permute the message and the third server would decrypt and send the message
to the intended recipient. This is illustrated in the figure below.
Using digital mixing is comparable to sending a letter encased in four envelopes pre-
addressed and pre-stamped with a small message reading, "please remove this
envelope and repost". (With the difference of course that the encryption of the
message is not as easily removed as an envelope.) If the three successive recipients
would indeed post the letter, the letter would reach the intended recipient without
there being a paper trail between the initial sender and the intended recipient.
This system is effective because as long as the three successive recipients, the re-
senders, send enough messages it is impossible for a third person e.g. an ISP, and
subsequently government (policing) agencies, to find out what message was
originally sent by whom and to whom.
Digital mixing also has some downsides. First of all, it only works if the re-senders
send enough messages (at any given moment and during a set amount of time e.g.
a day). However, because (most) nodes, the resending servers, do not send enough
messages at the same time, digital mixing would be vulnerable to statistical analysis
such as data mining by governments or government policing agencies.
By making use of e.g. a threshold batching strategy the proxy servers are able to
solve this lack of messages at the same moment. However, this results in the fact
that the period between the sending and the eventual receiving of the message by
the intended recipient can be several hours depending on the amount of messages
deemed critical. This means a threshold batching strategy makes digital mixing a
(rather) slow technique.
And public key cryptography in itself is not very fast either: a series of proxy
servers decrypting and permuting a message would not take several hours, but it
would still take significantly longer than a normal transfer lasting milliseconds.
This all results in the conclusion that digital mixing is only effective in case of static data
packages such as e-mails, because if digital mixing were to be used for web browsing or
data transfer a high latency would be the result. This in turn means that digital mixing is
not a suitable solution for one who wants anonymity while web browsing or conducting
data transfers with the aid of e.g. FTP servers.
The system of proxy servers has many positive aspects. Firstly it is useable for both
static-, e.g. e-mail, and dynamic data packets, such as web browsing. Secondly,
proxy servers do not require expensive techniques like public key encryption to
function, and thirdly it is an easy system. A user only needs to connect to a proxy
server via his or her web browser to use it, as is illustrated by figure 3 above.
However, anonymizing proxies have one fatal flaw, which makes them less than reliable for anyone who wants anonymous communication. If an untrustworthy third party controls the proxy server, e.g. a group of criminals who use the proxy server for phishing, the user is no longer guaranteed secure and anonymous communication.
If Alice wants to make a connection to Bob through the Tor network, she would first make an unencrypted connection to a centralised directory server containing the addresses of Tor nodes, as illustrated. After receiving the address list from the directory server, the Tor client software connects to a random node (the entry node) through an encrypted connection. The entry node then makes an encrypted connection to a random second node, which in turn does the same to connect to a random third Tor node.
That third node, the exit node, would then connect to Bob, as visualised below. Every Tor node is chosen at random from the address list received from the centralised directory server, both by the client and by the nodes, to enhance the level of anonymity as much as possible (however, the same node cannot be used twice in one connection, and depending on data congestion some nodes will not be used).
If the same connection, i.e. the same array of nodes, were used for a longer period of time, a Tor connection would be vulnerable to statistical analysis, which is why the client software changes the entry node every ten minutes, as illustrated.
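From an application's point of view, using such a circuit is no harder than using an ordinary proxy. A minimal sketch in Python, assuming a local Tor client is already running and listening on its default SOCKS port 9050, and that the requests library is installed with SOCKS support (pip install requests[socks]):

import requests

# Tor's client normally exposes a local SOCKS proxy on port 9050; the
# "socks5h" scheme makes DNS resolution happen inside the Tor network too.
tor_proxy = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# The request enters the network at the entry node and leaves it at the
# exit node, so the destination only ever sees the exit node's address.
response = requests.get("https://check.torproject.org/", proxies=tor_proxy, timeout=30)
print(response.status_code)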
To increase anonymity Alice could also opt to run a node herself. I mentioned earlier
that the identity of all Tor nodes is public, which could lead to the conclusion that
running a node would not increase the level of anonymity for Alice. This notion would
however not be correct.
I will try to explain why running a Tor node actually increases anonymity. If Alice uses the Tor network to connect to Bob, she does this by connecting to a Tor node; however, if she also functions as a node for Jane, she likewise connects to a Tor node. As a result, a malevolent third party is not able to tell which connections Alice initiates as a user and which as a node.
This makes data mining significantly more difficult, and in a situation where Alice functions as a node for dozens of users, it makes data mining virtually impossible. As Roger Dingledine put it poignantly, "Anonymity loves company [...] it is not possible to be anonymous alone". This is also one of the reasons why the United States Department of Defence funded, and still funds, the research, development and refinement of Tor, amongst others through organisations such as DARPA and CHACS.
The SNMP manager and agent use an SNMP Management Information Base (MIB)
and a relatively small set of commands to exchange information. The SNMP MIB is
organized in a tree structure with individual variables, such as point status or
description, being represented as leaves on the branches. A long numeric tag or
object identifier (OID) is used to distinguish each variable uniquely in the MIB and in
SNMP messages.
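To make the tree structure concrete, here is a toy sketch in Python. The nested dictionary is only a stand-in for a real MIB, the device descriptions are invented, and just two standard leaves (sysDescr and sysName) are shown; the dotted OID is simply the path of branch numbers leading to one leaf.

# Nested dictionary standing in for a MIB tree; leaf values are invented.
mib = {
    "1": {                              # iso
        "3": {                          # org
            "6": {                      # dod
                "1": {                  # internet
                    "2": {              # mgmt
                        "1": {          # mib-2
                            "1": {      # system group
                                "1": {"0": "Example router, firmware 2.4"},  # sysDescr
                                "5": {"0": "edge-router-01"},                # sysName
                            },
                        },
                    },
                },
            },
        },
    },
}

def mib_lookup(tree, oid):
    """Walk the tree one OID component at a time until a leaf is reached."""
    node = tree
    for part in oid.split("."):
        node = node[part]
    return node

print(mib_lookup(mib, "1.3.6.1.2.1.1.1.0"))  # -> "Example router, firmware 2.4"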
SNMP uses five basic messages (GET, GET-NEXT, GET-RESPONSE, SET, and TRAP) to
communicate between the SNMP manager and the SNMP agent. The GET and GET-
NEXT messages allow the manager to request information for a specific variable. The
agent, upon receiving a GET or GET-NEXT message, will issue a GET-RESPONSE
message to the SNMP manager with either the information requested or an error indication as to why the request cannot be processed. The SET message allows the manager to request that the value of a specific variable be changed.
As you can see, most of the messages (GET, GET-NEXT, and SET) are only issued by
the SNMP manager. Because the TRAP message is the only message capable of being
initiated by an SNMP agent, it is the message used by DPS Remote Telemetry Units
(RTUs) to report alarms. This notifies the SNMP manager as soon as an alarm
condition occurs, instead of waiting for the SNMP manager to ask.
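As an illustration of the manager side of this exchange, here is a minimal GET sketch in Python. It assumes the third-party pysnmp package (its classic synchronous hlapi interface) is installed; the agent address 192.0.2.10 and the "public" community string are placeholders.

from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public"),                          # SNMP v2c community string
        UdpTransportTarget(("192.0.2.10", 161)),          # placeholder agent address
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),  # sysDescr.0
    )
)

if error_indication:
    print(error_indication)            # e.g. a timeout: no GET-RESPONSE arrived
elif error_status:
    print(error_status.prettyPrint())  # the agent answered with an error
else:
    for var_bind in var_binds:
        # Each var_bind pairs the OID with the value returned by the agent.
        print(" = ".join(x.prettyPrint() for x in var_bind))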
The small number of commands used is only one of the reasons SNMP is "simple."
The other simplifying factor is the SNMP protocol's reliance on an unsupervised or
connectionless communication link. This simplicity has led directly to the widespread
use of SNMP, specifically in the Internet Network Management Framework. Within
this framework, it is considered "robust" because of the independence of the SNMP
managers from the agents, e.g. if an SNMP agent fails, the SNMP manager will
continue to function, and vice versa. The unsupervised communication link does, however, create some interesting issues for network alarm monitoring, which we will discuss more thoroughly in a later issue of our SNMP tutorial.
Connecting to xyzzy.com....
Please enter your login: jsmith
Password? *****
Welcome, jsmith. You have 3 new messages.
Read them now (y/n)? y
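The same kind of dialogue can also be driven from a script. A minimal sketch in Python using the standard-library telnetlib module (available up to Python 3.12); xyzzy.com, the jsmith account and the prompts are taken from the example session above purely for illustration.

import telnetlib

tn = telnetlib.Telnet("xyzzy.com", 23, timeout=10)

tn.read_until(b"login: ")
tn.write(b"jsmith\n")
tn.read_until(b"Password? ")
tn.write(b"s3cret\n")          # placeholder password, echoed as ***** above

banner = tn.read_until(b"(y/n)? ")
print(banner.decode("ascii", "replace"))

tn.write(b"n\n")               # decline to read the new messages
tn.close()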
If you've accessed BBSs by modem before, using a communications program, then
you'll find telnet is similar. But it allows you to access Internet-connected BBSs and
other systems world-wide.
In addition to being a type of program and a protocol, telnet can also be used as a
verb. To telnet to a system means to connect to a system with a telnet program