Software Testing Technical FAQs
Are you a Software QA engineer or Software tester? Need to update your software QA/testing
knowledge or need to prepare for a job interview? Check out this collection of Software
QA/Testing Technical FAQs ...
What is the difference between static and dynamic testing?
Answer1:
Static testing: the code is analysed without being executed, e.g. compiler tasks such as syntax and type checking, plus symbolic execution, program proving, data flow analysis and control flow analysis.
Dynamic testing: the program is run on some test cases and the results of the program's performance are examined to check whether the program operated as expected.
Answer2:
Static Testing: Verification performed without executing the system code
Dynamic Testing: Verification and validation performed by executing the system code
Software Testing
Software testing is a critical component of the software engineering process. It is an element of
software quality assurance and can be described as a process of running a program in such a
manner as to uncover any errors. This process, while seen by some as tedious, tiresome and
unnecessary, plays a vital role in software development.
Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.
Organizations vary considerably in how they assign responsibility for QA and testing.
Sometimes they're the combined responsibility of one group or individual. Also common are
project teams that include a mix of testers and developers who work closely together, with
overall QA processes monitored by project managers. It will depend on what best fits an
organization's size and business structure.
Answer2:
There could be several reasons for not catching a showstopper in the first or second build/rev. A found defect could either functionally or psychologically mask a second or third defect. Functionally, the thread or path to the second defect could have been broken or rerouted to another path; psychologically, the tester who found the first defect knows the app must go back and be rewritten, so he/she proceeds half-heartedly and misses the second one. I've seen both cases. It is difficult to keep testing a known defective app. The testers seem to lose interest, knowing that whatever effort they put into testing it will have to be redone on the next iteration. This will test your mettle as a lead to get them to follow through and maintain a professional attitude.
Answer3:
The best way is to prevent bugs in the first place. Also testing doesn't fix or prevent bugs. It just
provides information. Applying this information to your situation is the important part.
The other thing that you may be encountering is that testing tends to be exploratory in nature.
You have stated that these are existing bugs, but not stated whether tests already existed for
these bugs.
Bugs in early cycles inhibit exploration. Additionally, a tester's understanding of the application and its relationships and interactions will improve with time, and thus more 'interesting' bugs tend to be found in later iterations as testers expand their exploration (i.e., think of new tests).
No matter how much time you have to read through the documents and inspect artefacts, seeing the actual application is going to trigger new thoughts, and thus introduce previously unthought-of tests. Exposure to the application will trigger new thoughts as well; thus the longer your testing goes, the more new tests (and potential bugs) are going to be found. Iterative development is a good way to counter this, as testers get to see something physical earlier, but this issue will always exist to some degree, as the passing of time and exploration of the application allow new tests to be thought of at inconvenient moments.
Who decides which defects get fixed, and how are they prioritized?
Answer1:
Are you the programmer who has to fix them, the project manager who has to supervise the
programmers, the change control team that decides which areas are too high risk to impact, the
stakeholder-user whose organization pays for the damage caused by the defects or the tester?
The tester does not choose which defects to fix.
The tester helps ensure that the people who do choose, make a well-informed choice.
Testers should provide data to indicate the *severity* of bugs, but the project manager or the
development team do the prioritization.
When I say "indicate the severity", I don't just mean writing S3 on a piece of paper. Test groups
often do follow-up tests to assess how serious a failure is and how broad the range of failure-
triggering conditions.
Priority depends on a wide range of factors, including code-change risk, difficulty/time to
complete the change, which stakeholders are affected by the bug, the other commitments being
handled by the person most knowledgeable about fixing a certain bug, etc. Many of these
factors are not within the knowledge of most test groups.
Answer2:
As testers we don't fix the defects, but we surely can prioritize them once they are detected. In our organization we assign a severity level to each defect depending on its influence on other parts of the product. If a defect doesn't allow you to go ahead and test the product, it is a critical one, so it has to be fixed ASAP. We have 5 levels:
1 - Critical
2 - High
3 - Medium
4 - Low
5 - Cosmetic
Development can group all the critical ones and fix them before any other defect.
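As a minimal illustration of that idea (hypothetical Python; the defect data and field names are invented, not from the answer), defects can be grouped by such severity levels so the critical ones surface first:

# Hypothetical severity scale matching the five levels listed above.
SEVERITY = {1: "Critical", 2: "High", 3: "Medium", 4: "Low", 5: "Cosmetic"}

defects = [
    {"id": "D-101", "summary": "Login crashes the app", "severity": 1},
    {"id": "D-102", "summary": "Typo on the help page", "severity": 5},
    {"id": "D-103", "summary": "Report totals are wrong", "severity": 2},
]

# Sort so critical defects come first; developers pick from the top of the list.
for d in sorted(defects, key=lambda d: d["severity"]):
    print(d["id"], SEVERITY[d["severity"]], "-", d["summary"])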
Answer3:
Defects are generally classified in a grid of priority (P1, P2, P3) against severity (S1, S2, S3). Every organization / software product has some target for fixing the bugs.
Example -
P1S1 -> 90% of the bugs reported should be fixed.
P3S3 -> 5% of the bugs reported may be fixed. The rest are taken up in later service packs or versions.
Thus the organization should decide its targets and act accordingly.
Basically, bug-free software is not possible.
Answer4:
Ideally, the customer should assign priorities to their requirements. They tend to resist this. On a large, multi-year project I just completed, I would often (in the absence of customer guidelines) rely on my knowledge of the application and the potential downstream impacts on the modeled business process to prioritize defects.
If the customer doesn't, then I feel the test organization should, based on risk or other similar considerations.
What is retesting?
Answer1:
Retesting is usually equated with regression testing (see above), but it is different in that it follows a specific fix--such as a bug fix--and is very narrow in focus (as opposed to testing the entire application again in a regression test). A product should never be released after a change has been applied to the code with only the bug fix retested and without a regression test.
Answer2:
1. Re-testing is the testing of a specific bug after it has been fixed (the one given by your definition).
2. Re-testing can be one which is done for a bug which was raised by QA but could not be found or confirmed by Development and has been rejected. So QA does a re-test to make sure the bug still exists and assigns it back to them.
When the entire project is tested and the client has some doubts about the quality of testing, re-testing can be called for. It can also be testing the same application again for better quality.
Answer3:
Regression testing is the selective retesting of a system that has been modified, to ensure that any bugs have been fixed, that no other previously working functions have failed as a result of the repairs, and that newly added features have not created problems with previous versions of the software. It is also referred to as verification testing.
It is important to determine whether, in a given set of circumstances, a particular series of tests has been failed. The supplier may want to submit the software for re-testing. The contract should deal with the parameters for retests, including (1) will test programs which are doomed to failure be allowed to finish early, or must they be completed in their entirety? (2) when can, or must, the supplier submit his software for retesting? and (3) how many times can the supplier fail tests and submit software for retesting - is this based on time spent, or the number of attempts? A well-drawn contract will grant the customer options in the event of failure of acceptance tests, and these options may vary depending on how many attempts the supplier has made to achieve acceptance.
So the conclusion is that retesting is more or less regression testing. More appropriately, retesting is a part of regression testing.
Answer4:
Re-testing is simply executing the test plan another time. The client may request a re-test for any reason - most likely the testers did not properly execute the scripts, the test results were poorly documented, or the client is not comfortable with the results.
I've performed re-tests when the developer inserted unauthorized code changes, or did not document changes.
Regression testing is the execution of test cases "not impacted" by the specific project. I am
currently working on testing of a system with poor system documentation (and no user
documentation) so our regression testing must be extensive.
Answer5:
* QA gets a bug fix and has to verify that the bug is fixed. You might want to check a few things that are a "gut feel" if you want to, and get away with calling it retesting, but not the entire function / module / product.
* Development refuses a bug on the basis that it is "Non Reproducible"; then retesting, preferably in the presence of the developer, is needed.
Any recommendation for estimating how many bugs the customer will find up until the gold release?
Answer1:
If you take the total number of bugs in the application and subtract the number of bugs you found, the difference will be the maximum number of bugs the customer can find.
Seriously, I doubt you will find any sort of calculation or formula that can answer your question with much accuracy. If you could reference a previous application release, it might give you a rough idea. The best thing to do is ensure your test coverage is as good as you can make it, then hope you've found the ones the customer might find.
Remember Software testing is Risk Management!
Answer2:
For doing the estimation:
1.) Find out the coverage achieved during testing of your software and then estimate, keeping the 80-20 principle in mind.
2.) You can also look at the depth of your test cases, e.g. how much unit-level testing and how much life-cycle testing you have performed (most of the bugs found by customers come from real life-cycle use of the software).
3.) You can also refer to the defect density from earlier releases of the same product line.
By doing these evaluations you can estimate the number of remaining bugs reasonably well.
Answer3:
You can map the customer issues from the previous release (if you have the same product line) to the current release. This is the best way of estimating for the gold release of any migrated product. Secondly, up until the gold release most of the issues come from various combinations of installation testing: cross-platform, i18n issues, customization, upgrade and migration.
So these can be taken as parameters and the estimation can then be completed.
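As a rough, hedged illustration of the defect-density idea in Answer2 and Answer3 (all the figures below are invented), one could project the number of post-release defects from an earlier release of the same product line:

# Hypothetical figures: defect density observed in the previous release
# (customer-reported defects per KLOC) applied to the size of the current release.
previous_customer_defects = 40      # defects customers found after the previous gold release
previous_size_kloc = 200            # size of the previous release, in KLOC
current_size_kloc = 250             # size of the current release, in KLOC

density = previous_customer_defects / previous_size_kloc       # defects per KLOC
estimated_customer_defects = density * current_size_kloc

print(f"Estimated customer-found defects: {estimated_customer_defects:.0f}")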
When the build comes to the QA team, what are the parameters to be taken into consideration to reject the build up front without committing to testing?
Answer1:
Agree with R&D a set of tests that if one fails you can reject the build. I usually have some build
verification tests that just make sure the build is stable and the major functionality is working.
Then if one test fails you can reject the build.
Answer2:
The only way to legitimately reject a build is if the entrance criteria have not been met. That
means that the entrance criteria to the test phase have been defined and agreed upon up front.
This should be standard for all builds for all products. Entrance criteria could include:
- Turn-over documentation is complete
- All unit testing has been successfully completed and U/T cases are documented in turn-over
- All expected software components have been turned-over (staged)
- All walkthroughs and inspections are complete
- Change requests have been updated to correct status
- Configuration Management and build information is provided, and correct, in turn-over
The only way we could really reject a build without any testing would be a failure of the turn-over procedure. There may be, but shouldn't be, politics involved. The only way the test phase can proceed is for the test team to have all components required to perform successful testing. You will have to define entrance (and exit) criteria for each phase of the SDLC. This is an effort to be taken on together by the whole development team. Development's entrance criteria would include signed requirements, the HLD doc, etc. Having these criteria pre-established sets everyone up for success.
Answer3:
The primary reason to reject a build is that it is untestable, or if the testing would be considered
invalid.
For example, suppose someone gave you a "bad build" in which several of the wrong files had
been loaded. Once you know it contains the wrong versions, most groups think there is no point
continuing testing of that build.
Every reason for rejecting a build beyond this is reached by agreement. For example, if you set
a build verification test and the program fails it, the agreement in your company might be to
reject the program from testing. Some BVTs are designed to include relatively few tests, and
those of core functionality. Failure of any of these tests might reflect fundamental instability.
However, several test groups include a lot of additional tests, and failure of these might not be
grounds for rejecting a build.
In some companies, there are firm entry criteria to testing. Many companies pay lip service to entry criteria but start testing the code whether the entry criteria are met or not. Neither of these is right or wrong--it's the culture of the company. Be sure of your corporate culture before rejecting a build.
Answer4:
Generally a company will have set some sort of minimum goals/criteria that a build needs to satisfy - if it satisfies these it can be accepted, otherwise it has to be rejected.
For example:
- Nil high-priority bugs
- At most 2 medium-priority bugs
- The sanity test (minimum acceptance / basic acceptance) should pass
- The reason for the new build - say a change for a specific case - should pass
- No inability to proceed (non-testability), or other criteria relating to the new build or the product
If the above criteria don't pass, then the build could be rejected.
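A small sketch of such a gate (the thresholds and criteria names are illustrative assumptions, not prescribed by the answers above): the build is rejected up front if any agreed entrance criterion is not met.

# Illustrative build-acceptance check used before committing to a test cycle.
def accept_build(high_bugs, medium_bugs, smoke_test_passed, turnover_complete):
    if high_bugs > 0:                  # nil high-priority bugs allowed
        return False, "open high-priority bugs"
    if medium_bugs > 2:                # at most 2 medium-priority bugs
        return False, "too many medium-priority bugs"
    if not smoke_test_passed:          # sanity / minimum acceptance must pass
        return False, "smoke test failed"
    if not turnover_complete:          # turn-over documentation must be complete
        return False, "turn-over incomplete"
    return True, "build accepted for testing"

print(accept_build(high_bugs=0, medium_bugs=1,
                   smoke_test_passed=True, turnover_complete=True))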
The definition of testing according to the ANSI/IEEE 1059 standard is that testing is the process
of analysing a software item to detect the differences between existing and required conditions
(that is defects/errors/bugs) and to evaluate the features of the software item.
What is the testing lifecycle?
There is no standard, but it consists of:
Test Planning (Test Strategy, Test Plan(s), Test Bed Creation)
Test Development (Test Procedures, Test Scenarios, Test Cases)
Test Execution
Result Analysis (compare Expected to Actual results)
Defect Tracking
Reporting
What is quality?
Quality software is software that is reasonably bug-free, delivered on time and within budget,
meets requirements and expectations and is maintainable. However, quality is a subjective
term. Quality depends on who the customer is and their overall influence in the scheme of
things. Customers of a software development project include end-users, customer acceptance
test engineers, testers, customer contract officers, customer management, the development
organization's management, test engineers, testers, salespeople, software engineers,
stockholders and accountants. Each type of customer will have his or her own slant on quality.
The accounting department might define quality in terms of profits, while an end-user might
define quality as user friendly and bug free.
What is Benchmark?
How it is linked with SDLC (Software Development Life Cycle)?
or SDLC and Benchmark are two unrelated things.?
What are the components of a Benchmark?
In Software Testing where Benchmark fits in?
A Benchmark is a standard to measure against. If you benchmark an application, all future
application changes will be tested and compared against the benchmarked application.
Can a test case contain both valid and invalid input conditions?
Answer1:
All the conditions mentioned are valid, and not a single condition can be stated as false.
Here, I think, 'condition' means the input type or situation (some may call it valid or invalid, positive or negative).
A single test case can contain both input types, and then the final result can be verified (it obviously should not produce the required result, since one of the input conditions is invalid, when the test case is executed); this usually happens while writing scenario-based test cases.
For example, consider a web-based registration form in which the input data type for some fields is positive and for other fields it is negative (in a scenario-based test case).
Such a screen can be tested by generating various scenarios and combinations. The final result can be verified against the actual result, and the registration should not be carried out successfully (as one or more input types are invalid) when this test case is executed.
The writing of a test case also depends on the number of descriptive fields the tester has in the test case template. The more elaborate the test case template, the easier it is to write test cases and generate scenarios. So writing test cases depends entirely on the in-depth thinking of the tester, and there are no predefined or hard-coded norms for writing a test case.
This is according to my understanding of testing and test case writing (for many applications, I have written many positive and negative conditions in a single test case and verified different scenarios by generating such test cases).
Answer2:
The answer to this question is option 3: test cases may contain both valid and invalid conditions, since there is no restriction on a test case having multiple steps or more than one valid or invalid condition. But a unit-level test case - whether for a feature, a unit, or an end-to-end flow - should not contain both a valid and an invalid condition, because then the concept of one test case for one result would be diluted and the case would lose its meaning.
Which things to consider to test a mobile application through black box technique?
Answer1:
Not sure how your device/server is to operate, so mold these ideas to fit your app. Some
highlights are:
Range testing: Ensure that you can reconnect when leaving and returning back into range.
Port/IP/firewall testing - change ports and IPs to ensure that you can connect and disconnect. Modify the firewall to shut off the connection.
Multiple devices - make sure that a user receives his messages with other devices connected to the same IP/port. Your app should have a method to determine which device/user sent the message and return only to it; this should be in the message string sent and received, unless you have conferencing capabilities within the application.
Cycle the power of the server and watch the mobile unit reconnect automatically.
Have the mobile unit send a message and then power off the unit; when powering back on and reconnecting, ensure that the message is returned to the mobile unit.
Answer2:
It is not clearly mentioned which area of the mobile application you are testing. Whether it is a simple SMS application or a WAP application, you need to specify more details. If you are working with WAP you can download simulators from the net and start testing on them.
2. Module testing:
A module is a collection of dependent components such as an object class, an abstract data
type or some looser collection of procedures and functions. A module encapsulates related
components so it can be tested without other system modules.
4. System testing:
The sub-systems are integrated to make up the entire system. The testing process is concerned
with finding errors that result from unanticipated interactions between sub-systems and system
components. It is also concerned with validating that the system meets its functional and non-
functional requirements.
5. Acceptance testing:
This is the final stage in the testing process before the system is accepted for operational use.
The system is tested with data supplied by the system client rather than simulated test data.
Acceptance testing may reveal errors and omissions in the system requirements definition (user-oriented) because real data exercises the system in different ways from the test data. Acceptance testing may also reveal requirements problems where the system facilities do not really meet the user's needs (functional) or the system performance (non-functional) is unacceptable.
Acceptance testing is sometimes called alpha testing. Bespoke systems are developed for a
single client. The alpha testing process continues until the system developer and the client
agrees that the delivered system is an acceptable implementation of the system requirements.
When a system is to be marketed as a software product, a testing process called beta testing is
often used.
Beta testing involves delivering a system to a number of potential customers who agree to use that system. They report problems to the system developers. This exposes the product to real use and detects errors that may not have been anticipated by the system builders. After this feedback, the system is modified and either released for further beta testing or for general sale.
How do you test and get the difference between two images which are in the same window?
Answer1:
How are you doing your comparison? If you are doing it manually, then you should be able to
see any major differences. If you are using an automated tool, then there is usually a
comparison facility in the tool to do that.
Answer2:
Jasper is an open-source utility which can be compiled with C++ and has an imgcmp function which compares JPEG files in very good detail as long as they have the same dimensions and number of components.
Answer3:
Rational has a comparison tool that may be used. I'm sure Mercury has the same tool.
Answer4:
The key question is whether we need a bit-by-bit exact comparison, which the current tools are
good at, or an equivalency comparison. What differences between these images are not
differences? Near-match comparison has been the subject of a lot of research in printer testing,
including an M.Sc. thesis at Florida Tech. It's a tough problem.
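To make the bit-by-bit versus near-match distinction concrete, here is a hedged sketch in plain Python (no image library is assumed): an exact comparison of the raw file bytes, and a crude tolerance-based comparison over pixel values that have already been decoded elsewhere.

# Exact comparison: any single differing byte makes the images "different".
def images_identical(path_a, path_b):
    with open(path_a, "rb") as a, open(path_b, "rb") as b:
        return a.read() == b.read()

# Near-match comparison: assumes the two images have already been decoded into
# equal-length lists of grayscale pixel values (0-255); small deviations are allowed.
def images_equivalent(pixels_a, pixels_b, tolerance=5):
    if len(pixels_a) != len(pixels_b):
        return False
    return all(abs(a - b) <= tolerance for a, b in zip(pixels_a, pixels_b))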
Testing Strategies
Strategy is a general approach rather than a method of devising particular systems for
component tests.
Different strategies may be adopted depending on the type of system to be tested and the
development process used. The testing strategies are
Top-Down Testing
Bottom - Up Testing
Thread Testing
Stress Testing
Back- to Back Testing
1. Top-down testing
Where testing starts with the most abstract component and works downwards.
2. Bottom-up testing
Where testing starts with the fundamental components and works upwards.
3. Thread testing
Which is used for systems with multiple processes where the processing of a transaction
threads its way through these processes.
4. Stress testing
Which relies on stressing the system by going beyond its specified limits and hence testing how
well the system can cope with over-load situations.
5. Back-to-back testing
Which is used when versions of a system are available. The systems are tested together and
their outputs are compared. 6. Performance testing.
This is used to test the run-time performance of software.
7. Security testing.
This attempts to verify that protection mechanisms built into system will protect it from improper
penetration.
8. Recovery testing.
This forces software to fail in a variety of ways and verifies that recovery is properly performed.
Large systems are usually tested using a mixture of these strategies rather than any single
approach. Different strategies may be needed for different parts of the system and at different
stages in the testing process.
When a module is introduced at some stage in this process, tests which were previously unsuccessful may now detect defects. These defects are probably due to interactions with the new module. The source of the problem is localized to some extent, thus simplifying defect location and repair.
The following maps testing activities to the development phases they relate to:
Debugging: brute force, backtracking, cause elimination.
Unit Testing (Coding): focuses on each module and whether it works properly; makes heavy use of white box testing.
Integration Testing (Design): centered on making sure that each module works with another module; comprised of two kinds, top-down and bottom-up integration. Or: focuses on the design and construction of the software architecture; makes heavy use of black box testing. (Either answer is acceptable.)
Validation Testing (Analysis): ensuring conformity with requirements.
Systems Testing (Systems Engineering): making sure that the software product works with the external environment, e.g., computer system, other software products.
For four or five features at once, a single plan is fine. Write new test cases rather than new test
plans. Write test plans for two very different purposes. Sometimes the test plan is a product;
sometimes it's a tool.
What is boundary value analysis?
Boundary value analysis is a technique for test data selection. A test engineer chooses values
that lie along data extremes. Boundary values include maximum, minimum, just inside
boundaries, just outside boundaries, typical values, and error values. The expectation is that, if a system works correctly for these extreme or special values, then it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries.
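For example, a minimal helper (illustrative Python, not a standard formula) that derives the usual boundary candidates for a numeric field with a known inclusive minimum and maximum:

# Given an inclusive valid range, return the classic boundary-value candidates:
# just outside, on, and just inside each boundary, plus a typical mid-range value.
def boundary_values(minimum, maximum):
    return [minimum - 1, minimum, minimum + 1,
            (minimum + maximum) // 2,
            maximum - 1, maximum, maximum + 1]

print(boundary_values(1, 100))   # [0, 1, 2, 50, 99, 100, 101]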
How do you know whether you are over-testing or under-testing an application?
Answer1:
While testing, you need to keep the following two things in mind at all times:
-- Percentage of requirements coverage
-- Number of bugs present and the rate at which new bugs are being found
-- Firstly, there may be a case where the requirements are covered quite adequately but the number of bugs does not fall. This indicates over-testing.
-- Secondly, there may be a case where parts of the application that are not affected by a change or bug fix are also being tested. This is again a case of over-testing.
-- Third is the case you have suggested, with a slight modification: the bug rate has dropped off sufficiently but testing is still being done at the same levels as before.
Answer3:
The best way is to monitor the test defects over a period of time.
Refer to William Perry's book, where he has described the concepts of 'under test' and 'over test'; in fact the data can be plotted to see where you stand against these criteria.
Yes, one of the criteria is to monitor the defect rate and see if it is almost zero. A second method would be to use test coverage, when it reaches 100% (or 100% requirement coverage).
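A rough sketch of that monitoring idea (the numbers and the threshold are assumptions, not from the answers): keep testing while requirement coverage is incomplete or bugs are still arriving, and consider stopping once both conditions are satisfied.

# Defects found per test cycle (hypothetical data) and requirement coverage so far.
defects_per_cycle = [34, 21, 12, 4, 1, 0]
requirement_coverage = 1.0   # 100% of requirements exercised

def continue_testing(defect_trend, coverage, rate_threshold=1):
    recent_rate = defect_trend[-1]
    # Keep testing while coverage is incomplete or bugs are still being found.
    return coverage < 1.0 or recent_rate > rate_threshold

print(continue_testing(defects_per_cycle, requirement_coverage))   # False: rate has dropped and coverage is complete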
What is the purpose of black box testing? If we already check each module's function with white box testing in unit testing, will doing black box testing as well lose time and money?
Answer1:
The main purpose of black box testing is to validate that the application works as the user will be operating it, and in the environments of their systems. How else do you do system testing and integration testing?
You may lose time and money, but you may also lose quality and eventually customers!
Answer2:
"What is the purpose of black box testing?"
Black-box testing checks that the user interface and user inputs and outputs all work correctly.
Part of this is that error handling must work correctly. It's used in functional and system testing.
"We do everything in white box testing: - we check each module's function in the unit testing"
Who is "we"? Are you programmers or quality assurance testers? Usually, unit testing is done
by programmers, and white-box testing would be how they'd do it.
"- once unit test result is ok, means that modules work correctly (according to the requirement
documemts)"
Not quite. It means that on a stand-alone basis, each module is okay. White box testing only
tests the internal structure of the program, the code paths. Functional testing is needed to test
how the individual components work together, and this is best done from an external
perspective, meaning by using the software the way an end user would, without reference to the
code (which is what black-box testing is).
"if we do testing again in black box, will we lose time and money?"
No, the opposite: You'll lose money from having to repair errors you didn't catch with the white-
box testing if you don't do some black-box testing. It's far more expensive to fix errors after
release than to test for them and fix them early on.
But again, who is "we"? The black box testers should not be the people who did the
programming; they should be the QA team -- also some end users for the usability testing.
Now that I've said that, good programmers will run some basic black-box tests before handing
the application to QA for testing. This isn't a substitute for having QA do the tests, but it's a lot
quicker for the programmer to find and fix an error right away than to have to go through the
whole process of reporting a bug, then fixing and releasing a new build, then retesting.
What's a Quality Approach document? What should its contents be, and so on?
Answer1:
You should start thinking from your company's business type, and according to it define the different processes for your organization, like procurement, CM, etc.
Then think over the different metrics you will be calculating for each process, and define them with formulas, the kind of analysis you will be doing, and when a red flag should be raised.
Decide on your audit policies, frequencies, etc. Think about the change control board in case any process needs modification.
Answer2:
By defining the process I mean the structured collection of practices that describe the characteristics of the work and its quality. Writing a process means creating a system with which everyone will work; the benefits are a common language and a shared vision across the organization, and it will be a framework for prioritizing actions.
From an implementation point of view, first you need to break the complete life cycle of your product into different meaningful steps and set the goals for each phase.
You can create document templates which everyone shall follow, define the dependencies among different groups for each project, define the risks for each project and the mitigation plan for each risk, etc.
You can read the CMMI model and customize it as per your organization's goals. For a start-up company, in my personal opinion, it is better to define and reach the Level 3 process first and then go for Level 5.
Answer1:
1---Regression testing must consist of a fixed set of tests to create a base line
Don't think it is true as a "must" -- it
depends on whether your regression testing style involves repeating identical tests or redoing
testing in previously tested areas with similar tests or tests that address the same risks. For
example, some people do regression testing with tests whose specific parameters are
determined randomly. They broaden the set of values they test while achieving essentially the
same testing. Second example--some regression test suites include random stringing together
of test cases (they include load testing and duration testing in their regression series, reporting
their results as part of the assessment of each build). Depending on your theory of the _point_
of regression testing, these may or may not be entirely valid regression tests.
4--- Regression testing should be targeted areas of high risk and known code change
Hmmm, there's an area of computer science called program slicing, and one of the objectives of this class of work is to figure out how to restrict the regression test suite to a smaller number of tests, which test only those things that might have been impacted by a change. Bob Glass has criticized the results of some of this work, but if #4 is false, some Ph.D.s and big research grants should be retracted.
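A toy illustration of the idea behind #4 (the module-to-test mapping is invented purely for the example; real program slicing is far more involved than a lookup table): restrict the regression run to the tests that trace to the changed modules.

# Invented traceability map from code modules to the regression tests that cover them.
tests_by_module = {
    "billing":   ["test_invoice_total", "test_tax_rounding"],
    "transport": ["test_retry_on_timeout", "test_large_payload"],
    "ui":        ["test_login_form", "test_error_banner"],
}

def select_regression_tests(changed_modules):
    selected = []
    for module in changed_modules:
        selected.extend(tests_by_module.get(module, []))
    return selected

print(select_regression_tests(["transport"]))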
Answer2:
Let me explain why I think 2 & 5 are false
2---Regression tests should be used to detect defects in new features
Since regression tests only address existing features and functionality, they can't find defects in new features. They can only find where existing features and functionality have been broken by changes.
I also don't like 1 and 4. 1 - since a regression test suite grows as the product does, the tests are not fixed. 4 - because a regression test tests the whole application, not just a targeted area. In the past, I have used the concept of test depth (level 1 being the basic regression tests; higher numbers reflect additional functionality), so you could run a level-one regression on the whole program but do level three on the transport layer "because we've updated the library".
An automated set of tests would be the most likely way to make 3 a possibility. It is unlikely that with daily builds, as many companies run their build process, anything short of an automated regression test suite could be run daily with any efficacy. If the builds were weekly, then a manual regression test would be feasible.
Answer3:
As per the definition of regression testing and actual practice, if you have to answer this question then options 3 & 4 are the best choices among all. The reasoning:
3---Regression testing can be run on every build. This is a normal phenomenon if a build comes on a weekly basis or it is an RC build. Since nothing is mentioned about daily builds, only about every build, this can be correct.
4---Regression testing should be targeted at areas of high risk and known code change. This is also true in most situations; it is not universally true, but where there is a code change only the related modules are tested in the regression automation rather than the whole code.
5 is not true because in regression testing we normally detect defects, we do not prevent them.
In a QA team, everyone talks about process. What exactly are they talking about? Are there different types of process?
Answer1:
When you talk about "process" you are generally talking about the actions used to accomplish a
task.
Here's an example: How do you solve a jigsaw puzzle?
You start with a box full of oddly shaped pieces. In your mind you come up with a strategy for
matching two pieces together (or no strategy at all and simply grab random pieces until you find
a match), and continue on until the puzzle is completed.
If you were to describe the *way* that you go about solving the puzzle you would be describing
the process.
Some follow-up questions you might think about include things like:
- How much time did it take you to solve the puzzle?
- Do you know of any skills, tricks or practices that might help you solve the puzzle quicker?
- What if you try to solve the puzzle with someone else? Does that help you go faster, or
slower? (why or why not?) Can you have *too* many people on this one task?
- To answer your second question, I'll ask *you* the question: Are there different ways that
people can solve a jigsaw puzzle?
There are many interesting process-related questions, ideas and theories in Quality Assurance.
Generally the identification of workplace processes leads to questions about improving efficiency and productivity. The motivation behind that is to try to make the processes as efficient as possible so as to incur the least amount of time and expense, while providing a general sense of repeatability, visibility and predictability in the way tasks are performed and completed.
The idea behind this is generally good, but the execution is often flawed. That is what makes
QA so interesting. You see, when you work with people and processes, it is very different than
working with the processes performed by machines. Some people in QA forget that distinction
and often become disillusioned with the whole thing.
If you always remember to approach processes in the workplace with a people-centric view, you
should do fine.
Answer2:
There is:
* Waterfall
* Spiral
* Rapid prototype
* Clean room
* Agile (XP, Scrum, ...)
Answer2:
You should not only understand what a Quality Plan is, but you should understand why you're making it. I don't believe that "because I was told to do so" is a good enough reason. If the person who told you to create it can't tell you 1) what it is, and 2) how to create it, I don't think they actually know why it's needed. That breaks the primary rule of all plans used in testing:
We write quality plans for two very different purposes. Sometimes the quality plan is a product; sometimes it's a tool. It's too easy, but also too expensive, to confuse these goals.
If it's not being used as a tool, don't waste your time (and your company's money) doing this.
What is back-end testing, and why is testing from the front end alone not enough?
Answer1:
Assume that you're thinking client-server or web. If you test the application on the front end only, you can see whether the data was stored and retrieved correctly. You can't see whether the servers are in an error state or not; many server processes are monitored by another process, and if they crash they are restarted. You can't see that without looking at it.
The data may not be stored correctly either, but the front end may have cached data lying around and will use that instead. The least you should be doing is verifying the data as stored in the database.
It is easier to test data being transferred on the boundaries, and to see the results of those transactions, when you can set the data in a driver.
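A small sketch of that kind of back-end check using Python's built-in sqlite3 module (the table and column names are invented for the example): after exercising the front end, query the database directly and verify what was actually stored.

import sqlite3

# Assume the application should have written a booking record; verify it at the back end.
def booking_stored_correctly(db_path, booking_id, expected_destination):
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute(
            "SELECT destination FROM bookings WHERE id = ?",
            (booking_id,),
        ).fetchone()
    finally:
        conn.close()
    return row is not None and row[0] == expected_destination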
Answer2:
Back-end testing: basically the requirement for this testing depends on your project. Say your project is a ticket booking system. On the front end you are provided with an interface where you can book the ticket by giving the appropriate details (like the place to go, the time when you want to go, etc.). It will have a data storage system (a database or an Excel sheet, etc.), which is the back end for storing the details entered by the user.
After submitting the details, you might be given a correct acknowledgement, but in the back end the details might not be updated correctly in the database because of wrongly developed logic. That would cause a major problem.
Regarding unit-level testing and system testing: unit-level testing covers the basic checks of whether the application works with the basic requirements; this will be done by developers before delivering to QA. In system testing, in addition to the unit checks, you will be performing all the checks (all the possible integrated checks required). Basically this will be carried out by the tester.
Answer3:
Ever heard about divide and conquer tactic ? It is a same method applied in backend and
frontend testing.
A good back end test will help minimize the burden of frontend test.
Another point is you can test the backend while develope the frontend. A true pararelism could
be achived.
Backend testing has another problem which must addressed before front end could use it. The
problem is concurency. Building a scenario to test concurency is formidable task.
A complex thing is hard to test. To create such scenarios will make you unsure which test you
already done and which you haven't. What we need is an effective methods to test our
application. The simplest method i know is using divide and conquer.
Answer4:
A wide range of errors are hard to see if you don't see the code. For example, there are many
optimizations in programs that treat special cases. If you don't see the special case, you don't
test the optimization. Also, a substantial portion of most programs is error handling. Most
programmers anticipate more errors than most testers.
Programmers find and fix the vast majority of their own bugs. This is cheaper, because there is
no communication overhead, faster because there is no delay from tester-reporter to
programmer, and more effective because the programmer is likely to fix what she finds, and she
is likely to know the cause of the problems she sees. Also, the rapid feedback gives the
programmer information about the weaknesses in her programming that can help her write
better code.
Many tests -- most boundary tests -- are done at the system level primarily because we don't
trust that they were done at the unit level. They are wasteful and tedious at the system level. I'd
rather see them properly done and properly automated in a suite of programmer tests.
How effectively can we implement Six Sigma principles in a very large software services organization?
Answer1:
For an effective way of implementing Six Sigma, there are quite a few things one needs:
1. management buy-in
2. a dedicated team, both drivers as well as adopters
3. training
4. culture building - if you have a pro-process culture, life is easy
5. sustained effort over a period towards transforming people, thoughts and actions.
Personally, I find the technical content is never a challenge, but adoption is a challenge.
Answer2:
"Six sigma" is a combination of process recommendations and mathematical model. The name
"six sigma" reflects the notion of reducing variation so much that errors -- events out of
tolerance -- are six standard deviations from a desired mean. The mathematics are at the core
of the process implementation.
The problem is that software is not hardware. Software defects are designed in, not the result of
manufacturing variation.
The other side of six sigma is the drive for continuous improvement. You don't need the six
sigma math for this and the concept has been around long before the six sigma movement.
To improve anything, you need some type of indicator of its current state and a way to tell that it
is improved. Plus determination to improve it. Management support helps.
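To put a number on "six standard deviations from a desired mean", here is a back-of-the-envelope illustration (not part of the answer, and it ignores the 1.5-sigma shift that Six Sigma practice usually applies when quoting 3.4 defects per million):

from statistics import NormalDist

# Probability of an observation falling more than six standard deviations
# from the mean on either side, assuming a normal distribution.
p_outside = 2 * (1 - NormalDist().cdf(6))
print(f"{p_outside * 1e6:.4f} defects per million opportunities")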
Answer3:
There are different methodologies adopted in Six Sigma. However, it is commonly referenced from the variance-based approach. If you are trying to look at Six Sigma from that angle for software services, the measurement system fundamentally has to be reliable - the industry has not reached the maturity level of the manufacturing industry, where it fits to a T. The differences between the software and hardware/manufacturing industries are slightly difficult to address.
There are some areas where you can adopt Six Sigma in its full statistical form (e.g. in-process error rate, productivity improvements, etc.); some areas are difficult.
The narrower the problem area is, the better it works, even in software services, to address it with the statistical method.
There are methodologies that have a bundle of tools, along with statistical techniques, which are used over the full SDLC.
A generic observation is that Six Sigma helps if we look for a proper fit of the methodology to the purpose. Otherwise doubts creep in.
What is the defect life cycle?
Answer1:
The defect life cycle is the set of stages a defect passes through after it is identified:
New (when the defect is identified)
Accepted (when the development team and QA team accept that it is a bug)
In Progress (when a person is working to resolve the defect)
Resolved (once the defect is resolved)
Completed (signed off by someone who can take up the responsibility, e.g. the team lead)
Closed/Reopened (retested by the test engineer, who updates the status of the bug)
Answer2:
Defect Life Cycle is nothing but the various phases a bug undergoes after it is raised or reported.
A general interview answer can be given as:
1. New or Opened
2. Assigned
3. Fixed
4. Tested
5. Closed.
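A minimal sketch of such a life cycle as a state machine (hypothetical Python; the set of allowed transitions is an assumption drawn from the stages listed in the answers above):

# Assumed legal transitions between defect states.
TRANSITIONS = {
    "New":         ["Accepted"],
    "Accepted":    ["In Progress"],
    "In Progress": ["Resolved"],
    "Resolved":    ["Completed"],
    "Completed":   ["Closed", "Reopened"],
    "Reopened":    ["In Progress"],
    "Closed":      [],
}

def move(current_state, new_state):
    if new_state not in TRANSITIONS[current_state]:
        raise ValueError(f"illegal transition {current_state} -> {new_state}")
    return new_state

state = "New"
for step in ["Accepted", "In Progress", "Resolved", "Completed", "Closed"]:
    state = move(state, step)
print(state)   # Closed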
Are developers smarter than testers? Any suggestions about the future prospects and the technical skills involved in a testing job?
Answer1:
QA & Testing are thankless jobs. In a software development company developer is a core
person. As you are a fresh graduate, it would be good for you to work as a developer. From
development you can always move to testing or QA or other admin/support tasks. But from
Testing or QA it is little difficult to go back to development, though not impossible(as u are BE
comp)
Seeing the job market, it is not possible for each & every fresher to get into development. But
you can keep searching for it.
Some big company's have seperate Verifiction & Validation groups where only testing projects
are executed. Those teams have TLs, PLs who are testing experts. They earn good salary
same as development people.
In technical projects the testing team does lot of technical work. You can do certifications to
improve your technical skills & market value.
It all depends on your way of handling things & interpersonal, communication and leadership
skills. If it is difficult for you to get a job in developement or you really like testing, just go ahead.
Try to achieve excellence as a testing professional. You will never have a job problem .Also you
will always get onsite opportunities too!! Yuo might have to struggle for initial few years like all
other freshers.
Answer2:
QA and Testing are thankless only in some companies.
Testing is part of development. Rather than distinguish between testing and development, distinguish between testing and programming.
Programming is also thankless in some companies.
Not suggesting that anyone should or should not go into testing. It depends on your skills and
interests. Some people are better at programming and worse at testing, some better at testing
and worse at programming, some are not suited for either role. You should decide what you are
good at and what fascinates you. What type of work would make you WANT to stay at work for
60-80 hours a week for a few years because it is so interesting?
Suggesting that there are excellent testing jobs out there, but there are bad ones too (in testing
and in programming, both).
Have not seen any certification in software testing that improves the technical skill of anyone.
Apparently, testing certification improves a tester's market value in some markets.
Most companies mean testing when they say "QA". Or they mean Testing plus Metrics, where
the metrics tasks are low-skill data collection and basic data analysis rather than thinking up and
justifying measurement systems appropriate to the questions at hand. In terms of skill, salary,
intellectual challenge and value to the company, testing+metrics is the same as testing. Some
companies see QA more strategically, and hire more senior people into their groups. Here is a
hint--if you can get a job in a group called QA with less than 5 years of experience, it's a testing
group or something equivalent to it.
Answer3:
No job is inherently great or menial. As long as you like and love what you do, everything in it seems interesting.
I started as a developer and slowly moved to testing. I find testing to be more challenging and interesting. I have a solid 6 years of testing experience alone, and there are many senior people in my team who are professional testers.
Answer4:
Testing is low-skill work in many companies.
Scripted testing of the kind pushed by ISEB, ISTQB, and the other certifiers is low skill, low
prestige, offers little return value to the company that pays for it, and is often pushed to offsite
contracting firms because it isn't worth doing in-house. In many cases, it is just a process of
"going through the motions" -- pretending to do testing (and spending a lot of money in the
pretense) but without really looking for any important information and without creating any
artifacts that will be useful to the project team.
The only reason to take a job doing this kind of work is to get paid for it. Doing it for too long is
bad for your career.
There are much higher-skill ways to do testing. Some of them involve partial automation (writing
or using programs to help you investigate the program more effectively), but automation tools
are just tools. They are often used just as mind-numbingly and valuelessly as scripted manual
testing. When you're offered this kind of position, try to find out how much judgment you will
have to exercise in the analysis of the product under test and the ways that it provides value to
the users and other stakeholders, in the design of tests to check that value and to check for
other threats to value (security failures, performance failures, usability failures, etc.)--and how
much this position will help you develop your judgment. If you will become a more skilled and
more creative investigator who has a better collection of tools to investigate with, that might be
interesting. If not, you will be marking time (making money but learning little) while the rest of
the technical world learns new ideas and skills.
How do you test a web-based application that has recently been modified to support Double Byte Character Sets?
Answer1:
You should apply black box testing techniques (boundary analysis, equivalence partitioning).
Answer2:
The Japanese and other East Asian customers are very particular about the look and feel of the UI, so please make sure there is no truncation anywhere.
One major difference between Japanese and English is that there is no concept of spaces between words in Japanese. Line breaks in English usually happen wherever there is a space. In Japanese this leads to a lot of problems with the wrapping of text, and if you have a table with a defined column length you might see text appearing vertically.
On the functionality side:
1. Check for the date format and Number format. (it should be in the native locale)
2. Check that your system accepts 2-byte numerals and characters.
3. If there is any field with a boundary value of 100 characters, the field should accept the same number of 2-byte characters as well.
4. The application should work on a Native (Chinese, Japanese, Korean) OS as well as on an
English OS with the language pack installed.
Writing a high level test plan for 2-byte support will require some knowledge of the application
and its architecture.
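Point 3 above can be checked mechanically. Here is an illustrative sketch (the field limit is invented) verifying that a 100-character limit is enforced in characters rather than bytes, so 100 double-byte characters are accepted just like 100 ASCII characters:

FIELD_LIMIT = 100   # the limit is defined in characters, not bytes

def field_accepts(value, limit=FIELD_LIMIT):
    return len(value) <= limit          # len() counts characters in Python 3

ascii_value = "A" * 100
dbcs_value = "テ" * 100                 # 100 Japanese (double-byte) characters

print(field_accepts(ascii_value))                 # True
print(field_accepts(dbcs_value))                  # True
print(len(dbcs_value.encode("utf-8")))            # 300 bytes, but still 100 characters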
before creating test cases to "break the system", a few principles have to be observed:
Testing should be based on user requirements. This is in order to uncover any defects that
might cause the program or system to fail to meet the client's requirements.
Testing time and resources are limited. Avoid redundant tests.
It is impossible to test everything. Exhaustive tests of all possible scenarios are impossible, simply because of the many different variables affecting the system and the number of paths a program flow might take.
Use effective resources to test. This represents use of the most suitable tools, procedures and
individuals to conduct the tests. The test team should use tools that they are confident and
familiar with. Testing procedures should be clearly defined. Testing personnel may be a
technical group of people independent of the developers.
Test planning should be done early. This is because test planning can begin independently of
coding and as soon as the client requirements are set.
Testing should begin at the module. The focus of testing should be concentrated on the smallest
programming units first and then expand to other parts of the system.
We look at software testing in the traditional (procedural) sense and then describe some testing
strategies and methods used in Object Oriented environment. We also introduce some issues
with software testing in both environments.
Would like to know: black box testing techniques like Boundary Value Analysis and Equivalence Partitioning - during which phases of testing are they used? If possible, give examples.
Answer1:
Boundary Value Analysis and Equivalence Partitioning can also be used in unit or component testing, and are generally used in system testing.
Example: you have a module designed to work out the tax to be paid.
An employee has £4000 of salary tax free. The next £1500 is taxed at 10%.
The next £28000 is taxed at 22%.
Any further amount is taxed at 40%.
You must define test cases that exercise valid and invalid equivalence classes:
Any value lower than 4000 is tax free.
Any value between 4000 and 5500 must pay 10%.
Any value between 5501 and 33500 must pay 22%.
Any value bigger than 33500 must pay 40%.
And the boundary values are: 4000, 4001, 5501, 33501.
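A hedged sketch of how that tax rule and its boundary checks might be coded (written from the figures in the answer, purely for illustration; it is not part of the original answer):

def tax(salary):
    # The first 4000 is tax free, the next 1500 is taxed at 10%,
    # the next 28000 at 22%, and anything beyond that at 40%.
    bands = [(4000, 0.0), (1500, 0.10), (28000, 0.22)]
    owed, remaining = 0.0, salary
    for width, rate in bands:
        taxable = min(remaining, width)
        owed += taxable * rate
        remaining -= taxable
        if remaining <= 0:
            return owed
    return owed + remaining * 0.40

# Exercise the boundary values named above.
for value in (4000, 4001, 5501, 33501):
    print(value, round(tax(value), 2))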
Answer2:
Boundary value analysis and equivalence partitioning are used to prepare positive and negative test cases.
Equivalence partitioning: if you want to validate a text box which accepts values between 2000 and 10000, then the test case input is partitioned in the following way:
1. < 2000
2. >= 2000 and <= 10000
3. > 10000
Boundary value analysis is checking the input values on the boundaries. In the above case, you would check whether the input value is on the boundary, just above the boundary, or just below the boundary.
How do you check whether an image used as a push button is enabled or disabled?
Answer1:
As you are saying that all the images are push buttons, you can check the enabled or disabled property. If you are not able to find that property, go to the object repository for that object and click on add/remove to add the available properties to that object. Let me know if that works. If you treat it as an image, you need to check the visible or invisible property; that also might help, as there are no enable or disable properties for the image object.
Answer2:
The image checkpoint does not have any property to verify the enabled/disabled state.
One thing you need to check is:
* Find out from the developer whether he is showing different images for the active/inactive states, i.e. a greyed-out image. That is the only way a developer can show activation/deactivation if he is using an "image". Otherwise he might be using a button that shows a heads-up with an image.
* If it is a button displayed with the heads-up as an image, you would need to use the object properties as a checkpoint.
How do you write test cases?
When I write test cases, I concentrate on one requirement at a time. Then, based on that one
requirement, I come up with several real life scenarios that are likely to occur in the use of the
application by an end user.
When I write test cases, I describe the inputs, action or event, and their expected results, in order to determine whether a feature of an application is working correctly. To make the test case complete, I also add particulars, e.g. test case identifiers, test case names, objectives, test conditions (or setups), input data requirements (or steps), and expected results.
Additionally, if I have a choice, I like writing test cases as early as possible in the development
life cycle. Why? Because, as a side benefit of writing test cases, many times I am able to find
problems in the requirements or design of an application. And, because the process of
developing test cases makes me completely think through the operation of the application.
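As a small illustration of those particulars (the field names and the scenario are invented, not a mandated template), a test case can be captured as a simple structure:

# Illustrative test case record carrying the particulars mentioned above.
test_case = {
    "id": "TC-042",
    "name": "Transfer funds between own accounts",
    "objective": "Verify that a valid transfer updates both account balances",
    "preconditions": ["User is logged in", "Source account balance >= 100.00"],
    "steps": [
        "Open the transfer screen",
        "Enter amount 100.00 and select the target account",
        "Confirm the transfer",
    ],
    "expected_result": "Source decreases and target increases by 100.00; "
                       "a confirmation message is shown",
}

for number, step in enumerate(test_case["steps"], start=1):
    print(number, step)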
Answer1:
System testing: the process of testing an integrated system to verify that it meets specified requirements. Acceptance testing: formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
First, I don't classify incidents or defects by the phase of the software development or testing process; I prefer to classify them by their type, e.g. requirements, features and functionality, structural bugs, data, integration, etc. The value of categorising faults is that it helps us to focus our testing effort where it is most important, and we should have distinct test activities that address the problems of poor requirements, structure, etc.
You don't do user acceptance testing only because the software has been delivered! Take care about the concepts of testing!
Answer2:
In my company we do not perform user acceptance testing, our clients do. Once our system
testing is done (and other validation activities are finished) the software is ready to ship.
Therefore any bug found in user acceptance testing would be issued a tracking number and
taken care of in the next release. It would not be counted as a part of the system test.
Answer3:
This is what I feel user acceptance testing is; I hope you find it useful. Definition:
User acceptance testing is formal testing conducted to determine whether a software product satisfies its acceptance criteria and to enable the buyer to determine whether to accept the system.
Objective:
User Acceptance testing is designed to determine whether the software is fit for the user to use.
And also to determine if the software fits into user's business processes and meets his/her
needs.
Entry Criteria:
End of development process and after the software has passed all the tests to determine
whether it meets all the predetermined functionality, performance and other quality criteria.
Exit Criteria:
After the verification that the docs delivered are adequate and consistent with the executable
system. Software system meets all the requirements of the customer
Deliverables:
User Acceptance Test Plan
User Acceptance Test Cases
User guides/docs
User Acceptance Test Reports
Answer4:
System Testing: Done by QA at the development end. It is done after integration is complete and all integration P1/P2/P3 bugs are fixed; the code is frozen and no more code changes are taken. Then all the requirements are tested and all the integration bugs are verified.
UAT: Done by QA (trained to act like end users). All the requirements are tested and the whole system is verified and validated.
Answer1:
And return the delimited fields as a list of strings? That sounds like a Perl split function. You could build one of your own containing:
[ ] // knocked this together in a few minutes; I am sure there is a much more efficient way of doing things
[ ] // but this is a cobbling together of several built-in functions
[-] LIST OF STRING Split(STRING sDelim, STRING sData)
[ ] LIST OF STRING lsReturn
[ ] STRING sSegment
[-] while MatchStr("*{sDelim}*", sData)
[ ] sSegment = GetField(sData, sDelim, 1)
[ ] ListAppend(lsReturn, Trim(sSegment))
[ ] //crude chunking:
[ ] sSegment += ","
[ ] sData = GetField(sData, sSegment, 2)
[-] if Len(sData) > 0
[ ] ListAppend(lsReturn, Trim(sData))
[ ] return lsReturn
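For comparison, here is a minimal Python sketch of the same idea; it is only an illustration, since Python's built-in str.split already does the heavy lifting:
# Minimal Python sketch of splitting delimited data into trimmed fields.
def split_fields(data, delim):
    return [field.strip() for field in data.split(delim)]

print(split_fields("hello, there I am happy", ","))   # ['hello', 'there I am happy']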
Answer2:
You could use something like this.... hope I am understanding the problem
[+] testcase T1()
[ ] string sTest = "hello, there I am happy"
[ ] string sTest1 = (GetField (sTest, ",", 2))
[ ] Print(sTest1)
[]
[ ] This prints "there I am happy"
[ ] GetField(sTest, ",", 1) would print "hello", etc.
Answer3:
Below is a function which returns all the fields (a LIST OF STRING).
[+] LIST OF STRING ConvertToList (STRING sStr, STRING sDelim)
[ ] INTEGER iIndex= 1
[ ] LIST OF STRING lsStr
[ ] STRING sToken = GetField (sStr, sDelim, iIndex)
[]
[+] if (iIndex == 1 && sToken == "")
[ ] iIndex = iIndex + 1
[ ] sToken = GetField (sStr, sDelim, iIndex)
[]
[+] while (sToken != "")
[ ] ListAppend (lsStr, sToken)
[ ] iIndex = iIndex+1
[ ] sToken = GetField (sStr, sDelim, iIndex)
[ ] return lsStr
I then created a function that searches the screen contents for the required data to validate. This works fine for me. Here it is to study (it is a fragment of a larger script, so variables such as sBatchSuccess and lsScreenContents are declared elsewhere); hope it may help.
[+] void CheckOutPut (STRING sErrorMessage)
[ ]Putty.setActive ()
[]
[ ] // Capture screen contents
[ ] lsScreenContents = Putty.GetScreenContents ()
[ ] Sleep(1)
[ ] // Trim Screen Contents
[ ] lsScreenContents = TrimScreenContents (lsScreenContents)
[ ] Sleep(1)
[-] if (sBatchSuccess == "Yes")
[-] if (ListFind (lsScreenContents, "BUILD FAILED"))
[ ] LogError("Process should not have failed.")
[-] if (ListFind (lsScreenContents, "BUILD SUCCESSFUL"))
[ ] Print("Successful")
[ ] break
[ ] // Check to see if launcher has finished
[-] else
[-] if (ListFind (lsScreenContents, "BUILD FAILED") == 0)
[ ] LogError("Error should have failed.")
[ ] break
[-] else
[ ] // Check for Date Conversion Error
[-] if (ListFind (lsScreenContents, sErrorMessage) == 0)
[ ] LogError ("Error handle")
[ ] Print("Expected - {sErrorMessage}")
[ ] ListPrint(lsScreenContents)
[ ] break
[-] else
[ ] break
[]
[ ] // Raise exception if kPlatform not equal to windows or putty
[+] default
[ ] raise 1, "Unable to run console: - Please specify setting"
[]
Answer1:
The fixed defects can be tracked in the defect tracking tool. I think it is out of scope of a test
case to maintain this.
The defect tracking tool should indicate that the problem has been fixed, and the associated test
case now has a passing result.
If and when you report test results for this test cycle, you should provide this sort of information;
i.e., test failed, problem report written, problem fixed, test passed, etc...
Answer2:
We use Jira (similar to Bugzilla) to manage our test cases as well as our bugs. When a test discovers a bug, you link the two, marking the test as "in work" and "waiting for bug X". Now, when the developer resolves the bug and you retest it, you see the link to the test case and retest/close it.
After the migration is done, how do you test the application (the front end hasn't changed, just the database)?
Answer1:
You can concentrate only on those test cases which involve DB transactions like insert, update, delete, etc.
Answer2:
Focus on the database tests, but it's important to analyze the differences between the two
schemas. You can't just focus on the front end. Also, be careful to look for shortcuts that the
DBAs may be taking with the schema.
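As a concrete illustration of such database-focused checks, here is a hedged Python sketch that compares per-table row counts between the old and the migrated databases; the pyodbc driver, DSNs and table names are assumptions, not part of the original answer:
# Hypothetical check: per-table row counts should match after the migration.
import pyodbc

TABLES = ["customers", "orders", "order_items"]   # placeholder tables touched by the migration

def row_counts(conn_str):
    conn = pyodbc.connect(conn_str)
    cur = conn.cursor()
    counts = {}
    for table in TABLES:
        cur.execute("SELECT COUNT(*) FROM " + table)
        counts[table] = cur.fetchone()[0]
    conn.close()
    return counts

old = row_counts("DSN=old_db")
new = row_counts("DSN=new_db")
for table in TABLES:
    status = "OK" if old[table] == new[table] else "MISMATCH"
    print(table, "old:", old[table], "new:", new[table], status)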
Which test cannot be automated? The acceptance test plan is prepared from what? Which is the test case design methodology? Does the test plan contain the bug tracing procedure and reporting procedure?
1: Which test cannot be automated?
a. Performance testing
b. Regression testing
c. User interface testing
d. None
5: Does the test plan contain the bug tracing procedure and reporting procedure?
Answer1:
It is a mapping of one baselined object to another. For testers, the most common documents to be linked in this manner are a requirements document and the written test cases for that document.
In order to facilitate this, testers can add an extra column to their test cases listing the
requirement being tested.
The requirements matrix is usually stored in a spreadsheet. It contains the test ids down the left side and the requirement ids across the top. For each test, you place a mark in the cell under the heading of the requirement it is designed to test. The goal is to find out which requirements are under-tested and which are either over-tested or so large that too many tests have to be written to adequately test them.
Answer2:
The traceability matrix means mapping of all the work products (various design docs, testing
docs) to requirements.
How to write a Software Requirement Specification (SRS) document for a Grade Card System?
The SRS document is very important: it states what the project is going to do and what it assumes in advance.
Below is some idea about it. The SRS document should include the following points:
1. Project aim.
2. Project objectives.
3. Project scope
4. Process to be followed.
5. Project Deliverables- it includes documents to be submitted and other plans or project
prototypes.
6. Requirements in short.
How can I schedule the different testcases in a (.t) test script so that all the test cases it
contains run one after another ? ...
A small query: there are a number of (.t) script files, each containing a number of test cases. I need to call a user-defined method in all the (.t) script files.
Problem: How to do that.
Second: if one test case runs successfully, can I put in a condition that if it is successful, go to test case 2, else go to test case 3?
Third is: How can I schedule the different testcases in a (.t) test script so that all the test cases it
contains run one after another.
[-] main()
[]
[-] tc1()
[-] if GetTestsPassedCount () != 0 // execute test cases tc2 and tc3 only when test case tc1 passed
[ ] tc2()
[ ] tc3()
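A rough Python sketch of the same conditional sequencing (the test function bodies are placeholders):
# Illustrative: run tc2 and tc3 only if tc1 passes; otherwise go straight to tc3.
def tc1():
    assert 1 + 1 == 2            # placeholder check

def tc2():
    assert "abc".upper() == "ABC"

def tc3():
    assert len([]) == 0

def main():
    try:
        tc1()
        tc1_passed = True
    except AssertionError:
        tc1_passed = False
    if tc1_passed:
        tc2()
        tc3()
    else:
        tc3()

if __name__ == "__main__":
    main()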
What are Test Cases, Test Suites, Test Scripts, and Test Scenarios (or Scenaria)?
A test case is usually a single step, and its expected result, along with various additional pieces
of information. It can occasionally be a series of steps but with one expected result or expected
outcome. The optional fields are a test case ID, test step or order of execution number, related
requirement(s), depth, test category, author, and check boxes for whether the test is
automatable and has been automated. Larger test cases may also contain prerequisite states or
steps, and descriptions. A test case should also contain a place for the actual result. These
steps can be stored in a word processor document, spreadsheet, database or other common
repository. In a database system, you may also be able to see past test results and who
generated the results and the system configuration used to generate those results. These past
results would usually be stored in a separate table.
The most common term for a collection of test cases is a test suite. The test suite often also
contains more detailed instructions or goals for each collection of test cases. It definitely
contains a section where the tester identifies the system configuration used during testing. A
group of test cases may also contain prerequisite states or steps, and descriptions of the
following tests.
Collections of test cases are sometimes incorrectly termed a test plan. They may also be called
a test script, or even a test scenario.
A test plan is the approach that will be used to test the system, not the individual tests.
Most companies that use automated testing will call the code that is used their test scripts.
A scenario test is a test based on a hypothetical story used to help a person think through a
complex problem or system. They can be as simple as a diagram for a testing environment or
they could be a description written in prose. The ideal scenario test has five key characteristics.
It is (a) a story that is (b) motivating, (c) credible, (d) complex, and (e) easy to evaluate. They
are usually different from test cases in that test cases are single steps and scenarios cover a
number of steps. Test suites and scenarios can be used in concert for complete system tests.
See: An Introduction to Scenario Testing
Scenario testing is similar to, but not the same as session-based testing, which is more closely
related to exploratory testing, but the two concepts can be used in conjunction.
See Session-Based Test Management
What's Exploratory Testing?
What is SRS and BRS . and what is the difference between them?
Answer1:
SRS - Software Requirements Specification BRS - Business Requirements Specification
Answer2:
BRS - Business Requirements Specification
This document has to come from the client, stating the need for a particular module or project. It basically tells you why a particular request is needed; reasons have to be given. It is mostly a layperson's document, and it has to be approved by the Project Manager.
SRS - Software Requirements Specification
Follows the BRS after its approval, etc. It gives detailed functional information about the project: requirements, use cases, references, etc., and how each module works in detail.
Your SRS cannot start without a BRS and an approval of the same.
Give some examples of Low Severity and Low Priority Bugs .....
Give some examples of:
Low Severity and Low Priority bugs
High Severity and Low Priority bugs
Low Severity and High Priority bugs
High Severity and High Priority bugs?
Answer1:
First know about severity and priority; then it is easy to decide Low, Medium or High.
Priority - business oriented
Severity - effect of the bug on the functionality
1. For example, there is a cosmetic change in the client's name and you found this bug at the time of delivery. The severity of this bug is low but the priority is high because it affects the business.
2. If you found a major crash in the functionality of the application, but the crash lies in a module which is not included in the deliverables, then the priority is low and the severity is high.
Answer2:
Priority - how soon your business side needs a fix. (Tip: The engineering side never decides
priority.)
Severity - how bad the bug bites. (Tip: Only engineers decide severity.)
For a high priority, low severity example, suppose your program has an easter egg (a secret
feature) showing a compromising photo of your boss. Schedule this bug to be removed
immediately.
Low priority, high severity example: A long chain of events leads to a crash that risks the main data file. Because the chain of events is longer than customers are likely to reproduce, keep an eye on this one while fixing higher priority things.
Testers should report bugs, the business side should understand them and set their priorities.
Then testers and engineers should capture the bugs with automated tests before killing them.
This reduces the odds they come back, and generally reduces "churn", which is bug fixes
causing new bugs.
Answer3:
Priority is how important it is to the customer and if the customer is going to find it. Severity is
how bad it is, if the customer found it.
High Priority low severity
I have a text editor and every 3 minutes it rings a bell (it is also noted that the editor does an
auto-save every 3 minutes). This is going to drive the customer insane. They want it fixed
ASAP; i.e. high priority. The impact is minimal. They can turn off the audio when using the
editor. There are workarounds. Should be easy for the developer to find the code and fix it.
Low Priority High severity
If I press CTRL-Q-SHIFT-T, only in that order, and then eject a floppy diskette from the drive, it formats my hard drive. It is low priority because it is unlikely a customer is going to be affected by it. It is high severity because if a customer did find it, the results would be horrific.
High Priority High severity
If I open the Save As dialog and save the file with the same name the Save dialog would have used, it saves a zero-byte file and all the data is lost. Many customers will select Save As and then decide to overwrite the original document instead. They will NOT cancel the Save As and select Save instead; they will just use Save As and pick the same file name as the one they opened. So the likelihood of this happening is high, therefore high priority. It will cause the customer to lose data, which is costly, therefore high severity.
Low Priority low severity
If I hold the key combination LEFT_CTRL+LEFT_ALT+RIGHT_ALT+RIGHT_CTRL+F1+F12 for 3 minutes, it displays cryptic debug information used by the programmer during development. It is highly unlikely a customer will find this, so it is low priority. Even if they do find it, it might result in a call to customer service asking what the information means. Telling the customer it is debug code left behind, which they didn't want to remove because it would have added risk and delayed the release of the program, is safer than removing it and potentially breaking something else.
Answer4:
High Priority low severity
Spelling the name of the company president wrong
Low Priority High severity
Year end processing breaks ('cause its 6 more months 'till year end)
High Priority High severity
Application won't start
Low Priority low severity
spelling error in documentation; occasionally screen is slightly
misdrawn requiring a screen refresh
What is risk analysis? What does it have to do with Severity and Priority?
Risk analysis is a method to determine how much risk is involved in something. In testing, it can
be used to determine when to test something or whether to test something at all. Items with
higher risk values should be tested early and often. Items with lower risk value can be tested
later, or under some circumstances if time runs out, not at all. It can also be used with defects.
Severity tells us how bad a defect is: "how much damage can it cause?". Priority tells us how
soon it is desired to fix the defect: "should we fix this and if so, by when?".
Companies usually use numeric values to calculate both values. The number of values will
change from place to place. I assume a five-point scale but a three-point scale is commonly
used. Using a defect as an example, Major would be Severity1 and Trivial would be Severity5. A
Priority1 would imply that it needs to be fixed immediately and a Priority5 means that it can wait
until everything else is done. You can add or multiply the two digits together (there is only a
small difference in the outcome) and the results become the risk value. You use the event's risk
value to determine how you should address the problem. The lower values must be addressed
before the middle values, and the higher values can wait the longest.
Defect 12345
Foo displays an error message with incorrect path separators when the optional showpath
switch is applied
Sev5
Pri5
Risk value (addition method) 10
Defect 13579
Module Bar causes system crash using dereferenced handle
Sev1
Pri1
Risk value (addition method) 2
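A minimal Python sketch of the addition method just described, using the two example defects (illustrative only; your scale may differ):
# Illustrative: compute a risk value from severity and priority (1 = worst, 5 = most trivial).
def risk_value(severity, priority):
    return severity + priority   # the multiplication method would use severity * priority

defects = {
    12345: (5, 5),   # incorrect path separators in an error message
    13579: (1, 1),   # system crash from a dereferenced handle
}
for defect_id, (sev, pri) in sorted(defects.items(), key=lambda item: sum(item[1])):
    print(defect_id, "risk value:", risk_value(sev, pri))   # lower values are addressed first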
Answer1:
You will have to write one test case describing the results of various kinds of users. You could
write a tabular data form.
For each action you would create a table
First column: user type
Second: expected result
This avoids the issue of writing a series of test cases where 90% of the information is the same and 10% is different. It makes maintaining the tests easier as well.
And the best way to test your application is to use an automated tool to do it.
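A hedged Python illustration of that tabular, data-driven form (the user roles, the action, and the expected results are made up for the example):
# Illustrative data-driven check: one action, a table of (user role, expected result).
ROLE_EXPECTATIONS = [
    ("admin",   "edit allowed"),
    ("manager", "edit allowed"),
    ("viewer",  "read only"),
]

def perform_action(role):
    # Placeholder for driving the real application as the given role.
    return "edit allowed" if role in ("admin", "manager") else "read only"

for role, expected in ROLE_EXPECTATIONS:
    actual = perform_action(role)
    assert actual == expected, role + ": expected " + expected + ", got " + actual
    print(role, "OK:", actual)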
Answer2:
Think of things in terms of use cases. Treat it like a completely different system for each user
role, and create your own suite of cases for each role.
How to test a module (web-based, developed in .NET) which loads data from a list (a text file) into the database (SQL Server)
How to test a module (web-based, developed in .NET) which loads data from a list (a text file) into the database (SQL Server)? It would touch approximately 10 different tables depending on the data in the list.
The job is to verify that the data which is supposed to get loaded gets loaded correctly. The list might contain 60 million records. Any suggestions?
* Compare the record counts before and after the load and match them with the expected data load.
* Sample records should be taken to ensure data integrity.
* Include test cases where the loaded data is visible functionally through the application. For example, if the load adds new users to the system, then the login functionality using the new user login credentials should work, etc.
Finally, among the tools available in the market, you can be innovative in using functional automation tools like WinRunner and adding DB checkpoints, or you can write SQL to do the back-end testing. Which tools and techniques to narrow in on depends upon the details of the test scenario (test case).
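A hedged sketch of the first check: compare the number of records in the input list file with the number of rows loaded into the target table. The file name, DSN and table name are placeholders, and pyodbc is just one convenient way to reach SQL Server:
# Hypothetical count comparison between the input list and the loaded table.
import pyodbc

def count_list_records(path):
    with open(path, encoding="utf-8") as fh:
        return sum(1 for line in fh if line.strip())   # skip blank lines

def count_loaded_rows(conn_str, table):
    conn = pyodbc.connect(conn_str)
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM " + table)
    (count,) = cur.fetchone()
    conn.close()
    return count

expected = count_list_records("user_list.txt")
actual = count_loaded_rows("DSN=qa_sqlserver", "users")
print("expected:", expected, "loaded:", actual, "OK" if expected == actual else "MISMATCH")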
Answer1:
Think like someone who would like to break the application, like a hacker finding the weaknesses in the system.
Answer2:
Think like a tester, then think negative rather than positive, because a tester always tries to break the application by putting in negative values.
Answer3:
How testers think is:
- Testers are "negative" thinkers
- Testers complain
- Testers like to break things
- Testers take a special thrill in delivering bad news
The authors introduce an alternate view:
- Testers don't complain, they offer evidence
- Testers don't like to break things, they like to dispel the illusion that things work
- Testers don't take a special thrill in delivering bad news, they enjoy freeing their clients from
false belief.
They go on to explain how testers should think:
- Deriving inference
- Technically
- creatively
- Critically
- practically
- Attempting to answer questions
- Exploring, thinking
- Using logic
Answer4:
Testers are destroyers for a creative purpose. Always keep one thing in mind: "CREATIVE DESTRUCTION IS WHAT WE WANT TO ACHIEVE".
One thing to add is that this destructive quality of testers should be brought in only after the smooth flow of the application is assured, i.e., the application passes the positive tests. If the application doesn't pass even the positive testing, the testing strategy falls apart.
And after all, competition is appreciated when both sides are equally strong. So before bringing the real quality of testers into the act while testing, one should ensure that the application has passed the positive testing.
Answer1:
CMM is much oriented towards software engineering process improvements and never speaks of customer satisfaction, whereas ISO 9001:2000 speaks of process improvements generic to all organisations and also speaks of customer satisfaction.
Answer2:
FYI, there are 3 popular ISO standards that are commonly used for software projects. They are 12270, 15540, and 9001 (a subset of 9000); I hope I got the numbers correct. For CMM, the latest version is 1.1; however, it is already considered a legacy standard which is to be replaced by CMMI, whose latest version is also 1.1. For further information on CMM/I, visit the following:
http://www.sei.cmu.edu/cmm/
http://www.sei.cmu.edu/cmmi/
To build and release the build to the QA. Does any body knowing in detail about this
profile?
Build Release engineer,
The nature of the job is to retrieve the source from the configuration management system, create a build on the build machine, take a copy of the files you moved to the build machine, and install them onto the QA servers.
The main task when you install on the QA servers is to be careful about connection properties, whether all applications are extracted properly, and whether the QA server has all the supporting software.
Answer1:
SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S.
Defense Department to help improve software development processes.
CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model
Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that determine
effectiveness in delivering quality software. It is geared to large organizations such as large U.S.
Defense Department contractors. However, many of the QA processes involved are appropriate
to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI
ratings by undergoing assessments by qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to
successfully complete projects. Few if any processes in place; successes may not be
repeatable.
Level 2 - software project tracking, requirements management, realistic planning, and
configuration management processes are in place; successful practices can be repeated.
Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.
Level 4 - metrics are used to track productivity, processes, and products. Project performance is
predictable, and quality is consistently high.
Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required. Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations
was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal
contractors or agencies. For those rated at Level 1, the most problematical key process area
was in Software Quality Assurance.
Answer2:
The whole essence of CMM or CMMI is to produce quality software. It targets the whole
organizational practices (or processes), which are believed to be the best across industries. For
further understanding of SEI CMMI visit http://www.sei.cmu.edu/cmmi.
What is the role of CMMI Level in Testing?
Please understand that Testing is just part or subset of CMMI. Testing is addressed on a
particular Process Area. If my memory serves me correct, it is the VER or Verification process
area and sometimes addressed also in VAL or the Validation process area. It could also be the
other way around.
Each Process Area has its own level to be driven to level 5. This is true for the Continuous Representation of CMMI version 1.1; I am not sure about the Staged Representation of the same version. Please refer to the website above for more details.
What is the difference between the levels of CMMI?
This was already answered in the same thread by Priya. I would like to add that there is an
additional level for the Continuous Representation which is called Level 0 (zero) --> Incomplete.
Which level is most commonly used in Testing?
I would say all levels would deal with testing. But again this is true for VAL and VER Process
Areas.
For further readings, try searching google using CMMI+tutorials or Testing+CMMI. Most of the
documents about CMMI are free and available on the Web.
Answer3:
Level 1. Initial The organization is characterized by an ad hoc set of activities. The processes
aren't defined and success depends on individual effort and heroics.
Level 2. Repeatable At this level, basic project management processes are established to track
costs, to schedule, and to define functionality. The discipline is available to repeat earlier
successes on similar projects.
Level 3. Defined All processes are documented for both management and engineering
activities, and standards are defined.
Level 4. Managed Detailed measures of each process are defined and product quality data is
routinely collected. Both process and products are quantitatively understood and controlled.
Level 5. Optimizing Continuous process improvement is enabled by quantitative feedback from
the process and from piloting innovative ideas and technologies.
Answer1:
The ISO would say that Verification is a process of determining whether or not the products of a
given phase of the software development cycle meets the implementation steps and can be
traced to the incoming objectives established during the previous phase. The techniques for
verification are testing, inspection and reviewing.
Validation is a process of evaluating software at the end of the software development process to
ensure compliance with software requirements. The techniques for validation are testing,
inspection and reviewing.
Answer2:
Validation: Determination of the correctness of the products with respect to the user needs and requirements.
Verification: Determination of the correctness of the product with respect to the test conditions/requirements imposed at the start.
Answer3:
The difference between V & V:
- Verification ensures that the system complies with the organization's standards and processes; Validation physically ensures that the system operates according to plan.
- Verification relies on non-executable methods of analyzing various artifacts; Validation executes the system functions through a series of tests that can be observed and evaluated.
- Verification answers the question "Did we build the system right?"; Validation answers the question "Did we build the right system?"
- Verification uses, e.g., check sheets and the traceability matrix; Validation uses functional or structural testing techniques to catch defects.
- Verification includes requirement reviews, design reviews, code walkthroughs, code inspections, test reviews, independent static analyzers, confirmation in which a 3rd party attests to a document, and desk checking; Validation includes unit testing, coverage analysis, black box techniques, integration testing, system testing and user acceptance testing.
- Verification is the most effective: it has been shown that 65% of defects can be discovered here; Validation is effective, but not as effective as verification, for removing defects: about 30% of defects can be discovered here.
- Verification can be used throughout the SDLC.
Looking for a tool whcih can do bulk data insert to various tables in the test database
and also that tool which work with DB2, SQLServer and Oracle.
Answer1:
First copy the existing data to an Excel file using the DTS import/export wizard in SQL Server 2000: export the contents of the table to an Excel file. In Excel, change the values covered by integrity constraints; for example, if the table has one primary key column, change the values of the primary key by using Excel's linear fill option. Then save it.
Now import the data from this Excel sheet back into the table.
Answer2:
Use Perl and its DBI module. You will also need DBD modules for the specific databases that you want to test with. In theory, you should be able to re-use the scripts and just change DBD connections, or possibly create handles to all three RDBMSs simultaneously. Ruby and Python have similar facilities.
You will just have to have access to the data files somewhere and then you can then read the
data and insert the data into the database using the correct insert statements.
There are other tools, but since they cost money to purchase I have never bothered to
investigate them.
Scripting is the most powerful (and cheapest) way to do it. A preferred method is to use Python and its ODBC module. This way you can use the same code and just change the data source for whichever DB you're connecting to. Also, you could potentially have the script generate random data if you don't have any source data to begin with.
You need to have the proper ODBC client drivers installed on the box you're running the script from for the ODBC module. There's also a PyPerl distribution that will let you use the Perl DBI module with Python. It's really up to personal preference what you're comfortable scripting in.
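A minimal Python sketch of that scripted approach, here using the pyodbc module (the DSN, table, columns and generated data are assumptions; pointing it at DB2, SQL Server or Oracle is a matter of changing the connection string and having the right ODBC driver installed):
# Hypothetical bulk insert of generated test data through ODBC.
import pyodbc
import random

rows = [(i, "user%d" % i, random.randint(18, 90)) for i in range(1, 10001)]   # generated data

conn = pyodbc.connect("DSN=test_db")
cur = conn.cursor()
cur.executemany("INSERT INTO test_users (id, name, age) VALUES (?, ?, ?)", rows)
conn.commit()
conn.close()
print("inserted", len(rows), "rows")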
Answer1:
Test cases for telephone
test the "functionality" of telephone,
1. Test for presence of dial tone.
2. Dial Local number and check that receiver phone(dialled no.) rings.
3. Dial any STD number and check that intended phone number rings.
4. Dial the number of "under test" phone and check that it rings.
5. When ringing, pick it up and check that ringing stops.
6. When talking - then there should be no noise or disturbance.
7. Check that "redial" works properly.
8. Check STD lock facility works.
9. Check speed dialing facility.
10. Check for call waiting facility.
11. Check that only the caller can disconnect the call.
12. If "telephone Under test" is engaged with any caller and at this time if a third caller attempts
to call the "telephone under test" then call between two other parties should not get
disconnected.
13. If "telephone Under test" is engaged with any caller and at this time if a third caller attempts
to call the "telephone under test" then third caller will listen to engage tone or message from
exchange.
14. Check for volume(increase or decrease) of the handset.
15. Keep the handset off the base unit and attempt to call the "telephone under test"; it should not ring.
16. Check for call transfer facility.
Test the telephone itself:
1. Check for extreme temperatures (hot and cold)
2. Check for different atmospheric conditions (humidity etc.)
3. Check for extreme power conditions
4. Check for button durability
5. Check for body strength
etc...
Answer2:
My company designs and builds phone system software, so I am very familiar with phone testing. You could be dealing with an IVR system that has menu-driven logic, or you could be dealing with an auto-attendant with directory features. The basic idea is that you need to be able to define your expected results and record your actual results. The medium is different, but the same basic concepts apply. In some ways the phone is easier because it can be a more linear process than, say, a web system.
How to solve this issue - When developers blame testers for reporting bugs that is not
reproducible on their machine?
Avoid these differences by taking screenshots and attaching them in the bug tracking tool.
If the environment was not the cause, then the "Steps to Reproduce" portion of the bug report was probably lacking clarity. Screenshots along the way are a great way to prove a point, especially when you are dealing with something that is reproducible.
As test engineers we surely understand the functionality and, to an extent, the architecture of the software. Hence, we can surely say that some bugs are related to each other and some are not. So we can introduce a column/field in our bug reporting format (whatever it is, a tool or an Excel sheet) for the related bug ID. This will also be helpful for the development community to fix the bugs.
Actually, the development environment should be the same as the testing environment so that this issue does not arise. Make sure that the environment is the same before getting a build. Before testing, briefly go through the internal release note. While reporting a defect, mention the proper test data, steps, etc., so that it can be reproduced next time.
Verify exceptional circumstances, the starting state of each module and how to guarantee the state of each module. Verify the design incorporates enough memory, enough I/O devices and a quick enough runtime for the final product.
How test estimation (in terms of schedule, cost, resources required) will be done during
developing of test plan?
Reading on the topic:
Factors that Influence Test Estimation
What kind of automated software used to test a Web-based application with a .NET
(ASP.NET and C#...also SQL Server) framework?
Answer1:
Mercury makes some decent products. Quick Test Pro can be used for a lot of your
requirements... It can be costly and mind-numbing at times though.
Answer2:
Selenium is a test tool for web applications. Selenium tests run directly in a browser, just as real
users do. And they run in Internet Explorer, Mozilla and Firefox on Windows, Linux, and
Macintosh. No other test tool covers such a wide array of platforms.
* Browser compatibility testing. Test your application to see if it works correctly on different browsers and operating systems. The same script can run on any Selenium platform.
* System functional testing. Create regression tests to verify application functionality and user
acceptance.
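A minimal Selenium sketch in Python of the kind of browser-driven check described above; the URL, element ids, credentials and expected text are placeholders, not details from the original answer:
# Hypothetical browser-driven login check against the ASP.NET application.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()              # or webdriver.Chrome(), etc.
try:
    driver.get("http://qa-server/login.aspx")
    driver.find_element(By.ID, "txtUser").send_keys("testuser")
    driver.find_element(By.ID, "txtPassword").send_keys("secret")
    driver.find_element(By.ID, "btnLogin").click()
    assert "Welcome" in driver.page_source, "login did not reach the welcome page"
finally:
    driver.quit()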
Answer3:
Ruby is becoming a preferred standard for testing
Perl is also used a great deal
Answer1:
In large-scale companies the waterfall method was followed first of all; nowadays most companies follow the V-model.
V-model means that testing involvement starts from the design stage itself and continues till system test.
Phase - Testing
Requirements - review
Design - review
TR - TUT
Then the testing phases start.
Like this, testing makes a perfect V, so we call it the V-model.
To see the flow of activities in the V-model, please look at:
Test Process in the V-model
Answer2:
The waterfall is the general concept behind all the models, and most project-based companies use the V-model. Testing is involved from the requirements phase till the User Acceptance Test.
Answer1:
A test environment can be as simple or as complex as it needs to be, but it *must* be separate from the development environment. In an ideal world, you would have a DEVelopment environment, a TEST environment, an ACCeptance environment and a partitioned PRODuction environment.
The DEV environment no one in QA touches, and the TEST environment no one in development touches. The ACCeptance environment is for acceptance testing by end users and administrators, performance/stress/load testing and so on, and it should mirror the PRODuction environment. The PRODuction environment should be a live/'hot swap' configuration; the release is deployed to 'hot swap', tested by the administrators, and given final acceptance testing before being 'hot swapped' to live.
Answer2:
TEST ENVIRONMENT:
Setup of a test environment will require:
- Hardware
- Operating systems
- Software that needs to be tested
- Other required software like tools (And people who can use them)
- Data configurations
- Interfaces to other systems, communications
- Documentation like user manuals/reference documents/configuration guides/installation guides
What is the exact difference between functional and non functional testing?
Functional testing means we validate the functionality of the application against the functional requirements document; we test the functionality of the application only. Non-functional testing means we do not test the functionality of the application; system testing, load testing, stress testing, performance testing, etc. come under non-functional testing.
By Anuj Magazine
Testing Without Requirements: A Practical Approach
Do they rely on strange configurations: ones you could never hope to reproduce? Is it
reasonable that your testers should have "caught" these defects? If it is, don't make any
excuses.
Alternately, if it's really the requirements, how can the developers make the right product while the testers don't understand what the developers are making? There is communication about what needs to be done, and the developers seem to be getting that communication, so why can't your testers? We know the reason: the developers didn't get the communication right either; that's why there was a defect. So you can point out the communication gap as well.
When there is a requirements document, testers have a tendency to only test the main path, or
they'll only run one test case per requirement, when there clearly should be many tests to catch
all boundaries and failures. Testers do need to be able to think about what they are doing, and it
is very possible that the testers themselves are at fault. Don't be afraid to hold them
accountable for being lazy.
The main cause of the problem is not enough testing time allocated:
NO time for doc reviews;
Little time for test design and creation;
Little time for test execution.
500 Internal Server Error problem while doing load testing using Microsoft Web
Application Stress Tool ....
500 Internal Server Error problem while doing load testing using the Microsoft Web Application Stress Tool: when doing load testing using WAS (Microsoft Web Application Stress Tool), I get a "500 Internal Server Error" for most of the "POST" queries. The log file showed the following data:
"GET /imse/Global/images/Default/arrow.gif 500"
"GET /imse/client/Template/images/Default/arrow.gif 500"
"GET /imse/client/Template/images/Default/Plus1.gif 500"
What could be the reason for this?
The problem is that the response has not come back: the session will have timed out. This is because the application has taken too much memory, possibly due to multiple threads running with each thread taking a lot of CPU time. Please check the server where the build is deployed for a heap dump; the garbage collector will have created Java heap and core dumps in the application folder.
Try increasing the number of DB connections on the server; this might solve the problem. Also increase the final heap size. This might solve the problems.
Answer1:
Your main task is to convince your company of:
- the value of structured testing and the benefits it brings to the end product;
- the risks of not testing properly: high maintenance, lots of bugs found in production (and these generally found by your customers!), loss of market reputation ("another crap product from xyz company").
Another approach might be to consider starting your test processes earlier (I am guessing from your message that you are following some kind of waterfall method); it's a sort of 'design a little, build a little, test a little, design a little ...' approach.
Answer2:
Tell the folks making decisions to read user feedback. No time for testing = angry users who
want their money back or worse angry clients who suddenly hire a team of lawyers.
I warned all the stakeholders early on and then sent user feedback emails up the chain. Users can be brutal and they tell the truth, with comments like "YOU SUCK!!"
It may also convince them to get more support people instead of increasing testing.
Answer3:
The ratios:
3/1 Developers to QA (industry)
3/2 Developers to QA (Microsoft)
There is also a really good article called "A Better Bug Trap" published by The Economist in
2004, which is pretty telling: according to NIST 80% of a software project belongs to testing and
debugging.
There is also the classic book called "Mythical Man Month". There are a couple of pertinent
passages there:
1) Back when the book was written, the percentage quoted by NIST was 50%, which means
that software development has become less efficient over the last 20 years or so.
2) There is a 30% chance that a change in any line of code will break something downstream.
3) There is another article published by McKinsey Quarterly called "What high tech can learn
from slow-growth industries".
Answer1:
The two most common security vulnerabilities that are often overlooked by developers are session and cookie management. Check out Google for possible hacks regarding the two items. Develop test scenarios from the knowledge base that you find on the web.
Another test would be to concentrate on the login page and logout.
In some cases the back button could be a security problem especially if the previous
screen/page has sensitive data and could easily be modified if the back button is used.
Lastly, test the user roles properly, making sure that each specific role only sees what s/he is intended to see.
Answer2:
You can test one more scenario for security:
1. Log in to the application.
2. Then copy the URL.
3. Click the Logout button.
4. Now paste the URL in the browser's address bar, or access the URL of the application from the history, after logging out.
Also do not forget to check the timeout setting for the application:
1. Log in to the app.
2. Leave the browser idle for some time.
3. Then check whether the user session has expired or not.
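A hedged Python sketch of the first scenario; the URLs and form fields are placeholders, and it assumes the application redirects to the login page (or returns 401/403) once the session is invalidated:
# Hypothetical post-logout check: a protected URL captured while logged in
# must no longer be reachable after logging out.
import requests

session = requests.Session()
session.post("http://qa-server/login", data={"user": "testuser", "password": "secret"})

protected_url = "http://qa-server/account/settings"   # copied while logged in
assert session.get(protected_url).status_code == 200

session.get("http://qa-server/logout")                 # click Logout

response = session.get(protected_url, allow_redirects=False)
# Expect a redirect to the login page (or an explicit 401/403), never a 200.
assert response.status_code in (301, 302, 401, 403), "protected page still accessible after logout"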
What's the difference between Alpha, Beta and User Acceptance testing?
The focus in this question is somewhat wrong. You don't do Alpha testing, you do testing
against the Alpha cycle of the software. The Alpha cycle is during the development phase. The
product has many defects and is not suitable for users in a production environment to be using.
Once the Show-Stopper, Critical and most Major defects have been resolved, and once the
majority of planned functionality has been added to the product, a Beta release can occur. It is
best to have someone coordinate the beta testers rather than just throw the software out to the
general public--this way you can keep track of the defects generated by beta users in the field.
User Acceptance testing occurs when you have to deliver your product to a customer based on
contractual obligations. The User Acceptance test is usually written by the customer or an agent
on their part. It is designed to verify, usually only with positive test cases, that the product is as
described in the contract.
Answer1:
Well, steps to reproduce are just that: what are the steps you need to take to reproduce the
stated problem.
The steps to reproduce (STR) must be as clear as possible, preferably with screenshots and/or
test data. The steps should also be definite (so no 'maybe', 'it sometimes works if you do this'
type statements).
In my test projects, I've always tried to keep the STR down to a maximum of 5 steps; this makes sure that the problem is easy and clear to communicate to the developers, to reproduce, and hence to resolve.
Answer2:
Ideally, once you identify a bug - you would need to determine the least number of steps
required to reproduce the bug. This would help your developer to reproduce the bug easily on
his development environment.
If you don't have requirements specification, how will you go about testing the
application?
Answer1:
If there is no requirements specification and testing is required, then smoke testing or gorilla testing is the best option; in this way we can understand the functionality of the application and find its bugs.
Answer2:
As a rule of thumb, never test or sign off on undocumented applications (applications without complete functional specifications). It is quite similar to swimming in unknown waters: you never know what you could encounter. In the case of software testing, it's not what you will encounter, but what you will not encounter. There is a very high possibility that you could completely miss some functionality or, even worse, misunderstand it.
Software testing is closely associated with the program management team or the requirements analysis team rather than the development team. When you test an application without knowledge of the requirements, you only see what the developer wants you to see and not what the customer wants to see. And customers / end users are our prime audience.
In the case of missing requirements, you would try something called 'focused exploratory testing': identifying every piece of the application and its functionality, and gradually digging deeper.
Smoke testing or gorilla testing (monkey testing) is a different type of testing, and its purpose is very different.
Smoke Testing or Sanity Testing is used, only to certify builds and is no measure for quality. It
only ensures that there are no blocking issues in the build and ensures that the same can
undergo a test pass.
Gorilla testing or monkey testing (the gorilla being the smarter of the monkey kind) is all about ad hoc testing. You would probably try hitting the 'ENTER' key 100 times, or try a 'SUBMIT' followed by 'CANCEL' followed by 'SUBMIT' again.
The idea of 'Exploratory Testing' is to identify the functionality of the application along with
Testing the same.
Is there any common testing framework or testing best practices for distributed system?
For example, for a distributed database management system?
A distributed database management system based on MySQL. It has three components:
1. A JDBC driver providing services for users' applications, including distributed transaction management, load balancing, query processing, table id management, etc.
2. A master process, which manages global distributed transaction ids, load balancing, load balancing strategy, etc.
3. An agent running on the same box as MySQL, which gets the MySQL server's balance statistics.
AN OPERATIONAL ENVIRONMENT FOR TESTING DISTRIBUTED SOFTWARE
Distributed applications have traditionally been designed as systems whose data and
processing capabilities reside on multiple platforms, each performing an assigned function
within a known and controlled framework contained in the enterprise. Even if the testing tools
were capable of debugging all types of software components, most do not provide a single
monitoring view that can span multiple platforms. Therefore, developers must jump between
several testing/monitoring sessions across the distributed platforms and interpret the cross–
platform gap as best they can. That is, of course, assuming that comparable monitoring tools
exist for all the required platforms in the first place. This is particularly difficult when one server
platform is the mainframe as generally the more sophisticated mainframe testing tools do not
have comparable PC– or Unix–based counterparts. Therefore, testing distributed applications is
exponentially more difficult than testing standalone applications.
To overcome this problem, we present an operational environment for testing distributed
applications based on the Java Development Kit (JDK) as shown in Figure 1, allowing testers to
track the flow of messages and data across and within the disparate platforms. The primary goal
of this operational environment is an attempt to provide a coherent, seamless environment that
can serve as a single platform for testing distributed applications. The hardware platform of the
testbed at the lowest level in Figure 1, is a network of SUN workstations running the Solaris 2.x
operating system which often plays a part in distributed and client–server system. The
widespread use of PCs has also prompted an ongoing effort to port the environment to the
PC/Windows platform. On the top of the hardware platform is Java Development Kit. It consists
of the Java programming language core functionality, the Java Application Programming
Interface (API) with multiple package sets and the essential tools such as Remote Method
Invocations (RMI), Java DataBase Connectivity (JDBC) and Beans for creating Java
applications. On top of this platform is the SITE which secures automated support for the testing
process, including modeling, specification, statistical analysis, test data generation, test results
inspection and test path tracing. At the top of this environment are the distributed applications.
These can use or bypass any of the facilities and services in this operational environment. This
environment receives commands from the users (testers) and produces the test reports back.
What is the best way to simulate the real behavior of a web based system?
It may seem obvious, but the best way to simulate real behavior of a web based system is to
simulate user actual behavior, and the way to do this is from an actual browser with test
functionality built inside.
The key to achieving the kind of test accuracy that eValid provides is to understand that it is the eValid browser that is doing the actual navigating and processing, and it is the eValid browser that is taking the actual performance timing measurements.
eValid employs IE-equivalent multi-threaded HTTP/S processing and uses IE-equivalent page
rendering. While there is some overhead with injecting actions into the browser, it is very, very
low. eValid's timers resolve to 1.0 msec and this precision is usually enough to produce very
meaningful performance testing results.
No special server setup is needed in order to use LoadRunner for an e-commerce website. From the server's point of view, it is just as if many real users were stressing your site.
Answer1:
1) List down use cases (taken from business cases) from the functional specs. For each use case, write a test case and categorize them into sanity tests, functionality, GUI, performance etc. Then for each test case, write its workflow.
2) For a GUI application - make a list of all GUI controls. For each control start writing test cases
for testing of the control UI, functionality (impact on the whole application), negative testing (for
incorrect inputs), performance etc.
Answer2:
1. Generate Sunny day scenarios based on use cases and/or requirements.
2. Generate Rainy Day (negative, boundary, etc.) tests that correspond to the previously defined
Sunny Day scenarios.
3. Based on past experience and a knowledge of the product, generate tests for anything that
might have been missed in steps one and two above. These tests need not correspond to any
documented requirements or use cases. It's generally not possible to test every facet of the
design, but with a little work and forethought you can test the high risk areas or high impact
features.
Answer1:
There are tools to check this. Compuware DevPartner can help you test your application for memory leaks if the application is complex. The choice of tool also depends on the OS on which you need to check for memory leaks.
Answer2:
Tools are more effective at this: they watch to see when memory is allocated and not freed. You can use various tools manually to see if the same happens; you just won't be able to find the exact points where it happens.
On Windows you would use Task Manager or Process Explorer (freeware from Sysinternals), switch to the process view and watch the memory used. Record the baseline memory usage (BL). Run an action once and record the memory usage (BLU). Perform the same actions repeatedly, and then if the memory usage has not returned to at least BLU, you have a memory leak. The trick is to wait for the computer to clean up after the transactions have finished; this should take a few seconds.
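A rough Python sketch of the same baseline/repeat approach using psutil; the process id, the action, and the 10% tolerance are placeholders, not prescribed values:
# Illustrative memory-leak probe: record a baseline after one action, repeat the
# action many times, and flag growth beyond the baseline.
import time
import psutil

def rss_mb(pid):
    return psutil.Process(pid).memory_info().rss / (1024 * 1024)

def perform_action():
    pass   # placeholder: drive the application under test once

pid = 1234                       # PID of the application under test
baseline = rss_mb(pid)           # BL
perform_action()
time.sleep(5)                    # let cleanup finish
after_one = rss_mb(pid)          # BLU

for _ in range(100):
    perform_action()
time.sleep(5)
after_many = rss_mb(pid)

print("BL:", baseline, "MB, BLU:", after_one, "MB, after 100 runs:", after_many, "MB")
if after_many > after_one * 1.1:   # arbitrary 10% tolerance
    print("possible memory leak")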
How can I be effective and efficient, when I'm testing e-commerce web sites?
When you're doing black box testing of an e-commerce web site, you're most efficient and effective when you're testing the site's visual appeal, content, and home page. When you want to be effective and efficient:
- verify that the site is well planned and customer-friendly;
- verify that the choices of colors and fonts are attractive;
- verify that the site's audio and video are customer-friendly and attractive;
- verify that the choice of graphics is attractive;
- verify that every page of the site is displayed properly on all the popular browsers;
- verify the authenticity of facts, and ensure the site provides reliable and consistent information;
- test the site for appearance, grammatical and spelling errors, and visual appeal;
- test the choice of browsers, consistency of font size, download time, broken links, missing links, incorrect links, and browser compatibility;
- test each toolbar, each menu item, every window, every field prompt, every pop-up text, and every error message;
- test every page of the site for left and right justification, every shortcut key, each control, each push button, every radio button, and each item on every drop-down menu;
- test each list box and each help menu item;
- also check whether the command buttons are grayed out when they're not in use.
Test Specifications
The test case specifications should be developed from the test plan and are the second phase
of the test development life cycle. The test specification should explain "how" to implement the
test cases described in the test plan.
Test Specification Items
Each test specification should contain the following items:
Case No.: The test case number should be a three-digit identifier of the form c.s.t, where: c is the chapter number, s is the section number, and t is the test case number.
Title: is the title of the test.
ProgName: is the program name containing the test.
Author: is the person who wrote the test specification.
Date: is the date of the last revision to the test case.
Background: (Objectives, Assumptions, References, Success Criteria): Describes in words how
to conduct the test.
Expected Error(s): Describes any errors expected
Reference(s): Lists reference documentation used to design the specification.
Data: (Tx Data, Predicted Rx Data): Describes the data flows between the Implementation
Under Test (IUT) and the test engine.
Script: (Pseudo Code for Coding Tests): Pseudo code (or real code) used to conduct the test.
Example Test Specification
Test Specification
Case No. 7.6.3 Title: Invalid Sequence Number (TC)
ProgName: UTEP221 Author: B.C.G. Date: 07/06/2000
Background: (Objectives, Assumptions, References, Success Criteria)
Validate that the IUT will reject a normal flow PIU with a transmission header that has an invalid sequence number.
Expected Sense Code: $2001, Sequence Number Error
Reference - SNA Format and Protocols Appendix G/p. 380
Data: (Tx Data, Predicted Rx Data)
IUT
<-------- DATA FIS, OIC, DR1 SNF=20
<-------- DATA LIS, SNF=20
--------> -RSP $2001