Manual Testing Questions and Answers

The document outlines a series of questions and answers related to software requirements specifications (SRS), functional requirements specifications (FRS), testing processes, and review methodologies. It covers the roles involved in requirements gathering, the importance of SRS, and the testing strategies employed during software development. Additionally, it discusses the release process, entry and exit criteria for testing, and the documentation of test cases and scenarios.



1. How will you receive the project requirements?

A. The finalized SRS will be placed in a project repository; we will access it from there

2. What will you do with SRS?

A. SRS stands for software requirement specification. The SRS is used to understand the project
functionality from a business and functional point of view.

3. What is FRS? How is it different from SRS?

A. The SRS describes what the client expects from the system. For example, in the case of Gmail, the SRS contains details like: the first page should be the login page, and the user should be authenticated before accessing the mailbox. The FRS describes how those requirements will be developed. In the FRS, the functionality in the SRS is written down in more technical terms. For example, in the case of Gmail, the FRS contains details like which fields should be present on the login page and what the valid inputs are. This means the FRS has screen-level details of the application.

Note: In many projects the SRS itself is written at the screen level of detail.

4. Is the testing team involved in SRS preparation?

A. The business analyst prepares the SRS document by interacting with the client. However, a senior testing team member can also be involved in requirements collection along with the development team and the business analyst team.

5. What does your requirements document look like?

A. It contains many use cases, where each use case explains one or more functionalities.

6. How will you understand the requirements?

A. If it is a known domain, I can understand the requirements by going through the use cases. If I have queries, I discuss them with the business analyst (BA) for clarification. If it is a new domain, I first get domain training and then go through the use cases. If the project requirements are very confusing, the BA can also walk us through each use case.

7. How do you understand functionality without screens?

A. We get wireframes in the use cases, which help a lot in understanding the functionality.
8. What is wireframe?

A. A diagram which simulates the look and feel of the actual screen.

9. What is a use case?

A. A use case explains the step-by-step procedure of how a particular functionality of the software is used by the end user. A use case contains sections such as:
> Use case ID
> Use case name
> Description
> Flow of events
> Alternative flow of events
> Pre- and post-conditions

10. Were you involved in writing the use cases?

A. I am aware of what use cases look like and I can write them if required, but I have never had an opportunity to write use cases because they are prepared by the requirements gathering team. However, I have reviewed the use cases of certain functionalities and given my inputs for their improvement.

11. What are the different sections present in SRS?

A. Overview
Scope
Features
User characteristics
Software requirements
Hardware requirements
Performance requirements
Use cases
Security and reliability requirements

12. How long do you spend on understanding the SRS?


A. It depends on familiarity with the domain and the complexity of the project. If it is a familiar domain, we can understand around 25 pages of documentation per day. For a new, complex domain, we manage around 15 pages per day.

13. After understanding the SRS what do you do?


A. My lead asks me to present the functionalities I am assigned. If I am in a position to explain the functionalities clearly to the team, then I am considered comfortable with them.
14. Should you understand the whole project functionality or only the functionality assigned to
you?
A. I should have the big picture of the whole project. In other words, I should have an overview of the whole project and a detailed screen- and field-level understanding of the assigned functionalities.

15. What are the different models generally followed in documenting requirements?
A. Two models are followed in documenting requirements: the use case model and the paragraph model. In the paragraph model, business requirements are written as paragraphs; this is the older model. Nowadays almost all companies follow the use case model, where the requirements are written with clear objectives and explained with the help of screenshots.

16. How big is your SRS?


A. You can answer anything reasonable, like approximately 250 pages. This question is asked just to cross-check whether you have actually seen an SRS.

17. What will be the problem without SRS?


A. Without the SRS we would not be able to understand the project features correctly. Hence we would not be able to test the project in depth and deliver the best quality product.

18. What is BRS?


A. BRS is the business requirement specification, which is usually prepared before the SRS. This document gives a high-level view of what the customer requires to meet business needs.

19. What is technical requirements specification?


A. This is also called the high-level design; it describes the different modules present in the project.

20. What is user story?


A. A user story is the method of documenting requirements in the agile model.

21. What is Review?


A. A review is a meeting in which a work product is verified by a set of members (stakeholders).

22. Explain the review process you follow in your organization?


A. The various phases of the review process followed in my organization are:
Planning:
> Selecting the personnel for the review
> Allocating roles
> Defining entry and exit criteria
Kick-off:
> Distributing documents
> Explaining the objectives
> Checking entry criteria, etc.
Individual preparation:
> In this phase, each participant works through the work product before the review meeting and comes ready with questions and comments.
Review meeting:
> Discussion among the review members by going through each line of the work product
> Logging comments
> Making decisions about the defects
Rework:
> Fixing defects found during the review, typically done by the author
Follow-up:
> Checking that the defects have been addressed
> Gathering metrics and checking the exit criteria

23. What are the roles present in the review?


A. Manager:
> Decides on the execution of reviews
> Allocates time in project schedules
> Determines whether the review objectives have been met
Moderator:
> Leads the review, including planning and running the meeting
> Follows up after the meeting
Author:
The author is the person who has created the item to be reviewed. The author may also be asked questions within the review.
Reviewer:
The reviewers are the attendees of the review who attempt to find errors in the item under review. They should come from different perspectives in order to provide a well-balanced review of the item.
Scribe:
The scribe or recorder is the person who is responsible for documenting issues raised during the review meeting.

24. What is peer review?


A. A peer review is a review of a software work product by colleagues.

25. What is the difference between static and dynamic testing?


A. Static testing means testing the project without executing the software; dynamic testing means testing the project by executing the software, i.e. by running the application and going through the screens. To conduct dynamic testing, you use the application screens, enter valid and invalid inputs, and verify the application's behavior. For static testing, we do not use any screens of the application; instead we use static techniques such as reviews. During a review, experts go through each line of work products such as the requirements document and the design document and identify mistakes in them. Any mistakes identified during this review are defects in the work product.

26. I want you to choose one among static and dynamic testing for your project. Which one will
you choose and why?

A. Static testing reduces the cost of fixing defects, and dynamic testing gives complete confidence to release the product. In my view both are equally important and contribute equally to project success, so I prefer to have both. However, if I had to choose one, I would choose dynamic testing, since I cannot let the project be released until I have seen with my own eyes that it is working.

27. Out of formal and informal review, which one do you prefer?
A. In my view, both are important: informal review is fast and formal review is effective. We have to use both depending on what we are reviewing. I apply formal and informal review techniques as follows.
Formal review:
> Reviewing the test case document
> Reviewing the test plan document
> Reviewing the test scripts developed
Informal review:
> Reviewing tests used for retesting
> Reviewing minor changes in test cases, test plans or test scripts

28. How do you decide the review outcome?


A. The review outcome is decided by the moderator; I can share my views with him. For example, in a test case review, the outcome is decided as follows:

Review observation ........................... Review outcome
Most of the critical test cases are missed ... Reject
Documentation standards are poor ............. Reject
Major changes are suggested .................. Accept after correction, with another round of review
Minor changes are suggested .................. Accept after correction, without another round of review
No changes suggested ......................... Accept as is

29. Explain what do you document during the review process?


A. We document the page and line number of each defect, the origin of the defect, and the severity of the defect. We also document other information like the work product ID, reviewers, etc.

30. How much information you can review in one day?


A. Per hour we review around 20 pages of documentation, or around 200 lines of code.

31. How do you say review was successful?


A. If every reviewer prepares well before the review and provides good comments for improving the work product, we can say that the review was successful.

32. What is code review?


A. Code review is the process of reviewing written code. Code reviews are conducted both for the code developed by developers and for the automation scripts developed by automation engineers.

33. What is desk check?


A. This is an informal review where a colleague comes to the desk/computer of the author, quickly goes through the work product along with the author, and shares comments while going through it.

34. What are the entry criteria for release?


A. > System testing results must show that all requirements are completed and the project is stable.
> Alpha and beta testing must be completed.
> All medium- and higher-severity bugs must be fixed.
> The release package is available.
> The release CD label is ready.

35. What is the release process you follow?


A. In our organization, the release process is coordinated by a person called the release manager. After successful beta testing, the release manager sends an email to all stakeholders (development manager, test manager, documentation manager) asking for their approval for the final release. The test manager forwards the same mail to the team members requesting their internal approval. Based on the internal approvals, the test manager sends his approval to the release manager.

36. What is your involvement in the release process?


A. As a testing team member, I go through the defect tracking tool and check whether all the defects are fixed. If any defects are not fixed, I communicate this to my test lead and test manager, sharing my opinion on whether each bug must be fixed before release or can be fixed after release. The test manager takes the final decision on whether to fix it or not after discussion with the development manager.
I am also involved in preparing the release notes, where I document known issues in my module along with the issues resolved since the previous release.
Exit criteria for release are:
> All stakeholders have approved the release.
> The new package is deployed in production and users are happy with the release.
> The code has been baselined in configuration management.

37. What is a code Freeze?


A. Code freeze means the code has been locked against further modification by developers. After the code freeze, the code should not be changed by any developer; if changes are required at all, they should be made only for very critical bugs, after taking permission from the top management of the project. Code freezes are often employed in the final stages of development.

38. What are the entry and exit criteria for test execution?
Entry criteria:
> Coding should be completed.
> Test cases should be ready and baselined.
> The RTM should be updated.
> Test data should be ready and baselined.
> The test environment/setup should be ready.
> Software tools should be ready and approved.
Exit criteria:
> All test cases must be executed and passed.
> All defects identified must be fixed, retested and closed.
> A test execution summary report must be prepared.

39. Explain different test execution strategies?


A. There are three test execution strategies:
pass 1, pass 2 and pass 3.
Pass 1 test execution strategy:
In this model there is a single execution cycle. Within that one cycle, testers log defects and retest the fixes. This is useful for small, stable projects.
Pass 2:
The development team releases a new build claiming all the defects are fixed. The testing team retests all the defects along with ad hoc regression. If new defects are found, the development team releases a new build, and this cycle is repeated until no new defects are found.
Pass 3:
The testing team runs the full regression suite, and this phase completes only when the full regression is done. This is a good model for large, complex and critical projects. If a large number of defects are found under the pass 2 strategy, one may have to move to the pass 3 strategy.

40. How do you know you have a build ready for testing?
A. The frequency of build creation varies from project to project; however, the following is a guideline for answering this question. In our project, automatic build creation and deployment happens every X days. We receive a confirmation mail every morning about the successful deployment, along with the URL for testing. Please refer to our build process FAQs for exactly how build deployment and release work.
41. How many test cases can you execute per day?
A. It depends on the size and complexity of the test cases. I execute approximately 50 test cases per day, which comes to roughly 40 pages.

42. How do you run the test cases?


A. I perform each step of the test case on the application and compare the application's behavior with the expected result of the step. If the behavior matches the expected result, the step is passed; otherwise the step is failed.
Note: if the password field shows * or some other special character while the password is being entered, that is password masking, NOT password encryption. Encryption means converting the user-entered characters into different characters before sending them over the network. This can be checked with the help of network sniffers; example software is Wireshark. Such sniffers capture every data packet travelling over the network, including the IP addresses of the source and destination computers. By analyzing these packets we can identify whether the password string is encrypted or not.
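
A minimal sketch of that packet check in Python, assuming a plain-HTTP test endpoint, the scapy package, sufficient privileges to sniff, and a hypothetical test credential (Wireshark shows the same thing interactively):

from scapy.all import sniff, Raw

PASSWORD = "S3cret!"  # hypothetical test credential entered in the login form

def password_in_clear(packet):
    # True if the raw TCP payload contains the password as plain text.
    return packet.haslayer(Raw) and PASSWORD.encode() in bytes(packet[Raw].load)

# Capture packets while the login request is being submitted.
packets = sniff(filter="tcp port 80", count=50, timeout=30)
leaked = any(password_in_clear(p) for p in packets)
print("Password sent in clear text!" if leaked else "Password not seen in clear text.")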

43. How do you check broken links?


A. Many tools are available for this; we use tools like Xenu.
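
A minimal sketch of what such a tool automates, assuming the requests and beautifulsoup4 packages and a hypothetical page URL:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def find_broken_links(page_url):
    # Collect every link on the page, resolved to an absolute URL.
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    links = {urljoin(page_url, a["href"]) for a in soup.find_all("a", href=True)}
    broken = []
    for link in sorted(links):
        try:
            status = requests.head(link, allow_redirects=True, timeout=10).status_code
        except requests.RequestException:
            status = None  # unreachable counts as broken
        if status is None or status >= 400:
            broken.append((link, status))
    return broken

print(find_broken_links("https://example.com"))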

44. What is a test log?


A. It is a report of which tests have been executed and their status (pass/fail). It is also known as the test execution report.

45. Did you observe any application logs during the test execution?
A. Yes. We observe the logs of the application server to check whether the server has thrown any runtime errors.

46. Do you run all regression tests for every bug fixed?
A. No, I don't run regression test cases for every bug fixed. I run regression tests once for every build.

47. Do you run all regression tests every time?


A. It depends. If we are sure that the fix does not affect other modules, we run regression tests specific to the module of the fixed bugs; otherwise we run them for the entire project.

48. In the modules you have worked on, are there any issues identified after release?
A. If I am supposed to write stubs or drivers in the current project, I am confident that I can handle it.

49. When you fill the data in the application form, how do you ensure that the data is stored in
the correct tables and columns?
A. We can write an SQL query to retrieve the data from the database and compare the query result with the data we filled in on the application forms.
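
A minimal sketch of this back-end check, using sqlite3 from the standard library with a hypothetical users table and database path:

import sqlite3

# Data entered through the application form (hypothetical values).
form_data = {"username": "jdoe", "email": "jdoe@example.com"}

conn = sqlite3.connect("app.db")  # assumed path to the application database
row = conn.execute(
    "SELECT username, email FROM users WHERE username = ?",
    (form_data["username"],),
).fetchone()
conn.close()

assert row is not None, "Record was not stored at all"
assert row == (form_data["username"], form_data["email"]), f"Stored {row}"
print("Data stored in the correct table and columns.")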

50. What is a test case?


A. A test case is a set of inputs, conditions and expected outcomes by which a tester determines whether an application is working correctly or not.

51. What fields a test case will have?


A. The following are the fields that a test case will usually have: test case ID, description, precondition, step name, expected result, actual result and status.
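
A minimal sketch of those fields as a record, the way a test management tool or a spreadsheet row might hold them (the example values are hypothetical):

from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_case_id: str
    description: str
    precondition: str
    steps: list = field(default_factory=list)  # (step name, expected result) pairs
    actual_result: str = ""
    status: str = "No Run"                     # Pass / Fail / Blocked / No Run

tc = TestCase(
    "TC-01", "Valid login", "User account exists",
    [("Enter valid credentials and submit", "Home page is displayed")],
)
print(tc.test_case_id, tc.status)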

52. Where do you write test cases?


A. Depending on the project, we write test cases in Excel or in QC (Quality Center).

53. How do you know for which functionalities you should write test cases?
A. My lead writes the top-level requirements in QC and assigns them to each team member. We divide the test requirements further into sub-requirements. Then we identify test conditions for each sub-requirement and create test cases. The test cases are reviewed after that; reviewed and approved test cases move to the Ready state.

54. What is a test scenario?


A. A test scenario is a functional scenario for which testing is to be conducted. It is also called a test condition.

55. What is the difference between test scenario and test case?
A. A test scenario is a high-level description of a business requirement, which is later decomposed into a set of test cases. These test cases are reviewed and approved by peers; we follow a formal review process for approving the test cases written for each functionality.

56. How do you know your test cases are completed?


A. We follow a two-step approach to ensure that the test cases are complete:
a. Reviews ensure that the quality of the test cases is good.
b. The requirement traceability matrix ensures that all requirements have been covered by test cases (a small coverage-check sketch follows).
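
A minimal sketch of the traceability check the RTM supports, with hypothetical requirement and test case IDs:

# Requirement -> test cases covering it
rtm = {
    "REQ-001 Login": ["TC-01", "TC-02"],
    "REQ-002 Password reset": ["TC-03"],
    "REQ-003 Logout": [],
}
uncovered = [req for req, cases in rtm.items() if not cases]
print("Uncovered requirements:", uncovered or "none")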

57. How do you find whether a test case is a good test case or bad test case?
A. A good test case is one which finds a bug, or which has a high probability of finding a bug. A good test case should be documented clearly, so that it can be executed by anyone without difficulty or confusion.

58. What is the percentage of positive and negative test cases that you write?
A. Approximately 30% positive and 70% negative.

59. Do you update the test cases after receiving build based on the application screen?
A. During execution, if we feel any test case requires an update, we do it with the approval of the team lead, but such work is very limited.

60. Explain one scenario where you were not able to write test cases for a given requirement?
A. This happened in a new domain; I handled it by putting in extra effort for a thorough understanding of the domain.

61. What is the difference between a positive and negative test case?
A. A positive test case checks whether the system does what it is supposed to do, i.e. checks that we get the desired result with a valid set of inputs.
Ex: the user should be able to log in to the system with a valid user name and password.
Negative test case: a negative test case checks whether the system avoids doing what it is not supposed to do, i.e. checks that the system generates the correct error or warning messages for an invalid set of inputs.
Ex: if the user enters a wrong user name or password, the user should not be logged in to the system, and an appropriate error message should be shown.
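
A minimal sketch of this pair of test cases, assuming pytest and a hypothetical login() function standing in for the application under test:

import pytest

VALID = {"admin": "P@ssw0rd"}  # hypothetical credential store

def login(username, password):
    # Stand-in for the application's login behavior.
    if VALID.get(username) == password:
        return "Welcome"
    raise ValueError("Invalid username or password")

def test_login_positive():
    # Valid inputs must produce the desired result.
    assert login("admin", "P@ssw0rd") == "Welcome"

def test_login_negative():
    # Invalid inputs must produce the correct error message.
    with pytest.raises(ValueError, match="Invalid username or password"):
        login("admin", "wrong-password")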

62. What are the documents required for test analysis?


A. 1. SRS/FRS
2. Use case
3. Architecture document

63. What is an entry criterion for test closure?


A. The decision to stop testing.

64. Who takes this decision?


A. The Test Manager

65. What parameters does the test manager consider when taking the decision to stop testing?
A. The important parameters a test manager looks into are:
> Whether all requirements have been developed.
> Whether all requirements have been covered through testing.
> Whether all defects have been handled, ending in Fixed or Deferred status.

66. What are the exit criteria for test closure?


A. > Checking whether planned deliverables have been delivered.
> Finalizing and archiving testware.
> Handing over testware to maintenance.
> Analyzing lessons learned to improve test maturity.
> Testing sign-off.

67. What is testware?


A. Testware comprises the artifacts produced during the testing process. Testware includes test cases, the test plan, automation scripts, test data, test environment set-up and clean-up procedures, and any additional software or utilities used in testing.
68. What is a lessons learnt document?
A. It is a document prepared at test closure that records what went well and what went wrong during the project, so that future projects can improve. It draws on the metrics tracked during the project, such as:
> Number of test cases/scenarios blocked
> Number of defects verified and their respective status
> Weekly status reporting:
> Test case summary
> Issues found
> Issues resolved
> Critical issues which are still open and require immediate attention from the client side
> The report should also contain a high-level plan for the next week.

69. What statuses can you give to a test case?


A. The statuses are Pass, Fail, Blocked and No Run.

70. What is a web server log?


A. Every time a web page is requested, the web server automatically logs the following information (a parsing sketch follows the list):
> The IP address of the visitor
> The date and time of the request
> The URL of the requested file
> The URL the visitor came from immediately before (the referrer)
> The visitor's web browser type and OS
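
A minimal sketch of pulling those fields out of a standard "combined" log line with a regular expression (the sample line is hypothetical):

import re

LINE = ('203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] '
        '"GET /index.html HTTP/1.1" 200 2326 '
        '"https://example.com/home" "Mozilla/5.0"')

PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'\d+ \S+ "(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

match = PATTERN.match(LINE)
if match:
    for name in ("ip", "time", "request", "referer", "agent"):
        print(f"{name:>8}: {match.group(name)}")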

1. What is Acceptance Testing?


Testing conducted to enable a user/customer to determine whether to accept a software
product. Normally performed to validate the software meets a set of agreed acceptance criteria.

2. What is Accessibility Testing?


Verifying a product is accessible to people with disabilities (deaf, blind, mentally disabled, etc.).

3. What is Ad Hoc Testing?


A testing phase where the tester tries to 'break' the system by randomly trying the system's
functionality. Can include negative testing as well. See also Monkey Testing.

4. What is Agile Testing?


Testing practice for projects using agile methodologies, treating development as the customer of
testing and emphasizing a test-first design paradigm. See also Test Driven Development.

5. What is the Application Binary Interface (ABI)?


A specification defining requirements for portability of applications in binary forms across
different system platforms and environments.

6. What is the Application Programming Interface (API)?


A formalized set of software calls and routines that can be referenced by an application program
in order to access supporting system or network services.

7. What is Automated Software Quality (ASQ)?


The use of software tools, such as automated testing tools, to improve software quality.

8. What is Automated Testing?


Testing employing software tools which execute tests without manual intervention. Can be
applied in GUI, performance, API, etc. testing. The use of software to control the execution of
tests, the comparison of actual outcomes to predicted outcomes, the setting up of test
preconditions, and other test control and test reporting functions.

9. What is Backus-Naur Form?


A meta language used to formally describe the syntax of a language.

10. What is the Basic Block?


A sequence of one or more consecutive, executable statements containing no branches.

11. What is the Basis Path Testing?


A white box test case design technique that uses the algorithmic flow of the program to design
tests.

12. What is the Basis Set?


The set of tests derived using basis path testing.

13. What is the Baseline?


The point at which some deliverable produced during the software engineering process is put
under formal change control.

15. What is Beta Testing?


Testing of a pre-release version of a software product, conducted by customers.

16. What is Binary Portability Testing?


Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

17. What is Black Box Testing?


Testing based on an analysis of the specification of a piece of software without reference to its
internal workings. The goal is to test how well the component conforms to the published
requirements for the component.

18. What is a Bottom-Up Testing?


An approach to integration testing where the lowest level components are tested first then used
to facilitate the testing of higher-level components. The process is repeated until the component
at the top of the hierarchy is tested.

19. What is Boundary Testing?


Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)

20. What is Bug?


A fault in a program, which causes the program to perform in an unintended or unanticipated
manner.

20. What is Defect?


If software misses some feature or function from what is there in requirement it is called a
defect.

21. What is Boundary Value Analysis?


BVA is similar to Equivalence Partitioning but focuses on "corner cases", i.e. values at and just beyond the limits defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.
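
A minimal sketch of boundary value analysis for that very range; in_range() is a hypothetical stand-in for the function under test:

# Specification: valid values run from -100 to +1000 inclusive.
LOW, HIGH = -100, 1000

def in_range(value):
    # Hypothetical function under test.
    return LOW <= value <= HIGH

# Test values on and just beyond each boundary.
boundary_inputs = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]
for value in boundary_inputs:
    expected = LOW <= value <= HIGH  # oracle derived from the specification
    assert in_range(value) == expected
    print(f"{value:>6}: {'accepted' if expected else 'rejected'}")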

22. What is Branch Testing?


Testing in which all branches in the program source code are tested at least once.

23. What is Breadth Testing?


A test suite that exercises the full functionality of a product but does not test features in detail.

24. What is CAST?


Computer-Aided Software Testing.

25. What is Capture/Replay Tool?


A test tool that records test input as it is sent to the software under test. The input cases stored
can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.

26. What is CMM?


The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the
maturity of the software processes of an organization and for identifying the key practices that
are required to increase the maturity of these processes.

27. What is Cause Effect Graph?


A graphical representation of inputs and their associated output effects, which can be used to design test cases.
28. What is Code Complete?
The phase of development where functionality is implemented in its entirety; bug fixes are all that is left. All functions found in the Functional Specification have been implemented.

29. What is Code Coverage?


An analysis method that determines which parts of the software have been executed (covered)
by the test case suite and which parts have not been executed and therefore may require
additional attention.

30. What is Code Inspection?


A formal testing technique where the programmer reviews source code with a group who ask
questions analyzing the program logic, analyzing the code with respect to a checklist of
historically common programming errors and analyzing its compliance with coding standards.

31. What is Code Walkthrough?


A formal testing technique where source code is traced by a group with a small set of test
cases, while the state of program variables is manually monitored, to analyze the programmer's
logic and assumptions.

32. What is Coding?


The generation of source code.

33. What is Compatibility Testing?


Testing whether the software is compatible with other elements of a system with which it should
operate, e.g. browsers, Operating Systems, or hardware.

34. What is a Component?


A minimal software item for which a separate specification is available.

35. What is Component Testing?


Testing of individual software components (Unit Testing).

36. What is Concurrency Testing?


Multi-user testing geared towards determining the effects of accessing the same application
code, module or database records. Identifies and measures the level of locking, deadlocking
and use of single-threaded code and locking semaphores.

37. What is the Conformance Testing?


The process of testing that an implementation conforms to the specification on which it is based.
Usually applied to test conformance to a formal standard.

38. What is Context Driven Testing?


The context-driven school of software testing is a flavor of Agile Testing that advocates
continuous and creative evaluation of testing opportunities in light of the potential information
revealed and the value of that information to the organization right now.

39. What is Conversion Testing?


Testing of programs or procedures used to convert data from existing systems for use in
replacement systems.

40. What is Cyclomatic Complexity?


A measure of the logical complexity of an algorithm, used in white-box testing.

41. What is Data Dictionary?


A database that contains definitions of all data items defined during analysis.

42. What is Data Flow Diagram?


A modeling notation that represents a functional decomposition of a system.

43. What is Data Driven Testing?


Testing in which the action of a test case is parameterized by externally defined data values,
maintained as a file or spreadsheet. A common technique in Automated Testing.
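
A minimal sketch of the technique: one scripted action parameterized by externally maintained rows (a hypothetical credentials.csv with columns username, password, expected; login() stands in for the driver logic):

import csv

def login(username, password):
    # Hypothetical stand-in for driving the application's login screen.
    return (username, password) == ("admin", "P@ssw0rd")

with open("credentials.csv", newline="") as f:
    for row in csv.DictReader(f):
        expected = row["expected"] == "pass"
        actual = login(row["username"], row["password"])
        print(f"{row['username']}: {'PASS' if actual == expected else 'FAIL'}")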

44. What is Debugging?


The process of finding and removing the causes of software failures.

45. What is Defect?


Nonconformance to requirements or to the functional/program specification.

46. What is Dependency Testing?


Examines an application's requirements for pre-existing software, initial states and configuration
in order to maintain proper functionality.

47. What is Depth Testing?


A test that exercises a feature of a product in full detail.

48. What is Dynamic Testing?


Testing software through executing it. See also Static Testing.

49. What is Emulator?


A device, computer program, or system that accepts the same inputs and produces the same
outputs as a given system.

50. What is Endurance Testing?


Checks for memory leaks or other problems that may occur with prolonged execution.
51. What is End-to-End testing?
Testing a complete application environment in a situation that mimics real-world use, such as
interacting with a database, using network communications, or interacting with other hardware,
applications, or systems if appropriate.

52. What is the Equivalence Class?


A portion of a component's input or output domains for which the component's behavior is
assumed to be the same from the component's specification.

53. What is Equivalence Partitioning?


A test case design technique for a component in which test cases are designed to execute
representatives from equivalence classes.
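
A minimal sketch for a hypothetical age field that accepts 18 to 60: one representative value is tested per partition instead of every possible value:

def accepts_age(age):
    # Hypothetical function under test.
    return 18 <= age <= 60

# One representative per equivalence class, with the expected outcome.
partitions = {
    "below range (invalid)": (5, False),
    "within range (valid)": (35, True),
    "above range (invalid)": (75, False),
}
for name, (representative, expected) in partitions.items():
    assert accepts_age(representative) == expected
    print(f"{name}: representative {representative} behaves as expected")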

54. What is Exhaustive Testing?


Testing which covers all combinations of input values and preconditions for an element of the
software under test.

55. What is Functional Decomposition?


A technique used during planning, analysis and design; creates a functional hierarchy for the
software.

54. What is Functional Specification?


A document that describes in detail the characteristics of the product with regard to its intended
features.

55. What is Functional Testing?


Testing the features and operational behavior of a product to ensure they correspond to its specifications. Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions. Also known as Black Box Testing.

56. What is Glass Box Testing?


A synonym for White Box Testing.

57. What is Gorilla Testing?


Testing one particular module or functionality heavily.

58. What is Gray Box Testing?


A combination of Black Box and White Box testing methodologies testing a piece of software
against its specification but using some knowledge of its internal workings.

59. What is High Order Tests?


Black-box tests conducted once the software has been integrated.
60. What is the Independent Test Group (ITG)?
A group of people whose primary responsibility is software testing.

61. What is an Inspection?


A group review quality improvement process for written material. It consists of two aspects: product improvement (of the document itself) and process improvement (of both document production and inspection).

62. What is Integration Testing?


Testing of combined parts of an application to determine if they function together correctly.
Usually performed after unit and functional testing. This type of testing is especially relevant to
client/server and distributed systems.

63. What is Installation Testing?


Confirms that the application under test installs, loads and runs correctly in the target environment, covering aspects such as first-time installation, upgrades and uninstallation.

64. What is Load Testing?


See Performance Testing.

65. What is Localization Testing?


This term refers to adapting software for a specific locality, including language and regional conventions.

66. What is Loop Testing?


A white box testing technique that exercises program loops.

67. What is Metric?


A standard of measurement. Software metrics are the statistics describing the structure or
content of a program. A metric should be a real objective measurement of something such as
number of bugs per lines of code.

68. What is Monkey Testing?


Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.

69. What is Negative Testing?


Testing aimed at showing software does not work. Also known as "test to fail". See also Positive
Testing.

70. What is Path Testing?


Testing in which all paths in the program source code are tested at least once.

71. What is Performance Testing?


Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".

72. What is Positive Testing?


Testing aimed at showing software works. Also known as "test to pass". See also Negative
Testing.

73. What is Quality Assurance?


All those planned or systematic actions necessary to provide adequate confidence that a
product or service is of the type and quality needed and expected by the customer.

74. What is a Quality Audit?


A systematic and independent examination to determine whether quality activities and related
results comply with planned arrangements and whether these arrangements are implemented
effectively and are suitable to achieve objectives.

75. What is a Quality Circle?


A group of individuals with related interests that meet at regular intervals to consider problems
or other matters related to the quality of outputs of a process and to the correction of problems
or to the improvement of quality.

76. What is Quality Control?


The operational techniques and the activities used to fulfill and verify requirements of quality.

77. What is Quality Management?


That aspect of the overall management function that determines and implements the quality
policy.

78. What is Quality Policy?


The overall intentions and direction of an organization as regards quality as formally expressed
by top management.

79. What is a Quality System?


The organizational structure, responsibilities, procedures, processes, and resources for
implementing quality management.

80. What is a Race Condition?


A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which
is a write, with no mechanism used by either to moderate simultaneous access.
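
A minimal sketch of the problem: two threads perform an unsynchronized read-modify-write on a shared counter, so updates can be lost intermittently:

import threading

counter = 0  # shared resource

def increment(times):
    global counter
    for _ in range(times):
        counter += 1  # read-modify-write with no lock

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Often prints less than 200000 because the increments interleave.
print(counter)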

81. What is Ramp Testing?


Continuously raising an input signal until the system breaks down.
82. What is Recovery Testing?
Confirms that the program recovers from expected or unexpected events without loss of data or
functionality. Events can include a shortage of disk space, unexpected loss of communication,
or power out conditions

83. What is Regression Testing?


Retesting a previously tested program following modification to ensure that faults have not been
introduced or uncovered as a result of the changes made.

84. What is Release Candidate?


A pre-release version, which contains the desired functionality of the final version, but which
needs to be tested for bugs (which ideally should be removed before the final version is
released).

85. What is Sanity Testing?


A brief test of major functional elements of a piece of software to determine if it's basically
operational. See also Smoke Testing.

86. What is Scalability Testing?


Performance testing focused on ensuring the application under test gracefully handles
increases in workload.

87. What is Security Testing?


Testing which confirms that the program can restrict access to authorized personnel and that
the authorized personnel can access the functions available to their security level.

88. What is Smoke Testing?


A quick-and-dirty test that the major functions of a piece of software work. Originated in the
hardware testing practice of turning on a new piece of hardware for the first time and
considering it a success if it does not catch on fire.

89. What is Soak Testing?


Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

90. What is the Software Requirements Specification?


A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for the software.

91. What is Software Testing?


A set of activities conducted with the intent of finding errors in software.
92. What is Static Analysis?
Analysis of a program carried out without executing the program.

93. What is Static Analyzer?


A tool that carries out static analysis.

94. What is Static Testing?


Analysis of a program carried out without executing the program.

95. What is Storage Testing?


Testing that verifies the program under test stores data files in the correct directories and that it
reserves sufficient space to prevent unexpected termination resulting from lack of space. This is
external storage as opposed to internal storage.

96. What is Stress Testing?


Testing conducted to evaluate a system or component at or beyond the limits of its specified
requirements to determine the load under which it fails and how. Often this is performance
testing using a very high level of simulated load.

97. What is Structural Testing?


Testing based on an analysis of internal workings and structure of a piece of software. See also
White Box Testing.

98. What is System Testing?


Testing that attempts to discover defects that are properties of the entire system rather than of
its individual components.

99. What is Testability?


The degree to which a system or component facilitates the establishment of test criteria and the
performance of tests to determine whether those criteria have been met.

100. What is Testing?


The process of exercising software to verify that it satisfies specified requirements and to detect errors. The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829). The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

What is Test Automation?


It is the same as Automated Testing.

101. What is Test Bed?


An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
102. What is a Test Case?
Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as requirements tested, test steps, verification steps, prerequisites, outputs, test environment, etc. A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

What is Test-Driven Development?


Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, i.e. roughly as many lines of test code as production code.

103. What is Test Driver?


A program or test tool used to execute tests. Also known as a Test Harness.

104. What is Test Environment?


The hardware and software environment in which tests will be run, and any other software with
which the software under test interacts when under test including stubs and test drivers.
105. What is Test First Design?

Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires
that programmers do not write any production code until they have first written a unit test.

106. What is Test Harness?


A program or test tool used to execute tests. Also known as a Test Driver.

107. What is Test Plan?


A document describing the scope, approach, resources, and schedule of intended testing
activities. It identifies test items, the features to be tested, the testing tasks, who will do each
task, and any risks requiring contingency planning.

108. What is Test Procedure?


A document providing detailed instructions for the execution of one or more test cases.

109. What is Test Script?


Commonly used to refer to the instructions for a particular test that will be carried out by an
automated test tool.

110. What is Test Specification?


A document specifying the test approach for a software feature or combination of features, and the inputs, predicted results and execution conditions for the associated tests.

111. What is the Test Suite?


A collection of tests used to validate the behaviour of a product. The scope of a Test Suite
varies from organization to organization. There may be several Test Suites for a particular
product for example. In most cases, however, a Test Suite is a high-level concept, grouping
together hundreds or thousands of tests related by what they are intended to test.

112. What is Test Tools?


Computer programs used in the testing of a system, a component of the system, or its
documentation.

113. What is Thread Testing?


A variation of top-down testing where the progressive integration of components follows the
implementation of subsets of the requirements, as opposed to the integration of components by
successively lower levels.

114. What is a Top-Down Testing?


An approach to integration testing where the component at the top of the component hierarchy
is tested first, with lower-level components being simulated by stubs. Tested components are
then used to test lower-level components. The process is repeated until the lowest level
components have been tested.

115. What is Total Quality Management?


A company commitment to develop a process that achieves high-quality product and customer
satisfaction.

116. What is the Traceability Matrix?


A document showing the relationship between Test Requirements and Test Cases.

117. What is Usability Testing?


Testing the ease with which users can learn and use a product.

118. What is a Use Case?


The specification of tests that are conducted from the end-user perspective. Use cases tend to
focus on operating software as an end-user would conduct their day-to-day activities.
119. What is Unit Testing?

Testing of individual software components.

120. How do companies expect defect reporting to be communicated by the tester to the development team? Can an Excel sheet template be used for defect reporting? If so, what are the common fields to be included? Who assigns the priority and severity of the defect?
To report bugs in Excel, use columns such as:
SNo. | Module | Screen/Section | Issue detail | Severity | Priority | Issue status
This is how to report bugs in an Excel sheet; also set filters on the column attributes (a generation sketch follows the field list below).
But most companies use a SharePoint-style defect management process. In this, when the project comes in for testing, module-wise details of the project are entered into the defect management system they are using. It contains the following fields:
1. Date
2. Issue brief
3. Issue description (used by the developer to reproduce the issue)
4. Issue status (active, resolved, on hold, suspended, not able to reproduce)
5. Assigned to (names of members allocated to the project)
6. Priority (high, medium, low)
7. Severity (major, medium, low)
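
A minimal sketch of generating such an Excel defect report, assuming the openpyxl package (the file name and row values are hypothetical):

from openpyxl import Workbook

wb = Workbook()
ws = wb.active
ws.title = "Defects"
ws.append(["SNo", "Module", "Screen/Section", "Issue detail",
           "Severity", "Priority", "Issue status"])
ws.append([1, "Login", "Sign-in form",
           "No error message for blank password", "Medium", "High", "Active"])
ws.auto_filter.ref = ws.dimensions  # set filters on the column headers
wb.save("defect_report.xlsx")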

121. How do you plan test automation?


1. Prepare the automation test plan
2. Identify the scenarios
3. Record the scenarios
4. Enhance the scripts by inserting checkpoints and conditional loops
5. Incorporate error handling
6. Debug the script
7. Fix the issues
8. Rerun the script and report the results

122. Does automation replace manual testing?


There can be some functionality which cannot be tested with an automated tool, so it may have to be tested manually; therefore manual testing can never be fully replaced. (We can write scripts for negative testing as well, but it is a hectic task.) When we talk about the real environment, we do negative testing manually.

123. How will you choose a tool for test automation?


The choice of a tool depends on many things:
1. The application to be tested
2. The test environment
3. Scope and limitations of the tool
4. Features of the tool
5. Cost of the tool
6. Whether the tool is compatible with your application, i.e. the tool should be able to interact with your application
7. Ease of use

124. How you will evaluate the tool for test automation?
We need to concentrate on the features of the tool and how they could benefit our project. The additional new features and enhancements of existing features also help.

125. How you will describe testing activities?


Testing activities start from the elaboration phase. The various testing activities are: preparing the test plan, preparing the test cases, executing the test cases, logging bugs, validating bugs and taking appropriate action on them, and automating the test cases.

126. What testing activities you may want to automate?


Automate all the high-priority test cases which need to be executed as part of regression testing for each build cycle.

127. Describe common problems of test automation.


The common problems are:
1. Maintaining old scripts when there is a feature change or enhancement
2. Changes in the technology of the application will affect the old scripts
128. What types of scripting techniques for test automation do you know?
There are 5 types of scripting techniques:
Linear
Structured
Shared
Data-driven
Keyword-driven (see the sketch below)
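
A minimal sketch of the keyword-driven style named above: test steps are data (a keyword plus arguments) dispatched to small action functions, so the steps could equally be maintained in a spreadsheet (the actions and script are hypothetical):

# Keyword -> action implementation
ACTIONS = {
    "open": lambda url: print(f"opening {url}"),
    "type": lambda field, text: print(f"typing '{text}' into {field}"),
    "click": lambda element: print(f"clicking {element}"),
}

script = [
    ("open", "https://example.com/login"),
    ("type", "username", "admin"),
    ("click", "login-button"),
]

for keyword, *args in script:
    ACTIONS[keyword](*args)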

129. What is memory leaks and buffer overflows?


A memory leak means incomplete deallocation of memory; such bugs happen very often. A buffer overflow means that data sent as input to the server overflows the boundaries of the input area, causing the server to misbehave. Buffer overflows can be exploited by attackers to crash the server or execute malicious code.

130. What are the major differences between stress testing, load testing, Volume testing?
Stress testing means progressively increasing the load beyond the expected level and checking the performance at each step, to find the breaking point. Load testing means applying the expected load at once and checking the performance at that level. Volume testing means testing the system with large volumes of data, for example a database filled to its expected production size.

Descriptive Questions:

Q: How do you introduce a new software QA process?

A: It depends on the size of the organization and the risks involved. For large organizations with
high-risk projects, a serious management buy-in is required and a formalized QA process is
necessary. For medium size organizations with lower risk projects, management and
organizational buy-in and a slower, step-by-step process is required. Generally speaking, QA
processes should be balanced with productivity, in order to keep any bureaucracy from getting
out of hand. For smaller groups or projects, an ad-hoc process is more appropriate. A lot
depends on team leads and managers, feedback to developers and good communication is
essential among customers, managers, developers, test engineers and testers. Regardless of the size of the company, the greatest value for effort is in managing requirements processes, where the goal is requirements that are clear, complete and testable.

Q: What is the role of documentation in QA?

A: Documentation plays a critical role in QA. QA practices should be documented, so that they
are repeatable. Specifications, designs, business rules, inspection reports, configurations, code
changes, test plans, test cases, bug reports, user manuals should all be documented. Ideally,
there should be a system for easily finding and obtaining documents and for determining which document will contain a particular piece of information. Use documentation change management, if possible.
Q: What makes a good test engineer?

A: Good test engineers have a "test to break" attitude. Good test engineers take the point of view of the customer and have a strong desire for quality and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, as is an ability to communicate with both technical and non-technical people. Previous software development experience is also helpful, as it provides a deeper understanding of the software development process, gives the test engineer an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming.

Rob Davis is a good test engineer because he has a "test to break" attitude, takes the point of view of the customer, has a strong desire for quality, and has an attention to detail. He's also tactful and diplomatic and has good communication skills, both oral and written. And he has previous software development experience, too.
Q: What is a test plan?

A: A software project test plan is a document that describes the objectives, scope, approach
and focus of a software testing effort. The process of preparing a test plan is a useful way to
think through the efforts needed to validate the acceptability of a software product. The
completed document will help people outside the test group understand the why and how of
product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.

Q: What is a test case?

A: A test case is a document that describes an input, action, or event and its expected result, in
order to determine if a feature of an application is working correctly. A test case should contain
particulars such as a...
• Test case identifier;
• Test case name;
• Objective;
• Test conditions/setup;
• Input data requirements/steps, and
• Expected results.
Please note, the process of developing test cases can help find problems in the requirements or
design of an application, since it requires you to completely think through the operation of the
application. For this reason, it is useful to prepare test cases early in the development cycle, if
possible.

Q: What should be done after a bug is found?

A: When a bug is found, it needs to be communicated and assigned to developers that can fix it.
After the problem is resolved, fixes should be re-tested. Additionally, determinations should be
made regarding requirements, software, hardware, safety impact, etc., for regression testing to
check the fixes didn't create other problems elsewhere. If a problem-tracking system is in place,
it should encapsulate these determinations. A variety of commercial,
problem-tracking/management software tools are available. These tools, with the detailed input
of software test engineers, will give the team complete information so developers can
understand the bug, get an idea of its severity, reproduce it and fix it.

Q: What is configuration management?

A: Configuration management (CM) covers the tools and processes used to control, coordinate
and track code, requirements, documentation, problems, change requests, designs, tools,
compilers, libraries, patches, changes made to them and who makes the changes. Rob Davis
has had experience with a full range of CM tools and concepts, and can easily adapt to your
software tool and process needs.
Q: What if the software is so buggy it can't be tested at all?

A: In this situation the best bet is to have test engineers go through the process of reporting
whatever bugs or problems initially show up, with the focus being on critical bugs.

Since this type of problem can severely affect schedules and indicates deeper problems in the
software development process, such as insufficient unit testing, insufficient integration testing,
poor design, improper build or release procedures, managers should be notified and provided
with some documentation as evidence of the problem.

Q: What if there isn't enough time for thorough testing?

A: Since it's rarely possible to test every possible aspect of an application, every possible
combination of events, every dependency, or everything that could go wrong, risk analysis is
appropriate to most software development projects.

Use risk analysis to determine where testing should be focused. This requires judgment skills,
common sense and experience. The checklist should include answers to the following
questions:
• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?
Q: What if the project isn't big enough to justify extensive testing?

A: Consider the impact of project errors, not the size of the project. However, if extensive testing
is still not justified, risk analysis is again needed and the considerations listed under "What if
there isn't enough time for thorough testing?" do apply. The test engineer then should do "ad
hoc" testing, or write up a limited test plan based on the risk analysis.

Q: What can be done if requirements are changing continuously?

A: Work with management early on to understand how requirements might change, so that
alternate test plans and strategies can be worked out in advance. It is helpful if the application's
initial design allows for some adaptability, so that later changes do not require redoing the
application from scratch. Additionally, try to...
• Ensure the code is well commented and well documented; this makes changes easier
for the developers.
• Use rapid prototyping whenever possible; this will help customers feel sure of their
requirements and minimize changes.
• In the project's initial schedule, allow for some extra time to commensurate with probable
changes.
Move new requirements to a 'Phase 2' version of an application and use the original
requirements for the 'Phase 1' version.
Negotiate to allow only easily implemented new requirements into the project.
• Ensure customers and management understand scheduling impacts, inherent risks and
costs of significant requirements changes. Then let management or the customers decide if the
changes are warranted; after all, that's their job.
• Balance the effort put into setting up automated testing with the expected effort required
to redo them to deal with changes.
• Design some flexibility into automated test scripts;
• Focus initial automated testing on application aspects that are most likely to remain
unchanged;
• Devote appropriate effort to risk analysis of changes, in order to minimize regression-
testing needs;
• Design some flexibility into test cases; this is not easily done; the best bet is to minimize
the detail in the test cases, or set up only higher-level generic-type test plans;
Focus less on detailed test plans and test cases and more on ad-hoc testing with an
understanding of the added risk this entails.
Q: How do you know when to stop testing?

A: This can be difficult to determine. Many modern software applications are so complex and
run in such an interdependent environment, that complete testing can never be done. Common
factors in deciding when to stop are...
• Deadlines, e.g. release deadlines, testing deadlines;
• Test cases completed with certain percentage passed;
• Test budget has been depleted;
• Coverage of code, functionality, or requirements reaches a specified point;
• Bug rate falls below a certain level; or
• Beta or alpha testing period ends.
Q: What if the application has functionality that wasn't in the requirements?

A: It may take serious effort to determine if an application has significant unexpected or hidden
functionality, which it would indicate deeper problems in the software development process. If
the functionality isn't necessary to the purpose of the application, it should be removed, as it
may have unknown impacts or dependencies that were not taken into account by the designer
or the customer.
If not removed, design information will be needed to determine added testing needs or
regression testing needs. Management should be made aware of any significant added risks as
a result of the unexpected functionality. If the functionality only affects areas, such as minor
improvements in the user interface, it may not be a significant risk.

Q: How can software QA processes be implemented without stifling productivity?

A: Implement QA processes slowly over time. Use consensus to reach agreement on processes
and adjust and experiment as an organization grows and matures. Productivity will be improved
instead of stifled. Problem prevention will lessen the need for problem detection. Panics and
burnout will decrease and there will be improved focus and less wasted effort.

At the same time, attempts should be made to keep processes simple and efficient, minimize
paperwork, promote computer-based processes and automated tracking and reporting,
minimize time required in meetings and promote training as part of the QA process.

However, no one, especially talented technical types, like bureaucracy and in the short run
things may slow down a bit. A typical scenario would be that more days of planning and
development will be needed, but less time will be required for late-night bug fixing and calming
of irate customers.

Q: What if the organization is growing so fast that fixed QA processes are impossible?

A: This is a common problem in the software industry, especially in new technology areas.
There is no easy solution in this situation, other than...
• Hire good people (i.e. hire Rob Davis)
• Ruthlessly prioritize quality issues and maintain focus on the customer;
• Everyone in the organization should be clear on what quality means to the customer.
Q: Why do you recommend that we test during the design phase?

A: Because testing during the design phase can prevent defects later on. We recommend
verifying three things...
1. Verify the design is good, efficient, compact, testable and maintainable.
2. Verify the design meets the requirements and is complete (specifies all relationships
between modules, how to pass data, what happens in exceptional circumstances, starting state
of each module and how to guarantee the state of each module).
3. Verify the design incorporates enough memory, I/O devices and quick enough runtime
for the final product.
Q: What is software quality assurance?

A: Software Quality Assurance, when Rob Davis does it, is oriented to prevention. It involves the
entire software development process. Prevention is monitoring and improving the process,
making sure any agreed-upon standards and procedures are followed and ensuring problems
are found and dealt with.

Software Testing, when performed by Rob Davis, is also oriented to detection. Testing involves
the operation of a system or application under controlled conditions and evaluating the results.

Rob Davis can provide QA/testing service. This document details some aspects of how he can
provide software testing/QA service. For more information, e-mail [email protected].

Organizations vary considerably in how they assign responsibility for QA and testing.
Sometimes they're the combined responsibility of one group or individual.

Also common are project teams, which include a mix of test engineers, testers and developers,
who work closely together, with overall QA processes monitored by project managers.

Software quality assurance depends on what best fits your organization's size and business
structure.
Q: How is testing affected by object-oriented designs?
A: A well-engineered object-oriented design can make it easier to trace from code to internal
design to functional design to requirements. While there will be little affect on black box testing
(where an understanding of the internal design of the application is unnecessary), white-box
testing can be oriented to the application's objects. If the application was well designed this can
simplify test design.

Q: What is quality assurance?

A: Quality Assurance ensures all parties concerned with the project adhere to the process and
procedures, standards and templates and test readiness reviews.

Rob Davis' QA service depends on the customers and projects. A lot will depend on team leads
or managers, feedback to developers and communications among customers, managers,
developers' test engineers and testers.

Q: What is black box testing?


A: Black box testing is functional testing, not based on any knowledge of internal software
design or code. Black box testing are based on requirements and functionality.

Q: What is white box testing?


A: White box testing is based on knowledge of the internal logic of an application's code. Tests
are based on coverage of code statements, branches, paths and conditions.

Q: What is unit testing?


A: Unit testing is the first level of dynamic testing and is first the responsibility of developers and
then that of the test engineers.
Unit testing is performed after the expected test results are met or differences are
explainable/acceptable.

Q: What is functional testing?


A: Functional testing is black-box type of testing geared to functional requirements of an
application. Test engineers should perform functional testing.

Q: What is usability testing?


A: Usability testing is testing for 'user-friendliness'. Clearly this is subjective and depends on the
targeted end-user or customer. User interviews, surveys, video recording of user sessions and
other techniques can be used. Programmers and developers are usually not appropriate as
usability testers.

Q: What is incremental integration testing?


A: Incremental integration testing is continuous testing of an application as new functionality is
recommended. This may require that various aspects of an application's functionality are
independent enough to work separately, before all parts of the program are completed, or that
test drivers are developed as needed. Incremental testing may be performed by programmers,
software engineers, or test engineers.

Q: What is parallel/audit testing?


A: Parallel/audit testing is testing where the user reconciles the output of the new system to the
output of the current system to verify the new system performs the operations correctly.

Q: What is integration testing?


A: Upon completion of unit testing, integration testing begins. Integration testing is black box
testing. The purpose of integration testing is to ensure distinct components of the application still
work in accordance to customer requirements.
Test cases are developed with the express purpose of exercising the interfaces between the
components. This activity is carried out by the test team.
Integration testing is considered complete, when actual results and expected results are either
in line or differences are explainable/acceptable based on client input.

Q: What is system testing?


A: System testing is black box testing, performed by the Test Team, and at the start of the
system testing the complete system is configured in a controlled environment.
The purpose of system testing is to validate an application's accuracy and completeness in
performing the functions as designed.

System testing simulates real life scenarios that occur in a "simulated real life" test environment
and test all functions of the system that are required in real life.

System testing is deemed complete when actual results and expected results are either in line
or differences are explainable or acceptable, based on client input.
Upon completion of integration testing, system testing is started. Before system testing, all unit
and integration test results are reviewed by Software QA to ensure all problems have been
resolved. For a higher level of testing it is important to understand unresolved problems that
originate at unit and integration test levels.

You CAN learn system testing, with little or no outside help. Get CAN get free information. Click
on a link!

Q: What is end-to-end testing?

A: Similar to system testing, the macro end of the test scale is testing a complete application in
a situation that mimics real world use, such as interacting with a database, using network
communication, or interacting with other hardware, application, or system.

Q: What is regression testing?


A: The objective of regression testing is to ensure the software remains intact. A baseline set of
data and scripts is maintained and executed to verify changes introduced during the release
have not "undone" any previous code. Expected results from the baseline are compared to
results of the software under test. All discrepancies are highlighted and accounted for, before
testing proceeds to the next level.
Q: What is sanity testing?

A: Sanity testing is performed whenever cursory testing is sufficient to prove the application is
functioning according to specifications. This level of testing is a subset of regression testing.

It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to
the database, application servers, printers, etc.

Q: What is performance testing?

A: Although performance testing is described as a part of system testing, it can be regarded as


a distinct level of testing. Performance testing verifies loads, volumes and response times, as
defined by requirements.

Q: What is load testing?

A: Load testing is testing an application under heavy loads, such as the testing of a web site
under a range of loads to determine at what point the system response time will degrade or fail.

Q: What is installation testing?

A: Installation testing is testing full, partial, upgrade, or install/uninstall processes. The


installation test for a release is conducted with the objective of demonstrating production
readiness.
This test includes the inventory of configuration items, performed by the application's System
Administration, the evaluation of data readiness, and dynamic tests focused on basic system
functionality. When necessary, a sanity test is performed, following installation testing.

Q: What is security/penetration testing?

A: Security/penetration testing is testing how well the system is protected against unauthorized
internal or external access, or willful damage.

This type of testing usually requires sophisticated testing techniques.

Q: What is recovery/error testing?


A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures,
or other catastrophic problems.

Q: What is compatibility testing?

A: Compatibility testing is testing how well software performs in a particular hardware, software,
operating system, or network
This test includes the inventory of configuration items, performed by the application's System
Administration, the evaluation of data readiness, and dynamic tests focused on basic system
functionality. When necessary, a sanity test is performed, following installation testing.

Q: What is security/penetration testing?

A: Security/penetration testing is testing how well the system is protected against unauthorized
internal or external access, or willful damage.

This type of testing usually requires sophisticated testing techniques.

Q: What is recovery/error testing?

A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures,
or other catastrophic problems.

Q: What is compatibility testing?

A: Compatibility testing is testing how well software performs in a particular hardware, software,
operating system, or network
Q: What is comparison testing?

A: Comparison testing is testing that compares software weaknesses and strengths to those of
competitors' products.

Q: What is acceptance testing?

A: Acceptance testing is black box testing that gives the client/customer/project manager the
opportunity to verify the system functionality and usability prior to the system being released to
production.
The acceptance test is the responsibility of the client/customer or project manager, however, it
is conducted with the full support of the project team. The test team also works with the
client/customer/project manager to develop the acceptance criteria.

Q: What is alpha testing?

A: Alpha testing is testing of an application when development is nearing completion. Minor


design changes can still be made as a result of alpha testing. Alpha testing is typically
performed by a group that is independent of the design team, but still within the company, e.g.
in-house software test engineers, or software QA engineers.
Q: What is beta testing?

A: Beta testing is testing an application when development and testing are essentially
completed and final bugs and problems need to be found before the final release. Beta testing is
typically performed by end-users or others, not programmers, software engineers, or test
engineers.

Q: What is a Test/QA Team Lead?

A: The Test/QA Team Lead coordinates the testing activity, communicates testing status to
management and manages the test team.

Q: What testing roles are standard on most testing projects?

A: Depending on the organization, the following roles are more or less standard on most testing
projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System
Administrator, Database Administrator, Technical Analyst, Test Build Manager and Test
Configuration Manager.

Depending on the project, one person may wear more than one hat. For instance, Test
Engineers may also wear the hat of Technical Analyst, Test Build Manager and Test
Configuration Manager.

You CAN get a job in testing. Click on a link!

Q: What is a Test Engineer?

A: We, test engineers, are engineers who specialize in testing. We, test engineers, create test
cases, procedures, scripts and generate data. We execute test procedures and scripts, analyze
standards of measurements, evaluate results of system/integration/regression testing. We
also...
• Speed up the work of the development staff;
• Reduce your organization's risk of legal liability;
• Give you the evidence that your software is correct and operates properly;
• Improve problem tracking and reporting;
• Maximize the value of your software;
• Maximize the value of the devices that use it;
• Assure the successful launch of your product by discovering bugs and design flaws,
before users get discouraged, before shareholders loose their cool and before employees get
bogged down;
• Help the work of your development staff, so the development team can devote its time to
build up your product;
• Promote continual improvement;
• Provide documentation required by FDA, FAA, other regulatory agencies and your
customers;
• Save money by discovering defects 'early' in the design process, before failures occur in
production, or in the field;
• Save the reputation of your company by discovering bugs and design flaws; before bugs
and design flaws damage the reputation of your company.
: What is a Test Build Manager?

A: Test Build Managers deliver current software versions to the test environment, install the
application's software and apply software patches, to both the application and the operating
system, set-up, maintain and back up test environment hardware.

Depending on the project, one person may wear more than one hat. For instance, a Test
Engineer may also wear the hat of a Test Build Manager.

Q: What is a System Administrator?

A: Test Build Managers, System Administrators, Database Administrators deliver current


software versions to the test environment, install the application's software and apply software
patches, to both the application and the operating system, set-up, maintain and back up test
environment hardware.

Depending on the project, one person may wear more than one hat. For instance, a Test
Engineer may also wear the hat of a System Administrator.

Q: What is a Database Administrator?

A: Test Build Managers, System Administrators and Database Administrators deliver current
software versions to the test environment, install the application's software and apply software
patches, to both the application and the operating system, set-up, maintain and back up test
environment hardware. Depending on the project, one person may wear more than one hat. For
instance, a Test Engineer may also wear the hat of a Database Administrator.

Q: What is a Technical Analyst?

A: Technical Analysts perform test assessments and validate system/functional test


requirements. Depending on the project, one person may wear more than one hat. For instance,
Test Engineers may also wear the hat of a Technical Analyst.

Q: What is a Test Configuration Manager?

A: Test Configuration Managers maintain test environments, scripts, software and test data.
Depending on the project, one person may wear more than one hat. For instance, Test
Engineers may also wear the hat of a Test Configuration Manager.
Q: What is a test schedule?

A: The test schedule is a schedule that identifies all tasks required for a successful testing effort,
a schedule of all test activities and resource requirements.

Q: What is software testing methodology?

A: One software testing methodology is the use a three step process of...
1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests.
This methodology can be used and molded to your organization's needs. Rob Davis believes
that using this methodology is important in the development and ongoing maintenance of his
clients' applications.

Q: What is the general testing process?

A: The general testing process is the creation of a test strategy (which sometimes includes the
creation of test cases), creation of a test plan/design (which usually includes test cases and test
procedures) and the execution of tests.
Q: How do you create a test plan/design?

A: Test scenarios and/or cases are prepared by reviewing functional requirements of the
release and preparing logical groups of functions that can be further broken into test
procedures. Test procedures define test conditions, data to be used for testing and expected
results, including database updates, file outputs, report results. Generally speaking...
• Test cases and scenarios are designed to represent both typical and unusual situations
that may occur in the application.
• Test engineers define unit test requirements and unit test cases. Test engineers also
execute unit test cases.
• It is the test team that, with assistance of developers and clients, develops test cases
and scenarios for integration and system testing.
• Test scenarios are executed through the use of test procedures or scripts.
• Test procedures or scripts define a series of steps necessary to perform one or more
test scenarios.
• Test procedures or scripts include the specific data that will be used for testing the
process or transaction.
• Test procedures or scripts may cover multiple test scenarios.
• Test scripts are mapped back to the requirements and traceability matrices are used to
ensure each test is within scope.
• Test data is captured and base lined, prior to testing. This data serves as the foundation
for unit and system testing and used to exercise system functionality in a controlled
environment.
• Some output data is also base-lined for future comparison. Base-lined data is used to
support future application maintenance via regression testing.
• A pretest meeting is held to assess the readiness of the application and the environment
and data to be tested. A test readiness document is created to indicate the status of the
entrance criteria of the release.
Inputs for this process:
• Approved Test Strategy Document.
• Test tools, or automated test tools, if applicable.
• Previously developed scripts, if applicable.
• Test documentation problems uncovered as a result of testing.
• A good understanding of software complexity and module path coverage, derived from
general and detailed design documents, e.g. software design document, source code, and
software complexity data.
Outputs for this process:
• Approved documents of test scenarios, test cases, test conditions, and test data.
• Reports of software design issues, given to software developers for correction.
Q: How do you execute tests?

A: Execution of tests is completed by following the test documents in a methodical manner. As


each test procedure is performed, an entry is recorded in a test execution log to note the
execution of the procedure and whether or not the test procedure uncovered any defects.
Checkpoint meetings are held throughout the execution phase. Checkpoint meetings are held
daily, if required, to address and discuss testing issues, status and activities.
• The output from the execution of test procedures is known as test results. Test results
are evaluated by test engineers to determine whether the expected results have been obtained.
All discrepancies/anomalies are logged and discussed with the software team lead, hardware
test lead, programmers, software engineers and documented for further investigation and
resolution. Every company has a different process for logging and reporting bugs/defects
uncovered during testing.
• A pass/fail criteria is used to determine the severity of a problem, and results are
recorded in a test summary report. The severity of a problem, found during system testing, is
defined in accordance to the customer's risk assessment and recorded in their selected tracking
tool.
• Proposed fixes are delivered to the testing environment, based on the severity of the
problem. Fixes are regression tested and flawless fixes are migrated to a new baseline.
Following completion of the test, members of the test team prepare a summary report. The
summary report is reviewed by the Project Manager, Software QA Manager and/or Test Team
Lead.
• After a particular level of testing has been certified, it is the responsibility of the
Configuration Manager to coordinate the migration of the release software components to the
next test level, as documented in the Configuration Management Plan. The software is only
migrated to the production environment after the Project Manager's formal acceptance.
• The test team reviews test document problems identified during testing, and update
documents where appropriate.
Inputs for this process:
• Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
• Test tools, including automated test tools, if applicable.
• Developed scripts.
• Changes to the design, i.e. Change Request Documents.
• Test data.
• Availability of the test team and project team.
• General and Detailed Design Documents, i.e. Requirements Document, Software
Design Document.
• A software that has been migrated to the test environment, i.e. unit tested code, via the
Configuration/Build Manager.
• Test Readiness Document.
• Document Updates.
Outputs for this process:
• Log and summary of the test results. Usually this is part of the Test Report. This needs
to be approved and signed-off with revised testing deliverables.
• Changes to the code, also known as test fixes.
• Test document problems uncovered as a result of testing. Examples are Requirements
document and Design Document problems.
• Reports on software design issues, given to software developers for correction.
Examples are bug reports on code issues.
• Formal record of test incidents, usually part of problem tracking.
• Base-lined package, also known as tested source and object code, ready for migration
to the next level.

Q: How do you create a test strategy?

A: The test strategy is a formal description of how a software product will be tested. A test
strategy is developed for all levels of testing, as required. The test team analyzes the
requirements, writes the test strategy and reviews the plan with the project team. The test plan
may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria
and risk assessment.
Inputs for this process:
• A description of the required hardware and software components, including test tools.
This information comes from the test environment, including test tool data.
• A description of roles and responsibilities of the resources required for the test and
schedule constraints. This information comes from man-hours and schedules.
• Testing methodology. This is based on known standards.
• Functional and technical requirements of the application. This information comes from
requirements, change request, technical and functional design documents.
• Requirements that the system can not provide, e.g. system limitations.
Outputs for this process:
• An approved and signed off test strategy document, test plan, including test cases.
• Testing issues requiring resolution. Usually this requires additional negotiation at the
project management level.
Q: What is security clearance?

A: Security clearance is a process of determining your trustworthiness and reliability before


granting you access to national security information.

Q: What are the levels of classified access?

A: The levels of classified access are confidential, secret, top secret, and sensitive
compartmented information, of which top secret is the highest.

What's a 'test plan'?

A software project test plan is a document that describes the objectives, scope, approach, and
focus of a software testing effort. The process of preparing a test plan is a useful way to think
through the efforts needed to validate the acceptability of a software product. The completed
document will help people outside the test group understand the 'why' and 'how' of product
validation. It should be thorough enough to be useful but not so thorough that no one outside
the test group will read it. The following are some of the items that might be included in a test
plan, depending on the particular project:

* Title

* Identification of software including version/release numbers.

* Revision history of document including authors, dates, approvals.

* Table of Contents.
* Purpose of document, intended audience

* Objective of testing effort

* Software product overview

* Relevant related document list, such as requirements, design documents, other test plans, etc.

* Relevant standards or legal requirements

* Traceability requirements

* Relevant naming conventions and identifier conventions

* Overall software project organization and personnel/contact-info/responsibilties

* Test organization and personnel/contact-info/responsibilities

* Assumptions and dependencies

* Project risk analysis

* Testing priorities and focus

* Scope and limitations of testing

* Test outline - a decomposition of the test approach by test type, feature, functionality, process,
system, module, etc. as applicable

* Outline of data input equivalence classes, boundary value analysis, error classes

* Test environment - hardware, operating systems, other required software, data configurations,
interfaces to other systems

* Test environment validity analysis - differences between the test and production systems and
their impact on test validity.

* Test environment setup and configuration issues

* Software migration processes

* Software CM processes

• * Test data setup requirements


* Database setup requirements

* Outline of system-logging/error-logging/other capabilities, and tools such as screen capture


software, that will be used to help describe and report bugs

* Discussion of any specialized software or hardware tools that will be used by testers to help
track the cause or source of bugs

* Test automation - justification and overview

* Test tools to be used, including versions, patches, etc.

* Test script/test code maintenance processes and version control

* Problem tracking and resolution - tools and processes

* Project test metrics to be used

* Reporting requirements and testing deliverables

* Software entrance and exit criteria

* Initial sanity testing period and criteria

* Test suspension and restart criteria

* Personnel allocation

* Personnel pre-training needs

* Test site/location

* Outside test organizations to be utilized and their purpose, responsibilties, deliverables,


contact persons, and coordination issues.

* Relevant proprietary, classified, security, and licensing issues.

* Open issues

* Appendix - glossary, acronyms, etc.

What's a 'test case'?


* A test case is a document that describes an input, action, or event and an expected response,
to determine if a feature of an application is working correctly. A test case should contain
particulars such as test case identifier, test case name, objective, test conditions/setup, input
data requirements, steps, and expected results.

* Note that the process of developing test cases can help find problems in the requirements or
design of an application, since it requires completely thinking through the operation of the
application. For this reason, it's useful to prepare test cases early in the development cycle if
possible.

What should be done after a bug is found?

* The bug needs to be communicated and assigned to developers that can fix it. After the
problem is resolved, fixes should be re-tested, and determinations made regarding
requirements for regression testing to check that fixes didn't create problems elsewhere. If a
problem-tracking system is in place, it should encapsulate these processes. A variety of
commercial problem-tracking/management software tools are available (see the 'Tools' section
for web resources with listings of such tools). The following are items to consider in the tracking
process:

* Complete information such that developers can understand the bug, get an idea of it's
severity, and reproduce it if necessary.
* Bug identifier (number, ID, etc.)

* Current bug status (e.g., 'Released for Retest', 'New', etc.)

* The application name or identifier and version

* The function, module, feature, object, screen, etc. where the bug occurred

* Environment specifics, system, platform, relevant hardware specifics

* Test case name/number/identifier

* One-line bug description

* Full bug description

* Description of steps needed to reproduce the bug if not covered by a test case or if the
developer doesn't have easy access to the test case/test script/test tool

* Names and/or descriptions of file/data/messages/etc. used in test


* File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful
in finding the cause of the problem

* Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)

* Was the bug reproducible?

* Tester name

* Test date

* Bug reporting date

* Name of developer/group/organization the problem is assigned to

* Description of problem cause

* Description of fix

* Code section/file/module/class/method that was fixed

* Date of fix

* Application version that contains the fix

* Tester responsible for retest

* Retest date

* Retest results

* Regression testing requirements

* Tester responsible for regression tests

* Regression testing results

* A reporting or tracking process should enable notification of appropriate personnel at various


stages. For instance, testers need to know when retesting is needed, developers need to know
when bugs are found and how to get the needed information, and reporting/summary
capabilities are needed for managers.

What if the software is so buggy it can't really be tested at all?


* The best bet in this situation is for the testers to go through the process of reporting whatever
bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this
type of problem can severely affect schedules, and indicates deeper problems in the software
development process (such as insufficient unit testing or insufficient integration testing, poor
design, improper build or release procedures, etc.) managers should be notified, and provided
with some documentation as evidence of the problem.

How can it be known when to stop testing?

This can be difficult to determine. Many modern software applications are so complex, and run
in such an interdependent environment, that complete testing can never be done. Common
factors in deciding when to stop are:

* Deadlines (release deadlines, testing deadlines, etc.)

* Test cases completed with certain percentage passed

* Test budget depleted

* Coverage of code/functionality/requirements reaches a specified point

* Bug rate falls below a certain level

* Beta or alpha testing period ends

What if there isn't enough time for thorough testing?

* Use risk analysis to determine where testing should be focused. Since it's rarely possible to
test every possible aspect of an application, every possible combination of events, every
dependency, or everything that could go wrong, risk analysis is appropriate to most software
development projects. This requires judgement skills, common sense, and experience. (If
warranted, formal methods are also available.) Considerations can include:

* Which functionality is most important to the project's intended purpose?

* Which functionality is most visible to the user?

* Which functionality has the largest safety impact?

* Which functionality has the largest financial impact on users?

* Which aspects of the application are most important to the customer?

* Which aspects of the application can be tested early in the development cycle?
* Which parts of the code are most complex, and thus most subject to errors?

* Which parts of the application were developed in rush or panic mode?

* Which aspects of similar/related previous projects caused problems?

* Which aspects of similar/related previous projects had large maintenance expenses?

* Which parts of the requirements and design are unclear or poorly thought out?
* What do the developers think are the highest-risk aspects of the application?

* What kinds of problems would cause the worst publicity?

* What kinds of problems would cause the most customer service complaints?

* What kinds of tests could easily cover multiple functionalities?

* Which tests will have the best high-risk-coverage to time-required ratio?

What if the project isn't big enough to justify extensive testing?

* Consider the impact of project errors, not the size of the project. However, if extensive testing
is still not justified, risk analysis is again needed and the same considerations as described
previously in 'What if there isn't enough time for thorough testing?' apply. The tester might then
do ad hoc testing, or write up a limited test plan based on the risk analysis.

What can be done if requirements are changing continuously?

A common problem and a major headache

* Work with the project's stakeholders early on to understand how requirements might change
so that alternate test plans and strategies can be worked out in advance, if possible.

* It's helpful if the application's initial design allows for some adaptability so that later changes
do not require redoing the application from scratch.

* If the code is well-commented and well-documented this makes changes easier for the
developers.

* Use rapid prototyping whenever possible to help customers feel sure of their requirements and
minimize changes.
* The project's initial schedule should allow for some extra time commensurate with the
possibility of changes.

* Try to move new requirements to a 'Phase 2' version of an application, while using the original
requirements for the 'Phase 1' version.

* Negotiate to allow only easily-implemented new requirements into the project, while moving
more difficult new requirements into future versions of the application.

* Be sure that customers and management understand the scheduling impacts, inherent risks,
and costs of significant requirements changes. Then let management or the customers (not the
developers or testers) decide if the changes are warranted - after all, that's their job.

* Balance the effort put into setting up automated testing with the expected effort required to re-
do them to deal with changes.

* Try to design some flexibility into automated test scripts.

* Focus initial automated testing on application aspects that are most likely to remain
unchanged.

* Devote appropriate effort to risk analysis of changes to minimize regression testing needs.

* Design some flexibility into test cases (this is not easily done; the best bet might be to
minimize the detail in the test cases, or set up only higher-level generic-type test plans)

* Focus less on detailed test plans and test cases and more on ad hoc testing (with an
understanding of the added risk that this entails).
• What if the application has functionality that wasn't in the requirements?

* It may take serious effort to determine if an application has significant unexpected or hidden
functionality, and it would indicate deeper problems in the software development process. If the
functionality isn't necessary to the purpose of the application, it should be removed, as it may
have unknown impacts or dependencies that were not taken into account by the designer or the
customer. If not removed, design information will be needed to determine added testing needs
or regression testing needs. Management should be made aware of any significant added risks
as a result of the unexpected functionality. If the functionality only effects areas such as minor
improvements in the user interface, for example, it may not be a significant risk.

How can QA processes be implemented without stifling productivity?

* By implementing QA processes slowly over time, using consensus to reach agreement on


processes, and adjusting and experimenting as an organization grows and matures, productivity
will be improved instead of stifled. Problem prevention will lessen the need for problem
detection, panics and burn-out will decrease, and there will be improved focus and less wasted
effort. At the same time, attempts should be made to keep processes simple and efficient,
minimize paperwork, promote computer-based processes and automated tracking and
reporting, minimize time required in meetings, and promote training as part of the QA process.
However, no one - especially talented technical types - likes rules or bureacracy, and in the
short run things may slow down a bit. A typical scenario would be that more days of planning
and development will be needed, but less time will be required for late-night bug-fixing and
calming of irate customers. (See the Books section's 'Software QA', 'Software Engineering', and
'Project Management' categories for useful books with more information.)

What if an organization is growing so fast that fixed QA processes are impossible

* This is a common problem in the software industry, especially in new technology areas. There
is no easy solution in this situation, other than:

* Hire good people

* Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer

* Everyone in the organization should be clear on what 'quality' means to the customer

How does a client/server environment affect testing?

* Client/server applications can be quite complex due to the multiple dependencies among
clients, data communications, hardware, and servers. Thus testing requirements can be
extensive. When time is limited (as it usually is) the focus should be on integration and system
testing. Additionally, load/stress/performance testing may be useful in determining client/server
application limitations and capabilities. There are commercial tools to assist with such testing.
(See the 'Tools' section for web resources with listings that include these kinds of test tools.)

How can World Wide Web sites be tested?

* Web sites are essentially client/server applications - with web servers and 'browser' clients.
Consideration should be given to the interactions between html pages, TCP/IP communications,
Internet connections, firewalls, applications that run in web pages (such as applets, javascript,
plug-in applications), and applications that run on the server side (such as cgi scripts, database
interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a
wide variety of servers and browsers, various versions of each, small but sometimes significant
differences between them, variations in connection speeds, rapidly changing technologies, and
multiple standards and protocols. The end result is that
• testing for web sites can become a major ongoing effort. Other considerations might
include:

How is testing affected by object-oriented designs?

* What are the expected loads on the server (e.g., number of hits per unit time?), and what kind
of performance is required under such loads (such as web server response time, database
query response times). What kinds of tools will be needed for performance testing (such as web
load testing tools, other tools already in house that can be adapted, web robot downloading
tools, etc.)?

* Who is the target audience? What kind of browsers will they be using? What kind of
connection speeds will they by using? Are they intra- organization (thus with likely high
connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection
speeds and browser types)?

* What kind of performance is expected on the client side (e.g., how fast should pages appear,
how fast should animations, applets, etc. load and run)?

* Will down time for server and content maintenance/upgrades be allowed? how much?

* Will down time for server and content maintenance/upgrades be allowed? how much?

* How reliable are the site's Internet connections required to be? And how does that affect
backup system or redundant connection requirements and testing?

* What processes will be required to manage updates to the web site's content, and what are
the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?

* Which HTML specification will be adhered to? How strictly? What variations will be allowed for
targeted browsers?
* Will there be any standards or requirements for page appearance and/or graphics throughout
a site or parts of a site?
* How will internal and external links be validated and updated? how often?

* Can testing be done on the production system, or will a separate test system be required?
How are browser caching, variations in browser option settings, dial-up connection variabilities,
and real-world internet 'traffic congestion' problems to be accounted for in testing?
* How extensive or customized are the server logging and reporting requirements; are they
considered an integral part of the system and do they require testing?

* How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained,
tracked, controlled, and tested?
* Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger,
provide internal links within the page.

* The page layouts and design elements should be consistent throughout a site, so that it's clear
to the user that they're still within a site.
* Pages should be as browser-independent as possible, or pages should be provided or
generated based on the browser-type.
* All pages should have links external to the page; there should be no dead-end pages.
* The page owner, revision date, and a link to a contact person or organization should be
included on each page.
What is Extreme Programming and what's it got to do with testing?

* Extreme Programming (XP) is a software development approach for small teams on risk-prone
projects with unstable requirements. It was created by Kent Beck who described the approach
in his book 'Extreme Programming Explained' (See the Softwareqatest.com Books page.).
Testing ('extreme testing') is a core aspect of Extreme Programming. Programmers are
expected to write unit and functional test code first - before the application is developed. Test
code is under source control along with the rest of the code. Customers are expected to be an
integral part of the project team and to help develope scenarios for acceptance/black box
testing. Acceptance tests are preferably automated, and are modified and rerun for each of the
frequent development iterations. QA and test personnel are also required to be an integral part
of the project team. Detailed requirements documentation is not used, and frequent re-
scheduling, re-estimating, and re-prioritizing is expected.

You might also like