What is Fuzz Testing ?
Fuzz Testing is a software testing technique. The basic idea is to attach the inputs of a program to a source of random data. If the program fails (for example, by crashing, or by failing built-in code assertions), then there are defects to correct. The great advantage of Fuzz Testing is that the test design is extremely simple and free of preconceptions about system behavior.
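The idea above can be sketched in a few lines. This is a minimal illustration, not a real fuzzing tool: the function under test (`parse_age`) is hypothetical, and the "source of random data" is just random printable strings.

```python
import random
import string

def parse_age(text):
    # Hypothetical function under test: parse an age string into an int.
    value = int(text)  # raises ValueError on malformed input
    assert 0 <= value <= 150, "age out of range"  # built-in code assertion
    return value

def fuzz(target, trials=1000, seed=42):
    # Attach the target's input to a source of random data and record failures.
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        data = "".join(rng.choice(string.printable) for _ in range(rng.randint(0, 8)))
        try:
            target(data)
        except (ValueError, AssertionError) as exc:
            failures.append((data, type(exc).__name__))
    return failures

failures = fuzz(parse_age)
print(len(failures), "failing inputs out of 1000")
```

Note how little test design is involved: no expected outputs, only "the program must not fail", which is exactly the preconception-free property described above.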
What is Portlet Testing ?
Portlet Testing is the process of testing a fragment of a web page (a portlet), such as an advertisement block, whose content is collected from a different source or website.
What is L10n Testing ?
L10n is a common abbreviation for Localization (the letter L, followed by ten letters, followed by n). L10n Testing is Localization Testing: it verifies whether your products are ready for local markets or not.
What is Disaster Recovery Testing ?
Disaster Recovery Testing tests how well a system recovers from disasters, crashes, hardware failures, or other catastrophic problems.
What is Stochastic Testing ?
Stochastic Testing is the same as "monkey testing", but "stochastic testing" is a more technical-sounding name for the same process. Stochastic Testing is black-box, random testing performed by automated testing tools: a series of random tests run over time. The software under test typically passes any individual test; the goal is to see whether it can pass a large number of them.
Types of Black Box Testing :
More Testing JOBS & FAQ @ http://www.TestingKen.com
Functional Testing : black-box testing geared to the functional requirements of an application; this type of testing should be done by testers. This does not mean that programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).
System Testing : testing based on the overall requirements specifications; covers all combined parts of a system.
Integration Testing : testing combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Incremental Integration Testing : continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
End-to-end Testing : similar to system testing; the 'macro' end of the test scale; involves testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Sanity Testing : typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging systems down to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
Regression Testing : re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
Load Testing : testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
Stress Testing : term often used interchangeably with 'load' and 'performance' testing. Also used to describe tests such as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
Performance Testing : term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
Usability Testing : testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
Install/Uninstall Testing : testing of full, partial, or upgrade install/uninstall processes.
Recovery Testing : testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Security Testing : testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.
Compatibility Testing : testing how well software performs in a particular hardware/software/operating system/network environment.
Acceptance Testing : determining if software is satisfactory to a customer.
Comparison Testing : comparing software weaknesses and strengths to competing products.
Alpha Testing : testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end users or others, not by programmers or testers.
Beta Testing : testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end users or others, not by programmers or testers.
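The stochastic ("monkey") testing idea described above can be sketched as a loop that fires random operation sequences at a component and only checks that it never fails an assertion. This is a minimal illustration; the `BoundedStack` class under test is hypothetical.

```python
import random

class BoundedStack:
    # Hypothetical component under test: a stack capped at 8 items.
    def __init__(self):
        self.items = []
    def push(self, x):
        assert len(self.items) < 8, "stack overflow"
        self.items.append(x)
    def pop(self):
        assert self.items, "pop from empty stack"
        return self.items.pop()

def monkey_test(trials=1000, seed=7):
    # Run many random push/pop sequences; count runs that end in a failure.
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        stack = BoundedStack()
        try:
            for _ in range(rng.randint(1, 20)):
                if rng.random() < 0.5:
                    stack.push(rng.randint(0, 9))
                else:
                    stack.pop()
        except AssertionError:
            failures += 1
    return failures

print(monkey_test(), "of 1000 random runs hit an assertion")
```

Each individual random sequence is uninteresting on its own; it is the large number of runs over time that gives the technique its power.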
What is Security / Penetration testing ?
Security / Penetration testing is testing how well the system is protected against unauthorized internal access, external access, or willful damage. Security/penetration testing usually requires sophisticated testing techniques.
Security Testing : the process of determining that an IS (Information System) protects data and maintains functionality as intended. The six basic security concepts that need to be covered by security testing are: confidentiality, integrity, authentication, authorization, availability, and non-repudiation.
Confidentiality
* A security measure which protects against the disclosure of information to parties other than the intended recipient(s).
* Often ensured by encoding the information using a defined algorithm and some secret information known only to the originator of the information and the intended recipient(s) (a process known as cryptography), but that is by no means the only way of ensuring confidentiality.
Integrity
* A measure intended to allow the receiver to determine that the information it receives has not been altered in transit, or by anyone other than the originator of the information.
* Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding additional information to a communication to form the basis of an algorithmic check, rather than encoding all of the communication.
Authentication
* A measure designed to establish the validity of a transmission, message, or originator.
* Allows a receiver to have confidence that the information it receives originated from a specific known source.
Authorization
* The process of determining that a requestor is allowed to receive a service or perform an operation.
* Access control is an example of authorization.
Availability
* Assuring that information and communications services will be ready for use when expected.
* Information must be kept available to authorized persons when they need it.
Non-repudiation
* A measure intended to prevent the later denial that an action happened, or that a communication took place, etc.
* In communication terms this often involves the interchange of authentication information combined with some form of provable time stamp.
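The integrity and authentication concepts above can be illustrated with an HMAC: a keyed algorithmic check is appended to the message rather than encoding the whole communication. This is a minimal sketch (the key and message are made up), not a complete security scheme.

```python
import hmac
import hashlib

SECRET_KEY = b"shared-secret"  # assumed known only to originator and recipient

def sign(message: bytes) -> bytes:
    # Originator: attach a keyed digest as the basis of the algorithmic check.
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # Receiver: recompute the digest and compare in constant time.
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg = b"transfer 100 to account 42"
tag = sign(msg)
print(verify(msg, tag))         # unaltered message passes the integrity check
print(verify(msg + b"0", tag))  # a message altered in transit fails it
```

Because only a holder of the secret key could have produced a valid tag, the check gives the receiver both integrity (the message was not altered) and authentication (it originated from a known source).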
What is Open box and Closed box testing ?
Open box testing is the same as White box testing. It is a testing approach that examines the application's program structure and derives test cases from the application's program logic. Closed box testing is the same as Black box testing. It is a testing approach that considers only externally visible behavior. Black box testing considers neither the code itself nor the inner workings of the software.
What is Product metrics ?
Product metrics are metrics used to measure characteristics of the product itself, such as the documentation and the code.
What are different types of metrics used in testing ?
1. User participation : used to find the involvement of the tester = Participation test time vs. Total test time
2. Path testing = Number of paths tested / Total number of paths
3. Acceptance criteria tested = Acceptance criteria verified vs. Total acceptance criteria. This metric identifies how many of the user's acceptance criteria were evaluated during the testing process.
4. Test cost : used to find the resources consumed in testing = Test cost vs. Total system cost. This metric identifies the amount of resources used in the testing process.
5. Cost to locate defect = Test cost / Number of defects located in testing. This metric shows the cost to locate one defect.
6. Detected production defect = Number of defects detected in production / Application system size
7. Test automation = Cost of manual test effort / Total test cost
8. Schedule variance = (Actual time taken - Planned time) / Planned time * 100
9. Effort variance = (Actual effort - Planned effort) / Planned effort * 100
10. Test case efficiency = (Total STRs - STRs not mapped) / Total STRs * 100
11. Test case coverage = (Total test cases - STRs that cannot be mapped to test cases) / Total test cases * 100
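As a quick worked example, the schedule and effort variance formulas above share one shape, (actual - planned) / planned * 100. The figures below are hypothetical, chosen only to show the sign convention (positive = overrun).

```python
def variance_pct(actual, planned):
    # Variance formula used by both the schedule and effort metrics:
    # (actual - planned) / planned * 100
    return (actual - planned) / planned * 100

# Hypothetical figures for illustration.
print(variance_pct(actual=120, planned=100))  # schedule overran by 20.0 %
print(variance_pct(actual=90, planned=100))   # effort came in 10.0 % under plan
```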
Re: What are different types of metrics used in testing ?
Metric : Effort Variation (%)
Calculation : ((Actual Effort) - (Planned Effort)) / (Planned Effort) x 100
Unit : %
Frequency of Updates : Weekly
Remarks : Actual and Planned Effort as of the date of the report should be used.

Metric : Schedule Variation (%)
Calculation : ((Actual Duration) - (Planned Duration)) / (Planned Duration) x 100
Unit : %
Frequency of Updates : Upon completion of each milestone or monthly, whichever comes first.
Remarks : Actual and Planned Duration expended to achieve the latest milestone should be used if the update is upon completion of a milestone. If the update is upon completion of a month since the last update, the Actual and Planned Duration expended to complete the planned scope of work at that point in time should be used.

Metric : Resource Utilization
Calculation : (FTE Used) / (FTE Billed) x 100
Unit : %
Frequency of Updates : Weekly

Metric : Rework Effort
Calculation : (Effort for Reviews and Rework on Test Cases) / (Effort for Test Case Preparation) x 100
Unit : %
Frequency of Updates : Weekly
Remarks : Effort for Test Case Preparation includes Effort for Reviews and Rework.

Metric : Test Cases Prepared per Person Hour
Calculation : (Number of Test Cases Created) / (Effort for Test Case Preparation)
Unit : /FTE/hr
Frequency of Updates : Weekly
Remarks : Effort for Test Case Preparation includes Effort for Reviews and Rework.

Metric : Test Cases Executed per Person Hour
Calculation : (Number of Test Cases Executed) / (Effort for Test Case Execution)
Unit : /FTE/hr
Frequency of Updates : Weekly
Remarks : Effort for Test Case Execution includes effort for reporting.

Metric : Defect Detection Effectiveness (%)
Calculation : (Number of Defects Reported by Test Team) / (Total Number of Defects Reported) x 100
Unit : %
Frequency of Updates : Weekly
Remarks : Total Number of Defects Reported includes defects reported by any party other than the test team, including post-delivery defects.

Metric : Defect Acceptance Ratio
Calculation : (Number of Defects Accepted as Valid) / (Number of Defects Reported by Test Team)
Unit : %
Frequency of Updates : Weekly
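For instance, the two defect metrics above can be computed directly. The weekly figures here are hypothetical, and since the Defect Acceptance Ratio is expressed as a percentage, it is assumed to be scaled by 100 as well.

```python
def defect_detection_effectiveness(reported_by_test_team, total_reported):
    # (Number of Defects Reported by Test Team) / (Total Number of Defects Reported) x 100
    return reported_by_test_team / total_reported * 100

def defect_acceptance_ratio(accepted_as_valid, reported_by_test_team):
    # (Number of Defects Accepted as Valid) / (Number of Defects Reported by Test Team),
    # scaled by 100 here since the metric's unit is % (an assumption).
    return accepted_as_valid / reported_by_test_team * 100

# Hypothetical week: 50 defects reported in total, 45 of them by the test team,
# of which 40 were accepted as valid.
print(defect_detection_effectiveness(45, 50))  # 90.0
print(defect_acceptance_ratio(40, 45))
```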
What is Process metrics ?
Process metrics are metrics used to measure the characteristics of the methods, techniques, and tools employed in developing, implementing, and maintaining the software system.
What are two types of Metrics ?
Metrics are classified into two types :
1. Process metrics : metrics used to measure the characteristics of the methods, techniques, and tools employed in developing, implementing, and maintaining the software system.
2. Product metrics : metrics used to measure characteristics of the product itself, such as the documentation and the code.
What is Metrics ?
A metric is a mathematical number that shows a relationship between two variables. Software metrics are measures used to quantify the software, the software development resources, and the software development process.
What is Testware ? How Testware Produced ?
Just as hardware development engineers produce hardware and software development engineers produce software, Software Test Engineers produce Testware. Testware is produced by both verification and validation testing methods. Testware includes test cases, test plans, test reports, etc. Testware also includes software written for testing.
Do all testing projects need tester ?
This depends on the type of project. For simple projects, developers can take care of testing activities as well. But for medium and large projects, a separate tester is desirable.
What is considered successful testing ?
It is really difficult to achieve 100% successful testing. Since human beings tend to make mistakes, we may miss some bugs. We can normally fix all visible bugs, but it is difficult to fix the invisible ones. So if the bug rate falls below a certain level (normally defined at the project level), we may consider the testing successful and stop further testing.
What if there is not enough time for thorough testing ?
Most of the time, it is not possible to test the whole application within the specified time. In such situations, the tester needs to use common sense, find the risk factors in the project, and concentrate testing on them. Here are some points to consider when you are in such a situation:
# What is the most important functionality of the project ?
# What is the high-risk module of the project ?
# Which functionality is most visible to the user ?
# Which functionality has the largest safety impact ?
# Which functionality has the largest financial impact on users ?
# Which aspects of the application are most important to the customer ?
# Which parts of the code are most complex, and thus most subject to errors ?
# Which parts of the application were developed in rush or panic mode ?
# What do the developers think are the highest-risk aspects of the application ?
# What kind of problems would cause the worst publicity ?
# What kind of problems would cause the most customer service complaints ?
# What kind of tests could easily cover multiple functionalities ?
Considering these points, you can greatly reduce the risk of project release failure under strict time constraints.
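One common way to turn answers to a checklist like the one above into a test order is a simple risk score per area: likelihood of failure times impact of failure, riskiest first. The module names and 1-5 scores below are entirely made up for illustration.

```python
# Hypothetical modules scored 1-5 for (failure likelihood, business impact).
modules = {
    "payment": (4, 5),
    "reporting": (2, 2),
    "login": (3, 5),
    "help pages": (1, 1),
}

# Risk = likelihood x impact; test the highest-risk modules first.
ranked = sorted(modules, key=lambda m: modules[m][0] * modules[m][1], reverse=True)
print(ranked)  # ['payment', 'login', 'reporting', 'help pages']
```

Under strict time constraints, testing would start at the top of this ranking and stop wherever the deadline falls, which is exactly the trade-off the checklist is meant to inform.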
What is the most important thing in testing ?
The most important thing in testing is to fulfill all the requirements of the client and obtain the client's acceptance. Quality is one more important consideration in testing.
Quality Assurance (QA) is the activity of providing the evidence needed to establish quality in work, and to show that activities that require good quality are being performed effectively.

Software Testing is the process used to assess the quality of computer software: an empirical technical investigation conducted to provide stakeholders with information about the quality of the product or service under test, with respect to the context in which it is intended to operate. Usually, quality is constrained to topics such as correctness, completeness, and security, but it can also include the more technical requirements described in the ISO 9126 standard, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability.

Testing is a process of technical investigation, performed on behalf of stakeholders, intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; it furnishes a criticism or comparison of the state and behavior of the product against a specification.

An important point is that Software Testing should be distinguished from the separate discipline of Software Quality Assurance (SQA), which encompasses all business process areas, not just testing. In short, QA and Testing are integral parts of the system: Testing is one of the phases of QA. Testing deals with detecting errors in the behavior and structure of the code; QA ensures that the product's output meets all the required specifications of the project.