Software Testing

The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn’t fail in an unacceptable manner. It is an investigation conducted to provide stakeholders with information about the quality of the product or service under test.

The development process involves various types of testing. Each test type addresses a specific testing requirement. The most common types of testing involved in the development process as well as in other processes are explained below.

1. White Box Testing

Testing based on an analysis of internal workings and structure of a piece of software. It includes techniques such as Branch Testing and Path Testing. It is also known as Structural Testing and Glass Box Testing.

Generally done by Developer/Programmer

Possible Bugs…
• Loop Errors
• Branching errors
• Conditional Errors
• Syntax Errors etc.
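As an illustration, branch testing aims to exercise every outcome of the code's conditionals. A minimal sketch in Python, using a hypothetical function (the names are invented for illustration):

```python
# Hypothetical function under test with one conditional, i.e. two branches.
def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

# Branch testing: at least one test case per branch, so both outcomes
# of the `if` are exercised.
assert classify(-1) == "negative"      # true branch
assert classify(0) == "non-negative"   # false branch (boundary value)
```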

2. Black Box Testing 

Testing without knowledge of the internal workings of the item being tested. Tests are usually functional. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document.

Possible Bugs…

• Functionality Errors
• Interface errors
• Errors in functions/components, etc.
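A black box test exercises only the documented inputs and outputs, never the implementation. A minimal sketch against Python's built-in `abs()`, whose external behaviour is specified in its documentation:

```python
# Black box test of the built-in abs(): we check only the documented
# input/output behaviour, with no knowledge of how it is implemented.
assert abs(-5) == 5      # negative input
assert abs(5) == 5       # positive input
assert abs(0) == 0       # boundary value
```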

3. Gray box Testing

The effective combination of black box (external) and white box (internal or unit) testing.

• As it needs both internal and external knowledge of the system, it is generally done by developers (the developer becomes the tester here).
• Well suited for open source applications

4. Integration Testing

Testing in which modules are combined and tested as a group.

Integration testing is sometimes completed as part of unit or functional testing, and sometimes it becomes its own standalone test phase. On a larger level, integration testing can involve putting together groups of modules and functions with the goal of verifying that the completed system meets the system requirements. Modules are typically code modules, individual applications, client and server applications on a network, etc. Integration testing follows unit testing and precedes system testing.
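A minimal sketch of the idea, using two hypothetical Python "modules" (both function names are invented for illustration): each could pass its own unit tests, and the integration test checks that they work together.

```python
# Module A (hypothetical): parses a CSV line into fields.
def parse(csv_line):
    return csv_line.split(",")

# Module B (hypothetical): formats a list of fields for display.
def format_row(fields):
    return " | ".join(f.strip() for f in fields)

# Integration test: feed module A's output into module B and check
# the combined behaviour, not either module in isolation.
assert format_row(parse("a, b ,c")) == "a | b | c"
```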

5. System Integration Testing

Testing a specific hardware/software installation. This is typically performed on a COTS (commercial off-the-shelf) system or any other system composed of disparate parts where custom configurations and/or unique installations are the norm.

6. Regression Testing

A regression test allows a consistent, repeatable validation of each new release of a product or Web site. Such testing ensures that reported product defects have been corrected for each new release and that no new quality problems were introduced in the maintenance process.

Though regression testing can be performed manually, an automated test suite is often used to reduce the time and resources needed to perform the required testing.
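A sketch of a regression test, assuming a hypothetical `slugify()` function that once crashed on empty input; the test pins the fix so the defect cannot silently return in a later release.

```python
# Hypothetical function that once had a reported defect: it crashed on
# an empty string. The fix and its regression test are kept together.
def slugify(title):
    if not title:            # the fix for the reported defect
        return ""
    return "-".join(title.lower().split())

# Pre-existing behaviour must still hold...
assert slugify("Hello World") == "hello-world"
# ...and the specific reported defect must stay fixed.
assert slugify("") == ""
```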

7. Unit Testing

Functional and reliability testing in an engineering environment. Producing tests for the behavior of components of a product to ensure their correct behavior prior to system integration.
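A minimal unit test sketch using Python's `unittest` module, for a hypothetical `word_count()` component tested in isolation before any integration:

```python
import unittest

# Hypothetical component under test: a small, isolated function.
def word_count(text):
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("one two three"), 3)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

# Run the test case programmatically so the result can be inspected.
suite = unittest.TestLoader().loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```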

8. Functional Testing

Testing two or more modules together with the intent of finding defects, demonstrating that defects are not present, verifying that each module performs its intended functions as stated in the specification, and establishing confidence that a program does what it is supposed to do.

This can involve testing of the product's user interface, APIs, database management, security, installation, networking, etc. Functional testing can be performed on an automated or manual basis using black box or white box methodologies.

9. System Testing

Once the entire system has been built, it has to be tested against the "System Specification" to check that it delivers the features required. This is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.

It is still developer-focused, although specialist developers known as systems testers are normally employed to do it.


• Performance – Are the performance criteria met?
• Volume – Can large volumes of information be handled?
• Stress – Can peak volumes of information be handled?
• Documentation – Is the documentation usable for the system?
• Robustness – Does the system remain stable under adverse circumstances?

10. Automated Testing

Software testing that utilizes a variety of tools to automate the testing process, reducing the need for a person to test manually. Automated testing still requires a skilled quality assurance professional with knowledge of the automation tool and the software being tested to set up the tests.

11. Ad Hoc Testing

Testing without a formal test plan or outside of a test plan. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed.

• Sometimes ad hoc testing is referred to as exploratory testing.
• It is sometimes also called monkey testing.

12. Alpha Testing

Testing after code is mostly complete or contains most of the functionality and prior to users being involved.

• Sometimes a select group of users is involved.
• More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.

13. Beta Testing

Testing after the product is code complete. Betas are often widely distributed or even distributed to the public at large in hopes that they will buy the final product when it is released.

14. Interface Testing

As the components are constructed and tested, they are then linked together to check whether they work with each other.

Two components that have each passed all their own tests can still, when connected to each other, produce one new component full of faults. These tests can be done by specialists or by the developers.


• What can a component expect from another component in terms of services?
• How will these services be asked for?
• How will they be given?
• How are non-standard conditions, i.e. errors, handled? Etc.
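These questions can be sketched as an interface test, here against a hypothetical `MemoryStore` component whose contract (invented for illustration) says `get()` returns the stored value or raises `KeyError` for a missing key:

```python
# Hypothetical component whose interface contract is: put(key, value)
# stores a value, get(key) returns it, and get() on a missing key
# raises KeyError (the "non-standard condition").
class MemoryStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data[key]   # raises KeyError if the key is absent

# Interface test: check the contract a caller relies on, not internals.
store = MemoryStore()
store.put("name", "Ada")
assert store.get("name") == "Ada"        # the service is given as agreed
try:
    store.get("missing")
    raise AssertionError("expected KeyError for a missing key")
except KeyError:
    pass                                  # error contract honoured
```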

15. Component Testing

Starting from the bottom the first test level is “Component Testing”, sometimes called Unit Testing.

It involves checking that each feature specified in the “Component Design” has been implemented in the component.

16. Release Testing

Even if a system meets all its requirements, there is still a question to be answered: will it actually benefit the business?

The link between the "Business Case" and Release Testing is looser than for the other test levels, but it is still important.


• Does it affect any other systems running on the hardware?
• Is it compatible with other systems?
• Does it have acceptable performance under load?

17. Pilot Testing

Testing that involves the users just before actual release to ensure that users become familiar with the release contents and ultimately accept it. It is often considered a Move-to-Production activity for ERP releases or a beta test for commercial products.

It typically involves many users, is conducted over a short period of time, and is tightly controlled. (See Beta Testing.)

18. Security Testing

Testing of database and network software in order to keep company data and resources secure from mistaken/accidental users, hackers, and other malevolent attackers.

19. Compatibility Testing

Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.

20. Configuration Testing

Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.

21. Installation Testing

Testing with the intent of determining if the product will install on a variety of platforms and how easily it installs.

22. Risk Based Testing

Testing is our primary means of reducing risks related to systems and software. By identifying and prioritizing risks, we can make sure that limited time and resources are used to test the most important things.

23. Content Testing

The process of checking your content for errors; it should ensure that your site has a professional appearance. In this testing, you should reread each page on your site and check for spelling and grammatical errors.

24. Accessibility testing

Accessibility testing measures how well the site's interface accommodates users with disabilities; part of the issue is that specialized browsers exist for disabled users (screen readers, Braille readers, etc.) which can have trouble rendering some HTML constructions in meaningful ways.

25. User Scenario Testing (User Test)

User tests are observational tests in which a user or set of users is given tasks to perform using some software or product.

26. Conformance Testing

Verifying implementation conformance to industry standards. Producing tests for the behavior of an implementation to be sure it provides the portability, interoperability, and/or compatibility a standard defines.

27. Independent Verification and Validation (IV&V)

The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn’t fail in an unacceptable manner.

The individual or group doing this work is not part of the group or organization that developed the software. A term often applied to government work or where the government regulates the products, as in medical devices.

28. Load Testing

Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation. Load testing is a generic term covering Performance Testing and Stress Testing.

29. Performance Testing

Testing with the intent of determining how quickly a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing.
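A crude sketch of the idea in Python: time an event and assert that it meets a (generously chosen) performance criterion. Real performance testing relies on dedicated tools; this only illustrates timing against a budget, and the workload is invented for illustration.

```python
import time

# Time a sample operation against a performance criterion. The budget
# is deliberately generous so the check is not flaky on slow machines.
start = time.perf_counter()
total = sum(range(100_000))          # the "event" being measured
elapsed = time.perf_counter() - start

assert total == 4_999_950_000        # the operation still did its job
assert elapsed < 5.0                 # performance criterion (in seconds)
```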

30. Stress Testing

Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. A graceful degradation under load leading to non-catastrophic failure is the desired result.

Often Stress Testing is performed using the same process as Performance Testing but employing a very high level of simulated load.
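A sketch of graceful degradation under simulated load, using a hypothetical bounded queue (invented for illustration) that sheds excess work with a clear error instead of failing catastrophically:

```python
# Hypothetical component with a specified capacity limit: past the
# limit it degrades gracefully by rejecting work with a clear error.
class BoundedQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def push(self, item):
        if len(self.items) >= self.capacity:
            raise OverflowError("queue full")   # non-catastrophic failure
        self.items.append(item)

# Stress test: drive load well beyond the specified limit and check
# that the component fails gracefully rather than crashing.
q = BoundedQueue(capacity=3)
accepted = rejected = 0
for i in range(10):                  # simulated load beyond the limit
    try:
        q.push(i)
        accepted += 1
    except OverflowError:
        rejected += 1

assert accepted == 3 and rejected == 7
```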

31. Acceptance Testing

Testing the system with the intent of confirming readiness of the product and customer acceptance.


• Systems Testing checks that the system that was specified has been delivered.
• Acceptance Testing checks that the system delivers what was requested.

32. Usability Testing

Usability can be defined as the degree to which a given piece of software assists the person sitting at the keyboard to accomplish a task, as opposed to becoming an additional impediment to such accomplishment.

• Usability testing measures the site's interface for ease and intuitiveness of navigation.
• This can be tested for Web based projects.
• Also called User-Interface Testing

33. Sanity testing

A quick-and-dirty test that the major functions of a piece of software work, without bothering with finer details. The term originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire.

• It tests the basic functionality of the application.
• Sometimes called smoke testing.
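A minimal smoke-test sketch: one quick check per major function, no edge cases, here against Python's standard `json` module. If any of these fail, deeper testing is pointless.

```python
import json

# Smoke test: confirm the major functions work at all, nothing more.
assert json.loads('{"a": 1}') == {"a": 1}   # parsing works
assert json.dumps([1, 2]) == "[1, 2]"       # serialising works
```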