Software Testing

Testing terms.

Text provided by Pavel Gushchin; published with his permission.

  1. User related types
    1. Acceptance testing:

    2. Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component.
    3. Usability testing:

    4. Testing the ease with which users can learn and use a product.
    5. Environment related types
      1. compatibility testing:

      2. Testing whether the system is compatible with other systems with which it should communicate.
      3. operational testing:

      4. Testing conducted to evaluate a system or component in its operational environment.
      5. conversion testing:

      6. Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
      7. portability testing:

      8. Testing aimed at demonstrating the software can be ported to specified hardware or software platforms.
      9. installability testing:

      10. Testing concerned with the installation procedures for the system.
    6. Requirement related types
      1. performance testing:

      2. Testing conducted to evaluate the compliance of a system or component with specified performance requirements.
      3. security testing:

      4. Testing whether the system meets its specified security objectives.
      5. stress testing:

      6. Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.
      7. non-functional requirements testing:

      8. Testing of those requirements that do not relate to functionality (technical requirements testing).
    7. Data storage related types
      1. storage testing:

      2. Testing whether the system meets its specified storage objectives.
      3. volume testing:

      4. Testing where the system is subjected to large volumes of data.
    8. Special types
      1. negative testing:

      2. Testing aimed at showing software does not work (dirty testing).
      3. documentation testing:

      4. Testing concerned with the accuracy of documentation.
      5. beta testing:

      6. Operational testing at a site not otherwise involved with the software developers.
      7. alpha testing:

      8. Simulated or actual operational testing at an in-house site not otherwise involved with the software developers.
      9. static testing:

      10. Testing of an object without execution on a computer.
      11. Y2K testing:

      12. Testing whether the system correctly handles dates in and beyond the year 2000.

  2. Types by testing strategy
    1. Types by object of testing
      1. unit testing:

      2. The testing of individual software components (component testing)
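For instance, a unit test for a small component might look like the following sketch using Python's built-in unittest framework (the add function is a hypothetical component under test):

```python
import unittest

def add(a, b):
    """Hypothetical component under test."""
    return a + b

class TestAdd(unittest.TestCase):
    # Each test case exercises the component independently.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main()
```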
      3. integration testing:

      4. Testing performed to expose faults in the interfaces and in the interaction between integrated components.
        1. incremental testing:

        2. Integration testing where system components are integrated into the system one at a time until the entire system is integrated.
        3. interface testing:

        4. Integration testing where the interfaces between system components are tested
        5. isolation testing:

        6. Component testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs.
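A minimal Python sketch of isolation testing, where a hypothetical PaymentGatewayStub stands in for a real surrounding component so that checkout() can be tested alone:

```python
class PaymentGatewayStub:
    """Stub standing in for a real payment gateway (hypothetical name)."""
    def charge(self, amount):
        # Always succeed, so the component under test runs in isolation.
        return {"status": "approved", "amount": amount}

def checkout(cart_total, gateway):
    # Component under test: it only depends on the gateway interface.
    result = gateway.charge(cart_total)
    return result["status"] == "approved"

# The stub isolates checkout() from any real external system.
assert checkout(42.0, PaymentGatewayStub()) is True
```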
        7. big-bang testing:

        8. Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system.
      5. system testing:

      6. The process of testing an integrated system to verify that it meets specified requirements.
      7. baseline testing:
    2. Types by methods of testing
      1. functional testing:

      2. Test case selection that is based on an analysis of the specification of the component without reference to its internal workings (functional test case design, facility testing, feature testing, black box testing).
      3. structural testing:

      4. Test case selection that is based on an analysis of the internal structure of the component (white box testing, glass box testing, structural test case design, logic coverage testing, logic-driven testing).
        1. structured basis testing:

        2. A test case design technique in which test cases are derived from the code logic to achieve 100% branch coverage.
  3. Types by testing technique
    1. Types by test case design
      1. branch testing:

      2. A test case design technique for a component in which test cases are designed to execute branch outcomes (arc testing)
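For example, a component with a single if/else has two branch outcomes, each of which needs at least one test case (a hypothetical Python sketch):

```python
def classify(n):
    # Component with one decision and two branch outcomes.
    if n >= 0:
        return "non-negative"
    else:
        return "negative"

# Branch testing: one test case per branch outcome.
assert classify(5) == "non-negative"   # true branch
assert classify(-3) == "negative"      # false branch
```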
      3. branch condition testing:

      4. A test case design technique in which test cases are designed to execute branch condition outcomes.
        1. modified condition/decision testing:

        2. A test case design technique in which test cases are designed to execute branch condition outcomes that independently affect a decision outcome.
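A sketch of modified condition/decision testing for a hypothetical decision (a and b) or c: four test cases are enough to show each condition independently affecting the decision outcome:

```python
def decision(a, b, c):
    # Hypothetical decision with three branch conditions.
    return (a and b) or c

# Each pair of cases below differs in exactly one condition,
# and the decision outcome changes with it.
assert decision(True, True, False)        # baseline: decision is true
assert not decision(False, True, False)   # flipping a alone flips the outcome
assert not decision(True, False, False)   # flipping b alone flips the outcome
assert decision(True, False, True)        # flipping c alone flips it back
```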
        3. branch condition combination testing:

        4. A test case design technique in which test cases are designed to execute combinations of branch condition outcomes.
      5. partition testing:

      6. A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes (equivalence partition testing)
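For instance, a hypothetical pricing function with three equivalence classes of input needs only one representative test case per class (a Python sketch):

```python
def ticket_price(age):
    # Hypothetical component with three equivalence classes of input.
    if age < 13:
        return 5    # child
    elif age < 65:
        return 10   # adult
    else:
        return 7    # senior

# Partition testing: one representative per equivalence class.
assert ticket_price(8) == 5
assert ticket_price(30) == 10
assert ticket_price(70) == 7
```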
      7. state transition testing:

      8. A test case design technique in which test cases are designed to execute state transitions.
        1. n-switch testing:

        2. A form of state transition testing in which test cases are designed to execute all valid sequences of N transitions.
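A Python sketch of state transition testing against a hypothetical door state machine, with one test case per valid transition and one for an invalid transition:

```python
# Hypothetical door state machine: (state, event) -> next state.
TRANSITIONS = {
    ("closed", "open"): "opened",
    ("opened", "close"): "closed",
    ("closed", "lock"): "locked",
    ("locked", "unlock"): "closed",
}

def step(state, event):
    # Invalid transitions leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

# State transition testing: one test case per transition.
assert step("closed", "open") == "opened"
assert step("opened", "close") == "closed"
assert step("closed", "lock") == "locked"
assert step("locked", "unlock") == "closed"
assert step("locked", "open") == "locked"  # invalid transition rejected
```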
      9. boundary value testing:

      10. A test case design technique for a component in which test cases are designed which include representatives of boundary values (boundary value analysis)
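For example, for a hypothetical pass/fail check with a boundary at 60, boundary value testing probes the values on and immediately around the boundary (a Python sketch):

```python
def grade(score):
    # Hypothetical component with a pass/fail boundary at 60.
    return "pass" if score >= 60 else "fail"

# Boundary value testing: values at and on either side of the boundary.
assert grade(59) == "fail"   # just below the boundary
assert grade(60) == "pass"   # on the boundary
assert grade(61) == "pass"   # just above the boundary
```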
      11. syntax testing:

      12. A test case design technique for a component or system in which test case design is based upon the syntax of the input.
      13. statistical testing:

      14. A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases.
      15. data definition-use testing:

      16. A test case design technique for a component in which test cases are designed to execute data definition-use pairs.
      17. exhaustive testing:

      18. A test case design technique in which the test case suite comprises all combinations of input values and preconditions for component variables (complete path testing).
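Exhaustive testing is only feasible for tiny input spaces; for a hypothetical two-input boolean component, the complete suite can simply be enumerated (a Python sketch):

```python
from itertools import product

def xor(a, b):
    # Hypothetical component with a small, fully enumerable input space.
    return a != b

# Exhaustive testing: the suite covers every combination of input values.
for a, b in product([False, True], repeat=2):
    expected = (a or b) and not (a and b)
    assert xor(a, b) == expected
```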
      19. LCSAJ testing:

      20. A test case design technique for a component in which test cases are designed to execute LCSAJs.
      21. ad hoc testing:

      22. Testing carried out using no recognized test case design technique.
    2. Types by hierarchy of the tested material
      1. top-down testing:

      2. An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
        1. thread testing:

        2. A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
      3. bottom-up testing:

      4. An approach to testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
    3. Types by base of testing design
      1. requirements based testing:

      2. Designing tests based on objectives derived from requirements for the software component (e.g., tests that exercise specific functions or probe non-functional constraints such as performance or security). See functional test case design.
      3. design-based testing:

      4. Designing tests based on objectives derived from the architectural or detailed design of the software (e.g., tests that execute specific invocation paths or probe the worst-case behaviour of algorithms).
      5. code-based testing:

      6. Designing tests based on objectives derived from the implementation (tests that execute specific control flow paths or use specific data items).
      7. data flow testing:

      8. Testing in which test cases are designed based on variable usage within the code.
      9. conformance testing:

      10. The process of testing that an implementation conforms to the specification on which it is based.
  4. Types by place in the software development process
    1. testing during planning stages
    2. testing during design stages
    3. testing of the ready product
    4. regression testing:

    5. Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
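A Python sketch of regression testing: after modifying a hypothetical slugify function, the previously passing cases are re-run alongside a case exercising the new behaviour:

```python
def slugify(title):
    """Hypothetical, recently modified function: now also collapses
    repeated hyphens produced by runs of spaces."""
    slug = title.lower().replace(" ", "-")
    while "--" in slug:
        slug = slug.replace("--", "-")
    return slug

# Regression suite: previously passing cases re-run after the change.
assert slugify("Hello World") == "hello-world"
assert slugify("already-fine") == "already-fine"
assert slugify("A  B") == "a-b"  # exercises the new collapsing behaviour
```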
    6. progressive testing:

    7. Testing of new features after regression testing of previous features.
    8. recovering testing:

    9. Retesting of a previously tested program after fixing errors found by previous testing.
    10. maintainability testing:

    11. Testing whether the system meets its specified objectives for maintainability (serviceability testing)



 