
Basic Definitions


Test design

In software engineering, test design is the act of creating and writing test suites for testing a software system.

Definition

Test design can require all or some of the following:

- knowledge of the software and the business area it operates in,
- knowledge of the functionality being tested,
- knowledge of testing techniques and heuristics,
- planning skills, to schedule the order in which test cases should be designed, given the effort, time, and cost needed, or the consequences of failure for the most important and riskiest features.[1]

A well-designed test suite provides efficient testing: it contains just enough test cases to test the system, but no more. This way, no time is lost writing redundant test cases that would unnecessarily consume time each time they are executed.

Automatic test design

However good automatic test design can be, it is not appropriate for all circumstances.

Test management

Creating test definitions in a database

Preparing test campaigns

This includes building bundles of test cases and executing them (or scheduling their execution).

Execution can be either manual or automatic.

Manual execution

The user performs all the test steps manually and reports the results to the system.[1] Some test management tools include a framework that interfaces the user with the test plan to facilitate this task.

Automatic execution

There are numerous ways of implementing automated tests.

Generating reports and metrics

The ultimate goal of test management tools is to deliver meaningful metrics that help the QA manager evaluate the quality of the system under test before release.

Managing bugs

Finally, test management tools can integrate bug tracking features, or at least interface efficiently with well-known dedicated bug tracking solutions (such as Bugzilla or Mantis), to link a test failure with a bug.

Planning test activities

Model-based testing

[Figure: general model-based testing setting]

Model-based testing is the application of model-based design to designing, and optionally also executing, artifacts that perform software testing or system testing. Models can be used to represent the desired behavior of a system under test (SUT), or to represent testing strategies and a test environment; the figure depicts the former approach. A model describing a SUT is usually an abstract, partial representation of the SUT's desired behavior. Tests can be derived from such models in different ways. Because test suites are derived from models and not from source code, model-based testing is usually seen as one form of black-box testing.

Model-based testing for complex software systems is still an evolving field.

Models

Especially in Model-Driven Engineering or in the Object Management Group's (OMG's) model-driven architecture, models are built before or in parallel with the corresponding systems.

Deploying model-based testing

Theorem proving

Software testing

Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test.[1] Software testing can also provide an objective, independent view of the software, allowing the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding software bugs (errors or other defects) and of verifying that the software product is fit for use. Software testing involves the execution of a software component or system component to evaluate one or more properties of interest.

In general, these properties indicate the extent to which the component or system under test meets the requirements that guided its design and development. As the number of possible tests for even simple software components is practically infinite, all software testing uses some strategy to select tests that are feasible for the available time and resources.

Overview

Defects and failures

Code coverage

Code coverage was among the first methods invented for systematic software testing. The first published reference was by Miller and Maloney in Communications of the ACM in 1963.[1]

Coverage criteria

To measure what percentage of code has been exercised by a test suite, one or more coverage criteria are used. A coverage criterion is usually defined as a rule or requirement that a test suite needs to satisfy.[2]

Basic coverage criteria

There are a number of coverage criteria, the main ones being function coverage, statement coverage, branch (or decision) coverage, and condition coverage.[3] For example, consider the following C++ function:

    int foo(int x, int y) {
        int z = 0;
        if ((x > 0) && (y > 0)) {
            z = x;
        }
        return z;
    }

Assume this function is part of some bigger program and that the program was run with some test suite. Condition coverage does not necessarily imply branch coverage.

For a decision such as if (a && b), condition coverage can be satisfied by two tests:

- a = true, b = false
- a = false, b = true

However, this set of tests does not satisfy decision coverage, since in neither case is the if condition met.

In practice

Session-based testing

Session-based testing is a software test method that aims to combine accountability and exploratory testing to provide rapid defect discovery, creative on-the-fly test design, management control, and metrics reporting. The method can also be used in conjunction with scenario testing. Session-based testing was developed in 2000 by Jonathan and James Bach. It can be used to introduce measurement and control to an immature test process, and can form a foundation for significant improvements in productivity and error detection. Session-based testing can offer benefits when formal requirements are absent, incomplete, or changing rapidly.

Elements of session-based testing

Mission

The mission in Session-Based Test Management identifies the purpose of the session, helping to focus the session while still allowing for exploration of the system under test.

Charter

A charter is a goal or agenda for a test session.

Session

Session report

Debrief