
Unit testing

In computer programming, unit testing is a software testing method by which individual units of source code (sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures) are tested to determine whether they are fit for use.[1] Intuitively, one can view a unit as the smallest testable part of an application. In procedural programming, a unit could be an entire module, but it is more commonly an individual function or procedure. In object-oriented programming, a unit is often an entire interface, such as a class, but could be an individual method.[2] Unit tests are short code fragments[3] created by programmers, or occasionally by white-box testers, during the development process. Ideally, each test case is independent of the others.

Benefits

The goal of unit testing is to isolate each part of the program and show that the individual parts are correct.[1] A unit test provides a strict, written contract that the piece of code must satisfy.
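As a concrete illustration, here is a minimal sketch of a unit test written with Python's built-in unittest module; the leap_year function and the test names are invented for this example, not taken from the text above.

```python
import unittest

def leap_year(year: int) -> bool:
    """Return True if the given year is a leap year in the Gregorian calendar."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    # Each test case checks one behavior and is independent of the others.
    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_divisible_by_400_is_leap(self):
        self.assertTrue(leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```

Each test here documents part of the written contract: the function's behavior for ordinary years, century years, and years divisible by 400.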

White-box testing

White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) is a method of testing software that tests internal structures or workings of an application, as opposed to its functionality (i.e. black-box testing). In white-box testing, an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT). While white-box testing can be applied at the unit, integration, and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. White-box test design techniques include control flow testing, data flow testing, branch testing, path testing, statement coverage, and decision coverage.
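As a sketch of the idea, the inputs below are chosen by reading the code so that every branch is executed (branch coverage); the classify function is a made-up example.

```python
def classify(n: int) -> str:
    # Two branches; the test inputs below are derived from this structure.
    if n < 0:
        return "negative"
    return "non-negative"

# Inputs chosen by inspecting the code so that each branch is exercised,
# rather than by reading the specification alone (as black-box testing would).
assert classify(-1) == "negative"      # exercises the if branch
assert classify(0) == "non-negative"   # exercises the fall-through branch
print("both branches covered")
```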

Integration testing

Purpose

The purpose of integration testing is to verify the functional, performance, and reliability requirements placed on major design items. These "design items", i.e. assemblages (or groups of units), are exercised through their interfaces using black-box testing, with success and error cases simulated via appropriate parameter and data inputs. Simulated usage of shared data areas and inter-process communication is tested, and individual subsystems are exercised through their input interfaces. Test cases are constructed to test whether all the components within assemblages interact correctly, for example across procedure calls or process activations; this is done after testing individual modules, i.e. unit testing. Some different types of integration testing are big-bang, top-down, and bottom-up.

Big bang

One type of big-bang integration testing is called usage model testing.
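To make this concrete, here is a minimal sketch of an integration test in which two hypothetical units, a parser and an aggregator, are exercised together through their interfaces; both functions are invented for this example.

```python
import unittest

# Two hypothetical units, each assumed to have passed unit testing on its own.
def parse_order(line: str) -> dict:
    sku, qty = line.split(",")
    return {"sku": sku.strip(), "qty": int(qty)}

def total_quantity(orders: list) -> int:
    return sum(order["qty"] for order in orders)

class OrderPipelineIntegrationTest(unittest.TestCase):
    def test_parser_and_aggregator_interact_correctly(self):
        # The assemblage is driven only through its inputs and outputs,
        # black-box style, to check the interaction across the call boundary.
        orders = [parse_order("A1, 2"), parse_order("B2, 3")]
        self.assertEqual(total_quantity(orders), 5)

if __name__ == "__main__":
    unittest.main()
```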

Test-driven development

Test-driven development (TDD) is a software development process that relies on the repetition of a very short development cycle: requirements are turned into very specific test cases, then the software is improved just enough to pass the new tests. This is opposed to software development that allows software to be added that is not proven to meet requirements. American software engineer Kent Beck, who is credited with having developed or "rediscovered"[1] the technique, stated in 2003 that TDD encourages simple designs and inspires confidence.[2] Test-driven development is related to the test-first programming concepts of extreme programming, begun in 1999,[3] but has more recently created more general interest in its own right.[4] Programmers also apply the concept to improving and debugging legacy code developed with older techniques.[5]

Test-driven development cycle

A graphical representation of the test-driven development lifecycle shows the following steps:
1. Add a test.
2. Run all tests and see if the new test fails.
3. Write the code.
4. Run the tests.
5. Refactor the code.
Repeat.
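The following is a minimal sketch of one pass through that cycle in Python; the add function is a hypothetical example chosen only to show the order of the steps.

```python
import unittest

# Steps 1-2: the test is written first; run before add() exists, it fails
# (here with a NameError), which corresponds to the "red" phase.
class AddTest(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# Steps 3-4: write just enough code to make the test pass ("green"),
# then rerun the whole suite.
def add(a, b):
    return a + b

# Step 5: refactor while keeping the suite passing, then repeat with the
# next requirement expressed as a new test.
if __name__ == "__main__":
    unittest.main()
```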

Mock object

In object-oriented programming, mock objects are simulated objects that mimic the behavior of real objects in controlled ways. A programmer typically creates a mock object to test the behavior of some other object, in much the same way that a car designer uses a crash test dummy to simulate the dynamic behavior of a human in vehicle impacts.

Reasons for use

In a unit test, mock objects can simulate the behavior of complex, real objects and are therefore useful when a real object is impractical or impossible to incorporate into a unit test, as in the sketch below. A mock is appropriate if the real object:
- supplies non-deterministic results (e.g., the current time or the current temperature);
- has states that are difficult to create or reproduce (e.g., a network error);
- is slow (e.g., a complete database, which would have to be initialized before the test);
- does not yet exist or may change behavior;
- would have to include information and methods exclusively for testing purposes (and not for its actual task).
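Here is a minimal sketch using Python's unittest.mock; the greeting function and the clock dependency are invented for this example, standing in for a real, non-deterministic object.

```python
import unittest
from unittest.mock import Mock

# Hypothetical client code that depends on a non-deterministic clock.
def greeting(clock) -> str:
    return "Good morning" if clock.hour() < 12 else "Good afternoon"

class GreetingTest(unittest.TestCase):
    def test_morning_greeting(self):
        clock = Mock()
        clock.hour.return_value = 9  # the mock returns a controlled, fixed value
        self.assertEqual(greeting(clock), "Good morning")
        clock.hour.assert_called_once()  # the interaction itself can be verified

if __name__ == "__main__":
    unittest.main()
```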

System testing

System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black-box testing, and as such, should require no knowledge of the inner design of the code or logic.[1] As a rule, system testing takes, as its input, all of the "integrated" software components that have passed integration testing, as well as the software system itself integrated with any applicable hardware system(s). The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing is a more limited type of testing; it seeks to detect defects both within the "inter-assemblages" and also within the system as a whole.
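As a sketch of the black-box character of system testing, the test below runs the fully integrated program the way a user would and checks only its observable behavior; "myapp" is a hypothetical command-line program, not something defined above.

```python
import subprocess
import unittest

class SystemSmokeTest(unittest.TestCase):
    def test_version_flag(self):
        # The whole system is exercised from the outside; no knowledge of
        # its internal design is used, only its documented interface.
        result = subprocess.run(
            ["myapp", "--version"], capture_output=True, text=True
        )
        self.assertEqual(result.returncode, 0)
        self.assertIn("myapp", result.stdout)

if __name__ == "__main__":
    unittest.main()
```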

Acceptance testing

In systems engineering, acceptance testing may involve black-box testing performed on a system (for example: a piece of software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery.[1] Software developers often distinguish acceptance testing by the system provider from acceptance testing by the customer (the user or client) prior to accepting transfer of ownership. In the case of software, acceptance testing performed by the customer is known as user acceptance testing (UAT), end-user testing, site (acceptance) testing, or field (acceptance) testing.

Overview

Testing generally involves running a suite of tests on the completed system. Acceptance tests/criteria (in agile software development) are usually created by business customers and expressed in a business domain language. Acceptance test cards are ideally created during sprint planning or iteration planning meetings, before development begins, so that the developers have a clear idea of what to develop.
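The sketch below shows an automated acceptance test phrased around a business-level criterion ("a registered user can log in"); FakeApp and Session are hypothetical stand-ins for the completed system, invented for this example.

```python
import unittest

class Session:
    def __init__(self, authenticated: bool):
        self.authenticated = authenticated

class FakeApp:
    """Hypothetical stand-in for the completed system under acceptance."""
    def __init__(self):
        self.users = {}
    def register(self, name, password):
        self.users[name] = password
    def login(self, name, password):
        return Session(self.users.get(name) == password)

class LoginAcceptanceTest(unittest.TestCase):
    def test_registered_user_can_log_in(self):
        # Given a registered user
        app = FakeApp()
        app.register("alice", "s3cret")
        # When she logs in with her credentials
        session = app.login("alice", "s3cret")
        # Then she is granted an authenticated session
        self.assertTrue(session.authenticated)

if __name__ == "__main__":
    unittest.main()
```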

Behavior-driven development

In software engineering, behavior-driven development (abbreviated BDD) is a software development process based on test-driven development (TDD).[1][2] Behavior-driven development combines the general techniques and principles of TDD with ideas from domain-driven design and object-oriented analysis and design to provide software developers and business analysts with shared tools and a shared process to collaborate on software development,[1][3] with the aim of delivering "software that matters".[4] Although BDD is principally an idea about how software development should be managed by both business interests and technical insight, the practice of BDD does assume the use of specialized software tools to support the development process.[2] Although these tools are often developed specifically for use in BDD projects, they can be seen as specialized forms of the tooling that supports test-driven development.
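Real BDD tools such as behave or pytest-bdd wire natural-language scenarios to executable step functions; the simplified registry below only sketches that mapping and is not any tool's actual API.

```python
steps = {}

def step(phrase):
    # Register a step function under the natural-language phrase it implements.
    def register(fn):
        steps[phrase] = fn
        return fn
    return register

@step("a stack with one item")
def given_a_stack(ctx):
    ctx["stack"] = ["item"]

@step("the item is popped")
def when_popped(ctx):
    ctx["popped"] = ctx["stack"].pop()

@step("the stack is empty")
def then_empty(ctx):
    assert ctx["stack"] == []

# The scenario itself reads as a business-language specification of behavior.
scenario = ["a stack with one item", "the item is popped", "the stack is empty"]
ctx = {}
for phrase in scenario:
    steps[phrase](ctx)
print("scenario passed")
```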

Dependency injection

There are three common forms of dependency injection: setter-, interface-, and constructor-based injection, where the responsibility for injecting the dependency lies with the client, the service, or the constructor, respectively.

Overview

Implementation of dependency injection is often identical to that of the strategy pattern, but while the strategy pattern is intended for dependencies to be interchangeable throughout an object's lifetime, in dependency injection only a single instance of a dependency is used.[citation needed] Application frameworks such as Spring, Guice, Glassfish HK2, and the Microsoft Managed Extensibility Framework (MEF) support dependency injection. Without dependency injection, a client explicitly creates the service object it depends on; dependency injection is an alternative technique in which the member variable is instead initialized with a dependency supplied from outside the client, as in the constructor-injection sketch below.
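Here is a minimal sketch of constructor-based injection; Clock, FixedClock, and Greeter are made-up names for this example.

```python
import datetime

class Clock:
    def now(self) -> datetime.datetime:
        return datetime.datetime.now()

class FixedClock:
    """Test double that can be injected in place of the real Clock."""
    def __init__(self, fixed: datetime.datetime):
        self._fixed = fixed
    def now(self) -> datetime.datetime:
        return self._fixed

class Greeter:
    # The dependency arrives through the constructor rather than being
    # created internally with Clock(); the caller decides which
    # implementation is used.
    def __init__(self, clock):
        self._clock = clock

    def greet(self) -> str:
        return "Good morning" if self._clock.now().hour < 12 else "Good afternoon"

print(Greeter(Clock()).greet())  # production wiring
print(Greeter(FixedClock(datetime.datetime(2024, 1, 1, 9))).greet())  # test wiring
```

Because the client never names a concrete implementation, a deterministic FixedClock can be substituted in tests, which is also why dependency injection pairs naturally with the mock objects described above.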

Stress testing

Stress testing (sometimes called torture testing) is a form of deliberately intense or thorough testing used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. Reasons can include:
- to determine breaking points or safe usage limits
- to confirm intended specifications are being met
- to determine modes of failure (how exactly a system fails)
- to test stable operation of a part or system outside standard usage
Reliability engineers often test items under expected stress or even under accelerated stress in order to determine the operating life of the item or to determine modes of failure.[1]

Computing

Hardware

Stress testing, in general, should put computer hardware under exaggerated levels of stress in order to ensure stability when used in a normal environment. Hardware stress testing and stability are subjective and may vary according to how the system will be used.
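As a sketch of a simple software-generated load, the script below runs one busy-loop worker per CPU core for a fixed duration. Dedicated burn-in utilities are far more thorough; this only illustrates the idea of driving hardware beyond its idle operating point.

```python
import multiprocessing
import time

def burn(seconds: float) -> None:
    # Keep one core busy with arithmetic until the deadline passes.
    end = time.time() + seconds
    x = 0
    while time.time() < end:
        x += 1

if __name__ == "__main__":
    procs = [
        multiprocessing.Process(target=burn, args=(10.0,))
        for _ in range(multiprocessing.cpu_count())
    ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print("stress run complete")
```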

Regression testing

The intent of regression testing is to ensure that a change, such as a bug fix, enhancement, or configuration change, has not introduced new faults.[1] One of the main reasons for regression testing is to determine whether a change in one part of the software affects other parts of the software.[2] Common methods of regression testing include rerunning previously completed tests and checking whether program behavior has changed and whether previously fixed faults have re-emerged. Regression testing can be performed efficiently by systematically selecting the appropriate minimum set of tests needed to adequately cover a particular change.

Background

Experience has shown that as software is fixed, the emergence of new faults and/or the re-emergence of old ones is quite common. Sometimes re-emergence occurs because a fix gets lost through poor revision control practices (or simple human error in revision control). Regression testing is an integral part of the extreme programming software development method.
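A common form of regression test pins a previously fixed fault so it cannot silently re-emerge; in the sketch below, normalize_name and the bug it once had are hypothetical.

```python
import unittest

def normalize_name(name: str) -> str:
    # Earlier (hypothetical) versions crashed on empty input; the guard
    # below is the fix that the regression test protects.
    if not name:
        return ""
    return name.strip().title()

class NormalizeNameRegressionTest(unittest.TestCase):
    def test_empty_input_does_not_crash(self):
        # Re-checks the previously fixed fault on every run of the suite.
        self.assertEqual(normalize_name(""), "")

    def test_existing_behavior_unchanged(self):
        self.assertEqual(normalize_name("  ada lovelace "), "Ada Lovelace")

if __name__ == "__main__":
    unittest.main()
```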

FitNesse

FitNesse is a wiki-based framework for specifying and running acceptance tests, in which test cases are written as wiki tables and executed against the system under test.
