Wednesday, December 9, 2020

Difference between White box and Black box testing


  • Definition
    • Black Box Testing: the software testing method in which the internal structure/design/implementation of the item being tested is NOT known to the tester.
    • White Box Testing: the software testing method in which the internal structure/design/implementation of the item being tested is known to the tester.
  • Levels Applicable To
    • Black Box Testing: mainly applicable to higher levels of testing: Acceptance Testing, System Testing.
    • White Box Testing: mainly applicable to lower levels of testing: Unit Testing, Integration Testing.
  • Responsibility
    • Black Box Testing: generally, independent Software Testers.
    • White Box Testing: generally, Software Developers.
  • Programming Knowledge
    • Black Box Testing: not required.
    • White Box Testing: required.
  • Implementation Knowledge
    • Black Box Testing: not required.
    • White Box Testing: required.
  • Basis for Test Cases
    • Black Box Testing: Requirement Specifications.
    • White Box Testing: Detail Design.

Thursday, December 3, 2020

Black Box Testing


  • Black-box testing, also called behavioral testing, focuses on the functional requirements of the software.
  • It is the functional testing of a component or of a system.
  • It examines behavior through inputs and the corresponding outputs.
  • It checks that input is properly accepted and that output is correctly produced.
  • Black-box testing attempts to find errors in the following categories:
    • (1) Incorrect or missing functions
    • (2) Interface errors
    • (3) Errors in data structures or external database access
    • (4) Behavior or performance errors
    • (5) Initialization and termination errors
  • Black box testing is used during the later stages of testing after white box testing has been performed.
  • Different Black box testing techniques
    • (1) Graph Based Testing Methods
    • (2) Equivalence Partitioning
    • (3) Boundary Value Analysis
    • (4) Orthogonal Array Testing

Black box testing techniques - (1) Graph-Based Testing Methods

  • The first step in black-box testing is to understand the objects that are modeled in software and the relationships that connect these objects.
  • Once this has been accomplished, the next step is to define a series of tests that verify “all objects have the expected relationship to one another”.
  • Graph Representation
    • A collection of nodes that represent objects,
    • Links that represent the relationships between objects,
    • Node weights that describe the properties of a node (e.g., a specific data value or state behavior),
    • Link weights that describe some characteristic of a link.
    • The symbolic representation of a graph is shown in Figure.
    • Nodes are represented as circles connected by links that take a number of different forms.
    • A directed link (represented by an arrow) indicates that a relationship moves in only one direction.
    • A bidirectional link, also called a symmetric link, implies that the relationship applies in both directions.
    • Parallel links are used when a number of different relationships are established between graph nodes.  
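As a small illustration (not part of the original notes), the object graph can be captured as an adjacency structure and the tests simply assert that every expected relationship holds. The word-processor nodes and the graph_link name below are hypothetical:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical object graph for a word processor: node 0 = "new file" menu
   selection, node 1 = document window, node 2 = document text.
   graph_link[i][j] == true means a directed relationship from node i to node j. */
enum { NODES = 3 };
static const bool graph_link[NODES][NODES] = {
    /*           0      1      2   */
    /* 0 */ { false, true,  false },  /* menu selection generates a window  */
    /* 1 */ { false, false, true  },  /* window contains the document text  */
    /* 2 */ { false, false, false },
};

int main(void) {
    /* Graph-based tests: verify that each expected relationship exists
       and that no unexpected one does. */
    assert(graph_link[0][1]);   /* selecting "new file" must create a window     */
    assert(graph_link[1][2]);   /* the window must contain the document text     */
    assert(!graph_link[2][0]);  /* no relationship is expected in this direction */
    return 0;
}
```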

Black box testing techniques - (2) Equivalence Partitioning Method

  • Equivalence partitioning is a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived.
  • An equivalence class represents a set of valid or invalid states for input conditions.
  • Equivalence classes guidelines
  • If an input condition specifies a range,
    • one valid and two invalid equivalence classes are defined
    • Input range: 1 – 10 Eq classes: {1..10}, {x < 1}, {x > 10}
  • If an input condition requires a specific value,
    • one valid and two invalid equivalence classes are defined
    • Input value: 250 Eq classes: {250}, {x < 250}, {x > 250}
  • If an input condition specifies a member of a set,
    • one valid and one invalid equivalence class are defined
    • Input set: {-2.5, 7.3, 8.4} Eq classes: {-2.5, 7.3, 8.4}, {any other x}
  • If an input condition is a Boolean value,
    • one valid and one invalid class are defined
    • Input: {true condition} Eq classes: {true condition}, {false condition} 
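A minimal sketch of how the range guideline above drives test selection, assuming a hypothetical accept_value function that is valid only for inputs 1 to 10; one representative input is chosen from each equivalence class:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical function under test: accepts only values in the range 1..10. */
static bool accept_value(int x) {
    return x >= 1 && x <= 10;
}

int main(void) {
    /* One representative per equivalence class is enough: */
    assert(accept_value(5)  == true);    /* valid class   {1..10}  */
    assert(accept_value(0)  == false);   /* invalid class {x < 1}  */
    assert(accept_value(11) == false);   /* invalid class {x > 10} */
    return 0;
}
```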

Black box testing techniques - (3) Boundary Value Analysis Technique

  • A greater number of errors occurs at the boundaries of the input domain.
  • It is for this reason that boundary value analysis (BVA) has been developed as a testing technique
  • Test both sides of each boundary
  • Look at output boundaries for test cases
  • Test min, min-1, max, max+1, typical values
  • Example : 1 <= x <=100
    • Valid : 1, 2, 99, 100
    • Invalid : 0 and 101
  • Guidelines for BVA
    • If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b and just above and just below a and b.
    • If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers. Values just above and below minimum and maximum are also tested.
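A comparable sketch for boundary value analysis of the range 1 <= x <= 100, again with a hypothetical in_range function; the chosen inputs sit exactly on and just outside each boundary:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical function under test: valid only for 1 <= x <= 100. */
static bool in_range(int x) {
    return x >= 1 && x <= 100;
}

int main(void) {
    /* Lower boundary: min-1, min, min+1 */
    assert(in_range(0)   == false);
    assert(in_range(1)   == true);
    assert(in_range(2)   == true);
    /* Upper boundary: max-1, max, max+1 */
    assert(in_range(99)  == true);
    assert(in_range(100) == true);
    assert(in_range(101) == false);
    return 0;
}
```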

Black box testing techniques - (4) Orthogonal Array Testing

  • Orthogonal array testing can be applied to problems in which the input domain is relatively small but too large to accommodate complete testing.
  • The orthogonal array testing method is particularly useful in finding region faults — an error category associated with faulty logic within a software component.
  • Consider a system that has three input items, X, Y, and Z. Each of these input items has three discrete values associated with it. There are 3^3 = 27 possible test cases.
  • Phadke suggests a geometric view of the possible test cases associated with X, Y, and Z illustrated in Figure…
  • It is used when the number of input parameters is small and the values that each of the parameters may take are clearly bounded.

Orthogonal Array Testing
  • To illustrate the use of the L9 orthogonal array, consider the send function for a fax application.
  • Four parameters, P1, P2, P3, and P4, are passed to the send function. For example : Function (p1,p2,p3,p4)
  • Each parameter takes on three discrete values. For example, P1 takes on the values:
    • P1 = 1, send it now
    • P1 = 2, send it one hour later
    • P1 = 3, send it after midnight
  • P2, P3, and P4 would also take on values of 1, 2, and 3, signifying other send functions.
  • If a “one input item at a time” testing strategy were chosen, the following sequence of tests (P1, P2, P3, P4) would be specified: (1, 1, 1, 1), (2, 1, 1, 1), (3, 1, 1, 1), (1, 2, 1, 1), (1, 3, 1, 1), (1, 1, 2, 1), (1, 1, 3, 1), (1, 1, 1, 2), and (1, 1, 1, 3).
  • The L9 orthogonal array testing approach enables you to provide good test coverage with far fewer test cases than the exhaustive strategy.
Orthogonal Array Testing Table
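The table itself is not reproduced here. For reference, a standard L9(3^4) orthogonal array assigns the four fax parameters as follows (any two columns contain every pair of values exactly once); the send name is hypothetical:

```c
#include <stdio.h>

/* A standard L9(3^4) orthogonal array: 9 test cases for four parameters
   (P1..P4), each taking the values 1, 2, or 3.  Any two columns contain
   every (value, value) pair exactly once. */
static const int L9[9][4] = {
    {1, 1, 1, 1},
    {1, 2, 2, 2},
    {1, 3, 3, 3},
    {2, 1, 2, 3},
    {2, 2, 3, 1},
    {2, 3, 1, 2},
    {3, 1, 3, 2},
    {3, 2, 1, 3},
    {3, 3, 2, 1},
};

int main(void) {
    /* Each row is one invocation of the hypothetical send(P1, P2, P3, P4). */
    for (int i = 0; i < 9; i++)
        printf("Test %d: send(%d, %d, %d, %d)\n", i + 1,
               L9[i][0], L9[i][1], L9[i][2], L9[i][3]);
    return 0;
}
```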

Black Box Testing - Advantages & Disadvantages

  • Advantages
    • Find missing functionality
    • Independent from code size and functionality.
  • Disadvantages
    • No systematic search for low level errors.
    • Specification errors not found.

Monday, November 23, 2020

White Box Testing


  • White-box testing is sometimes called glass-box testing.
  • It is a test-case design philosophy that uses the control structure described as part of the component-level design to derive test cases.
  • Using white-box testing methods, you can derive test cases that
    • (1) Guarantee that all independent paths within a module have been exercised at least once,
    • (2) Exercise all logical decisions on their true and false sides,
    • (3) Execute all loops at their boundaries and within their operational bounds,
    • (4) Exercise internal data structures to ensure their validity.
  • White-box testing is a verification technique software engineers can use to examine if their code works as expected.
  • White box testing is a strategy in which testing is based on:
    • the internal paths,
    • structure, and
    • implementation of the software under test (SUT).
  • White-box testing is also known as structural testing, clear box testing, and glass box testing. 
  • Generally requires detailed programming skills.

White Box Testing Techniques

  • Basis path testing
    • Flow graph notation
    • Cyclomatic complexity
    • Derived test cases
    • Graph metrics
  • Control structure testing
    • Condition testing
    • Data Flow testing
    • Loop testing 

  • Basis path testing
    • Basis path testing is a white-box testing technique
    • The basis path method enables the test-case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths
(1) Flow Graph Notation
    • A flow graph (or program graph) is a simple notation for the representation of control flow.
  • The flow graph describes logical control flow using the notation illustrated in Figure.

(2) Independent Program Path
    • In terms of a flow graph, an independent path must move along at least one edge that has not been traversed before the path is defined.
    • For example, a set of independent paths for the flow graph illustrated in Figure as…
    • Path 1: 1-11
    • Path 2: 1-2-3-4-5-10-1-11
    • Path 3: 1-2-3-6-8-9-10-1-11
    • Path 4: 1-2-3-6-7-9-10-1-11
  • How do you know how many paths to look for? The computation of cyclomatic complexity provides the answer.
  • Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program.
  • When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines:
    • The number of independent paths in the basis set of a program, and
    • An upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once.
  • How Is Cyclomatic Complexity Computed?
  • (1) The number of regions of the flow graph corresponds to the cyclomatic complexity.
    • The flow graph has 4 regions
  • (2) V(G) = E – N + 2
    • E : Number of flow graph edges 
    • N : Number of flow graph nodes
    • V(G) = 11 edges – 9 nodes + 2 = 4
  • (3) V(G) = P + 1
    • P : Number of predicate nodes (nodes with more than one outcome)
    • V(G) = 3 predicate nodes + 1 = 4
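As a quick cross-check of the three computations above (9 nodes, 11 edges, 3 predicate nodes, 4 regions), a trivial sketch:

```c
#include <assert.h>

int main(void) {
    int edges = 11, nodes = 9, predicates = 3, regions = 4;

    int v_by_edges      = edges - nodes + 2;   /* V(G) = E - N + 2 */
    int v_by_predicates = predicates + 1;      /* V(G) = P + 1     */

    assert(v_by_edges == 4);
    assert(v_by_predicates == 4);
    assert(regions == 4);                      /* V(G) = number of regions */
    return 0;
}
```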
(3) Deriving Test Cases
  • The basis path testing method can be applied to a procedural design or to source code
  • The following steps can be applied to derive the basis set / Test cases …
    • Using the design or code as a foundation, draw a corresponding flow graph.
    • Determine the cyclomatic complexity of the resultant flow graph.
    • Determine a basis set of linearly independent paths
    • Prepare test cases that will force the execution of each path in the basis set.
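A sketch of these steps applied to a deliberately tiny, hypothetical function; its flow graph is described in the comments, V(G) = 3, and one test case is prepared per basis path:

```c
#include <assert.h>

/* Hypothetical function under test.
   Flow graph: node 1 = "x < 0?", node 2 = "x = -x", node 3 = "x > limit?",
   node 4 = "x = limit", node 5 = return.
   Edges = 6, Nodes = 5, Predicate nodes = 2, so V(G) = 6 - 5 + 2 = 3:
   the basis set contains three linearly independent paths. */
static int clamp_abs(int x, int limit) {
    if (x < 0)
        x = -x;          /* node 2 */
    if (x > limit)
        x = limit;       /* node 4 */
    return x;            /* node 5 */
}

int main(void) {
    /* One test case per basis path: */
    assert(clamp_abs( 3, 10) ==  3);   /* path 1-3-5   */
    assert(clamp_abs(-3, 10) ==  3);   /* path 1-2-3-5 */
    assert(clamp_abs(15, 10) == 10);   /* path 1-3-4-5 */
    return 0;
}
```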
(4) Graph Matrices
  • A data structure, called a graph matrix, can be quite useful for developing a software tool that assists in basis path testing.
  • Graph matrix is a square matrix whose size (i.e., number of rows and columns) is equal to the number of nodes on the flow graph.
  • Each row and column corresponds to an identified node, and matrix entries correspond to connections (an edge) between nodes.
  • A simple example of a flow graph and its corresponding graph matrix [Bei90] is shown in Figure…
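Since the figure is not reproduced here, a sketch of a graph matrix for the small flow graph of the previous example: a 1 marks an edge from the row node to the column node, a row with two or more entries marks a predicate node, and the matrix yields the same V(G):

```c
#include <stdio.h>

enum { N = 5 };

/* Connection matrix for the small flow graph above (nodes 1..5):
   matrix[i][j] = 1 if there is an edge from node i+1 to node j+1. */
static const int matrix[N][N] = {
    /*        1  2  3  4  5 */
    /* 1 */ { 0, 1, 1, 0, 0 },
    /* 2 */ { 0, 0, 1, 0, 0 },
    /* 3 */ { 0, 0, 0, 1, 1 },
    /* 4 */ { 0, 0, 0, 0, 1 },
    /* 5 */ { 0, 0, 0, 0, 0 },
};

int main(void) {
    int edges = 0, predicates = 0;
    for (int i = 0; i < N; i++) {
        int row = 0;
        for (int j = 0; j < N; j++)
            row += matrix[i][j];
        edges += row;
        if (row > 1)          /* a row with two or more connections */
            predicates++;     /* identifies a predicate node        */
    }
    printf("E = %d, N = %d, V(G) = %d = P + 1 = %d\n",
           edges, N, edges - N + 2, predicates + 1);
    return 0;
}
```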

Control Structure Testing Techniques

(1) Condition Testing
  • Condition testing [Tai89] is a test-case design method that exercises the logical conditions contained in a program module. 
  • A simple condition is a Boolean variable or a relational expression.
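As a sketch, condition testing of a compound condition such as (a > 0 && b > 0) selects inputs so that each simple condition is exercised as both true and false, not only the overall branch; the both_positive function is hypothetical:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical module containing the compound condition under test. */
static bool both_positive(int a, int b) {
    return a > 0 && b > 0;
}

int main(void) {
    /* Each simple condition takes both of its outcomes: */
    assert(both_positive( 1,  1) == true);   /* a>0 true,  b>0 true  */
    assert(both_positive(-1,  1) == false);  /* a>0 false, b>0 true  */
    assert(both_positive( 1, -1) == false);  /* a>0 true,  b>0 false */
    return 0;
}
```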
(2) Control flow testing


(3) Data Flow Testing
  • Data flow testing is a powerful tool to detect improper use of data values due to coding errors.
  • For example, in main() { int x; if (x == 42) { ... } } the variable x is used in the condition before it has been assigned a value.
  • Variables that contain data values have a defined life cycle.
  • They are created, they are used, and they are killed (destroyed) when they go out of scope.
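A sketch of that life cycle in terms of define-use pairs, which data flow testing aims to cover (hypothetical example):

```c
#include <stdio.h>

int main(void) {
    int x;                  /* x is created (declared) here                */
    x = 42;                 /* def(x): x is defined (given a value)        */
    if (x > 0)              /* use(x): this def-use pair should be covered */
        printf("%d\n", x);  /* another use of the same definition          */
    return 0;               /* x is killed when it goes out of scope       */
}
```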


(4) Loop Testing
  • Loop testing is a white-box testing technique that focuses exclusively on the validity of loop constructs.
  • Four different classes of loops [Bei90] can be defined:
    • Simple loops,
    • Concatenated loops,
    • Nested loops,
    • Unstructured loops
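For a simple loop with at most n passes, the usual test set skips the loop, makes one pass, two passes, a typical m passes, and n-1, n, and n+1 passes. A minimal sketch with a hypothetical sum_first function whose loop runs at most n = 5 times:

```c
#include <assert.h>

enum { MAX_PASSES = 5 };   /* n: maximum number of allowable loop passes */

/* Hypothetical function under test: sums the first `count` integers,
   but never loops more than MAX_PASSES times. */
static int sum_first(int count) {
    int sum = 0;
    for (int i = 1; i <= count && i <= MAX_PASSES; i++)
        sum += i;
    return sum;
}

int main(void) {
    /* Simple-loop test set: 0, 1, 2, typical m, n-1, n, n+1 passes. */
    assert(sum_first(0) == 0);    /* skip the loop entirely */
    assert(sum_first(1) == 1);    /* one pass               */
    assert(sum_first(2) == 3);    /* two passes             */
    assert(sum_first(3) == 6);    /* m passes (typical)     */
    assert(sum_first(4) == 10);   /* n - 1 passes           */
    assert(sum_first(5) == 15);   /* n passes               */
    assert(sum_first(6) == 15);   /* attempt n + 1 passes   */
    return 0;
}
```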


White Box Testing Advantages & Disadvantages

  • Advantages
    • The tester can be sure that every path through the software under test has been identified and tested.
    • Find errors on code level.
    • Typically based on a very systematic approach, covering the complete internal module structure
  • Disadvantages
    • (1) The number of execution paths may be so large that they cannot all be tested.
    • (2) The test cases chosen may not detect data sensitivity errors.
      • For example, p=q/r; may execute correctly except when r=0.
    • (3) White box testing assumes the control flow is correct (or very close to correct). Since the tests are based on the existing paths, nonexistent paths cannot be discovered through white box testing.
    • (4) The tester must have the programming skills to understand and evaluate the software under test.

Wednesday, November 18, 2020

Software Testing Fundamentals


  • The goal of testing is to find errors, and a good test is one that has a high probability of finding an error.
  • Therefore, you should design and implement a computer-based system or a product with “testability” in mind.
  • Test Characteristics :
    • A good test has a high probability of finding an error.
    • A good test is not redundant. Testing time and resources are limited. There is no point in conducting a test that has the same purpose as another test. 
    • A good test should be “best of breed”: In a group of tests that have a similar intent, time and resource limitations may mitigate (moderate) toward the execution of only a subset of these tests. 
    • A good test should be neither too simple nor too complex

Internal & External View of Testing

  • External View : (Black box Testing)
    • Knowing the specified function that a product has been designed to perform, test to see if that function is fully operational and error-free.
    • Includes tests that are conducted at the software interface
    • Not concerned with the internal logical structure of the software
  • Internal View (White Box testing)
    • Knowing the internal workings of a product, test that all internal operations are performed according to specifications and all internal components have been exercised
    • Involves tests that concentrate on close examination of procedural detail
    • Logical paths through the software are tested
    • Test cases exercise specific sets of conditions and loops

White-Box Testing



Debugging


  • Debugging occurs as a consequence of successful testing.
  • When a test case uncovers an error, debugging is the process that results in the removal of the error.
  • Although debugging can and should be an orderly process, it is still very much an art.

Debugging: A Diagnostic Process


The Debugging Process






Tuesday, November 17, 2020

System Testing & its types


  • System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system.
  • Although each test has a different purpose, all work to verify that system elements have been properly integrated and perform allocated functions.
  • Types of system tests :
    • Recovery Testing
    • Security Testing
    • Stress Testing
    • Performance Testing
    • Deployment Testing
High Order Testing
  • Validation testing
    • Focus is on software requirements
  • System testing
    • Focus is on system integration
  • Alpha/Beta testing 
    • Focus is on customer usage
  • Recovery testing
    • forces the software to fail in a variety of ways and verifies that recovery is properly performed
  • Security testing
    • verifies that protection mechanisms built into a system will, in fact, protect it from improper penetration
  • Stress testing
    • executes a system in a manner that demands resources in abnormal quantity, frequency, or volume
  • Performance Testing
    • test the run-time performance of software within the context of an integrated system

In Brief:
  • Recovery Testing
    • Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed.
    • If recovery is automatic (performed by the system itself), reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for correctness.
    • If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to determine whether it is within acceptable limits.
  • Security Testing
    • Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration.
  • Stress Testing
    • It executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.
    • Stress tests are designed to face programs with abnormal situations. In essence, the tester who performs stress testing asks: "How high can we crank this up before it fails?"
    • A variation of stress testing is a technique called sensitivity testing.
  • Performance Testing
    • Test the run-time performance of software within the context of an integrated system.
    • Performance tests are often coupled with stress testing.
    • Performance testing occurs throughout all steps in the testing process. Even at the unit level, the performance of an individual module may be assessed as tests are conducted
  • Deployment Testing
    • In many cases, software must execute on a variety of platforms and under more than one operating system environment.
    • Deployment testing, sometimes called configuration testing, exercises the software in each environment in which it is to operate.
    • In addition, deployment testing examines all installation procedures and specialized installation software (e.g., “installers”) that will be used by customers, and all documentation that will be used to introduce the software to end users.

Saturday, May 9, 2020

Validation Testing


  • Validation testing begins at the conclusion of integration testing, when individual components have been exercised, the software is completely assembled as a package, and interfacing errors have been uncovered and corrected.
  • At the validation or system level, the distinction between conventional software, object-oriented software, and WebApps disappears.
  • Testing focuses on user-visible actions and user-recognizable output from the system.
  • In simple meaning, validation succeeds when software functions in a manner that can be reasonably expected by the customer.
  • Validation-Test Criteria
  • Software validation is achieved through a series of tests that demonstrate conformity with requirements.
  • A test plan outlines the classes of tests to be conducted, and a test procedure defines specific test cases that are designed to ensure that:
    • All functional requirements are satisfied,
    • All behavioral characteristics are achieved,
    • All content is accurate and properly presented,
    • All performance requirements are attained,
    • Documentation is correct, and usability and other requirements are met (e.g., transportability, compatibility, error recovery, maintainability).
  • After each validation test case has been conducted, one of two possible conditions exists:
    • The function or performance characteristic conforms to specification and is accepted.
    • A deviation (Error) from the specification is uncovered and a deficiency list is created.
  • Configuration Review :
    • An important element of the validation process is a configuration review.
    • The objective of the review is to ensure that all elements of the software configuration have been properly developed.
    • The configuration review, sometimes called an audit…
  • Acceptance Testing :
  • When custom software is built for one customer, a series of acceptance tests are conducted to enable the customer to validate all requirements.
  • Conducted by the end user rather than software engineers, an acceptance test can range from an informal “test drive” to a planned and systematically executed series of tests.
  • In fact, acceptance testing can be conducted over a period of weeks or months.
  • Alpha and Beta Testing :
  • If software is developed as a product to be used by many customers, it is impractical to perform formal acceptance tests with each one. 
  • Most software product builders use a process called alpha and beta testing to uncover errors that only the end user seems able to find.
  • The alpha test is conducted at the developer’s site by a representative group of end users.
  • The software is used in a natural setting with the developer “looking over the shoulder” of the users and recording errors and usage problems.
  • Alpha tests are conducted in a controlled environment.
  • The beta test is conducted at one or more end-user sites.
  • Unlike alpha testing, the developer generally is not present. Therefore, the beta test is a “live” application of the software in an environment that cannot be controlled by the developer.
  • The customer records all problems (real or imagined) that are encountered during beta testing and reports these to the developer at regular intervals.
  • As a result of problems reported during beta tests, you make modifications and then prepare for release of the software product to the entire customer base.


Alpha vs Beta Testing, Difference Between Alpha and Beta Testing





Monday, May 4, 2020

Test Strategies for WebApps



  • The strategy for WebApp testing adopts the basic principles for all software testing and applies a strategy and tactics similar to those used for object-oriented systems.
  • The following steps summarize the approach :
    • The content model for the WebApp is reviewed to uncover errors.
    • The interface model is reviewed to ensure that all use cases can be accommodated.
    • The design model for the WebApp is reviewed to uncover navigation errors.
    • The user interface is tested to uncover errors in presentation and/or navigation mechanics.
    • Each functional component is unit tested.
    • Navigation throughout the architecture is tested.
    • The WebApp is implemented in a variety of different environmental configurations and is tested for compatibility with each configuration.
    • Security tests are conducted.
    • Performance tests are conducted.
    • The WebApp is tested by a controlled and monitored population of end users.

Friday, May 1, 2020

Test strategies for Object-Oriented Software


  • Introduction: The objective of testing, stated simply, is to find the greatest possible number of errors with a manageable amount of effort applied over a realistic time span.
  • Although this fundamental objective remains unchanged for object-oriented software, the nature of object-oriented software changes both testing strategy and testing tactics (plan).

Unit Testing in the OO Context

  • When object-oriented software is considered, the concept of the unit changes. Encapsulation drives the definition of classes and objects.
  • This means that each class and each instance of a class packages attributes (data) and the operations that manipulate these data.
  • An encapsulated class is usually the focus of unit testing.
  • However, operations (methods) within the class are the smallest testable units. Because a class can contain a number of different operations, and a particular operation may exist as part of a number of different classes, the tactics applied to unit testing must change.
  • Class testing for OO software is the equivalent of unit testing for conventional software.
  • Unlike unit testing of conventional software, which tends to focus on the algorithmic detail of a module and the data that flow across the module interface,
  • Class testing for OO software is driven by the operations encapsulated by the class and the state behavior of the class.

Integration Testing in the OO Context 

  • Different strategies for integration testing of OO systems.
    • Thread-based testing
    • Use-based testing
    • Cluster testing
  • The first, thread-based testing, integrates the set of classes required to respond to one input or event for the system.
  • Each thread is integrated and tested individually. 
  • Regression testing is applied to ensure that no side effects occur.
  • The second integration approach, use-based testing, begins the construction of the system by testing those classes (called independent classes) that use very few (if any) server classes.
  • After the independent classes are tested, the next layer of classes, called dependent classes, that use the independent classes are tested.
  • This sequence of testing layers of dependent classes continues until the entire system is constructed.

Cluster testing

  • Cluster testing is one step in the integration testing of OO software. 
  • Here, a cluster of collaborating classes is exercised by designing test cases that attempt to uncover errors in the collaborations. 

Tuesday, May 29, 2018

Strategic Issues


  • The best strategy will fail if a series of overriding issues are not addressed. Tom Gilb argues that a software testing strategy will succeed when software testers:
  • Specify product requirements in a quantifiable manner long before testing commences.
    • Although the objective of testing is to find errors, a good testing strategy also assesses other quality characteristics such as portability, maintainability, and usability.
  • State testing objectives explicitly.
    • The specific objectives of testing should be stated in measurable terms.
  • Understand the users of the software and develop a profile for each user category.
  • Develop a testing plan that emphasizes "rapid cycle testing."
  • Build “robust” software that is designed to test itself.
  • Use effective technical reviews as a filter prior to testing.
  • Conduct technical reviews to assess the test strategy and test cases themselves
  • Develop a continuous improvement approach for the testing process.