ISTQB Foundation Level Exam Crash Course Part-12
This is Part 12 of 35, containing 5 questions (Q. 56 to 60) with detailed explanations, as expected in the ISTQB Foundation Level Exam under the latest syllabus updated in 2011.
A thorough study of these 175 questions will be of great help in passing the ISTQB Foundation Level Exam.
Q. 56: What is the purpose of Statement Testing and Coverage?
Statement testing is testing aimed at exercising programming statements. If we aim to test every executable statement, we call this full or 100 per cent statement coverage. If we exercise half the executable statements, this is 50 per cent statement coverage, and so on. Remember: we are only interested in executable statements, so we do not count non-executable statements at all when we are measuring statement coverage.
Why should we measure statement coverage?
It is a very basic measure of how (relatively) thorough the testing has been. After all, a suite of tests that had not exercised all of the code would not be considered complete. Achieving 100 per cent statement coverage does not actually tell us very much, and there are much more rigorous coverage measures that we can apply, but it provides a baseline from which we can move on to more useful coverage measures.
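As a rough illustration (a minimal sketch, not part of the syllabus text), statement coverage is simply the number of executable statements exercised by the tests divided by the total number of executable statements, expressed as a percentage:

# Minimal Python sketch; the numbers below are illustrative assumptions.
def statement_coverage(statements_exercised, total_executable_statements):
    # Statement coverage = exercised / total executable statements, as a percentage.
    return 100.0 * statements_exercised / total_executable_statements

print(statement_coverage(5, 10))    # 50.0 - half the executable statements exercised
print(statement_coverage(10, 10))   # 100.0 - full statement coverage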
<<<<<< =================== >>>>>>
Q. 57: What is the purpose of Decision Testing and Coverage?
Decision testing aims to ensure that the decisions in a program are adequately exercised. Decisions, as you know, are part of selection and iteration structures; we see them in IF THEN ELSE constructs and in DO WHILE or REPEAT UNTIL loops. To test a decision we need to exercise it when the associated condition is true and when the condition is false; this guarantees that both exits from the decision are exercised.
As with statement testing, decision testing has an associated coverage measure and we normally aim to achieve 100 per cent decision coverage. Decision coverage is measured by counting the number of decision outcomes exercised (each exit from a decision is known as a decision outcome) divided by the total number of decision outcomes in a given program. It is usually expressed as a percentage.
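For example, a program containing a single loop and a single IF statement has four decision outcomes (a true and a false exit for each decision); if the tests exercise only three of these outcomes, decision coverage is 3 ÷ 4 = 75 per cent.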
The usual starting point is a control flow graph, from which we can visualize all the possible decisions and their exit paths.
Let us consider the following example.
Program Check

Count, Sum, Index, New : Integer

Begin
    Index = 0
    Sum = 0
    Read (Count)
    Read (New)

    While Index <= Count
    Do
        If New < 0
        Then
            Sum = Sum + 1
        Endif
        Index = Index + 1
        Read (New)
    Enddo

    Print ("There were", Sum, "negative numbers in the input stream")

End
This program has a WHILE loop in it. There is a golden rule about WHILE loops: if the condition at the WHILE statement is true when the program reaches it for the first time, then any test case will exercise that decision in both directions, because the condition will eventually be false when the loop terminates. For example, as long as Index is less than or equal to Count when the program reaches the loop for the first time, the condition will be true and the loop will be entered. Each time the program runs through the loop it increases the value of Index by one, so eventually Index will pass the value of Count, at which stage the condition is false and the loop will not be entered again. So the decision at the start of the loop is exercised through both its true exit and its false exit by a single test case. This assumes that the logic of the loop is sound, but we are assuming that we are receiving this program from the developers, who will have debugged it.
Now all we have to do is to make sure that we exercise the If statement inside the loop through both its true and false exits. We can do this by ensuring that the input stream has both negative and positive numbers in it.
For example, a test case that sets the variable Count to 5 and then inputs the values 1, 5, −2, −3, 6 will exercise all the decisions fully and provide us with 100 per cent decision coverage. Note that this is considered to be a single test case, even though there is more than one value for the variable New, because the values are all input in a single execution of the program. This example does not provide the smallest set of inputs that would achieve 100 per cent decision coverage, but it does provide a valid example.
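To make the idea concrete, here is a minimal Python sketch (a simplified stand-in for the Check program above, not a literal translation; the function name is an assumption) showing how a single test case containing both negative and non-negative numbers exercises every decision outcome:

# Simplified illustration of the Check program's logic; names and structure are assumptions.
def count_negatives(values):
    negatives = 0
    for value in values:      # loop decision: true while values remain, false once they run out
        if value < 0:         # IF decision: true for negative values, false otherwise
            negatives += 1
    return negatives

# One execution with mixed values drives both decisions through their true and false outcomes,
# giving 100 per cent decision coverage of this function.
assert count_negatives([1, 5, -2, -3, 6]) == 2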
Although loops are a little more complicated to understand than programs without loops, they can be easier to test once you get the hang of them.
<<<<<< =================== >>>>>>
Q. 58: What do we mean by experience-based techniques?
Experience-based techniques are those that you need to deploy when there is no adequate specification from which to derive specification-based test cases or no time to run the full structured set of tests.
They use the users’ and the testers’ experience to determine the most important areas of a system and to exercise these areas in ways that are both consistent with expected use (and abuse) and likely to be the sites of errors – this is where the experience comes in. Even when specifications are available it is worth supplementing the structured tests with some that you know by experience have found defects in other similar systems.
Techniques range from the simplistic approach of ad hoc testing or error guessing through to the more sophisticated techniques such as exploratory testing, but all tap the knowledge and experience of the tester rather than systematically exploring a system against a written specification.
<<<<<< =================== >>>>>>
Q. 59: What are the different types of experience-based testing techniques?
There are mainly two techniques under this category:
1) Error Guessing: It is a very simple technique that takes advantage of a tester's skill, intuition and experience with similar applications to identify special tests that may not be easy to capture by the more formal techniques. When applied after systematic techniques, error guessing can add value by identifying and exercising test cases that target known or suspected weaknesses, or that simply address aspects of the application that have been found to be problematic in the past.
The main drawback of error guessing is its varying effectiveness, depending as it does on the experience of the tester deploying it. However, if several testers and/or users contribute to constructing a list of possible errors and tests are designed to attack each error listed, this weakness can be effectively overcome. Another way to make error guessing more structured is by the creation of defect and failure lists. These lists can use available defect and failure data (where this exists) as a starting point, but the list can be expanded by using the testers’ and users’ experience of why the application under test in particular is likely to fail. The defect and failure list can be used as the basis of a set of tests that are applied after the systematic techniques have been used. This systematic approach is known as fault attack.
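As a rough sketch of how a defect and failure list might drive such a fault attack (the list entries and the function under test below are illustrative assumptions, not from the syllabus):

# A defect/failure list used as the basis for a set of fault-attack tests.
# Both the list and parse_quantity are hypothetical examples.
defect_and_failure_list = [
    ("empty input", ""),
    ("non-numeric characters where a number is expected", "12a4"),
    ("leading and trailing whitespace", "  42  "),
    ("very long non-numeric input", "x" * 1000),
]

def parse_quantity(text):
    # Hypothetical function under test: parses a quantity typed in by a user.
    return int(text.strip())

for description, test_input in defect_and_failure_list:
    try:
        result = parse_quantity(test_input)
        print(f"{description}: accepted, parsed as {result}")
    except ValueError:
        print(f"{description}: rejected with ValueError")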
2) Exploratory Testing: It is a technique that combines the experience of testers with a structured approach to testing where specifications are either missing or inadequate and where there is severe time pressure. It exploits concurrent test design, test execution, test logging and learning within time-boxes and is structured around a test charter containing test objectives. In this way exploratory testing maximizes the amount of testing that can be achieved within a limited time frame, using test objectives to maintain focus on the most important areas.
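Purely as an illustration (the fields and values below are assumptions, not a prescribed format), a time-boxed test charter could be recorded as simply as this:

from dataclasses import dataclass, field

@dataclass
class TestCharter:
    objective: str                  # what this exploratory session sets out to learn
    areas_to_explore: list          # the parts of the system in scope
    time_box_minutes: int           # length of the time-box
    session_notes: list = field(default_factory=list)   # concurrent test logging

charter = TestCharter(
    objective="Explore discount calculation with boundary and invalid inputs",
    areas_to_explore=["checkout", "pricing rules"],
    time_box_minutes=60,
)
charter.session_notes.append("Negative quantity accepted at checkout; needs follow-up")
print(charter)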
<<<<<< =================== >>>>>>
Q. 60: How do we decide which is the best experience-based testing technique?
The following are some simple rules of thumb:
1) Always make functional testing the first priority. It may be necessary to test early code products using structural techniques, but we only really learn about the quality of software when we can see what it does.
2) When basic functional testing is complete, that is a good time to think about test coverage. Have you exercised all the functions, all the requirements, all the code? Coverage measures defined at the beginning as exit criteria can now come into play. Where coverage is inadequate, extra tests will be needed.
3) Use structural methods to supplement functional methods where possible. Even if functional coverage is adequate, it will usually be worth checking statement and decision coverage to ensure that enough of the code has been exercised during testing (see the sketch after this list).
4) Once systematic testing is complete there is an opportunity to use experience-based techniques to ensure that all the most important and most error-prone areas of the software have been exercised. In some circumstances, such as poor specifications or time pressure, experience-based testing may be the only viable option.
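As an example of the coverage check mentioned in rule 3, here is a minimal sketch assuming the third-party coverage.py package is installed; the tiny function being measured is an illustrative assumption:

import coverage

cov = coverage.Coverage(branch=True)     # branch=True records decision (branch) outcomes as well
cov.start()

def classify(number):
    # Hypothetical code under test, with a single decision.
    return "negative" if number < 0 else "non-negative"

classify(-1)
classify(2)                              # both decision outcomes are exercised

cov.stop()
cov.save()
cov.report(show_missing=True)            # prints statement and branch coverage for this file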
Part – 13 of the Crash Course – ISTQB Foundation Exam