ISTQB Foundation Level Exam Crash Course Part-8
This is Part 8 of 35, containing five questions (Q. 36 to 40) with detailed explanations, as expected in the ISTQB Foundation Level Exam per the latest syllabus (updated in 2011).
A thorough study of these 175 questions will be of great help in passing the ISTQB Foundation Level Exam.
Q. 36: What are the main steps involved in the design of tests?
The design of tests comprises three main steps:
1) Identify test conditions: Decide on a test condition, which would typically be a small section of the specification for our software under test.
Going by the definition – A test condition is some characteristic of our software that we can check with a test or a set of tests.
2) Specify test cases: Design a test case that will verify the test condition.
Going by the definition – A test case gets the system to some starting point for the test (execution preconditions); then applies a set of input values that should achieve a given outcome (expected result), and leaves the system at some end point (execution postcondition).
3) Specify test procedures: Write a test procedure to execute the test, i.e. get it into the right starting state, input the values, and check the outcome.
Going by the definition – A test procedure identifies all the necessary actions in sequence to execute a test. Test procedure specifications are often called test scripts (or sometimes manual test scripts, to distinguish them from the automated scripts that control test execution tools).
This is a simple set of steps. Of course we will have to carry out a very large number of these simple steps to test a whole system, but the basic process is still the same. To test a whole system we write a test execution schedule, which puts all the individual test procedures in the right sequence and sets up the system so that they can be run.
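To make the three steps concrete, here is a minimal sketch in Python. The Account class, its withdraw method and the business rule are all invented for illustration; they are not part of the syllabus.

# Test condition (a small piece of a hypothetical specification):
# "a withdrawal larger than the balance is rejected and leaves
# the balance unchanged".

class Account:
    """Stand-in system under test (hypothetical)."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# Test case: precondition, inputs, expected result, postcondition.
def test_withdrawal_exceeding_balance_is_rejected():
    # Execution precondition: an account with a known balance.
    account = Account(balance=100)

    # Inputs and expected result: withdrawing 150 must be refused.
    try:
        account.withdraw(150)
        raise AssertionError("expected the withdrawal to be rejected")
    except ValueError:
        pass

    # Execution postcondition: the balance is unchanged.
    assert account.balance == 100

# Test procedure: the necessary actions, in sequence, to run the test.
if __name__ == "__main__":
    test_withdrawal_exceeding_balance_is_rejected()
    print("test passed")

A test execution schedule would then list many such procedures in the order in which they should be run.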
The test development process may be implemented in more or less formal ways. In some situations it may be appropriate to produce very little documentation and in others a very formal and documented process may be appropriate. It all depends on the context of the testing, taking account of factors such as maturity of development and test processes, the amount of time available and the nature of the system under test. Safety-critical systems, for example, will normally require a formal test process.
<<<<<< =================== >>>>>>
Q. 37: What is Test Coverage & where do we apply it?
Test coverage provides a quantitative assessment of the extent and quality of testing. It answers the question "how much testing have you done?" in a way that is not open to interpretation.
Vague statements like "I am nearly finished", or "I have done two weeks' testing", or "I have done everything in the test plan" generate more questions than they answer. They are statements about how much testing has been done or how much effort has been applied to testing, rather than statements about how effective the testing has been or what has been achieved.
We need to know about test coverage for two very important reasons:
1) It provides a quantitative measure of the quality of the testing that has been done by measuring what has been achieved.
2) It provides a way of estimating how much more testing needs to be done. Using quantitative measures we can set targets for test coverage and measure progress against them.
Statements like "I have tested 75 per cent of the decisions" or "I have tested 80 per cent of the requirements" provide useful information. They are neither subjective nor qualitative; they provide a real measure of what has actually been tested. If we apply coverage measures to testing based on priorities, which are themselves based on the risks addressed by individual tests, we will have a reliable, objective and quantified framework for testing.
Test coverage can be applied to any systematic technique; here it will be applied to specification-based techniques to measure how much of the functionality has been tested, and to structure-based techniques to measure how much of the code has been tested. Coverage measures may be part of the completion criteria defined in the test plan and used to determine when to stop testing.
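As a minimal sketch of such a quantitative measure, the following Python fragment computes requirements coverage from a hypothetical traceability mapping; all requirement IDs and test names are invented for illustration.

# Hypothetical set of requirements for the system under test.
requirements = {"REQ-01", "REQ-02", "REQ-03", "REQ-04", "REQ-05"}

# Which requirements each executed test exercises (hypothetical).
executed_tests = {
    "test_login_ok":     {"REQ-01"},
    "test_login_bad_pw": {"REQ-01", "REQ-02"},
    "test_logout":       {"REQ-04"},
}

# A requirement counts as covered if at least one test exercises it.
covered = set().union(*executed_tests.values())
coverage = 100 * len(covered & requirements) / len(requirements)

print(f"Requirements coverage: {coverage:.0f}%")        # -> 60%
print(f"Not yet covered: {sorted(requirements - covered)}")

The same counting idea works for any coverage item: swap requirement IDs for decisions, statements or menu options and the measure stays objective.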
<<<<<< =================== >>>>>>
Q. 38: What are the different categories of test case design techniques?
Test case design techniques are grouped into three categories (a short sketch contrasting them follows the list):
1) Those based on deriving test cases directly from a specification or a model of a system or proposed system, known as specification-based or black-box techniques. Black-box techniques are based on an analysis of the test basis documentation, including both functional and non-functional aspects; they do not use any information about the internal structure of the component or system under test.
2) Those based on deriving test cases directly from the structure of a component or system, known as structure-based, structural or white-box techniques. We will concentrate on tests based on the code written to implement a component or system, but other aspects of structure, such as a menu structure, can be tested in a similar way.
3) Those based on deriving test cases from the tester’s experience of similar systems and general experience of testing, known as experience-based techniques.
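The sketch below contrasts the three categories on a single hypothetical function; the shipping_fee function and its one-line "specification" are invented for illustration.

def shipping_fee(order_total):
    """Spec (hypothetical): orders of 50 or more ship free;
    otherwise the fee is 5."""
    if order_total >= 50:
        return 0
    return 5

# Black-box: derived from the specification alone.
assert shipping_fee(75) == 0    # "50 or more ships free"
assert shipping_fee(20) == 5    # "otherwise the fee is 5"

# White-box: derived from the code's structure, aiming to
# exercise both branches of the `if` statement.
assert shipping_fee(50) == 0    # True branch, at the boundary
assert shipping_fee(49) == 5    # False branch

# Experience-based: a tester's hunch about likely defects.
assert shipping_fee(0) == 5     # is an empty order still charged? worth asking

print("all checks passed")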
<<<<<< =================== >>>>>>
Q. 39: What are the specification-based techniques for test case design?
Specification-based testing was originally called "black-box" testing because these techniques take a view of the system that does not require knowing what is going on "inside the box".
The specification-based techniques derive test cases directly from the specification or from some other kind of model of what the system should do. The source of information on which to base testing is known as the "test basis". If a test basis is well defined and adequately structured we can easily identify test conditions from which test cases can be derived.
The most important point about specification-based techniques is that a specification or model does not (and should not) define how a system should achieve the specified behavior when it is built; it defines the required (or at least desired) behavior. One of the hard lessons that software engineers have learned from experience is that it is important to separate the definition of what a system should do (a specification) from the definition of how it should work (a design). This separation allows the two specialist groups (testers working from specifications and designers working on the design) to work independently, so that we can later check that they have arrived at the same place, i.e. that they have together built a system and tested that it works according to its specification.
If we set up test cases so that we check that desired behavior actually occurs then we are acting independently of the developers. If they have misunderstood the specification or chosen to change it in some way without telling anyone then their implementation will deliver behavior that is different from what the model or specification said the system behavior should be. Our test, based solely on the specification, will therefore fail and we will have uncovered a problem.
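As a minimal sketch of this idea, assume a hypothetical specification that reads "orders strictly over 100 get a 10 per cent discount", and suppose the developer implemented "100 or more" instead. A test derived solely from the specification uncovers the deviation; running the fragment raises an AssertionError, which is exactly the defect report we want.

def discounted_total(total):
    if total >= 100:          # deviates from the spec's "over 100"
        return total * 0.9
    return total

# Derived from the specification alone, not from the code:
assert discounted_total(100) == 100, \
    "spec: exactly 100 gets no discount; the implementation disagrees"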
It may be noted that not all systems are defined by a detailed formal specification. In some cases the model we use may be quite informal. If there is no specification at all, the tester may have to build a model of the proposed system, perhaps by interviewing key stakeholders to understand what their expectations are. However formal or informal the model is, and however it is built, it provides a test basis from which we can generate tests systematically.
Remember that the specification can contain non-functional elements as well as functions; topics such as reliability, usability and performance are examples. These need to be systematically tested as well.
What we need, then, are techniques that can explore the specified behavior systematically and thoroughly in a way that is as efficient as we can make it. In addition, we use what we know about software to "home in" on problems; each of the test case design techniques is based on some simple principles that arise from what we know in general about software behavior.
<<<<<< =================== >>>>>>
Q. 40: What are the different types of specification-based techniques?
There are five major specification-based techniques (a short sketch of the first two appears after this list):
1) Equivalence partitioning
2) Boundary value analysis
3) Decision table testing
4) State transition testing
5) Use case testing
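The first two techniques can be shown in a minimal Python sketch, assuming a hypothetical rule that valid ages run from 18 to 65 inclusive; the function is invented for illustration (the remaining three techniques need richer models than a single range check).

def is_valid_age(age):
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition.
assert is_valid_age(10) is False   # partition: below the valid range
assert is_valid_age(40) is True    # partition: inside the valid range
assert is_valid_age(70) is False   # partition: above the valid range

# Boundary value analysis: values at and just beyond each boundary.
assert is_valid_age(17) is False
assert is_valid_age(18) is True
assert is_valid_age(65) is True
assert is_valid_age(66) is False

print("all checks passed")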