Important Tasks Performed by Software Testing Engineers after Executing Automated Tests
Once an automated test has been executed, the test result artifacts, consisting of the actual outcomes and various by-products of the test execution (such as tool log files), lie scattered about and need to be organized. It then becomes necessary to evaluate the success of the test case and to perform certain follow-up tasks, such as running a report generator to extract data from the database.
If a software testing engineer has run his test suite throughout the night, he needs to spend time in the morning looking at the failed test results and analyzing whether the software or the test was wrong, or whether some other factor disturbed the automated test and caused it to fail. Such an exercise is quite time consuming, and it is important that the effort is planned beforehand. These tasks are likely to vary from one Testware Set to another and from one Test Suite to another.
The following are examples of four types of important tasks performed after the execution of an automated test. The majority of these tasks can be automated as groups of tasks rather than having to be automated individually. Performing these tasks manually has been found to be both error prone and time consuming.
1) Cleanup Exercise:
These are tasks like deleting files, database records, and comparison reports that found no differences, whereas an output file that is found to be different from the expected output can be retained. Some test cases generate a lot of output even though only a little of it is used for comparison purposes. For instance, a test case may capture a lot of screen images as a detailed record of what went on during execution. If the test case fails, they can be used to help determine the cause of failure without having to re-run the test case. If the test case passes, they can be safely deleted.
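As a minimal sketch of such a cleanup step (the directory layout, file names, and function name here are illustrative assumptions, not taken from any particular tool):

```python
import shutil
from pathlib import Path

def cleanup_after_test(test_dir: Path, passed: bool) -> None:
    """Post-execution cleanup: discard bulky by-products of a passed test,
    but keep everything when the test failed, to aid failure analysis.

    Assumed layout for this sketch:
        test_dir/screens/   - screen images captured during execution
        test_dir/diff.txt   - comparison report
    """
    screens = test_dir / "screens"
    diff_report = test_dir / "diff.txt"

    if passed:
        # The screen captures were only insurance against failure;
        # once the test has passed they can be safely deleted.
        if screens.exists():
            shutil.rmtree(screens)
        # A comparison report that found no differences carries no information.
        diff_report.unlink(missing_ok=True)
    # On failure, retain everything so the test need not be re-run.
```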
2) Checking the artifacts:
This typically involves carefully checking, as an expected outcome of a test case, that a particular file does not exist, either because it has been deleted by the test case or because it is not supposed to be created. Likewise, it may be an expected post-condition of the test case that certain files are present. These checks can be easily automated.
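A simple sketch of such an automated check might look like the following; the file names and the function shape are hypothetical:

```python
from pathlib import Path

def check_postconditions(must_exist, must_not_exist) -> list[str]:
    """Verify file-based post-conditions of a test case.

    Returns a list of human-readable violations; an empty list
    means all checks passed."""
    problems = []
    for f in must_exist:
        if not Path(f).exists():
            problems.append(f"expected file is missing: {f}")
    for f in must_not_exist:
        if Path(f).exists():
            problems.append(f"file should have been deleted or never created: {f}")
    return problems

# Example: the test case is expected to remove its lock file
# and to produce a report (file names are hypothetical).
violations = check_postconditions(
    must_exist=["output/report.txt"],
    must_not_exist=["output/app.lock"],
)
if violations:
    print("\n".join(violations))  # these would fail the test case
```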
3) Reorganizing the artifacts:
These tasks are quite similar to the cleanup exercise described above, and specifically concern copying or moving files into the results structure of the testware architecture. It is not always possible to have all of the test results created in one specific place, yet this is desirable since it makes test failure analysis much easier. The artifacts that are to be preserved are parked in a common location, either for ease of analysis or simply to prevent them from being changed or destroyed by subsequent tests. This is a matter of copying or moving each of the artifacts to one particular place that is easily accessible to everyone concerned. Sometimes it is necessary to copy or move not just one file but several, or to script a whole series of such steps.
These tasks can be automated easily and can usually be achieved with a single instruction or command, since they are mostly simple functions like ‘copy a file’. More complex functions can be reduced to a simple command by implementing them in a command file.
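For example, a multi-file move into the results structure might be reduced to a single call such as the one below; the directory convention, test identifier, and file names are assumptions for illustration:

```python
import shutil
from pathlib import Path

def collect_artifacts(test_id: str, artifacts, results_root="results") -> None:
    """Move the artifacts to be preserved into one per-test directory
    inside the results structure, so that failure analysis always
    starts from a single, well-known place."""
    dest = Path(results_root) / test_id
    dest.mkdir(parents=True, exist_ok=True)
    for artifact in artifacts:
        shutil.move(str(artifact), dest / Path(artifact).name)

# Usage, reducing a multi-step job to one command (paths are hypothetical):
# collect_artifacts("TC_042", ["out/actual.txt", "out/tool.log", "out/trace.dat"])
```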
4) Conversion of Formats:
This refers to converting the formats of outcomes that we need to compare or analyze, where those formats are not suited to the task. For instance, it is easier to analyze database data if it is copied into a formatted report file: not only can we concentrate on the relevant subset of the data, but we can usually also choose the format in which it is presented. Another instance where this is useful is in converting data from a platform-dependent format into a platform-independent format, or at least into the format in which the expected outcome is held.
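As an illustration, here is one way such a conversion might be scripted, copying a relevant subset of database data into a plain, platform-independent CSV report. The table and column names are hypothetical, and SQLite stands in for whatever database is actually used:

```python
import csv
import sqlite3

def export_results_table(db_path: str, report_path: str) -> None:
    """Copy the relevant subset of database data into a formatted,
    platform-independent report file that is easy to compare."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT order_id, status, total FROM orders ORDER BY order_id"
        ).fetchall()
    with open(report_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["order_id", "status", "total"])  # fixed, readable layout
        writer.writerows(rows)
```

Because the report has a fixed column order and plain-text encoding, the same expected-outcome file can be compared against it on any platform.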
Tasks performed after normal completion of a test case:
Normal completion of a test case means a situation wherein all the actual outcomes match the expected outcomes; in this case, all of the actual outcome files can be deleted. There is no point in preserving them if they are known to be the same as the expected results. At first this seems a strange thing to do: we have taken so much trouble to assemble and execute our automated tests, and now we immediately throw away the very outputs we took so much trouble to generate. However, we need not delete any of the status information from the test, since this will be preserved as a record of the test having been run. So the test status and test log would be filed away in a safe place. Alternatively, once a summary report has been generated detailing the status of every test case, the by-products of test execution, such as log files, may also be deleted.
What we can delete straight away is the actual outcomes, which we have just determined to be the same as the expected outcomes. So we now have two similar, if not identical, copies. This takes twice as much storage space as one copy would take, so it is wasteful to preserve both. We certainly don’t want to throw away our expected outcomes, so it is the actual outcomes that we can now safely delete.
The exception to this is where a QMS (Quality Management System) or company norms require that all test results be preserved. Sometimes only the results of the final run of all tests need be preserved, rather than the results of all the runs of every test. Also, it may be appropriate to change the QMS, as test automation offers a better way of recording the testing that has been undertaken and the results achieved.
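A minimal sketch of this post-pass pruning, assuming a per-test directory of actual outcomes and illustrative file names; where a QMS requires more to be preserved, the `keep` list can simply be widened:

```python
from pathlib import Path

def prune_passed_outputs(actual_dir: Path, keep=("status.txt", "test.log")) -> None:
    """After a pass, delete the actual outcomes (they merely duplicate
    the expected outcomes) but keep the status and log files as the
    record that the test was run."""
    for f in actual_dir.iterdir():
        if f.is_file() and f.name not in keep:
            f.unlink()
```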
Tasks performed after abnormal termination of a test case:
Abnormal completion of a test case means a situation wherein any part of the test case fails, or the test case does not run to completion; in these cases we want to preserve everything. Here, the more information that is available to help with analysis, the better. When we are trying to find out why a test did not pass, often the first thing the developer fixing the problem wants to do is re-run the test, because there may not be enough information to find out in detail what went wrong. Hence, the more information we can supply with a failed test case, the more efficient the debugging process can be.
We can even take this a step further, and design our tests so that they create more output than is actually used in the comparison of the test, just in case the test case fails. This additional output may then be deleted as part of the test case post-processing if the test case passes. If the test case fails, then this additional data serves to help the failure analysis and possibly the debugging effort.
We need not analyze the failure data immediately. For instance, we can capture the state of a database after an abnormal termination (for later analysis) and return the database to a known good state to allow subsequent test cases to run. The known good state may be the expected result of the failed test case. If a subsequent test case then destroys the data, this is not a problem, provided the right data and the right amount of data were captured at this point.
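A sketch of this capture-then-reset step, assuming a file-based database and illustrative paths:

```python
import shutil
from pathlib import Path

def preserve_and_reset(db_file: Path, snapshot_dir: Path, baseline: Path) -> None:
    """On abnormal termination: snapshot the database file for later
    analysis, then restore a known good state so that subsequent
    test cases can still run."""
    snapshot_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(db_file, snapshot_dir / db_file.name)  # keep the evidence
    shutil.copy2(baseline, db_file)                     # restore known good state
```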
Probable reasons of failure of post test execution tasks:
If any post test execution task fails, it should cause the test case itself to fail, irrespective of the test's own outcome. This is a fail-safe policy. A post test execution task could fail because a file that it was meant to move or delete had not been created, meaning the test case outcome does not match the expectation. In this case the post test execution task has failed because the test case itself failed to produce the expected outcome.
However, a post test execution task could also fail because the disk to which it was meant to move a result file did not have sufficient free space. This may occur quite independently of the test case, which could have been successful in all other respects. If the file that could not be moved was to be compared after the move, then the comparison cannot be performed, so this too should cause the test case to fail. If the move operation was part of a final cleaning-up exercise, it may seem unfair to fail the test case, but we prefer to do so. We can then be sure that if a test case passes, it really has passed within the limitations of its design and implementation.
It is always advisable to report the causes of a test case failure in a log file. Since post test execution tasks are specific, we can easily generate a meaningful message such as “Task failed to move *.* file due to insufficient disk space on C: or D: drive” or similar.
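One possible shape for such a fail-safe post-processing runner, with the task-list structure and the log file name assumed for illustration:

```python
import logging

logging.basicConfig(filename="postproc.log", level=logging.INFO)

def run_post_tasks(test_id: str, tasks) -> bool:
    """Run the post test execution tasks for one test case.

    `tasks` is assumed to be a list of (description, callable) pairs.
    Any task failure is logged with a specific message and, under the
    fail-safe policy, causes the test case itself to be marked failed."""
    ok = True
    for description, task in tasks:
        try:
            task()
        except OSError as exc:
            logging.error("Test %s: task '%s' failed: %s", test_id, description, exc)
            ok = False  # keep running the remaining tasks, but fail the test case
    return ok
```

Because each task carries its own description, the log entry names exactly which post-execution step failed and why, which is what makes later failure analysis quick.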