All about Writing Crisp and Efficient Scripts for Test Automation
A script is a set of instructions for the test tool to act upon. It is the job of a good test engineer to design test scripts so that they remain durable and become an essential part of the test automation effort.
Before discussing efficient ways of writing test scripts, let us quickly review the different scripting techniques used by software testing engineers. Each technique has its own pros and cons, so in practice they are used in combination.
The five scripting techniques commonly practiced by software testing engineers are:
1) Linear scripts: The whole of each test case is recorded, so each test case is replayed in its entirety by a single script. A linear script is a recording of a test case performed manually; it contains all the keystrokes, including function keys, arrow keys and the like that control the software under test, and the alphanumeric keys that make up the input data.
2) Structured scripts: Writing structured scripts parallels structured programming, the difference being that special instructions or control structures are used to control the execution of the script. The control structures supported by almost all test tool scripting languages are a) sequence, b) selection and c) iteration.
3) Shared scripts: These scripts are used by more than one test case. The idea is to write one script that performs some task that has to be repeated in different tests, and then, whenever that task has to be done, simply call this script at the appropriate point in each test case.
4) Data-driven scripts: In this technique, test inputs are stored in a separate data file rather than in the script itself. This leaves the control information, such as menu navigation, in the script. When the test is executed, the test input is read from the file rather than being taken directly from the script.
5) Keyword-driven scripts: This is a logical extension of the data-driven technique and is more sophisticated. It combines the data-driven approach with the desire to specify automated test cases without spelling out all the time-consuming detail. The data file is expanded into a description of the test case we wish to automate, using a set of keywords to indicate the tasks to be performed (see the sketch after this list).
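To make the last two techniques concrete, here is a minimal Python sketch, assuming a hypothetical `app` driver object whose methods (`open_screen`, `enter`, `press`, `verify`) stand in for whatever commands your test tool actually provides. In the data-driven variant the inputs come from a CSV file while the control flow stays in the script; in the keyword-driven variant the file also names the action to perform.

```python
import csv

# Hypothetical driver standing in for the test tool's API.
# Replace these calls with your tool's real commands.
class App:
    def open_screen(self, name): print(f"open screen {name}")
    def enter(self, field, value): print(f"enter {value!r} into {field}")
    def press(self, button): print(f"press {button}")
    def verify(self, field, expected): print(f"verify {field} == {expected!r}")

app = App()

# Data-driven: control flow stays in the script, inputs come from a file.
def add_client_data_driven(data_file):
    with open(data_file, newline="") as f:
        for row in csv.DictReader(f):          # columns: name, account, expected_status
            app.open_screen("Add Client")
            app.enter("name", row["name"])
            app.enter("account", row["account"])
            app.press("OK")
            app.verify("status", row["expected_status"])

# Keyword-driven: the file also says WHAT to do, via a keyword column.
KEYWORDS = {
    "open":   lambda arg1, arg2: app.open_screen(arg1),
    "enter":  lambda arg1, arg2: app.enter(arg1, arg2),
    "press":  lambda arg1, arg2: app.press(arg1),
    "verify": lambda arg1, arg2: app.verify(arg1, arg2),
}

def run_keyword_file(keyword_file):
    with open(keyword_file, newline="") as f:
        for row in csv.reader(f):              # rows like: enter,name,Jane Doe
            if not row:
                continue
            keyword, arg1, arg2 = (row + ["", ""])[:3]
            KEYWORDS[keyword](arg1, arg2)
```

In a mature keyword-driven regime, each keyword would typically map onto a shared script of the kind described below.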
Uses of Test Scripts:
Recording a test case being performed manually generally results in a long linear script that can be used to replay the actions performed by the manual tester. Doing the same for a number of test cases results in one script per test case; hence, if we have thousands of test cases we will end up building thousands of scripts. Having one script for every test case is not efficient, since many test cases share common actions like “Add new client” or “Query client details”. This eventually leads to greater maintenance costs than the equivalent manual testing effort, because every copy of a set of instructions that performs a particular action will need changing when some aspect of that action changes.
Most test tools use a scripting language that offers far more power and flexibility than recording alone would ever exploit, but to draw sizable benefits from it we need to edit the recorded scripts or write scripts from scratch. One of the benefits of editing and coding scripts is to reduce the amount of scripting necessary to automate a set of test cases. This is achieved principally by two methods.
Method-1: Coding relatively small pieces of script, each of which performs a specific action or task that is common to several test cases. Each test case that needs to perform one of the common actions can then use the same script.
Method-2: Inserting control structures into the scripts to make the tool repeat sequences of instructions without having to code multiple copies of those instructions. Although these approaches inevitably cause us to code several more scripts initially, the scripts are much smaller and therefore easier to maintain. Eventually, once a reasonably comprehensive set of scripts has been coded, new test cases can be added without the need for more scripts, and we reach a stage where thousands of test cases are implemented by hundreds of scripts. A short sketch of both methods follows.
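Here is a brief sketch of the two methods, again using the hypothetical `app` driver from the earlier example: Method-1 puts the common action into one shared routine, and Method-2 uses a control structure so the instructions are not copied once per client.

```python
# Method-1: a shared script (here, a function) for a common action,
# reused by every test case that needs it.
def add_new_client(app, name, account):
    app.open_screen("Add Client")
    app.enter("name", name)
    app.enter("account", account)
    app.press("OK")

# Method-2: a control structure (a loop) repeats the action instead of
# the same instructions being coded three times over.
def test_add_three_clients(app):
    clients = [("Jane Doe", "A-101"), ("Raj Kumar", "A-102"), ("Li Wei", "A-103")]
    for name, account in clients:
        add_new_client(app, name, account)
        app.verify("status", "Client added")
```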
Typical Contents of Test Scripts:
The actual contents of a test script depend upon the test tool being used and the scripting techniques employed. However, scripts usually contain data and instructions for the test tool, such as the following (illustrated in the sketch after this list):
a) Synchronization i.e. when to enter the next input.
b) Comparison information i.e. what and how to compare, and what to compare with.
c) What screen data to capture and where to store it.
d) When to read input data from another source, and where to read it from i.e. file, database or device.
e) Control information, i.e. whether to repeat a set of inputs or make a decision based on the value of an output.
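As an illustration, the following Selenium WebDriver sketch in Python touches each of these items; the URL, element IDs and data file are hypothetical, but each commented line maps onto one of the points above.

```python
import csv
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)

# d) read input data from another source (a hypothetical CSV file)
with open("clients.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# e) control information: repeat a set of inputs for every data row
for row in rows:
    driver.get("https://example.test/clients/new")   # hypothetical URL

    # a) synchronization: wait until the form is ready before entering input
    wait.until(EC.visibility_of_element_located((By.ID, "name")))

    driver.find_element(By.ID, "name").send_keys(row["name"])
    driver.find_element(By.ID, "save").click()

    # a) synchronization again: wait for the confirmation message
    status = wait.until(EC.visibility_of_element_located((By.ID, "status")))

    # b) comparison: what to compare and what to compare it with
    assert status.text == row["expected_status"], status.text

    # c) capture screen data and store it for later inspection
    driver.save_screenshot(f"client_{row['name']}.png")

driver.quit()
```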
Like software, scripts are very flexible, and there are many ways of coding a script to perform a given task. The way a script is written usually depends on the skill of the software testing engineer coding it, but it should also depend on its objective. A quick approach might be to record as much as possible, or perhaps to copy parts of other scripts and combine them. A more thoughtful approach involves some design work and coding from scratch.
Some scripting techniques involve elaborate constructs and logic while others are much simpler. The more elaborate the scripting technique used, the more up-front work there will be and the more effort will be needed on debugging the scripts. However, there will also be greater end productivity because there will be more reuse of scripts and they will involve less maintenance. The more tests we automate, the more worthwhile it is to put the effort into the more elaborate scripting techniques.
Comparison of Efficient & Poor Scripts:
Since scripts form a vital part of most test automation regimes, it is important for every software testing engineer to ensure that these scripts are efficient, reliably do what they are supposed to do, and are easy to use and maintain.
A comparison of the attributes of an efficient and a poor set of test scripts implementing the same set of test cases is given below.
Sr. | Attribute | Efficient Test Scripts | Poor Test Scripts |
1 | Numbers | Fewer scripts – typically less than one script per test case | More scripts – at least one script for every test case |
2 | Size | Small scripts – with annotation, no more than two pages | Large scripts – running into many pages |
3 | Function | Each script has a clear, single objective | Individual scripts perform a number of functions, typically the whole test case |
4 | Documentation | Specific documentation for users and maintainers – clear, precise and up-to-date | No documentation or out-of-date – general points, no detail, not informative |
5 | Reuse | Many scripts reused by different test cases | No reuse – each script implements a single test case |
6 | Structured | Easy to see and understand the structure and therefore to make changes – following good programming practices, well-organized control constructs | An amorphous mass, difficult to change with confidence |
7 | Maintenance | Easy to maintain – changes to the software require only minor changes to a few scripts | Minor software changes still need major changes to scripts; scripts difficult to change correctly |
When test automation begins, we start with a small number of scripts and it is not difficult to keep track of them, but the task becomes more difficult as the number of scripts increases.
Principles of writing Good Scripts:
Although different test engineers will have varying preferences as to the style, layout, and content of scripts, the following principles prescribed by test automation experts can be taken as a guide.
The test scripts must be:
1) Annotated – to guide both the user and the person maintaining them
2) Functional – performing a single task and encouraging reuse
3) Structured – for ease of reading, understanding and maintenance
4) Understandable – for ease of maintenance
5) Documented – to aid reuse and maintenance
The above principles are quite generic, irrespective of the scripting technique used, but they may be implemented in different ways. For example, all scripts should be annotated, but the annotation may differ in style and format from one regime to another.
Important Documentation associated with Good Scripts:
A good script is always supported by detailed written information that is truly useful. The information that can usefully be written down includes the following (a sketch of a documented script follows this list):
a) Raw content of the script i.e. inputs and sometimes expected outputs.
b) The purpose of the script i.e. what this script does or the actions it performs.
c) User information i.e. what information has to be passed to the script, what information it returns, what state the software under test has to be in when the script is called, and what state the software will be left in when the script ends.
d) Implementation information i.e. additional information helpful to maintainers, such as an explanation of why it was implemented in a particular way or references to similar scripts that may also need changing.
e) Annotation i.e. comments embedded throughout the script to say what is happening at each logical step in terms of the actions being performed on or by the software under test.
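Putting items b) to e) together, a documented shared script might look like the following sketch; the function, its parameters and the `app` driver calls are hypothetical, but the docstring shows where the purpose, user information and implementation information can live, and the inline comments provide the annotation.

```python
def query_client_details(app, client_id):
    """
    Purpose:
        Looks up a client by ID and returns the details shown on the
        'Client Details' screen.

    User information:
        Parameters : app       - driver object for the software under test
                     client_id - ID of an existing client
        Returns    : dict mapping field names to displayed values
        Pre-state  : the application's main menu is displayed
        Post-state : the main menu is displayed again when the script ends

    Implementation information:
        Navigates via the search screen rather than the client list because
        the list is paginated; the related script 'add_new_client' uses the
        same navigation and may need changing at the same time.
    """
    # Open the search screen from the main menu
    app.open_screen("Client Search")

    # Enter the client ID and run the search
    app.enter("client_id", client_id)
    app.press("Search")

    # Read the details displayed for the matching client (hypothetical call)
    details = app.read_fields("Client Details")

    # Return to the main menu so that the stated post-state holds
    app.press("Close")
    return details
```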
The only annotation that can be generated automatically is limited to statements such as “click the OK button”, “focus on window 'Payments'” and “input the string 'Yoginder Nath' in the field 'name'”. Such information is often rather obvious anyway, so this annotation generally adds no value. It is certainly not sufficient if the script is to be used and maintained effectively and efficiently.
None of the other information can be generated automatically. If we want understandable scripts, someone must produce it, and that is generally the job of the scriptwriter. To add this information to the script, the writer needs to be able to edit the script itself, so he or she needs some technical skills (unless the tool supports easy insertion of comments during script creation). If the people recording the scripts are not technical, adequate support needs to be provided to them.
Simply recording a long test should not be viewed as the end of the story; a great deal remains to be done. Test automation usually begins with recording, however, and in some cases recording alone is considered sufficient.