be used according to the adopted technique:
1) When recording a script for a test case, some sort of initialization may
be necessary - such as a database connection, variables, and functions - which
can be shared between test cases of the same use case, or even between many use
cases.
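As a minimal sketch of this kind of shared initialization, the following uses Python's unittest with an in-memory SQLite database: the connection and seed data are created once and reused by every test case of the (hypothetical) use case. All names here are illustrative, not from the original article.

```python
import sqlite3
import unittest

class LoginUseCaseTests(unittest.TestCase):
    """Test cases of one use case sharing a single initialization."""

    @classmethod
    def setUpClass(cls):
        # Shared initialization: one connection, one seeded table,
        # reused by every test case below instead of repeated per test.
        cls.conn = sqlite3.connect(":memory:")
        cls.conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
        cls.conn.execute("INSERT INTO users VALUES ('alice', 'secret')")
        cls.conn.commit()

    @classmethod
    def tearDownClass(cls):
        cls.conn.close()

    def test_valid_user_exists(self):
        row = self.conn.execute(
            "SELECT password FROM users WHERE name = 'alice'").fetchone()
        self.assertEqual(row[0], "secret")

    def test_unknown_user_absent(self):
        row = self.conn.execute(
            "SELECT * FROM users WHERE name = 'bob'").fetchone()
        self.assertIsNone(row)
```

The same pattern extends to sharing initialization across several test modules, for example through a common base class that each script's test class inherits from.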
2) Each script may be recorded independently from the others, having its own
variables, initialization, and so on; it is better, however, to share common
objects between test scripts - in other words, to apply the concept of
modularity. For example, if a database must be cleaned up for a specific use
case before executing any test script, or if there are common script steps, a
text file may be used to store these steps and later be included at script
execution time. If these steps change, only one place will have to be updated.
3) Another practice is to define a function for an activity that is common to
many scripts, and simply call it from those scripts.
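Practices 2 and 3 can be sketched together: a shared module holds the common step as a function, and each test script calls it instead of repeating the steps. The module and function names below (`clean_order_tables`, and so on) are hypothetical.

```python
import sqlite3

# In practice these helpers would live in a shared module
# (e.g. common_steps.py) imported by many test scripts.

def clean_order_tables(conn):
    """Common step: remove leftover data before a use case's scripts run."""
    conn.execute("DELETE FROM orders")
    conn.commit()

def count_orders(conn):
    return conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]

# A test script calls the shared function instead of repeating the steps:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER)")
conn.execute("INSERT INTO orders VALUES (1)")   # leftover data
clean_order_tables(conn)   # one place to change if the cleanup evolves
print(count_orders(conn))  # → 0
```

If the cleanup procedure later changes, only `clean_order_tables` is edited; every script that calls it picks up the change automatically.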
4) Clean and robust code must be written. Comments should be added to
important lines or code blocks. While implementing, keep future maintenance in
mind.
5) When a flow alteration or a verification point in a script is re-recorded,
take care to update only the script sections that really need to change.
6) To delete any leftover text shown in a form field before inserting data,
use a key combination ("HOME", "SHIFT+END", then "DELETE") when recording a
data-input action.
7) If test scripts are initially recorded from a prototype (applicable when
the actual application is not yet available), it is necessary to verify that
field names and the IDs of clickable icons on screen will not be altered. If
they change, the scripts will need to be updated later.
8) Dynamic data should be used for test scripts (test data must be kept
separate from the scripts). The immediate benefit is that, if additional data
is needed, more records can simply be added; there is no need to modify the
test scripts.
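A common way to keep test data separate is a data-driven loop over a CSV file. The sketch below inlines the CSV for self-containment and uses an invented `login` stand-in for the recorded action; in practice the data would live in its own file, so adding a test record means editing the data, not the script.

```python
import csv
import io

# Stand-in for an external CSV file kept apart from the scripts.
TEST_DATA = io.StringIO(
    "username,password,expected\n"
    "alice,secret,ok\n"
    "bob,wrong,denied\n"
)

def login(username, password):
    """Hypothetical stand-in for the action the recorded script performs."""
    return "ok" if (username, password) == ("alice", "secret") else "denied"

# Data-driven execution: one loop, as many records as the data file holds.
results = []
for row in csv.DictReader(TEST_DATA):
    outcome = login(row["username"], row["password"])
    results.append(outcome == row["expected"])

print(all(results))  # → True
```

To cover a new scenario, only a new line is appended to the data file; the loop and the script stay untouched.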
Apart from following the above best practices, it is important for the
Software Testing Manager to:
1) Form a team of specialized professionals to begin the implementation of
test automation, so as to reap the best benefits from it.
2) Include in the team at least one member with sufficient knowledge of the
organizational culture.
3) Ensure the active participation of all team members, so that directives
pertinent to the organizational context may be outlined and every individual's
commitment to the outcome of the "Automation Product" is assured.
4) Monitor the process planning, the elaboration of test cases, the creation
of code, the execution of tests and, especially, the evaluation of results.