on the next. Continuous improvement means solving the problems that irritated you in the past so that you can avoid them in the future. It is important for every manager or tester not to lose sight of any shortcomings encountered in the previous version.
Here I discuss five simple continuous-improvement techniques deployed by intelligent test managers.
Technique-1: Self-Evaluation to Close the Project Loop
Software testing teams in good organizations perform a periodic self-evaluation to close the loop on the project. These are reviews targeted at identifying what worked well & what didn’t. The results can lead to adjustments in testing techniques & processes.
There are two approaches to doing the review:
1) Comprehensive self-evaluation done once at the conclusion of a project, or
2) Iterative self-evaluation done throughout.
Approach-1: Comprehensive self-evaluation: At the end of a complex project, the software testing team needs to look back at where they have been. All the team members can be invited. The meeting can begin by discussing things that worked well & should be retained for future tests. Then the team can brainstorm on the areas for improvement. The following questions can be asked:
a) Test Strategy: Was there anything good or bad about the overall strategy used?
b) Tools: The following questions related to testing tools could be helpful:
# How well did the selected tools work?
# Should you have selected a different set?
# If new niche tools were created for this test, did they work out well?
# Is everyone on the team aware of them as possible candidates for reuse in future tests?
# Have they been added to the tool list of the software testing team?
c) Test Plans: Could the checklists, consideration lists, & test plans be improved?
d) Workloads: Were workloads representative of those used by customers, or stressful & complex enough to dig out the defects?
e) Environment: Were enough resources available or did the configurations allow problems to escape?
f) Test Flow: Could communication between the testing phases be improved?
g) Education: The following questions related to team skills could be helpful:
# Was the team technically prepared to perform the tasks of the test?
# Did everyone understand the tasks that were assigned to them?
h) Clarity: Were test scenarios documented sufficiently well for others to execute them in the future?
j) Documentation: Was all of the necessary technical & operational information available for the test team to use?
k) Problem Data: The following questions related to the defect data could be helpful:
# Did you find the types of problems you expected?
# Were there trends in the data?
# Were particular components or functions problematic?
The above set of questions & topics is certainly not exhaustive; rather, it offers a starting roadmap. Keep in mind that the objective of the initial self-evaluation is only to identify areas of strength & weakness, not to devise actions. Once you have a list of possibilities, various participants can investigate possible next steps later & report back to the team.
Approach-2: Iterative self-evaluation: Like iterative development techniques, we have iterative self-evaluation reviews. The project need not be fully completed before a review of what has happened so far is considered. In fact, interim reviews will capture ideas while they are fresh in your mind.
As experts say, “If you don’t recognize the problem, it’s tough to fix it.” Thus efficient software testing managers take time during even the busiest tests to step back & review. A few simple reviews can really pay off. They may lead to testing modifications that will increase your efficiency & effectiveness at hunting down bugs.
a) Post-planning Reviews: After your test plan has been developed, test cases have been written, & execution has begun, you can gather your entire team for a review of the planning & preparation activities. This will help outline what to pay attention to the next time through the planning cycle. It might also identify short-term actions that the team can take to enhance the current test. When a group of testers gets together, they are bound to generate new ideas on how to tackle test planning & preparation.
A review of the current schedule & whether modifications are needed is generally a good area to explore. The end date may be beyond the test team’s control, but internal checkpoints owned by the team can often be adjusted. Perhaps now that test execution is in progress, some assumptions made during planning that were doubtful can be revisited. By stepping back to take a fresh look, you might see an adjustment to testing order or overlap that can help the overall efficiency of the test. A few such tweaks might actually improve the end date, or at least make it achievable.
b) Interim Execution Reviews: Doing a review of successes & failures while the test is in progress is also an effective way to identify gaps “on-the-fly.”
# What if the set of scenarios developed during test planning is not able to find the anticipated volume of important defects?
# Should you use a different technique for driving out the bugs or shift attention to another feature of the software?
# If one new technique is uncovering lots of defects, should it be expanded to other areas?
These interim execution reviews not only help with early identification of weaknesses in the test, but also point out methods that have been unusually fruitful.
Many test teams get stuck executing the original plan of record. If it is not exposing important defects, it makes no sense to stick with it.
Software testing teams should make their test plans dynamic so that they can rapidly change their approach. If the need to shift strategies becomes clear, the test team must be able to dynamically change the execution plan & have management’s support in doing so. To enact a dynamic plan change, you may need to have a rapid review & approval process in place. Alternatively, you can anticipate the need for dynamic adjustments up front & make provisions for them by including time for artistic testing in your plan. The important thing is to find a way to keep your testing flexible. Move quickly & be ready to change your plan to find those bugs.
c) Interim Defect Reviews: Even while you are in the middle of a test, it can be useful to look for trends in the defects found thus far. An interim defect review of an entire software package or any of its components can point out soft spots. This certainly doesn’t have to be a formal event. It can be as easy as scanning the list of bugs & identifying the affected areas. Based on that, the development & test teams may decide to take additional actions, such as adding code inspections, revisiting earlier tests, or expanding stress testing.
The characteristics of the defects can also relay a message to the team. Are the problems in mainline function, recovery processing, or installation? Each area can yield unique actions. For example, if it’s recovery processing that’s unstable, then a review of the various resource serialization points within the code could produce a new set of error injection scenarios that were previously overlooked.
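As a minimal sketch of such an interim defect scan, a short script can group the open defect list by component & flag apparent soft spots. The record fields, sample data, & threshold below are illustrative assumptions, not part of any standard tool; in practice the records would be exported from the team’s defect-tracking system.

```python
from collections import Counter

# Hypothetical defect records exported from a defect tracker.
defects = [
    {"component": "recovery", "type": "serialization"},
    {"component": "recovery", "type": "serialization"},
    {"component": "recovery", "type": "error-injection"},
    {"component": "install",  "type": "configuration"},
    {"component": "mainline", "type": "functional"},
]

def soft_spots(defects, threshold=3):
    """Return components whose defect count meets the threshold."""
    counts = Counter(d["component"] for d in defects)
    return {comp: n for comp, n in counts.items() if n >= threshold}

# "recovery" accumulates 3 defects & is flagged as a soft spot.
print(soft_spots(defects))
```

The same grouping can be repeated on the "type" field to see whether, say, serialization errors dominate, which would point toward the error-injection review described above.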
Technique-2: Performing Escape Analysis
No matter how comprehensive a test plan is, some problems will escape from one test phase to the next. Unfortunately, some problems will slip out to customers as well. Escapes are likely in any environment, but steps can be taken to limit them. One important technique is for software testing engineers to perform an analysis of the ones that have already slipped through.
Post-project & in-process escape analysis activities are critical means of driving test improvements. It is important to review the problems with some specific goals in mind. How can you attack that analysis? Most important is to look at the functions that are in error. Also, examine the type of defect. For example, is the error in a simple API, a complex serialization method, or a user interface?
a) Identification of the test phase that allowed the escape: Look for answers to questions like:
# Which test phase should have logically removed the defect?
# Was it a simple function verification test that was overlooked?
# Was it a multithreaded test missing from the system verification plan?
# Was a performance problem missed?
This is where each software testing team needs to set aside their ego & learn from experience. Looking at each defect objectively will help the test team improve the next time around.
Escape analysis is important across all software testing phases. Look for answers to questions like:
# How many defects that could have been caught during unit test instead leaked through to FVT?
# How many did the beta test customers find that could have been extracted earlier?
This type of analysis drives the improvements. The test team will identify actions they can take so that these specific problems do not come up again.
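The per-phase tally behind those questions can be sketched as follows. The phase names & defect records are hypothetical, standing in for whatever the team’s escape-analysis worksheet actually records; the key point is simply tagging each escape with the phase that should have removed it.

```python
from collections import Counter

# Hypothetical escape records: each defect is tagged with the phase
# that actually found it & the phase that should have removed it.
escapes = [
    {"id": 101, "found_in": "FVT",  "should_have_caught": "unit"},
    {"id": 102, "found_in": "FVT",  "should_have_caught": "unit"},
    {"id": 103, "found_in": "SVT",  "should_have_caught": "FVT"},
    {"id": 104, "found_in": "beta", "should_have_caught": "SVT"},
]

def escapes_by_phase(escapes):
    """Count escapes against the phase that should have caught them."""
    return Counter(e["should_have_caught"] for e in escapes)

# Two defects leaked past unit test, one past FVT, one past SVT.
print(escapes_by_phase(escapes))
```

A phase that accumulates a disproportionate count in this tally is the place to focus the improvement actions discussed above.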
b) Identification of trends: One missed defect in a particular functional area of the software might not be cause for much worry. But a trend in a component or a type of defect can be alarming. A trend can provide much more information than a single defect. It can help pinpoint soft spots. Once identified, these soft spots are where the software testing team should focus their improvement efforts.
Trends tell compelling stories. Customers seem to identify trends at a much higher level than testers do. They might point out that an entire function or component is error prone. If a customer gets a bad taste in their mouth due to problems, it will take a long time for them to get rid of it. Testers need to have that same perspective. If the software testing team can identify trends while the software is still in the development cycle, they can shield customers from ever seeing them. Identify trends as early as possible & focus on them quickly.
c) Identification of sources of escapes: The trends of escape identified above can now become important feedback for the test team. The team can map these into their test cases, tools, environments, processes, & methods to see what can be done differently the next time to prevent not only the defects that did escape, but others in the same class.
Of course, this can be a painful exercise. On the other hand, if the analysis identifies bugs that escaped because the team was missing some key hardware, that can create a powerful argument for additional test equipment the next time around. Thus software testing teams draw important advantage from the findings.
Technique-3: Customer Involvement
An excellent way to understand the shortcomings of your test efforts is to discuss them directly with actual customers. Customers are generally ready for a discussion on improvements in the vendor’s testing.
If the problems encountered by customers are significant, they will expect action by the software vendor to address the apparent lack of test coverage. But experience shows that customers also tend to be helpful in identifying specific gaps & helping to create solutions. In fact, a close working relationship with a particular customer can help not only them, but also the industry of which they are a member, or even the entire customer community. Analysis of a customer’s environment & their exploitation of the software will help both to identify what exposed the problem, & to formulate an action to take in response.
As a matter of fact, customer environments contain a wide combination of system software, middleware applications, different hardware devices, & end-user clients. These integrated environments challenge the test team to keep up with state-of-the-art testing techniques to simulate them. Understanding customers is a critical first step.
Conference calls, executive briefings, & on-site meetings with customers can help pave the way for a relationship that can continue to improve the effectiveness of both your test team & the customer. Being able to meet directly with their leading information technology specialists allows you to hear directly from the folks who feel the satisfaction or pain. This environment encourages building more comprehensive test scenarios & programs at both the vendor & the customer.
Technique-4: Communication between Test Teams
Continuous improvement is a byproduct of continuous communication. When different teams stay close in touch, they can build an ongoing improvement plan as the product is created.
In particular, software testing teams whose activities overlap should consider frequent communication sessions, especially during an active product development cycle, to ensure that they are passing observations, concerns, & solutions to one another. These sessions can be casual, or as formal as a weekly status meeting. Wherever the discussion takes place, such talks help break down the wall between test teams so they can feel comfortable leaning on one another. Some give-&-take between camps goes a long way toward promoting teamwork, & teamwork drives improvements in the product’s test.
Technique-5: Examining Best Practices
Another mechanism that helps a software testing team improve is the implementation of testing best practices. These practices could be anything from tools to processes to automation. Compile a list of possibilities, & then see how your team’s approaches compare. Mature software organizations usually have test teams that maintain a list of best practices. These lists may exist at the corporate level & at the local level as well. Less mature platforms & technologies may not yet have accumulated a significant number of best practices, or any at all.
How do you find best practices?
# Research studies are a good place to start. Some experts identify a set that includes functional specifications, formal entry & exit criteria, automated test execution, & “nightly” builds. Individual computing providers or test consultants also promote practices of their own that they have found useful based upon past experience.
# Another way is simply walking out of the office & talking to sister groups. This can provide you with a set of ideas you can benefit from almost immediately. The creation of test communities to identify & share best practices is another recommended action.
# Another approach is to “benchmark” your own software testing techniques & methodologies against other companies. Sitting down with teams from four or five other companies at one time & comparing notes across a cross-section of areas can highlight where your team stands.
Self-examination of how your team measures up to the state of the art, or to anything different, might prove beneficial. In fact, you may find that some of your own methodologies are worth considering as the “best” practices. You never know.
1) Change, though sometimes painful, is necessary for improvement. Combine a critical look at the way you are operating today with a discussion on possible changes.
2) Continuous improvement means continuous investment.
3) Investment in making the team more effective must be a stated goal of the top management.
4) The customer is the first one to tell us about the problems that impact their business.
5) An intelligent test manager is one who takes control of his/her testing objectives.
6) A successful & envied test manager is one who is strong enough to drive his/her test team towards sound practices, processes, & technical expertise.