product bugs & known security issues. You won't analyze this information immediately, but you need some familiarity with it & concrete data at hand to refer to later. With this you will have a much better idea of what security issues exist & a list of the ones you specifically want to consider.
The next step is to carefully examine both the functional & design specifications for your system, both for the current release & the last few releases. You are looking specifically for security-related designs & areas where security seems to have been neglected or not given the attention it deserved. Your objective is to compile a list of areas that may be of particular concern.
After this you can review the existing functional or security test plans & the results of any available test cases. Here you can find out what has been tested before & what hasn't. If only functional testing has been done before, you can probably see easily where there are possibilities of security vulnerabilities. If security testing has been done before, there will be more data to go on that will tell you what has been tested, what cases were run, & even what vulnerabilities were discovered.
The next step is to get familiar with any existing test automation: what is being done & how. A given method of software testing may or may not actually test what it is supposed to. Sometimes shortcuts around user authentication are put in place for the convenience of testing but are accidentally left in when the system ships, & can then be used by an attacker. Knowing how the automation is or has been done will let you know what to check for in the released system.
Development of Security Test Cases:
The major focus of this discussion is the development of the actual test cases that you will run to security-test your product. As you examine each area, you need to write test cases that will give you an unambiguous answer to the question of whether a test passes or fails.
The test case outline process requires drilling down through the behavior to arrive at an atomic statement of behavior that can be tested. Some other test case systems are not as stringent & leave more room for ambiguity, so be careful to be extremely clear on what exactly you are testing & what results you expect to see as a pass result.
Here you shouldn't really focus on how you will carry out these test cases. You really want to develop the best test cases possible, & later you will determine how to run them. If you start to worry right now about how to run a case, you will subconsciously narrow your test cases to accommodate the "how" instead of finding out how to run a test case you already have.
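The idea of an unambiguous pass/fail result can be made concrete. A minimal sketch, where `is_safe_filename` is a hypothetical stand-in for a component of the system under test; the point is that each case states its expected result explicitly, leaving no room for interpretation:

```python
import re

# Hypothetical system-under-test: a filename validator such as your
# product might expose. It stands in for the real component here.
def is_safe_filename(name: str) -> bool:
    # Reject empty names, path separators, & parent-directory references.
    return bool(name) and not re.search(r"[/\\]|\.\.", name)

# Test cases written so the pass/fail answer is unambiguous:
# the expected result is stated, not implied.
def test_rejects_parent_directory_reference():
    assert is_safe_filename("../etc/passwd") is False  # PASS iff rejected

def test_accepts_plain_name():
    assert is_safe_filename("report.txt") is True      # PASS iff accepted

test_rejects_parent_directory_reference()
test_accepts_plain_name()
```

Note that the test says nothing about *how* the input reaches the validator; that "how" is decided later, as described below.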
Understanding of the Known Vulnerabilities in the system:
The notes prepared in the earlier steps now become items you want a test case or test cases to specifically address. Reviewing these known vulnerabilities will also trigger ideas on other items to test. The objective of these test cases is to verify that each known vulnerability is no longer reproducible & has been removed or mitigated.
It is also essential to develop test cases for known vulnerabilities in your system's dependencies & in the systems that the system being tested interacts & interfaces with. The objective of these test cases is to verify that even if those known vulnerabilities still exist & are unmitigated in the other products, your system mitigates them so they do not become vulnerabilities for your system as well.
You can also look at the known vulnerabilities in competing products & write test cases for each applicable one to ensure that these vulnerabilities are not also present in your system.
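A regression case for a known vulnerability can be keyed directly to the original report, so it is obvious which issue it guards against. A sketch, assuming a hypothetical bug ID & a stand-in for the fixed component (the real fix & numbering would come from your own bug tracker):

```python
# Hypothetical example: bug #1234 reported that an overlong field crashed
# the parser. This regression case verifies the issue no longer reproduces.
MAX_FIELD = 256

def parse_field(data: str) -> str:
    # Stand-in for the fixed component: overlong input is now rejected
    # cleanly instead of triggering the old crash.
    if len(data) > MAX_FIELD:
        raise ValueError("field too long")
    return data.strip()

def test_bug_1234_overlong_field_is_mitigated():
    try:
        parse_field("A" * 10_000)
        rejected = False
    except ValueError:
        rejected = True
    assert rejected  # PASS iff the known vulnerability is mitigated

test_bug_1234_overlong_field_is_mitigated()
```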
Understanding of the Unknown Vulnerabilities:
Once you have developed a test case for every known vulnerability, you need to branch out & create test cases to search for vulnerabilities that are currently unknown.
These tests constitute the majority of the tests run in any security test pass. Unlike the known-vulnerability cases, where you know exactly what you are looking for, these cases are designed to look for something that may or may not exist. The easiest way to do this is to go through each of your initial lists in turn & try to think of what vulnerabilities may be possible for each item in that list.
There are some potential vulnerabilities that are of nearly universal concern for any system. These include items like buffer overruns, plain-text storage of sensitive data, etc.
Then there are potential vulnerabilities that depend more on how your particular system functions. If your system has a dependency on a database, potential vulnerabilities may include SQL injection. But if your system has no Web site interface, there may be no need to test for cross-site scripting (XSS) vulnerabilities.
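For a system with a database dependency, an injection case can be made very concrete. A sketch using Python's built-in sqlite3 module, where the schema & the `lookup` function are illustrative assumptions standing in for your system's data layer; the case checks that a classic injection payload is treated as literal data:

```python
import sqlite3

# In-memory database standing in for the system's real data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup(name: str):
    # Parameterized query: user input is bound, never spliced into SQL.
    cur = conn.execute("SELECT secret FROM users WHERE name = ?", (name,))
    return cur.fetchall()

# Classic injection payload: must return no rows, not every row.
payload = "x' OR '1'='1"
assert lookup(payload) == []             # injection attempt yields nothing
assert lookup("alice") == [("s3cret",)]  # legitimate lookup still works
```

If `lookup` instead built its SQL by string concatenation, the same payload would match every row, & the first assertion would fail, which is exactly the unambiguous result the test case needs.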
Prioritizing the Tests:
Now that you have written your test cases, you need to be able to prioritize them so that you have a clear idea of what cases are considered to be the most important.
You need to know this now to be able to communicate this information both in your test plan & on the schedule for the system. If you are told that you have a set amount of time to run security tests, you will be able to then look at your highest priority test cases & determine how many of them you can run in that amount of time.
You will also want to be able to report the time needed by test case priority, e.g., priority 1 test cases will take four weeks to run, priority 2 cases will take another three weeks, etc.
This is also the way you can keep your testing on track & ensure that you are running those cases first that most need to be run, instead of just those that are easiest to run.
Test case prioritization is one of the harder test skills to teach, because so many of the factors that contribute to it are not easily quantified. You really have to do the best job you can, &, after each release of your system, continue to revisit & adjust the criteria you use for your test case prioritization.
At the end of this stage, your test case outline or test case documentation will have a priority listed for each test case. We can use a scale of 1 to 3 where the ratings have the following meanings:
1 = Must be tested in this release
2 = Should be tested in this release
3 = Can be tested in this release if time permits
The test plan should include the fact that the test cases are prioritized to allow better scheduling.
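The priority field can live directly in the test case outline, which also makes the time-by-priority report trivial to produce. A minimal sketch with hypothetical case IDs & time estimates (the field names are assumptions, not a prescribed format):

```python
# Each outline entry carries its priority (1-3, per the scale above)
# & an estimated run time. All figures are illustrative.
cases = [
    {"id": "SEC-001", "priority": 1, "est_hours": 8},
    {"id": "SEC-002", "priority": 1, "est_hours": 4},
    {"id": "SEC-003", "priority": 2, "est_hours": 6},
    {"id": "SEC-004", "priority": 3, "est_hours": 2},
]

# Total the time needed by priority, as the plan & schedule require.
totals = {}
for case in cases:
    totals[case["priority"]] = totals.get(case["priority"], 0) + case["est_hours"]

for prio in sorted(totals):
    print(f"priority {prio}: {totals[prio]} hours")
```

Given a fixed amount of test time, the same totals tell you immediately how far down the priority list you can get.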
Using Threat Modeling or Risk Assessment Charts:
If you or your team have conducted threat modeling for your system, these threat models are probably the best source of test case priorities. The test cases can be set to have the same priority as the vulnerability in the threat model, as a starting point.
If you have to narrow down the pool of test cases, you can pick, for each vulnerability, the subset of test cases you think is most likely to reveal the vulnerability's presence. Those keep the original priority & the rest of the equivalent cases are downgraded to a lower priority.
Sometimes you will find that you have written test cases for vulnerabilities that were not considered in the threat model. These should be added to the threat model & prioritized.
Using Own Experience & Hunches:
There is always a place for personal experience & hunches. You should never let this become a case of "test everything as the highest priority," but you may decide that you want to make a few of the test cases in a certain area a higher priority than they would normally be to appease your hunch that there is an exploitable vulnerability.
There is also a place for using your own experience. You may have information on areas of your system or its dependencies that seem to be rife with security defects. You may have experience with the work of the team member who wrote a particular subsystem, whose code always seemed to be careful & safe.
These types of information may cause you to adjust test case priorities.
Discussing with the Developers for Special Concerns:
Another source of information that may affect the prioritization of test cases is obtained by talking to the other team members, particularly the developers. These people may have their own concerns or hunches on the best places to test first for security vulnerabilities.
Development of the Test Plan of Attack:
After you have written the test cases & prioritized them, you need to put more work into the test plan. Right now you have a very thorough test case outline, but a very generic test plan.
To flesh out this test plan, you need to determine how you are going to approach the test cases you have documented. This is when you determine the "how" of running your test cases.
Start with your test case outline &, in priority order, begin to add information on the steps needed to run that test. Then decide how that test can be run. Any methods or tools you are planning to use need to be noted in the test case & a summary description of the method, tool, or automation noted in the test plan.
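One way to capture this is to record the steps & any method or tool in the outline entry itself, so the test plan's tool summary can be derived straight from the outline. A sketch where the field names, case, & tool are all hypothetical:

```python
# An outline entry after the "how" has been decided.
case = {
    "id": "SEC-007",
    "priority": 1,
    "title": "Server rejects oversized upload",
    "steps": [
        "Generate a 2 GB upload body",
        "POST it directly to the upload endpoint, bypassing the UI",
        "Verify the server rejects it with an error, without crashing",
    ],
    "method": "direct HTTP request",           # how the case is run
    "tool": "in-house upload script",          # summarized in the test plan
}

outline = [case]
# The test plan's summary of methods & tools, derived from the outline.
tools = sorted({c["tool"] for c in outline})
print(tools)
```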
Using By Design Test Methods:
Some tests can be run via the system's normal access methods, such as a user interface (UI) or console command. Remember, if there is any other way to run these tests, that method should be used as well, so that you can bypass the UI checks just as many attackers will.
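The reason for bypassing the UI can be shown in a few lines. A sketch with hypothetical client-side & server-side validators: even though the UI blocks the bad input, the security test must also drive the server-side entry point directly, since that is the path an attacker will take:

```python
def server_submit(amount: str) -> bool:
    # Server-side entry point: must revalidate everything, because
    # attackers can call this path directly, skipping the UI.
    return amount.isdigit() and int(amount) <= 10_000

def ui_submit(amount: str) -> bool:
    # Client-side check, as the UI would enforce it before calling the server.
    if not amount.isdigit():
        return False
    return server_submit(amount)

# By-design path: the UI blocks the bad input...
assert ui_submit("-5") is False
# ...but the test must also hit the server directly, as an attacker would.
assert server_submit("-5") is False
assert server_submit("999999") is False
assert server_submit("100") is True
```

If `server_submit` relied on the UI having already validated the input, the direct calls above would expose that gap immediately.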
Using Commercial Tools:
There are quite a few commercial tools that can make some types of testing easier to accomplish. Using them can help you test aspects of your system that you would otherwise be unable to access directly, or create an artifact (such as a malformed file) that may expose a security vulnerability.
Reviewing the Plan & Test Cases:
Now that you have your test documents in order, you should give them a formal review so that other people can offer valuable feedback. It should be very clear to everyone just what is & is not included in the plan, as well as what your schedule is.
Review with Other Disciplines:
Be sure to include disciplines other than test as they can have valuable feedback or insight to offer, especially when it comes to any assumptions you may have made.
The other place these reviewers can really help is in finding areas you may have overlooked in the system being tested, or in recommending tools they know about that can fill one or more of your needs.
Review with Other Software Testing Engineers:
Also review both documents with other testers to get their feedback on missing or invalid test cases. They may also be able to point out ways to increase efficiency or speed.
Share the Plan with Others:
Lastly, you can put the test plan & test case outline out on a share or send it out as e-mail to anyone who wants to read it. This will help to publicize security testing as a whole but also may give other testers some examples they can use to develop their own security test efforts.
Executing the Test Passes:
Now the planning is done, & it's time to actually run the tests. In addition to the standard testing process of run the case, verify the result, & either file a bug or move to the next case, you can also do the following:
# Note any failures & their root causes, then look for any similar cases that could fail with a similar root cause, & if those have not been included in the test pass already, you can mark them to be run
# Correct any false information that appears in your test documents
# Note any additional cases that you can think of as you are in the process of testing
# Mark any cases that took a huge amount of time
# Track how long you estimated for the test pass & how long it actually took so that you can continue to perfect your estimates
# Note when you were blocked from being able to make progress by something outside your control & for how long
All of these make the postmortem & any future planning for this system's security testing easier.
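The bookkeeping above can be as simple as one record per case per pass, from which the estimate-versus-actual comparison, the blocked time, & the unusually slow cases all fall out. A minimal sketch with purely illustrative figures:

```python
# One record per test case per pass (all numbers are illustrative).
pass_log = [
    {"id": "SEC-001", "est_hours": 4, "actual_hours": 11,
     "blocked_hours": 2, "result": "fail"},
    {"id": "SEC-002", "est_hours": 4, "actual_hours": 3,
     "blocked_hours": 0, "result": "pass"},
]

est = sum(r["est_hours"] for r in pass_log)
actual = sum(r["actual_hours"] for r in pass_log)
blocked = sum(r["blocked_hours"] for r in pass_log)
# Flag cases that took a huge amount of time (here: over 2x the estimate).
overruns = [r["id"] for r in pass_log if r["actual_hours"] > 2 * r["est_hours"]]

print(f"estimated {est}h, actual {actual}h, blocked {blocked}h")
print("cases taking a huge amount of time:", overruns)
```

Reviewing a log like this at the postmortem shows at a glance where the estimates drifted & where outside blockers cost time.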
Final Analysis of the Results:
After the system is released, it always pays to sit down & look through what happened during this release with a goal of improving the process. This means looking for:
# Items that worked well so that they can be continued
# Items that worked, but not well, so they can be revisited for improvement
# Items that didn't work or were really painful so they can be avoided in the future.
You should always conduct a postmortem of the results for your own improvement, but it is better still to distribute a report among your team, so that more people can benefit from the insight & the lessons learned.