system architecture comprising many individual components that are themselves standalone systems. Its objective is to provide due benefit to the stakeholders. A system of systems may include components such as software applications or services, communications infrastructure, and hardware devices.
Systems of systems are developed using a "Building Block" concept: individual component systems are integrated with one another so that entire systems can be created without having to develop applications from scratch. A system of systems frequently makes use of reusable software components, third-party applications, commercial off-the-shelf software, and distributed business objects.
Pros & Cons of the Systems of Systems concept: It can lower costs for development companies, but on the other hand the cost of testing is likely to rise considerably, for the following reasons.
# Greater complexity: Systems of systems are inherently complex. This is because of their varied system architectures, the different software development lifecycle models used to develop the individual applications, technical and functional compatibility issues, and so on.
It is a known fact that wherever there is more complexity there is more associated risk, and we can expect more defects in the product.
# Added time and effort for localizing defects: Within a system of systems, localizing a defect can be a great challenge. It can be highly time consuming and can require a great amount of effort, since a typical testing company generally does not have access to all the system components and therefore may not be able to carry out the detailed analysis or establish the monitors it would actually desire.
# Greater need for integration testing: The development of an individual system usually calls for an integration testing stage; with systems of systems, however, an additional "layer" of integration testing is needed at the intersystem level. This level of testing, usually known as system integration testing, often requires the creation of simulators to compensate for the absence of particular component systems.
# Higher management overheads: More effort often results from managing the testing among the many organizational entities involved in developing a system of systems. These could include various product suppliers, service providers, and suppliers that are perhaps not even directly involved in the project. This may give rise to a lack of a coherent management structure, which makes it difficult to establish ownership of and responsibility for testing.
Test analysts need to be aware of this when designing particular tests such as "end-to-end" tests of business processes. For example, when a user initiates a transaction, the technical and organizational responsibility for handling that transaction may change hands several times, and the transaction may be completed on systems that are entirely outside the control of the originating organization.
# Reduced overall control: Because we may not always have control over all system components, software simulations are commonly constructed for particular component systems so that system integration testing can be performed with some certainty. For the same reason, the test manager will also need to establish well-defined supporting processes, such as release management, so that software can be delivered to the testing team from external sources in a controlled manner.
The "Test Analyst" will be required to work within the framework of these supporting processes so that, for instance, tests are developed against defined releases and baselines.
Safety-Critical Systems:
A safety-critical system is one that may endanger life or lead to other severe losses in the event of failure. Normally the criticality of a project is estimated during initial risk management activities or as part of the project's feasibility study.
The "Test Analyst" and "Technical Test Analyst" must be aware of how the project's criticality has been assessed and, in particular, whether the term safety-critical applies.
For safety-critical systems, it is the higher level of rigor with which we need to perform testing tasks that shapes our testing strategies. Some of those tasks and strategies are listed here:
# Performing explicit safety analysis as part of risk management. Failure Mode and Effects Analysis (FMEA) can support this task.
# Performing testing according to a predefined software development lifecycle model, such as the V-model.
# Conducting fail-over and recovery tests to ensure that software architectures are correctly designed and implemented.
# Performing reliability testing to achieve reduced failure rates and higher levels of availability.
# Taking measures to ensure that safety and security requirements are fully implemented.
# Showing that faults are correctly handled.
# Demonstrating that specific levels of test coverage have been achieved.
# Creating full test documentation with complete traceability between requirements and test cases.
# Retaining test data, results, or test environments (possibly for formal auditing).
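The traceability between requirements and test cases mentioned above is often kept as a simple matrix mapping each requirement to the tests that cover it. A minimal sketch (requirement and test-case identifiers are hypothetical):

```python
# Minimal sketch of requirement-to-test-case traceability.
# Requirement and test-case identifiers below are hypothetical.

traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # not yet covered by any test case
}

# An auditor-style check: every requirement must map to at least one test.
uncovered = [req for req, tests in traceability.items() if not tests]
print("Uncovered requirements:", uncovered)  # prints: Uncovered requirements: ['REQ-003']
```

In a formal audit, a report like this demonstrates that coverage of every requirement can be traced, and it immediately exposes gaps.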
Usually these issues are covered by standards that may be specific to particular industries, as in the following examples:
1) Space industry: Various agencies specify different methods and techniques depending on the criticality of the software.
2) Food and drug industry: Similarly, certain structural and functional test techniques are recommended for medical systems.
3) Aircraft industry: Likewise, specific agencies define the levels and types of structural coverage to be demonstrated for avionics software, depending on a defined level of software criticality.
The "Test Manager" conveys the level of safety criticality of the system and software under test and whether particular standards need to be applied. The "Test Analyst" ensures that the tests are designed to comply with such standards and can support the "Test Manager" by demonstrating compliance not only within the testing project but to the external auditors as well.
Real Time and Embedded Systems:
These types of systems usually contain components whose execution times are critical to the correct functioning of the system. For example, they may be responsible for calculating data at high update rates (e.g., 100 times per second), responding to events within a minimum time period, or monitoring processes.
Software that needs to function in real time is often "embedded" within a hardware environment. This is the case with many day-to-day consumer items such as cell phones, and also with safety-critical systems such as aircraft avionics.
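An update rate of 100 times per second, as in the example above, translates to a 10 ms cycle deadline. A minimal sketch of checking measured cycle times against such a deadline (the timings below are invented, e.g. as if read from a trace log):

```python
# Minimal sketch: checking task execution times against a real-time deadline.
# A 100 Hz update rate means each cycle must finish within 10 ms.
# The measured times below are illustrative only.

DEADLINE_MS = 1000 / 100  # 10 ms per cycle at 100 updates per second

measured_cycle_times_ms = [7.2, 8.9, 9.5, 11.3, 6.8]  # e.g. from a trace log

misses = [t for t in measured_cycle_times_ms if t > DEADLINE_MS]
print(f"{len(misses)} of {len(measured_cycle_times_ms)} cycles missed "
      f"the {DEADLINE_MS:.0f} ms deadline")
```

A single missed cycle is a defect in a hard real-time system, which is why such checks are run over long traces rather than averages.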
Real-time and embedded systems pose great challenges for the "Technical Test Analyst", who needs to provide expert services such as:
# Applying specific testing techniques to detect, for example, "race" conditions.
# Specifying and performing dynamic analysis with tools.
# Providing a testing infrastructure that allows the embedded software to be executed and its results retrieved.
# Developing and testing simulators and emulators for use in testing.
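As an illustration of the first point, the classic race condition is an unprotected read-modify-write on shared data: concurrent increments can interleave and lose updates. A minimal sketch in Python:

```python
# Minimal sketch of a race condition: an unprotected read-modify-write
# on a shared counter can lose updates when threads interleave.

import threading

counter = 0
lock = threading.Lock()

def increment(times, use_lock):
    global counter
    for _ in range(times):
        if use_lock:
            with lock:
                counter += 1
        else:
            counter += 1  # not atomic: read, add, and write can interleave

def run(use_lock):
    global counter
    counter = 0
    threads = [threading.Thread(target=increment, args=(100_000, use_lock))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print("with lock:   ", run(True))   # always 400000
print("without lock:", run(False))  # may be less than 400000 when updates are lost
```

Because the unlocked run only fails when threads happen to interleave at the wrong moment, such defects are intermittent, which is exactly why specific techniques and dynamic analysis tools are needed to detect them.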