Testing and Validation Issues
- Software development, prototyping, testing and validation,
software tools, ISO 9000-3.
- DRAFT Release 0.2, 10 December 1998
- Author: Kurt Fedra
Testing is the process of executing an evolving software product
(see the description of rapid prototyping as the main software
development methodology) in order to see if the results are in
compliance with the user expectations and technical specifications.
Testing, as a consequence, is a dynamic process and an integrated part of
subsequent prototyping cycles.
Inspection refers to the process of examining whether an evolving
software product conforms to its (technical) specifications, as far as
this can be determined statically, i.e., without executing the software.
While traditional testing through static inspection of the code can be an
effective method with more traditional languages like FORTRAN
(to which it is related historically), modern languages (like C++) and
object-oriented design (OOD)
with mostly dynamic constructs
(e.g., dynamic memory management, inheritance and instantiation, overloading,
event handling, callbacks) make this approach much less effective.
Dynamic approaches, and most importantly, source code (dynamic)
debugging, are not only more efficient, but in many cases the only possible
approach to the testing of complex C++ code.
As an additional (preliminary) step in the testing and validation,
cross-compilation will be used to ensure software quality in terms
of portability and machine independence.
With SUN Solaris 2.5 (native compilers) as the primary development platform
for ECOSIM, cross-compilation will include porting to at least HP-UX (10.2).
In addition, on both platforms, the GNU compilers (gcc) and the f2c
pre-processor for FORTRAN code will be used.
This is particularly important for the non-standardized C++ language.
Given the extremely high number of (continuous) parameters in a complex,
multi-model system like ECOSIM, exhaustive testing in terms of all possible
input combinations is simply impossible for combinatorial reasons: even 20
parameters with only ten test values each would already yield 10^20
combinations.
The testing and validation process must therefore use some basic guiding
principles to design effective strategies for an efficient and effective
testing process.
An effective strategy can be built around the concept of extreme-case
testing, which uses combinations of the maximum and minimum allowable
parameter (argument) ranges of individual system functions to check
the efficiency and reliability of
error handling procedures, error correction, and graceful recovery
from possible error conditions in the system, including its inputs.
ISO 9000-3 Testing and Validation
The test and validation parts of ISO 9000-3 concentrate on the test
documentation rather than the test procedure itself.
The standard requires, with heavy emphasis on the test plan:
- There may be several levels of testing, from the individual item to the
complete product or system, and different approaches to testing at the
different levels.
- The test plan should contain test cases, test data, and expected test
results.
- The test plan should prescribe the types of tests to be conducted (e.g.,
functional test, performance test).
- The test plan should describe the test environment, including specific
tools where applicable.
- Test readiness evaluation should examine user documentation, personnel
requirements, and the criteria for determining completion of testing.
Regarding the actual conduct of tests, ISO 9000-3 requires:
- Test results should be recorded as defined in the relevant specification.
- Any problems encountered during the tests should be documented, the
responsible developer notified, and the corrections tracked.
- Areas changed should be identified and re-tested (e.g., under a version
control system such as the Revision Control System (RCS) or the Source
Code Control System (SCCS)).
- Test adequacy and relevancy should be evaluated.
- Hardware and software configuration should be specified and documented.
This last requirement, the specification of the hardware and software
configuration, is in practice best met with a (user) requirements driven
approach to testing and validation.
Validation under ISO 9000-3 is even less specific:
The primary objective is to ensure that the software meets the requirements
as stated in the user requirements document, evolving through:
- test case selection and management
- test results checking (verification)
- test report preparation.