Next: Bibliography
Up: TeTra: Testability Transformation
Previous: Novelty, Impact, Timeliness and
  Contents
Project Management
The principal
work-packages are summarised below.
Scheduling and duration information for each can be found in the TeTra Gantt Chart.
WORK PACKAGE 01: Benchmark Evaluation Programs:
DELIVERABLE: Evaluation Criteria, Benchmarks
The project will start with the development of a set of programs
and associated unit-level testing problems.
The programs will be identified by the industrial
partners, and will be refined to act as exemplars of particular evolutionary testing problems.
This initial phase will be necessary to ensure that the team has available a wide spectrum of examples:
industrially relevant problem cases, intellectually challenging examples that
highlight particular features, and `worst case' pathological examples for evaluating the behaviour
of the algorithms to be developed at their boundaries of application.
WORK PACKAGE 02: Theoretic Framework:
DELIVERABLE: Report/paper
A theory of testability transformation will be established.
Testability transformations need not be equivalence preserving but must preserve
the sets of adequate test data which can be generated from a program and its transformed
version.
This will involve new notions of equivalence preservation, relevant to testability.
The TeTra project will therefore develop a novel program transformation theory to underpin the
algorithmic and tool development work. This will allow the transformations developed to be proved
correct, in much the same way that traditional transformations are proved correct with respect to
functional equivalence.
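The preservation requirement above can be stated schematically. The following formulation, and the notation in it, is a sketch of our own and is not taken from the work-package text: for a structural test adequacy criterion $c$, a transformation $\tau$ qualifies as a testability transformation when adequacy transfers back from the transformed program to the original.

```latex
\[
\forall p,\; \forall T:\quad
\mathit{adequate}_c\bigl(\tau(p),\, T\bigr)
\;\Longrightarrow\;
\mathit{adequate}_c\bigl(p,\, T\bigr)
\]
```

That is, any test set $T$ that is adequate for $\tau(p)$ is guaranteed adequate for $p$, even though $p$ and $\tau(p)$ need not be functionally equivalent.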
WORK PACKAGE 03: Algorithms:
DELIVERABLE: Report/paper
Based upon the theory and with regard to the benchmark programs,
algorithms for testability transformation will be developed.
The algorithms will focus upon transformations to alleviate problems associated with side effects, unstructured
control flow and reliance upon flag variables. These features are particularly prevalent in embedded systems of
the type with which DaimlerChrysler has most experience.
Such programs typically contain side effects (for efficiency),
poor structure (because they are often machine generated)
and
flags (because there are many exceptional
controller states which tend to be monitored by flag variables).
WORK PACKAGE 04: Initial Evaluation:
DELIVERABLE: Report/paper
Using the benchmark programs, the transformation algorithms
will be evaluated. In this work package, the transformation step of the algorithms
will be performed by hand on the smaller benchmark programs.
It is important to evaluate the algorithms in this way before committing
to the effort of prototype tool development. The
evaluation will motivate modifications to the algorithms.
Additional benchmark programs will also become available as a result of
on-going concurrent work on Evolutionary Testing by the collaborators.
WORK PACKAGE 05: Prototype Tool Development:
DELIVERABLE: Tools
To fully explore and evaluate the testability transformation algorithms arising from previous work-packages,
a prototype implementation will be required.
Implementation will use the FermaT transformation
workbench supplied by Software Migrations Ltd.
WORK PACKAGE 06: Full Evaluation:
DELIVERABLE: Report/paper, enhanced algorithms and tools
The prototype tools will be used to provide a thorough evaluation of the algorithms.
Since the algorithms aim to remove
impediments to evolutionary testing, the criteria for successful evaluation will be that test effort is reduced (fewer
generations required to achieve a benchmark level of coverage) and quality is increased (higher levels of coverage
obtainable).
The full evaluation stage will also measure the actual performance of the algorithms relative to the
predicted upper bounds on algorithmic complexity, as these are often found to
differ widely in other application areas for program transformation.
WORK PACKAGE 07: Horizons:
DELIVERABLE: Horizons report
In collaboration with the industrial and academic collaborators, areas for
extension of testability transformation will be identified.
Specific
areas to be considered will include
constraint-based test data generation and regression testing, which are not
addressed in the core
of the TeTra project, but for which testability
transformation is also likely to prove valuable.
Mark Harman, Department of Information Systems and Computing, Brunel University, Uxbridge, Middlesex, UB8 3PH.