
ECOSIM Telematics Applications Project:
Deliverable D10.01

ECOSIM Validation Plan

Keywords:
Verification, Validation, Validation Sites, Validation Scenarios, Criteria, Quantitative Measurement, Questionnaire, Cost-benefit Analysis.
Final Release 30 June 1997
Authors: Kurt Fedra and Lothar Winkelbauer
Note:
The ECOSIM Validation Plan is based on the original document Version 0.1, prepared by: Mr Paul Brucciani and Dr. Tim Wolfenden, Smith System Engineering Limited.





Synopsis
Programme name: Telematics Applications Programme
Sector: Environment
Project acronym: ECOSIM
Contract number: EN 1006
Project title: Ecological and environmental monitoring and simulation system for management decision support in urban areas
Deliverable number: D10.01
Deliverable title: ECOSIM Validation Plan
Deliverable version number: 1.0
Work package contributing to deliverable: 10
Nature of the deliverable: Report
Dissemination level: Public
Contractual date of delivery: DRAFT: Month 5 (May 1996); FINAL: Month 18 (June 1997)
Actual date of delivery: 30 June 1997
Authors: Dr. Kurt Fedra and Lothar Winkelbauer, Environmental Software & Services GmbH
Project technical co-ordinator: Dr. Kurt Fedra, Environmental Software & Services GmbH
tel: +43 2252 633 050
fax: +43 2252 633 059
E-mail: kurt@ess.co.at

Table of Contents

  • Technical abstract
  • Executive summary
  • 1 Introduction
  • 1.1 General
  • 1.2 Validation phase
  • 1.3 Overview of ECOSIM
  • 1.4 Structure
  • 2 Applications, users and impacts
  • 2.1 Overview of applications
  • 2.2 Target user groups
  • 2.3 Development phase schedule
  • 2.4 Expected impacts
  • 3 Verification and demonstration plan
  • 3.1 Overview
  • 3.2 ECOSIM validation architecture
  • 3.3 Definitions
  • 3.4 Verification stage
  • 3.5 Demonstration stage
  • 3.6 Summary
  • A Keyword list
  • B Bibliography
  • C Glossary




Technical abstract

This validation plan is a working document that reflects the current status of the ECOSIM project team's validation planning. It will therefore be revised on a regular basis until it is implemented.

Version 1.0 of the validation plan is based on the preparatory guide supplied to each project team prior to the workshop on validation methods held on 29 May 1996. In addition, it incorporates recommendations made by the Commission's team of experts at the 2nd and 3rd Environment Telematics Evaluation Workshops and by the ECOSIM partners.





Executive summary

The validation plan presents the ECOSIM applications which will be conducted at the validation sites of Berlin, Athens and Gdansk by the users and the user support partners. The applications focus primarily on the use of near real-time meteorological and pollution data and on environmental models which monitor and forecast the impact of human activities (eg traffic, dumping of waste) on the environment.

The validation plan identifies the impacts of ECOSIM applications, the individuals or groups who will be affected, and the indicators which could be used to measure the impact. It presents the criteria which will be used to select the impacts for validation, provides a brief description of the methods which will be used to measure them, and includes the questionnaire to be used for evaluation at the three evaluation sites: Berlin, Athens and Gdansk.

The validation plan is divided into a verification plan, which describes how the performance of ECOSIM will be measured against the users' requirements, and a demonstration plan, which describes how the impact of ECOSIM will be assessed.





1 Introduction

1.1 General

ECOSIM will be a support system to investigate and forecast pollution levels in urban areas. By allowing the effects of industrial developments or new roads to be examined quickly and easily, ECOSIM enables public authorities to ensure that their urban plans fully consider environmental impacts. ECOSIM will also be able to forecast pollution levels - typically over the next 24 hours.

The project develops and demonstrates an integrated environmental monitoring and modelling system for management decision support in environmental planning for urban areas. Traffic-generated air pollution (including photochemical smog), coastal water quality, and groundwater are the initial application domains.

ECOSIM will be based on a set of computer models ranging from very simple screening tools to sophisticated 3D dynamic models, such as those which allow ozone levels to be calculated from road traffic emissions. It combines these models with up-to-date measurements of current meteorological conditions and pollution by linking directly into local databases and pollution measurement stations. ECOSIM also links the various domains, such as surface water, coastal water and air, to ensure that as complete a picture as possible of environmental conditions can be predicted.

ECOSIM makes use of very high performance computers whenever required and uses the latest methods for handling maps and similar data to ensure that its results can be easily translated into practical measures for pollution control.

ECOSIM involves participants from Austria, Germany, Greece, Italy, Poland, and the UK, and works with the cities of Berlin, Athens, and Gdansk as validation sites and initial end users.

After having undergone final refinements based on the insights gained in the validation phase, the ECOSIM system will become an important tool for various local and regional authorities, thus generating significant European Added Value.





1.2 Validation phase

The objective of the validation phase is to validate the operation of the ECOSIM demonstrator within the context of a variety of environmental domains.

The validation activity will comprise two elements: verification and demonstration, and will take place at the three demonstration sites: Berlin, Athens and Gdansk. The verification process will seek to confirm that the demonstrator implements the specified user requirements. The demonstration activity will allow urban authorities to use, and be trained on, the demonstrator. This will facilitate feedback on the validity of the agreed requirements, their method of implementation, new requirements and the general acceptability of the system for operational use. It will also provide opportunities for publicising the demonstrator.

Together with the local users, the detailed criteria for the success of the verification activity will be defined. These will include comparisons of the ECOSIM results with historical and observation data (where available) as well as with results derived independently (eg, available from the literature).

The validation plan has been produced by ESS after consultation with the ECOSIM partners and the Commission. Validation will be conducted at each site by the users (SSUB, MEG, COG) and their support partners (GMD, NTUA, ENVECO, TUG), in conjunction with the system suppliers (ESS, GMD, AUT).

The same validation process will be applied at all three evaluation sites: GMD will conduct the validation at SSUB, NTUA and ENVECO will perform the validation at MEG, and TUG will be responsible for the validation at COG. Afterwards, the results will be compared and integrated across the three sites in a joint effort of the support partners and the system suppliers (ESS, GMD, AUT, NTUA, ENVECO, TUG).





1.3 Overview of ECOSIM

1.3.1 Objectives

The objective of the ECOSIM project is to implement an integrated environmental management decision support system. At a high level, the objectives are to:

  • obtain a system whose utility is sufficiently demonstrated that existing and/or new users (urban authorities) are willing to fund further development of the demonstrator;

  • obtain a system which is capable of up-grade (perhaps within 2 years) to a commercial product or set of products;

  • demonstrate that greater insight has been obtained by the validation users into their own particular environmental issues;

  • demonstrate that the intended levels of integration between data sources and (sub)domains have been achieved.

More specific objectives, related to its modes of operation, are that:

  • a demonstrator should be constructed which allows models and data to be used and/or analysed on a day-to-day basis for urban strategic planning. As a consequence, the system should deliver results interactively and, for major planning simulations, within a 24-hour time scale, based on selected, well-defined test scenarios;

  • a demonstrator should be constructed which allows operational forecasting of pollution levels based on recent, measured data: within a few hours (at most) of receiving the latest data from (potentially remote) monitoring stations, the system should be able to produce forecasts for a time horizon of typically 24 hours.

It is not intended to support real-time operation as part of the demonstrator, although the migration path to this capability should be clear from the results of the project and the major technical risks and costs understood.

    1.3.2 Methodology

    A key element of ECOSIM is to integrate models and data between the various environmental domains and sub-domains. The methodology adopted by ECOSIM is to identify and implement the couplings sufficient to provide the necessary added value of integration. This will often be via physical coupling of models but in other cases will be through the GIS front-end (eg, by displaying data from both domains as separate overlays on the same base map). In practice, coupling between coastal water and other domains is unlikely to be strong and will not warrant physical integration (this will be verified within the framework of the Athens case study by analysing the influence of the sea surface temperature on the wind field over the Greater Athens area).





    1.4 Structure

    The main body of this validation plan is divided into two further sections as follows:

  • section 2 defines the applications and targeted user groups relevant to ECOSIM, the development schedule, the expected impacts of the project and the impacts to be validated;
  • section 3 is split into three principal parts: the first outlines the validation architecture, the technical requirements and the important definitions used in the assessment process; the second presents the verification plan; and the third contains the questionnaire to be used for the evaluation process in the demonstration stage.




2 Applications, users and impacts

    2.1 Overview of applications

    The ECOSIM project will target its validation activities through the definition of a number of scenarios or applications. Each application defines specific objectives for the use of ECOSIM at one of the validation sites. Planned applications are outlined in table 2-1 below.

    Application identifier | Validation site/City | Summary description of application
    B1 | Berlin | Scenario analysis for studying the influence of traffic control measures on the regional ozone concentration
    A1 | Athens | Scenario analysis for studying the influence of traffic control measures on the regional ozone (and, where possible, NO and NO2) concentration
    A2 | Athens | Scenario analysis for studying the influence of the sea breeze phenomenon on the regional ozone (and, where possible, NO and NO2) concentration in Athens
    A3 | Athens | Scenario analysis for studying the effects of sanitary landfills on groundwater pollution
    G1 | Gdansk | Case studies for data assimilation and pre-processing to support the building of monitoring networks in Gdansk
    G2 | Gdansk | Limited scenario analysis for studying the influence of traffic control measures on the regional ozone concentration in Gdansk
    G3 | Gdansk | Limited scenario analysis for studying the effects of waste management scenarios on pollution in ground, surface and coastal water

    Table 2-1: Summary of applications

    2.1.1 Berlin

  • Scenario B1: Scenario analysis for studying the influence of traffic control measures on the regional ozone concentration on the basis of available traffic emission data.

    The monitoring network for air pollution in Berlin and the meteorological measurement data network will be connected to the ECOSIM server. The general functionality of the ECOSIM server will be tested and user training will be performed. The scenario analysis will consist of the comparison of a pre-defined standard case with an emission-reduction scenario using the MEMO/DYMOS model system. The standard case will serve as a reference situation, defining the main characteristics of a typical mid-summer day in the Berlin-Brandenburg region. To this end, weather conditions will be selected which normally lead to high ozone concentrations and which have a high ozone-formation potential (moderate wind velocity, high temperatures and insolation).

    Because traffic is the main cause of high near-surface ozone concentrations in the Berlin-Brandenburg region, the emission-reduction scenario will consider traffic control measures which influence the amount and the composition of traffic-emitted ozone precursor substances. The time period for simulations will be 24 hours. The necessary input data will be pre-processed in collaboration with the end user SSUB. SSUB will also be consulted regarding the concrete measures, their effects on emission behaviour and the choice of an appropriate episode. The verification of the simulation results will be performed using the measurement data of the monitoring network connected to the ECOSIM server. Based on the results of the scenario analysis, the influence of traffic control measures on the regional ozone concentration will be evaluated and decision alternatives will be derived.

    2.1.2 Athens

  • Scenario A1: Scenario analysis for studying the influence of traffic control measures on the regional ozone concentration: the actions of B1 adapted to Athens. Since there is no traffic monitoring network in Athens, the demonstrator will have to be validated against the already adopted traffic control measures.

  • Scenario A2: Scenario analysis for studying the influence of the sea breeze phenomenon on the regional ozone concentration in Athens: the ECOSIM server will be fed with historical water temperature and salinity data, and with atmospheric parameters (wind speed, air temperature and relative humidity) provided by the atmospheric model. Studies of the influence of the sea breeze on the regional ozone concentration levels will be conducted.

    The focus for the collection of data on pollutants in Greece is a division of the Ministry of the Environment, City Planning and Public Works - PERPA. It is currently undergoing modernisation of its environmental data banks through implementation of a workstation and PC network linked over the HELLASPAC network. The focus for initial implementation has been the main offices of PERPA in Athens together with local sub-networks. It provides access to a wide variety of data (SO2, NOx, ozone, CO and black smoke) through a monitoring network of 10 automatic measuring stations.

  • Scenario A3: Scenario analysis for studying the effects of sanitary landfills on groundwater pollution: the available data in Athens on landfill effluents (leachate) will be loaded onto the ECOSIM server. The pre-processed data will be used to create input data for the groundwater model, the objective being to quantify the associated flux of water contaminants under a range of landfill management strategies.

    For the purposes of scenario A3, a limited database of measurements of groundwater quality exists. Measurements of leachates at the landfill sites are made every few months.

2.1.3 Gdansk

  • Scenario G1: Case studies for data assimilation and pre-processing to support the building of monitoring networks in Gdansk: all available environmental data of the Gdansk region, including remote sensing data, will be collected and pre-processed with the ECOSIM server. The data will be used to create strain maps and to derive strategies for building monitoring networks for air and water pollution.

  • Scenario G2: Limited scenario analysis for studying the influence of traffic control measures on the regional ozone concentration in Gdansk: the available traffic and air pollution data will be used to create strain maps as input for the model system defined in B1. On the basis of these incomplete data, rough simulations will be carried out, permitting the study of the impact of traffic reduction measures on the regional ozone concentration. From the results, strategies to reduce the ozone concentration will be derived.

  • Scenario G3: Limited scenario analysis for studying the interplay between pollution in ground, surface and coastal water: the available water pollution data in Gdansk will be used to create strain maps as input for the model system defined in A3. From the results, actions will be derived to identify trends and to find strategies for municipal water management in the Gdansk region.

    Priorities within the project lie mainly in the scenarios focusing on air pollution. Specifically, work under scenario A3 will be limited to the feasibility stage. In addition, all scenarios associated with Gdansk will be heavily constrained by the lack of existing monitoring networks and historical data (pollution of air and water is measured sporadically at a few points and no further use is made of these data in decision support systems etc).





    2.2 Target user groups

    Targeted users fall into two groups:

  • direct users comprise the public environmental authorities at the three validation sites ie, MEG, SSUB, COG; and the user support partners (GMD, ENVECO, TUG, NTUA). The public authorities do not have the capacity to carry out an extensive amount of project work in addition to their regular tasks. For their ECOSIM work, they are therefore supported by user support partners - these are technically-based organisations that have a long-term working relationship of mutual trust with the direct end users and therefore are aware of most of their needs and requirements. The mapping between users and their support partners is: GMD supports SSUB; NTUA and ENVECO support MEG; and TUG supports COG;

  • indirect users/affected individuals or groups: ie, those who will benefit from ECOSIM through, inter alia, better pollution forecasts and planning decisions. As well as planning authorities, they include environmental groups, climate researchers, meteorological forecasters, local government & health authorities, policy makers, other scientists and the residents of Berlin, Athens and Gdansk.





2.3 Development phase schedule

    Application Verification Demonstration
    B1 July 96 - December 97 January 98 - August 98
    A1 July 96 - December 97 January 98 - August 98
    A2 July 96 - December 97 January 98 - August 98
    A3 July 96 - December 97 January 98 - August 98
    A4 July 96 - December 97 January 98 - August 98
    G1 January 97 - December 97 January 98 - August 98
    G2 January 97 - December 97 January 98 - August 98
    G3 January 97 - December 97 January 98 - August 98
    Table 2-2: Development schedule

    The above schedule is based on an assumed three-month extension of the project due to delays introduced by the red-flag procedure and the delay with the Polish (INCO) contract.

    Verification will start after the main ECOSIM core functionality implemented in WP5 has been augmented by site-specific components. This will include facilities for uploading existing data-sets and interfaces to existing environmental monitoring and modeling networks as defined in the Project Plan (D01.01).

    The demonstration phase will start after the demonstrator has been built and tested, in January 1998. During this phase, it is intended to operate ECOSIM for a period of approximately 3 months at the validation sites, during which time local users will be trained in the use of ECOSIM.





    2.4 Expected impacts

    2.4.1 General impacts

    The project offers the tools for a rational response to urban environmental challenges and has economic and social impact at four levels. At each level, the effect of carrying out the project in the way proposed is to act as a multiplier, greatly enhancing the effectiveness of the investment:

  • Technological level: There is relatively little new technological development in the proposed work but its effect is multiplied by being carried out within the context of the project. Both the strong user input and the constraints of the existing tools and techniques will ensure that the development elements of the work have much greater impact than they would if carried out in isolation.

  • Synthesis level: Most of the technological work involves bringing together existing tools and techniques from several domains. Relatively small investments in this way result in great improvements in the utility and effectiveness of the tools working together, and in the much greater insight obtained from a synthetic view of an ecosystem.

  • Exploitation level: The project involves the immediate exploitation of its results in three cities, each of which has a different level of pre-existing capability. There is thus a multiplier of three within the project itself, in addition to any subsequent commercial exploitation.

  • Commercial level: The project team is firmly located in the commercial world of exploiting advanced technology. The commercial incentives and robust team management will ensure that maximum use is made of the operational capabilities that emerge from the work.

2.4.2 Specific impacts

    General impacts will affect a wide range of user groups across many economic and social sectors (eg the impacts of better policy formulation), whereas specific impacts will have more significant and measurable effects on smaller user groups and are thus of more interest in this context. Specific impacts comprise:

    • Improvements in:
      • response time to environmental queries/problems
      • (cost and time) efficiency of plans/decisions
      • integration and consistency of plans/decisions
      • participation in planning and decision making
      • communication of environmental information
    • Reliability and timeliness of predictions:
      • air quality predictions
      • groundwater quality predictions
      • coastal water quality predictions
      • overall acceptance/use of model predictions
    • Improved understanding:
      • of cause/effects: air quality
      • of cause/effects: groundwater quality
      • of cause/effects: coastal water quality
    • Formulation (efficiency, effectiveness) of:
      • emission control strategies
      • traffic control strategies
      • waste management strategies
      • environmental monitoring strategies
      • overall effectiveness of environmental policies
      • overall effectiveness of environmental management
    • Environmental improvements:
      • observed air quality
      • observed groundwater quality
      • observed coastal water quality
      • overall perceived environmental quality
    Table 2-3 Expected impacts

    These impacts will enable the validation of ECOSIM within a variety of environmental domains which would be of specific interest to one or more validation sites and will test each of the models currently being integrated into ECOSIM. Impact validation will take place during the demonstration phase.





    3 Verification and demonstration plan

    3.1 Overview

    Validation forms part of the evaluation process and is divided into:

  • verification: to determine the technical quality of ECOSIM's applications by testing and comparing the technical performance of individual functions against the users' requirements, and by conducting user acceptance tests to measure the performance of the system as a whole with respect to users' requirements;

  • demonstration: to determine the practical value of ECOSIM's applications to users by, for example, comparing the benefits and costs of ECOSIM applications to those of existing applications.

The verification and demonstration plans presented in this section are based on the table provided in the validation plan preparatory guide supplied by the Commission.

    Many of the criteria which will be used to validate the impacts of ECOSIM applications are common to all applications. These criteria are presented as they appear in the validation plan framework and are discussed in sections 3.3.1 to 3.3.5. Sections 3.4 and 3.5 then deal with the assessment methods which will be used in the verification and demonstration stages.

    3.2 ECOSIM validation architecture

    The ECOSIM demonstrator is based on a client-server architecture, taking advantage of HTTP (the hypertext transfer protocol). The main server provides the basic user interface, controls the user dialogue, displays information, and connects to external information resources (monitoring data, databases, simulation models) as required.

    This communication is based on the public HTTP protocol and can use the Internet or dedicated connections (such as ISDN phone lines) as the physical communication layer. HTTP also forms the basis of World Wide Web browsers like Mosaic or Netscape. The following diagram summarizes this architecture:

    [Diagram: ECOSIM client-server architecture]

    The ECOSIM demonstrator will be implemented within an object-oriented paradigm. As the central element, OBJECTS have encapsulated methods available to access and utilize the information resources.

    The information resources are provided by several logical servers in the distributed ECOSIM design, including:

    • database servers that store and convert the various data streams, primarily from monitoring networks, as well as the simulation model results; using a common data format such as HDF, in particular for the latter, will also facilitate the publication of this information through the Internet/WWW;

    • model servers that are responsible for the execution of the numerical models and may be implemented on parallel high-performance computers or workstation clusters;

    • a GUI server, which includes the embedded visualization tools, GIS and hypertext, all resident in the main ECOSIM server.

    Logical clients include the main ECOSIM server (which is itself a client of the database and model servers); they also include the (distributed) consoles (workstation consoles, X terminals, or PCs with browser software) that can access the ECOSIM (GUI) server. Unfortunately, the terminology is not consistent: within the context of the X Window System, the server refers to the display system, whereas the client denotes the actual application program that provides the data stream (requests sent to the server) displayed by the server.
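
    As an illustration of this client-server dialogue, the following minimal sketch (in Python, with a hypothetical server name and CGI entry point, neither taken from the project documents) shows how a distributed console could request a model run from the main ECOSIM server over plain HTTP, exactly as a WWW browser would:

        from urllib.request import urlopen
        from urllib.parse import urlencode

        ECOSIM_SERVER = "http://ecosim.example.org"   # hypothetical main server

        def request_model_run(model, scenario):
            """Ask the main server to trigger a run on a logical model server."""
            query = urlencode({"model": model, "scenario": scenario})
            # "/cgi-bin/run" is an illustrative CGI entry point, not the real one.
            with urlopen(ECOSIM_SERVER + "/cgi-bin/run?" + query) as response:
                return response.read()

        # A console would then display the returned page or data stream.
        page = request_model_run("MEMO", "B1")
        print(page[:200])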

    Based on this generic architecture, the validation architectures for the three validation sites have been set up as follows:

    [Diagrams: validation architectures for the Berlin, Athens and Gdansk sites]

    The minimum technical requirement at each validation site for the verification and validation phase is access to (at least) one SUN workstation, which must:

    • have at least 32 MB RAM (64 MB recommended) and sufficient (128 MB) swap space;
    • run under Solaris 2.4 or higher;
    • run CDE (or ctwm, uwm) as the window manager;
    • support high-resolution (1280x1024) 8-bit color graphics;
    • be connected through a LAN/WAN TCP/IP connection to:
      • a name server (needed for an external server only; one can also be run locally);
      • a "server" running an http daemon with the cgi and server demo executables.

    Later this year an HP version of the demonstrator will also be made available.
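
    As a sketch only (the host name below is a placeholder, not a project machine), the network-related requirements above - name resolution and a TCP/IP connection to a host running an http daemon - could be checked before installation with a few lines of Python:

        import socket

        def check_http_host(host, port=80, timeout=5.0):
            """Return True if host resolves and accepts a TCP connection on port."""
            try:
                addr = socket.gethostbyname(host)   # requires a working name server
            except socket.gaierror:
                print("name resolution failed for " + host)
                return False
            try:
                with socket.create_connection((addr, port), timeout=timeout):
                    print("%s (%s) accepts connections on port %d" % (host, addr, port))
                    return True
            except OSError as err:
                print("cannot connect to %s:%d: %s" % (host, port, err))
                return False

        check_http_host("ecosim-server.example.org")   # hypothetical server name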

    3.3 Definitions

    3.3.1 Functional testing of applications

    Functional testing involves determining that the technical functionality (functional design) of ECOSIM is correct and that the output from the modules is in agreement - within defined limits - with experimental and/or field data. Since the ECOSIM project does not focus on model development itself, only limited resources (amount of data, computing resources) are available for this task, and the results must be interpreted with these limitations in mind.

    The functional design ensures that the software requirements have been understood and can be implemented. It specifies the software at the functional level, ie what the software will do, and is not concerned with implementation details. It includes functional tests whereby the software can be tested at that level.

    Functional testing of applications will be performed to measure the quality of the results derived from ECOSIM and will be the predominant activity of the verification stage. Functional testing will take place for all applications except G1 (a case study examining the impact of increased environmental data availability in Gdansk). It will primarily involve the measurement of ozone levels and the assessment of planning decisions.
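
    The core of such a functional test is a point-by-point comparison of module output against experimental or field data within defined limits. The sketch below illustrates the idea; the tolerance values and the ozone figures are invented for illustration and are not project-defined limits:

        def within_limits(model, observed, abs_tol=10.0, rel_tol=0.25):
            """Check each model value against its observation.

            A point passes if the deviation is below abs_tol (same unit as
            the data) or below rel_tol as a fraction of the observed value."""
            failures = []
            for i, (m, o) in enumerate(zip(model, observed)):
                dev = abs(m - o)
                if dev > abs_tol and dev > rel_tol * abs(o):
                    failures.append((i, m, o, dev))
            return failures

        # Hourly ozone concentrations in ug/m3 (made-up numbers for illustration).
        model_o3    = [62.0, 95.0, 140.0, 171.0]
        observed_o3 = [58.0, 90.0, 155.0, 130.0]

        for i, m, o, dev in within_limits(model_o3, observed_o3):
            print("hour %d: model %.1f vs observed %.1f (deviation %.1f)" % (i, m, o, dev))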

    3.3.2 User acceptance

    Within the context of this project, user acceptance is considered to be a reflection of the extent to which ECOSIM fulfills direct users' requirements. ECOSIM's appeal is largely a product of 'how well the system works', which depends on a set of criteria common to almost all the applications, defined as:

  • presenting the information required by users in order to fulfill their day-to-day tasks (eg, relating to environmental planning issues);

  • presenting information in a format that is convenient to users (ie, at the correct time and frequency, and superimposed on other data such as road maps);

  • the 'look and feel' of the interface (eg, is it simple to use, is the GUI acceptable, etc);

  • robustness;

  • flexibility.

    Acceptance tests will verify that the software is properly installed and is operating correctly. An example of an acceptance test (assuming that the functional design tests are complete) would be to compare ozone level predictions from ECOSIM with actual measurements taken from "ground truth" sites. (Due to the complexity of the phenomena involved, in some cases these comparisons may only be possible in a qualitative manner; relevant literature will be used where necessary to support the verification process.)

    Acceptance testing will take place for each application. Information will be gathered during the verification phase by surveying the opinions of a representative sample of users. Acceptance testing will not include the opinions of individuals or groups indirectly affected by ECOSIM. Unfulfilled requirements will be identified and, if necessary, modifications will be made to ECOSIM before the demonstration stage.
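
    A sketch of the acceptance test described above, comparing ozone predictions with "ground truth" measurements, might summarise the deviation as bias and root-mean-square error; the numbers and any acceptance thresholds below are illustrative assumptions only:

        import math

        def bias_and_rmse(predicted, measured):
            """Summary statistics for a prediction/measurement comparison."""
            n = len(measured)
            bias = sum(p - m for p, m in zip(predicted, measured)) / n
            rmse = math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n)
            return bias, rmse

        predicted = [70.0, 110.0, 150.0, 120.0]   # ECOSIM forecast, ug/m3
        measured  = [65.0, 118.0, 142.0, 131.0]   # ground-truth station data

        bias, rmse = bias_and_rmse(predicted, measured)
        print("bias %+.1f ug/m3, RMSE %.1f ug/m3" % (bias, rmse))
        # eg, accept the run if abs(bias) and RMSE stay below agreed thresholds.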

    3.3.3 Impact analysis

    Impact analysis will be conducted on the validated impacts selected after consultation with ECOSIM partners.

    General measurement techniques will comprise 'hard' and 'soft' methods of assessment for measuring the selected impacts for each application. Hard methods will comprise the comparison of ECOSIM modeling and forecast results with independently validated data, to determine the quality (ie, timeliness, accuracy, regularity, etc) of the results generated by ECOSIM. Soft methods will comprise qualitative or subjective information-gathering methods such as interviews and questionnaires, although this information may, in some cases, be expressed quantitatively.

    Analysis of impacts will take place during the demonstration stage and will comprise examination of both direct and indirect impacts as discussed in section 2.4.2. Impact analysis will include assessments of the social and financial costs and benefits of ECOSIM.

    3.3.4 Reference cases

    Reference cases will be used to determine the relative worth of ECOSIM applications with respect to those that currently exist (or to establish their worth in cases where no other applications exist at present). To facilitate comparisons between applications, reference cases will be structured in the same way as far as possible. Their structure will comprise:

  • comparison of application quality, ie, timeliness, accuracy, level of detail, regularity etc;
  • analysis of the implications of differences in application quality;
  • comparison of the cost of existing and ECOSIM applications;
  • analysis of obstacles to implementing ECOSIM applications;
  • conclusions about the value of ECOSIM applications.
    Reference case studies will take place during the demonstration stage, when ECOSIM applications can be considered to be at a pre-operational stage of development. Reference case studies may be constrained by the availability of data. For this reason, the effort associated with such studies will be focused primarily on Berlin and, to a lesser extent, Athens.

    In Berlin, a monitoring network for air quality has been run by SSUB for 20 years. There are currently 45 measuring stations for different air pollutants and two additional stations for meteorological measurements. Most of the stations are arranged on a grid of approximately 4 km x 4 km. Measurements are made of levels of dust, VOC, SO2, NOx, CO and O3. In addition to its position, each measurement station has certain key features recorded within a database: its general location (eg inner city/suburban), type of district (eg residential, industrial), traffic levels (in bands of vehicles per day) and type of private heating (in terms of level of SO2 emissions).

    In Athens (and particularly Gdansk) the availability of data is more sporadic and, as a consequence, validation will focus on those applications related to air pollution. In Greece, the focus for the collection of data on pollutants is PERPA, a division of the Ministry of the Environment, City Planning and Public Works, whose monitoring network of 10 automatic measuring stations and modernised environmental data banks are described in section 2.1.2.

    Where possible, previous reference cases relevant to ECOSIM will also be used for validation [2,3].

    As noted in section 2.1, only a limited database of groundwater quality measurements exists for the purposes of scenario A3, and measurements of leachates at the landfill sites are made every few months; work under scenario A3 will therefore be limited to the feasibility stage. Applications associated with Gdansk will be heavily constrained by the lack of existing monitoring networks and historical data.

    Reference cases and methods for measuring the level of confidence in the measurement of the indicators of each impact will be defined when the impacts to be validated are finalised.

    3.3.5 Criteria for success

    The overall objectives of the ECOSIM project are to implement an integrated environmental management decision support system and, in particular, to:

  • obtain a system whose utility is sufficiently demonstrated that existing and/or new users (urban authorities) are willing to fund further development of the demonstrator;

  • obtain a system which is capable of up-grade (perhaps within 2 years) to a commercial product or set of products;

  • demonstrate that greater insight has been obtained by the validation users into their own particular environmental issues;

    These objectives will comprise the general criteria used to determine the success of the project as a whole. The criteria for success to be used during the verification stage of the validation process will focus on the performance of ECOSIM and on user acceptance. These criteria are common to almost all applications (except G1). They include:

  • efficiency improvements over existing applications;
  • reliability;
  • ease of use of ECOSIM;
  • information quality improvements over existing applications;
  • the flexibility vis-a-vis modeling of information from different environmental domains.
    The success criteria which will be used during the demonstration phase relate to the measurable, direct impacts of ECOSIM applications on end users and indirect beneficiaries. These criteria are more application-specific:

  • in Berlin ECOSIM should allow the authorities to forecast, on a day-to-day basis, the levels of ozone and classical air pollutants arising from traffic emissions, and to use such results to inform urban planning decisions (eg new roads) and traffic control measures to reduce regional ozone concentrations (eg limiting inner-city traffic, forcing the use of catalytic converters);

  • in Athens ECOSIM should allow the authorities to study the levels of ozone and classical air pollutants for similar reasons to Berlin. It should also be able to examine the effects of the sea breeze on the ozone concentration;

  • in Athens ECOSIM should provide additional insight for the Athens authorities into the problems caused by leachate from the Ano Liosia landfill site and specifically its effects on pollution within ground, surface and coastal water;

  • in Gdansk ECOSIM should provide general assistance to the urban authorities in interpreting the limited existing pollution monitoring data and assist in the development of future monitoring and modeling strategies.

  • More "global" criteria such as willingness of users to use ECOSIM (perhaps compared with existing applications) to fulfill their day to day objectives also offer a meaningful way of measuring the success of the demonstration stage of ECOSIM.

    3.4 Verification stage

    The assessment objectives of the verification stage are to verify the O3 formation and water flux models which are central to all applications except G1. These models will be verified by testing that they function within the context of their application. The assessment objectives are categorised according to the method of assessment, for each application, in table 3-1. For brevity, the analysis for the user acceptance assessment is not shown below.
    Category of assessment: Testing physical functioning of application

    Applications: B1, A1, A2, A3, G1, G2, G3
    Assessment objective: To test the physical/electronic integration of ECOSIM with existing environmental monitoring networks and data sources
    User groups involved in validation: Public environment authorities in Berlin, Athens and Gdansk; researchers; other potential users such as meteorological forecasting organisations, environmental scientists and environmental data centres

    Applications: B1, A1, A2, A3, G1, G2, G3
    Assessment objective: To test the integration of external network and internal (historical) data with existing models within the environmental domains of interest at each site
    User groups involved in validation: as above

    Applications: B1, A1, A2, A3, G1, G2, G3
    Assessment objective: To test the implementation of the coupling between models and data across the (sub)domains of interest at each site
    User groups involved in validation: as above

    Table 3-1 (1 of 2) Application assessment objectives - verification stage

    Category of assessment: Testing physical functioning of application

    Applications: B1, A1, A2, G2
    Assessment objective: To verify models of the concentration of ozone formed as a result of traffic emissions
    User groups involved in validation: Environmental groups, climate researchers, meteorological forecasters, regional planners, local government & health authorities, policy makers, other scientists

    Applications: A2
    Assessment objective: To verify models of the effect of sea breezes on ozone concentration
    User groups involved in validation: as above

    Applications: A3, G3
    Assessment objective: To verify models of the flux of water contaminants
    User groups involved in validation: as above
    Table 3-1 (2 of 2) Application assessment objectives - verification stage

    The method by which each application will be validated during the verification stage is presented in table 3-2.

    Assessment category: Testing physical functioning of application

    Applications: B1, A1, A2, A3, G1, G2, G3
    Assessment objective: To test the physical/electronic integration of ECOSIM with existing environmental monitoring networks and data sources
    Indicators: access, privileges, transmission speed, reliability, robustness, bandwidth
    Method of measurement: Links between networks will be established and performance will be measured using test data to ensure that data transmission rates, reliability and access privileges comply with the network integration plans.

    Applications: B1, A1, A2, A3, G1, G2, G3
    Assessment objective: To test the integration of external network and internal (historical) data with existing models within the environmental domains of interest at each site
    Indicators: file type, file format, file integrity, function of application
    Method of measurement: Data will be loaded onto the appropriate servers and tested to ensure that they are registered, complete, correctly formatted and can be retrieved.

    Applications: B1, A1, A2, A3, G1, G2, G3
    Assessment objective: To test the implementation of the coupling between models and data across the (sub)domains of interest at each site
    Indicators: quality of modelling results across combinations of (sub)domains, repeatability of results
    Method of measurement: The models using data from combinations of models and domains will be run and the results will be compared to theoretical predictions and/or independently validated data. For example, in Athens, the results of calculations of ozone concentrations obtained by coupling the atmospheric flow (MEMO) and coastal (UA) models will be compared with the results from the MEMO model alone and with other independently validated data.

    Table 3-2 (1 of 2) Validation of assessment objectives - verification stage
    Assessment category: Testing physical functioning of application

    Applications: B1, A1, A2, G2
    Assessment objective: To verify models of the concentration of ozone formed as a result of traffic emissions and the effect of wind (in Athens only)
    Indicators: ozone concentration, sensor location, traffic volume & speed, air temperature and wind speed data, wind direction data
    Method of measurement: Historical scenario analysis: the application will be run on historical data in Berlin, Athens and Gdansk (subject to the availability of data in Gdansk); the results will be compared with measured data and the quality of the evaluation results will be assessed. Future scenario analysis: future scenarios in urban planning and decision-making support will be run in Berlin, Athens and Gdansk (subject to the availability of data in Gdansk) and success will be measured against the final success criteria identified in section 3.3.5.

    Applications: A3, G3
    Assessment objective: To verify models of the flux of water contaminants
    Indicators: hydrology (porosity, permeability, flux rates, etc), geology (lithology, structure, faulting, folding, etc), rainfall, leachate composition, land use, waste treatment facilities, flora & fauna, algal activity, humidity, air temperature, human activity (fishing, agricultural practice, transport, etc), leisure activities
    Method of measurement: Limited historical scenario analysis: the application will be run in Athens and Gdansk subject to the availability of historical data; the results will be compared with measured data and the quality of the evaluation results will be assessed. Limited future scenario analysis: future scenarios in urban planning and decision-making support will be run in Athens and Gdansk and success will be measured against the final success criteria identified in section 3.3.5.

    Table 3-2 (2 of 2) Validation of assessment objectives - verification stage

    3.5 Demonstration stage

    The assessment objectives of the demonstration stage are to determine the value of the ECOSIM application impacts selected for validation. Impact analysis will involve the use of qualitative and quantitative methods to estimate the economic and social costs and benefits of ECOSIM applications and to compare them to existing information systems. To this end, the following questionnaire has been developed; it will be distributed to as many persons/groups as possible at the three evaluation sites. To allow for an overall comparison within as well as between the evaluation sites, a single list of questions has been compiled which will be used at all three evaluation sites.

    Questionnaire

    Based on the MARC Checklist of the MEGATAQ project (TE 2007), the following questionnaire has been developed to serve as a basis for the quantitative and qualitative evaluation of the ECOSIM system at the three evaluation sites.

    Each item of the questionnaire represents a specific concept which needs to be evaluated on the basis of its measurable indicators. The evaluation can be done either manually by an expert or using a rule-based expert system (eg, the expert system environment of the ACA ToolKit of ESS GmbH). Using an expert system makes it possible to state explicitly in the knowledge base how the measurable indicators of each concept are related and allows each evaluation step to be traced. Thus even the evaluation of qualitative concepts can be made more transparent, and the whole evaluation process becomes more objective.

    A description of the functionality and syntax of the expert system environment developed by ESS GmbH can be found at http://www.ess.co.at/toolkit/xps.html.
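
    To illustrate the idea (the rules and thresholds below are invented for this sketch; the actual knowledge base would be written for the ESS expert system environment referenced above), a single questionnaire concept can be scored from its measurable indicators by explicit, traceable rules:

        def assess_difficulty_of_learning(training_hours, consultations):
            """Map two indicators of 'Difficulty of Learning' to an assessment.

            Returns the verdict and a trace of the indicator values used,
            so each evaluation step remains transparent."""
            trace = ["training: %d h" % training_hours,
                     "expert/trainer consultations: %d" % consultations]
            if training_hours <= 4 and consultations <= 2:
                verdict = "no effort required"
            elif training_hours <= 16:
                verdict = "with some effort"
            else:
                verdict = "very difficult"
            return verdict, trace

        verdict, trace = assess_difficulty_of_learning(training_hours=8, consultations=5)
        print(verdict)          # -> with some effort
        for step in trace:      # the traceability the text asks for
            print(" ", step)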

    The following questionnaire does not represent the final version which will be used for the evaluation process; it is considered a living document which will be continuously refined and adapted.

    1. Impact on Context-of-use

    1.1 User characteristics
    Concept: Compatibility

    Question:
    Is the new system compatible with user characteristics (skills, attitudes, etc.)?

    Measurable Indicators:

    Requirement for:

    • new skills, experiences, qualifications, abilities, knowledge
    • new physical attributes
    • attitudes to task / organisation
    • team-orientation / willingness to co-operate
    • attitudes to new (information) technology
    • innovativeness / media awareness
    Assessment:

    absolutely compatible
    compatible
    new but manageable
    not compatible

    1.2 Job, Tasks, Activities
    Concept: Job Content

    Question:
    In which respect does the system change the job content?

    Measurable Indicators:

    • hours of work
    • task goal
    • task duration
    Assessment:

    increased
    unchanged
    decreased
    Concept: Task Complexity

    Question:
    To which degree did the system change task complexity?

    Measurable Indicators:

    • hours of work
    • task goal
    • task duration
    Assessment:

    increased
    unchanged
    decreased

    1.3 Group task characteristics
    Concept: Task Interdependence

    Question:
    How did the system influence the interdependence of the task?

    Measurable Indicators:

    • number of people involved
    • number of datasets involved
    • number of end-users
    Assessment:

    increased
    unchanged
    decreased

    1.4 Organisational Environment
    Concept: Departmental Distribution

    Question:
    How did the system influence the departmental distribution of the task?

    Measurable Indicators:

    • type of organisational structure (functional, product-oriented, matrix, project team, ...)
    Assessment:

    increased
    unchanged
    decreased
    Concept: Hierarchical Distribution

    Question:
    How did the system influence the hierarchical distribution of the task?

    Measurable Indicators:

    • type of organisational structure (functional, product-oriented, matrix, project team, ...)
    Assessment:

    stronger
    unchanged
    weaker
    Concept: Formalisation

    Question:
    How did the system influence the formalisation/standardisation of work processes, rules, procedures?

    Measurable Indicators:

    • efficiency of task completion
    • implemented quality assurance procedures
    Assessment:

    increased
    unchanged
    decreased
    Concept: Organisational Culture

    Question:
    How did the system influence the overall organisational culture?

    Measurable Indicators:

    Assessment:

    improved
    unchanged
    deteriorated
    Concept: Technical Change

    Question:
    When introducing the system, did changes occur in other existing technical tools, standards or platforms?

    Measurable Indicators:

    • hardware purchases/updates
    • software purchases/updates
    Assessment:

    complete change
    major changes
    minor changes
    no changes

    2. Impact on Interaction

    2.1 Task interaction

    2.1.1 Task performance
    Concept: Task Performance

    Question:
    How did the introduction of the system influence the performance of the tasks?

    Measurable Indicators:

    • speed of completion of tasks
    • frequency of errors/breakdowns in task performance
    • task duration
    Assessment:

    increased
    unchanged
    decreased

    2.1.2 Usability
    Concept: Mental Effort

    Question:
    Does the new situation require more or less mental effort to perform the task?

    Measurable Indicators:

    • speed of completion of tasks
    • task duration
    • qualification of persons performing the task(s)
    Assessment:

    more effort
    same effort
    less effort
    Concept: Difficulty of Learning

    Question:
    How difficult is it to learn to use the system?

    Measurable Indicators:

    • duration of training
    • number of expert/trainer consultations during use
    • size of the manual
    Assessment:

    very difficult
    with some effort
    no effort required
    Concept: User Satisfaction

    Question:
    How satisfied are the users with the system?

    Measurable Indicators:

    • comparison with previous situation
    • ease of use
    • well defined and ready to use results
    Assessment:

    very satisfied
    satisfied
    no opinion
    dissatisfied
    Concept: System Control

    Question:
    Do users feel they can control the system?

    Measurable Indicators:

    • robustness of system's logical structure
    • ease in managing site-specific user tasks
    Assessment:

    completely
    to some degree
    to little degree
    not at all
    Concept: Understanding

    Question:
    Do users find it easy to understand the functioning of the system?

    Measurable Indicators:

    • ease of use
    • average time spent for completing a task
    Assessment:

    very easy
    to some degree
    not at all
    Concept: Attraction

    Question:
    Do users find the system attractive and exciting to use?

    Measurable Indicators:

    • number of uses
    • number of people interested in the system
    • number of people interested in the results produced by/with the system
    Assessment:

    very attractive
    OK
    boring

    2.1.3 Network Performance
    Concept: Network Performance

    Question:
    Is the quality of the network performance adequate?

    Measurable Indicators:

    • bandwidth
    • latency
    • NetPerf results
    • frequency of delays of information retrieval and exchange

    (A sketch of how these indicators could be measured follows this item.)
    Assessment:

    very good
    good
    adequate
    not adequate
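
    As a minimal sketch of how the latency indicator could be measured (the host name is a placeholder; bandwidth would be measured with a bulk transfer or a tool such as NetPerf instead):

        import socket
        import time

        def tcp_latency_ms(host, port=80, samples=5):
            """Average TCP connect time to host:port over several samples."""
            total = 0.0
            for _ in range(samples):
                start = time.perf_counter()
                with socket.create_connection((host, port), timeout=5.0):
                    total += time.perf_counter() - start
            return 1000.0 * total / samples

        # Hypothetical server name, for illustration only.
        print("mean connect latency: %.1f ms" % tcp_latency_ms("ecosim-server.example.org"))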

    2.1.4 Computational Performance
    Concept: Computational Performance

    Question:
    Is the quality of the computational performance adequate?

    Measurable Indicators:

    • duration of simulation runs
    • ratio between network and computational performance
    • accessible computer power
    Assessment:

    very good
    good
    adequate
    not adequate

    2.1.5 Information Access
    Concept: Information Access

    Question:
    Does the system make access to relevant information easier?

    Measurable Indicators:

    • efficiency/duration of task completion
    • network performance
    Assessment:

    easier
    unchanged
    worse
    Concept: Information Availability

    Question:
    Are the users more satisfied with the availability of information?

    Measurable Indicators:

    • amount of information accessible
    • information processing efficiency of the system
    Assessment:

    very satisfied
    satisfied
    not satisfied
    Concept: Information Quality

    Question:
    Are users more satisfied with the quality of the information?

    Measurable Indicators:

    • appropriate content
    • refinement/detail
    • reliability
    • up-to-date
    Assessment:

    very satisfied
    satisfied
    not satisfied

    2.2 Communication

    2.2.1 System Use
    Concept: System Use

    Question:
    How frequently is the system used, and how frequently are the separate functionalities used?

    Measurable Indicators:

    • number of system runs
    • number of runs of specific system components
    Assessment:

    very often
    often
    sometimes
    not at all

    2.2.2 Media Choice
    Concept: Media Choice

    Question:
    Did changes in the use of other methods / communications media occur?

    Measurable Indicators:

    • communication media used
    • communication methods
    Assessment:

    yes
    no

    2.2.3 Information Exchange
    Concept: Information Exchange

    Question:
    Did the use of the system lead to changes in the intensity of information exchange?

    Measurable Indicators:

    • number of information exchange events
    • nature and content of the information exchange
    Assessment:

    yes
    no

    2.3 Introduction Process

    2.3.1 User Participation
    Concept: User Participation

    Question:
    Did the users participate in the design of the system during the introduction phase?

    Measurable Indicators:

    • number of users involved in the system design
    • number of users involved in the introduction process
    Assessment:

    all of them
    many
    few
    none

    2.3.2 User Information
    Concept: User Information

    Question:
    To which degree have the users received information about the system during the introduction phase?

    Measurable Indicators:

    • number of users involved in the introduction process
    Assessment:

    very much
    much
    little
    not at all

    2.3.3 User Training
    Concept: User Training

    Question:
    How much training have the users received on the use of the system during the introduction phase?

    Measurable Indicators:

    • number of users involved in the introduction process
    • number of training sessions/hours provided in the introduction process
    Assessment:

    very much
    much
    little
    not at all

    3. Outcomes

    3.1 Improvements

    3.1.1 Response Time
    Concept: Response Time

    Question:
    How much did the response time to environmental queries/problems improve?

    Measurable Indicators:

    • turnaround time of information requests
    • speedup of the overall decision process
    Assessment:

    very much
    much
    little
    not at all
    actually got worse

    3.1.2 Efficiency of plans/decisions
    Concept: Plan Efficiency

    Question:
    How much did the efficiency (cost/time) of plans/decisions improve?

    Measurable Indicators:

    • average time required
    • average cost
    Assessment:

    very much
    much
    little
    not at all
    actually got worse

    3.1.3 Integration and consistency of plans/decisions
    Concept: Plan Consistency

    Question:
    How much did the integration and consistency of plans/decisions improve?

    Measurable Indicators:

    • number of plan integrations
    • number of consistent plans
    Assessment:

    very much
    much
    little
    not at all
    actually got worse

    3.1.4 Participation in planning and decision making
    Concept: User Participation

    Question:
    How much more did the users get directly involved in the planning and decision making process?

    Measurable Indicators:

    • average time required
    • average cost
    Assessment:

    very much
    much
    little
    not at all
    less

    3.1.5 Communication of environmental information
    Concept: Communication

    Question:
    How much did the communication of environmental information improve?

    Measurable Indicators:

    • number of people to whom information is communicated
    • quality of information communicated
    • media in/with which information is communicated
    Assessment:

    very much
    much
    little
    not at all
    actually got worse

    3.2 Reliability and timeliness of predictions

    3.2.1 Air Quality Predictions
    Concept: Air Quality Reliability

    Question:
    How reliable are the air quality predictions?

    Measurable Indicators:

    • comparison with measurements (see the sketch after this subsection)
    • comparison with historic data
    • qualitative analysis based on patterns and scales of the quantities involved
    Assessment:

    very reliable
    reliable
    only sometimes reliable
    not reliable
    Concept: Air Quality Timeliness

    Question:
    How timely are the air quality predictions?

    Measurable Indicators:

    • turnaround time
    • simulation time
    Assessment:

    always in time
    mostly in time
    only sometimes in time
    never in time
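
    The comparison with measurements can be expressed with standard error statistics. The sketch below computes the root-mean-square error and the fraction of predictions within a factor of two of the corresponding observation, a criterion commonly applied to air quality models; the data values and the factor-of-two threshold are illustrative assumptions, not ECOSIM results.

        # Reliability of predictions against paired station measurements.
        from math import sqrt

        def reliability(predicted, observed):
            """Return (RMSE, fraction of predictions within a factor of two)."""
            pairs = list(zip(predicted, observed))
            rmse = sqrt(sum((p - o) ** 2 for p, o in pairs) / len(pairs))
            fac2 = sum(1 for p, o in pairs
                       if o > 0 and 0.5 <= p / o <= 2.0) / len(pairs)
            return rmse, fac2

        pred = [42.0, 55.0, 80.0, 30.0]   # e.g. hourly ozone forecasts, ug/m3
        obs  = [40.0, 60.0, 50.0, 35.0]   # matching measurements
        print(reliability(pred, obs))     # approx. (15.4, 1.0)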

    3.2.2 Groundwater quality predictions
    Concept: Groundwater Reliability

    Question:
    How reliable are the groundwater quality predictions?

    Measurable Indicators:

    • comparison with measurements
    • comparison with historic data
    Assessment:

    very reliable
    reliable
    only sometimes reliable
    not reliable
    Concept: Groundwater Timeliness

    Question:
    How timely are the groundwater quality predictions?

    Measurable Indicators:

    • turnaround time
    • simulation time
    Assessment:

    always in time
    mostly in time
    only sometimes in time
    never in time

    3.2.3 Coastal water quality predictions
    Concept: Coastal Water Reliability

    Question:
    How reliable are the coastal water quality predictions?

    Measurable Indicators:

    • comparison with measurements
    • comparison with historic data
    Assessment:

    very reliable
    reliable
    only sometimes reliable
    not reliable
    Concept: Coastal Water Timeliness

    Question:
    How timely are the coastal water quality predictions?

    Measurable Indicators:

    • turnaround time
    • simulation time
    Assessment:

    always in time
    mostly in time
    only sometimes in time
    never in time

    3.2.4 Overall acceptance/use of model quality predictions
    Concept: Overall Acceptance

    Question:
    How well are the simulation models accepted/used?

    Measurable Indicators:

    • number of system runs
    • percentage of use of the system vs. old methods
    Assessment:

    completely accepted
    accepted
    to some degree accepted
    not accepted

    3.3 Improved understanding of cause/effects

    3.3.1 Air Quality
    Concept: Air Quality Understanding

    Question:
    How much has the understanding of the cause/effects of air quality been improved?

    Measurable Indicators:

    • number of decisions resulting in air pollution reduction
    • number of strategies studied and adopted with the help of the system
    Assessment:

    very much
    much
    little
    not at all
    got worse

    3.3.2 Groundwater Quality
    Concept: Groundwater Understanding

    Question:
    How much has the understanding of the cause/effects of groundwater quality been improved?

    Measurable Indicators:

    • number of decisions resulting in groundwater pollution reduction
    Assessment:

    very much
    much
    little
    not at all
    got worse

    3.3.3 Coastal Water Quality
    Concept: Coastal Water Understanding

    Question:
    How much has the understanding of the cause/effects of coastal water quality been improved?

    Measurable Indicators:

    • number of decisions resulting in reduction of coastal water pollution
    Assessment:

    very much
    much
    little
    not at all
    got worse

    3.4 Efficiency/effectiveness in the formulation of strategies

    3.4.1 Emission Control Strategies
    Concept: Emission Control Strategies

    Question:
    How much has the efficiency and effectiveness in the formulation of emission control strategies been improved?

    Measurable Indicators:

    • reduction of emissions
    • number of strategies studied and adopted with the help of the system
    Assessment:

    very much
    much
    little
    not at all
    got worse

    3.4.2 Traffic Control Strategies
    Concept: Traffic Control Strategies

    Question:
    How much has the efficiency and effectiveness in the formulation of traffic control strategies been improved?

    Measurable Indicators:

    • average travel times
    • air quality measurements
    Assessment:

    very much
    much
    little
    not at all
    got worse

    3.4.3 Waste Management Strategies
    Concept: Waste Management Strategies

    Question:
    How much has the efficiency and effectiveness in the formulation of waste management strategies been improved?

    Measurable Indicators:

    • amount of waste produced
    Assessment:

    very much
    much
    little
    not at all
    got worse

    3.4.4 Environmental Monitoring Strategies
    Concept: Environmental Monitoring Strategies

    Question:
    How much has the efficiency and effectiveness in the formulation of environmental monitoring strategies been improved?

    Measurable Indicators:

    • number of monitoring stations
    • amount of data collected
    Assessment:

    very much
    much
    little
    not at all
    got worse

    3.4.5 Overall effectiveness of environmental policies
    Concept: Environmental Policies

    Question:
    How much has the efficiency and overall effectiveness of environmental policies been improved?

    Measurable Indicators:

    • air quality
    • groundwater quality
    • coastal water quality
    Assessment:

    very much
    much
    little
    not at all
    got worse

    3.4.6 Overall effectiveness of environmental management
    Concept: Environmental Management

    Question:
    How much has the efficiency and overall effectiveness of environmental management been improved?

    Measurable Indicators:

    • air quality
    • groundwater quality
    • coastal water quality
    Assessment:

    very much
    much
    little
    not at all
    got worse

    3.5 Environmental improvements

    3.5.1 Observed Air Quality
    Concept: Observed Air Quality

    Question:
    How much has the observed air quality improved?

    Measurable Indicators:

    • air pollution measurements
    Assessment:

    very much
    much
    little
    not at all
    got worse

    3.5.2 Observed Groundwater Quality
    Concept: Observed Groundwater Quality

    Question:
    How much has the observed groundwater quality improved?

    Measurable Indicators:

    • groundwater pollution measurements
    Assessment:

    very much
    much
    little
    not at all
    got worse

    3.5.3 Observed coastal water quality
    Concept: Observed Coastal Water Quality

    Question:
    How much has the observed coastal water quality improved?

    Measurable Indicators:

    • coastal water pollution measurements
    Assessment:

    very much
    much
    little
    not at all
    got worse

    3.5.4 Overall perceived environmental quality
    Concept: Overall Environmental Quality

    Question:
    How much has the overall perceived environmental quality improved?

    Measurable Indicators:

    • air pollution measurements
    • groundwater pollution measurements
    • coastal water pollution measurements
    Assessment:

    very much
    much
    little
    not at all
    got worse

    3.6 Wider context

    3.6.1 Quality of life
    Concept: Quality of Life

    Question:
    Will the project affect the quality of life of certain groups?

    Measurable Indicators:

    Assessment:

    very much
    much
    little
    not at all

    3.6.2 Policy making processes
    Concept: Policy Making

    Question:
    Will the project have an effect on policy making processes?

    Measurable Indicators:

    • number of policies studied with the help of the system
    Assessment:

    very much
    much
    little
    not at all

    3.6.3 Building of the information society
    Concept: Information Society

    Question:
    Will the project have an effect on the building of the information society?

    Measurable Indicators:

    • number of visitors to the project's Web pages
    Assessment:

    very much
    much
    little
    not at all

    3.6.4 European integration
    Concept: European integration

    Question:
    Will the project have an effect on European integration?

    Measurable Indicators:

    • number of sites/cities using the system
    Assessment:

    very much
    much
    little
    not at all
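
    To compare results across items, respondents and sites, the verbal assessment scales used throughout this questionnaire can be mapped onto numerical scores and summarised per item. A minimal sketch; the scoring is an assumption made for illustration and is not prescribed by this plan.

        # Summarise the answers given for one questionnaire item.
        from statistics import median

        # Illustrative scoring of the verbal scales used above.
        SCORES = {"very much": 3, "much": 2, "little": 1, "not at all": 0,
                  "actually got worse": -1, "got worse": -1}

        def summarise(answers):
            """Return the median score of the answers for one item."""
            return median(SCORES[a] for a in answers)

        answers = ["much", "very much", "little", "much", "not at all"]
        print(summarise(answers))   # 2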

    3.6 Summary

    A summary of the assessment objectives at the verification and demonstration stages of the validation phase, and of the applications that will be validated, is shown in Table 3-3. It is expected that the direct users and their support partners will be primarily involved in the verification stage and that both direct users and indirect users will be involved in the demonstration stage. Analysis of social and financial benefits and costs will be included as part of the impact analysis.

    Assessment objective                                     Verification stage    Demonstration stage
    Testing of physical functions                            B1 A1 A2 A3 G2* G3*   None
    User acceptance testing                                  B1 A1 A2 A3 G2* G3*
    Impact analysis                                          None                  B1 A1 A2 A3 G1 G2 G3
    Social cost-benefit analysis, financial assessment etc.  None                  None

    * Assessment is dependent on the availability of data

    Table 3-3 Summary of validation plan

    A Keyword list

  • Air chemistry
  • Athens
  • Berlin
  • Coastal water
  • Database
  • Demonstration
  • Dispersion
  • ECOSIM
  • Forecast
  • Gdansk
  • Geographic Information System
  • Ground water
  • High performance computing
  • Impact assessment
  • Integration
  • Land-fill
  • Leachates
  • Management decision support
  • Measurement
  • Modeling
  • Monitoring
  • Multi-media
  • Ozone
  • Planning
  • Pollution
  • Project plan
  • Sea breeze
  • Simulation
  • Telematics
  • Telematics Application Programme
  • Traffic emissions
  • Urban
  • Validation
  • Verification

    B Bibliography

    1 ECOSIM Project Plan: D01.01, Smith System Engineering Limited (restricted to project participants).

    2 MEGATAQ: Methods and Guidelines for the Assessment of Telematics Application Quality, Albert G. Arnold (Delft University of Technology, The Netherlands) and Anne Marie Fleming (University of Glasgow, UK).
    Handout from the MEGATAQ Workshop, Brussels, 26 May 1997.
    http://www.megataq.mcg.gla.ac.uk

    3 Guidelines for Preparation of Validation Plans, D. Maltby (Salford University Business Services Ltd., UK), J.A. Cunge (Laboratoire d'Hydraulique de France, Grenoble, France) and H.J. Heich (TÜV Rheinland, Köln, Germany). ANIMATE Support Contract DVQ2, December 1996.

    4 Functionality and Syntax of the Expert System Environment of ESS GmbH
    http://www.ess.co.at/toolkit/xps.html

    C Glossary

  • AUT Aristotle University of Thessaloniki
  • CEO EC Joint Research Centre, Space Applications Institute, Centre for Earth Observation Unit
  • COG City of Gdansk - City Board
  • DBMS DataBase Management System
  • ECOSIM Ecological and environmental monitoring and simulation system for management decision support in urban areas
  • EMB ECOSIM Management Board
  • ENVECO ENVECO SA
  • ESS Environmental Software and Services GmbH
  • GIS Geographic Information System
  • GMD GMD - Forschungszentrum Informationstechnik GmbH
  • HPC High Performance Computing
  • HPCN High Performance Computing and Networking
  • MEG Ministry for the Environment, Physical Planning and Public Works, Athens
  • NTUA National Technical University of Athens
  • PR Public Relations
  • SSUB Senatsverwaltung für Stadtentwicklung, Umweltschutz und Technologie Berlin
  • TUG Technical University of Gdansk
  • UA National and Kapodistrian University of Athens
  • Schorling und Partner
