Abstract

A new methodology to assess source apportionment model performance in intercomparison exercises, encompassing the preparation of real-world and synthetic datasets and the evaluation of the source apportionment results reported by participants, is described. The evaluation consists of three types of tests: complementary tests, preliminary tests, and performance tests. The complementary tests provide summary information about the source apportionment results as a whole. The preliminary tests check whether reported factors/sources belong to a given source category. Three indicators, the Pearson correlation coefficient (Pearson), the standardized identity distance (SID), and the weighted difference (WD), are used to test factor/source chemical profiles, whereas factor/source time series and contribution-to-species values are tested using the Pearson only. The performance tests, based on international standards for proficiency testing, evaluate whether the biases and uncertainties reported for the factor/source contribution estimates (SCEs) are consistent with previously established quality standards in a fitness-for-purpose approach. Moreover, the consistency of the SCE time series is evaluated using a variant of the RMSE normalised by the reference standard uncertainty. The described methodology facilitates a thorough evaluation of the source apportionment output. The new indicator for comparing source or factor profiles presented in this study (SID) is more robust and provides additional information compared to the existing ones.
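As an illustration of the indicators mentioned above, the sketch below shows how a candidate factor profile could be compared against a reference profile with the Pearson correlation and the SID, and how a proficiency-test-style z-score could be derived for a reported SCE. The SID formulation used here (species-wise absolute differences normalised by the pairwise sum, averaged over species and scaled by √2), the acceptability threshold |z| ≤ 2, and all numerical values are assumptions for illustration only; the authoritative definitions are given in the full paper, and the WD test is not reproduced here.

```python
import numpy as np

def pearson(candidate, reference):
    """Pearson correlation between a candidate factor profile and a reference source profile."""
    return np.corrcoef(candidate, reference)[0, 1]

def sid(candidate, reference):
    """Standardized identity distance (SID) between two chemical profiles.
    Assumed formulation: mean over species of |x - y| / (x + y), scaled by sqrt(2).
    Illustrative only; the paper gives the authoritative definition."""
    x = np.asarray(candidate, dtype=float)
    y = np.asarray(reference, dtype=float)
    mask = (x + y) > 0                      # skip species absent from both profiles
    return np.sqrt(2.0) / mask.sum() * np.sum(np.abs(x[mask] - y[mask]) / (x[mask] + y[mask]))

def z_score(reported_sce, assigned_value, sigma_p):
    """Proficiency-testing z-score: bias of a reported source contribution estimate (SCE)
    relative to the assigned (reference) value, scaled by the standard deviation for
    proficiency assessment sigma_p, which encodes the fitness-for-purpose quality standard."""
    return (reported_sce - assigned_value) / sigma_p

# Hypothetical example: profiles expressed as mass fractions over four species.
candidate = np.array([0.30, 0.10, 0.05, 0.55])
reference = np.array([0.28, 0.12, 0.04, 0.56])
print(f"Pearson = {pearson(candidate, reference):.3f}, SID = {sid(candidate, reference):.3f}")

# Hypothetical SCE check: reported 12.5 ug/m3 vs. assigned 11.8 ug/m3, sigma_p set to 50% of it.
print(f"z-score = {z_score(12.5, 11.8, 0.5 * 11.8):.2f}")   # |z| <= 2 commonly deemed acceptable
```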

Highlights

  • Reliable estimations of the amounts pollution sources contribute to measured ambient air pollution concentrations are essential for the design of effective air quality policies

  • Although it has so far been applied to receptor models only, the proposed tests are suitable to assess any kind of source apportionment output in intercomparison exercises

  • One of the strengths of this methodology is that it relies on quantitative indicators that are widely used in proficiency tests and model quality assessment


Summary

Introduction

Reliable estimations of the amounts pollution sources contribute to measured ambient air pollution concentrations are essential for the design of effective air quality policies. Receptor models (RMs) are among the most common source apportionment (SA) techniques (Viana et al., 2008a; Belis et al., 2013). Previous intercomparison studies (Favez et al., 2010; Hopke et al., 2006; Sandradewi et al., 2008) relied on regression analysis between the SCEs provided by different models, and their evaluations were not fully comparable with each other due to the lack of common standards to assess the source apportionment model output. The methodology described here was developed in the framework of initiatives of the European Commission Joint Research Centre (jrc.ec.europa.eu/) and the Forum for Air quality Modelling in Europe (FAIRMODE). It has been tested on PM datasets and was used to evaluate the performance of RMs in two intercomparison exercises (IEs) whose results are presented in the companion paper by Belis et al. (2015, submitted).

Overview of the methodology
Real-world dataset
Synthetic dataset
Complementary tests
Weighted difference test
Performance tests
Concluding remarks