Abstract

National statistical agencies spend large annual budgets on the continuous collection of vast amounts of data on individuals, households, business activities, and so on, to serve the information needs of numerous national and international government and private users. Substantial parts of these budgets are consumed in checking and improving the quality of the collected data. Because of their complexity, these tasks have traditionally required specialist handling. Automating them is therefore a high priority, both to save processing time and resources and to improve the processing involved in answering data requests. This paper outlines research carried out in Norway on applying the neural network paradigm to improve data quality checking and correction in large-scale data sets.
