Abstract

The subject of the study is the process of collecting, preparing, and searching for anomalies in data from heterogeneous sources. Economic information is naturally heterogeneous and semi-structured or unstructured, which makes pre-processing of dynamic input data an important prerequisite for detecting significant patterns and knowledge in the subject area; the topic of the research is therefore relevant. Data pre-processing poses several distinct problems, which have led to a variety of algorithms and heuristic methods for tasks such as merging, cleaning, and identifying variables. In this work, an algorithm for preprocessing and anomaly search using LSTM is formulated, which consolidates time-series information from different sources into a single structured database and searches for anomalies in an automated mode. A key modification of the preprocessing method proposed by the authors is a technology for automated data integration. This technology combines methods for building a fuzzy time series with machine lexical matching on a thesaurus network, and it uses a universal database built on the MIVAR concept. The preprocessing algorithm forms a single data model that can transform the periodicity and semantics of a data set and integrate data arriving from various sources into a single information bank.
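To make the integration step concrete, the Python sketch below illustrates one way such consolidation can look: source-specific variable names are mapped to canonical ones via a small dictionary (a simplified stand-in for the lexical matching on a thesaurus network described above), and each series is resampled to a common periodicity before merging. This is a minimal illustration under stated assumptions, not the authors' implementation; the fuzzy time series construction and the MIVAR database are not reproduced, and all names, frequencies, and aggregation choices here are hypothetical.

```python
# Illustrative sketch only: maps source-specific column names to canonical
# ones, unifies periodicity by resampling, and merges into one table.
import pandas as pd

# Toy "thesaurus": source-specific variable names -> canonical names.
THESAURUS = {"oil_px": "oil_price", "OilPrice": "oil_price",
             "usd_rate": "usd_rub", "USDRUB": "usd_rub"}

def to_canonical(df: pd.DataFrame) -> pd.DataFrame:
    """Rename columns via the thesaurus (stand-in for lexical matching)."""
    return df.rename(columns=THESAURUS)

def unify_periodicity(df: pd.DataFrame, freq: str = "MS") -> pd.DataFrame:
    """Resample to a common frequency; mean aggregation is one possible choice.
    Assumes each source DataFrame has a DatetimeIndex."""
    return df.resample(freq).mean()

def integrate(sources: list) -> pd.DataFrame:
    """Consolidate heterogeneous sources into a single time-indexed table."""
    frames = [unify_periodicity(to_canonical(df)) for df in sources]
    merged = pd.concat(frames, axis=1)
    # Average duplicate canonical columns arriving from different sources.
    return merged.T.groupby(level=0).mean().T

# Usage: two sources with different names and frequencies end up aligned.
daily = pd.DataFrame({"oil_px": range(60)},
                     index=pd.date_range("2024-01-01", periods=60, freq="D"))
weekly = pd.DataFrame({"USDRUB": range(9)},
                      index=pd.date_range("2024-01-01", periods=9, freq="W"))
print(integrate([daily, weekly]))
```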

Highlights

  • Nowadays, data analysis and decision-making based on it are critical in many economic and social areas

  • Data from primary documents is often incomplete, noisy, or inconsistent and should be improved by filling in default values, smoothing out noise, and correcting inconsistencies (see the sketch after this list)

  • There are several ways to address these data-quality problems
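As a rough illustration of the cleaning steps named above, the sketch below corrects out-of-range values by clipping, fills missing values, and smooths noise with a rolling median. These specific choices (clipping bounds, forward/backward fill, window size) are assumptions for the example, not methods prescribed by the paper.

```python
# Minimal cleaning sketch: correct inconsistencies, fill gaps, smooth noise.
import pandas as pd

def clean(series: pd.Series, lo: float, hi: float) -> pd.Series:
    s = series.clip(lower=lo, upper=hi)   # correct out-of-range values
    s = s.ffill().bfill()                 # fill missing values with defaults
    # Smooth noise with a centered rolling median.
    return s.rolling(window=3, center=True, min_periods=1).median()
```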


Summary

Introduction

Data analysis, and decision-making based on it, are critical in many economic and social areas. As data is generated, collected, and analyzed on an ever-increasing scale, there is a growing need for methods that purify source information and detect knowledge in it. Poor data quality is a serious problem, because data is often created automatically, entered manually, or integrated from disparate and heterogeneous sources. Before data is analyzed, it must be pre-processed to correct errors and typos in the raw data and to convert the raw data into homogeneous data [1], making it usable. This process is both time-consuming and tedious, and the quality of the preprocessing results affects the result of pattern detection and analysis [3].
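To make the LSTM-based anomaly search concrete, here is a minimal PyTorch sketch of one common scheme: train a next-step predictor on sliding windows of a univariate series and flag points whose prediction error is large. The architecture, window size, training setup, and thresholding rule are all assumptions for illustration; the paper's exact model is not specified in this excerpt.

```python
# Sketch of prediction-error anomaly scoring with an LSTM (assumed design).
import torch
import torch.nn as nn

class NextStepLSTM(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict the step after the window

def anomaly_scores(series: torch.Tensor, window: int = 24,
                   epochs: int = 50) -> torch.Tensor:
    """Train on the (mostly normal) series itself and return |prediction
    error| for each point after the first `window` steps."""
    xs = torch.stack([series[i:i + window]
                      for i in range(len(series) - window)]).unsqueeze(-1)
    ys = series[window:].unsqueeze(1)
    model = NextStepLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(xs), ys)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return (model(xs) - ys).abs().squeeze(1)

# Points whose score exceeds, e.g., mean + 3 * std can be flagged as anomalies.
```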

