Abstract

Developing high-efficacy prognostic algorithms requires quality data from sensors and other contextual sources, such as maintenance, usage, and inspection records. Data quality challenges, such as a lack of sensor-based history (depth) across the entire fleet of components (breadth), can preclude the development of algorithms that are both cost-effective and useful. The first step in prognostics modeling is therefore to determine whether the available data are sufficient to support the development of predictive algorithms. We present an assessment process that evaluates data suitability for prognostic modeling, identifies which modeling approaches the available data can support, and thereby provides an initial basis for deciding whether the data are adequate. The assessment process follows a full data quality framework that also identifies where data eligibility and quality may be further enhanced through improvement techniques such as imputation. Applying this framework maximizes the quantity of quality data harvested from industrial data sources, increasing the probability of obtaining the data needed to develop predictive algorithms successfully. Repeating the assessment as further data become available expands the set of usable prognostic models as data availability grows.
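The abstract names imputation as one data quality improvement technique. As a minimal illustrative sketch, not the authors' implementation, the snippet below fills gaps in a hypothetical fleet sensor table using scikit-learn's SimpleImputer; the column names and values are invented for illustration.

```python
# Minimal sketch: filling gaps in fleet sensor data by imputation.
# Illustrative only; column names and values are hypothetical.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Hypothetical per-component sensor history with missing readings
# (gaps in "depth" for individual components in the fleet).
fleet = pd.DataFrame({
    "component_id": ["A1", "A1", "B2", "B2", "C3"],
    "vibration_rms": [0.41, np.nan, 0.38, 0.52, np.nan],
    "oil_temp_c":    [78.0, 81.5, np.nan, 84.2, 79.3],
})

# Median imputation is a simple baseline; richer methods
# (model-based or time-series interpolation) may be preferable.
imputer = SimpleImputer(strategy="median")
sensor_cols = ["vibration_rms", "oil_temp_c"]
fleet[sensor_cols] = imputer.fit_transform(fleet[sensor_cols])

print(fleet)
```

A simple baseline like this can raise the fraction of usable records before the suitability assessment is repeated; whether median imputation is appropriate depends on the sensor and failure modes involved.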
