Abstract

In data science applications, the multi-modal and multidimensional nature of data leads to a variety of hand-crafted features and classifiers tailored to specific problems. Traditional feature engineering demands domain expertise, and the performance of the resulting classifiers has plateaued after years of development. Emerging technologies such as deep learning achieve a substantial performance leap by shifting the effort from feature engineering to architecture engineering. However, putting this technology to practical use faces several challenges. First, the success of deep learning rests on large amounts of training data, yet most data science problems lack sufficient training data. In addition, designing the architecture of deep models requires substantial expertise. Moreover, existing deep neural networks are limited in the types of inputs they can accept and the types of problems their outputs can address. To address these challenges, we propose a unified deep learning framework that accommodates data in multiple modalities and dimensions, and show that the proposed framework yields better performance than traditional machine learning approaches. Beyond the examples used for validation, the proposed framework shows potential as a foundation upon which a wider range of data science problems can benefit from deep learning.
