Abstract

This chapter presents a highly scalable and adaptable co-learning framework for multimodal data mining in a multimedia database. The framework is based on multiple instance learning theory. It is strongly scalable in the sense that the query time complexity is constant, independent of the database scale, and the mining effectiveness is likewise independent of the database scale, which facilitates multimodal querying of a very large multimedia database. At the same time, the framework is strongly adaptable in the sense that the database indexing can be updated incrementally, with a constant-cost operation, whenever the database is dynamically updated with new information. Hence, this framework outperforms many existing multimodal data mining methods in the literature that are neither scalable nor adaptable. Theoretical analysis and empirical evaluations are provided to demonstrate the advantages of this strong scalability and adaptability. While the framework is general for multimodal data mining in any specific domain, the authors evaluate its mining performance on the Berkeley Drosophila ISH embryo image database. They compare the framework with a state-of-the-art multimodal data mining method to demonstrate its effectiveness and promise.
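The constant query time and constant-cost incremental updates described above are the kind of properties a hash-based index provides. As a minimal illustrative sketch (not the authors' actual algorithm; the class and key names below are hypothetical), the following shows an index whose lookup and insert costs do not grow with the number of stored items:

```python
# Hypothetical sketch of a multimodal index with average O(1) lookup
# and O(1) incremental insert, independent of database size. This only
# illustrates the scalability/adaptability properties claimed in the
# abstract; it is NOT the chapter's co-learning method.

from collections import defaultdict

class MultimodalIndex:
    def __init__(self):
        # Maps a discretized feature key (e.g., a quantized image
        # signature or an annotation word) to the ids of matching items.
        self._buckets = defaultdict(set)

    def insert(self, item_id, keys):
        """Incrementally index one item: constant work per key,
        regardless of how many items are already stored."""
        for key in keys:
            self._buckets[key].add(item_id)

    def query(self, key):
        """Average O(1) lookup: cost does not depend on database scale."""
        return self._buckets.get(key, set())

index = MultimodalIndex()
index.insert("img_001", ["wing", "cluster_17"])
index.insert("img_002", ["eye", "cluster_17"])
print(index.query("cluster_17"))  # items sharing the key "cluster_17"
```

Under these assumptions, adding a new item touches only its own keys, which matches the abstract's claim that indexing can be updated with a constant operation as the database grows.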
