Abstract

The digital retina provides enhanced visual sensing and analysis capability for the city brain in smart cities by converting raw data from visual sensors into semantic features. With deep learning or handcrafted models deployed on front-end devices, these features are extracted locally and then delivered to back-end servers for advanced analysis. In this scenario, we propose a model generation, utilization, and communication paradigm that strengthens front-end sensing for building better artificial visual systems in smart cities. In particular, we propose an integrated strategy for reusing and predicting multiple deep learning models, which greatly improves the feasibility of the digital retina for large-scale visual data analysis. The proposed multi-model reuse scheme exploits the knowledge of models already cached and transmitted in the digital retina to obtain stronger discriminative capability. To deliver the newly generated models efficiently, a model prediction scheme is further proposed that encodes and reconstructs model differences. Extensive experiments demonstrate the effectiveness of the proposed model-centric paradigm.
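The model prediction idea described above, sending only encoded model differences rather than full models, can be sketched as a simple residual-coding scheme. The function names, the use of a uniform quantizer, and the step size below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def encode_difference(base, updated, step=1e-3):
    """Front end: quantize the weight residual against a shared base model.

    Only this small integer code needs to be transmitted, since both
    ends already hold the base model. (Illustrative sketch; the paper's
    actual coding scheme may differ.)
    """
    residual = updated - base
    return np.round(residual / step).astype(np.int32)

def reconstruct(base, code, step=1e-3):
    """Back end: rebuild the updated model from the base plus decoded residual."""
    return base + code.astype(np.float64) * step

rng = np.random.default_rng(0)
base = rng.standard_normal(1000)               # model both sides already cache
updated = base + 0.01 * rng.standard_normal(1000)  # slightly updated model

code = encode_difference(base, updated)
rebuilt = reconstruct(base, code)
# Rounding to the nearest quantization level bounds the error by step / 2.
assert np.max(np.abs(rebuilt - updated)) <= 5e-4 + 1e-12
```

Because successive front-end models tend to differ only slightly from a cached base model, the quantized residual is far more compressible than the full weight tensor, which is the premise behind predicting rather than retransmitting models.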


