Abstract
Mobile vision systems, often battery-powered, are now remarkably capable of capturing, analyzing, and understanding real-world events, opening up countless opportunities for new applications in areas such as life-logging, cognitive augmentation, security, safety, and wildlife surveillance. The design of a mobile vision system today faces two complementary challenges: improving recognition accuracy while minimizing energy consumption. In this work, we posit that best-effort sensing with degradable featurization and an elastic inference pipeline offers a promising avenue toward energy autonomy for mobile vision systems while maintaining acceptable recognition performance. Borrowing principles from intermittent computing and numerical computing, we propose such best-effort sensing using a degradable-inference pipeline supported by a parameterized Discrete Cosine Transform (DCT)-based featurization and an anytime deep neural network. Together, these two principles aim to extend the lifetime of a mobile vision system while minimizing compute and communication costs without compromising recognition performance. We report the design and an early characterization of our proposed solution.
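To make the notion of a degradable, parameterized DCT featurization concrete, the following is a minimal illustrative sketch (not the paper's implementation; the function names and the truncation parameter `k` are assumptions for illustration). It computes an orthonormal 2-D DCT-II of an image block and keeps only the `k × k` lowest-frequency coefficients, so a smaller `k` yields a cheaper, lower-fidelity feature vector:

```python
# Illustrative sketch of a parameterized DCT featurizer: the number of
# retained low-frequency coefficients `k` acts as an energy/accuracy knob.
# Names (dct_matrix, degradable_dct_features, k) are hypothetical, not
# taken from the paper.
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix of size n x n."""
    i = np.arange(n)
    M = np.cos(np.pi * (2 * i[None, :] + 1) * i[:, None] / (2 * n))
    M[0, :] *= 1.0 / np.sqrt(n)      # DC row scaling
    M[1:, :] *= np.sqrt(2.0 / n)     # AC row scaling
    return M

def degradable_dct_features(block: np.ndarray, k: int) -> np.ndarray:
    """2-D DCT of a square image block, truncated to the k x k
    lowest-frequency coefficients (smaller k => fewer, cheaper features)."""
    n = block.shape[0]
    M = dct_matrix(n)
    coeffs = M @ block @ M.T         # separable 2-D DCT-II
    return coeffs[:k, :k].ravel()    # degrade by dropping high frequencies

# Example: featurize one 8x8 block at two fidelity levels.
block = np.arange(64, dtype=float).reshape(8, 8)
full = degradable_dct_features(block, 8)   # 64 features (full fidelity)
lite = degradable_dct_features(block, 2)   # 4 features (degraded, low cost)
```

Because the DCT concentrates most image energy in low frequencies, truncating coefficients degrades feature quality gracefully rather than abruptly, which is what makes this knob suitable for a best-effort, elastic inference pipeline.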