Abstract

There has been a tremendous increase in the amount of user-generated content (UGC), and many mobile devices are now equipped with sensors (magnetic compass, accelerometer, gyroscope, etc.). We present an automatic video remixing system (AVRS) that intelligently processes UGC in combination with sensor information. The system aims to generate a video remix with minimal user effort. We present both sensor-based and sensor-less architectures for such a system. The sensor-based AVRS involves architectural choices that meet the key system requirements (leveraging user-generated content, using sensor information, and reducing the end-user burden) as well as the user experience requirements. Architecture adaptations are required when any of the operating parameters must be constrained for real-world deployment feasibility. We present sensor-less architecture adaptations that enable the use of the automatic remixing logic in different operating scenarios. The challenge for these adaptations is to improve certain key performance parameters while minimizing the compromise on others. Subsequently, two key sensor-less AVRS architecture adaptations (the Cloud Remix system and the Smartview system) are presented. We show that a significant reduction in system complexity can be achieved when a small reduction in user experience is tolerated. Similarly, the system can be optimized to reduce other key requirements such as storage and user density.
