Abstract

Device-centric music computation in the Internet era is a participant-centric form of data sensing and computation involving devices such as smartphones, acoustic sensors, and computing systems. These participatory devices advance the Internet of Things (IoT), in which devices gather sensor data and deliver it according to end users' requirements. This contribution analyzes a class of qualitative music composition applications in the IoT context that we term the Internet of Music Things. In this setting, participants with devices capable of music sensing and computation share data within a group and retrieve information to analyze and map interconnected processes of common interest. We present a crowdsensing architecture for music composition in which musical components, such as vocal and instrumental performances, are handled by a dedicated edge layer to improve computational efficiency and reduce data traffic to cloud services for processing and storage. The proposed opportunistic music crowdsensing orchestration constitutes a concrete step toward aggregated music composition and sharing within the network. We also discuss an analytical case study of music crowdsensing challenges, clarify its unique features, and demonstrate the edge-cloud computing paradigm along with its outcomes. The need for a four-layer unified crowdsensing archetype is discussed, and the data transmission time, power, and energy consumption of the proposed system are analyzed.
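The transmission time and energy consumption mentioned above can be estimated with a simple first-order model. The sketch below is illustrative only; the payload size, link rate, and transmit power are assumed values, not figures from the paper:

```python
def transmission_time(payload_bits: float, link_rate_bps: float) -> float:
    """Time (s) to send a payload over a link at a given rate."""
    return payload_bits / link_rate_bps

def energy_consumed(power_watts: float, duration_s: float) -> float:
    """First-order energy model: E = P * t (joules)."""
    return power_watts * duration_s

# Illustrative scenario (assumptions, not taken from the paper):
# a 2 MB audio clip sent over a 1 Mbit/s uplink at 0.5 W transmit power.
payload = 2 * 10**6 * 8                      # 2 MB expressed in bits
t = transmission_time(payload, 1_000_000)    # 16.0 s
e = energy_consumed(0.5, t)                  # 8.0 J
print(t, e)
```

Under such a model, offloading audio preprocessing to an edge layer reduces `payload` (and hence both `t` and `e`) before any data reaches the cloud, which is the efficiency argument the abstract makes.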
