Abstract

In the past decades, data structure analysis in the computer science community was mainly done at a high level of abstraction. For instance, choosing a linked list rather than an array for a specific situation was motivated mainly from a performance point of view, under the implicit assumption that the computer platform running the software consisted of one monolithic physical memory. In the context of mobile, embedded devices, energy consumption is as important as performance. In addition, the assumption of one monolithic memory is outdated for many (if not all) current-day platforms. Clearly, there is a need to improve the choices made during data structure analysis, given specific knowledge of the memory hierarchy of the platform under investigation. We show how memory-related energy consumption can be heavily reduced by taking into account the access behaviour of the application on the one hand and the available on-chip and off-chip memory space on the other. We do this by exploiting the sparseness present in one steady state of the data structure under investigation. Analytical results show that energy reductions by a factor of 8.7 are feasible compared with common data structure implementations. We trade these gains off against the on-chip memory space consumption of a custom memory architecture.
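To make the core idea concrete, the following is a minimal sketch (not the paper's actual method) of why exploiting sparseness matters for a memory hierarchy: when only a small fraction of a logical key space is occupied in a given steady state, a compact representation of the live entries can fit in a small on-chip memory, whereas a dense array of the full key space spills to off-chip memory. All sizes and per-access energy figures below are hypothetical, chosen only to illustrate the trade-off.

```c
/* Sketch: dense vs. compact layout of a sparsely occupied data structure.
   All constants (key space, live keys, on-chip capacity, energy costs)
   are hypothetical, not figures from the paper. */
#include <stdio.h>

#define KEY_SPACE   4096U   /* logical index range of the data structure   */
#define LIVE_KEYS    256U   /* keys actually occupied in this steady state */

/* Dense layout: one slot per possible key, mostly empty when sparse. */
typedef struct { int valid; int value; } dense_slot_t;

/* Compact layout: only the live (key, value) pairs are stored, so the
   working set may fit in a small on-chip memory. */
typedef struct { int key; int value; } compact_entry_t;

int main(void) {
    size_t dense_bytes   = KEY_SPACE * sizeof(dense_slot_t);
    size_t compact_bytes = LIVE_KEYS * sizeof(compact_entry_t);

    /* Assumed on-chip capacity and per-access energy; off-chip accesses
       are taken to cost roughly an order of magnitude more. */
    size_t onchip_capacity = 4096;          /* bytes */
    double e_onchip = 1.0, e_offchip = 10.0; /* energy units per access */

    double e_dense   = (dense_bytes   <= onchip_capacity) ? e_onchip : e_offchip;
    double e_compact = (compact_bytes <= onchip_capacity) ? e_onchip : e_offchip;

    printf("dense:   %zu bytes, ~%.1f energy units per access\n",
           dense_bytes, e_dense);
    printf("compact: %zu bytes, ~%.1f energy units per access\n",
           compact_bytes, e_compact);
    return 0;
}
```

Under these assumed numbers the dense layout (32 KiB) exceeds the on-chip capacity while the compact layout (2 KiB) does not, so every access to the compact structure stays on-chip; this is the kind of access-behaviour-aware sizing decision the abstract refers to, traded off against the cost of the extra on-chip memory.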
