Abstract

The data one needs to process to solve today's problems is large in scale, and so are the graphs and hypergraphs used to model it. Today we have big data, big graphs, and big matrices, and in the future they are expected to be bigger and more complex. Many of today's algorithms will be, and some already are, expensive to run on large datasets. In this work, we analyze a set of efficient techniques to make "big data", modeled as a hypergraph, smaller so that its processing takes much less time. As an application use case, we take the hypergraph partitioning problem, which has been successfully used in many practical applications for various purposes, including parallelization of complex and irregular applications, sparse matrix ordering, clustering, community detection, query optimization, and improving cache locality in shared-memory systems. We conduct several experiments showing that our techniques greatly reduce the cost of the partitioning process while preserving partitioning quality. Although we measured their performance only from the partitioning point of view, we believe the proposed techniques will also be beneficial for other applications that use hypergraphs.
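The abstract does not define the hypergraph partitioning objective it uses to measure quality; a common choice in standard tools such as PaToH and hMETIS is the connectivity-1 (λ-1) cut metric. The sketch below, which is illustrative and not taken from the paper, shows a minimal hypergraph representation and how that metric is computed for a given partition; the names and the representation are assumptions.

```python
# Minimal sketch (not from the paper): a hypergraph as a list of nets
# (hyperedges), each net listing the vertices it connects, plus the
# connectivity-1 ("lambda - 1") cut metric commonly used to judge
# partitioning quality.

def connectivity_cut(nets, part):
    """Sum over all nets of (number of parts the net spans - 1).

    nets: list of vertex-id lists, one per hyperedge
    part: dict mapping vertex id -> part id
    """
    cut = 0
    for net in nets:
        spanned = {part[v] for v in net}  # distinct parts this net touches
        cut += len(spanned) - 1           # an uncut net contributes 0
    return cut

# Usage: 6 vertices, 3 nets, a 2-way partition.
nets = [[0, 1, 2], [2, 3], [3, 4, 5]]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(connectivity_cut(nets, part))  # -> 1 (only the net [2, 3] is cut)
```

Under this metric, coarsening techniques that shrink the hypergraph must keep the cut of the partition induced on the original hypergraph close to what partitioning the full-size input would achieve.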
