Abstract
Evaluating new methods and algorithms in unsupervised learning requires thorough benchmarking studies on data sets that closely reflect performance in actual usage. Designing such data sets is a challenging task in itself; demonstrating that a new method stands up to the competition on them is another, and one that risks compromising the objectivity of a benchmarking study. We want to address the latter by proposing a framework that standardizes the format of artificial data, or rather its metadata. We intend to introduce a web repository that serves as an exchange for metadata of artificial data, together with an accompanying R package that generates the actual data from the descriptions obtained from the repository. This makes it much simpler to find data sets designed by others and already used in previous benchmarking studies, and it removes some of the temptation to design artificial data specifically so that a proposed method performs significantly better than existing ones, a claim that might not hold in real-life applications.
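To illustrate the intended workflow, the following is a minimal sketch in base R: a metadata description of an artificial data set is turned into concrete data by a small generator function. The metadata structure (field names such as seed, clusters, n, mean, sd) and the function name are hypothetical stand-ins for illustration only, not the format or API of the proposed repository or package.

## Hypothetical metadata entry: two Gaussian clusters in two dimensions.
## Field names and structure are assumptions, not the framework's actual format.
meta <- list(
  seed = 1,
  clusters = list(
    list(n = 100, mean = c(0, 0), sd = c(1, 1)),
    list(n = 50,  mean = c(5, 5), sd = c(0.5, 0.5))
  )
)

## Generate a concrete data set from the metadata description (base R only).
generate_from_meta <- function(meta) {
  set.seed(meta$seed)
  do.call(rbind, lapply(seq_along(meta$clusters), function(i) {
    cl <- meta$clusters[[i]]
    data.frame(
      x1 = rnorm(cl$n, cl$mean[1], cl$sd[1]),
      x2 = rnorm(cl$n, cl$mean[2], cl$sd[2]),
      cluster = i
    )
  }))
}

dat <- generate_from_meta(meta)
head(dat)

Because the metadata, rather than the data itself, is what would be exchanged, reproducing a data set used in an earlier benchmarking study amounts to downloading its description and regenerating the data with a fixed seed.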