Abstract

Databases such as research data management systems (RDMS) store the research data in which information is searched. They provide techniques for evaluating even large amounts of data efficiently; this includes managing research data and optimizing access to it, especially when the data cannot be loaded fully into main memory. They also offer methods for grouping and sorting, and they optimize incoming queries so that these can be processed efficiently even when accessing large data volumes. Above all, research data offer one thing: the opportunity to generate valuable knowledge. The quality of the research data is decisive for this. Only flawless research data can deliver reliable, beneficial results and enable sound decision-making; correct, complete, and up-to-date research data are therefore essential for successful operational processes. Wrong decisions and inefficiencies in day-to-day operations are only the tip of the iceberg, since problems with poor data quality span various areas and weaken entire university processes. This paper therefore addresses data-quality problems in the context of RDMS, examines approaches for ensuring data quality, and shows a way to repair the dirty research data that arise during integration before they have a negative impact on business success.

Highlights

  • Research data are essential resources and are becoming more and more extensive

  • Despite the diversity of existing workflows, some institutions use institutional repositories in addition to research data management systems (RDMS) as the basis for data storage, while others are experimenting with more extensive data-description environments [4]

  • The aim is to use a case study to show methods and approaches for dealing with bad data in the data-quality management process and to answer the following research question: “How can RDMS users ensure their data quality?”


Introduction

Research data are essential resources and are becoming ever more extensive. This has to do with the fact that the way researchers work has changed and more and more data are being digitized and stored. The aim is to use a case study to show methods and approaches for dealing with bad data in the data-quality management process and to answer the following research question: “How can RDMS users ensure their data quality?” The novelty of this work is a framework for treating bad big research data in institutions, which the author used in practice to address quality problems and which raised the quality of the research data to 75%. Using this solution, valuable knowledge can be generated for the RDMS community and its users (institutions and their scientists). If the data management plan (DMP) is kept up to date, it becomes easier to share and reuse the project data, and the risk of data loss is reduced.
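To make the idea of detecting dirty research data during integration concrete, the following is a minimal sketch, not the paper's actual framework: it applies three typical data-quality checks (completeness, validity, uniqueness) to research-metadata records. The field names, year range, and record structure are illustrative assumptions.

```python
# Minimal sketch of data-quality checks on research-metadata records.
# Field names and the plausible year range are illustrative assumptions,
# not taken from the paper's framework.

REQUIRED_FIELDS = ("title", "author", "year")  # assumed mandatory metadata


def find_quality_issues(records):
    """Return a list of (record_index, issue) pairs for dirty records."""
    issues = []
    seen_titles = set()
    for i, rec in enumerate(records):
        # Completeness: every required field must be present and non-empty.
        for field in REQUIRED_FIELDS:
            if not rec.get(field):
                issues.append((i, f"missing {field}"))
        # Validity: publication year must be a plausible integer.
        year = rec.get("year")
        if year is not None and not (isinstance(year, int) and 1900 <= year <= 2100):
            issues.append((i, "invalid year"))
        # Uniqueness: flag duplicate titles (case-insensitive).
        title = (rec.get("title") or "").strip().lower()
        if title and title in seen_titles:
            issues.append((i, "duplicate title"))
        seen_titles.add(title)
    return issues


records = [
    {"title": "Study A", "author": "Doe", "year": 2021},
    {"title": "study a", "author": "Doe", "year": 2021},   # duplicate title
    {"title": "Study B", "author": "", "year": "2021"},    # missing author, year as string
]
print(find_quality_issues(records))
# → [(1, 'duplicate title'), (2, 'missing author'), (2, 'invalid year')]
```

In a real integration pipeline, such checks would run before records enter the RDMS, so that flagged records can be corrected or quarantined rather than degrading downstream analyses.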

Data access
Data reuse
Data Quality—Success Factor of Research Institutions
Bad Data—Its Emergence in RDMS
Dealing with Big Bad Research Data—Best Practice Framework
Findings
Conclusions
