Abstract

An increasing amount of information today is generated, exchanged, and stored digitally. This also includes long-lived and highly sensitive information (e.g., electronic health records, governmental documents) whose integrity and confidentiality must be protected over decades or even centuries. While there is a vast number of cryptography-based data protection schemes, only a few are designed for long-term protection. Recently, Braun et al. (AsiaCCS’17) proposed the first long-term protection scheme that provides renewable integrity protection and information-theoretic confidentiality protection. However, the computation and storage costs of their scheme increase significantly with the number of stored data items. As a result, their scheme appears suitable only for protecting databases with a small number of relatively large data items, but unsuitable for databases that hold a large number of relatively small data items (e.g., medical record databases). In this work, we present a solution for efficient long-term integrity and confidentiality protection of large datasets consisting of relatively small data items. First, we construct a renewable vector commitment scheme that is information-theoretically hiding under selective decommitment. We then combine this scheme with renewable timestamps and information-theoretically secure secret sharing. The resulting solution requires only a single timestamp for protecting a dataset, while the state of the art requires a number of timestamps linear in the number of data items. Furthermore, we extend the scheme, which supports a single client, to a multi-client setting. Subsequently, we characterize the arising challenges with respect to integrity and confidentiality and discuss how our multi-client scheme tackles them. We implemented our solution and measured its performance in a scenario where 9600 data items are aggregated, stored, protected, and verified over a time span of 80 years. Our measurements show that our new solution completes this evaluation scenario an order of magnitude faster than the state of the art.
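
To make the core idea concrete, the following is a minimal, hypothetical Python sketch of the "one timestamp per dataset" approach described above: each item receives a randomized, hiding commitment, all commitments are aggregated into a single root, and only that root needs a renewable timestamp. The Merkle-style aggregation and all names (commit_item, vector_commit) are illustrative stand-ins, not the paper's actual construction.

    # Minimal, hypothetical sketch (not the paper's construction): a Merkle tree
    # over randomized, hiding item commitments stands in for the renewable vector
    # commitment; only the root would receive a (renewable) timestamp.
    import hashlib
    import os

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def commit_item(item: bytes) -> tuple[bytes, bytes]:
        """Commit to one item as H(r || item); r is kept as the decommitment."""
        r = os.urandom(32)
        return h(r + item), r

    def vector_commit(coms: list[bytes]) -> bytes:
        """Aggregate all item commitments into a single root hash."""
        level = list(coms)
        while len(level) > 1:
            if len(level) % 2:  # duplicate the last node on odd-sized levels
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    items = [f"record {i}".encode() for i in range(9600)]
    commitments = [commit_item(it)[0] for it in items]
    root = vector_commit(commitments)
    # One timestamp on `root` protects all 9600 items; the state of the art
    # needs one timestamp per item. For confidentiality, the items would
    # additionally be secret-shared among several shareholders.

Note that the paper's vector commitment is information-theoretically hiding even under selective decommitment; the hash-based stand-in above only illustrates the aggregation step, not that property.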

Highlights

  • Today, huge amounts of information are generated, exchanged, and stored digitally, and these amounts will grow further in the future

  • Algorithm 12: MCELSA.detRecom(c)
      recomList ← {};
      for name ∈ ES.evidence do
        e ← evidence[name]; i ← |e|;
        repeat
          if e_i.c = c ∧ e_i.d = ⊥ then recomList += name;
          i ← i − 1;
        until i = 0 ∨ e_i.d = ⊥;
      return recomList;
    We prove that, by executing MCELSA.detRecom, the shareholders correctly distribute the list of documents they receive from the evidence service among the clients (a runnable reading follows this list)

  • We should remark that the longer a document has been stored in MCELSA, the more entries its proof of integrity contains and the longer its integrity verification takes
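
For concreteness, the extracted pseudocode above can be read as the following runnable Python sketch. The evidence layout (a mapping from document names to lists of entries with fields c and d) and the use of None for ⊥ are assumptions, not the paper's data structures; the loop stops at the newest entry whose decommitment is still ⊥ and reports the document if that entry refers to commitment c.

    # Hypothetical transcription of Algorithm 12 (MCELSA.detRecom); the data
    # layout is assumed for illustration, not taken from the paper.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class EvidenceEntry:
        c: str                     # identifier of the commitment this entry refers to
        d: Optional[bytes] = None  # decommitment value; None models ⊥ (unopened)

    def det_recom(evidence: dict[str, list[EvidenceEntry]], c: str) -> list[str]:
        """Collect documents whose newest unopened evidence entry refers to c,
        scanning each evidence list from newest to oldest."""
        recom_list: list[str] = []
        for name, e in evidence.items():
            i = len(e)
            while i > 0:
                entry = e[i - 1]
                if entry.c == c and entry.d is None:
                    recom_list.append(name)
                i -= 1
                if entry.d is None:  # until i = 0 ∨ e_i.d = ⊥
                    break
        return recom_list

    # Example: doc1's newest entry is unopened and refers to commitment "c2".
    evidence = {"doc1": [EvidenceEntry("c1", b"opened"), EvidenceEntry("c2")]}
    print(det_recom(evidence, "c2"))  # -> ['doc1']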


Introduction

1.1 Motivation and problem statement

Today, huge amounts of information are generated, exchanged, and stored digitally, and these amounts will grow further in the future. Much of this data contains sensitive information (e.g., electronic health records, governmental documents, enterprise documents) and requires protection of integrity and confidentiality. Integrity of digitally stored information is most commonly ensured using digital signatures (e.g., RSA [21]), and confidentiality is ensured using encryption (e.g., AES [19]). The commonly used schemes are secure only under certain computational assumptions; RSA, for example, requires that computing the prime factors of a large integer is infeasible. Computationally secure cryptographic schemes therefore have a limited lifetime and are insufficient to provide long-term security.
