Abstract

We address the problem of positional indexing in the natural-language domain. A positional inverted index stores the positions of words and can therefore recover the original text file, which means the original file need not be stored. Our Positional Inverted Self-Index (PISI) stores word-position gaps encoded with variable byte code. The inverted lists of the individual terms are merged into a single inverted list that forms the backbone of the text file, since it stores the sequence of indexed words of the original file. This inverted list is synchronized with a presentation layer that stores separators, stop words, and variants of the indexed words; the presentation layer is encoded with Huffman coding. The space complexity of the PISI inverted list is $O\big((N-n)\lceil\log_{2^b} N\rceil + (\lfloor\tfrac{N-n}{\alpha}\rfloor + n)(\lceil\log_{2^b} n\rceil + 1)\big)$, where $N$ is the number of stems, $n$ is the number of unique stems, $\alpha$ is the step (period) of the back pointers in the inverted list, and $b$ is the size of a computer memory word in bits. The space complexity of the presentation layer is $O\big(-\sum_{i=1}^{N}\lceil\log_2 p_{in(i)}\rceil - \sum_{j=1}^{N'}\lceil\log_2 p'_j\rceil + N\big)$, where $p_{in(i)}$ is the probability of the stem variant at position $i$, $p'_j$ is the probability of the separator or stop word at position $j$, and $N'$ is the number of separators and stop words.
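
A minimal sketch of the gap-plus-variable-byte scheme the abstract refers to, assuming the common convention in which each gap is split into 7-bit groups and the high bit marks the last byte of a gap. The function names (positions_to_gaps, vbyte_encode, vbyte_decode) are illustrative only and are not part of PISI itself.

# Illustrative sketch, not the authors' implementation: gap encoding of
# word positions with variable byte code, as used conceptually by the
# PISI inverted list.

def positions_to_gaps(positions):
    """Turn an increasing list of word positions into gaps (deltas)."""
    return [positions[0]] + [b - a for a, b in zip(positions, positions[1:])]

def vbyte_encode(gaps):
    """Encode positive integers: 7-bit groups, most significant first;
    the high bit (0x80) is set only on the last byte of each number."""
    out = bytearray()
    for n in gaps:
        chunk = []
        while True:
            chunk.insert(0, n % 128)
            if n < 128:
                break
            n //= 128
        chunk[-1] |= 0x80          # terminator flag on the last byte
        out.extend(chunk)
    return bytes(out)

def vbyte_decode(data):
    """Decode a variable-byte stream back into the list of gaps."""
    gaps, n = [], 0
    for byte in data:
        if byte & 0x80:            # last byte of this number
            gaps.append(n * 128 + (byte & 0x7F))
            n = 0
        else:
            n = n * 128 + byte
    return gaps

if __name__ == "__main__":
    positions = [3, 7, 18, 300]            # positions of one term in the text
    gaps = positions_to_gaps(positions)    # [3, 4, 11, 282]
    encoded = vbyte_encode(gaps)
    assert vbyte_decode(encoded) == gaps
    print(len(encoded), "bytes for", len(positions), "positions")

Because gaps between consecutive occurrences are usually small, most of them fit into a single byte, which is what makes the gap-plus-variable-byte combination space efficient.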

Highlights

  • The amount of stored text has grown rapidly over the last two decades with the boom of the Internet

  • We address a scenario typical of web search engines and other IR systems [15]

  • How does the search speed of the Positional Inverted Self-Index (PISI) compare to that of a standard positional inverted index?


Summary

Introduction

The amount of stored text has grown rapidly over the last two decades with the boom of the Internet. Apart from web content, many other systems work with large amounts of text, e.g., systems storing e-mail records, application records, scientific papers, literary works, or congressional records. All of these systems hold gigabytes of textual information that must be stored efficiently and repeatedly searched and presented to the user within a very short response time. Compression has become a necessity for such systems. An appropriate compression algorithm significantly reduces the required space, so more data can be cached in a faster memory level closer to the processor (L1/L2/L3 cache or RAM). Compression therefore also brings an improvement in processing speed.

