Abstract

There is a strong need for real-time indexing of massive amounts of data streaming at rates of 10 GB/s or more. This data must be scanned for patterns, and the search results are time-critical in fields as diverse as security surveillance, financial services including stock trading, monitoring of patients' critical health conditions, and climate warning systems. Here, the index is required to age off within a short time and will therefore be of bounded size. Nevertheless, such scenarios cannot tolerate any violation of indexing latency or of strict search response times. Likewise, future massively parallel (multicore) architectures with storage-class memories will enable high-speed in-memory real-time indexing, where the index can be stored entirely in a high-capacity storage-class memory. As the web grows, the number of documents on the web grows as well, so the index size will also grow. The approach is to partition a single large search index into smaller partitions and assign one partition to each node in the cluster. When a search request arrives, each node in the cluster performs a search on its local index for each query term and sends its results back to a central machine, which then combines the search results from all nodes in the cluster and returns them to the user. This paper also presents a comparative analysis of the Rabin-Karp, Knuth-Morris-Pratt, and Boyer-Moore algorithms based on the execution time of the pattern search. Experimental results for the algorithms are presented, and based on these results it is concluded that the KMP algorithm is the better choice for the MLIR framework.
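As a rough illustration of the partition-and-merge approach described above, the following Python sketch simulates the scatter/gather step in a single process; the partition count, the toy document store, and the term-count merge are assumptions made for illustration, not the implementation evaluated in the paper.

# Minimal single-process sketch of scatter/gather search over a partitioned index.
# Hash-based partitioning and the simple score merge are illustrative assumptions.
from collections import defaultdict

NUM_PARTITIONS = 4  # assumed cluster size for this sketch

def build_partitions(documents):
    """Assign each document to a partition and build a term -> doc ids index per partition."""
    partitions = [defaultdict(set) for _ in range(NUM_PARTITIONS)]
    for doc_id, text in documents.items():
        part = partitions[hash(doc_id) % NUM_PARTITIONS]
        for term in text.lower().split():
            part[term].add(doc_id)
    return partitions

def search_partition(partition, query_terms):
    """Each 'node' searches its local index for every query term."""
    hits = defaultdict(int)
    for term in query_terms:
        for doc_id in partition.get(term, ()):
            hits[doc_id] += 1          # count matching query terms per document
    return hits

def scatter_gather(partitions, query):
    """Central machine: fan the query out, then combine the partial results."""
    query_terms = query.lower().split()
    combined = defaultdict(int)
    for partition in partitions:       # these would be remote calls in a real cluster
        for doc_id, score in search_partition(partition, query_terms).items():
            combined[doc_id] += score
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

docs = {"d1": "stock trading alerts", "d2": "patient health monitoring", "d3": "climate warning system"}
print(scatter_gather(build_partitions(docs), "health monitoring"))

The comparison of Rabin-Karp, Knuth-Morris-Pratt, and Boyer-Moore rests on the execution time of the pattern search. The minimal Knuth-Morris-Pratt sketch below, with a simple timing call, shows the kind of measurement involved; the text and pattern used here are placeholders, not data from the reported experiments.

# Minimal Knuth-Morris-Pratt (KMP) sketch: the failure table avoids re-examining
# text characters, giving O(n + m) matching time for text length n and pattern length m.
import time

def kmp_failure(pattern):
    """For each i, length of the longest proper prefix of pattern[:i+1] that is also a suffix."""
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def kmp_search(text, pattern):
    """Return the start offsets of every occurrence of pattern in text."""
    if not pattern:
        return []
    fail, matches, k = kmp_failure(pattern), [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            matches.append(i - k + 1)
            k = fail[k - 1]
    return matches

# Placeholder timing harness: substituting a Rabin-Karp or Boyer-Moore implementation
# here gives the kind of execution-time comparison the paper reports.
text, pattern = "ab" * 500_000 + "abc", "abc"
start = time.perf_counter()
positions = kmp_search(text, pattern)
print(len(positions), "match(es) in", time.perf_counter() - start, "seconds")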
