Purpose: This study examines how pairing an ontology-based web crawler with a near-duplicate detection system improves crawler performance.

Methodology: Because crawling the live web is effectively an endless process, the experiment was carried out on secondary data from a sample website. Under the combined approach, the ontology-based crawler retrieves web pages relevant to the user's search query, while the near-duplicate detection system eliminates redundant data.

Findings: The ontology-based crawler performed better and faster than a conventional crawler, taking less execution time to search the web. This is because the ontology filters web documents so that only those relevant to the user's search query are retrieved. These relevant documents are then filtered further by the near-duplicate detection system, which removes both exact-duplicate and near-duplicate web pages, reducing the number of pages the crawler retrieves. Because irrelevant and redundant pages are discarded, the model also saves storage space.

Unique Contribution to Theory, Practice and Policy: The study recommends making the model dynamic in two ways: adding new relations, so that the crawler finds pages related to a search even when they do not contain the searched keywords, and adding new domains and concepts as new web pages are visited. Weight assignment also needs to be standardized, since at present experts assign weights to terms according to their own areas of expertise and knowledge.
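The abstract describes the relevance filter only at a high level. As a rough sketch of how a weighted-ontology filter of this kind could work, the Python below scores a page by summing expert-assigned weights of the ontology concepts it mentions and keeps only pages above a cut-off; the concept names, weights, threshold, and function names are illustrative assumptions, not values from the study.

```python
# Minimal sketch of an ontology-based relevance filter, assuming the
# ontology maps concepts to expert-assigned weights (the study notes such
# weights are currently set by experts and are not yet standardized).
# All names, weights, and the threshold below are illustrative assumptions.

ONTOLOGY = {
    "crawler": 0.9,
    "ontology": 0.8,
    "indexing": 0.6,
    "duplicate": 0.5,
}

RELEVANCE_THRESHOLD = 1.0  # assumed cut-off for keeping a page


def relevance_score(page_text: str) -> float:
    """Sum the weights of ontology concepts that occur in the page."""
    tokens = set(page_text.lower().split())
    return sum(w for concept, w in ONTOLOGY.items() if concept in tokens)


def is_relevant(page_text: str) -> bool:
    """Keep only pages whose weighted concept score clears the threshold."""
    return relevance_score(page_text) >= RELEVANCE_THRESHOLD


if __name__ == "__main__":
    page = "An ontology driven crawler improves indexing of web documents."
    print(round(relevance_score(page), 2), is_relevant(page))  # 2.3 True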
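The abstract likewise does not name the fingerprinting method behind the near-duplicate detection step. A common stand-in technique is w-shingling with Jaccard similarity, sketched below; the shingle size and similarity threshold are assumed values, not the paper's actual parameters.

```python
# Near-duplicate detection sketch using w-shingling plus Jaccard similarity,
# a common technique; the study's own fingerprinting method is not named in
# the abstract. SHINGLE_SIZE and SIMILARITY_THRESHOLD are assumptions.

SHINGLE_SIZE = 4
SIMILARITY_THRESHOLD = 0.7  # pages at or above this count as near-duplicates


def shingles(text: str, w: int = SHINGLE_SIZE) -> set:
    """Return the set of w-word shingles (overlapping word windows)."""
    words = text.lower().split()
    return {tuple(words[i:i + w]) for i in range(max(1, len(words) - w + 1))}


def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |intersection| / |union| of two shingle sets."""
    return len(a & b) / len(a | b) if (a or b) else 1.0


def is_near_duplicate(page: str, stored_pages: list) -> bool:
    """True if the page is a near-duplicate of any already-stored page."""
    s = shingles(page)
    return any(jaccard(s, shingles(p)) >= SIMILARITY_THRESHOLD
               for p in stored_pages)


if __name__ == "__main__":
    stored = ["the quick brown fox jumps over the lazy dog today"]
    print(is_near_duplicate("the quick brown fox jumps over the lazy dog now", stored))  # True
    print(is_near_duplicate("an entirely different page about web crawlers", stored))   # False
```

In a pipeline like the one the study describes, such a check would run after the ontology filter, so that only one representative of each duplicate cluster is stored, consistent with the reported reduction in retrieved pages and storage space.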