Developing a distributed web crawler poses major engineering challenges, all of which are ultimately related to scale. To maintain a search engine's corpus at a reasonable level of freshness, the crawler must be distributed over multiple computers. In distributed crawling, crawling agents are assigned the task of fetching and downloading web pages. The number and structural heterogeneity of web pages are growing rapidly, making performance a serious challenge for web crawler systems. In this paper, a distributed web crawler for the hidden web is proposed and implemented. It integrates the Scrapy framework with a Redis server. Crawling is split into three stages: adaptation, relevant source selection, and underlying content extraction. The crawler accurately detects and submits searchable forms. Duplicate detection is based on a hybrid technique combining Redis hash maps with SimHash. The Redis server also acts as a data store for massive volumes of web data, so that the growth of hidden web databases is handled scalably.
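As a minimal sketch of how such hybrid duplicate detection could work (not the paper's actual implementation), the example below combines an exact-match check against a Redis set with a SimHash near-duplicate check. The key name `seen:simhash`, the Hamming-distance threshold, and the helper names `simhash`, `hamming`, and `is_duplicate` are assumptions made for illustration.

```python
# Illustrative sketch only: SimHash + Redis duplicate detection.
# Key names and the threshold below are assumptions, not the paper's values.
import hashlib
import redis

FINGERPRINT_BITS = 64
HAMMING_THRESHOLD = 3          # assumed cutoff for near-duplicates
SEEN_KEY = "seen:simhash"      # hypothetical Redis set of fingerprints

def simhash(text: str) -> int:
    """Compute a 64-bit SimHash fingerprint from whitespace tokens."""
    weights = [0] * FINGERPRINT_BITS
    for token in text.split():
        h = int.from_bytes(hashlib.md5(token.encode()).digest()[:8], "big")
        for bit in range(FINGERPRINT_BITS):
            weights[bit] += 1 if (h >> bit) & 1 else -1
    fp = 0
    for bit, w in enumerate(weights):
        if w > 0:
            fp |= 1 << bit
    return fp

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def is_duplicate(r: redis.Redis, text: str) -> bool:
    """Return True if the page is an exact or near duplicate of one seen
    before; otherwise record its fingerprint. The linear scan over stored
    fingerprints is acceptable for a sketch; production systems index
    fingerprints (e.g., by bit bands) to avoid it."""
    fp = simhash(text)
    if r.sismember(SEEN_KEY, fp):           # exact match via Redis set
        return True
    for stored in r.smembers(SEEN_KEY):     # near-duplicate via SimHash
        if hamming(fp, int(stored)) <= HAMMING_THRESHOLD:
            return True
    r.sadd(SEEN_KEY, fp)
    return False
```

Because the fingerprints live in Redis rather than in any single agent's memory, every crawling agent in the cluster can consult the same deduplication state, which is what allows this check to remain consistent as the crawl is distributed over multiple machines.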