The goal of zero-shot sketch-based image retrieval (ZS-SBIR) is to retrieve relevant images from a search set given a hand-drawn sketch query belonging to a class the model has not seen during training. The knowledge gap between seen and unseen classes, together with the domain gap between the query and the search set, makes the problem extremely challenging. In this work, we address this problem by proposing a novel retrieval methodology, StyleGuide, based on style-guided fake-image generation. In addition, we study generalized zero-shot sketch-based image retrieval, where the search set contains images from both seen and unseen categories. Specifically, we propose an approach for detecting unseen-class samples in the search set, based on pre-computed seen-class prototypes, to obtain a refined search set for a given unseen-class query. The query sketch then needs to be compared only against images that are likely to belong to unseen classes, which improves retrieval performance. Extensive experiments on two large-scale sketch-image datasets, Sketchy extended and TU-Berlin, show that the proposed approach performs better than or comparably to the state of the art for ZS-SBIR and yields significant improvements over the state of the art for generalized ZS-SBIR.
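The prototype-based refinement of the search set can be illustrated with a minimal sketch. The abstract does not specify the similarity measure or threshold, so the cosine-similarity rule, the function name `filter_unseen_candidates`, and the threshold value below are all assumptions for illustration: a search-set image whose best match to every seen-class prototype is weak is treated as a likely unseen-class candidate.

```python
import numpy as np

def filter_unseen_candidates(features, prototypes, threshold=0.75):
    """Illustrative (assumed) detection rule: keep search-set samples whose
    maximum cosine similarity to any pre-computed seen-class prototype falls
    below `threshold`, treating them as likely unseen-class images.
    features: (N, d) search-set embeddings; prototypes: (K, d) seen-class prototypes."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    best_sim = (f @ p.T).max(axis=1)          # (N,) best match to any seen class
    return np.where(best_sim < threshold)[0]  # indices of likely-unseen samples

# Toy usage: two seen-class prototypes, three search-set images.
prototypes = np.array([[1.0, 0.0], [0.0, 1.0]])
features = np.array([[1.0, 0.0],    # aligns with a seen prototype
                     [-1.0, 0.0],   # far from both prototypes
                     [0.6, 0.8]])   # close to the second prototype
unseen_idx = filter_unseen_candidates(features, prototypes)
```

An unseen-class sketch query would then be matched only against the retained candidates rather than the full search set, which is the refinement step the abstract describes.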