Research on ocular biometric systems in unrestricted environments has grown steadily over the last decade. Although sclera recognition is widely investigated as an alternative or supplementary ocular biometric modality, most of the work has been carried out in restricted environments. Consequently, several questions remain unanswered, such as how a model trained in a restricted environment performs in an unrestricted scenario, or whether performance improves when it is trained in a combined environment. In this work, we study sclera biometrics across multiple environments and address these questions. In the process, we work on all components of the sclera biometric pipeline, viz., sclera segmentation, vessel extraction, gaze detection and sclera recognition. We train and test the deep models proposed for these tasks on high-quality datasets prepared in restricted environments as well as on mobile datasets prepared in unrestricted environments. We also train the models on images drawn from two or more datasets, which allows us to ascertain how images captured in diverse environments can be used collectively for training. We use two high-quality datasets, SBVPI and MASD, and two mobile datasets, MSD and MASDUM. MASDUM is our proposed mobile dataset, prepared in an unrestricted environment and designed specifically for sclera biometrics research. It contains a total of 1055 eye images along with 1055 sclera-segmented images and 68 sclera-vessel mark-up images. Since vessel-annotated datasets are severely lacking in the field, we hope MASDUM will cater to the needs of researchers. State-of-the-art results are achieved for sclera segmentation on SBVPI, MASD, MSD and MASDUM, with F1-scores of 96.3, 97.3, 87.9 and 91.2, respectively. State-of-the-art results are also achieved for sclera recognition, with respective EERs of 0.132, 0.079, 0.086 and 0.080. We further present an in-depth analysis of the novel dual-output variant of the recognition model DeepR. This variant directly produces the matching score between two vessel-extracted binary images and yields better results by largely reducing the FAR.
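
To make the dual-output idea concrete, the sketch below shows a minimal two-input network that takes a pair of vessel-extracted binary images and directly outputs a single matching score. This is only an illustration of the general scheme described above; the class name, layer sizes, input resolution and the shared-encoder design are assumptions made here and do not reflect the actual DeepR architecture.

```python
# Minimal sketch (PyTorch) of a matcher that maps two binary vessel images to one
# matching score. All names, layer sizes and the 128x128 input size are assumptions.
import torch
import torch.nn as nn


class DualInputMatcher(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder applied to each single-channel binary vessel map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),                      # 32 * 4 * 4 = 512 features
        )
        # Head that maps the concatenated pair of embeddings to one score.
        self.head = nn.Sequential(
            nn.Linear(2 * 512, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),   # matching score in [0, 1]
        )

    def forward(self, vessel_a: torch.Tensor, vessel_b: torch.Tensor) -> torch.Tensor:
        emb_a = self.encoder(vessel_a)
        emb_b = self.encoder(vessel_b)
        return self.head(torch.cat([emb_a, emb_b], dim=1))


if __name__ == "__main__":
    model = DualInputMatcher()
    a = (torch.rand(1, 1, 128, 128) > 0.5).float()   # toy binary vessel maps
    b = (torch.rand(1, 1, 128, 128) > 0.5).float()
    print(model(a, b).item())                        # single matching score
```

In such a setup the network is trained on genuine and impostor pairs, so thresholding its score trades off FAR against FRR; the paper's reported benefit of the dual-output variant is a large reduction in FAR.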