In this study, a multiscale local blur estimation method is proposed, building on an existing local focus measure that combines gradient information and toggle mapping. The method evaluates image quality regardless of content (i.e. outside an autofocus context) and can predict Optical Character Recognition (OCR) accuracy from local blur. The resulting approach outperforms state-of-the-art blur detection methods; quantitative results are given on the DIQA database. Moreover, the authors demonstrate its usefulness for extracting a region of interest from partially blurred images. Results are shown on images acquired within a project devoted to smartphone-based text extraction for visually impaired people. In this application, sharp-region extraction is essential: it allows warning users when their picture is unusable, and it saves computing time.
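To make the toggle-mapping ingredient of the focus measure concrete, the following is a minimal sketch, not the authors' implementation: toggle mapping replaces each pixel with whichever of its local dilation or erosion is closer, and the contrast between the two morphological extrema acts as a simple local sharpness cue. The function name, window size, and use of `scipy.ndimage` operators are illustrative assumptions; a multiscale variant would repeat this at several window sizes.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def toggle_mapping_sharpness(image, size=3):
    """Illustrative per-pixel sharpness cue based on toggle mapping.

    Returns the toggle-mapped image and a local-contrast map
    (dilation minus erosion); large contrast values indicate
    strong local gradients, i.e. sharp content.
    """
    img = image.astype(np.float64)
    dil = grey_dilation(img, size=size)  # local maximum in the window
    ero = grey_erosion(img, size=size)   # local minimum in the window
    # Toggle map: snap each pixel to the closer of the two extrema.
    toggled = np.where(dil - img <= img - ero, dil, ero)
    # Simple focus cue: morphological contrast of the window.
    contrast = dil - ero
    return toggled, contrast
```

On a synthetic image with a sharp vertical step, the contrast map peaks along the edge and is zero in flat regions, which is the behaviour a blur/sharpness detector exploits.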