Abstract
Visually impaired people have difficulty moving safely and independently, which interferes with everyday indoor and outdoor activities and social participation; they also struggle to identify basic features of their surroundings. This paper presents a model that detects the brightness and key colors of real-time images using the RGB method with an external camera, identifies common objects, and performs face recognition against a dataset of human faces [2]. Object detection is a branch of computer vision that locates instances of semantic objects in images and videos. The system uses the ESP32-CAM's camera to continuously capture frames, which are subsequently converted into audio segments. In this project, we use the You Only Look Once v3 (YOLOv3) algorithm, which runs a deep convolutional neural network through OpenCV. Google Text-to-Speech then converts the detected image content to text and the text to speech, so the visually impaired user receives the locations of the objects in the camera's view as audio. Distance measurement is aided by an ultrasonic sensor. The collected results show that the proposed prototype succeeds in giving visually impaired users the ability to perceive unfamiliar surroundings through a user-friendly system that integrates this object detection model.
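As an illustrative sketch only (not the paper's code), the core of the pipeline described above — turning an ultrasonic echo time and a list of detected object labels into the sentence that a text-to-speech engine would speak — can be expressed as follows. The helper names and the 343 m/s speed-of-sound constant are assumptions introduced here for clarity:

```python
# Hypothetical sketch of the detection-to-audio pipeline; in the real system
# the labels come from YOLOv3 via OpenCV and the sentence is passed to
# Google Text-to-Speech (gTTS).

def echo_to_distance_cm(echo_us: float) -> float:
    """Convert an ultrasonic echo round-trip time (microseconds) to distance.

    Sound travels ~0.0343 cm/us at room temperature; the echo covers the
    distance twice (out and back), hence the division by 2.
    """
    return echo_us * 0.0343 / 2


def detections_to_sentence(labels: list[str], distance_cm: float) -> str:
    """Build the sentence that would be handed to a text-to-speech engine."""
    if not labels:
        return "No objects detected."
    objects = ", ".join(labels)
    return f"Detected {objects} at about {distance_cm:.0f} centimeters."


if __name__ == "__main__":
    distance = echo_to_distance_cm(583)  # a ~583 us echo is roughly 10 cm
    print(detections_to_sentence(["person", "chair"], distance))
```

In the deployed system, the sentence returned by `detections_to_sentence` would be synthesized to audio (e.g., with gTTS) and played to the user.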