This project presents an AI-based interactive shopping assistance system for visually impaired individuals, designed to improve their shopping experience and promote inclusivity in retail environments. The system provides personalized assistance, navigation support, and seamless interaction, enabling users to move through retail spaces with greater independence and confidence. A camera module feeds a YOLO-based deep learning model running on a Raspberry Pi 4, which classifies objects in real time and converts the processed information into accessible audio output. The system also offers language customization: using OCR and gTTS, the audio output can be delivered in multiple languages according to user preference. By prioritizing the needs of visually impaired individuals, the system aims to make the retail shopping experience accessible and inclusive for all.
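The detection-to-audio step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the YOLO classifier has already produced (label, confidence) pairs for the current camera frame, and the `announce_detections` helper and its confidence threshold are hypothetical names introduced here. On the device, the returned sentence would be passed to gTTS (e.g. `gTTS(text, lang=user_lang)`) and played back for the user.

```python
def announce_detections(detections, min_conf=0.5):
    """Build the sentence to be spoken from YOLO detections.

    detections: list of (label, confidence) tuples from the object classifier.
    Detections below min_conf are dropped so the audio output stays reliable.
    (Illustrative helper; threshold value is an assumption, not from the paper.)
    """
    labels = [label for label, conf in detections if conf >= min_conf]
    if not labels:
        return "No products detected."
    return "Detected: " + ", ".join(labels) + "."


# Example: two confident detections are announced; the low-confidence one is dropped.
frame_detections = [("cereal box", 0.91), ("milk carton", 0.85), ("unknown", 0.20)]
print(announce_detections(frame_detections))  # Detected: cereal box, milk carton.
```

For the language-customization feature, the same sentence could be translated and then synthesized by selecting the appropriate gTTS language code (for example `"en"` or `"hi"`) from the user's stored preference.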