This study integrates Text-To-Speech (TTS) software, a Global Positioning System (GPS) module, and other technologies into an existing white cane to create a robust navigation system that provides real-time, Nigerian-accented audio feedback and assistance to Students with Visual Impairment (SVI). The design science research methodology was used to develop and validate the GPS-based mobility and object detection white cane for the orientation and mobility of SVI. A speech-corpus database was created to serve as a dictionary for the Text-To-Speech engine and was synthesized through machine learning and artificial intelligence, enabling the object detection white cane to detect objects and identify common places within 30 meters on the Federal College of Education (Special), Oyo campus, Oyo State, Nigeria. The developed object detection white cane was evaluated with 20 SVI selected through purposive sampling, and data were collected through interviews and questionnaires. Two research questions guided the study. The data were analyzed both quantitatively and qualitatively using the Statistical Package for the Social Sciences (SPSS) and Atlas.ti. The results revealed that the mean response of the participants to all items on the integration of Text-To-Speech software into the object detection white cane was "1", indicating that Text-To-Speech software enhances the independent navigation of students with visual impairment. Because the components used were imported and expensive, the study recommended sourcing components locally so that the devices can be produced in large quantities at reduced cost.
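The abstract reports that the cane identifies common campus places within 30 meters and announces them through TTS, but does not detail the implementation. The following is a minimal Python sketch, under assumed details, of how a GPS fix could be matched against a landmark table and routed to a TTS announcement; the landmark names, coordinates, and the announce stub are illustrative placeholders, not the authors' code.

```python
# Minimal sketch (not the authors' implementation): match GPS fixes against
# a landmark table and announce nearby places through a TTS callback.
# Landmark names and coordinates are hypothetical placeholders.
import math

ANNOUNCE_RADIUS_M = 30  # detection radius reported in the study

# Hypothetical campus landmarks: name -> (latitude, longitude)
LANDMARKS = {
    "library": (7.8526, 3.9312),
    "lecture hall A": (7.8531, 3.9320),
    "cafeteria": (7.8540, 3.9305),
}

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in meters."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_landmarks(lat, lon, radius_m=ANNOUNCE_RADIUS_M):
    """Return (name, distance) pairs within the announcement radius."""
    return [
        (name, haversine_m(lat, lon, plat, plon))
        for name, (plat, plon) in LANDMARKS.items()
        if haversine_m(lat, lon, plat, plon) <= radius_m
    ]

def announce(text):
    """Placeholder for the cane's Nigerian-accented TTS engine."""
    print(f"[TTS] {text}")

def on_gps_fix(lat, lon):
    """Called whenever the GPS module reports a new position."""
    for name, dist in nearby_landmarks(lat, lon):
        announce(f"{name} is about {dist:.0f} meters away")

# Example: a fix taken a few meters from the hypothetical library entrance.
on_gps_fix(7.8527, 3.9313)
```

In a deployed device, the announce stub would be replaced by the speech-corpus-backed TTS engine described in the study, and on_gps_fix would be driven by the cane's GPS receiver rather than called manually.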