Abstract — Navigation in unfamiliar places is a major challenge for partially sighted and visually impaired people. Improving access to visual information about the location and content of objects, such as drugs, can ease navigation in unfamiliar environments. Several existing navigation solutions are capable of assisting these users; however, such solutions are rarely adopted or deployed in practice. To optimize the perception of digital information about objects captured by sensors, and to allow natural interaction with the pervasive computing landscape, Augmented Reality (AR) equipment must be seamlessly integrated into the user’s environment. To this end, we develop an AR-based text and object recognition architecture to assist partially sighted and visually impaired people. The architecture eases navigation in the environment by helping users find drugs and verify the number of pills through speech output. The proposed architecture relies on context-aware mobile computing and shows strong potential for integrating AR-based object recognition through AR engines.