Handwritten word recognition is an active topic in automatic handwritten text recognition that has received considerable attention in recent years. Unlike character recognition, word recognition must cope with considerable variation in word shape and writing style. This paper proposes a novel deep model for language-independent handwritten word recognition. The proposed deep structure has two parallel stages for jointly learning character- and word-level information. In the character-level stage, a weak character segmentation is performed, followed by a series of Long Short-Term Memory (LSTM) layers for character-level representation. The word-level stage employs a series of convolutional layers to represent the shape and structure of the word. These representations are then concatenated and followed by a series of fully connected layers that jointly learn word- and character-level information. Since character segmentation is language-dependent and error-prone, the proposed deep structure applies only a weak separation scheme and does not rely on any character segmentation algorithm. Thus, it effectively utilizes character-level representation without being bound to any language model. In the proposed methodology, we use two new data augmentation strategies, based on a psychological assumption, to improve the generalization performance of the model. Experimental results on five public datasets covering Arabic, English, and German demonstrate that the proposed deep model achieves superior performance compared with state-of-the-art methods.
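A minimal PyTorch sketch of the two-branch idea described above is given below. The layer sizes, the closed-vocabulary word classifier, and the way weakly segmented character slices are fed to the LSTM are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of a two-stage (character-level LSTM + word-level CNN) recognizer.
# All dimensions and the vocabulary size are assumed for illustration only.
import torch
import torch.nn as nn


class TwoStageWordRecognizer(nn.Module):
    def __init__(self, num_words=1000, slice_height=32, slice_width=8,
                 lstm_hidden=256, lstm_layers=2):
        super().__init__()
        # Character-level stage: each weakly segmented vertical slice of the
        # word image is flattened and treated as one time step of an LSTM.
        self.char_lstm = nn.LSTM(
            input_size=slice_height * slice_width,
            hidden_size=lstm_hidden,
            num_layers=lstm_layers,
            batch_first=True,
        )
        # Word-level stage: convolutional layers over the whole word image
        # capture its global shape and structure.
        self.word_cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 8)),  # fixed-size feature map
            nn.Flatten(),                  # 64 * 4 * 8 = 2048 features
        )
        # Joint fully connected layers over the concatenated representations.
        self.classifier = nn.Sequential(
            nn.Linear(lstm_hidden + 64 * 4 * 8, 512), nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(512, num_words),
        )

    def forward(self, word_image, char_slices):
        # word_image:  (batch, 1, H, W) grayscale word image
        # char_slices: (batch, num_slices, slice_height * slice_width)
        _, (h_n, _) = self.char_lstm(char_slices)
        char_feat = h_n[-1]                # final hidden state of last layer
        word_feat = self.word_cnn(word_image)
        joint = torch.cat([char_feat, word_feat], dim=1)
        return self.classifier(joint)      # word-class logits


if __name__ == "__main__":
    model = TwoStageWordRecognizer()
    img = torch.randn(2, 1, 64, 256)       # two dummy word images
    slices = torch.randn(2, 20, 32 * 8)    # 20 weak slices per word
    print(model(img, slices).shape)        # torch.Size([2, 1000])
```

In this sketch the two branches are trained jointly end to end, which mirrors the paper's claim of combining character- and word-level cues without committing to a hard character segmentation.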