User click data has proven valuable for fine-grained image classification and is widely used in this task. In previous click-data-based approaches, each image is represented as a click frequency vector over a pre-defined query/word dictionary. However, this representation not only suffers from high dimensionality but also ignores both the part of speech (POS) of individual words and the correlations among words. To address these issues, we devise factorized deep click features to represent images. We first represent each image as factorized TF-IDF click feature vectors that capture word correlations, constructing several word dictionaries for different POS. We then learn an end-to-end deep neural network on click feature tensors built from these factorized TF-IDF vectors. We evaluate our approach on the public Clickture-Dog dataset. The results show that: 1) the deep click features learned on click tensors perform much better than traditional click frequency vectors; and 2) compared with many state-of-the-art textual representations, the proposed deep click features are more discriminative and achieve higher classification accuracies.
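The factorized TF-IDF construction can be illustrated with a minimal sketch. Everything below is hypothetical: the toy vocabulary, POS tags, and click counts are invented for illustration, and the paper's actual dictionaries and tensor layout may differ. The sketch computes a TF-IDF vector per POS dictionary and stacks the results into a per-image click feature tensor.

```python
import numpy as np

# Hypothetical toy data: click counts of 3 images over a tiny query-word
# vocabulary, each word tagged with a part of speech (POS).
words = ["dog", "terrier", "cute", "small", "running"]
pos_tags = ["noun", "noun", "adj", "adj", "verb"]
clicks = np.array([
    [12, 3, 5, 0, 1],   # image 0
    [0, 8, 2, 4, 0],    # image 1
    [7, 0, 0, 1, 3],    # image 2
], dtype=float)

def tfidf(counts):
    """TF-IDF over a click-count matrix (images x words)."""
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1.0)
    df = (counts > 0).sum(axis=0)                      # document frequency
    idf = np.log(counts.shape[0] / np.maximum(df, 1))  # inverse doc frequency
    return tf * idf

# Factorize: one TF-IDF sub-vector per POS dictionary.
pos_dicts = {}
for pos in sorted(set(pos_tags)):          # adj, noun, verb
    cols = [i for i, t in enumerate(pos_tags) if t == pos]
    pos_dicts[pos] = tfidf(clicks[:, cols])

# Stack the per-POS vectors into a click feature tensor
# (images x POS x max_dict_size), zero-padded to a common width.
width = max(v.shape[1] for v in pos_dicts.values())
tensor = np.stack([
    np.pad(v, ((0, 0), (0, width - v.shape[1])))
    for v in pos_dicts.values()
], axis=1)
print(tensor.shape)  # (3, 3, 2): 3 images, 3 POS slices, padded width 2
```

A tensor of this shape, rather than one long concatenated vector, is what would feed the end-to-end network, keeping the per-POS structure explicit and the per-dictionary dimensionality low.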