Video surveillance has become an important means of urban traffic monitoring and control. However, because video scenes are complex and diverse, extracting traffic data from raw video is a sophisticated and difficult task, and the corresponding algorithms have high complexity and computational cost. To reduce algorithm complexity and subsequent computation cost, this study proposes an autoencoder model that effectively reduces the video dimension by optimizing its structural parameters, so that traffic recognition models can perform image processing on the dimension-reduced videos. First, an optimal autoencoder model A* with five hidden layers was constructed. It was then combined with a linear classifier, a support vector machine (SVM), a deep neural network (DNN), a DNN linear classification method, and the k-means clustering method to build five traffic state recognition models: A*-Linear, A*-SVM, A*-DNN, A*-DNN_Linear, and A*-k-means. Training and test results show that the accuracy and recall of A*-Linear, A*-SVM, A*-DNN, and A*-DNN_Linear were 94.5%–97.1% and their F1 scores were 94.4%–97.1%, while the accuracy, recall, and F1 score of A*-k-means were all approximately 95%, suggesting that combining the autoencoder A* with common classification or clustering methods achieves good recognition performance. The five proposed models were also compared with four CNN-based models, namely AlexNet, LeNet, GoogLeNet, and VGG16: the proposed models achieve F1 scores of 94.4%–97.1%, whereas the CNN-based models achieve F1 scores of 16.7%–94%, indicating that the proposed lightweight design outperforms more complex CNN-based models in traffic state recognition.
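To make the described pipeline concrete, the following is a minimal sketch of an autoencoder-plus-classifier model of the kind outlined above (an "A*-Linear"-style combination). It is not the paper's actual configuration: the input dimension, layer widths, bottleneck size, number of traffic-state classes, and the use of the Keras API are all illustrative assumptions.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

INPUT_DIM = 64 * 64      # assumed size of a flattened, downsampled video frame
CODE_DIM = 32            # assumed bottleneck (dimension-reduced) size
NUM_CLASSES = 3          # assumed number of traffic states (e.g., free / slow / congested)

# Autoencoder with five hidden layers: two encoder layers, a bottleneck, two decoder layers.
inputs = layers.Input(shape=(INPUT_DIM,))
h = layers.Dense(512, activation="relu")(inputs)
h = layers.Dense(128, activation="relu")(h)
code = layers.Dense(CODE_DIM, activation="relu", name="code")(h)
h = layers.Dense(128, activation="relu")(code)
h = layers.Dense(512, activation="relu")(h)
outputs = layers.Dense(INPUT_DIM, activation="sigmoid")(h)

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Unsupervised training on (placeholder) frame data to learn the low-dimensional code.
frames = np.random.rand(256, INPUT_DIM).astype("float32")   # placeholder frames
autoencoder.fit(frames, frames, epochs=2, batch_size=32, verbose=0)

# Reuse the trained encoder and attach a linear (softmax) head on the code,
# analogous to pairing the autoencoder with a linear classifier.
encoder = models.Model(inputs, code)
clf_in = layers.Input(shape=(INPUT_DIM,))
features = encoder(clf_in)
logits = layers.Dense(NUM_CLASSES, activation="softmax")(features)
classifier = models.Model(clf_in, logits)
classifier.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])

labels = np.random.randint(0, NUM_CLASSES, size=256)         # placeholder traffic-state labels
classifier.fit(frames, labels, epochs=2, batch_size=32, verbose=0)

The same encoder output could instead be fed to an SVM, a deeper DNN head, or k-means clustering, which is the substitution the other four models (A*-SVM, A*-DNN, A*-DNN_Linear, A*-k-means) make.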