BERT, which captures the contextual information of words, forms the basis of the proposed sentiment classification framework for Chinese microblogs. Coupled with a CNN and an attention mechanism, the BERT model takes Chinese characters as input for vectorization and produces two kinds of vectors: character-level vectors and sentence-level vectors. The character-level vectors are fed into the CNN, where both top-K-average pooling and attention pooling are applied to mine the emotional content of the microblogs. An attention mechanism then fuses the resulting features into a single microblog vector, and the final classification result is obtained by applying a softmax function to that vector. Experimental results show a marked improvement over benchmark methods on both classification tasks.
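To make the pipeline concrete, the following is a minimal PyTorch sketch of the described architecture: BERT character-level vectors pass through a CNN, are summarized by top-K-average pooling and attention pooling, and are fused with the sentence-level vector by an attention mechanism before a softmax classifier. The module name, layer sizes, kernel size, value of K, and the exact form of the two pooling and fusion operations are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BertCnnAttnClassifier(nn.Module):
    """Illustrative sketch: BERT character vectors -> CNN ->
    top-K-average pooling + attention pooling -> attention fusion -> softmax.
    All hyperparameters below are assumed values, not those of the paper."""

    def __init__(self, hidden_size=768, num_filters=128, kernel_size=3,
                 top_k=5, num_classes=2):
        super().__init__()
        # 1-D convolution over the character-level BERT vectors.
        self.conv = nn.Conv1d(hidden_size, num_filters, kernel_size, padding=1)
        self.top_k = top_k
        # Attention pooling: a learned scorer weights each character position.
        self.attn_pool = nn.Linear(num_filters, 1)
        # Projection of the sentence-level BERT vector into the same feature space.
        self.sent_proj = nn.Linear(hidden_size, num_filters)
        # Attention fusion over the pooled/sentence feature vectors.
        self.fuse_attn = nn.Linear(num_filters, 1)
        self.classifier = nn.Linear(num_filters, num_classes)

    def forward(self, char_vectors, sent_vector):
        # char_vectors: (batch, seq_len, hidden_size) character-level BERT outputs.
        # sent_vector:  (batch, hidden_size) sentence-level BERT output.
        x = char_vectors.transpose(1, 2)                 # (batch, hidden, seq_len)
        feats = torch.relu(self.conv(x))                 # (batch, filters, seq_len)

        # Top-K-average pooling: mean of the K largest activations per filter.
        topk_vals, _ = feats.topk(self.top_k, dim=2)
        topk_pool = topk_vals.mean(dim=2)                # (batch, filters)

        # Attention pooling: position-wise weighted sum of the feature maps.
        feats_t = feats.transpose(1, 2)                  # (batch, seq_len, filters)
        weights = F.softmax(self.attn_pool(feats_t), dim=1)
        attn_pool = (feats_t * weights).sum(dim=1)       # (batch, filters)

        # Attention fusion of the pooled features and the sentence-level vector
        # into a single microblog vector.
        sent_feat = torch.relu(self.sent_proj(sent_vector))
        pooled = torch.stack([topk_pool, attn_pool, sent_feat], dim=1)
        fuse_weights = F.softmax(self.fuse_attn(pooled), dim=1)
        microblog_vec = (pooled * fuse_weights).sum(dim=1)

        # Softmax over the microblog vector gives the class probabilities.
        return F.softmax(self.classifier(microblog_vec), dim=-1)
```

In this sketch, `char_vectors` and `sent_vector` would come from a pretrained Chinese BERT (e.g., the last-layer token states and the pooled [CLS] representation); the fusion step uses a simple learned attention over three candidate feature vectors, which is one plausible reading of "fuse various features into the microblog vector".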