This paper presents lexical story co-segmentation, a new approach to automatically extracting stories on the same topic from multiple Chinese broadcast news documents. Unlike topic detection and tracking, our approach does not require guidance from well-trained topic models and can consistently segment the common stories from the input documents. Following the Markov random field (MRF) framework, we construct a Gibbs energy function that balances intra-document and inter-document lexical semantic dependencies, and we solve story co-segmentation as a binary labeling problem at the sentence level. Because measuring lexical semantic similarity is central to story co-segmentation, we propose a weakly supervised correlated affinity graph (WSCAG) model that effectively derives latent semantic similarities between Chinese words from the target corpus. On this basis, we extend the classical cosine similarity by mapping the observed word distributions into the latent semantic space, which yields a generalized lexical cosine similarity measure. Extensive experiments on a benchmark dataset validate the effectiveness of our story co-segmentation approach. In addition, we demonstrate the superior performance of the proposed WSCAG semantic similarity measure over other state-of-the-art semantic measures for story co-segmentation.
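The generalized lexical cosine similarity described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes a word-to-latent-space mapping matrix `W` (which in the paper would be derived from the WSCAG model; here it is a hypothetical placeholder) and compares term-frequency vectors after projecting them into the latent space.

```python
import numpy as np

def generalized_cosine(x, y, W):
    """Cosine similarity between term-frequency vectors x, y (length V)
    after mapping them into a K-dimensional latent semantic space via
    the V x K matrix W. With W = I this reduces to classical cosine."""
    xl, yl = W.T @ x, W.T @ y          # project into latent space
    denom = np.linalg.norm(xl) * np.linalg.norm(yl)
    return float(xl @ yl / denom) if denom > 0 else 0.0
```

For example, if two sentences share no words but their words map to the same latent dimension (e.g., synonyms), the generalized measure reports high similarity where the classical cosine would be zero.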