Aspect Sentiment Triplet Extraction (ASTE) aims to identify triplets of aspect terms, opinion terms, and sentiment polarities in reviews. Although previous methods that train a specific model for each domain have achieved impressive results, the assumption that every domain offers sufficient labeled samples does not always hold, and the performance of most models degrades significantly in domains that lack labeled data. To this end, we explore ASTE in a cross-domain setting to address sentiment triplet extraction when the target domain has no labeled samples. In this paper, we propose a domain-consistent syntactic representation method that transfers knowledge mined from resource-rich source domains to unlabeled target domains by minimizing the discrepancy between the syntactic representations of the two domains. Specifically, we design a part-of-speech perception network, which captures the part-of-speech patterns of words that act as task-related entities, and a dependency perception network, which learns latent dependencies between words. Meanwhile, we adopt Maximum Mean Discrepancy (MMD) to minimize the discrepancy between the rule-informed syntactic representations of the two domains, yielding domain-consistent syntactic representations that transfer the mined knowledge and provide domain-invariant features for the classifier's decisions. Extensive experiments on eight datasets from different domains demonstrate the superiority of our method.
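The abstract does not spell out the MMD estimator used to align the two domains; as a hedged illustration only, a minimal NumPy sketch of a biased Gaussian-kernel MMD² between batches of source- and target-domain representations (the function names and the fixed bandwidth `sigma` are our assumptions, not the paper's) might look like:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel matrix between rows of a and b.

    a: (n, d) array, b: (m, d) array -> (n, m) kernel matrix.
    sigma is an assumed fixed bandwidth; papers often use a
    multi-kernel or median-heuristic bandwidth instead.
    """
    # Pairwise squared Euclidean distances via broadcasting.
    sq_dists = (
        np.sum(a ** 2, axis=1)[:, None]
        + np.sum(b ** 2, axis=1)[None, :]
        - 2.0 * a @ b.T
    )
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    """Biased estimate of squared MMD between two sample sets.

    Returns k(s,s).mean() + k(t,t).mean() - 2 * k(s,t).mean(),
    which is ~0 when the two batches come from the same
    distribution and grows as the distributions diverge.
    """
    k_ss = gaussian_kernel(source, source, sigma)
    k_tt = gaussian_kernel(target, target, sigma)
    k_st = gaussian_kernel(source, target, sigma)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()
```

In a training loop, such a term would typically be added to the task loss so that the encoder is pushed to produce syntactic representations whose source and target batch statistics match in the kernel-induced feature space.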