Abstract
Zero-shot event detection aims to automatically discover and classify new types of events in unstructured text. Existing zero-shot event detection methods have not approached the problem from the perspective of improving event representations. In this paper, we propose a dual-contrastive prompting (COPE) model that learns event representations for zero-shot event detection: it leverages prompts to generate event embeddings with a pretrained language model, and employs a contrastive fusion approach to capture the complex interactions between trigger representations and sentence embeddings, yielding enhanced event representations. Firstly, we introduce a sample generator that creates an ordered sequence of contrastive samples with varying degrees of similarity for each event instance, helping the model better distinguish different event types. Secondly, we design two distinct prompts to obtain trigger representations and event sentence embeddings separately. Thirdly, we employ a contrastive fusion module in which trigger representations and event sentence embeddings interact in vector space to produce the final event representations. Experiments show that our model outperforms state-of-the-art methods.
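The abstract's two core ideas, fusing trigger and sentence embeddings into one event representation and training against an ordered sequence of contrastive samples, can be illustrated with a minimal sketch. This is our own simplified illustration, not the paper's architecture: the fusion here is plain concatenation plus normalization, the loss is a generic InfoNCE-style objective, and all function names are hypothetical.

```python
import numpy as np

def fuse(trigger_emb, sentence_emb):
    """Fuse a trigger representation and a sentence embedding into one
    event representation. Here: concatenation + L2 normalization
    (an assumption; the paper's fusion module is more involved)."""
    v = np.concatenate([trigger_emb, sentence_emb])
    return v / np.linalg.norm(v)

def ranked_contrastive_loss(anchor, ordered_samples, temperature=0.1):
    """InfoNCE-style loss over an ordered contrastive sample sequence:
    ordered_samples[0] is the most similar (positive) sample and later
    entries serve as progressively harder negatives."""
    sims = np.array([anchor @ s for s in ordered_samples]) / temperature
    log_probs = sims - np.log(np.exp(sims).sum())
    return -log_probs[0]  # maximize probability of the top-ranked sample
```

Minimizing this loss pulls the anchor event representation toward its most similar generated sample while pushing it away from the less similar ones, which is the intuition behind using ordered sample sequences to separate event types in embedding space.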