In crisis situations, emergency response organizations can derive significant benefit from the actionable information shared on social media, which offers timely guidance and support for rescue operations. Unlike traditional information sources, social media reflects the latest developments during a crisis, along with actionable suggestions and the needs of affected individuals. However, existing methods that leverage pre-trained language models for this task require large amounts of annotated data for fine-tuning, and acquiring such data is costly and often impractical in data-scarce scenarios. To address this challenge, we propose a Knowledge-injected Actionable Information Extraction Model (KAIEM) that extracts actionable information from crisis-related tweets in few-shot and zero-shot settings. Built on prompt learning, KAIEM automatically constructs the label word space of its verbalizers from word embeddings, eliminating the need for manual design or external knowledge bases and enabling accurate predictions without labeled data for fine-tuning. In addition, KAIEM employs knowledge probing to inject conceptual knowledge into prompt templates, thereby enriching the semantics of crisis-related tweets. Experiments on a real-world crisis-related dataset demonstrate the effectiveness of KAIEM in few-shot and zero-shot scenarios.
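As a rough illustration of the verbalizer idea mentioned above (not the authors' actual implementation), the sketch below shows one way a label word space could be expanded from word embeddings and then used to score a masked-LM prediction at the prompt's mask position. All class names, vocabulary words, embeddings, and probabilities here are toy placeholders.

```python
# Hypothetical sketch: expanding a verbalizer's label word space from word
# embeddings (nearest neighbours of each class name), then scoring a
# masked-LM prediction by aggregating probability mass over that space.
# The labels, vocabulary, and embeddings below are illustrative toy data.
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def build_label_word_space(class_names, vocab, embeddings, top_k=3):
    """For each class name, keep the top_k most similar vocabulary words."""
    space = {}
    for name in class_names:
        sims = [(w, cosine(embeddings[name], embeddings[w]))
                for w in vocab if w != name]
        ranked = [w for w, _ in sorted(sims, key=lambda x: -x[1])[:top_k]]
        space[name] = [name] + ranked
    return space

def score_labels(mask_token_probs, label_word_space):
    """Aggregate the [MASK]-position probability assigned to each label's words."""
    return {label: sum(mask_token_probs.get(w, 0.0) for w in words)
            for label, words in label_word_space.items()}

# Toy example: a tiny random embedding table standing in for real word vectors.
rng = np.random.default_rng(0)
vocab = ["rescue", "help", "donate", "supply", "damage", "collapse"]
class_names = ["request", "report"]
embeddings = {w: rng.normal(size=8) for w in vocab + class_names}

space = build_label_word_space(class_names, vocab, embeddings, top_k=2)
probs = {"help": 0.30, "rescue": 0.25, "damage": 0.20}  # pretend [MASK] distribution
print(score_labels(probs, space))
```

In this kind of scheme, the class whose label word space accumulates the most probability mass would be taken as the prediction, which is what lets the verbalizer work without any fine-tuning on labeled examples.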