Jointly extracting elements such as entities, relations, events, and their arguments, together with the specific relationships among them, is a critical task in natural language processing. Existing work typically handles inter-task interactions implicitly, through shared encodings or parameter sharing, without explicitly modeling the specific relationships between tasks. This limits the use of inter-task correlation information and hinders effective collaboration among tasks. To address this, we propose a joint element and relation extraction model based on causal relationship representation enhancement (CRE). The model captures the specific relationships between tasks in multiple stages, enabling finer-grained adjustment and optimization of each subtask and thereby improving overall performance. Specifically, CRE comprises three key modules: feature adaptation, feature interaction, and feature fusion. The feature adaptation module selects and adjusts features from the shared encoding according to the requirements of each task, better accommodating the semantic differences between tasks. The feature interaction module employs causal reasoning to comprehensively capture the causal relationships between tasks while mitigating the negative transfer caused by interfering semantic information. The feature fusion module further integrates these features to obtain optimized task-specific representations. Across multiple information extraction tasks, CRE achieves a significant improvement in average performance.
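
To make the adaptation, interaction, and fusion pipeline concrete, the following is a minimal sketch of how such a two-task flow over a shared encoding could be wired up. All class names (`FeatureAdaptation`, `FeatureInteraction`, `FeatureFusion`, `CRESketch`) and internal design choices (a gated projection for adaptation, cross-attention as a stand-in for the causal-reasoning interaction, residual-style concatenation for fusion) are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Illustrative sketch of the adaptation -> interaction -> fusion flow (assumptions, not the paper's code).
import torch
import torch.nn as nn


class FeatureAdaptation(nn.Module):
    """Select/adjust shared-encoder features for one task (assumed gated projection)."""
    def __init__(self, hidden: int):
        super().__init__()
        self.proj = nn.Linear(hidden, hidden)
        self.gate = nn.Linear(hidden, hidden)

    def forward(self, shared: torch.Tensor) -> torch.Tensor:
        # The gate decides how much of each shared feature this task keeps.
        return torch.sigmoid(self.gate(shared)) * self.proj(shared)


class FeatureInteraction(nn.Module):
    """Capture dependencies between two tasks' features (cross-attention used here
    as a simple stand-in for the paper's causal-reasoning component)."""
    def __init__(self, hidden: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)

    def forward(self, query_feats: torch.Tensor, other_feats: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(query_feats, other_feats, other_feats)
        return out


class FeatureFusion(nn.Module):
    """Fuse adapted and interaction features into a task-specific representation."""
    def __init__(self, hidden: int):
        super().__init__()
        self.fuse = nn.Linear(2 * hidden, hidden)

    def forward(self, adapted: torch.Tensor, interacted: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.fuse(torch.cat([adapted, interacted], dim=-1)))


class CRESketch(nn.Module):
    """Two-task version of the pipeline; real joint extraction would use one branch per subtask."""
    def __init__(self, hidden: int):
        super().__init__()
        self.adapt_a = FeatureAdaptation(hidden)   # e.g. an entity/trigger branch
        self.adapt_b = FeatureAdaptation(hidden)   # e.g. a relation/argument branch
        self.interact = FeatureInteraction(hidden)
        self.fuse_a = FeatureFusion(hidden)
        self.fuse_b = FeatureFusion(hidden)

    def forward(self, shared: torch.Tensor):
        # `shared` is the shared-encoder output: (batch, seq_len, hidden).
        a, b = self.adapt_a(shared), self.adapt_b(shared)
        a_fused = self.fuse_a(a, self.interact(a, b))
        b_fused = self.fuse_b(b, self.interact(b, a))
        return a_fused, b_fused  # task-specific representations for downstream decoders


if __name__ == "__main__":
    x = torch.randn(2, 16, 256)          # dummy shared-encoder output
    rep_a, rep_b = CRESketch(256)(x)
    print(rep_a.shape, rep_b.shape)      # torch.Size([2, 16, 256]) for each branch
```

In this sketch the interaction step is symmetric; a causal-reasoning variant would instead constrain or reweight the cross-task information flow so that only relevant, non-interfering signals are transferred between branches.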