Multi-agent reinforcement learning (MARL) has demonstrated significant potential for enabling cooperation among agents. The communication protocol, which governs message exchange between agents, is crucial to this cooperation. However, communicative MARL systems still face challenges from noisy messages in complex multi-agent decision processes, an issue that often stems from the entangled representation of observations and messages in policy networks. To address this, we propose the Message Action Adapter Framework (MAAF), which first trains individual agents without message inputs and then learns a residual action adaptation conditioned on message components. This separation isolates the effect of messages on action inference. We explore how training MAAF with model-agnostic message types and varying optimization strategies influences adaptation performance. The experimental results indicate that MAAF achieves performance competitive with multiple baselines despite using only half of the available communication, and shows an average improvement of 7.6% over the full attention-based communication approach. Additional findings suggest that different message types lead to significant performance variations, underscoring the importance of environment-specific message design. We demonstrate how the proposed architecture separates communication channels, effectively isolating message contributions.
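To make the two-stage idea concrete, the following minimal Python sketch (not the authors' implementation; the module names, layer sizes, and the additive-logits form are assumptions) shows a message-free base policy whose action logits are corrected by a message-conditioned residual head:

```python
import torch
import torch.nn as nn


class MessageActionAdapter(nn.Module):
    """Illustrative sketch of the adapter idea: a frozen, message-free base
    policy plus a message-conditioned residual added to its action logits."""

    def __init__(self, obs_dim, msg_dim, n_actions, hidden=64):
        super().__init__()
        # Stage 1 (assumed): base policy trained on observations only, then frozen.
        self.base_policy = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions)
        )
        # Stage 2 (assumed): residual head conditioned on received messages.
        self.adapter = nn.Sequential(
            nn.Linear(msg_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions)
        )

    def forward(self, obs, msg):
        with torch.no_grad():              # keep the base policy fixed
            base_logits = self.base_policy(obs)
        residual = self.adapter(msg)       # message-driven correction
        return base_logits + residual      # adapted action logits
```

Because only the adapter receives messages, the contribution of communication to the final action is isolated from the observation pathway, which is the separation the abstract refers to.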