The integration of AI and ML into energy forecasting is crucial for modern energy management. Federated Learning (FL) is particularly noteworthy because it enhances data privacy and facilitates collaboration among distributed energy resources: it enables model training across multiple locations while minimizing reliance on centralized servers and raw-data transfers. However, FL faces significant security challenges, particularly from adversarial attacks that can undermine model integrity and reliability. This paper addresses these security concerns by evaluating the effectiveness of Centralized Federated Learning (CFL) and Decentralized Federated Learning (DFL) in distributed load forecasting. We conducted a comparative analysis of short-term load forecasting on two publicly available datasets, household data and smart grid data. Our study finds that DFL is more robust against adversarial attacks and artificial noise than CFL. Specifically, adversarial model poisoning attacks impact only the targeted client in DFL, whereas CFL is more broadly affected. For instance, in the presence of attacks, CFL's average client Mean Absolute Error (MAE) increased from 0.084 kWh to 0.78 kWh for Dataset 1 and from 0.475 MW to 11.159 MW for Dataset 2, while DFL maintained relatively low error rates. Conversely, artificial noise had a less pronounced effect on CFL than on DFL. Additionally, we introduce Decentralized Random Layer Aggregation (DRLA) to enhance DFL's robustness, offering further insights into improving FL methodologies in the energy sector. Our findings also indicate that exchanging only a single layer of the neural network for local updates is sufficient during the aggregation process in DFL.
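To make the single-layer aggregation idea concrete, the sketch below shows one possible round of decentralized random-layer aggregation. This is an illustrative toy, not the paper's implementation: the ring topology, the uniform choice of which layer to share, and the plain averaging rule are all assumptions made for the example.

```python
import random
import numpy as np

def drla_round(client_weights, neighbors, rng):
    """One illustrative DRLA-style round: each client picks a single layer
    at random, averages it with its neighbors' copies of that layer, and
    keeps every other layer strictly local (an assumed aggregation rule)."""
    n_layers = len(next(iter(client_weights.values())))
    updated = {}
    for client, layers in client_weights.items():
        layer_idx = rng.randrange(n_layers)  # the one layer exchanged this round
        peer_layers = [client_weights[p][layer_idx] for p in neighbors[client]]
        new_layers = list(layers)
        new_layers[layer_idx] = np.mean([layers[layer_idx]] + peer_layers, axis=0)
        updated[client] = new_layers
    return updated

# Toy setup: 3 clients, 2-layer "models", ring topology (all hypothetical).
rng = random.Random(0)
clients = {
    c: [np.full((2, 2), float(k * 10 + i)) for i in range(2)]
    for k, c in enumerate("abc")
}
neighbors = {"a": ["b"], "b": ["c"], "c": ["a"]}
new_weights = drla_round(clients, neighbors, rng)
```

Because only one layer per client crosses the network each round, the per-round communication cost shrinks roughly by a factor of the layer count, which is consistent with the finding that a single layer suffices for local updates.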