In this paper, agents learn how often to exchange information with their neighbors in cooperative Multi-Agent Systems (MASs) so that user-defined cost functions are minimized. The investigated cost functions capture trade-offs between local MAS control performance and the energy consumption of each agent in the presence of exogenous disturbances. Agent energy consumption is critical for prolonging the MAS mission and comprises both control (e.g., acceleration, velocity) and communication efforts. The proposed methodology begins by computing upper bounds on asynchronous broadcasting intervals that provably stabilize the MAS. Subsequently, we utilize these upper bounds as optimization constraints and employ an online learning algorithm based on Least-Squares Policy Iteration (LSPI) to minimize the cost function of each agent. Consequently, the obtained broadcasting intervals adapt to the most recent information (e.g., delayed and noisy agents’ inputs and/or outputs) received from neighbors and provably stabilize the MAS. Chebyshev polynomials serve as the function approximator in the LSPI, while Kalman Filtering (KF) handles sampled, corrupted, and delayed data. The proposed methodology is exemplified in a consensus control problem with general linear agent dynamics.
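To make the learning step concrete, the following is a minimal sketch of the LSPI core (the LSTD-Q variant) with a Chebyshev polynomial basis, assuming a scalar state scaled to [-1, 1] and a finite set of candidate broadcasting intervals as actions. All function names, the sample format, the reward shape, and the ridge regularization are illustrative assumptions for exposition, not the implementation used in the paper.

```python
import numpy as np

def chebyshev_features(x, order=4):
    """Chebyshev basis T_0..T_order for a scalar state assumed scaled to [-1, 1]."""
    T = np.empty(order + 1)
    T[0] = 1.0
    if order >= 1:
        T[1] = x
    for n in range(1, order):
        T[n + 1] = 2.0 * x * T[n] - T[n - 1]  # recurrence: T_{n+1} = 2x T_n - T_{n-1}
    return T

def q_features(x, a, n_actions, order=4):
    """Block features for Q(x, a): the state basis placed in the slot of action a."""
    phi = np.zeros(n_actions * (order + 1))
    phi[a * (order + 1):(a + 1) * (order + 1)] = chebyshev_features(x, order)
    return phi

def lspi(samples, n_actions, order=4, gamma=0.95, iters=20, tol=1e-6):
    """Least-Squares Policy Iteration on a batch of (x, a, r, x_next) samples."""
    k = n_actions * (order + 1)
    w = np.zeros(k)
    for _ in range(iters):
        A = 1e-3 * np.eye(k)   # small ridge term keeps A invertible
        b = np.zeros(k)
        for x, a, r, x_next in samples:
            phi = q_features(x, a, n_actions, order)
            # greedy action of the current policy at the next state
            a_next = max(range(n_actions),
                         key=lambda u: q_features(x_next, u, n_actions, order) @ w)
            phi_next = q_features(x_next, a_next, n_actions, order)
            A += np.outer(phi, phi - gamma * phi_next)  # LSTD-Q accumulation
            b += r * phi
        w_new = np.linalg.solve(A, b)
        if np.linalg.norm(w_new - w) < tol:
            return w_new
        w = w_new
    return w

# Usage on synthetic random transitions (hypothetical data, for illustration only):
# 3 candidate broadcasting intervals as actions; the reward penalizes both state
# deviation (control effort) and larger action indices (communication effort).
rng = np.random.default_rng(0)
samples = []
for _ in range(200):
    x = rng.uniform(-1, 1)
    a = int(rng.integers(3))
    x_next = float(np.clip(x + rng.normal(0, 0.1), -1, 1))
    samples.append((x, a, -x**2 - 0.1 * a, x_next))
w = lspi(samples, n_actions=3)
```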