Recent big models such as GPT-3.5 possess an extensive understanding of natural language and can perform a wide range of tasks, making them a significant advancement in artificial intelligence (AI). A critical challenge in the design and implementation of big models is that they impose a heavy load on wireless transmission due to the huge size of the network parameters, especially in distributed implementations. To tackle this challenge, we investigate big model transmission under practical double Rayleigh fading environments, where the big model is simultaneously distributed to multiple training nodes. To evaluate the system performance, we study the outage probability (OP) based on the transmission latency, and we derive an analytical expression for the OP. Finally, we present simulations under double Rayleigh fading environments to validate the proposed big model transmission scheme.
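As a rough illustration of the latency-based outage metric described above, the following Monte Carlo sketch estimates the OP under double Rayleigh fading, modeled as the product of two independent Rayleigh amplitudes. All numerical parameters (model size, bandwidth, average SNR, latency deadline) are hypothetical placeholders, not values from the paper, and a Shannon-capacity rate is assumed for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (assumed for illustration, not from the paper).
MODEL_BITS = 1e9       # big-model size in bits
BANDWIDTH = 100e6      # channel bandwidth in Hz
AVG_SNR = 10.0         # average SNR (linear scale)
T_MAX = 5.0            # latency deadline in seconds
N_TRIALS = 200_000     # Monte Carlo trials

# Double Rayleigh fading: the end-to-end amplitude is the product of two
# independent Rayleigh amplitudes (a cascaded channel). The scale 1/sqrt(2)
# normalizes each link's average power gain to 1.
h1 = rng.rayleigh(scale=1 / np.sqrt(2), size=N_TRIALS)
h2 = rng.rayleigh(scale=1 / np.sqrt(2), size=N_TRIALS)
gain = (h1 * h2) ** 2                            # power gain |h1*h2|^2

rate = BANDWIDTH * np.log2(1 + AVG_SNR * gain)   # achievable rate, bits/s
latency = MODEL_BITS / rate                      # time to deliver the model

# Outage event: the transmission latency exceeds the deadline.
outage_prob = np.mean(latency > T_MAX)
print(f"Monte Carlo OP estimate: {outage_prob:.4f}")
```

In a multi-node distribution setting, the same computation would be repeated per training node (or per broadcast link), with the system-level OP defined over the set of links; the single-link sketch above only conveys the latency-threshold structure of the metric.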