In this paper, we consider a networked control system (NCS) with network-induced delay, in which a control center must control a remote unmanned aerial vehicle (UAV) to complete a trajectory tracking task. The sensor of the controlled UAV adopts an event-triggered mechanism, while the control center uses an adaptive dynamic programming (ADP) based tracking control method to generate control actions. The ADP method introduces a new transmission option: the control center can choose to transmit either the control action or the neural network model. Considering the fundamental tradeoff between these two options, which differ in transmission energy consumption and tracking cost, we formulate the joint optimization problem as a Markov decision process (MDP). Since the state in the MDP is continuous-valued, we propose a strategy based on a reinforcement learning (RL) algorithm, the deep Q-network (DQN). We further propose a greedy strategy that selects the option with the lower instantaneous expected cost. Simulation results show that the DQN-based strategy performs better but depends on a training process, whereas the greedy strategy is suboptimal but easy to compute. Moreover, compared with the benchmark strategies, the proposed strategies achieve a better compromise between the long-term average energy consumption and the tracking cost by adjusting the weighting factor. Finally, by comparing the transmission decisions of the two proposed strategies, we show that the proper transmission sequence found by the DQN-based strategy can reduce the tracking cost and the transmission energy simultaneously.
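The binary transmission choice described above (send the control action vs. send the neural network model) can be sketched as a toy DQN-style loop. The state, cost terms, environment dynamics, and network sizes below are illustrative assumptions, not the paper's system model; the Q-network is a small two-layer NumPy perceptron updated with a semi-gradient TD step, omitting the replay buffer and target network of a full DQN.

```python
import numpy as np

# Hypothetical sketch: action 0 = transmit control action, action 1 = transmit
# NN model. Weighted cost trades off transmission energy and tracking error.
rng = np.random.default_rng(0)

STATE_DIM, HIDDEN, N_ACTIONS = 3, 16, 2
GAMMA, LR, EPS = 0.9, 1e-2, 0.1

# One-hidden-layer Q-network: Q(s) = W2 @ relu(W1 @ s + b1) + b2
W1 = rng.normal(0, 0.1, (HIDDEN, STATE_DIM)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (N_ACTIONS, HIDDEN)); b2 = np.zeros(N_ACTIONS)

def q_values(s):
    h = np.maximum(W1 @ s + b1, 0.0)          # ReLU hidden layer
    return W2 @ h + b2, h

def step(s, a):
    # Toy dynamics: sending the model (a=1) costs more energy now but
    # reduces the tracking error more. These numbers are placeholders.
    energy = 1.0 if a == 0 else 3.0
    tracking = max(s[0] - (0.2 if a == 0 else 0.8), 0.0)
    cost = 0.5 * energy + 0.5 * tracking      # weighting factor = 0.5
    s_next = np.array([tracking, 0.9 * s[1], rng.uniform()])
    return s_next, -cost                      # reward = negative cost

s = np.array([1.0, 0.5, 0.3])
for t in range(200):
    q, h = q_values(s)
    a = int(np.argmax(q)) if rng.uniform() > EPS else int(rng.integers(N_ACTIONS))
    s_next, r = step(s, a)
    target = r + GAMMA * np.max(q_values(s_next)[0])
    td = target - q[a]                        # TD error for the taken action
    grad_h = LR * td * W2[a] * (h > 0)        # backprop through ReLU
    W2[a] += LR * td * h                      # semi-gradient update
    b2[a] += LR * td
    W1 += np.outer(grad_h, s)
    b1 += grad_h
    s = s_next

q_final = q_values(s)[0]
print(q_final.shape)  # (2,) -- one Q-value per transmission option
```

The same skeleton extends to the paper's setting by replacing the toy dynamics with the delayed NCS state and the weighted energy/tracking objective.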