Purpose
This paper studies the application of reinforcement learning (RL) to the control of an output-constrained flapping-wing micro aerial vehicle (FWMAV) with system uncertainty.

Design/methodology/approach
A six-degrees-of-freedom hummingbird model is used, neglecting the inertial effects of the wings. An RL algorithm based on the actor–critic framework is applied, consisting of an actor network with an unknown policy gradient and a critic network with an unknown value function. Given the strength of neural networks (NNs) in fitting nonlinearities and their optimization properties, an actor–critic NN optimization algorithm is designed, in which the actor NN generates the policy and the critic NN approximates the cost function. In addition, to ensure safe and stable flight of the FWMAV, a barrier Lyapunov function is used to keep the flight states within predefined regions. The stability of the closed-loop system is analyzed using Lyapunov stability theory, and the feasibility of RL for FWMAV control is verified through simulation.

Findings
The proposed RL control scheme ensures trajectory tracking of the FWMAV in the presence of output constraints and system uncertainty.

Originality/value
A novel RL algorithm based on the actor–critic framework is applied to the control of a FWMAV with system uncertainty. For stable and safe flight of the FWMAV, the output constraint problem is addressed through barrier Lyapunov function-based control.
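To illustrate the constraint-keeping idea, the following is a minimal sketch of a log-type barrier Lyapunov function on a scalar error dynamic with a bounded disturbance standing in for the system uncertainty. The log-type form, the integrator dynamics, the bound `k_b`, and the gain are illustrative assumptions, not the paper's actual FWMAV model or control law.

```python
import math

def barrier_lyapunov(e, k_b):
    """Log-type barrier Lyapunov function V = 0.5*log(k_b^2/(k_b^2 - e^2)).
    Finite while |e| < k_b and unbounded as |e| approaches k_b."""
    assert abs(e) < k_b
    return 0.5 * math.log(k_b**2 / (k_b**2 - e**2))

def blf_control(e, k_b, gain=2.0):
    """Control proportional to dV/de = e/(k_b^2 - e^2): the effort grows
    sharply near the constraint boundary, which keeps |e| < k_b."""
    return -gain * e / (k_b**2 - e**2)

# Toy error dynamic e_dot = u + d, with d a bounded unknown disturbance
# (a stand-in for system uncertainty), integrated by forward Euler.
k_b, dt, e = 0.5, 0.001, 0.4
for step in range(20000):
    d = 0.3 * math.sin(0.01 * step)   # hypothetical disturbance
    e += dt * (blf_control(e, k_b) + d)
    assert abs(e) < k_b               # output constraint never violated
```

Because the barrier term diverges as the error approaches the bound, any control law that keeps the Lyapunov function bounded automatically keeps the output within the predefined region, which is the mechanism the abstract refers to.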