This paper considers no-regret learning for repeated continuous-kernel games with lossy bandit feedback. Since it is difficult to give an explicit model of the utility functions in dynamic environments, the players can learn their utilities only through bandit feedback. Moreover, due to unreliable communication channels or privacy protection, the bandit feedback may be lost or dropped at random. We therefore study asynchronous online learning strategies by which the players adaptively adjust their next actions to minimize the long-term regret. The paper provides a novel no-regret learning algorithm, called Online Gradient Descent with lossy bandits (OGD-lb). We first give a regret analysis for concave games with differentiable and Lipschitz utilities. We then show that the action profile converges to a Nash equilibrium with probability 1 when the game is also strictly monotone. We further provide a mean-squared convergence rate of O(N p_i^{-2} k^{-1/3}) when the game is β-strongly monotone, where N denotes the number of players, p_i is the update probability, and k is the iteration index. In addition, we extend the algorithm to the case where the loss probability of the bandit feedback is unknown, and prove its almost sure convergence to a Nash equilibrium for strictly monotone games. Finally, we take resource management in fog computing as an application example, and carry out numerical experiments to empirically demonstrate the algorithm's performance.
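To make the setting concrete, the following is a minimal sketch of an OGD-lb-style update loop, not the paper's exact algorithm: each player plays a randomly perturbed action, forms a one-point gradient estimate from its bandit (utility-value) feedback, and performs a projected gradient step only when that feedback arrives, which happens with probability p_i. The two-player quadratic game, the step-size and perturbation schedules, and the probabilities p are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-player strongly monotone quadratic game on [0, 1]:
# u_i(x) = -(x_i - a_i)^2 - c * x_i * x_{-i}; a and c are illustrative choices.
a = np.array([0.7, 0.3])
c = 0.2

def utility(i, x):
    return -(x[i] - a[i]) ** 2 - c * x[i] * x[1 - i]

def project(x):
    # Euclidean projection onto the action set [0, 1]
    return np.clip(x, 0.0, 1.0)

N, T = 2, 50_000
p = np.array([0.8, 0.5])   # per-player feedback arrival probabilities (assumed)
x = np.full(N, 0.5)        # current action profile

for k in range(1, T + 1):
    eta = k ** -0.75       # step size (assumed schedule)
    delta = k ** -0.25     # perturbation radius (assumed schedule)
    v = rng.choice([-1.0, 1.0], size=N)   # random perturbation directions
    x_play = project(x + delta * v)       # perturbed actions actually played
    for i in range(N):
        if rng.random() < p[i]:
            # Bandit feedback received: one-point gradient estimate of u_i
            g = (1.0 / delta) * utility(i, x_play) * v[i]
            x[i] = project(x[i] + eta * g)  # projected gradient ascent on utility
        # Otherwise the feedback is lost and player i keeps its current
        # action, so the players update asynchronously.

# Closed-form Nash equilibrium of this quadratic game, for comparison
x_star = np.linalg.solve(np.array([[2, c], [c, 2]]), 2 * a)
print("final actions:", x, " Nash equilibrium:", x_star)
```

With these (assumed) schedules the iterates drift toward the game's unique Nash equilibrium; lowering a player's p_i slows its progress without breaking the other players' updates, which is the asynchrony the abstract refers to.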