This paper presents a distributed event-triggered unadjusted Langevin algorithm (DETULA) for the Bayesian learning problem. We consider a set of networked learning agents, each with access to its own independently distributed data set. The objective of each agent is to reconstruct the global posterior of the unknown model parameters through local learning combined with interaction with neighboring agents. We propose an event-triggered communication mechanism for a distributed Langevin algorithm that limits inter-agent interactions and thus reduces the communication overhead. We provide conditions on the algorithm step sizes and the triggering threshold that ensure mean-square consensus of the agents’ parameter estimates and convergence of the estimates to the global posterior, as if the data sets were aggregated at a central location. A major improvement of our result over previous studies is that this consensus is established without imposing any boundedness restriction on the gradient of the objective function. Additionally, we establish probabilistic guarantees that prevent consecutive triggering by any agent while maintaining the same rate of convergence as in the case without event-triggering. We demonstrate DETULA on distributed supervised-learning problems. Our results indicate that the agents successfully recover the global posterior by periodically sharing their samples with neighbors.
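
The flavor of the algorithm described above can be illustrated with a minimal sketch: each agent takes unadjusted Langevin steps on its local negative log-likelihood, mixes in a consensus term built from neighbors' last *broadcast* states, and broadcasts its own state only when it has drifted past a threshold. This is not the paper's exact method; the network topology, step size `alpha`, consensus weight `beta`, gradient scaling, and triggering threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (assumed for illustration): n agents, each holding local
# scalar Gaussian data; the global posterior over the mean is Gaussian.
n_agents, n_local = 4, 25
true_mean = 2.0
data = [true_mean + rng.normal(size=n_local) for _ in range(n_agents)]

# Assumed ring topology: each agent communicates with its two neighbors.
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents]
             for i in range(n_agents)}

def local_grad(i, theta):
    # Gradient of the local negative log-likelihood (unit-variance
    # Gaussian) plus the agent's share of a standard normal prior.
    return (n_local * theta - data[i].sum()) + theta / n_agents

alpha, beta = 1e-3, 0.2   # step size and consensus weight (assumed values)
threshold = 0.05          # event-triggering threshold (assumed value)

theta = rng.normal(size=n_agents)   # each agent's current sample
broadcast = theta.copy()            # last state each agent shared
n_events = 0

for t in range(5000):
    new_theta = theta.copy()
    for i in range(n_agents):
        # Consensus uses neighbors' last broadcast values, not their
        # true current states -- the point of event-triggering.
        consensus = sum(broadcast[j] - broadcast[i] for j in neighbors[i])
        noise = np.sqrt(2.0 * alpha) * rng.normal()
        new_theta[i] = (theta[i] + beta * consensus
                        - alpha * n_agents * local_grad(i, theta[i])
                        + noise)
    theta = new_theta
    for i in range(n_agents):
        # Broadcast only when the local state has drifted far enough
        # from the value the neighbors last received.
        if abs(theta[i] - broadcast[i]) > threshold:
            broadcast[i] = theta[i]
            n_events += 1
```

In this sketch `n_events` stays well below the `n_agents * 5000` messages that time-triggered communication would require, while the agents' samples still concentrate around the centralized posterior mean.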