Theoretical analysis of divide-and-conquer based distributed learning with the least-square loss in reproducing kernel Hilbert spaces (RKHS) has recently been explored within the framework of learning theory. However, studies on the learning theory of distributed empirical risk minimization (ERM) for general loss functions and hypothesis spaces remain limited. To better understand the properties and behaviors of distributed ERM, we introduce multi-view learning and examine it at a deeper level. Specifically, we adopt an innovative multi-view perspective on distributed ERM and study its risk performance for general loss functions and hypothesis spaces. Our main theoretical results are two-fold. First, we derive two tight risk bounds under basic assumptions on the hypothesis space, together with smoothness, Lipschitz continuity, and strong convexity of the loss function. Second, we develop two more general risk bounds for distributed ERM that do not require strong convexity. This work not only fills a gap in the learning theory of distributed ERM for general loss functions and hypothesis spaces from a multi-view perspective, but also shows that, under certain conditions, the number of views can guarantee the performance.
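To make the divide-and-conquer scheme mentioned above concrete, the following is a minimal sketch of distributed ERM with the least-square loss: the sample is partitioned across local machines, each solves a regularized ERM problem on its block, and the local minimizers are averaged. The synthetic data, the ridge penalty `lam`, and the number of machines `m` are hypothetical choices for illustration, not taken from the paper.

```python
# Illustrative divide-and-conquer distributed ERM with least-square loss.
# All problem sizes and parameters below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear regression data: y = <w_star, x> + noise.
n, d = 600, 5
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star + 0.1 * rng.normal(size=n)

def local_erm(Xb, yb, lam=1e-2):
    """Ridge-regularized least-square ERM on one local data block."""
    db = Xb.shape[1]
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(db), Xb.T @ yb)

# Divide: split the sample across m machines.
# Conquer: average the local empirical risk minimizers.
m = 6
w_hat = np.mean(
    [local_erm(Xb, yb) for Xb, yb in zip(np.array_split(X, m),
                                         np.array_split(y, m))],
    axis=0,
)

err = np.linalg.norm(w_hat - w_star) / np.linalg.norm(w_star)
print(f"relative error of averaged estimator: {err:.3f}")
```

The averaging step is what the risk bounds in this work analyze in far greater generality, for loss functions and hypothesis spaces beyond the linear least-square setting sketched here.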