This study applies Distributed Learning Machines (DLMs), a class of Physics-Informed Neural Networks (PINNs), to benchmark problems in two-phase flows. Two DLM variants are examined: Distributed Physics-Informed Neural Networks (DPINN) and Transfer Physics-Informed Neural Networks (TPINN). The DLM architecture partitions the global domain into distinct non-overlapping sub-domains and couples the sub-domain solutions through interface conditions embedded in the loss function. Forward and inverse benchmark problems in two-phase flows are explored: (a) a bubble in a reversing vortex and (b) a bubble rising under buoyancy. The Volume of Fluid (VOF) method handles interface transport in both scenarios, and the inverse problem incorporates interface-position data during training. The forward problem demonstrates the effectiveness of DPINN in capturing the interface with a simple transport equation. The distinctive contribution of this work lies in its treatment of the inverse problem, offering insights into the scalability of distributed architectures for a system of governing equations. Following validation of an initial PINN model against Computational Fluid Dynamics (CFD) data, the study extends to DPINN and TPINN. A parametric study optimizes network hyperparameters, with emphasis on the regularization of terms within the DPINN loss function. A self-adaptive weighting strategy based on a Gaussian probabilistic model dynamically adjusts the loss weights during training, avoiding the difficulties of manual tuning. Evaluation of accuracy against CFD data and published results underscores the efficacy of DLMs for two-phase flow problems. Additionally, the computational efficiency of the distributed networks is compared with that of traditional PINNs.
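To make the two core mechanisms concrete, the sketch below illustrates one plausible form of a DPINN-style loss: per-sub-domain PDE residual terms coupled by an interface-continuity penalty, with self-adaptive weights derived from a Gaussian (negative log-likelihood) model of each term's error. This is a minimal illustration under stated assumptions, not the paper's actual implementation; all names (`SubdomainNet`, `pde_residual`, the two-sub-domain split, the placeholder residual) are hypothetical, and the specific weighting form shown is a common Gaussian-likelihood formulation that may differ in detail from the one used in the study.

```python
# Minimal sketch (PyTorch) of Gaussian-likelihood-based self-adaptive loss
# weighting for a two-sub-domain DPINN. All names are illustrative; the
# paper's actual networks, residuals, and interface terms will differ.
import torch
import torch.nn as nn

class SubdomainNet(nn.Module):
    """Small fully connected network for one non-overlapping sub-domain."""
    def __init__(self, in_dim=3, out_dim=1, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, out_dim),
        )

    def forward(self, x):
        return self.net(x)

nets = nn.ModuleList([SubdomainNet(), SubdomainNet()])

# One learnable log-variance per loss term (PDE residual per sub-domain,
# plus interface continuity). Under a Gaussian model of each term's error,
# the weighted loss is sum_i [exp(-s_i) * L_i + s_i], so the effective
# weights exp(-s_i) adapt during training instead of being tuned by hand.
log_vars = nn.Parameter(torch.zeros(3))

params = list(nets.parameters()) + [log_vars]
opt = torch.optim.Adam(params, lr=1e-3)

def pde_residual(net, x):
    # Placeholder residual; a real DPINN would differentiate net(x) with
    # torch.autograd.grad and assemble the transport-equation residual.
    return net(x).pow(2).mean()

x1 = torch.rand(128, 3)   # collocation points, sub-domain 1 (x, y, t)
x2 = torch.rand(128, 3)   # collocation points, sub-domain 2
xi = torch.rand(32, 3)    # points on the shared interface

for step in range(1000):
    opt.zero_grad()
    terms = torch.stack([
        pde_residual(nets[0], x1),
        pde_residual(nets[1], x2),
        (nets[0](xi) - nets[1](xi)).pow(2).mean(),  # interface continuity
    ])
    # Gaussian negative log-likelihood form: precision-weighted terms plus
    # a log-variance regularizer that keeps weights from collapsing to zero.
    loss = (torch.exp(-log_vars) * terms + log_vars).sum()
    loss.backward()
    opt.step()
```

The design choice to learn log-variances rather than raw weights keeps the effective weights positive by construction, and the additive log-variance term penalizes the trivial solution of driving all weights to zero, which is what replaces manual per-term tuning in this formulation.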