A parallel boundary integral algorithm for solving boundary value problems on distributed memory computer systems is presented in this paper. The paper focuses on parallelizing the two main parts of boundary integral formulations: generation of the influence coefficient matrix and solution of the resulting linear system. The algorithm presented here generates a portion of the influence coefficient matrix on each computation node of a multicomputer platform and stores that portion in local memory. The distributed influence coefficient matrix is then used in its partitioned form by the parallel linear system solver to obtain the solution. The matrix of coefficients is large, dense, and nonsymmetric. Three parallel linear system solvers are presented: Algorithm-1 and Algorithm-2 are based on the conjugate-gradient-squared (CGS) method, while Algorithm-3 is based on a direct method. Distributed memory parallel algorithms are found to be strongly machine dependent. The performance of a parallel linear system solver depends not only on the method of solution but also on the multicomputer network topology and on the ratio of node computation speed to network data transfer bandwidth. By selecting the algorithm that fits the characteristics of the distributed computer system hardware, one can achieve the best relative performance. The performance characteristics of distributed linear system solvers are studied in order to select an optimum method for the available hardware at minimum development cost. The performance of sequential linear system solvers, which has been extensively documented in the literature, is not directly applicable to distributed memory systems. This paper presents results and discussion on the selection of simple, robust algorithms that are easily implemented on specific hardware topologies, and provides information for extending the selected methods to different parallel machines.
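For reference, the CGS iteration underlying Algorithm-1 and Algorithm-2 can be sketched as follows. This is a minimal single-process illustration of the standard CGS method (not the paper's distributed implementation); in the distributed setting described above, the matrix `A` would be stored in row blocks across nodes, with each of the two matrix-vector products per iteration computed locally and completed by a gather over the network. All names below are illustrative.

```python
import numpy as np

def cgs(A, b, tol=1e-10, max_iter=500):
    """Conjugate-gradient-squared iteration for a dense nonsymmetric system Ax = b.

    Single-process sketch: in a distributed solver, the two products A @ p
    and A @ w per iteration would operate on a local row block of A,
    followed by a gather of the partial results across nodes.
    """
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x
    r_tilde = r.copy()            # fixed "shadow" residual
    p = np.zeros(n)
    q = np.zeros(n)
    rho_prev = 1.0
    for _ in range(max_iter):
        rho = r_tilde @ r
        if rho == 0.0:
            break                 # breakdown; a production solver would restart
        beta = rho / rho_prev
        u = r + beta * q
        p = u + beta * (q + beta * p)
        v = A @ p                 # first matrix-vector product
        alpha = rho / (r_tilde @ v)
        q = u - alpha * v
        w = u + q
        x += alpha * w
        r -= alpha * (A @ w)      # second matrix-vector product
        rho_prev = rho
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
    return x
```

Because CGS needs only matrix-vector products with `A` (never with its transpose), a row-block partition of the dense influence coefficient matrix is sufficient to parallelize each iteration, which is what makes the method attractive for the distributed storage scheme described above.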
The results can also serve as a reference for parallel computer benchmarking.