Abstract. Electrical resistivity tomography (ERT) is a broadly accepted geophysical method for subsurface investigations. Interpretation of field ERT data usually requires the application of computationally intensive forward modeling and inversion algorithms. For large-scale ERT data, the efficiency of these algorithms depends on their robustness, accuracy, and scalability on high-performance computing resources. To this end, we present a robust and highly scalable implementation of forward modeling and inversion algorithms for ERT data. The implementation is publicly available and developed within the framework of PFLOTRAN, an open-source, state-of-the-art massively parallel subsurface flow and transport simulation code. The forward modeling is based on a finite-volume discretization of the governing differential equations, and the inversion uses a Gauss–Newton optimization scheme. To evaluate the accuracy of the forward modeling, we first present two examples that consider layered (1D) and 3D earth conductivity models. The computed numerical results agree well with the analytical solutions for the layered earth model and with results from a well-established code for the 3D model. We then invert ERT data simulated for a 3D model to demonstrate the inversion capability by recovering the conductivity structure of the model. To demonstrate the parallel performance of PFLOTRAN's ERT process model and inversion capabilities, we perform large-scale scalability tests using up to 131 072 processes on a leadership-class supercomputer. These tests target the two most computationally intensive steps of ERT inversion: forward modeling and Jacobian computation. For the forward modeling, we consider models with up to 122 × 10⁶ degrees of freedom (DOFs) in the resulting system of linear equations and demonstrate that the code exhibits almost linear scalability for up to 10 000 DOFs per process. In contrast, the code shows superlinear scalability for the Jacobian computation, mainly because the computations are distributed almost evenly across the processes and require no parallel communication.
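For orientation, the following is a minimal sketch of a regularized Gauss–Newton model update of the kind commonly used in ERT inversion; the notation (Jacobian J, data-weighting matrix W_d, regularization matrix W_m, damping parameter β, and step length α) is generic and introduced here purely for illustration, not taken from the implementation described above.

% Generic regularized Gauss--Newton update (illustrative notation only):
%   m_k      : model parameters (e.g., log conductivities) at iteration k
%   J_k      : Jacobian (sensitivity) matrix evaluated at m_k
%   W_d, W_m : data-weighting and model-regularization matrices
%   beta     : regularization (damping) parameter, alpha : step length
\begin{equation}
  \left( J_k^{\top} W_d^{\top} W_d\, J_k + \beta\, W_m^{\top} W_m \right) \Delta m
  = J_k^{\top} W_d^{\top} W_d \left( d^{\mathrm{obs}} - d(m_k) \right)
  - \beta\, W_m^{\top} W_m \left( m_k - m_{\mathrm{ref}} \right),
  \qquad
  m_{k+1} = m_k + \alpha\, \Delta m .
\end{equation}

In this generic form, the two dominant costs per iteration are solving the forward problem to obtain the predicted data d(m_k) and assembling the sensitivity matrix J_k, which correspond to the two steps targeted by the scalability tests described in the abstract.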