Big data is a highly valued commodity across the globe. It is not merely data: in the hands of experts, intelligence can be derived from it. Because of its characteristics, namely Variety, Value, Volume, and Velocity, and the growing need to handle it, organizations face difficulties in ensuring optimal and affordable processing and storage of large datasets. One existing model for rapid processing and storage of big data is Hadoop MapReduce. MapReduce performs large-scale data processing in a parallel and distributed computing environment, while Hadoop runs applications and stores data on clusters of commodity hardware. Furthermore, the Hadoop MapReduce framework exposes more than 190 configuration parameters, which are mostly tuned manually. Because of the complex interactions among parameters and the large parameter search space, manual tuning is not effective. Worse still, these parameters must be tuned every time a Hadoop MapReduce application is run. The main goal of this research is to create an algorithm that improves efficiency by automatically optimizing parameter settings for MapReduce jobs. The algorithm employs the Multi-Objective Particle Swarm Optimization (MOPSO) technique, which uses two objective functions to search for a Pareto-optimal solution while optimizing the parameters. Experimental results show that the algorithm remarkably improves MapReduce job performance compared to the default settings.
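To make the MOPSO approach concrete, the following is a minimal sketch of multi-objective particle swarm optimization applied to two Hadoop configuration parameters. The parameter choice (mapreduce.task.io.sort.mb and mapreduce.job.reduces), their bounds, and the two synthetic objective functions are illustrative assumptions, not the objectives or parameter set used in this research; in practice each objective would be measured by profiling or running a MapReduce job with the candidate configuration.

```python
# Minimal MOPSO sketch for tuning two hypothetical Hadoop MapReduce parameters.
# The bounds and objective models below are illustrative placeholders only.
import random

# Hypothetical search space: (mapreduce.task.io.sort.mb, mapreduce.job.reduces)
BOUNDS = [(50, 500), (1, 64)]

def objectives(x):
    """Placeholder objectives standing in for measured job metrics.
    In practice each would come from running a MapReduce job with configuration x."""
    sort_mb, reduces = x
    exec_time = 1000.0 / sort_mb + 100.0 / reduces   # pretend runtime model
    resource_cost = 0.5 * sort_mb + 10.0 * reduces   # pretend resource-usage model
    return (exec_time, resource_cost)

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def clamp(x):
    return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, BOUNDS)]

def mopso(n_particles=20, iters=50, w=0.4, c1=1.5, c2=1.5):
    # Initialise particle positions randomly inside the bounds.
    pos = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(n_particles)]
    vel = [[0.0] * len(BOUNDS) for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objectives(p) for p in pos]
    archive = []  # external archive holding the non-dominated (Pareto) solutions

    def update_archive(p, f):
        nonlocal archive
        if any(dominates(af, f) for _, af in archive):
            return
        archive = [(ap, af) for ap, af in archive if not dominates(f, af)]
        archive.append((p[:], f))

    for p, f in zip(pos, pbest_f):
        update_archive(p, f)

    for _ in range(iters):
        for i in range(n_particles):
            leader = random.choice(archive)[0]  # pick a leader from the Pareto archive
            for d in range(len(BOUNDS)):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (leader[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            pos[i] = clamp(pos[i])
            f = objectives(pos[i])
            if dominates(f, pbest_f[i]):
                pbest[i], pbest_f[i] = pos[i][:], f
            update_archive(pos[i], f)
    return archive

if __name__ == "__main__":
    # Print the approximated Pareto front of parameter settings and objective values.
    for p, f in mopso():
        print([round(v, 1) for v in p], [round(v, 2) for v in f])
```

The external archive plays the role of the Pareto-optimal solution set the abstract refers to: instead of a single global best, particles are guided by non-dominated leaders, so the search returns a set of trade-offs between the two objectives rather than one configuration.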