Particle Swarm Optimization (PSO), one of the most versatile nature-inspired optimization algorithms, continues to suffer from premature convergence despite the considerable research devoted to improving it. Many studies have attempted to address this issue, but they often rely on complex algorithms that increase computational time and complexity. This research introduces a novel perturbation method that mitigates premature convergence and increases exploration while keeping the computational cost to a minimum. The particles’ memories (i.e., the personal-best and global-best positions) are modified by a random multiplier, which in turn ‘perturbs’ the particles’ velocities. Applying this perturbation method in the early iterations achieved a 100% success rate in finding the global optimum on multimodal benchmark tests, including the Rastrigin problem, whereas the original PSO failed in all benchmark tests, without adding a significant amount of computational complexity or time.
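The idea described above can be sketched in code. This is a minimal illustration, not the paper's exact method: it assumes the perturbation takes the form of a uniform random multiplier applied to the stored personal-best and global-best positions during the first `perturb_iters` iterations, and all parameter values (multiplier range, inertia, acceleration coefficients) are illustrative assumptions.

```python
import numpy as np

def rastrigin(x):
    # Rastrigin benchmark function: global minimum of 0 at the origin.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def perturbed_pso(dim=2, n_particles=30, iters=200, perturb_iters=50,
                  w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.12, 5.12, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([rastrigin(p) for p in pos])
    g = pbest_val.argmin()
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]

    for t in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Perturbation step (assumed form): in early iterations, scale the
        # memorized best positions by a random multiplier so the velocity
        # update is 'perturbed', pushing particles to explore more widely.
        if t < perturb_iters:
            m1 = rng.uniform(0.5, 1.5, (n_particles, dim))
            m2 = rng.uniform(0.5, 1.5, dim)
            p_mem, g_mem = pbest * m1, gbest * m2
        else:
            # Later iterations revert to the standard PSO velocity update.
            p_mem, g_mem = pbest, gbest
        vel = w * vel + c1 * r1 * (p_mem - pos) + c2 * r2 * (g_mem - pos)
        pos = pos + vel
        vals = np.array([rastrigin(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        g = pbest_val.argmin()
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    return gbest, gbest_val

best_pos, best_val = perturbed_pso()
print(best_pos, best_val)
```

Because the perturbation only touches the copies of the best positions used inside the velocity update (not the stored memories themselves), the extra cost per iteration is a single elementwise multiply, consistent with the paper's claim of negligible computational overhead.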