Restricted Boltzmann Machines are energy-based models capable of learning probability distributions. In practice, however, they are seriously limited by the fact that exactly evaluating the gradients required during learning is computationally prohibitive. The standard way to mitigate this problem is the Contrastive Divergence algorithm, but it yields a rough approximation with issues of its own. As a completely different alternative, a model called RAPID (Pozas-Kerstjen et al., 2021) was recently proposed, in which the unit weights are constructed from high-probability patterns that allow the update rules to be evaluated efficiently during learning. In this work we analyze RAPID and find that it, too, presents drawbacks that constrain its performance. We identify the sources of these problems in RAPID and modify them accordingly to build a similar but more flexible alternative, called PIW (Pattern-Induced Weights). Experiments show that PIW outperforms the original RAPID implementation, bringing it to a competitive level with a standard RBM trained with CD-k, while substantially reducing the number of trainable parameters.
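
For reference, Contrastive Divergence approximates the exact log-likelihood gradient by replacing the intractable model expectation with statistics gathered after k Gibbs sampling steps started from the data. The following is a minimal illustrative sketch of one CD-1 update for a Bernoulli-Bernoulli RBM; the function and variable names, and the plain-NumPy implementation, are assumptions for exposition and not the code used in the paper:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(W, b, c, v0, lr=0.01, rng=np.random.default_rng(0)):
        """One illustrative CD-1 step for a Bernoulli-Bernoulli RBM.

        W: (n_visible, n_hidden) weights; b, c: visible/hidden biases;
        v0: (batch, n_visible) binary data batch.
        """
        # Positive phase: hidden probabilities and samples given the data.
        ph0 = sigmoid(v0 @ W + c)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step back to visible units and hidden probabilities.
        pv1 = sigmoid(h0 @ W.T + b)
        v1 = (rng.random(pv1.shape) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + c)
        # CD-1 gradient estimate: data statistics minus one-step reconstruction statistics.
        n = v0.shape[0]
        W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
        b += lr * (v0 - v1).mean(axis=0)
        c += lr * (ph0 - ph1).mean(axis=0)
        return W, b, c

As k grows the Gibbs chain approaches a sample from the model distribution, but in practice small k is used for efficiency, which biases the gradient estimate; this is the rough approximation referred to above.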