Effective learning and retrieval phases for satisfiability logic in Discrete Hopfield Neural Network (DHNN) models ensure optimal synaptic weight management and, consequently, the production of optimal final neuron states. However, different initial states can bias the retrieval phase of this model, because the model memorizes stored final states rather than generating new ones, which yields suboptimal final neuron states. To date, no recent research has addressed this issue by improving both phases of a Discrete Hopfield Neural Network that involves first-order satisfiability logic. Therefore, this research improves the learning and retrieval phases by integrating the Hybrid Differential Evolution Algorithm and Swarm Mutation, respectively. This research utilizes Y-Type Random 2 Satisfiability, which combines first- and second-order clauses to expand the storage capacity of DHNN models and thereby facilitates the retrieval of optimal final neuron states. To evaluate the effectiveness of the Hybrid Differential Evolution Algorithm and Swarm Mutation in the learning and retrieval phases, several performance metrics are employed, covering synaptic weight management, learning error, testing error, energy profile, solution variation, and similarity, across 10 different cases. Quantitative evaluations show that the proposed model successfully enhances the optimization of both phases, ranking first against 10 recent algorithms on all metrics. In terms of convergence analysis, the proposed model progressed rapidly towards the optimal solution, requiring only one iteration in all cases. Additionally, the proposed model achieves a 100% global minima ratio when dealing with a high number of neurons in Case 5.
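As orientation for readers unfamiliar with this framework, the sketch below gives a generic mixed-order propositional formula (conjoining two-literal and one-literal clauses) together with the standard DHNN Lyapunov energy function used to judge whether a retrieved state reaches a global minimum. It is an illustrative sketch only, not the paper's exact Y-Type Random 2 Satisfiability formulation; the symbols m, n, N, the literals A_i, B_i, C_j, and the tolerance "tol" are assumptions introduced here for exposition.

% Illustrative mixed-order satisfiability formula:
% second-order (two-literal) clauses conjoined with first-order (single-literal) clauses.
\begin{equation}
  P = \bigwedge_{i=1}^{m} \left( A_i \vee B_i \right) \wedge \bigwedge_{j=1}^{n} C_j
\end{equation}

% Standard DHNN Lyapunov energy over bipolar neuron states S_i \in \{-1, 1\},
% with second-order synaptic weights W_{ij} and first-order weights W_i.
\begin{equation}
  H_{P} = -\frac{1}{2} \sum_{i=1}^{N} \sum_{\substack{j=1 \\ j \neq i}}^{N} W_{ij} S_i S_j - \sum_{i=1}^{N} W_i S_i
\end{equation}

% A retrieved state is counted toward the global minima ratio when its energy
% lies within a small tolerance of the known minimum energy.
\begin{equation}
  \left| H_{P} - H_{P}^{\min} \right| \leq \mathrm{tol}
\end{equation}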