Automatic Program Repair (APR) techniques have shown the potential to reduce debugging costs and improve software quality by automatically generating patches that fix bugs. However, they often produce overfitting patches that pass a given test suite yet do not correctly fix the underlying bug. This paper proposes MIPI, a novel approach to reducing the number of overfitting patches generated by APR. We leverage recent advances in deep learning to exploit the similarity between the patched method’s name (which often encodes the developer’s intent for the code) and the semantic meaning of the method’s body (which represents the actually implemented behavior), and use this similarity to identify and remove overfitting patches generated by APR tools. Experiments on a large dataset of patches for QuixBugs and Defects4J programs show the promise of our approach. Specifically, out of 1,191 patches generated by 23 existing APR tools, MIPI successfully filters out 254 (32%) of the 797 overfitting patches with a precision of 90%, while preserving 93% of the correct patches. MIPI is more precise and less damaging to APR effectiveness than existing heuristic patch assessment techniques, and it achieves higher recall than automated testing-based techniques that do not have access to the test oracle. In addition, MIPI is highly complementary to existing automated patch assessment techniques.
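For intuition only, the sketch below illustrates the core name–body consistency idea described above: embed the patched method’s name and its body in a shared vector space and flag a patch whose body no longer matches the intent encoded in the name. It is a minimal sketch, assuming an off-the-shelf sentence-embedding model as a stand-in for the paper’s actual deep learning model; the helper names and the threshold value are hypothetical and not taken from MIPI.

```python
import re
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed stand-in encoder

# Hypothetical stand-in for MIPI's code-semantics model: any encoder that maps
# a method name and a method body into the same vector space would fit here.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def split_camel_case(name: str) -> str:
    """Split a Java method name like 'removeDuplicates' into 'remove duplicates'."""
    return " ".join(re.findall(r"[A-Za-z][a-z]*|\d+", name)).lower()

def name_body_similarity(method_name: str, method_body: str) -> float:
    """Cosine similarity between embeddings of the method name and its body."""
    name_vec, body_vec = encoder.encode([split_camel_case(method_name), method_body])
    return float(np.dot(name_vec, body_vec) /
                 (np.linalg.norm(name_vec) * np.linalg.norm(body_vec)))

def is_likely_overfitting(method_name: str, patched_body: str,
                          threshold: float = 0.35) -> bool:
    """Flag a patch whose body drifts from the intent the name expresses.
    The threshold here is illustrative, not a value reported in the paper."""
    return name_body_similarity(method_name, patched_body) < threshold
```

In this sketch, a patch assessment pipeline would call `is_likely_overfitting` on each APR-generated patch and discard those flagged as inconsistent, which mirrors the filtering role MIPI plays in the evaluation summarized above.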