There is an ever-increasing need for accurate and efficient methods to identify protein homologs. Traditionally, sequence similarity-based methods have dominated homolog identification for function annotation, but these methods struggle when the sequence identity between a pair of proteins is low. Recently, deep learning methods built on the transformer architecture have achieved breakthrough performance in many fields. One type of model that uses the transformer architecture is the protein language model (pLM). Here, we describe methods that use pLMs to identify protein homologs for function annotation and discuss their strengths and weaknesses. Several important ideas emerge that considerably improve remote homology detection accuracy: filtering the substitution matrix generated from embeddings, selecting specific pLM layers for specific purposes, compressing the embeddings, and dividing proteins into domains before searching for homologs. All of these approaches produce large numbers of new homologs that can reliably extend the reach of protein relationships, deepening our understanding of evolution and many other problems.
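As a minimal sketch of the first idea, the code below derives a residue-level substitution matrix from per-residue pLM embeddings via cosine similarity and zeroes out low-similarity cells before any alignment or search step. This is an illustrative assumption rather than the method of any specific tool discussed here: the embedding arrays are random stand-ins for real pLM output, and `embedding_substitution_matrix`, `filter_matrix`, and the `threshold` value are hypothetical names and parameters.

```python
import numpy as np

def embedding_substitution_matrix(emb_a: np.ndarray, emb_b: np.ndarray) -> np.ndarray:
    """Cosine-similarity matrix between per-residue embeddings.

    emb_a: (len_a, d) embeddings for protein A (e.g., from one pLM layer).
    emb_b: (len_b, d) embeddings for protein B.
    Returns a (len_a, len_b) matrix usable like a substitution matrix.
    """
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    return a @ b.T

def filter_matrix(sub: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Suppress cells below `threshold` (a hypothetical filtering rule)
    to reduce spurious residue-pair similarities before alignment."""
    return np.where(sub >= threshold, sub, 0.0)

# Toy usage: random arrays stand in for real per-residue pLM embeddings.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(120, 1024))  # protein A: 120 residues, 1024-dim
emb_b = rng.normal(size=(95, 1024))   # protein B: 95 residues, 1024-dim
sub = filter_matrix(embedding_substitution_matrix(emb_a, emb_b))
print(sub.shape)  # (120, 95)
```

In practice the thresholded matrix would feed into a standard local-alignment or search routine; the filtering step simply keeps weak, likely-noise similarities from accumulating into spurious hits.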