Matrix multiplication is a fundamental operation in many computational fields, and optimizing it is essential for handling ever-growing data sizes efficiently. This paper reviews the application of deep learning to matrix multiplication optimization, a topic of increasing importance given the demands of gaming and other computationally intensive programs. The standard (naive) matrix multiplication algorithm and its running time on different matrix sizes are described, along with tiled matrix multiplication, which partitions the matrices into blocks, computes the product of each block, and then combines the results. The running times of both methods were compared across a range of matrix sizes. The main idea is to use deep neural networks (DNNs) to compare candidate code variants and predict their relative performance, with a tournament-based ranking system assigning ranks to the code versions. The effectiveness of these techniques was evaluated on matrix multiplication operations commonly found in deep learning workloads. This approach achieves up to an 8.844x speedup over the naive implementation for a matrix size of 1024. The results demonstrate the effectiveness of combining compiler optimization techniques with deep learning models to optimize matrix multiplication.
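The two baseline algorithms the abstract contrasts can be sketched as follows. This is an illustrative sketch only, not the paper's implementation; the function names and the default tile size of 32 are assumptions made for the example.

```python
def naive_matmul(A, B):
    # Standard triple-loop multiplication: O(n^3) with poor cache locality,
    # since B is traversed column-wise on every iteration of the inner loop.
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def tiled_matmul(A, B, tile=32):
    # Tiled (blocked) multiplication: the matrices are cut into tile x tile
    # pieces, each block product is computed, and the partial results are
    # accumulated into C - the per-piece scheme the abstract describes.
    # tile=32 is an arbitrary illustrative choice.
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for jj in range(0, n, tile):
            for kk in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a = A[i][k]  # hoisted: reused across the j loop
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a * B[k][j]
    return C
```

Both functions compute the same product; the tiled version differs only in iteration order, which improves cache reuse on large matrices and is the kind of variant a learned ranking model would be asked to compare against the naive loop nest.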