“Black box” models produced by modern machine learning techniques are typically hard to interpret. Hence the need for explainable artificial intelligence (XAI) has grown, with the aim of understanding the rationale behind such models and converting them into white boxes. Random Forest is a black-box model that has become essential in various domains owing to its flexibility, ease of use, and remarkable predictive performance. One way to explain a Random Forest is to transform it into a self-explainable Decision Tree using the Forest-Based Tree (FBT) algorithm, which consists of three main phases: pruning, conjunction set generation, and Decision Tree construction. In this paper, we examine six state-of-the-art pruning approaches and analyze their effect on FBT performance through a pruned FBT (PFBT), with the goal of minimizing its computational complexity and thereby making it suitable for forests and datasets of any size. The approaches are assessed on 30 datasets, and the results show that, in terms of pruning time and predictive performance, the UMEP and Hybrid pruning methods can be used effectively in the pruning stage of the PFBT algorithm. The AUC-Greedy method, however, achieves good performance on small datasets.