Electrical distribution system (EDS) reliability has long been a critical subject, since distribution systems account for a considerable share of power interruptions to end-users. Meeting regulatory service quality standards requires distribution companies to achieve defined reliability levels and demonstrate performance improvement. Conventionally, specialists set target reliability based on experience, empirical knowledge, and rules of thumb. However, these methods are error-prone and may result in suboptimal investments, economic inefficiency, and inadequate resource allocation in EDS operation and planning. Accordingly, this paper presents a machine learning-based method to determine the target level for an EDS's reliability indices based on its historical performance. The approach begins with a comprehensive analysis of historical EDS reliability data, followed by a data envelopment analysis (DEA) that assesses performance across subdivisions such as substations or feeders and identifies the most efficient units. Subsequently, three machine learning models are trained on the efficient subdivisions identified in the DEA stage: Random Forest (RF), Support Vector Machine (SVM), and a Radial Basis Function (RBF) network. These models accurately estimate target reliability for all subdivisions, contributing to the determination of the overall EDS target reliability level. A three-scenario case study validates the proposed method and suggests its applicability as a tool for regulatory agencies in setting EDS reliability targets. While acknowledging potential challenges, notably the method's reliance on the quality and completeness of the historical data collected by distribution companies, the paper emphasizes the importance of collaborative efforts to ensure data accuracy and sufficiency. Compared with a business-as-usual scenario, the proposed method yields a fairer mechanism for determining the target level of a real EDS's reliability indices. Moreover, it provides a reliability performance ranking of the EDS's subdivisions.
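To make the two-stage pipeline concrete, the sketch below illustrates one plausible realization: an input-oriented CCR DEA model solved as a linear program, followed by a Random Forest regressor trained only on the efficient subdivisions. This is a minimal illustration, not the paper's actual implementation; the synthetic subdivision data, the feature choices (e.g., cost and feeder length as DEA inputs, inverse SAIDI as the output), and the efficiency threshold are all hypothetical placeholders.

```python
# Minimal sketch of the DEA + ML target-setting pipeline (illustrative only).
import numpy as np
from scipy.optimize import linprog
from sklearn.ensemble import RandomForestRegressor

def dea_efficiency(inputs, outputs):
    """Input-oriented CCR efficiency score for each DMU (subdivision).

    inputs  : (n_dmus, n_inputs)  e.g. O&M cost, feeder length (hypothetical)
    outputs : (n_dmus, n_outputs) e.g. 1/SAIDI (hypothetical)
    """
    n, m = inputs.shape
    s = outputs.shape[1]
    scores = np.full(n, np.nan)
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
        c = np.r_[1.0, np.zeros(n)]
        # Input constraints: sum_j lambda_j * x_ij <= theta * x_io
        a_in = np.hstack([-inputs[o].reshape(m, 1), inputs.T])
        # Output constraints: sum_j lambda_j * y_rj >= y_ro
        a_out = np.hstack([np.zeros((s, 1)), -outputs.T])
        res = linprog(c,
                      A_ub=np.vstack([a_in, a_out]),
                      b_ub=np.r_[np.zeros(m), -outputs[o]],
                      bounds=[(None, None)] + [(0, None)] * n)
        if res.success:
            scores[o] = res.x[0]  # theta* = 1 on the efficient frontier
    return scores

# Synthetic subdivision data (placeholder for real historical records).
rng = np.random.default_rng(0)
n_sub = 60
features = rng.uniform(0.5, 2.0, size=(n_sub, 3))
saidi = 2.0 * features[:, 0] / features[:, 2] + rng.normal(0.0, 0.1, n_sub)

# Stage 1: DEA ranks subdivisions and flags the efficient frontier.
eff = dea_efficiency(inputs=features[:, :2], outputs=(1.0 / saidi).reshape(-1, 1))
efficient = eff >= 0.999

# Stage 2: learn the frontier behaviour from efficient subdivisions only,
# then score every subdivision to obtain its reliability target.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(features[efficient], saidi[efficient])
saidi_targets = model.predict(features)
```

Under these assumptions, `eff` doubles as the subdivision performance ranking, and `saidi_targets` plays the role of the per-subdivision target reliability indices that are aggregated into the overall EDS target; the RF could be swapped for the SVM or RBF network mentioned above without changing the structure of the pipeline.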