Abstract

Khezrimotlagh et al. (Eur J Oper Res 274(3):1047–1054, 2019) propose a new framework for large-scale data envelopment analysis (DEA). The framework provides the fastest available technique in the DEA literature for dealing with big data. It is well known that as the number of decision-making units (DMUs) or the number of inputs and outputs increases, the size of the DEA linear programming problems grows, and the elapsed time to evaluate the performance of DMUs increases sharply. The framework selects a subsample of DMUs and identifies the set of all efficient DMUs. After that, users can apply DEA models with the known efficient DMUs to evaluate the performance of inefficient DMUs or to benchmark them. In this study, we elucidate the proposed method with transparent examples and illustrate how the framework is applied. Additional simulation exercises are designed to evaluate the performance of the framework in comparison with two earlier methods: build hull (BH) and hierarchical decomposition (HD). The disadvantages of BH and HD are transparently demonstrated. A single computer with two different CPUs is used to run the methods. For the first time in the literature, we consider cardinalities of 200,000, 500,000 and 1,000,000 DMUs.
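For context on why problem size drives runtime, each DMU evaluation solves a linear program whose variable count grows with the number of DMUs. The sketch below shows a standard input-oriented CCR envelopment model (textbook DEA, not the authors' subsampling procedure itself); the function and data names are illustrative, and SciPy's `linprog` stands in for whatever solver a real implementation would use:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency score of DMU `o`.

    X: (n, m) array of inputs, Y: (n, s) array of outputs for n DMUs.
    Solves: min theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
                             sum_j lam_j * y_j >= y_o,  lam >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lam_1, ..., lam_n]; minimize theta.
    c = np.zeros(1 + n)
    c[0] = 1.0
    # Input constraints: sum_j lam_j x_ij - theta * x_io <= 0  (m rows)
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lam_j y_rj <= -y_ro  (s rows)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (1 + n),
                  method="highs")
    return res.fun  # theta* = 1 means DMU o is on the efficient frontier
```

Because the LP has n + 1 variables and is solved once per DMU, naively evaluating all n DMUs costs on the order of n solves of an n-variable LP, which is what subsampling frameworks such as the one studied here aim to avoid.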
