Abstract

Data envelopment analysis (DEA) is a self-evaluation method that assesses the relative efficiency of a particular decision making unit (DMU) within a group of DMUs. It has been widely applied in real-world scenarios, and traditional DEA models with a limited number of variables and linear constraints can be computed easily. However, DEA on big data involves huge numbers of DMUs, which can push the computational load beyond what is practical for traditional DEA methods. In this paper, we propose novel algorithms to accelerate computation in the big data environment. Specifically, we first use an algorithm that divides the large-scale set of DMUs into small-scale subsets and identifies all strongly efficient DMUs. If the set of strongly efficient DMUs is not too large, we use it as a sample set to evaluate the efficiency of the inefficient DMUs. Otherwise, in the case of a single input and a single output, we identify two reference points to serve as the sample. Furthermore, a variant of the algorithm is presented to handle cases with multiple inputs or multiple outputs, in which some of the strongly efficient DMUs are reselected as a reduced-size sample set to precisely measure the efficiency of the inefficient DMUs. Finally, we test the proposed methods on simulated data in various scenarios.
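To illustrate why the single-input, single-output case admits a tiny reference sample, the following sketch computes efficiency scores under the standard constant-returns-to-scale (CRS) assumption, where a DMU's efficiency reduces to its output/input ratio normalised by the best ratio in the group. This is a minimal illustration of the general principle, not the paper's exact algorithm; the function name and data are hypothetical.

```python
def dea_single_io(inputs, outputs):
    """CRS efficiency scores for DMUs with one input and one output.

    With a single input x and single output y, each DMU's efficiency is
    (y/x) divided by the maximal y/x over all DMUs, so the DMU attaining
    the maximal ratio serves as the sole reference point and no linear
    program needs to be solved per DMU.
    """
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)  # the strongly efficient reference DMU's ratio
    return [r / best for r in ratios]

# Example: three DMUs with ratios 2, 3, and 1; the second is efficient.
scores = dea_single_io([2.0, 1.0, 4.0], [4.0, 3.0, 4.0])
# scores → [2/3, 1.0, 1/3]
```

Once the reference ratio is known, scoring any further DMU is a constant-time operation, which is what makes a small sample set attractive at big-data scale.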
