Deep-learning object-detection architectures are gaining traction, as they are used for critical tasks in demanding environments such as health care, self-driving vehicles, industry, security, and robotics. Nevertheless, the available architectures yield variable performance depending on the scenario under consideration, and public challenges typically evaluate that performance only in terms of accuracy. In this work, instead of proposing a new architecture, we overcome the limitations of those challenges by proposing a computationally undemanding comparative model based on several Data Envelopment Analysis (DEA) strategies, not only to compare deep-learning architectures but also to identify which parameters are the most relevant features for achieving efficiency. In addition, the proposed model provides a set of recommendations to improve object-detection frameworks. These recommendations may be applied in future high-performance meta-architectures, since the model has lower computational and time requirements than the traditional trial-and-error strategy of training a neural network for each configurable parameter. To this end, the presented model evaluates 16 parameters across 139 configurations of well-known detectors present in the Google data set [1].
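For reference, a DEA efficiency score for a given detector configuration (treated as a decision-making unit, DMU) can be obtained with the classical input-oriented CCR multiplier model shown below. This is the standard textbook formulation, given here only as a sketch; it is not necessarily the exact DEA variant employed in this work, and the inputs and outputs named are illustrative assumptions.

\[
\begin{aligned}
\max_{u,\,v} \quad & \theta_{o} = \sum_{r} u_{r}\, y_{ro} \\
\text{s.t.} \quad & \sum_{i} v_{i}\, x_{io} = 1, \\
& \sum_{r} u_{r}\, y_{rj} - \sum_{i} v_{i}\, x_{ij} \le 0 \quad \forall j, \\
& u_{r} \ge 0, \quad v_{i} \ge 0,
\end{aligned}
\]

where \(x_{ij}\) and \(y_{rj}\) denote the inputs (e.g., computational cost or inference time) and outputs (e.g., detection accuracy) of configuration \(j\), and \(o\) is the configuration under evaluation. A score \(\theta_{o} = 1\) indicates that the configuration lies on the efficient frontier, while lower scores quantify how far it falls from the best-performing configurations.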