Abstract

Meta-syntheses of experts’ judgements and quantitative metrics are the two main forms of evaluation, but both have limitations. This paper constructs a framework for mapping evaluation results between quantitative metrics and experts’ judgements so that these limitations can be mitigated. In this way, the weights of metrics in quantitative evaluation are obtained objectively, and the validity of the results can be verified. Weighted average percentile (WAP) is employed to aggregate different experts’ judgements into standard WAP scores. The Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) is used to map quantitative results onto experts’ judgements, with the WAP scores set equal to the final closeness coefficients generated by TOPSIS. Because these closeness coefficients depend on the weights of the quantitative metrics, the mapping procedure is transformed into an optimization problem, and a genetic algorithm is introduced to search for the best weights. An academic journal ranking in the field of Supply Chain Management and Logistics (SCML) is used to test the validity of the mapping results. Four prominent ranking lists, from the Association of Business Schools, the Australian Business Deans Council, the German Academic Association for Business Research, and the Comité National de la Recherche Scientifique, were selected to represent different experts’ judgements. Twelve indices, including the Impact Factor (IF), Eigenfactor Score (ES), H-index, SCImago Journal Rank, and Source Normalized Impact per Paper (SNIP), were chosen for the quantitative evaluation. The results reveal that the mapping results possess high validity, with the relative error between experts’ judgements and the quantitative metrics being 43.4%, and the corresponding best weights are determined at the same time. Several findings follow. First, the H-index, Impact Per Publication (IPP), and SNIP play dominant roles in SCML journal quality evaluation. Second, all the metrics are positively correlated, although the strength of correlation varies among metrics: for example, ES and NE are perfectly positively correlated with each other, yet they have the lowest correlation with the other metrics, whereas metrics such as IF, IFWJ, 5-year IF, and IPP are highly correlated. Third, some highly correlated metrics may perform differently in quality evaluation, such as IPP and 5-year IF. Therefore, when mapping quantitative metrics to experts’ judgements, academic fields should be treated distinctly.
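To make the mapping step concrete, the sketch below shows how TOPSIS closeness coefficients can be computed for a journals-by-metrics decision matrix and how a weight vector can be searched so that the coefficients approximate the aggregated WAP scores. This is an illustrative sketch only: the data, variable names, and the use of SciPy's differential evolution as an evolutionary stand-in for the paper's genetic algorithm are assumptions, not the authors' implementation.

```python
# Illustrative sketch: fit metric weights so that TOPSIS closeness coefficients
# approximate expert-derived WAP scores. All names and data are hypothetical.
import numpy as np
from scipy.optimize import differential_evolution

def topsis_closeness(X, w):
    """TOPSIS closeness coefficients for decision matrix X (journals x metrics)
    and weight vector w, treating all metrics as benefit criteria."""
    V = w * (X / np.linalg.norm(X, axis=0))       # vector-normalize columns, apply weights
    ideal, anti = V.max(axis=0), V.min(axis=0)    # positive / negative ideal solutions
    d_plus = np.linalg.norm(V - ideal, axis=1)    # distance to positive ideal
    d_minus = np.linalg.norm(V - anti, axis=1)    # distance to negative ideal
    return d_minus / (d_plus + d_minus)           # closeness in [0, 1]

def mapping_error(w, X, wap_scores):
    """Objective: squared discrepancy between closeness coefficients and WAP scores."""
    w = np.asarray(w)
    w = w / w.sum()                               # normalize weights to sum to 1
    return np.sum((topsis_closeness(X, w) - wap_scores) ** 2)

# Placeholder data: 30 journals, 12 metrics (IF, ES, H-index, SJR, SNIP, ...).
rng = np.random.default_rng(0)
X = rng.random((30, 12)) + 0.1
wap = rng.random(30)                              # aggregated expert WAP scores

# Evolutionary search over the weight simplex (GA stand-in).
result = differential_evolution(mapping_error, bounds=[(0.01, 1.0)] * 12,
                                args=(X, wap), seed=0)
best_w = result.x / result.x.sum()
print("best metric weights:", np.round(best_w, 3))
```

In the paper's setting, `X` would hold the twelve quantitative indices for the SCML journals and `wap` the WAP scores aggregated from the four ranking lists; the fitted weights then indicate each metric's relative importance, and the residual error measures the validity of the mapping.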
