Optimization competitions stimulate the development of optimization algorithms. However, there is no common agreement on how to rank the contestants. This paper proposes a method for assessing algorithm performance. The proposed mark is easily interpretable by humans and can be compared with previously published marks. The proposed method was used to create a ranking of the contestants of the CEC 2022 competition on single objective bound constrained numerical optimization. The resulting ranking differs from the official one; the ranks of some algorithms differ by up to five positions. Because the proposed ranking focuses on the results achieved at the end of the computational budget, the winning algorithms are better suited to most real-world applications. Since the tuning effort influences an algorithm's results, the paper also examines the influence of parameter tuning on the positions achieved in both rankings by the top four algorithms. The parameters to tune were extracted from the papers that introduced the algorithms and from their source code. The results showed that none of the considered algorithms was carefully tuned for CEC. In general, tuning with the target performance metric helps, whereas tuning with the other metric is harmful. The benefit of tuning is up to a 33% increase in the number of trials that found the global optimum. The ablation analyses of the algorithms' parameters showed that only a few parameters strongly influence the results; frequently, parameters not listed in the papers are among the most important.