Meta-regression is used to adjust for sources of treatment heterogeneity in meta-analysis, but it is often underpowered. Current best practice for identifying candidate covariates for meta-regression is to use effect modifiers identified in subgroup analyses, but these are rare, and reviewers are typically forced to rely on clinical opinion. The increased accessibility of statistical models designed to detect sparse signals may offer a viable alternative. We conducted a simulation study comparing two simulated clinical experts to a Bayesian analysis with a sparsity-inducing horseshoe prior and to a weighted random forest implemented in the R package metaforest. Clinician one used a rule of one covariate for every 10 studies, while clinician two included all variables believed to be important. Five hundred simulations of 20, 40, and 60 studies were conducted using an outcome model with two true and eight noise variables. Performance was evaluated by the ratio of means (RoM), coverage, and confidence interval width. Both clinician approaches performed similarly in terms of RoM (20, 40, 60 studies: 1.27/1.32, 1.32/1.37, 1.29/1.32), although the confidence intervals (CIs) for clinician one were always the widest. Metaforest had the lowest bias (RoM = 1.23, 1.07, 1.03) and the second-smallest CIs, and it ranked highest for coverage when 40 or more studies were included. The Bayesian horseshoe aggressively shrank coefficients to zero and performed poorly throughout (RoM = 2.20, 1.71, 1.43), although it produced the smallest CIs. The metaforest package offers a fast and accessible interface for detecting and visualizing effect modifiers and may be a valuable addition to the meta-analysis workflow as a complement to existing theory-driven approaches. Future work should investigate hybrid approaches, e.g. using clinical expertise to flag promising coefficients in horseshoe or Bayesian regression tree models.