ABSTRACT As firms move towards data-driven decision-making using algorithmic systems, concerns have been raised about the lack of transparency of these systems, which could undermine users’ trust and provoke discriminatory decisions. Although previous research has developed methods to improve algorithmic transparency, little empirical evidence exists on how effective these approaches are. Drawing upon Rest’s theory of ethical decision-making and the literature on algorithmic transparency and bias, we investigate the effectiveness of feature importance (FI), a common transparency-enhancing approach that illustrates the nature and the weights of the features utilised by an algorithm. Through an online experiment employing a fictitious tool that provided recommendations for selecting employees for a promotion-related training programme, we find that FI is effective when biased recommendations involve direct discrimination (i.e. when individuals are treated less favourably on protected grounds such as gender), but is of little assistance when discrimination is indirect (i.e. when an apparently neutral criterion or practice disadvantages a group of individuals belonging to a protected class). Additionally, we propose a new transparency approach that accompanies FI with aggregated demographic information in indirect discrimination circumstances, and report the results of testing its effects.