The increasing use of automated systems to support human decision-making has practical implications across multiple domains, and the dynamics of trust formation in autonomous systems are a critical element in the success of the human-automation team. Here, we draw on existing models of human-automation trust to focus specifically on the concept of dynamically learned trust. In the present experiments we explored how trust in an autonomous system is influenced by variations in system speed, system accuracy, and a novel operationalization of system uncertainty, in which the automated system corrects itself mid-response. Participants monitored the performance of an automated 'Captcha'-like decision support system and indicated whether the system was correct or incorrect on each trial. Dependent variables included subjective trust ratings, response times, hit rates, and false alarm rates. In addition to validating our methodology for quantifying the impact of low-level system design features, we demonstrate that participants are more likely to miss system errors when they place high trust in a system, and that the speed with which an automated system produces responses, and the degree to which it corrects itself, both influence human trust in that system.