Previous research on personal health records (PHRs) has focused on applications that are “tethered” to a specific electronic health record (EHR). However, there is a gap in research on the usability of unaffiliated, independent PHRs, as well as on college-aged PHR users. We therefore conducted a comparative usability study in which 18 college-aged participants completed the same six tasks in three popular, freely available, independent PHRs. We chose a within-subject design so that each participant could provide comparative feedback in the post-experiment interview, and we randomized the order in which participants used the PHRs across participants to keep learning effects from confounding the usability performance measures. Dependent variables included task time, mouse movement, mouse clicks, keystrokes, errors, and user satisfaction ratings based on the Computer System Usability Questionnaire (CSUQ). Analysis of variance (ANOVA) was used to test for statistically significant differences in the means of each dependent variable. Based on the variability in PHR use and design reported in previous research, we hypothesized that there would be a significant difference for each of these measures and that one PHR, given its extensive resources and company history, would prove to be the most usable product. Results showed statistically significant differences in the CSUQ survey categories, errors, and keystrokes, and supported one of the three PHRs as having better usability than its tested counterparts. While the initial purpose of this study was comparative usability testing of PHRs for college-aged students, the study provided other insights as well.
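Because the design is within-subject, the natural form of the analysis is a repeated-measures ANOVA, in which each participant serves as their own control (the text says only “ANOVA”; the repeated-measures form is an assumption here). The sketch below partitions the variance for a single dependent measure across the three PHRs. All numbers are hypothetical task times invented for illustration, not data from the study.

```python
# Hypothetical one-way repeated-measures ANOVA for a within-subject design:
# rows = participants, columns = the three PHRs. Values are made-up task
# times in seconds, NOT data from the study.
data = [
    [30, 40, 33],
    [28, 38, 36],
    [34, 44, 35],
    [32, 42, 40],
]

n = len(data)      # number of participants
k = len(data[0])   # number of conditions (PHRs)

grand = sum(sum(row) for row in data) / (n * k)
cond_means = [sum(row[j] for row in data) / n for j in range(k)]
subj_means = [sum(row) / k for row in data]

# Partition total variability into condition, subject, and error components;
# removing subject variance is what distinguishes the repeated-measures form.
ss_total = sum((x - grand) ** 2 for row in data for x in row)
ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
ss_error = ss_total - ss_cond - ss_subj

df_cond = k - 1
df_error = (k - 1) * (n - 1)
f_stat = (ss_cond / df_cond) / (ss_error / df_error)

print(f"F({df_cond}, {df_error}) = {f_stat:.2f}")  # → F(2, 6) = 23.68
```

The resulting F statistic would be compared against the F distribution with (k − 1) and (k − 1)(n − 1) degrees of freedom to judge significance; in practice a library routine such as statsmodels' `AnovaRM` would be used rather than hand-rolled sums of squares.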
Similar to other usability studies found in the literature review, this study used multiple methods, including objective task metrics, a survey, and an interview, to solicit feedback on the systems. It adds to the literature by analyzing the usability of these systems with a new user group and by providing a comparative analysis of three leading Web-based, untethered PHRs. The initial hypothesis, that there would be a significant difference in usability for each of the dependent measures and that one PHR would have better usability based on those measures, was partially supported by the results. Not all of the criteria showed statistically significant differences across the three systems (task time, mouse movement, mouse clicks, and interface quality did not), but many of the measures did differ significantly in their means, and one of the PHRs had the best results for the majority of the metrics analyzed. Although one PHR appeared to be more usable than the other systems, this does not imply that it is without flaws; there are still improvements that could be made to enhance its usability. For example, several participants commented on how much they liked the interface of one of the competing PHRs. Several participants also commented that some of the drop-down menus limit the available options and are not representative of the information they were trying to enter; expanded drop-downs or free-text entry could address this. As found in a different study, there are still barriers to the adoption of the PHR that proved most usable here. Using the survey results and open-ended responses as a guide to improvement, the PHR found to be most usable in our study has the potential to improve its usability further.