Understanding how people perceive algorithmic decision-making remains critical, as these systems are increasingly integrated into areas such as education, healthcare, and criminal justice. These perceptions can shape trust in, compliance with, and the perceived legitimacy of automated systems. Focusing on San Francisco's decade-long policy of algorithmic school assignment, we draw on procedural and distributive justice theory to investigate parents' fairness perceptions of the city's school assignment system. Controlling for parents' school assignment outcomes, we find that key differences in how parents define fairness shape their preferences for what constitutes a fair school assignment system. Moreover, parents' definitions and perceptions of fairness differ across socioeconomic and racial groups. For instance, among white respondents, the most common definition of fairness was "proximity" to their assigned school, whereas, among Hispanic or Latino parents, the most popular definition was that the "same rules" are applied to everyone. It is crucial for computational system designers and policymakers to consider these differences when deciding on the goals and values embedded in decision-making systems and whose goals and values they reflect.