Mr. Cai discusses ways to use cross-national studies not simply to rank nations but to help teachers improve their students' learning of mathematics.

Over the past several decades, a number of cross-national studies of mathematics achievement have been conducted. The researchers who conduct them collect a great deal of descriptive information and try to understand the observed differences in performance from the perspectives of differences in schooling, in cultural and social factors, and in student characteristics. Yet the public, educators, some other researchers, parents, and policy makers focus much of their attention on the international rankings of mathematical performance.

Since the U.S. generally places lower in these rankings than many Americans would hope, this focus on the rankings has led to the widely accepted view that U.S. students are doing poorly in mathematics, especially compared with students from Asian countries. Consequently, the reform of mathematics education - with the attendant efforts to raise curriculum standards, improve the quality of classroom instruction, and reform teacher education programs - has become a national priority.

Meanwhile, some researchers have argued that U.S. students rank low in these comparisons because the content of the international tests disadvantages them. That is, the test items cover material not taught in the tested grades in American schools. Indeed, some researchers have even challenged the validity and reliability of previous cross-national studies, arguing that the tests lack proper statistical sampling procedures and are not representative of the participating countries. This debate continues.

My purpose here is not to add another coal to this fire. Rather, I wish to consider the broader purposes of international comparisons and to promote ways that researchers can use their findings to improve student learning.
Moreover, I agree with Ian Westbury's conclusion that an overemphasis on rankings sidetracks the search for what is responsible for the cross-national differences in mathematics performance.1 Consequently, I will present here information from a series of cross-national studies involving Chinese and U.S. students to show how a variety of assessments, detailed analyses, and interpretations in cross-national studies can help inform education reform.

Assessing Performance Differences in Mathematics

Obviously, a cross-national study involves much more than simply collecting data in different countries and comparing the results. In these studies, mathematical performance has generally been assessed on a set of tasks and represented by a numerical score, and the conclusions reached depend to a great extent on the formats of the tasks used. In order to better understand the differences in students' mathematical performance, it is desirable to use a variety of types of assessment tasks to measure different facets of students' mathematical thinking.2 Some tasks are better measures of students' high-level thinking skills, while others may be better measures of students' procedural knowledge and routine problem-solving skills. Indeed, some cross-national studies have already started to use various types of assessment tasks, and analyses of the results are beginning to reveal interesting patterns of performance differences.3

For example, I recently conducted a study examining the mathematical performance of U.S. and Chinese sixth-graders, using four types of assessment tasks: 1) 13 multiple-choice tasks measuring computation skills, 2) 18 multiple-choice tasks measuring simple problem-solving skills, 3) six process-constrained performance assessment tasks measuring complex problem-solving skills, and 4) six process-open performance assessment tasks measuring complex problem-solving skills.
A process-constrained task requires students to carry out a procedure or a set of routine procedures in solving the problem. …