Software engineers have many questions to answer. The test team wants to know which technique is best for finding faults in code. Maintainers seek the best tool to support configuration management. Project managers try to determine what types of experience make the best programmers or designers, while designers look for models that are good at predicting reliability. To answer these and other questions, we often rely on the advice and experience of others, which is not always based on careful, scientific research [Fenton et al. 1994]. As software practitioners, we should make key decisions and assessments in an objective, scientific way. So we need to know two things: what assessment techniques are available to us, and which should we use in a given situation?

Will Tracz has asked me to write an on-going column in SIGSOFT Notes to address these issues. In this first article, I will explain some of the terminology to be used in subsequent articles. In future articles, my invited colleagues and I will try to point out some of the key issues affecting decisions about how to do research and how to evaluate the research of others. In the course of doing this, we will also show how software engineering research sometimes differs from research in other fields. For instance, in medicine it is usually easy to test a new drug by giving a placebo to the control group. But in software engineering we cannot do that: we cannot ask a group not to use a design technique; the group has to use some technique if it is to complete the design, and we end up losing control rather than having a carefully controlled comparison.

As always, your comments are welcome, preferably through electronic mail; I will try to address in future columns the major points you raise in your queries.