What's in a name [brand]? In the first of three Research Spotlights articles this issue, “A Mathematical Model for the Origin of Name Brands and Generics,” authors Joseph D. Johnson, Adam M. Redlich, and Daniel M. Abrams develop a mathematical model for the dynamics of competition through advertising. Here, the terms “generic” and “name brand” refer to low and high advertising investment states, respectively. A distinguishing feature of the authors' approach is that they model “monopolistic competition,” meaning that although there may be many suppliers of a product or service, these are distinguished only by brand and/or quality. Their analysis of the existence and stability of equilibria of the model, a system of ordinary differential equations, predicts that “when advertising is relatively cheap compared to the benefit of advertising,” these two advertising investment states will arise. This segmentation “contrasts starkly with (often implicit) assumptions of smooth, singly peaked functions for economic metrics.” The authors note that their model predicts that the segmentation should be reflected in price distributions. Indeed, although the model has limitations, which readers are invited to consider addressing in future work, the authors show good qualitative agreement with this prediction on a large consumer data set.

The need to solve linear least squares problems is ubiquitous in science and engineering applications, and it is at the heart of our second article, “Some Comments on Preconditioning for Normal Equations and Least Squares,” authored by Andy Wathen. Iterative solvers are often preferred over direct methods for large-scale least squares problems because they require only the ability to perform matrix-vector products with the system matrix and its (conjugate) transpose.
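To make the matvec-only property concrete, here is a minimal sketch of one such iterative method, CGLS (mathematically equivalent to conjugate gradients applied to the normal equations). The code and the small test matrix are illustrative and are not taken from the article; only products with $A$ and $A^T$ are ever used.

```python
import numpy as np

def cgls(matvec, rmatvec, b, n, iters=50, tol=1e-12):
    """Solve min ||A x - b||_2 using only products with A (matvec)
    and A^T (rmatvec) -- the CGLS iteration."""
    x = np.zeros(n)
    r = b.copy()          # residual b - A x
    s = rmatvec(r)        # A^T r: zero at the least squares solution
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = matvec(p)
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = rmatvec(r)
        gamma_new = s @ s
        if gamma_new < tol:   # ||A^T r|| small: (near-)optimal point
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Small illustrative overdetermined system (not from the article).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x = cgls(lambda v: A @ v, lambda v: A.T @ v, b, n=5)
```

For a sparse or matrix-free operator, the two lambdas would be replaced by whatever routine applies the operator and its transpose; the dense array here is only for checking against a direct solve.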
A preconditioner, which is an approximation to the original matrix with desirable properties (e.g., easy to apply, easy to “invert”), is usually employed to speed convergence. To be effective, the spectrum of the preconditioned normal equations operator should be appropriately clustered. Using several concrete examples as motivation, the author explores a subtle and underappreciated difficulty in designing preconditioners for least squares problems, called the “matrix squaring problem.” Simply put, a good approximation, $P$, to the original matrix, $B$, need not yield an effective preconditioner, $P^T P$, for the normal equations matrix $B^T B$. The article includes theory and discussion about when the matrix squaring problem can be expected and when it is a nonissue in the case of invertible matrices. The author's final example shows that the matrix squaring problem can occur even in the full rank rectangular case. This should serve as a warning to practitioners: seeking an approximation to the original operator may not be sufficient when designing an effective preconditioner for the iterative solution of least squares problems.

The final article, “Hypergraph Cuts with General Splitting Functions,” by Nate Veldt, Austin R. Benson, and Jon Kleinberg, tackles a problem that is central to the study of hypergraphs. While readers may be familiar with the standard graph representation in which an edge connects exactly two vertices, in a hypergraph an edge (a.k.a. hyperedge) refers to a grouping of (possibly) more than two vertices. This extra dimensionality associated with an edge complicates the generalization of graph cuts to hypergraphs, a fact that can be appreciated by studying the graphical illustrations provided in the article. Nevertheless, the authors provide a framework for hypergraph cuts that leads to a rich set of results and illuminates new research questions.
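Before turning to the details of that hypergraph framework, Wathen's matrix squaring problem admits a compact numerical illustration. The example below is a toy sketch of ours (not one of the article's examples), assuming NumPy: with a highly nonnormal $B$ and the trivial choice $P = I$, the eigenvalues of $P^{-1}B$ are perfectly clustered at $1$, the classical target when preconditioning $B$ itself, yet the eigenvalues of $(P^T P)^{-1} B^T B = B^T B$ are spread over roughly eight orders of magnitude.

```python
import numpy as np

# Toy illustration (not from the article): B is highly nonnormal,
# and we take the trivial preconditioner P = I.
a = 100.0
B = np.array([[1.0, a], [0.0, 1.0]])
P = np.eye(2)

# Eigenvalues of P^{-1} B: both (numerically) equal to 1 -- perfectly
# "clustered", the usual goal when preconditioning B itself.
eig_prec_B = np.linalg.eigvals(np.linalg.solve(P, B))

# Eigenvalues of (P^T P)^{-1} (B^T B) = B^T B: spread by about a^4,
# i.e., roughly 1e-4 and 1e4 here (returned in ascending order).
eig_prec_normal = np.linalg.eigvalsh(B.T @ B)
```

The spread arises because eigenvalue clustering of $P^{-1}B$ says little about its singular values when the matrix is nonnormal, and it is the squared singular values that govern the normal equations.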
Specifically, the framework of Veldt, Benson, and Kleinberg utilizes so-called “splitting functions,” which assign a penalty to each way of splitting a hyperedge's nodes across a partition. These functions enable the authors to formulate the minimum $s$-$t$ cut problem of finding a minimum weight set of hyperedges to cut in order to separate nodes $s$ and $t$ from each other (e.g., as might be required in data clustering applications). The paper contains many new contributions, among them polynomial-time algorithms for some variants of the hypergraph $s$-$t$ cut problem and NP-hardness results for other variants. As this article “includes broad contributions at the intersection of graph theory, optimization, scientific computing, and other subdisciplines in applied mathematics” and offers several suggestions for follow-up research questions, it is likely to appeal to many SIREV readers.
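As a concrete, if naive, illustration of these notions, the following brute-force sketch (a toy example of ours; the nodes and hyperedges are made up) enumerates all $s$-$t$ bipartitions and charges each hyperedge via a cardinality-based splitting function, a family the authors also study; the all-or-nothing function used here penalizes a hyperedge exactly when its nodes are split across the cut.

```python
from itertools import combinations

def min_st_cut(nodes, hyperedges, s, t, splitting_fn):
    """Brute-force minimum s-t hypergraph cut: try every bipartition
    (S, T) with s in S and t in T, charging each hyperedge e the
    penalty splitting_fn(k, d), where d = |e| and k = |e intersect S|."""
    others = [v for v in nodes if v not in (s, t)]
    best, best_S = float("inf"), None
    for r in range(len(others) + 1):
        for subset in combinations(others, r):
            S = {s, *subset}
            cost = sum(splitting_fn(len(S & set(e)), len(e))
                       for e in hyperedges)
            if cost < best:
                best, best_S = cost, S
    return best, best_S

# All-or-nothing splitting: penalty 1 iff the hyperedge is split.
all_or_nothing = lambda k, d: 0 if k in (0, d) else 1

# Tiny made-up instance: cutting the single hyperedge {s, a, b}
# isolates s from t at cost 1.
nodes = ["s", "a", "b", "c", "t"]
hyperedges = [("s", "a", "b"), ("a", "b", "c"), ("c", "t"), ("b", "t")]
cost, S = min_st_cut(nodes, hyperedges, "s", "t", all_or_nothing)
```

Swapping in a different `splitting_fn` (e.g., one that grows with `min(k, d - k)`) changes which partitions are cheap, which is precisely the flexibility the framework formalizes; the exponential enumeration here is only for illustration, whereas the article addresses when the problem is polynomially solvable.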