Abstract

Readers may be surprised to learn that there is something on which everybody in my department agrees: the heating system is absolutely ineffective. Some offices get too hot too fast (usually inhabited by people who like it cold), while other offices (like mine, unfortunately) take forever to become even slightly warm. The building is heated by a network of pipes through which hot steam is circulated. Question: How do we design a heating system that delivers the same amount of steam to each office as fast as possible, thereby ensuring that all offices reach the same temperature as fast as possible? Answer: Construct a specific matrix, called a weighted Laplacian, and choose the weights so as to maximize its second smallest eigenvalue. The weights tell us how wide each pipe needs to be. Of course, this is a constrained optimization problem, because we are on a budget and can afford only a limited amount of material for the pipes.

A more general problem is discussed by Jun Sun, Stephen Boyd, Lin Xiao, and Persi Diaconis in their paper “The Fastest Mixing Markov Process on a Graph and a Connection to a Maximum Variance Unfolding Problem.” They express the problem of maximizing the second smallest eigenvalue of the Laplacian as a semidefinite program (a rough outline of the problem is sketched below). The dual of this program has a simple geometric interpretation: it is the problem of positioning n points in n-space so that they are as far apart as possible while not exceeding prescribed distances between specified pairs of points. Coming back now to the heating issues in my department: we have been promised a new building. Groundbreaking is to start any time now (or so, at least, we are told). I have been thinking about giving this paper to the architects; it might inspire them to install a more effective heating system.

In the well-written paper “Globalization Techniques for Newton–Krylov Methods and Applications to the Fully Coupled Solution of the Navier–Stokes Equations,” Roger Pawlowski, John Shadid, Joseph Simonis, and Homer Walker discuss methods for the solution of systems of nonlinear equations $F(u)=0$. Such systems arise, for instance, when one discretizes partial differential equations to solve fluid flow problems. Arguably the most popular method for solving $F(u)=0$ is Newton’s method. It starts from an initial approximation $u_0$ and produces successively better (we hope) iterates as updates of the previous iterate, $u_{k+1}=u_k+s_k$. The step $s_k$ is computed as the solution of the linear system $F^{\prime}(u_k)s_k=-F(u_k)$, where the Jacobian $F^{\prime}(u)$ is the matrix of derivatives. When the linear systems are solved by a Krylov subspace method, the result is called a Newton–Krylov method. Convincing Newton’s method to converge to the solution is not always easy, especially when the initial approximation $u_0$ is far away from the solution. A variety of strategies are available to enhance the performance of Newton’s method; the authors discuss two. To increase robustness, one can solve the linear systems more or less accurately; this is done by terminating the linear solve as soon as the residual norm $\|F(u_k)+F^{\prime}(u_k)s_k\|$ falls below a tolerance specified by a forcing term. To improve the chances of convergence, one can globalize Newton’s method, either by changing the length of the step $s_k$ (as opposed to its direction), as in a backtracking line search, or by choosing a step $s_k$ that minimizes the residual norm over a particular region, as in a trust region method. The authors prove convergence results and perform numerical experiments on standard benchmark problems to compare different forcing terms and globalization strategies.
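To make the ingredients of the Newton–Krylov paper concrete, here is a minimal Python sketch of Newton’s method globalized by a backtracking line search. The toy test problem, the parameter values, and the stopping rule are my own illustrative choices, not the authors’; the comment in the middle marks where a Krylov solver and a forcing term would enter in a genuine Newton–Krylov method. On this toy problem the undamped Newton iteration, started from the same point, diverges, while the damped one converges.

```python
import numpy as np

def F(u):
    # Toy nonlinear system F(u) = 0 with solution u = 0 (not from the paper):
    # plain Newton started far from 0 overshoots and diverges on this one.
    return np.arctan(u)

def J(u):
    # Jacobian F'(u); diagonal because the toy system acts componentwise.
    return np.diag(1.0 / (1.0 + u**2))

def globalized_newton(u0, tol=1e-10, max_iter=50):
    """Newton's method with a backtracking line search (step-length damping).

    In a Newton-Krylov method, the direct solve below would be replaced by an
    iterative Krylov solver such as GMRES, terminated once the linear residual
    satisfies ||F(u_k) + F'(u_k) s_k|| <= eta_k * ||F(u_k)||, where eta_k is
    the forcing term.
    """
    u = np.asarray(u0, dtype=float)
    for k in range(max_iter):
        f = F(u)
        nf = np.linalg.norm(f)
        print(f"iter {k:2d}   ||F(u)|| = {nf:.3e}")
        if nf <= tol:
            break
        s = np.linalg.solve(J(u), -f)      # Newton step: F'(u_k) s_k = -F(u_k)
        # Backtracking: shorten the step (its direction is kept) until the
        # nonlinear residual norm shows sufficient decrease.
        t = 1.0
        while np.linalg.norm(F(u + t * s)) > (1.0 - 1e-4 * t) * nf and t > 1e-8:
            t *= 0.5
        u = u + t * s
    return u

print("computed solution:", globalized_newton([3.0, -4.0]))
```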
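Going back to the first paper for a moment, the eigenvalue problem mentioned above can be written down compactly. In rough outline (my notation; the paper’s exact normalization of the budget constraint may differ), one seeks edge weights $w_{ij}$ on a graph whose nodes are the offices and whose edges are the pipes, so as to solve

\[
\begin{aligned}
\text{maximize}\quad & \lambda_{2}\bigl(L(w)\bigr)\\
\text{subject to}\quad & \textstyle\sum_{\{i,j\}\in E} c_{ij}\,w_{ij}\le 1,\qquad w_{ij}\ge 0 \ \text{ for all } \{i,j\}\in E,
\end{aligned}
\]

where $c_{ij}>0$ is the cost of widening pipe $\{i,j\}$ and $L(w)$ is the weighted Laplacian, with $L(w)_{ii}=\sum_{j} w_{ij}$ and $L(w)_{ij}=-w_{ij}$ for $i\neq j$. Since $\lambda_2(L(w))$ is the minimum of the linear functions $v^{T}L(w)\,v$ over unit vectors $v$ orthogonal to the all-ones vector, it is a concave function of $w$, which is what makes a semidefinite programming formulation possible; the dual is the point-positioning (maximum variance unfolding) problem described above.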
Are you one of those people who firmly believe that there is one and only one way to win a tennis match? And that that way is to subject your opponent to an impossible-to-return 700-horsepower serve? Yes? Then we might have just the paper for you. In “Monte Carlo Tennis,” Paul Newton and Kamran Aslam analyze the probability of winning in tennis and express it in terms of the probability that a player wins a point when serving. In previous work, Paul Newton and coauthor Joe Keller had assumed that this probability is constant throughout an entire match, and even a tournament. This amounts to assuming that points in tennis are independent and identically distributed (i.i.d.) random variables. However, this assumption fails to account for the “hot hand,” when everything goes just swimmingly; the “back-to-the-wall” effect, when miraculous feats become possible in the face of looming loss; or simply the adjustment to new tennis balls. Do these things really make a difference? Is the i.i.d. assumption unrealistic? Paul Newton and Kamran Aslam perform Monte Carlo simulations in MATLAB to answer this question. Read the paper if you want to know what they come up with.
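Finally, for readers who want to play with the i.i.d. assumption themselves, here is a small Monte Carlo sketch in Python (the paper’s simulations are in MATLAB and considerably more elaborate). It estimates the probability that the server wins a single game when each point is won independently with probability p, and compares the estimate with the standard closed-form i.i.d. value; the scoring is reduced to one game, and p and the number of trials are my own illustrative choices.

```python
import random

def simulate_game(p):
    """Simulate one tennis game under the i.i.d. assumption: the server wins
    each point independently with probability p.  Returns True if the server
    wins the game (first to 4 points, leading by at least 2)."""
    server = returner = 0
    while True:
        if random.random() < p:
            server += 1
        else:
            returner += 1
        if server >= 4 and server - returner >= 2:
            return True
        if returner >= 4 and returner - server >= 2:
            return False

def exact_game_probability(p):
    """Closed-form probability that the server wins an i.i.d. game:
    win to love/15/30, or reach deuce and win from there."""
    q = 1.0 - p
    win_from_deuce = p**2 / (1.0 - 2.0 * p * q)
    return p**4 * (1.0 + 4.0 * q + 10.0 * q**2) + 20.0 * p**3 * q**3 * win_from_deuce

random.seed(0)
p = 0.62            # illustrative probability of the server winning a point
trials = 200_000
wins = sum(simulate_game(p) for _ in range(trials))
print(f"Monte Carlo estimate of P(server wins game): {wins / trials:.4f}")
print(f"Exact i.i.d. value:                          {exact_game_probability(p):.4f}")
```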
