Abstract

The two papers in this issue are concerned with fair division and with the approximation of functions.

“Divide-and-Conquer: A Proportional, Minimal-Envy Cake-Cutting Algorithm,” by Steven Brams, Michael Jones, and Christian Klamler, has nothing to do with wedding cakes or Food Network Challenges. Instead it deals with the mathematical problem of proportional division, which often arises in social science applications. Let us illustrate the cake-cutting paradigm with the divide-and-choose algorithm for two players. Suppose you and I are to cut a cake so that each of us gets half of it, where “half” is interpreted according to our preferences. Suppose you like chocolate and prefer a piece with at least half of all the chocolate decoration. I, on the other hand, like marzipan and prefer a piece with at least half of all the marzipan decoration. In the divide-and-choose algorithm, I start by cutting the cake into two pieces so that each piece contains the same amount of marzipan. Then you pick the piece with more chocolate. This is a proportional and envy-free division of the cake. The division is proportional because each of us gets at least half of what we want (chocolate in your case, marzipan in mine), and it is envy-free because neither of us would want the other's piece. Mathematically, one expresses the preference for chocolate or marzipan in terms of a probability density function (a short computational sketch of this two-player procedure is given below). For n players the situation is much more complicated. The authors propose a divide-and-conquer algorithm that requires only the minimal number of cuts, $n-1$. Although the algorithm is not envy-free, it does minimize the maximal number of players that any player can envy. The algorithm is also simple and easy to implement.

Now for the second paper. Back in 1901 the German mathematician Carl Runge showed that polynomial interpolation of a function at n equally spaced points in the interval $[-1,1]$ can produce interpolants that fail to converge as $n\rightarrow\infty$, because they exhibit large-amplitude oscillations near the interval endpoints. In their paper “Impossibility of Fast Stable Approximation of Analytic Functions from Equispaced Samples,” Rodrigo Platte, Lloyd Trefethen, and Arno Kuijlaars look at the situation in a more general setting, with finite-precision computation in the back of their minds. Now the approximants do not have to be interpolating polynomials; they can be much more general. All that is required is that an approximant $\phi_n(f)$ depend on f at the n equally spaced grid points in $[-1,1]$, but nowhere else. The question is: how strongly can perturbations in f affect the approximant $\phi_n(f)$ in the worst case? To answer this question, the authors introduce a condition number $\kappa_n$ that measures how sensitive $\phi_n(f)$ can be to changes in f. You can think of $\kappa_n$ as a sort of Lipschitz constant for $\phi_n$. The main result is: if the approximants converge exponentially fast, in the sense that $\|f-\phi_n(f)\|\sim \sigma^n$ for some $\sigma<1$, then $\kappa_n\sim C^n$ for some $C>1$. This means that, in the worst case, exponentially fast convergence comes at the price of exponential ill-conditioning. The authors also identify connections with potential theory, matrix iterations in the form of Krylov subspace methods, and quadrature formulae.
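To make the two-player illustration concrete, here is a minimal sketch of divide-and-choose, assuming the cake is the interval [0, 1] and each player's preference is given by a density on that interval. The function names and the particular "marzipan" and "chocolate" densities are illustrative choices, not taken from the paper.

    import numpy as np

    def divide_and_choose(divider_density, chooser_density, grid_size=10_000):
        """Divider cuts [0, 1] into two pieces of equal value to themselves;
        the chooser takes the piece worth more under their own density."""
        x = np.linspace(0.0, 1.0, grid_size)
        dx = x[1] - x[0]

        # Cumulative value of [0, t] under the divider's density, normalized to 1.
        divider_cdf = np.cumsum(divider_density(x)) * dx
        divider_cdf /= divider_cdf[-1]

        # The divider cuts where their cumulative value reaches 1/2,
        # so each piece is worth exactly half to them.
        cut = x[np.searchsorted(divider_cdf, 0.5)]

        # The chooser evaluates both pieces under their own normalized density
        # and takes the one worth at least half to them.
        chooser_vals = chooser_density(x) * dx
        chooser_vals /= chooser_vals.sum()
        left_value = chooser_vals[x <= cut].sum()
        return cut, ("left" if left_value >= 0.5 else "right")

    # Hypothetical densities: marzipan concentrated near the left end (divider),
    # chocolate concentrated near the right end (chooser).
    marzipan = lambda x: np.exp(-10.0 * x)
    chocolate = lambda x: np.exp(10.0 * (x - 1.0))
    cut, choice = divide_and_choose(marzipan, chocolate)
    print(f"Cut at x = {cut:.3f}; the chooser takes the {choice} piece.")

With these densities the divider cuts near the left end (equalizing the marzipan) and the chooser takes the right piece, so each player receives at least half of the cake as measured by their own density, and neither envies the other.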
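For the second paper, the following sketch illustrates Runge's observation numerically, using polynomial interpolation at equispaced points as one concrete choice of $\phi_n$. For this particular choice the Lebesgue constant of the node set is used as a stand-in for the condition number $\kappa_n$; this is an illustrative assumption, since the paper's theorem covers far more general approximation maps.

    import numpy as np
    from scipy.interpolate import BarycentricInterpolator

    def lebesgue_constant(nodes, n_eval=2000):
        """Max over [-1, 1] of the sum of |Lagrange basis polynomials|."""
        t = np.linspace(-1.0, 1.0, n_eval)
        total = np.zeros_like(t)
        for j, xj in enumerate(nodes):
            others = np.delete(nodes, j)
            # j-th Lagrange basis polynomial evaluated at the points t
            ell_j = np.prod((t[:, None] - others) / (xj - others), axis=1)
            total += np.abs(ell_j)
        return total.max()

    runge = lambda x: 1.0 / (1.0 + 25.0 * x**2)   # Runge's classic example
    t = np.linspace(-1.0, 1.0, 2000)

    for n in (5, 10, 20, 40):
        nodes = np.linspace(-1.0, 1.0, n)          # n equispaced samples
        p = BarycentricInterpolator(nodes, runge(nodes))
        err = np.max(np.abs(runge(t) - p(t)))
        print(f"n = {n:3d}: max error = {err:8.2e}, "
              f"Lebesgue constant = {lebesgue_constant(nodes):8.2e}")

As n grows, the maximum interpolation error near the endpoints blows up rather than converging, and the Lebesgue constant grows exponentially, a small-scale echo of the general tradeoff between fast convergence and conditioning established in the paper.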
