Abstract

The optical properties of nanoparticles depend largely on their shape and material distribution. Given a target for such a property, e.g., a desired color, the aim is to design an optimal nanoparticle whose properties best match this target. The corresponding nonlinear optimization problem is challenging for several reasons: the objective function is given by a multi-dimensional integral over wavelengths, directions of incoming light rays, and nanoparticle design distributions, where the integrand depends on the extinction cross section of the nanoparticle design. Thus, classical full gradient schemes based on numerical integration prove ineffective due to their tremendous computational cost. On the other hand, known stochastic gradient descent methods cannot handle the nonlinear fashion in which the design variables enter the objective function. Therefore, we introduce the continuous stochastic gradient (CSG) method, which does not compute a full gradient but instead uses information from previous iterations in an optimal way to learn the exact gradient during the optimization process. CSG thus lies between a stochastic optimization scheme and a full gradient method, combining the low computational cost of stochastic gradient (SG) methods, owing to very few partial gradient evaluations per iteration (small “batch size”), with step size rules and line search techniques known from full gradient descent.
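To illustrate the core mechanism, the following is a minimal sketch of a CSG-style iteration on a one-dimensional toy objective J(u) = ∫₀¹ (u − x)² dx. The toy integrand, the nearest-neighbor weighting of past samples, and all parameter values are illustrative assumptions rather than the weight computation of the actual CSG method; the sketch only shows the idea of drawing a single integration point per iteration, reusing all previously evaluated partial gradients with domain-based weights to approximate the full gradient, and taking descent steps with a fixed step size.

```python
import numpy as np

# Toy problem standing in for the nanoparticle objective: minimize
# J(u) = \int_0^1 (u - x)^2 dx over a scalar design u, whose exact
# minimizer is u* = 0.5.  In the actual application, x would collect
# wavelength, incidence direction and design-distribution variables,
# and j(u, x) would involve the extinction cross section.

def partial_grad(u, x):
    """Gradient of the integrand j(u, x) = (u - x)^2 with respect to u."""
    return 2.0 * (u - x)

rng = np.random.default_rng(seed=0)
u = 0.0          # initial design
step = 0.1       # fixed step size, as in full gradient descent
samples, grads = [], []

for n in range(500):
    x = rng.uniform()                 # draw one integration point ("batch size" 1)
    samples.append(x)
    grads.append(partial_grad(u, x))  # one new partial gradient evaluation per iteration

    # Reuse all previously computed partial gradients: weight each stored
    # sample by the fraction of the integration domain lying closest to it
    # (a crude nearest-neighbor quadrature; the CSG paper's weights differ).
    pts = np.asarray(samples)
    grid = np.linspace(0.0, 1.0, 1001)
    nearest = np.abs(grid[:, None] - pts[None, :]).argmin(axis=1)
    weights = np.bincount(nearest, minlength=pts.size) / grid.size

    g_hat = float(weights @ np.asarray(grads))  # approximate full gradient
    u -= step * g_hat                           # deterministic-style descent step

print(f"final design u = {u:.3f}  (exact minimizer: 0.5)")
```

Note that the stored gradients were evaluated at earlier designs, so early iterates are biased; as in CSG, the approximation improves as samples accumulate, which is what permits full-gradient-style step size rules despite evaluating only one partial gradient per iteration.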
