Abstract

Almost two decades ago, supercomputers and massively parallel computers promised to revolutionize the landscape of large-scale computing and provide breakthrough solutions in several application domains. Massively parallel processors today achieve teraFLOPS performance – a trillion floating point operations per second – and they deliver on their promise. However, the anticipated breakthroughs in application domains have been more subtle and gradual. They came about as a result of combined efforts: novel modeling techniques, algorithmic developments based on innovative mathematical theories, and the use of high-performance computers ranging from top-of-the-range workstations, to distributed networks of heterogeneous processors, to massively parallel computers. An application domain that has benefited substantially from high-performance computing is finance and financial planning. The advent of supercomputing coincided with the so-called “age of the quants” on Wall Street, i.e., the mathematization of problems in finance and the strong reliance of financial managers on quantitative analysts. These scientists, aided by mathematical models and computer simulations, aim at a better understanding of the peculiarities of financial markets and at developing models that deal proactively with the uncertainties prevalent in these markets. In this paper we give a modest synthesis of the developments of high-performance computing in finance. We focus on three major developments: (1) the use of Monte Carlo simulation methods for security pricing and Value-at-Risk (VaR) calculations; (2) the development of integrated financial product management tools and practices – also known as integrated risk management or enterprise-wide risk management; and (3) financial innovation and the computer-aided design of financial products.
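To make the first development concrete, the following is a minimal sketch of Monte Carlo security pricing and VaR estimation under a geometric Brownian motion model. It is not taken from the paper; all parameter values, function names, and the single-asset setting are illustrative assumptions.

```python
import math
import random

def mc_european_call(s0, k, r, sigma, t, n_paths, seed=42):
    """Monte Carlo price of a European call under geometric Brownian motion.

    s0: spot price, k: strike, r: risk-free rate, sigma: volatility,
    t: maturity in years, n_paths: number of simulated terminal prices.
    """
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)                 # standard normal draw
        st = s0 * math.exp(drift + vol * z)     # simulated terminal price
        payoff_sum += max(st - k, 0.0)          # call payoff
    # Discount the average payoff back to today.
    return math.exp(-r * t) * payoff_sum / n_paths

def mc_var(s0, mu, sigma, t, alpha, n_paths, seed=7):
    """Value-at-Risk at level alpha for a single asset position,
    estimated as the alpha-quantile of the simulated loss distribution."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        st = s0 * math.exp((mu - 0.5 * sigma ** 2) * t
                           + sigma * math.sqrt(t) * z)
        losses.append(s0 - st)                  # loss relative to today
    losses.sort()
    return losses[int(alpha * n_paths)]         # empirical alpha-quantile
```

With 100,000 paths the call estimate converges to within a few cents of the Black–Scholes value, which is why such embarrassingly parallel simulations were an early and natural fit for massively parallel machines: paths can be distributed across processors with essentially no communication.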
