We apply ADOL-C in reverse mode to the C++ finite element code DROPS. The derivative is needed for an optimization in the context of parameter estimation. We report on results such as the computational efficiency and the convergence behaviour of the objective function.
After discussing the recent features integrated in ADOL-C version 2.0, the talk presents the handling of OpenMP-parallelized function evaluations by ADOL-C, so that the derivative calculation is also performed in parallel. As a numerical example, we employ the LIBOR code of Mike Giles, extended to a parallel evaluation of the path sensitivities that are required in a real-world application. Preliminary timing results are given.
The Automatic Differentiation tool Tapenade shares a common kernel for Fortran and C source programs. This kernel must therefore handle the Pass-by-Value behavior in addition to the Pass-by-Value-Return and Pass-by-Reference behaviors it handled before. This has interesting consequences in the context of the reverse mode of AD. In the general case, we show that Pass-by-Value arguments require the creation of two differentiated variables. In some cases, however, only one differentiated variable is enough. We are able to specify and detect these cases in terms of simple Data-Flow properties that the AD tool analyzes.
We report on the results of the recently completed CompAD-II project, which aimed to incorporate basic source-transformation capabilities for adjoint code generation into an industrial-strength Fortran 95 compiler. We will show a number of successful applications of the AD-enabled compiler, and discuss the further development towards efficient adjoints in the upcoming CompAD-III project.
1530–1600 Coffee
1600–1800 Special Session: Applications and Challenges in Economics and Finance
A key application of automatic differentiation (AD) is to facilitate the solution of numerical optimization problems. Such problems are at the core of many estimation techniques, including maximum likelihood. In one of the first applications of AD in the field of economics, we used Tapenade to construct derivatives for the likelihood function of any linear or linearized general equilibrium model solved under the assumption of rational expectations.
We view our main contribution as providing an important check on finite-difference numerical derivatives. We also construct Monte Carlo experiments to compare maximum-likelihood estimates obtained with and without the aid of automatic derivatives. We find that the convergence rate of our optimization algorithm can increase substantially when we use AD derivatives.
Luca Capriotti (Credit Suisse)
No more bumping: the promise and challenges of Adjoint Algorithmic Differentiation
Hedging complex derivative securities typically requires estimating the sensitivity of their price with respect to a large number of risk factors. In the common case in which a portfolio of derivatives is priced by means of numerical techniques, such as partial differential equation solvers or Monte Carlo simulations, such sensitivities are often obtained by perturbing (bumping) each risk factor in turn
and repeating the price calculation in order to form standard finite
difference estimators. As a result, estimating the risk of such securities is a computationally demanding task that often requires the use of large parallel computers. Adjoint Algorithmic Differentiation (AAD), recently introduced to the computational finance community, represents a much more efficient approach often resulting in computational savings of orders of magnitude. In this talk, I present an overview of some of the applications of AAD to risk management and I discuss some of the challenges of its practical implementation in a Financial Industry setting.
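To make the contrast concrete, here is a minimal sketch (a toy scalar "pricer", not any production model) comparing bumping, which costs two price evaluations per risk factor, with a hand-written adjoint whose cost is independent of the number of factors:

```python
import math

def price(theta):
    # toy "pricer": a smooth scalar function of n risk factors
    return math.exp(-0.5 * sum(t * t for t in theta)) * sum(theta)

def greeks_bumping(theta, h=1e-6):
    # central finite differences: 2n price evaluations for n risk factors
    out = []
    for i in range(len(theta)):
        up = list(theta); up[i] += h
        dn = list(theta); dn[i] -= h
        out.append((price(up) - price(dn)) / (2 * h))
    return out

def greeks_adjoint(theta):
    # hand-written adjoint of price(): one forward plus one reverse sweep,
    # with cost independent of the number of risk factors
    s = sum(theta)
    q = math.exp(-0.5 * sum(t * t for t in theta))
    # price = q * s, so d price / d theta_i = q * (1 - s * theta_i)
    return [q * (1.0 - s * t) for t in theta]

theta = [0.1, 0.2, 0.3]
fd = greeks_bumping(theta)
ad = greeks_adjoint(theta)
assert max(abs(a - b) for a, b in zip(fd, ad)) < 1e-4
```

The finite-difference Greeks agree with the adjoint ones to truncation error, but the adjoint sweep scales to thousands of risk factors at roughly constant cost relative to a single pricing.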
Many economic models reduce to the solution of a
finite-dimensional system of nonlinear but smooth equations.
Furthermore, there are often special cases where one can compute the
solution. The implicit function theorem can be used to compute Taylor
series expansions for the solutions in terms of a parameter
representing the distance from the solvable cases, and these Taylor
series have nontrivial radii of convergence when the nonlinear
equations are analytic functions. We illustrate the general idea with
some simple examples coded in Mathematica, and argue that
user-friendly tools using automatic differentiation would have many
users in economics.
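The idea can be sketched without a full computer algebra system; as an illustrative stand-in for the Mathematica examples (the equation below is hypothetical, chosen for simplicity), the following snippet expands the solution of x = 1 + t·x² around the solvable case t = 0, where x = 1, by fixed-point iteration on truncated power series:

```python
def poly_mul(a, b, n):
    # multiply two power series (coefficient lists), truncating at degree < n
    out = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                out[i + j] += ai * bj
    return out

def solve_series(n=6):
    # fixed-point iteration for x(t) = 1 + t * x(t)^2, truncated to O(t^n);
    # each pass fixes at least one more Taylor coefficient
    x = [1] + [0] * (n - 1)
    for _ in range(n):
        sq = poly_mul(x, x, n)
        x = [1] + sq[:n - 1]   # coefficients of 1 + t * x^2
    return x

print(solve_series())  # [1, 1, 2, 5, 14, 42]
```

The resulting coefficients 1, 1, 2, 5, 14, 42 are the Catalan numbers, matching the closed-form solution x(t) = (1 − √(1 − 4t))/(2t), whose radius of convergence 1/4 illustrates the nontrivial convergence radii mentioned above.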
1800 Dinner (on your own)
Tuesday, November 25, 2008
900–1100 Special Session: Applications and Challenges in Economics and Finance
We use the Laplace approximation to fit stochastic volatility (SV) models. The models are implemented in the software package AD Model Builder, whose computational engine involves the use of third-order automatic differentiation (AD). The algorithms will be described, and comparisons to other estimation methods will be made on both real and simulated data. It is found that the approach substantially reduces computation time relative to Markov chain Monte Carlo (MCMC) algorithms.
Stochastic volatility (SV) models are used to model the time-varying volatility in financial price-return data. They typically consist of an observed price process driven by a latent volatility process, where the two processes have a joint Markov structure. Likelihood calculations are made hard by the fact that the latent volatility process needs to be integrated out of the joint likelihood, resulting in integrals over high-dimensional spaces.
Laplace importance samplers have been applied successfully for likelihood calculations in the log-normal SV model. In the work presented here, we provide a framework that extends the class of SV models that can be fitted efficiently, using a change of integration variable and Laplace importance samplers.
Calculations in this framework involve the maximization and Hessian evaluation of a highly non-trivial function. To implement this, we use several of the techniques in the AD catalog, including mixed forward and backward AD, local preaccumulation, and AD applied to iterative solvers.
In the talk, we give an outline of the computational algorithm with emphasis on AD issues, and briefly discuss an application to real data.
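For readers unfamiliar with the Laplace approximation itself, a one-dimensional toy version (not the authors' high-dimensional SV implementation) replaces ∫ e^{−g(u)} du by e^{−g(û)} √(2π/g″(û)), expanding g to second order at its mode û:

```python
import math

def g(u):
    # toy negative log-integrand; its mode is at u = 0
    return math.cosh(u)

def laplace_approx():
    # second-order expansion of g at the mode u_hat:
    # integral of exp(-g(u)) du ~ exp(-g(u_hat)) * sqrt(2*pi / g''(u_hat))
    u_hat = 0.0
    g2 = math.cosh(u_hat)          # g''(u) = cosh(u)
    return math.exp(-g(u_hat)) * math.sqrt(2 * math.pi / g2)

def quadrature(n=20000, lim=20.0):
    # brute-force trapezoidal reference value for the same integral
    h = 2 * lim / n
    return h * sum(math.exp(-g(-lim + i * h)) for i in range(n + 1))

# the approximation is within about 10% here; in high dimensions the
# mode u_hat and the Hessian g'' are found via AD, as in the abstract
assert abs(laplace_approx() - quadrature()) / quadrature() < 0.12
```

In the SV setting the integration variable u is the latent volatility path, so locating û is a high-dimensional optimization and g″ a large Hessian, which is where the AD machinery above becomes essential.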
I have been using my own C++ toolbox and my student's toolbox called Fad, which uses traits and expression templates. Comparisons with the backward mode and with Odyssee will be given, along with applications to Greeks and calibration.
Providing Unmanned Aerial Vehicles (UAVs) with collision avoidance capabilities that ensure at least the same level of safety currently provided by human pilots is one of the main requirements of national and international aviation regulatory bodies for allowing routine UAV operation in non-segregated airspace.
This talk discusses the use of AD tools within a novel technique, Gradient-aided Swarm Optimization (GRASP), employed to adjust ideal collision avoidance trajectory solutions, provided by the KB3D geometric approach, to a realistic UAV kinematic model and a realistic radar sensor model.
ADiMat enables the computation of derivatives for MATLAB codes in two ways: (1) use a MATLAB class to compute a number of desired directional derivatives during one execution of the differentiated program, or (2) execute the differentiated program multiple times, seeding only one directional derivative during each execution. In my talk I will describe the two approaches and present some performance results.
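The two modes can be illustrated with a tiny forward-mode class in Python (a sketch of the seeding idea only; ADiMat itself operates on MATLAB source):

```python
class Dual:
    # forward-mode AD value carrying several directional derivatives at once,
    # analogous to a class-based vector mode (illustrative, not ADiMat itself)
    def __init__(self, val, dot):
        self.val, self.dot = val, list(dot)
    def __add__(self, o):
        return Dual(self.val + o.val, [a + b for a, b in zip(self.dot, o.dot)])
    def __mul__(self, o):
        return Dual(self.val * o.val,
                    [self.val * b + o.val * a for a, b in zip(self.dot, o.dot)])

def f(x, y):
    return x * y + x * x

# mode 1 (vector): one run carries both directional derivatives (seeds = identity)
x = Dual(3.0, [1.0, 0.0])
y = Dual(4.0, [0.0, 1.0])
grad_vector = f(x, y).dot            # [y + 2x, x] = [10.0, 3.0]

# mode 2 (strip-mining): one run of the differentiated code per seed direction
grad_scalar = []
for seed in ([1.0, 0.0], [0.0, 1.0]):
    xs = Dual(3.0, [seed[0]])
    ys = Dual(4.0, [seed[1]])
    grad_scalar.append(f(xs, ys).dot[0])

assert grad_vector == grad_scalar == [10.0, 3.0]
```

Mode 1 amortizes the function evaluation over all directions but needs more memory per value; mode 2 keeps memory flat at the price of re-running the program, which is the trade-off the talk's performance results explore.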
Computational models have long been used to predict the performance of
some baseline design given its design parameters. Given
inconsistencies in manufacturing, the manufactured product always
deviates from the baseline design. There is currently much interest in
both evaluating the effects of variability in design parameters on a
design's performance (uncertainty estimation), and robust optimization
of the baseline design such that near optimal performance is obtained
despite variability in design parameters. Traditionally, uncertainty analysis is performed by expensive Monte Carlo methods. This work presents a software package for computing statistical moments in Matlab using an automatic differentiation tool. It explains the mathematical background, introduces the interface, and demonstrates the performance on several examples.
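A common way AD enters such moment computations is the first-order (delta) method: for independent inputs, Var[f(X)] ≈ Σᵢ (∂f/∂xᵢ)² σᵢ². A minimal sketch with a hypothetical design function and an analytic gradient standing in for an AD-computed one:

```python
import random

def f(x, y):
    # toy performance model of a design (illustrative only)
    return x * y + 2.0 * x

def grad_f(x, y):
    # analytic gradient standing in for an AD-computed one
    return (y + 2.0, x)

def delta_method_moments(mu, sigma):
    # first-order (delta) method: mean ~ f(mu), and for independent inputs
    # Var[f] ~ sum_i (df/dx_i)^2 * sigma_i^2
    g = grad_f(*mu)
    mean = f(*mu)
    var = sum((gi * si) ** 2 for gi, si in zip(g, sigma))
    return mean, var

def monte_carlo_moments(mu, sigma, n=100000, seed=0):
    # the expensive reference: sample the inputs and take empirical moments
    rng = random.Random(seed)
    vals = [f(rng.gauss(mu[0], sigma[0]), rng.gauss(mu[1], sigma[1]))
            for _ in range(n)]
    m = sum(vals) / n
    v = sum((x - m) ** 2 for x in vals) / (n - 1)
    return m, v

mu, sigma = (1.0, 2.0), (0.05, 0.1)
m1, v1 = delta_method_moments(mu, sigma)
m2, v2 = monte_carlo_moments(mu, sigma)
assert abs(m1 - m2) < 0.01 and abs(v1 - v2) / v1 < 0.1
```

One gradient evaluation replaces many thousands of Monte Carlo samples; the price is a linearization error that grows with the input variances and the curvature of f.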