
Programme of the Twelfth Euro AD Workshop
Wednesday, December 7, 2011
 19^{00} Welcome Reception at the Griewanks, Rosestr. 3a, 12524 Berlin, please contact elisabeth(at)griewank.com

Thursday, December 8, 2011
 8^{00}  8^{45} Registration
 8^{45}  9^{00} Opening
 9^{00}  9^{30} Rich Neidinger
 Richard Neidinger (Davidson College, NC, USA)
Comparing Arbitrary-Order Multivariable AD Methods
Three alternative methods for high-order multivariate AD were programmed in the same language (MATLAB) on the same computer (a laptop) in order to compare accuracy and efficiency. In limited testing, the Direct Forward method was seen to be more accurate than two interpolation methods, the original from [GUW] and a method using nested directions [N]. Runtime measurements sometimes confirmed the theoretical savings of the interpolation methods, though program structure can overshadow operation counts. In addition, the Direct Forward and [GUW] methods require a one-time computation to set up the method for a specific number of variables and order of derivatives. Generating these reference arrays in reasonable time requires efficiency tricks that appear in few publications. Such a trick for the direct method will be shared.
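A minimal sketch of the direct forward idea (my illustration, not the talk's MATLAB code): each value carries all Taylor coefficients up to a chosen total degree, and multiplication becomes a Leibniz convolution over multi-indices.

```python
from itertools import product

D = 3  # truncation order (total degree); an assumption for this sketch

def indices(n, d=D):
    """All multi-indices of n variables with total degree <= d."""
    return [k for k in product(range(d + 1), repeat=n) if sum(k) <= d]

def tmul(a, b, n):
    """Leibniz convolution of two truncated coefficient dicts."""
    out = {k: 0.0 for k in indices(n)}
    for i in indices(n):
        for j in indices(n):
            k = tuple(ii + jj for ii, jj in zip(i, j))
            if sum(k) <= D:
                out[k] += a.get(i, 0.0) * b.get(j, 0.0)
    return out

# Seed x = 2 and y = 3 as independent variables (degree-1 series).
x = {(0, 0): 2.0, (1, 0): 1.0}
y = {(0, 0): 3.0, (0, 1): 1.0}
f = tmul(tmul(x, x, 2), y, 2)  # f = x*x*y
# Coefficient of (i,j) is (1/(i! j!)) * d^{i+j} f / dx^i dy^j.
print(f[(1, 0)], f[(2, 0)], f[(1, 1)])  # df/dx = 2xy = 12, f_xx/2 = y = 3, f_xy = 2x = 4
```

The nested loop over index pairs is exactly the operation count the interpolation methods try to beat.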
 9^{30}  10^{00} Trond Steihaug
 Trond Steihaug (INRIA Sophia Antipolis and University of Bergen)
Factorable programming revisited
McCormick introduced factorable programming in a technical report from 1967, which was later published in the book "Nonlinear Programming: Sequential Unconstrained Minimization Techniques" by Fiacco and McCormick (1968). This is one of the first attempts to define structure in optimization as a tool for more efficient implementations and new methods. In this talk we take a closer look at factorable programming and argue its equivalence to AD. We also look at some other classical 'structures in optimization' and the use of structure.
 10^{00}  10^{30} Andreas Griewank
 Andreas Griewank (Humboldt-Universität zu Berlin)
Piecewise Linearization via Algorithmic Differentiation
Allowing, besides smooth elementals, the functions abs, min, and max, one obtains a class of piecewise differentiable functions about which a fair amount of mathematical theory is known. They can be approximated with second-order truncation error by piecewise linearizations. We show how these may be evaluated constructively in an AD fashion and how they can be used to iteratively minimize nonsmooth functions and solve nonsmooth systems of equations.
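The elemental rules behind such a piecewise linearization can be sketched as follows (a toy illustration, not the talk's construction: smooth elementals propagate the usual tangent increment, while abs uses the secant increment |v0 + Δv| - |v0|, which keeps the kink):

```python
class PL:
    """Value at the base point plus its piecewise-linear increment."""
    def __init__(self, val, inc):
        self.val, self.inc = val, inc
    def __add__(self, o):
        return PL(self.val + o.val, self.inc + o.inc)
    def __sub__(self, o):
        return PL(self.val - o.val, self.inc - o.inc)
    def __mul__(self, o):  # smooth elemental: tangent rule on increments
        return PL(self.val * o.val, self.val * o.inc + o.val * self.inc)

def pabs(u):               # nonsmooth elemental: exact secant increment
    return PL(abs(u.val), abs(u.val + u.inc) - abs(u.val))

def f(x):                  # f(x) = |x*x - 1|, made up for illustration
    return pabs(x * x - PL(1.0, 0.0))

x0, dx = 0.9, 0.3          # the step crosses the kink at x = 1
lin = f(PL(x0, dx))        # f(x0) + piecewise-linear increment
print(lin.val + lin.inc, abs((x0 + dx)**2 - 1))
# The gap between the two values is exactly the dropped dx^2 term, i.e.
# the linearization error is second order in the step.
```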
 10^{30}  11^{00} Coffee Break
 11^{00}  12^{00} 20 min. session
 Johannes Lotz (LuFG Inf. 12: STCE)
Adjoining a matrix-free iterative nonlinear solver
For computing adjoints of codes that include an iterative matrix-free algorithm for solving a nonlinear system (e.g. Newton/GMRES), we present three different approaches for discussion. It is important to keep the algorithm matrix-free as well as to tackle the memory consumption that arises when adjoining the algorithm as a black box. The alternatives use semi-direct (semi-analytic) approaches.
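One semi-analytic alternative can be illustrated on a scalar toy problem (my sketch, not the talk's actual schemes): rather than taping the Newton iteration, adjoin the solved system F(x, p) = 0 via the implicit function theorem, so the reverse sweep only needs one linear solve with the Jacobian transpose at the solution.

```python
def newton_solve(p, x=1.0, tol=1e-12):
    """Solve F(x, p) = x^3 + x - p = 0 (toy system, made up here)."""
    for _ in range(50):
        f = x**3 + x - p
        if abs(f) < tol:
            break
        x -= f / (3 * x**2 + 1)   # "matrix-free" in 1D: just a division
    return x

def adjoint_of_solve(x, x_bar):
    """Adjoint of p -> x(p) without differentiating the iteration:
    solve F_x^T lam = x_bar, then p_bar = -F_p^T lam."""
    Fx = 3 * x**2 + 1             # dF/dx at the solution
    Fp = -1.0                     # dF/dp
    lam = x_bar / Fx
    return -Fp * lam

p = 10.0
x = newton_solve(p)               # x = 2 solves x^3 + x = 10
p_bar = adjoint_of_solve(x, 1.0)
# Check against a finite difference of the full solver.
fd = (newton_solve(p + 1e-6) - newton_solve(p - 1e-6)) / 2e-6
print(p_bar, fd)
```

Nothing from the iteration history is stored, which is the memory advantage over black-box adjoining.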
 Kshitij Kulshreshtha (Universität Paderborn)
Implementing Permutations and Selections in ADOL-C
Many applications require permuting an input vector or intermediate results in a given way in order to evaluate a function, while derivatives of the result are required w.r.t. the original independents. Sometimes the permutation itself is computed within the function evaluation and has no relevance outside of this computation. Similarly, selection of certain independent or dependent variables based on some conditions may be required. Such operations may be categorised as branching in program code, but are in fact simpler to handle in an AD tool like ADOL-C than general if-then-else branches that require retracing the function. In this talk I shall show how ADOL-C can handle such situations without the need for retracing the function.
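Why a run-time permutation is benign for derivatives can be shown with plain forward-mode dual numbers (a toy sketch, not ADOL-C's tape machinery): the permutation only routes value/tangent pairs, so derivatives still come out w.r.t. the original independents.

```python
class Dual:
    """Forward-mode dual number: value and one tangent component."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __mul__(self, o):
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    def __lt__(self, o):
        return self.a < o.a   # the permutation is decided by values only

def f(xs):
    s = sorted(xs)            # data-dependent permutation inside the function
    return s[0] * s[1]        # product of the two smallest entries

x = [3.0, 1.0, 2.0]           # example inputs, made up for illustration
grad = []
for j in range(len(x)):       # seed one independent at a time
    xd = [Dual(v, 1.0 if i == j else 0.0) for i, v in enumerate(x)]
    grad.append(f(xd).b)
print(grad)  # gradient w.r.t. the ORIGINAL ordering: [0.0, 2.0, 1.0]
```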
 Hernan Eugenio Leovey (Humboldt-Universität zu Berlin)
Calculation of Optimal Weights for Lattice Rules via Fast Evaluation of Mixed Derivatives or "Cross-Differentiation"
For a fixed given integrand function f, good weights are required for the construction of lattice rules for integration in high dimensions. The weights define the embedding of f in a particular weighted unanchored Sobolev space. 'Optimal weights' are chosen to minimize certain upper bounds on the worst-case integration error over these spaces. We use the new algorithmic differentiation technique ("Cross-Differentiation") for a low-cost computation of 'optimal weights', based on the low effective dimension of the integrands in some examples related to finance.
 12^{00}  12^{30} Karsten Ahnert
 Karsten Ahnert (Ambrosys GmbH)
Taylor series method for ordinary differential equations
In this presentation I will introduce the Taylor series method for ordinary differential equations. It works by calculating the Taylor coefficients of the solution up to arbitrary order and using these coefficients to compute the next step of the solution. Due to its high order, the method is especially suited for problems where very high precision is needed, as in astrophysical applications or simulations of chaotic dynamical systems. Along with the method, a very efficient computation scheme is introduced which is based on automatic differentiation. This scheme exploits the power of the C++ template system and uses C++ metaprogramming and expression templates. In contrast to existing implementations of the Taylor series method, an additional preprocessing step is not necessary because the ODE can be completely expressed in terms of C++ templates. Furthermore, the performance of this method is as good as specialized code for an individual ODE.
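The coefficient recurrence at the heart of the method can be sketched for the scalar ODE y' = y^2 (my illustration; the talk's implementation does this with C++ expression templates rather than Python):

```python
def taylor_step(y0, h, order=15):
    """One Taylor step for y' = y^2: the recurrence
    (n+1) * y[n+1] = (y*y)[n], where (y*y)[n] is a Cauchy product."""
    y = [y0]
    for n in range(order):
        conv = sum(y[i] * y[n - i] for i in range(n + 1))  # coeff of t^n in y^2
        y.append(conv / (n + 1))
    return sum(c * h**k for k, c in enumerate(y))  # evaluate the series at h

# Exact solution is y(t) = y0 / (1 - y0*t); check one step of size 0.1.
approx = taylor_step(1.0, 0.1)
print(approx, 1.0 / (1.0 - 0.1))
```

With order 15 and step 0.1 the remainder is already below double-precision roundoff, which is why the method suits very-high-precision work.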
 12^{30}  14^{00} Lunch break
 14^{00}  14^{30} Klaus Roebenack
 Klaus Roebenack (TU Dresden)
Toward a native implementation of Lie derivatives with ADOL-C
Lie derivatives are derivatives of tensor fields along the solution of a given dynamical system. They occur in many fields such as general relativity, non-relativistic mechanics and control theory. We are working toward an efficient implementation of the calculation of Lie derivatives of scalar, vector and covector fields based on the low-level C drivers of ADOL-C.
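For the scalar-field case, iterated Lie derivatives L_f h = grad(h) . f can be sketched with nested forward-mode dual numbers (a toy stand-in for the ADOL-C drivers; the fields h, f and the evaluation point below are made up for illustration):

```python
class Dual:
    """First-order dual number; nesting Duals yields higher derivatives."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def _c(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._c(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._c(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__
    def __neg__(self):
        return Dual(-self.a, -self.b)

def grad(h, x):
    """Gradient of a scalar field h at x by forward-mode seeding."""
    return [h([Dual(xj, 1.0 if j == i else 0.0)
               for j, xj in enumerate(x)]).b for i in range(len(x))]

def lie(h, f):
    """L_f h = grad(h) . f, returned as a new scalar field."""
    return lambda x: sum(g * fi for g, fi in zip(grad(h, x), f(x)))

h = lambda x: x[0] * x[0] + x[1]       # scalar field
f = lambda x: [x[1], -x[0]]            # vector field
L1 = lie(h, f)                         # L_f h      = 2*x0*x1 - x0
L2 = lie(L1, f)                        # L_f^2 h    = 2*x1^2 - x1 - 2*x0^2
print(L1([1.0, 1.0]), L2([1.0, 1.0]))  # values at (1, 1)
```

Because `lie` returns an ordinary scalar field, it can be applied repeatedly; the nesting of duals happens automatically.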
 14^{30}  15^{00} Andrea Walther
 Andrea Walther (Universität Paderborn)
On the efficient computation of sparsity patterns for Hessians
TBA
 15^{00}  15^{30} Asgeir Birkisson
 Asgeir Birkisson (University of Oxford)
Automated linearity detection via automatic differentiation
Given a function, or more generally an operator $F: u \mapsto F(u)$, the question "Is $F$ linear?" seems simple. In many applications of scientific computing it may be worth answering that question in an automated way: some functionality is only defined for linear operators, and in other problems time can be saved if it is known that the problem being solved is linear. However, implementing such automated detection is not as straightforward as one might expect. This talk describes how linearity detection is implemented within the Chebfun system. The key ingredient is that information obtained via automatic differentiation is propagated as derivatives are evaluated.
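For contrast, the brute-force alternative (not Chebfun's mechanism, which propagates linearity flags alongside the AD information) is to probe superposition on random inputs; this is both more expensive and only probabilistic:

```python
import random

random.seed(0)  # make the probe reproducible

def looks_linear(F, n, trials=20, tol=1e-9):
    """Heuristically test F(a*u + v) == a*F(u) + F(v) on random vectors.
    This also rejects affine-but-not-linear maps, since the offset
    fails to cancel for random scalars a."""
    for _ in range(trials):
        u = [random.gauss(0, 1) for _ in range(n)]
        v = [random.gauss(0, 1) for _ in range(n)]
        a = random.gauss(0, 1)
        lhs = F([a * ui + vi for ui, vi in zip(u, v)])
        rhs = [a * s + t for s, t in zip(F(u), F(v))]
        if max(abs(s - t) for s, t in zip(lhs, rhs)) > tol:
            return False
    return True

diff = lambda u: [u[i + 1] - u[i] for i in range(len(u) - 1)]  # linear
square = lambda u: [x * x for x in u]                          # nonlinear
print(looks_linear(diff, 5), looks_linear(square, 5))
```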
 15^{30}  16^{00} Coffee Break
 16^{00}  17^{00} 20 min. session
 Klaus Leppkes (RWTH Aachen)
On Call Tree Reversal in adjoint mode AD by overloading
In adjoint-mode AD, the Call Tree Reversal problem is known to be NP-complete (U. Naumann. Call Tree Reversal is NP-complete. In C. Bischof et al., Advances in Automatic Differentiation, pages 13–22. Springer, 2008). Based on our overloading AD tool dco/c++ (derivative code by overloading for C++), we discuss a method for implementing arbitrary (optimized) Call Tree Reversal schemes in a semi-automatic fashion.
 Benjamin Letschert (Universität Paderborn)
Extensions to ADOL-C for differentiating parallel evaluations using MPI
This project extends the ADOL-C package such that the differentiation of MPI-parallel simulation codes becomes possible. Using MPI, the user explicitly decides which calculation is done by which process. Currently the simulation to be differentiated with ADOL-C can include the MPI routines Send, Receive, Barrier, Broadcast, Scatter, Gather and Reduce.
For the implementation of these routines, each MPI function has its own wrapper containing appropriate algorithms for sending/receiving data using the active variables.
For the subsequent derivative calculation, additional send- and receive-type operations were added to take different modes like first-order-scalar, higher-order-vector, etc. into account. Furthermore, to perform data transactions in forward or reverse mode, strategies were included to take the corresponding order into account.
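The reversal rule such wrappers implement can be shown on a toy example without real MPI (queues stand in for the message channels; ranks, values and the function are made up for illustration): a Send in the forward sweep becomes a Receive of adjoints in the reverse sweep, and vice versa.

```python
from collections import deque

chan_fwd, chan_rev = deque(), deque()   # value channel and adjoint channel

# Forward sweep: "rank 0" sends x to "rank 1", which computes y = 3*x.
x = 2.0
chan_fwd.append(x)            # rank 0: Send(x)
x_recv = chan_fwd.popleft()   # rank 1: Recv(x)
y = 3.0 * x_recv

# Reverse sweep: seed y_bar = 1; adjoints flow in the opposite direction.
y_bar = 1.0
x_recv_bar = 3.0 * y_bar      # rank 1: local adjoint of y = 3*x_recv
chan_rev.append(x_recv_bar)   # rank 1: adjoint of Recv is a Send
x_bar = chan_rev.popleft()    # rank 0: adjoint of Send is a Recv
print(x_bar)                  # dy/dx = 3
```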
 Max Sagebaum (Humboldt-Universität zu Berlin and RWTH Aachen)
Challenges in automatic differentiation of an industrial code
The automatic differentiation of a huge C++ package with many dependencies and underlying libraries, using the AD tool DCO, has its own challenges. We will report on the problems encountered during the automatic differentiation and how we solved them. Finally, we present the verification of the calculated gradients and give an overview of the amount of work it took to differentiate the code.
 19^{00} Dinner at Ratskeller Köpenick

Friday, December 9, 2011
 9^{00}  9^{30} Nicolas Gauger
 Nicolas Gauger (RWTH Aachen)
Optimal Unsteady Flow Control and One-Shot Aerodynamic Shape Optimization using AD
For efficient detailed aerodynamic design as well as optimal unsteady flow control, the use of adjoint approaches is a first essential ingredient. We compare continuous and discrete adjoint approaches. For the generation of discrete adjoint solvers, we discuss the use of Automatic Differentiation (AD) and, for unsteady optimal flow control, its combination with checkpointing techniques. In the case of detailed aerodynamic shape optimization, we discuss so-called one-shot methods. Here, one achieves simultaneous convergence of the primal state equation, the adjoint state equation and the design equation. The direction and size of the one-shot optimization steps are determined by a carefully selected design space preconditioner. It turns out that one-shot methods enable aerodynamic shape design at the computational effort of a small, constant multiple of the effort of an aerodynamic simulation. Integral parts of these approaches are, besides suitable preconditioners for the coupled one-shot loop, gradient smoothing and shape derivatives.
 9^{30}  10^{00} John Pryce
 John Pryce (Cardiff School of Mathematics)
Possibilities for Quasilinearity Analysis in Structural Analysis of DAEs
Ned Nedialkov and I are the authors of the C++ code DAETS for solving DAE initial value problems, using Pryce's structural analysis (SA) approach, and a Taylor series expansion of the solution at each timestep.
Ned is talking about DAESA, which is essentially the SA frontend of DAETS, rewritten as a standalone Matlab tool with extra features to help exploratory analysis.
My talk is about the synergy, during the SA, between block triangularization and the detection of quasilinearity. Our algorithms determine a block-lower-triangular (BLT) sequence of solving for variables and their derivatives in the correct order. This is done as part of consistent initialization, and also at each integration step to project the solution onto the consistent manifold.
The equations concerned are nonlinear in general, and a mix of square (as many equations as unknowns) and underconstrained (fewer equations than unknowns). Our semi-symbolic processing lets us determine whether the unknowns to solve for, in some block subsystem, actually occur linearly: we call this quasilinearity.
When a block is both linear and square, it needs no initial values or trial guesses for solution. In many of the test examples we have tried, this reduces the amount of initial data the user must submit, and has the potential to seriously speed up solution. For the Chemical Akzo problem for instance, an apparently nonlinear 8x8 system reduces to eight linear 1x1 blocks that are trivial to solve.
The quasilinearity analysis currently in DAETS is fairly primitive. We are putting an improved version into DAESA before including it in DAETS.
I show what improvements to quasilinearity analysis seem desirable and practical, with a number of examples.
 10^{00}  10^{30} René Lamour
 René Lamour (Humboldt-Universität zu Berlin)
Computational aspects of detecting DAE structures
A regularity region describes the local characteristics of a DifferentialAlgebraic Equation (DAE).
We determine regularity regions of DAEs by means of sequences of continuous matrix functions.
The matrix sequence is built step by step by certain admissible projector functions, starting with the Jacobian matrices of the DAE data. For time-dependent and nonlinear DAEs the sequence contains a differentiation of a projector function.
Besides common linear algebra tools such as matrix factorizations and generalized inverses, widely orthogonal projector functions are applied, and algorithmic differentiation techniques are used to compute the needed Jacobian matrices and to realize the differentiation.
A comparison with other approaches, a discussion of numerical experiments, and open problems complete the paper.
 10^{30}  11^{00} Coffee Break / Discussion
 11^{00}  11^{30} Volker Mehrmann
 Volker Mehrmann (Technische Universität Berlin)
Reformulation of DAEs using derivative arrays
Differential-algebraic equations (DAEs) today represent the state of the art in dynamical systems arising from automated modularized modeling in almost all areas of science and engineering. While the modeling becomes more and more convenient, the resulting models are typically not easy to treat with current numerical simulation, control and optimization methods. In many cases a reformulation of the models or even a regularization is necessary to avoid failure of the computational methods. In this contribution we will discuss general DAE control problems and how (based on derivative arrays and smooth factorizations) they can be systematically reformulated and regularized so that the resulting system can be used in simulation, control and optimization procedures without much further difficulty.
 11^{30}  12^{00} Ned Nedialkov
 Ned Nedialkov (McMaster University, Canada)
DAESA: A Matlab Tool for Structural Analysis of DAEs
Ned Nedialkov and John Pryce are the authors of DAETS, a C++ code for
solving high-index, any-order differential-algebraic equations (DAEs).
It is based on Pryce’s structural analysis (SA) theory and a Taylor
series expansion of the solution. Although DAETS can be used for SA,
our goal is to provide a "lighter", standalone tool for SA of
DAEs. In the last few years, we have been using Matlab to investigate
the structure of numerous DAE problems, which resulted in the DAESA
package.
Using DAESA, a user can specify a DAE in a general form: it can be of
high index, fully implicit, and contain derivatives of order higher
than one. DAESA preprocesses the DAE through operator overloading to
determine what variables and derivatives of them occur in each
equation, and to determine if the problem is linear in highest order
derivatives. Then DAESA finds the structural index of a DAE, its
degrees of freedom, constraints, and what variables and derivatives
need to be initialized. It also constructs a block-triangular
structure of the problem (where such exists), and prescribes a
solution scheme exploiting this structure.
Joint work with John Pryce and Guangning Tan.
 12^{00}  12^{30} Matthias Gerdts
 Matthias Gerdts (Universität der Bundeswehr München)
AD in DAE optimal control methods
The calculation of derivatives appears at different levels in numerical solution methods for DAE optimal control problems, ranging from the computation of iteration matrices in BDF methods for coupled DAE-PDE systems to checkpointing strategies in semismooth Newton methods.
Some applications of AD are discussed.
 12^{30}  14^{00} Lunch break
 14^{00}  14^{30} Caren Tischendorf
 Caren Tischendorf (Universität zu Köln)
Numerical Stability Problems for DAEs Caused by Differentiation
The rising demand for the simulation and optimization of complex processes in different fields (e.g. the automotive industry, energy supply systems, the electronics industry, medicine) increasingly often yields the task of solving differential-algebraic equations (DAEs) in an efficient and robust way.
Solving DAEs numerically, one is confronted not only with integration tasks but also with differentiation tasks. These differentiations may cause substantial stability problems for numerical methods applied to DAEs. We will explain these stability problems and discuss some strategies to avoid them. We will see that the model formulation may play a significant role. Furthermore, we will address the problem that the inherent differentiation tasks are usually not given explicitly.
 14^{30}  15^{30} 20 min. session
 Johannes Willkomm (TU Darmstadt)
The new user interface of ADiMat, and how to use it with DAE solvers in Matlab and Octave
Our AD tool for Matlab and Octave, ADiMat, has been provided with a new high-level user interface. Derivatives can now be computed in forward and reverse mode by a single command, specifying a function handle, its arguments, and the seed matrix. Matlab and Octave can both solve index-1 DAEs using so-called direct methods. Matlab provides the integrator ode15s to that end, and Octave has interfaces to the integrators DASPK and DASSL. We present the new ADiMat user interface and give examples of how ADiMat-supplied derivatives can be used with these integrators.
 Andreas Steinbrecher (Technische Universität Berlin)
Constraint-Preserving Regularization of Quasi-Linear Differential-Algebraic Equations
Differential-algebraic equations (DAEs) are mainly used for the modeling of dynamical processes. For instance, the dynamical behavior of mechanical systems, electrical circuits, chemical reactions and many others is often described by DAEs, in particular of quasi-linear structure.
It is well known that the numerical treatment of DAEs is nontrivial in general and more complicated than that of ordinary differential equations. Effects arising in the numerical treatment of DAEs include, for example, drift, instabilities, convergence problems, and inconsistencies.
In this talk we will discuss quasi-linear DAEs of higher index with respect to their regularization, in view of an efficient and robust numerical simulation. We will present two iterative procedures which provide a general tool for the regularization of quasi-linear DAEs of arbitrary index. One of these procedures is of rank-inflation type while the other is of rank-deflation type. Both procedures regularize a quasi-linear DAE by lowering the index while all constraints are maintained, in particular the hidden constraints. The procedures end with the projected strangeness-free form of the quasi-linear DAE, which can be used as a basis for numerical simulations or further numerical investigations of the dynamical system.
 Lutz Lehmann (Humboldt-Universität zu Berlin)
Voice of the DAlEks
Of all the DAE examples in the CWI test suite of stiff ODEs and DAEs, only the transistor amplifier and the ring modulator prove to be nontrivial under the structural analysis of Pryce/Nedialkov. The talk will discuss methods to minimally modify those DAE systems so that structural analysis leads to a valid result. The transistor amplifier is transformed to a regular form by identification of common subexpressions. For the ring modulator, further study of the computational graph is necessary.
 15^{30} Discussion (Unveiling the ultimate index concept based on AD), moderated by A. Griewank

