Publication: Vector Forward Mode Automatic Differentiation on SIMD/SIMT Architectures


- Part of a collection -

Author(s)
Jan Hückelheim, Michel Schanen, Sri Hari Krishna Narayanan, Paul Hovland

Published in
49th International Conference on Parallel Processing -- ICPP

Year
2020

Publisher
Association for Computing Machinery

Abstract
Automatic differentiation, back-propagation, differentiable programming and related methods have received widespread attention, due to their ability to compute accurate gradients of numerical programs for optimization, uncertainty quantification, and machine learning. Two strategies are commonly used: the forward mode, which is easy to implement but has an overhead compared to the original program that grows linearly with the number of inputs, and the reverse mode, which can compute gradients for an arbitrary number of program inputs with a constant factor overhead, although the constant can be large, more memory is required, and the implementation is often challenging. Previous literature has shown that the forward mode can be more easily parallelized and vectorized than the reverse mode, but case studies investigating when either mode is the best choice are lacking, especially for modern CPUs and GPUs. In this paper, we demonstrate that the forward mode can outperform the reverse mode for programs with tens or hundreds of directional derivatives, a number that may yet increase if current hardware trends continue.
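The vector forward mode the abstract refers to can be illustrated with a small sketch (this is not the authors' Julia/GPU implementation, just a minimal Python analogue using dual numbers): each value carries a whole vector of tangents, so all n directional derivatives propagate together in a single forward sweep, a data layout that maps naturally onto SIMD lanes.

```python
import numpy as np

class VDual:
    """Dual number carrying a vector of tangents: propagating n
    directional derivatives at once is the vector forward mode."""
    def __init__(self, value, tangents):
        self.value = value                                  # primal value
        self.tangents = np.asarray(tangents, dtype=float)   # d(value)/dx_i

    def _lift(self, other):
        # Wrap plain constants with zero tangents.
        if isinstance(other, VDual):
            return other
        return VDual(other, np.zeros_like(self.tangents))

    def __add__(self, other):
        other = self._lift(other)
        return VDual(self.value + other.value, self.tangents + other.tangents)
    __radd__ = __add__

    def __mul__(self, other):
        other = self._lift(other)
        # Product rule: (uv)' = u'v + u v'
        return VDual(self.value * other.value,
                     self.tangents * other.value + self.value * other.tangents)
    __rmul__ = __mul__

def grad(f, x):
    """Full gradient of f at x in one forward sweep: seed input i
    with the i-th unit tangent vector (rows of the identity)."""
    seeds = np.eye(len(x))
    out = f(*[VDual(xi, si) for xi, si in zip(x, seeds)])
    return out.tangents

# f(x, y) = x*y + x  has gradient (y + 1, x)
g = grad(lambda x, y: x * y + x, [3.0, 4.0])
print(g)  # [5. 3.]
```

With n inputs, one sweep of this kind replaces n scalar forward-mode sweeps; the per-operation tangent arithmetic becomes elementwise vector arithmetic, which is the property the paper exploits on SIMD/SIMT hardware.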

AD Theory and Techniques
Parallelism

BibTeX
@INPROCEEDINGS{Huckelheim2020VFM,
       author = "H\"{u}ckelheim, Jan and Schanen, Michel and Narayanan, Sri Hari Krishna and
         Hovland, Paul",
       title = "Vector Forward Mode Automatic Differentiation on {SIMD/SIMT} Architectures",
       year = "2020",
       isbn = "9781450388160",
       publisher = "Association for Computing Machinery",
       address = "New York, NY, USA",
       url = "https://doi.org/10.1145/3404397.3404470",
       doi = "10.1145/3404397.3404470",
       abstract = "Automatic differentiation, back-propagation, differentiable programming and related
         methods have received widespread attention, due to their ability to compute accurate gradients of
         numerical programs for optimization, uncertainty quantification, and machine learning. Two
         strategies are commonly used: the forward mode, which is easy to implement but has an overhead
         compared to the original program that grows linearly with the number of inputs, and the reverse
         mode, which can compute gradients for an arbitrary number of program inputs with a constant factor
         overhead, although the constant can be large, more memory is required, and the implementation is
         often challenging. Previous literature has shown that the forward mode can be more easily
         parallelized and vectorized than the reverse mode, but case studies investigating when either mode
         is the best choice are lacking, especially for modern CPUs and GPUs. In this paper, we demonstrate
         that the forward mode can outperform the reverse mode for programs with tens or hundreds of
         directional derivatives, a number that may yet increase if current hardware trends continue.",
       booktitle = "49th International Conference on Parallel Processing -- ICPP",
       articleno = "39",
       numpages = "11",
       keywords = "Julia Language, GPU, SIMD, Reduced Precision, Automatic Differentiation, Vector
         Forward Mode",
       location = "Edmonton, AB, Canada",
       series = "ICPP '20",
       ad_theotech = "Parallelism"
}

