
Approximation theory and regularization for deep learning

Date: Wednesday, February 20, 2019
Time: 12:00 pm - 1:00 pm
Speaker: Haizhao Yang (National University of Singapore)
Series: Applied Math and Analysis Seminar

This talk introduces new approximation theories for deep learning in parallel computing and high-dimensional problems. We explain the power of function composition in deep neural networks and characterize the approximation capacity of shallow and deep neural networks for various functions on a high-dimensional compact domain. Combined with considerations from parallel computing, our analysis leads to a point of view, so far overlooked in the approximation theory literature, on choosing network architectures, especially for large-scale deep learning training in parallel: deep is good, but too deep may be less attractive. The analysis also inspires a new regularization method that achieves state-of-the-art performance across a wide range of network architectures.
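
To make the depth-efficiency theme concrete, here is a minimal sketch (illustrative only, not the speaker's construction): the standard "sawtooth" example, in which composing a two-piece piecewise-linear hat function h(x) = 2*min(x, 1-x) with itself k times yields 2^(k-1) local maxima (2^k linear pieces). A depth-k network with O(k) units thus represents oscillations that a shallow network would need exponentially many units to match.

    import numpy as np

    def hat(x):
        # A two-piece "hat" function; expressible exactly by a tiny ReLU network.
        return 2.0 * np.minimum(x, 1.0 - x)

    def deep_sawtooth(x, depth):
        # Compose the hat function `depth` times: one small layer per composition.
        for _ in range(depth):
            x = hat(x)
        return x

    x = np.linspace(0.0, 1.0, 100_001)
    for k in (1, 2, 3, 4, 5):
        y = deep_sawtooth(x, k)
        # Count strict interior local maxima on the sample grid.
        peaks = np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:]))
        print(f"depth {k}: {peaks} local maxima (expected {2 ** (k - 1)})")

Running this prints 1, 2, 4, 8, 16 local maxima for depths 1 through 5: the number of oscillations doubles with each added layer, which is the exponential gain from composition that shallow architectures cannot replicate cheaply.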