Talk Title: The Mathematical Theory of Machine Learning
Speaker: Weinan E, Member of the Chinese Academy of Sciences, Princeton University
Time: June 19, 2020, 09:00–10:00 AM
Venue: Tencent Meeting, ID: 212 239 221
Or join the meeting directly via this link:
https://meeting.tencent.com/s/jPIMgNcWrprA
Campus contact: Zhang Ran, zhangran@jlu.edu.cn
Abstract:
At the heart of modern machine learning is the approximation of high-dimensional functions. Traditional approaches, such as approximation by piecewise polynomials, wavelets, or other linear combinations of fixed basis functions, suffer from the curse of dimensionality. We will discuss representations and approximations that overcome this difficulty, as well as gradient flows that can be used to find the optimal approximation. We will see that at the continuous level, machine learning can be formulated as a series of reasonably nice variational and PDE-like problems. Modern machine learning models and algorithms, such as random feature models and shallow/deep neural networks, can be viewed as particular discretizations of such continuous problems. At the theoretical level, we will present a framework suited for analyzing machine learning models and algorithms in high dimensions, and present results that are free of the curse of dimensionality. Finally, we will discuss the fundamental reasons behind the success of modern machine learning, as well as the subtleties and mysteries that remain to be understood.
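As a small illustration of one model class mentioned in the abstract, the sketch below fits a random feature model: the inner features are sampled once and held fixed, and only the outer coefficients are trained, so the model can be read as a Monte Carlo discretization of a continuous integral representation of the target function. The target function, feature counts, and regularization constant here are arbitrary choices for demonstration, not anything specified in the talk.

```python
import numpy as np

# Hedged sketch: a random feature model
#   f_m(x) = sum_j a_j * cos(w_j . x + b_j),
# with (w_j, b_j) sampled once and fixed. Only the outer coefficients
# a_j are fitted, by ridge-regularized least squares. This can be viewed
# as a discretization of the continuous representation
#   f(x) = E_{(w,b)} [ a(w, b) * cos(w . x + b) ].

rng = np.random.default_rng(0)
d, m, n = 5, 200, 1000            # input dimension, feature count, sample count

# Arbitrary smooth target function to approximate (an assumption for the demo).
def target(X):
    return np.sin(X.sum(axis=1))

X = rng.standard_normal((n, d))
y = target(X)

# Fixed random features: Gaussian frequencies, uniform phases.
W = rng.standard_normal((m, d))
b = rng.uniform(0.0, 2.0 * np.pi, m)
Phi = np.cos(X @ W.T + b)         # feature matrix, shape (n, m)

# Fit the outer coefficients by solving the regularized normal equations.
lam = 1e-6
a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)

# Root-mean-square training error of the fitted model.
err = np.sqrt(np.mean((Phi @ a - y) ** 2))
print(f"training RMSE: {err:.4f}")
```

Note that the fixed-feature structure is what makes the fitting step a linear least-squares problem; a shallow neural network would additionally train the inner parameters `(w_j, b_j)`, leading to the gradient-flow formulations discussed in the talk.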
Speaker Biography:
Weinan E is a mathematician whose research focuses on machine learning, computational mathematics, applied mathematics, and their applications in mechanics, physics, chemistry, and engineering. In 1999 he became a professor in the Department of Mathematics and the Program in Applied and Computational Mathematics at Princeton University; in 2011 he was elected a Member of the Chinese Academy of Sciences; in 2012 he was named a Fellow of the American Mathematical Society. His honors include the Collatz Prize from the International Council for Industrial and Applied Mathematics (ICIAM), the inaugural Presidential Early Career Award for Scientists and Engineers (PECASE), the Feng Kang Prize in Scientific Computing, and the Peter Henrici Prize awarded jointly by SIAM and ETH Zürich.