
Department of Mathematics

FRG on Variationally Stable Neural Networks

The Focused Research Group on Variationally Stable Neural Networks is a joint project of four universities: University of South Carolina, Georgia Tech, Portland State University, and University of Texas at Austin. 

2024 – 2025 Academic Year

Organized by: Peter Binev (binev@math.sc.edu)

This is an online-only series. All seminars will take place over Zoom.

This page will be updated as new seminars are scheduled. Make sure to check back each week for information on upcoming seminars.

When: Thursday, October 10th, 2024 from 2:30 to 3:45 p.m. EDT

Speaker: Assad A. Oberai (University of Southern California)


When: Wednesday, September 18th, 2024 from 2:30 to 3:30 p.m. EDT

Speaker: Justin Dong (Lawrence Livermore National Laboratory)

Abstract: Recently, neural networks have been applied to tasks that have traditionally belonged to scientific computing, for instance the forward approximation problem for partial differential equations (PDEs). While neural networks are known to satisfy a universal approximation property, they are difficult to train and often stagnate prematurely. In particular, they often fail to deliver an approximation with controllable error: increasing the number of parameters in the network does not reduce the approximation error beyond a certain point.

We present some recent developments towards neural network-based numerical methods that provide error control. In the first part of this talk, we introduce the Galerkin neural network framework, which constructs a finite-dimensional subspace whose basis functions are the realizations of a sequence of neural networks. The hallmark of this framework is an a posteriori estimator for the energy error that gives the user full control of the approximation error. In the second part of this talk, we discuss issues of well-posedness as they pertain to the loss functions used to train neural networks. Most of the loss functions proposed in the literature for physics-informed learning may be viewed as the functionals of corresponding least-squares variational problems. Viewed in this light, we demonstrate that many such loss functions lead to ill-posed variational problems, and we present recent work towards constructing well-posed loss functions for arbitrary boundary value problems.
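For readers unfamiliar with the least-squares viewpoint mentioned in the abstract, here is a minimal illustration (not taken from the talk): for the model Poisson problem with Dirichlet data, a standard physics-informed loss penalizes the strong-form PDE residual and the boundary mismatch in L² norms. The network approximation u_θ and the penalty weight λ below are illustrative choices, not notation from the speaker's work.

% Illustrative example (assumed, not from the talk): a PINN-style loss
% for the model problem  -\Delta u = f  in \Omega,  u = g  on \partial\Omega.
\[
  \mathcal{L}(u_\theta)
  \;=\;
  \bigl\| \Delta u_\theta + f \bigr\|_{L^2(\Omega)}^{2}
  \;+\;
  \lambda \, \bigl\| u_\theta - g \bigr\|_{L^2(\partial\Omega)}^{2}.
\]
% Minimizing \mathcal{L} over the network parameters \theta is a
% least-squares variational problem in the norms chosen for the residuals.

Whether such a functional yields a well-posed problem depends on those norms: a small loss value controls the error in the solution only if the underlying operator is bounded below in the chosen residual norms, which is the well-posedness question the abstract refers to.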

 

