
Department of Mathematics

Applied and Computational Mathematics Student Seminar

We invite speakers to present original research in applied and computational mathematics (ACM).

2024 – 2025 Academic Year

When: October 4th, 2024, from 2:30 p.m. to 3:30 p.m.

Speaker: Dongwei Chen (Colorado State University)

Location: Virtual via Zoom

Abstract: In this talk, I will present my latest work on approximation in reproducing kernel Hilbert spaces. We generalize the least squares method to probabilistic approximation in reproducing kernel Hilbert spaces and show the existence and uniqueness of the optimizer. Furthermore, we generalize the celebrated representer theorem to this setting; in particular, when the probability measure is finitely supported or the Hilbert space is finite-dimensional, the approximation problem reduces to a measure quantization problem. Some discussion and examples are also given when the space is infinite-dimensional and the measure is infinitely supported. This is joint work with Kai-Hsiang Wang from Northwestern University.
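
For orientation, the classical regularized least squares problem in an RKHS and the representer-theorem form of its solution can be sketched as follows (generic notation, not taken from the paper itself):

    \[
    f^{\star} = \arg\min_{f \in \mathcal{H}_k} \; \frac{1}{n}\sum_{i=1}^{n} \bigl(f(x_i) - y_i\bigr)^2 + \lambda \|f\|_{\mathcal{H}_k}^2,
    \qquad
    f^{\star}(\cdot) = \sum_{i=1}^{n} \alpha_i\, k(x_i, \cdot).
    \]

Schematically, the probabilistic generalization discussed in the talk replaces the empirical average by an integral against a probability measure \(\mu\):

    \[
    \min_{f \in \mathcal{H}_k} \; \int \bigl(f(x) - g(x)\bigr)^2 \, d\mu(x) + \lambda \|f\|_{\mathcal{H}_k}^2 .
    \]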

 

Previous Years

Organized by: McKenzie Black, Thomas Hamori, Chunyan Li

Note: Due to the COVID-19 pandemic, we are currently leaving the format of the seminar up to each individual speaker. To make the seminar as accessible as possible, we will host a live Zoom session for each in-person presentation so that anyone who cannot, or would prefer not to, attend in person can still participate.

Yuankai Teng, University of South Carolina

  • February 25th
  • 1:00 pm

Abstract: Partial differential equations are often used to model various physical phenomena, such as heat diffusion, wave propagation, fluid dynamics, elasticity, electrodynamics, and so on. Due to their important applications in scientific research and engineering, many numerical methods have been developed over the past decades for the efficient and accurate solution of these equations. Inspired by the rapidly growing impact of deep learning techniques, we propose a novel neural network method, "GF-Net", for learning the Green's functions of classic linear reaction-diffusion equations in an unsupervised fashion. The proposed method overcomes the challenge of finding the Green's functions of these equations on arbitrary domains by utilizing a physics-informed neural network approach and domain decomposition. In particular, it leads to a fast algorithm for solving the target equations subject to various sources and Dirichlet boundary conditions without network retraining. We also demonstrate the effectiveness of the proposed method through extensive numerical experiments on square, annular, and L-shaped domains.
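
Once a Green's function network has been trained, new source terms can be handled without retraining via quadrature of the superposition integral u(x) = ∫ G(x, ξ) f(ξ) dξ. Below is a minimal PyTorch sketch of that evaluation step; the network G, the unit-square domain, and the source f are illustrative placeholders, not the actual GF-Net architecture.

    import torch

    # Placeholder Green's function network mapping (x, xi) pairs to scalars.
    # In GF-Net this would be a trained physics-informed network; here an
    # untrained MLP stands in for the real model.
    G = torch.nn.Sequential(
        torch.nn.Linear(4, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 1),
    )

    def solve(f, x, n_quad=4096):
        """Approximate u(x) = ∫_Ω G(x, ξ) f(ξ) dξ on Ω = [0,1]² by Monte Carlo."""
        xi = torch.rand(n_quad, 2)                       # quadrature nodes in Ω
        pairs = torch.cat([x.expand(n_quad, 2), xi], 1)  # (x, ξ) input pairs
        values = G(pairs).squeeze(-1) * f(xi)            # G(x, ξ) f(ξ)
        return values.mean()                             # |Ω| = 1, so mean ≈ integral

    # Example: evaluate the solution for a new source at one point.
    f = lambda xi: torch.sin(torch.pi * xi[:, 0]) * torch.sin(torch.pi * xi[:, 1])
    u = solve(f, torch.tensor([[0.5, 0.5]]))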

 

Chunyan Li, University of South Carolina

  • February 25th
  • 1:00 pm

Abstract: In this talk, we will introduce a nonlinear dimensionality reduction method based on neural networks: the variational autoencoder (VAE). Two parameterized conditional distributions, serving as the encoder and the decoder, are learned by maximizing the so-called variational lower bound objective. We will go through the derivation and the reparameterization trick used in this process. Applications will be shown at the end.
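
The reparameterization trick mentioned above writes the latent sample as a deterministic function of the encoder outputs and independent noise, so gradients can flow through the sampling step. A minimal PyTorch sketch (layer sizes are arbitrary placeholders):

    import torch
    import torch.nn as nn

    class VAE(nn.Module):
        def __init__(self, d_in=784, d_hidden=256, d_latent=16):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
            self.mu = nn.Linear(d_hidden, d_latent)      # encoder mean
            self.logvar = nn.Linear(d_hidden, d_latent)  # encoder log-variance
            self.dec = nn.Sequential(
                nn.Linear(d_latent, d_hidden), nn.ReLU(),
                nn.Linear(d_hidden, d_in), nn.Sigmoid(),
            )

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            eps = torch.randn_like(mu)               # reparameterization trick:
            z = mu + eps * torch.exp(0.5 * logvar)   # z = μ + ε·σ, ε ~ N(0, I)
            return self.dec(z), mu, logvar

    def negative_elbo(x, x_hat, mu, logvar):
        # Reconstruction term plus KL(q(z|x) || N(0, I)); minimizing this
        # is equivalent to maximizing the variational lower bound.
        recon = nn.functional.binary_cross_entropy(x_hat, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl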

Zongyi Li, California Institute of Technology

  • February 11th
  • 1:00 pm

Abstract: The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces or finite sets. We propose a generalization of neural networks tailored to learn operators mapping between infinite-dimensional function spaces. We formulate the approximation of operators as a composition of a class of linear integral operators and nonlinear activation functions, so that the composed operator can approximate complex nonlinear operators. We prove a universal approximation theorem for our construction. Furthermore, we introduce four classes of operator parameterizations: graph-based operators, low-rank operators, multipole graph-based operators, and Fourier operators, and describe efficient algorithms for computing with each one. The proposed neural operators are resolution-invariant: they share the same network parameters between different discretizations of the underlying function spaces and can be used for zero-shot super-resolution. Numerically, the proposed models show superior performance compared to existing machine-learning-based methodologies on Burgers' equation, Darcy flow, and the Navier-Stokes equations, while being several orders of magnitude faster than conventional PDE solvers.
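
The Fourier operator parameterization is the easiest of the four to sketch: transform the input to the frequency domain, apply a learned linear map to a truncated set of modes, and transform back. A minimal 1D version in PyTorch (channel and mode counts are illustrative, not the paper's configuration):

    import torch
    import torch.nn as nn

    class SpectralConv1d(nn.Module):
        """Linear integral operator applied in the Fourier domain (1D)."""
        def __init__(self, channels, n_modes):
            super().__init__()
            self.n_modes = n_modes  # number of low-frequency modes kept
            scale = 1.0 / channels
            self.weights = nn.Parameter(
                scale * torch.randn(channels, channels, n_modes, dtype=torch.cfloat)
            )

        def forward(self, x):              # x: (batch, channels, grid)
            x_ft = torch.fft.rfft(x)       # to frequency domain
            out_ft = torch.zeros_like(x_ft)
            # Multiply the retained modes by a learned complex matrix per mode.
            out_ft[:, :, :self.n_modes] = torch.einsum(
                "bim,iom->bom", x_ft[:, :, :self.n_modes], self.weights
            )
            return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space

    # Because the weights act on Fourier modes rather than grid points, the same
    # layer can be evaluated on a finer grid than it was trained on.
    layer = SpectralConv1d(channels=8, n_modes=12)
    y = layer(torch.randn(4, 8, 256))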

Yuankai Teng, University of South Carolina

  • October 29th
  • 12:00 pm

Title: Level Set Learning and Function Approximations on Sparse Data through Pseudo-reversible Neural Networks

Chunyan Li, University of South Carolina

  • October 15th
  • 12:00 pm

Abstract: PCA, one of the most popular dimensionality reduction methods, is an orthogonal linear transformation that maps the data to a new coordinate system. In this talk, we will learn how to derive this new basis and characterize the structure of all principal components via the SVD of the covariance matrix of the data. The variants of PCA, dual PCA and kernel PCA, are mentioned as well.
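
The derivation described above corresponds to a few lines of linear algebra in practice: center the data, take the SVD, and read the principal components off the right singular vectors. A minimal NumPy sketch:

    import numpy as np

    def pca(X, k):
        """Project the rows of X onto the top-k principal components via SVD."""
        Xc = X - X.mean(axis=0)             # center the data
        # SVD of the centered data matrix; the rows of Vt are the
        # eigenvectors of the covariance matrix Xc.T @ Xc / (n - 1).
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        components = Vt[:k]                 # top-k principal directions
        explained_var = S[:k] ** 2 / (len(X) - 1)
        return Xc @ components.T, components, explained_var

    X = np.random.default_rng(0).normal(size=(200, 10))
    scores, components, var = pca(X, k=3)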

McKenzie Black, University of South Carolina

  • October 1st
  • 12:00 pm

Abstract: In this talk, we will introduce the pressureless Euler alignment system and then modify the system with a nonlinear velocity. We explore local well-posedness of the system while discussing various methods for establishing it. Focusing on the nonlinear velocity, we introduce a similar system to determine how the magnitude of the nonlinearity affects unconditional flocking and the results that follow.
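
For reference, the one-dimensional pressureless Euler alignment system is usually written as follows, with communication kernel \(\varphi\) (this is the standard form; the nonlinear-velocity variant discussed in the talk modifies it):

    \[
    \begin{aligned}
    \partial_t \rho + \partial_x (\rho u) &= 0, \\
    \partial_t u + u\, \partial_x u &= \int_{\mathbb{R}} \varphi(x - y)\,\bigl(u(y,t) - u(x,t)\bigr)\,\rho(y,t)\, dy .
    \end{aligned}
    \]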

Thomas Hamori, University of South Carolina

  • September 24th
  • 12:00 pm

Abstract: Conservation laws are foundational in fluid dynamics. I will derive the conservation laws governing macroscopic traffic flow models from conservation of mass. A brief discussion of the classical theory of macroscopic traffic flow will follow, and I will then present joint work with my advisor, Dr. Changhui Tan, on a class of nonlocal traffic models, in which the nonlocality is used to combat the nonlinearity of the PDE. I will show that the nonlocality broadens the class of initial conditions yielding global smooth solutions for these models.
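
A standard point of reference for these models is the Lighthill–Whitham–Richards (LWR) conservation law and its nonlocal variants, sketched below (the specific velocity law and kernel in the talk may differ):

    \[
    \partial_t \rho + \partial_x \bigl(\rho\, v(\rho)\bigr) = 0, \qquad v(\rho) = 1 - \rho,
    \]

where the nonlocal models replace the local density in the velocity by a kernel average, e.g.

    \[
    \partial_t \rho + \partial_x \bigl(\rho\, v(\psi * \rho)\bigr) = 0 .
    \]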

