Seminars and Colloquia by Series

Neural Networks with Local Converging Inputs for Solving the Stokes Equations Using Subdomain Data Generation

Series
Applied and Computational Mathematics Seminar
Time
Monday, April 27, 2026 - 14:00 for 30 minutes
Location
Skiles 005
Speaker
Farjana Siddiqua, Visiting Assistant Professor, Georgia Institute of Technology

Deep neural network–based surrogate models have recently gained traction for solving fluid-flow partial differential equations, but their reliance on global interpolation often demands large, computationally expensive architectures and extensive training data. Neural networks with local converging inputs (NNLCI) offer a contrasting strategy: by restricting attention to the local domain of dependence and using converging coarse-grid solutions as inputs, NNLCI dramatically reduces computational cost and data requirements while achieving strong generalization. In this work, we extend the NNLCI framework to the three-dimensional Stokes equations and introduce a new subdomain data generation methodology tailored to NNLCI, enabling high-fidelity prediction without ever computing fine-grid numerical solutions on the full domain. This removes the most computationally intensive component of 3D simulations at its root.

Multiscale-Multiphysics Phenomena in Complex Fluids: The Energetic Variational Approaches

Series
Applied and Computational Mathematics Seminar
Time
Monday, April 20, 2026 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Chun Liu, Illinois Institute of Technology

Complex fluids are abundant in our daily life. Unlike traditional solids, liquids, and dilute solutions, the model equations for complex fluids continue to evolve with new experimental evidence and emerging applications. Most of their important properties arise from the coupling and competition between effects at different scales, or even from different physical origins and principles. The energetic variational approaches (EnVarA), motivated by the seminal works of Onsager and Rayleigh, are designed to study such systems. In this talk, I will discuss several complex fluid systems and the associated mathematical issues.
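For orientation, the EnVarA formalism starts from an energy–dissipation law (a standard statement of the framework; conventions differ by a factor of 2 in the dissipation):

```latex
\frac{\mathrm{d}}{\mathrm{d}t}\, E^{\mathrm{total}}(t) = -2\,\mathcal{D}(t) \le 0
```

where E^total combines kinetic and free energies and D ≥ 0 is the dissipation functional. The Least Action Principle (variation of the action with respect to the flow map) yields the conservative forces, the Maximum Dissipation Principle (variation of D with respect to the velocity) yields the dissipative forces, and the force balance closes the system.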

In-Context Operator Learning on the Space of Probability Measures

Series
Applied and Computational Mathematics Seminar
Time
Monday, April 13, 2026 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Dixi Wang, Purdue University

We introduce in-context operator learning on probability measure spaces for optimal transport (OT). The goal is to learn a single solution operator that maps a pair of distributions to the OT map, using only few-shot samples from each distribution as a prompt and without gradient updates at inference. We parameterize the solution operator and develop scaling-law theory in two regimes. In the nonparametric setting, when tasks concentrate on a low-intrinsic-dimension manifold of source–target pairs, we establish generalization bounds that quantify how in-context accuracy scales with prompt size, intrinsic task dimension, and model capacity. In the parametric setting (e.g., Gaussian families), we give an explicit architecture that recovers the exact OT map in context and provide finite-sample excess-risk bounds. Our numerical experiments on synthetic transports and generative modeling benchmarks validate the framework.
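In the parametric Gaussian setting mentioned above, the exact OT map is available in closed form, so a learned operator can be checked against it directly. A minimal NumPy sketch of that ground truth (illustrative only; the function names are mine, not the paper's):

```python
import numpy as np

def spd_sqrt(S):
    """Symmetric square root of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(w)) @ V.T

def gaussian_ot_map(m1, S1, m2, S2):
    """Closed-form OT (Brenier) map between N(m1, S1) and N(m2, S2):
    T(x) = m2 + A (x - m1),  A = S1^{-1/2} (S1^{1/2} S2 S1^{1/2})^{1/2} S1^{-1/2}."""
    r = spd_sqrt(S1)
    r_inv = np.linalg.inv(r)
    A = r_inv @ spd_sqrt(r @ S2 @ r) @ r_inv
    return lambda x: m2 + (x - m1) @ A.T

# Sanity check: T pushes samples of N(m1, S1) forward to N(m2, S2).
m1, S1 = np.zeros(2), np.eye(2)
m2, S2 = np.array([1.0, -1.0]), np.diag([4.0, 0.25])
T = gaussian_ot_map(m1, S1, m2, S2)
y = T(np.random.default_rng(0).multivariate_normal(m1, S1, 50_000))
```

Since A is symmetric positive definite, T is the gradient of a convex quadratic, as Brenier's theorem requires.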

Boundary integral methods without surface parameterization

Series
Applied and Computational Mathematics Seminar
Time
Monday, March 30, 2026 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Richard Tsai, University of Texas at Austin

I will review a general framework for developing numerical methods that work with non-parametrically defined surfaces for a variety of problems. In this talk, I will focus on boundary integral equations. The main idea is to formulate appropriate extensions of a given problem defined on a surface to problems posed in a narrow band of the surface in the embedding space. The extensions are arranged so that the solutions of the extended problems are equivalent, in a strong sense, to those of the surface problems that we set out to solve. Such extension approaches allow us to analyze the well-posedness of the resulting systems; to develop, systematically and in a unified fashion, numerical schemes for a wide range of problems involving differential and integral operators; and to handle similar problems in which only point clouds sampling the surfaces are given. At the end of the talk, I will mention our work on multilevel neural network methods for inverting the dense, large matrices that arise from boundary integral equations.
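As a toy version of the extension idea (a generic closest-point extension, sketched here for illustration and not necessarily the construction used in the talk), a function defined only on a surface can be extended to a narrow band by assigning each band point the value at its closest surface point, which makes the extension constant in the normal direction:

```python
import numpy as np

def closest_point(x):
    """Closest point on the unit circle to x (assumes x != 0)."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def extend(f, x):
    """Closest-point extension to the narrow band: (Ef)(x) = f(cp(x))."""
    return f(closest_point(x))

# A function defined only on the circle, here f(p) = p_x * p_y.
f = lambda p: p[..., 0] * p[..., 1]

theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
surface = np.stack([np.cos(theta), np.sin(theta)], axis=-1)

# Evaluate the extension on three shells of the narrow band: the values
# match the surface values and do not vary in the normal direction.
vals = {r: extend(f, r * surface) for r in (0.9, 1.0, 1.1)}
```

Because the extension is constant along normals, surface operators applied to it can be traded for ambient operators on the band, which is what makes the equivalence between extended and surface problems plausible.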

(Cancelled) Multiscale-Multiphysics Phenomena in Complex Fluids: The Energetic Variational Approaches

Series
Applied and Computational Mathematics Seminar
Time
Monday, March 16, 2026 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Chun Liu, Illinois Institute of Technology


Complex fluids are abundant in our daily life. Unlike traditional solids, liquids, and dilute solutions, the model equations for complex fluids continue to evolve with new experimental evidence and emerging applications. Most of their important properties arise from the coupling and competition between effects at different scales, or even from different physical origins and principles. The energetic variational approaches (EnVarA), motivated by the seminal works of Onsager and Rayleigh, are designed to study such systems. In this talk, I will discuss several complex fluid systems and the associated mathematical issues.

Breaking the Curse of Dimensionality: Graphs, Probability Measures, and Data

Series
Applied and Computational Mathematics Seminar
Time
Friday, March 13, 2026 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
James Murphy, Tufts University

The curse of dimensionality renders statistical and machine learning in high dimensions intractable without additional assumptions on the underlying data.  We consider geometric models for data that allow for mathematical performance guarantees and efficient algorithms that break the curse.  The first part of the talk develops a family of data-driven metrics that balance between density and geometry in the underlying data.  We consider discrete graph operators based on these metrics, and prove performance guarantees for clustering with them in the spectral graph paradigm.  Fast algorithms based on Euclidean nearest-neighbor graphs are proposed and connections with continuum operators on manifolds are developed. 
 
In the second part of the talk, we move away from Euclidean spaces and focus on representation learning of probability measures in Wasserstein space.  We introduce a general barycentric coding model in which data are represented as Wasserstein barycenters of a set of fixed reference measures.  Leveraging the geometry of Wasserstein space, we develop a tractable optimization program to learn the barycentric coordinates when given access to the densities of the underlying measures.  We provide a consistent statistical procedure for learning these coordinates when the measures are accessed only by i.i.d. samples.  Our consistency results and algorithms exploit entropic regularization of optimal transport maps, thereby allowing our barycentric modeling approach to scale efficiently.  Extensions to learning suitable reference measures and linearizations of our barycentric coding model will be discussed.  Throughout the talk, applications to synthetic and real data demonstrate the efficacy of our methods.
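The entropic regularization mentioned above is typically computed with Sinkhorn's algorithm; a minimal sketch for two discrete measures (the standard iteration, not the authors' implementation):

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.5, iters=500):
    """Entropically regularized OT between discrete measures mu and nu with
    cost matrix C: alternately rescale the Gibbs kernel K = exp(-C/eps)
    until the marginals of the plan match mu and nu."""
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan

# Two discrete measures on [0, 1] with squared-distance cost.
x = np.linspace(0.0, 1.0, 8)
mu = np.full(8, 1.0 / 8)
nu = np.full(8, 1.0 / 8)
C = (x[:, None] - x[None, :]) ** 2
P = sinkhorn(mu, nu, C)
```

Smaller eps approaches the unregularized plan at the cost of slower convergence; the smoothing it introduces is what lets the barycentric-coordinate estimation scale.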

Nonlocal Attention Operator: Understanding Attention Mechanism for Physical Responses

Series
Applied and Computational Mathematics Seminar
Time
Monday, March 9, 2026 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Yue Yu, Lehigh University

While foundation models have gained considerable attention in core AI fields such as natural language processing (NLP) and computer vision (CV), their application to learning complex responses of physical systems from experimental measurements remains underexplored. In physical systems, learning problems are often characterized as discovering operators that map between function spaces, using only a few samples of corresponding function pairs. For instance, in the automated discovery of heterogeneous material models, the foundation model must be capable of identifying the mapping between applied loading fields and the resulting displacement fields, while also inferring the underlying microstructure that governs this mapping. The former task can be seen as a PDE forward problem; the latter frequently constitutes a severely ill-posed PDE inverse problem.

In this talk, we will explore the attention mechanism towards a foundation model for physical systems. Specifically, we show that the attention mechanism is mathematically equivalent to a double integral operator, enabling nonlocal interactions among spatial tokens through a data-dependent kernel that characterizes the inverse mapping from data to the hidden PDE parameter field of the underlying operator. Consequently, the attention mechanism captures global prior information from training data generated by multiple systems and suggests an exploratory space in the form of a nonlinear kernel map. Based on this theoretical analysis, we introduce a novel neural operator architecture, the Nonlocal Attention Operator (NAO). By leveraging the attention mechanism, NAO can address ill-posedness and rank deficiency in inverse PDE problems by encoding regularization and enhancing generalizability. To demonstrate the applicability of NAO to material modeling problems, we apply it to the development of a foundation constitutive law across multiple materials, showcasing its generalizability to unseen data resolutions and system states. Our work not only suggests a novel neural operator architecture for learning an interpretable foundation model of physical systems, but also offers a new perspective towards understanding the attention mechanism.
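The equivalence described above is easy to see in discretized form: softmax attention applies a data-dependent kernel matrix to the values, i.e. it acts as a discrete integral operator coupling every token with every other token. A schematic NumPy sketch (plain attention for illustration, not the NAO architecture):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_as_kernel(Q, K, V):
    """out_i = sum_j k(x_i, x_j) V_j with the data-dependent kernel
    k(x_i, x_j) = softmax_j(<Q_i, K_j> / sqrt(d)): a discretized
    nonlocal (integral) operator acting on the tokens."""
    d = Q.shape[-1]
    kernel = softmax(Q @ K.T / np.sqrt(d))   # (n, n) kernel matrix
    return kernel @ V, kernel

rng = np.random.default_rng(0)
n, d = 6, 4
Q, K, V = rng.normal(size=(3, n, d))
out, kernel = attention_as_kernel(Q, K, V)
# Each row of `kernel` is a probability weight over all tokens, so every
# output token aggregates information nonlocally from the whole sequence.
```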

Approximation of intrinsic Hölder functions on manifolds by ambient Gaussian kernels

Series
Applied and Computational Mathematics Seminar
Time
Monday, February 16, 2026 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Xiuyuan Cheng, Duke University

We study approximation properties of Gaussian reproducing kernel Hilbert spaces restricted to low-dimensional manifolds embedded in Euclidean space. Using only ambient Gaussian kernels, and without assuming any smooth ambient extensions or estimating geometric quantities of the manifold, we show that intrinsically defined Hölder functions on the manifold can be approximated at rates governed by intrinsic dimension and smoothness. The construction is based on a small-scale expansion in real space rather than a spectral representation. As an application, we obtain adaptive nonparametric convergence rates for Gaussian process regression on manifolds, where the regression procedure itself is unchanged and intrinsic adaptivity results from the approximation analysis.
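A minimal illustration of the setting (my own toy example, not from the talk): run ordinary kernel ridge regression with an ambient Gaussian kernel on points that happen to lie on a circle in R^2. The procedure never sees the manifold, yet fits an intrinsically defined target well:

```python
import numpy as np

def gaussian_kernel(X, Y, h):
    """Ambient Gaussian kernel exp(-|x - y|^2 / (2 h^2)) on points in R^D."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * h * h))

# Training data on the unit circle: intrinsic dimension 1, ambient dimension 2.
rng = np.random.default_rng(1)
t = rng.uniform(0.0, 2.0 * np.pi, 200)
X = np.stack([np.cos(t), np.sin(t)], axis=-1)
y = np.sin(3.0 * t)                      # an intrinsically defined target

# Standard kernel ridge regression; nothing is changed for the manifold.
h, lam = 0.3, 1e-3
alpha = np.linalg.solve(gaussian_kernel(X, X, h) + lam * np.eye(len(X)), y)

t_new = np.linspace(0.0, 2.0 * np.pi, 50)
X_new = np.stack([np.cos(t_new), np.sin(t_new)], axis=-1)
pred = gaussian_kernel(X_new, X, h) @ alpha
```

The point of the analysis above is that the error of such a fit is governed by the intrinsic dimension (here 1) and the smoothness of the target, not by the ambient dimension.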

The Uzawa Method: Historical Perspectives, Current Advances, and Future Directions

Series
Applied and Computational Mathematics Seminar
Time
Friday, January 23, 2026 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Professor Xiaoming Yuan, The University of Hong Kong

Abstract:
This talk explores the Uzawa method, tracing its development from early applications in partial differential equations (PDEs) to modern advancements in optimization, image processing, and scientific computing. We will examine recent refinements for developing GPU-adaptive solvers for huge-scale linear programming and its extension to semidefinite programming arising in quantum information science. The discussion will also highlight the method's integration with deep learning and unrolling techniques for optimal control problems of PDEs, as well as its applications in industry.
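For readers unfamiliar with it, the classical Uzawa iteration for an equality-constrained quadratic program alternates a primal solve with gradient ascent on the multiplier (a textbook sketch, not the GPU-scale variants discussed in the talk):

```python
import numpy as np

def uzawa(A, B, f, g, rho=1.0, iters=200):
    """Classical Uzawa method for the saddle-point system
        [A  B^T] [x  ]   [f]
        [B   0 ] [lam] = [g]
    x-step: solve A x = f - B^T lam; dual step: lam += rho (B x - g).
    Converges for SPD A when 0 < rho < 2 / lambda_max(B A^{-1} B^T)."""
    lam = np.zeros(B.shape[0])
    for _ in range(iters):
        x = np.linalg.solve(A, f - B.T @ lam)
        lam = lam + rho * (B @ x - g)
    return x, lam

# Toy example: minimize 0.5 x^T A x - f^T x subject to B x = g.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # SPD
B = np.array([[1.0, 1.0]])
f = np.array([1.0, 0.0])
g = np.array([1.0])
x, lam = uzawa(A, B, f, g)
```

The same saddle-point structure appears in mixed discretizations of PDEs (e.g. Stokes, with A the velocity block and B the divergence) and, with suitable modifications, in large-scale linear and semidefinite programming.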


Bio:

Xiaoming Yuan is a Professor in the Department of Mathematics at The University of Hong Kong. His research spans optimization, optimal control, scientific computing, and artificial intelligence. He is well recognized for his fundamental contributions to first-order optimization algorithms, including the Alternating Direction Method of Multipliers (ADMM), primal-dual methods, and proximal point algorithms. He also collaborates extensively with the AI and cloud computing industries. He led the development of the first automatic bandwidth allocation system for the cloud computing sector. His team was honored as a Franz Edelman Award Finalist in 2023.

Learning geometry from incomplete pairwise distances: Theory, algorithms and applications

Series
Applied and Computational Mathematics Seminar
Time
Monday, January 12, 2026 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 254 and https://gatech.zoom.us/j/94954654170
Speaker
Abiy Tasissa, Tufts University

The advancement of technology has significantly enhanced our capacity to collect data. However, in many real-world applications, certain inherent limitations, such as the precision of measurement devices, environmental conditions, or operating costs, can result in missing data. In this talk, we focus on the setting where the available data consists of pairwise distances between a set of points, with the goal of estimating the configuration of the underlying geometry from incomplete distance measurements. This is known as the Euclidean distance geometry (EDG) problem and is central to many applications.

We start by describing the solution when all distances are given, using the classical multidimensional scaling (MDS) technique, and then discuss a constructive approach to interpreting the key mathematical objects in MDS. Next, we introduce a mathematical framework to address the EDG problem under two sampling models of the distance matrix: global sampling (uniform sampling of the entries of the distance matrix) and structured local sampling, where the measurements are limited to a subset of rows and columns. We discuss the conditions required for exact recovery of the point configuration and the associated algorithms. The last part of the talk will illustrate the algorithms on synthetic and real data and discuss ongoing work.
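The classical MDS step referred to above has a compact linear-algebra form: double-center the squared-distance matrix to recover the Gram matrix of the centered points, then factor it. A short sketch (standard construction, included for concreteness):

```python
import numpy as np

def classical_mds(D, dim):
    """Classical MDS: recover a point configuration, up to rigid motion,
    from the full matrix D of squared Euclidean distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    G = -0.5 * J @ D @ J                     # Gram matrix of centered points
    w, V = np.linalg.eigh(G)
    idx = np.argsort(w)[::-1][:dim]          # top `dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Recover a planar configuration from its squared distances.
rng = np.random.default_rng(2)
X = rng.normal(size=(10, 2))
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
Y = classical_mds(D, 2)
```

When entries of D are missing, this factorization is no longer directly available, which is precisely the gap the EDG sampling models above are designed to address.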
