Seminars and Colloquia by Series

Multiscale-Multiphysics Phenomena in Complex Fluids: The Energetic Variational Approaches

Series
Applied and Computational Mathematics Seminar
Time
Monday, March 16, 2026 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Chun Liu, Illinois Institute of Technology


Complex fluids are abundant in our daily life. Unlike traditional solids, liquids, and dilute solutions, the model equations for complex fluids continue to evolve with new experimental evidence and emerging applications. Most of their important properties are due to the coupling and competition between effects from different scales, or even from different physical origins and principles. The energetic variational approaches (EnVarA), motivated by the seminal works of Onsager and Rayleigh, are designed to study such systems. In this talk, I will discuss several complex fluid systems and the associated mathematical issues.
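The general structure behind EnVarA can be sketched schematically. The display below is a generic summary of the framework (the symbols E^total, D, and L are generic placeholders, not the speaker's notation): an energy dissipation law, with the conservative force obtained by varying the action (Least Action Principle) and the dissipative force by varying the dissipation with respect to the velocity (Maximum Dissipation Principle).

```latex
% Energy dissipation law:
\frac{\mathrm{d}}{\mathrm{d}t}\, E^{\mathrm{total}}(t) = -2\,\mathcal{D}(t),
\qquad \mathcal{D} \ge 0,
% Force balance coupling the two variational principles:
\frac{\delta}{\delta \mathbf{x}} \int_0^T \mathcal{L}\,\mathrm{d}t
\;=\;
\frac{\delta \mathcal{D}}{\delta \mathbf{x}_t}.
```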

Breaking the Curse of Dimensionality: Graphs, Probability Measures, and Data

Series
Applied and Computational Mathematics Seminar
Time
Friday, March 13, 2026 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
James Murphy, Tufts University

The curse of dimensionality renders statistical and machine learning in high dimensions intractable without additional assumptions on the underlying data.  We consider geometric models for data that allow for mathematical performance guarantees and efficient algorithms that break the curse.  The first part of the talk develops a family of data-driven metrics that balance between density and geometry in the underlying data.  We consider discrete graph operators based on these metrics, and prove performance guarantees for clustering with them in the spectral graph paradigm.  Fast algorithms based on Euclidean nearest-neighbor graphs are proposed and connections with continuum operators on manifolds are developed. 
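The spectral graph paradigm mentioned above can be illustrated with a minimal sketch: build a similarity graph on the data, form the graph Laplacian, and cluster by the sign of its second eigenvector. The dense Gaussian graph and all parameter values here are illustrative stand-ins, not the speaker's data-driven metrics or nearest-neighbor construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated Gaussian blobs in the plane.
X = np.vstack([rng.normal(0.0, 0.3, size=(20, 2)),
               rng.normal(5.0, 0.3, size=(20, 2))])

# Gaussian similarity graph (a dense stand-in for a nearest-neighbor graph).
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * 0.5 ** 2))
np.fill_diagonal(W, 0.0)

# Unnormalized graph Laplacian and its Fiedler (second-smallest) eigenvector.
L = np.diag(W.sum(1)) - W
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]

# The sign pattern of the Fiedler vector recovers the two clusters.
labels = (fiedler > 0).astype(int)
```

For well-separated clusters the Laplacian is nearly block diagonal, so the Fiedler vector is approximately piecewise constant with opposite signs on the two blocks.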
 
In the second part of the talk, we move away from Euclidean spaces and focus on representation learning of probability measures in Wasserstein space.  We introduce a general barycentric coding model in which data are represented as Wasserstein barycenters of a set of fixed reference measures.  Leveraging the geometry of Wasserstein space, we develop a tractable optimization program to learn the barycentric coordinates when given access to the densities of the underlying measures.  We provide a consistent statistical procedure for learning these coordinates when the measures are accessed only by i.i.d. samples.  Our consistency results and algorithms exploit entropic regularization of optimal transport maps, thereby allowing our barycentric modeling approach to scale efficiently.  Extensions to learning suitable reference measures and linearizations of our barycentric coding model will be discussed.  Throughout the talk, applications to synthetic and real data demonstrate the efficacy of our methods.
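The entropic regularization that makes the barycentric approach scale can be illustrated with plain Sinkhorn iterations between two discrete measures. The measures, cost, and regularization strength below are illustrative choices, not the talk's barycentric coding model itself.

```python
import numpy as np

# Sinkhorn iterations for entropically regularized optimal transport.
rng = np.random.default_rng(1)
x = np.sort(rng.normal(0.0, 1.0, 30))       # support of the source measure
y = np.sort(rng.normal(1.0, 1.0, 30))       # support of the target measure
a = np.full(30, 1 / 30)                     # uniform source weights
b = np.full(30, 1 / 30)                     # uniform target weights

C = (x[:, None] - y[None, :]) ** 2          # squared-distance cost matrix
eps = 0.5                                   # entropic regularization strength
K = np.exp(-C / eps)

u = np.ones(30)
for _ in range(500):                        # alternating marginal scalings
    v = b / (K.T @ u)
    u = a / (K @ v)

P = u[:, None] * K * v[None, :]             # entropic transport plan
cost = (P * C).sum()                        # regularized transport cost
```

After convergence the plan's marginals match the prescribed weights, and the same scaling iterations extend to fixed-point computations of entropic Wasserstein barycenters.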

Nonlocal Attention Operator: Understanding Attention Mechanism for Physical Responses

Series
Applied and Computational Mathematics Seminar
Time
Monday, March 9, 2026 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Yue Yu, Lehigh University

While foundation models have gained considerable attention in core AI fields such as natural language processing (NLP) and computer vision (CV), their application to learning complex responses of physical systems from experimental measurements remains underexplored. In physical systems, learning problems are often characterized as discovering operators that map between function spaces, using only a few samples of corresponding function pairs. For instance, in the automated discovery of heterogeneous material models, the foundation model must be capable of identifying the mapping between applied loading fields and the resulting displacement fields, while also inferring the underlying microstructure that governs this mapping. The former task can be seen as a PDE forward problem; the latter frequently constitutes a severely ill-posed PDE inverse problem.

In this talk, we will explore the attention mechanism towards a foundation model for physical systems. Specifically, we show that the attention mechanism is mathematically equivalent to a double integral operator, enabling nonlocal interactions among spatial tokens through a data-dependent kernel that characterizes the inverse mapping from data to the hidden PDE parameter field of the underlying operator. Consequently, the attention mechanism captures global prior information from training data generated by multiple systems and suggests an exploratory space in the form of a nonlinear kernel map. Based on this theoretical analysis, we introduce a novel neural operator architecture, the Nonlocal Attention Operator (NAO). By leveraging the attention mechanism, NAO can address ill-posedness and rank deficiency in inverse PDE problems by encoding regularization and enhancing generalizability. To demonstrate the applicability of NAO to material modeling problems, we apply it to the development of a foundation constitutive law across multiple materials, showcasing its generalizability to unseen data resolutions and system states. Our work not only suggests a novel neural operator architecture for learning an interpretable foundation model of physical systems, but also offers a new perspective towards understanding the attention mechanism.
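The "attention as a double integral operator" viewpoint can be made concrete in a few lines: discretized on a grid of spatial tokens, softmax attention applies a data-dependent, row-normalized kernel to the value function. The shapes and weight scalings below are illustrative; this is plain single-head attention, not the NAO architecture itself.

```python
import numpy as np

# Attention as a discretized nonlocal integral:
# (Attn v)(x_i) ≈ sum_j k(x_i, x_j) v(x_j), where the data-dependent
# kernel k is a row-normalized exponential of query-key scores.
rng = np.random.default_rng(0)
n, d = 16, 4                       # tokens (spatial points), channel width
u = rng.normal(size=(n, d))        # token features sampled on a grid

Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
q, k, v = u @ Wq, u @ Wk, u @ Wv

scores = q @ k.T / np.sqrt(d)
kernel = np.exp(scores)
kernel /= kernel.sum(axis=1, keepdims=True)   # softmax: each row is a
                                              # discrete probability kernel
out = kernel @ v                              # nonlocal integral of v
```

Because the kernel is built from the input tokens, it changes with the data, which is exactly the sense in which attention encodes a data-dependent inverse map rather than a fixed convolution.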

Approximation of intrinsic Hölder functions on manifolds by ambient Gaussian kernels

Series
Applied and Computational Mathematics Seminar
Time
Monday, February 16, 2026 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Xiuyuan Cheng, Duke University

We study approximation properties of Gaussian reproducing kernel Hilbert spaces restricted to low-dimensional manifolds embedded in Euclidean space. Using only ambient Gaussian kernels, and without assuming any smooth ambient extensions or estimating geometric quantities of the manifold, we show that intrinsically defined Hölder functions on the manifold can be approximated at rates governed by intrinsic dimension and smoothness. The construction is based on a small-scale expansion in real space rather than a spectral representation. As an application, we obtain adaptive nonparametric convergence rates for Gaussian process regression on manifolds, where the regression procedure itself is unchanged and intrinsic adaptivity results from the approximation analysis.
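The setting can be sketched numerically: regress an intrinsically defined function on a circle (a 1-D manifold in R^2) using only the ambient Gaussian kernel. The bandwidth and ridge level below are ad hoc illustrative choices, not values from the analysis.

```python
import numpy as np

# Kernel ridge regression with the *ambient* Gaussian kernel on a manifold.
rng = np.random.default_rng(0)
theta = np.sort(rng.uniform(0, 2 * np.pi, 200))
X = np.c_[np.cos(theta), np.sin(theta)]        # points on the unit circle
y = np.sin(3 * theta)                          # intrinsically defined target

def gauss(A, B, ell=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))        # ambient Gaussian kernel

K = gauss(X, X)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(X)), y)

t_test = rng.uniform(0, 2 * np.pi, 100)        # held-out manifold points
X_test = np.c_[np.cos(t_test), np.sin(t_test)]
pred = gauss(X_test, X) @ alpha
err = np.abs(pred - np.sin(3 * t_test)).max()
```

Nothing in the regression procedure references the manifold; any intrinsic adaptivity comes from the approximation properties of the restricted kernel, as in the talk.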

The Uzawa Method: Historical Perspectives, Current Advances, and Future Directions

Series
Applied and Computational Mathematics Seminar
Time
Friday, January 23, 2026 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Professor Xiaoming Yuan, The University of Hong Kong

Abstract:
This talk explores the Uzawa method, tracing its development from early applications in partial differential equations (PDEs) to modern advancements in optimization, image processing, and scientific computing. We will examine recent refinements for developing GPU-adaptive solvers for huge-scale linear programming and its extension to semidefinite programming arising in quantum information science. The discussion will also highlight the method's integration with deep learning and unrolling techniques for optimal control problems of PDEs, as well as its applications in industry.


Bio:

Xiaoming Yuan is a Professor in the Department of Mathematics at The University of Hong Kong. His research spans optimization, optimal control, scientific machine learning, and artificial intelligence. He is well recognized for his fundamental contributions to first-order optimization algorithms, including the Alternating Direction Method of Multipliers (ADMM), primal-dual methods, and proximal point algorithms. He also collaborates extensively with the AI and cloud computing industries. He led the development of the first automatic bandwidth allocation system for the cloud computing sector. His team was honored as a Franz Edelman Award Finalist in 2023.

Learning geometry from incomplete pairwise distances: Theory, algorithms and applications

Series
Applied and Computational Mathematics Seminar
Time
Monday, January 12, 2026 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 254 and https://gatech.zoom.us/j/94954654170
Speaker
Abiy Tasissa, Tufts

The advancement of technology has significantly enhanced our capacity to collect data. However, in many real-world applications, certain inherent limitations, such as the precision of measurement devices, environmental conditions, or operating costs, can result in missing data. In this talk, we focus on the setting where the available data consists of pairwise distances between a set of points, with the goal of estimating the configuration of the underlying geometry from incomplete distance measurements. This is known as the Euclidean distance geometry (EDG) problem and is central to many applications.

We first start by describing the solution when all distances are given using the classical multidimensional scaling (MDS) technique and then discuss a constructive approach to interpret the key mathematical objects in MDS. Next, we introduce a mathematical framework to address the EDG problem under two sampling models of the distance matrix: global sampling (uniform sampling of the entries of the distance matrix) and structured local sampling, where the measurements are limited to a subset of rows and columns. We discuss the conditions required for the exact recovery of the point configuration and the associated algorithms. The last part of the talk will illustrate the algorithms using synthetic and real data and discuss ongoing work.
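The fully observed case described first is classical multidimensional scaling: double-center the squared-distance matrix to obtain a Gram matrix, then read the configuration off its top eigenvectors. The small random point cloud below is illustrative.

```python
import numpy as np

# Classical MDS: recover a point configuration (up to rigid motion) from the
# complete matrix of pairwise Euclidean distances.
rng = np.random.default_rng(0)
P = rng.normal(size=(12, 3))                         # ground-truth points
D2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)  # squared distances

n = len(P)
J = np.eye(n) - np.ones((n, n)) / n                  # centering matrix
Bmat = -0.5 * J @ D2 @ J                             # Gram matrix of centered pts

w, V = np.linalg.eigh(Bmat)
w, V = w[::-1], V[:, ::-1]                           # sort eigenvalues descending
Q = V[:, :3] * np.sqrt(np.maximum(w[:3], 0.0))       # embedding in R^3

# Recovery is exact up to rotation/translation, so distances are reproduced.
D2_hat = ((Q[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
```

The EDG problem of the talk replaces the complete matrix D2 with a partially observed one, which is where the sampling models and recovery guarantees come in.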

Fantastic Path RND and find them in diffusion control

Series
Applied and Computational Mathematics Seminar
Time
Tuesday, December 9, 2025 - 13:30 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/96503797550
Speaker
Jiajun He, University of Cambridge

Please Note: Note the special time and date. The speaker will be in person.

I will begin by introducing the concept of path Radon–Nikodym derivative (path RND) and explaining how it connects to, and accelerates, classical sampling and estimation algorithms such as parallel tempering and free-energy perturbation. I will then show how path RND offers a unifying perspective on controlling diffusion models using Sequential Monte Carlo. Finally, I will present a new paradigm for inference-time control based on parallel tempering, which enables more robust manipulation of diffusion trajectories.
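A static analogue of the path Radon–Nikodym derivative is ordinary importance weighting: the RND dp/dq reweights samples from a tractable proposal q to estimate expectations under a target p. The Gaussian p and q below are ad hoc; the talk's path RND plays the same role on trajectory space.

```python
import numpy as np

# Self-normalized importance sampling via the Radon-Nikodym derivative dp/dq.
rng = np.random.default_rng(0)

def logpdf_normal(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

xs = rng.normal(0.0, 1.5, size=50_000)     # proposal q = N(0, 1.5^2)
log_w = (logpdf_normal(xs, 1.0, 1.0)       # target   p = N(1, 1)
         - logpdf_normal(xs, 0.0, 1.5))    # log dp/dq at each sample
w = np.exp(log_w - log_w.max())            # stabilized exponentiation
w /= w.sum()                               # self-normalized weights

est_mean = (w * xs).sum()                  # estimates E_p[X] = 1
```

Sequential Monte Carlo for diffusion control applies the same reweighting incrementally along trajectories, resampling when the weights degenerate.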

Opportunities and Challenges of Neural Networks in Partial Differential Equations

Series
Applied and Computational Mathematics Seminar
Time
Monday, December 1, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Yahong Yang, Georgia Tech

The use of neural networks for solving partial differential equations (PDEs) has attracted considerable attention in recent years. In this talk, I will first highlight their advantages over traditional numerical methods, including improved approximation rates and the potential to overcome the curse of dimensionality. I will then discuss the challenges that arise when applying neural networks to PDEs, particularly in training. Because training is inherently a highly nonconvex optimization problem, it can lead to poor local minima with large training errors, especially in complex PDE settings. To address these issues, I will demonstrate how incorporating mathematical insight into the design of training algorithms and network architectures can lead to significant improvements in both accuracy and robustness.
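One way mathematical structure can sidestep the nonconvexity of training, shown here on the 1-D Poisson problem u'' = -pi^2 sin(pi x) with u(0) = u(1) = 0 (exact solution sin(pi x)): fix random hidden weights and solve only for the linear output layer by least squares. This random-feature collocation sketch is an illustration of the general idea, not the speaker's method; all sizes and weight ranges are ad hoc.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 40                                  # features, collocation points
w = rng.uniform(-8, 8, m)                       # fixed random hidden weights
b = rng.uniform(-8, 8, m)
x = np.linspace(0, 1, n)

def feats(x):
    """tanh features and their exact second x-derivative."""
    z = np.tanh(np.outer(x, w) + b)
    return z, -2 * z * (1 - z ** 2) * w ** 2    # d^2/dx^2 tanh(w x + b)

phi, phi_xx = feats(x)
f = -np.pi ** 2 * np.sin(np.pi * x)             # PDE right-hand side

# Stack PDE residual rows and boundary rows; the unknown output-layer
# coefficients enter linearly, so training is a convex least-squares solve.
A = np.vstack([phi_xx, feats(np.array([0.0, 1.0]))[0]])
rhs = np.concatenate([f, [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

xx = np.linspace(0, 1, 200)
u = feats(xx)[0] @ c
err = np.abs(u - np.sin(np.pi * xx)).max()
```

Training a fully nonlinear network on the same residual loss is a nonconvex problem that can stall in poor local minima; freezing the hidden layer trades expressivity for a problem with a computable global optimum.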

Transformers for Learning Single-Task and Multi-Task Regression on Manifolds: Approximation and Generalization Insights

Series
Applied and Computational Mathematics Seminar
Time
Monday, November 24, 2025 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/94954654170
Speaker
Zhaiming Shen, Georgia Institute of Technology

Transformers serve as the foundational architecture for large language and video generation models, such as GPT, BERT, SORA, and their successors. While empirical studies have shown that real-world data and learning tasks exhibit low-dimensional geometric structures, the theoretical understanding of how transformers leverage these structures remains largely unexplored. In this talk, we present a theoretical foundation for transformers in two key scenarios: (1) regression tasks with noisy input data lying near a low-dimensional manifold, and (2) in-context learning (ICL) for regression of Hölder functions on manifolds. For the first setting, we prove approximation and generalization bounds that depend crucially on the intrinsic dimension of the manifold, demonstrating that transformers can effectively learn from data perturbed by high-dimensional noise. For the second setting, we derive generalization error bounds for ICL in terms of prompt length and the number of training tasks, revealing that transformers achieve the minimax optimal rate for Hölder regression, with complexity scaling exponentially in the intrinsic rather than the ambient dimension. Together, these results provide foundational insights into how transformers exploit low-dimensional geometric structures in learning tasks, advancing our theoretical understanding of their remarkable empirical success.
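The connection between attention and nonparametric regression can be seen in a one-layer read-out: with prompt pairs (x_i, y_i) as keys and values and a query x, softmax attention over distances reduces to Nadaraya–Watson kernel regression, the kind of estimator behind minimax rates for Hölder functions. The target, prompt size, and bandwidth below are ad hoc illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)            # smooth (hence Hölder) target
x_ctx = rng.uniform(0, 1, 400)                 # in-context examples (keys)
y_ctx = f(x_ctx)                               # their labels (values)
x_q = np.linspace(0.1, 0.9, 50)                # queries away from the boundary

h = 0.05                                       # kernel bandwidth
scores = -(x_q[:, None] - x_ctx[None, :]) ** 2 / (2 * h ** 2)
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)        # softmax over the prompt
y_hat = attn @ y_ctx                           # attention output = NW estimate

err = np.abs(y_hat - f(x_q)).max()
```

As the prompt length grows and the effective bandwidth shrinks at the right rate, this estimator attains the classical Hölder rate, with the exponent set by the intrinsic dimension when the x_i lie on a manifold.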
