All Seminars
Title: Joint Athens-Atlanta Number Theory Seminar
Seminar: Algebra
Speaker: Jiuya Wang (University of Georgia) and Andrew Obus (The City University of New York)
Contact: Andrew Kobin, andrew.jon.kobin@emory.edu
Date: 2024-04-16 at 4:00PM
Venue: Atwood 240
Abstract:
Title: Are there sparse codes with large convex embedding dimension?
Seminar: Combinatorics
Speaker: Amzi Jeffs of Carnegie Mellon University
Contact: Liana Yepremyan, liana.yepremyan@emory.edu
Date: 2024-04-11 at 10:00AM
Venue: MSC E406
Abstract: How can you arrange a collection of convex sets in Euclidean space? This question underpins the study of "convex codes," a vein of research that began in 2013 motivated by the study of hippocampal place cells in neuroscience. Classifying convex codes is exceedingly difficult, even in the plane, and gives rise to a number of striking examples and neat geometric theorems. We will focus on a particular open question about how the sparsity of a code relates to its embedding dimension, and some recent partial progress.
Title: The Fermi-Pasta-Ulam-Tsingou paradox: history, numerical and analytical results, and some ideas (involving Neural Networks)
Seminar: Numerical Analysis and Scientific Computing
Speaker: Guido Mazzuca of Tulane University
Contact: Manuela Girotti, manuela.girotti@emory.edu
Date: 2024-04-11 at 10:00AM
Venue: MSC W201
Abstract: In this presentation, I tell the story of the Fermi-Pasta-Ulam-Tsingou (FPUT) paradox from its discovery to the present day. While focusing on recent developments, I introduce the concept of adiabatic invariants, a generalization of conserved quantities, as a means to resolve the FPUT paradox within a probabilistic framework. Additionally, I shed light on unresolved issues that can be approached through various methodologies, including the potential use of neural networks. Zoom Option: https://emory.zoom.us/j/94678278895?pwd=bDFxK2RaOTZRMjA5bzQ4UUtxNWJsZz09
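For context, the FPUT experiment integrates a chain of oscillators coupled by a weakly nonlinear force and observes near-recurrence of the initial state rather than the expected thermalization. A minimal numerical sketch of the FPUT-alpha chain (illustrative only; the parameter values are assumed, and this is not the speaker's code):

```python
import numpy as np

def fput_alpha_accel(q, alpha):
    # Accelerations for the FPUT-alpha chain with fixed ends.
    # Bond potential V(d) = d^2/2 + alpha * d^3/3, so V'(d) = d + alpha * d^2.
    qp = np.concatenate(([0.0], q, [0.0]))   # pin the two boundary masses
    d = np.diff(qp)                          # bond stretches q_{j+1} - q_j
    f = d + alpha * d**2                     # tension in each bond
    return np.diff(f)                        # net force on each interior mass

def total_energy(q, p, alpha):
    qp = np.concatenate(([0.0], q, [0.0]))
    d = np.diff(qp)
    return 0.5 * np.sum(p**2) + np.sum(0.5 * d**2 + alpha * d**3 / 3.0)

def simulate(n=32, alpha=0.25, dt=0.02, steps=5000):
    # Excite only the lowest Fourier mode, as in the original 1955 experiment.
    j = np.arange(1, n + 1)
    q = np.sin(np.pi * j / (n + 1))
    p = np.zeros(n)
    a = fput_alpha_accel(q, alpha)
    for _ in range(steps):                   # velocity-Verlet time stepping
        p += 0.5 * dt * a
        q += dt * p
        a = fput_alpha_accel(q, alpha)
        p += 0.5 * dt * a
    return q, p
```

The symplectic integrator keeps the total energy nearly constant over long times, which is what makes the slow, non-thermalizing exchange of energy among the low modes observable.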
Title: Sensitivity analysis in forward and inverse problems
Seminar: Numerical Analysis and Scientific Computing
Speaker: John Darges of North Carolina State University
Contact: Matthias Chung, matthias.chung@emory.edu
Date: 2024-04-09 at 10:00AM
Venue: MSC W201
Abstract: Global sensitivity analysis (GSA) offers a flexible framework for understanding the structural importance of uncertain parameters in mathematical models. We focus on forward and inverse problems arising in uncertainty quantification and the computation of measures of variance-based sensitivity. The models involved in these problems are often computationally expensive to evaluate. Traditional methods for sensitivity analysis then come at an unreasonable cost. A preferred workaround is to create a surrogate model that is less cumbersome to evaluate. Surrogate methods that accelerate GSA are proposed and studied. A new class of surrogate models is introduced, using random weight neural networks for surrogate-assisted GSA, presenting analytical formulas for Sobol' indices. The proposed algorithm enhances accuracy through weight sparsity selection, as shown by its application to forward problems derived from ordinary differential equation systems. We also tackle sensitivity analysis in Bayesian inverse problems. A framework for variance-based sensitivity analysis of Bayesian inverse problems with respect to prior hyperparameters is introduced, along with an efficient algorithm combining importance sampling and surrogate modeling. The approach is demonstrated on a nonlinear Bayesian inverse problem from epidemiology, showcasing its effectiveness in quantifying uncertainty in posterior statistics.
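For readers unfamiliar with the variance-based measures mentioned in the abstract: the first-order Sobol' index S_i = Var(E[f|x_i]) / Var(f) can be estimated with a standard pick-freeze Monte Carlo scheme. A minimal illustrative sketch (a generic textbook estimator on a toy model, not the speaker's surrogate-assisted method):

```python
import numpy as np

def first_order_sobol(f, d, n=100_000, seed=0):
    # Pick-freeze Monte Carlo estimator of the first-order Sobol' indices
    # for a model f with d independent Uniform(0,1) inputs.
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))   # total variance of f
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # resample only coordinate i
        # E[f(B) * (f(AB_i) - f(A))] estimates Var(E[f | x_i])
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S
```

For an additive model such as f(x) = x1 + 2 x2 + 3 x3 on the unit cube, the indices are known in closed form (S_i proportional to the squared coefficients), which makes it a convenient sanity check; this direct Monte Carlo approach needs (d + 2) n model evaluations, which is exactly the cost the surrogate-based methods of the talk aim to avoid.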
Title: Counting 5-isogenies of elliptic curves over the rationals
Seminar: Algebra
Speaker: Santiago Arango-Piñeros of Emory University
Contact: Andrew Kobin, ajkobin@emory.edu
Date: 2024-04-09 at 4:00PM
Venue: MSC W303
Abstract: We study the asymptotic order of growth of the number of 5-isogenies of elliptic curves over the rationals, with bounded naive height. This is forthcoming work in collaboration with Changho Han, Oana Padurariu, and Sun Woo Park.
Title: Multifidelity linear regression for scientific machine learning from scarce data
Seminar: Numerical Analysis and Scientific Computing
Speaker: Elizabeth Qian of Georgia Tech
Contact: Elizabeth Newman, elizabeth.newman@emory.edu
Date: 2024-04-04 at 10:00AM
Venue: MSC W201
Abstract: Machine learning (ML) methods have garnered significant interest as potential methods for learning surrogate models for complex engineering systems for which traditional simulation is expensive. However, in many scientific and engineering settings, training data are scarce due to the cost of generating data from traditional high-fidelity simulations. ML models trained on scarce data have high variance and are sensitive to vagaries of the training data set. We propose a new multifidelity training approach for scientific machine learning that exploits the scientific context where data of varying fidelities and costs are available; for example, high-fidelity data may be generated by an expensive fully resolved physics simulation, whereas lower-fidelity data may arise from a cheaper model based on simplifying assumptions. We use the multifidelity data to define new multifidelity Monte Carlo estimators for the unknown parameters of linear regression models, and provide theoretical analyses that guarantee accuracy and improved robustness to small training budgets. Numerical results show that multifidelity learned models achieve order-of-magnitude lower expected error than standard training approaches when high-fidelity data are scarce.
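The multifidelity idea behind such estimators can be illustrated in its simplest form: a control-variate estimator of a mean that combines a few expensive high-fidelity evaluations with many cheap low-fidelity ones. In the sketch below, the models `f_hi`, `f_lo` and the control-variate weight `alpha` are illustrative assumptions, not the regression estimators of the talk:

```python
import numpy as np

def mfmc_mean(f_hi, f_lo, n_hi, n_lo, alpha, rng):
    # Two-fidelity control-variate estimator of E[f_hi(X)] for X ~ Uniform(0,1):
    #   m = mean of f_hi on n_hi samples
    #       + alpha * ( mean of f_lo on n_lo samples
    #                   - mean of f_lo on the same n_hi samples )
    # The correction term has mean zero, so the estimator is unbiased; when
    # f_lo is cheap (n_lo >> n_hi) and correlated with f_hi, it reduces variance.
    x_hi = rng.random(n_hi)
    x_lo = rng.random(n_lo)
    return (f_hi(x_hi).mean()
            + alpha * (f_lo(x_lo).mean() - f_lo(x_hi).mean()))
```

With f_hi(x) = sin(x) and the crude low-fidelity model f_lo(x) = x, the target mean is E[sin(U)] = 1 - cos(1), and the optimal weight is alpha = Cov(sin U, U) / Var(U), roughly 0.86. The talk applies the same variance-reduction principle to the coefficients of a linear regression rather than to a single mean.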
Title: Improving Sampling and Function Approximation in Machine Learning Methods for Solving Partial Differential Equations
Defense: Dissertation
Speaker: Xingjian Li of Emory University
Contact: Xingjian Li, xingjian.li@emory.edu
Date: 2024-03-29 at 9:30AM
Venue: White Hall 200
Abstract: The numerical solution of partial differential equations (PDEs) remains one of the main focuses of scientific computing. Deep learning and neural network based methods for solving PDEs have gained much attention and popularity in recent years. The universal approximation property of neural networks allows for a cheaper approximation of functions in high dimensions than many traditional numerical methods. Reformulating PDE problems as optimization tasks also enables straightforward implementation and can sometimes circumvent the stability concerns common to classic numerical methods that rely on explicit or semi-explicit time discretization. However, low accuracy and convergence difficulties remain challenges for deep learning based schemes, and fine-tuning neural networks can be time-consuming.

In our work, we present some of our findings on machine learning methods for solving certain PDEs. The work is divided into two parts. In the first, we focus on the popular Physics-Informed Neural Networks (PINNs) framework, specifically for problems of dimension at most three. We present an alternative optimization based algorithm that uses a B-spline polynomial function approximator together with accurate numerical integration on a grid based sampling scheme. Implemented with popular machine learning libraries, our approach serves as a direct substitute for PINNs, and a performance comparison between the two methods over a wide selection of examples shows that, for low dimensional problems, the proposed method improves both accuracy and reliability. In the second part, we focus on a general class of stochastic optimal control (SOC) problems. Leveraging the underlying theory, we propose a neural network solver that solves the SOC problem and the corresponding Hamilton–Jacobi–Bellman (HJB) equation simultaneously. Our method utilizes the stochastic Pontryagin maximum principle and is thus unique in its sampling strategy; combined with a modified loss function, this enables us to tackle high-dimensional problems efficiently.
Title: Quantitative stability of traveling waves
Seminar: Analysis and Differential Geometry
Speaker: Christopher Henderson of University of Arizona
Contact: Maja Taskovic, maja.taskovic@emory.edu
Date: 2024-03-29 at 10:00AM
Venue: MSC W301
Abstract: In their original paper, Kolmogorov, Petrovsky, and Piskunov demonstrated stability of the minimal speed traveling wave with an ingenious compactness argument based on, roughly, the decreasing steepness of the profile. This proof is extremely flexible, yet entirely non-quantitative. On the other hand, more modern PDE proofs of this fact for general reaction-diffusion equations are highly tailored to the particular equation, fairly complicated, and often not sharp in the rate of convergence. In this talk, which will be elementary and self-contained, I will introduce a natural quantity, the shape defect function, that allows a simple approach to quantifying convergence to the traveling wave for a large class of reaction-diffusion equations. Connections to the calculus of variations and generalizations to other settings will be discussed. This is joint work with Jing An and Lenya Ryzhik.
Title: Homogeneous Substructures in Ordered Matchings
Seminar: Combinatorics
Speaker: Andrzej Rucinski of Adam Mickiewicz University, Poznan
Contact: Liana Yepremyan, liana.yepremyan@emory.edu
Date: 2024-03-29 at 4:00PM
Venue: MSC W201
Abstract: An ordered matching M_n is a partition of a linearly ordered set of size 2n into n pairs (called edges). Taking the linear ordering into account, every pair of edges forms one of three patterns: AABB, ABBA, or ABAB. A submatching with all pairs of edges forming the same pattern is called a clique. In my talk, I will first show an Erdős–Szekeres type result guaranteeing a large clique in every matching M_n. Then I will move on to a random (uniform) setting and investigate the largest size of a clique of a given type (pattern) present in almost all matchings. Finally, I will attempt to generalize these results to r-uniform hypermatchings, that is, partitions of a linearly ordered set of size rn into n r-element subsets. This is joint work with Andrzej Dudek and Jarek Grytczuk.
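The three patterns in the abstract can be checked mechanically: write the four endpoints of two edges in increasing order and label each by the edge it belongs to; the resulting word (up to swapping the edge names) is AABB (separated), ABBA (nested), or ABAB (crossing). A small illustrative sketch, not from the talk:

```python
from itertools import combinations

def pattern(e, f):
    # Classify the relative pattern of two disjoint edges (pairs of positions).
    pts = sorted([(x, 'A') for x in e] + [(x, 'B') for x in f])
    word = ''.join(lbl for _, lbl in pts)
    if word in ('AABB', 'BBAA'):
        return 'AABB'   # the two edges are separated
    if word in ('ABBA', 'BAAB'):
        return 'ABBA'   # one edge is nested inside the other
    return 'ABAB'       # the two edges cross

def is_clique(matching, p):
    # A clique of pattern p: every pair of edges forms pattern p.
    return all(pattern(e, f) == p for e, f in combinations(matching, 2))
```

For example, the matching {(1,6), (2,5), (3,4)} is an ABBA clique (fully nested), {(1,4), (2,5), (3,6)} is an ABAB clique (pairwise crossing), and {(1,2), (3,4), (5,6)} is an AABB clique.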
Title: Degeneracy of eigenvalues and singular values of parameter dependent matrices
Seminar: Numerical Analysis and Scientific Computing
Speaker: Alessandro Pugliese of Georgia Tech/University of Bari
Contact: Manuela Manetta, manuela.manetta@emory.edu
Date: 2024-03-28 at 10:00AM
Venue: MSC W201
Abstract: Hermitian matrices have real eigenvalues and an orthonormal set of eigenvectors. Do smooth Hermitian matrix-valued functions have smooth eigenvalues and eigenvectors? Starting from this question, we will first review known results on smooth eigenvalue and singular value decompositions of matrices that depend on one or several parameters, and then focus on our contribution, which has been to devise topological tools to detect and approximate parameter values where eigenvalues or singular values of a matrix-valued function are degenerate (i.e. repeated or zero). The talk will be based on joint work with Luca Dieci (Georgia Tech) and Alessandra Papini (Univ. of Florence).
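A crude way to locate parameter values where eigenvalues coalesce is simply to scan the smallest eigenvalue gap over a parameter grid. The sketch below is a toy illustration on a standard 2x2 real symmetric family with a conical intersection at the origin (eigenvalues ±sqrt(x² + y²)); it is not the topological detection tools of the talk, which avoid this kind of brute-force scan:

```python
import numpy as np

def min_gap_location(H, xs, ys):
    # Scan a two-parameter symmetric matrix family H(x, y) on a grid and
    # return the smallest eigenvalue gap found, with the grid point where
    # it is attained. A near-zero gap flags a (near-)degenerate eigenvalue.
    best_gap, best_pt = np.inf, None
    for x in xs:
        for y in ys:
            w = np.linalg.eigvalsh(H(x, y))   # eigenvalues in ascending order
            gap = np.min(np.diff(w))
            if gap < best_gap:
                best_gap, best_pt = gap, (x, y)
    return best_gap, best_pt

# Model family: eigenvalues are +-sqrt(x^2 + y^2), degenerate exactly at (0, 0).
H = lambda x, y: np.array([[x, y], [y, -x]])
```

For real symmetric families, a double eigenvalue is a codimension-two phenomenon, so isolated degeneracies generically occur in two-parameter families, which is why the scan above is over a plane rather than a line.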