All Seminars
Title: Structured Low-Rank Approximation and the Proxy Point Method
Seminar: Computational Math
Speaker: Mikhail Lepilov of Purdue University
Contact: Lars Ruthotto, lruthotto@emory.edu
Date: 2022-02-17 at 1:00PM
Venue: Online
Abstract: Structured algorithms for large, dense matrices require efficient low-rank approximation methods to obtain their computational cost savings. There are many ways of obtaining such approximations depending on the type of matrix involved. For kernel matrices, analytic approximation methods such as truncated Taylor expansions or the proxy point method have been used in the Fast Multipole Method and other structured matrix algorithms. In this talk, we focus on the proxy point method, in which pairwise interactions between two separated clusters are approximated using the interactions of each cluster with a smaller chosen set of "proxy" points that separate the clusters. We perform a new accuracy analysis of this method when applied to 1D analytic kernels, and we then use it to devise a sublinear-time algorithm for constructing the HSS approximation of certain Cauchy and Toeplitz matrices. Finally, we extend this method and its analysis to analytic kernels in several complex variables.
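To make the idea concrete, here is a minimal numerical sketch (an illustration of the general technique, not the speaker's implementation; all sizes and intervals are arbitrary choices) for the 1D Cauchy kernel k(x, y) = 1/(x - y): interactions between two separated clusters are recovered, to high accuracy, from each cluster's interactions with a small set of proxy points separating the clusters.

```python
import numpy as np

# Cauchy-kernel interactions k(x, y) = 1/(x - y) between two separated
# 1D clusters (a toy setup; sizes and intervals are arbitrary choices).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)            # one cluster in [0, 1]
y = rng.uniform(3.0, 4.0, 200)            # a separated cluster in [3, 4]
K = 1.0 / (x[:, None] - y[None, :])       # dense 200 x 200 interaction matrix

# Proxy points: a small circle in the complex plane enclosing the x-cluster,
# thereby separating it from the y-cluster.  By Cauchy's integral formula,
# 1/(x - y) is a contour integral of 1/((z - y)(z - x)), so a trapezoidal
# discretization in z makes each column of K (approximately) a linear
# combination of the proxy columns 1/(x - z_j).
m = 30                                     # number of proxy points
z = 0.5 + 1.25 * np.exp(2j * np.pi * np.arange(m) / m)
Kxz = 1.0 / (x[:, None] - z[None, :])      # interactions with proxies only

# Verify the compression: project K onto the range of K(x, z).
Q, _ = np.linalg.qr(Kxz)
err = np.linalg.norm(K - Q @ (Q.conj().T @ K)) / np.linalg.norm(K)
print(f"{m} proxy points, relative projection error = {err:.1e}")
```

The projection error decays geometrically in the number of proxy points because of the cluster separation, which is what makes the low-rank factorization so cheap.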
Title: Euler’s Polyhedron Formula and The Euler Characteristic
Seminar: Mathematics
Speaker: Daniel Hess of University of Chicago
Contact: Bree Ettinger, betting@emory.edu
Date: 2022-02-04 at 10:00AM
Venue: MSC W201 and Zoom
Abstract: A standard soccer ball is constructed using 12 regular pentagons and 20 regular hexagons. Is it possible to build one using only pentagons? How about only hexagons? It turns out that one of these is possible and one is not! The key to answering these questions is Euler’s Polyhedron Formula, which expresses a certain relationship between the number of vertices, edges, and faces in any convex polyhedron. In this talk, we will discuss this formula, the more general Euler characteristic, and applications such as the classification of the Platonic solids and triangulations of surfaces.
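As a quick worked example (the arithmetic here, not content from the talk itself), Euler's formula V - E + F = 2 already settles the soccer-ball question, using the facts that every edge borders two faces and every vertex of such a ball meets three faces:

```python
# Check V - E + F for polyhedra built from pentagons and hexagons,
# where every edge borders 2 faces and every vertex meets 3 faces.
def euler_characteristic(pentagons, hexagons):
    sides = 5 * pentagons + 6 * hexagons
    F = pentagons + hexagons
    E = sides // 2        # each edge is shared by two faces
    V = sides // 3        # each vertex is shared by three faces
    return V - E + F

print(euler_characteristic(12, 20))  # soccer ball: 2, as Euler's formula requires
print(euler_characteristic(12, 0))   # pentagons alone (the dodecahedron): 2
print(euler_characteristic(0, 20))   # hexagons alone: 0, so no convex polyhedron
```

Hexagons alone give V - E + F = F(1 - 3 + 2) = 0 no matter how many faces are used, so the count can never reach the required value of 2.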
Title: Hamiltonian system and spectral inverse problems
Seminar: Analysis Reading Seminar
Speaker: Guangqiu Liang of Emory University
Contact: Yiran Wang, yiran.wang@emory.edu
Date: 2022-02-04 at 3:00PM
Venue: MSC E406
Abstract:
Title: Geometric Group Theory and Untangling Earphones
Seminar: Mathematics
Speaker: Neha Gupta of Georgia Institute of Technology
Contact: Bree Ettinger, betting@emory.edu
Date: 2022-02-02 at 10:00AM
Venue: MSC W201 and Zoom
Abstract: Suppose you get your earphones entangled around a doughnut with two holes... an entirely probable scenario, right?! Then how "big" does your doughnut need to be, for you to successfully untangle your earphones? This is going to be a gentle introduction to groups, and how they connect to geometry. We will slowly build up to answering our original question. No prior experience with groups, geometry, or topology is required or assumed. This is joint work with Ilya Kapovich.
Title: Riemannian Geometry and Biomedical Data
Seminar: Mathematics
Speaker: Sima Ahsani of Emory University
Contact: Bree Ettinger, betting@emory.edu
Date: 2022-01-28 at 10:00AM
Venue: MSC W201 and Zoom
Abstract: Statistics is a science for everyone who wishes to collect, analyze, interpret, and understand data. Over the last few years, rapid technological developments have led us to handle increasingly large and complex data. One example is biomedical data that lie on matrix manifolds, where early detection of disease can help prevent or control illness and provide better health care at lower cost. One of the most important steps in analyzing this type of data is therefore understanding the structure of the surfaces on which the data live. To this end, differential geometry allows us to develop local methods for understanding the global properties of these surfaces. In this talk, after giving some examples of datasets that lie on curved spaces, I will provide an intuitive definition of Riemannian manifolds and their basic properties. Then, I will describe matrix manifolds, explain how to measure the distance between two points, discuss the challenges that arise when we want to compute the mean of a dataset, and mention techniques that can be used to tackle these challenges. Finally, I will point to resources that can help undergraduate students learn more about this research area, develop their own ideas and interests, and take steps toward building their future research.
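One concrete instance of a matrix manifold in this setting is the space of symmetric positive-definite (SPD) matrices, which arise, for example, as covariance matrices of biomedical signals. A common choice of metric (assumed here purely for illustration; the talk may use a different one) is the affine-invariant metric, under which the distance between SPD matrices A and B is the Frobenius norm of log(A^{-1/2} B A^{-1/2}):

```python
import numpy as np

def spd_dist(A, B):
    """Affine-invariant Riemannian distance between SPD matrices:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F, via eigendecompositions."""
    w, U = np.linalg.eigh(A)
    A_inv_sqrt = U @ np.diag(w ** -0.5) @ U.T
    M = A_inv_sqrt @ B @ A_inv_sqrt       # SPD whenever A and B are SPD
    mw = np.linalg.eigvalsh(M)            # positive eigenvalues of M
    return np.sqrt(np.sum(np.log(mw) ** 2))

# Two random SPD matrices built from Gaussian factors.
rng = np.random.default_rng(1)
X, Y = rng.standard_normal((2, 4, 4))
A = X @ X.T + 4 * np.eye(4)
B = Y @ Y.T + 4 * np.eye(4)

print(spd_dist(A, A))   # 0: the distance from a point to itself
print(spd_dist(A, B))   # positive, and equal to spd_dist(B, A)
```

Unlike the flat (Euclidean) distance between the matrix entries, this distance respects the curved geometry of the SPD manifold, which is exactly why computing means of such datasets requires the specialized techniques the abstract mentions.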
Title: Game Theory, AI, and Optimal Transportation
Seminar: Computational Math
Speaker: Levon Nurbekyan of University of California, Los Angeles
Contact: Lars Ruthotto, lruthotto@emory.edu
Date: 2022-01-24 at 10:00AM
Venue: MSC W201
Abstract: The modern world constitutes a network of complex socioeconomic interactions leading to increasingly challenging decision-making problems for participating agents such as households, governments, firms, and autonomous systems. We consequently need refined mathematical models and solution techniques for addressing these difficulties. In this talk, I will demonstrate how to apply mathematical game theory, optimal control, and statistical physics to model large systems of interacting agents and discuss novel dimension reduction and machine learning techniques for their solution. An intriguing aspect of this research is that the mathematics of interacting agent systems provides a foundation for fast and robust core machine learning algorithms and their analysis. For example, I will demonstrate how to solve the regularity problem in normalizing flows based on their "crowd-motion" or optimal transportation interpretation. Yet another essential utility of optimal transportation in data science is that it provides a metric in the space of probability measures. I will briefly discuss the application of this metric for robust solution methods in inverse problems appearing in physical applications. I will conclude by discussing future research towards socioeconomic applications, data science, and intelligent autonomous systems.
Title: Zarankiewicz problem, VC-dimension, and incidence geometry
Seminar: Discrete Mathematics
Speaker: Cosmin Pohoata of Yale University
Contact: Dwight Duffus, dwightduffus@emory.edu
Date: 2022-01-21 at 10:00AM
Venue: Zoom and W201
Abstract: The Zarankiewicz problem is a central problem in extremal graph theory, which lies at the intersection of several areas of mathematics. It asks for the maximum number of edges in a bipartite graph on $2n$ vertices, where each side of the bipartition contains $n$ vertices, and which does not contain the complete bipartite graph $K_{s,t}$ as a subgraph. One of the many reasons this problem is rather special among Turán-type problems is that the extremal graphs in question, whenever available, always seem to have to be of an algebraic nature, in particular witnesses to basic intersection theory phenomena. The most tantalizing case is by far the diagonal problem, for which the answer is unknown for most values of $s = t$, and where it is a complete mystery what the extremal graphs could look like. In this talk, we will discuss a new phenomenon related to an important variant of this problem, which is the analogous question in bipartite graphs with bounded VC-dimension. We will present several new consequences in incidence geometry, which improve upon classical results. This is based on joint work with Oliver Janzer.
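For orientation (classical background, not results from the talk), the Kővári–Sós–Turán theorem gives the standard upper bound for the Zarankiewicz number $z(n; s, t)$, the maximum number of edges in such a $K_{s,t}$-free bipartite graph:

```latex
% Kővári–Sós–Turán bound for fixed t >= s >= 2, and the classical
% K_{2,2}-free case, where point–line incidences in a projective plane
% show the n^{3/2} growth rate is correct up to the constant:
z(n; s, t) = O\!\left( n^{2 - 1/s} \right),
\qquad
z(n; 2, 2) \le \tfrac{1}{2}\!\left( 1 + \sqrt{4n - 3}\, \right) n .
```

The "diagonal" mystery in the abstract is precisely that for most $s = t \ge 4$ no construction matching the $n^{2 - 1/s}$ upper bound is known.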
Title: Coloring hypergraphs of small codegree, and a proof of the Erdős–Faber–Lovász conjecture
Seminar: Discrete Mathematics
Speaker: Tom Kelly of The University of Birmingham
Contact: Dwight Duffus, dwightduffus@emory.edu
Date: 2022-01-19 at 10:00AM
Venue: Zoom
Abstract: A long-standing problem in the field of graph coloring is the Erdős–Faber–Lovász conjecture (posed in 1972), which states that the chromatic index of any linear hypergraph on n vertices is at most n, or equivalently, that a nearly disjoint union of n complete graphs on at most n vertices has chromatic number at most n. In joint work with Dong Yeap Kang, Daniela Kühn, Abhishek Methuku, and Deryk Osthus, we proved this conjecture for every sufficiently large n. Recently, we also solved a related problem of Erdős from 1977 on the chromatic index of hypergraphs of small codegree. In this talk, I will survey the history behind these results and discuss some aspects of the proofs.
Title: Tensors and Training: Optimal Multidimensional Representations and Efficient Deep Learning
Seminar: Computational Math
Speaker: Elizabeth Newman of Emory University
Contact: Lars Ruthotto, lruthotto@emory.edu
Date: 2022-01-18 at 10:00AM
Venue: MSC W201
Abstract: The explosion of available data and the revolution in computing technologies have created a critical need for both compressed representations of large, real-world data and powerful data-driven algorithms. In this talk, we will address these needs in two distinct ways: by obtaining optimal multidimensional approximations, and by designing efficient deep learning algorithms. The traditional approach to dimensionality reduction and feature extraction is the matrix singular value decomposition (SVD), which presupposes that data have been arranged in matrix format. In the first half of this talk, we will show that high-dimensional datasets are more compressible when treated as tensors (multiway arrays). We will obtain these representations using a tensor algebra under which notions of rank and the tensor SVD are consistent with their matrix counterparts. This framework yields provably optimal approximations, and we will support this theory with empirical studies. Deep neural networks (DNNs), flexible models composed of simple layers parameterized by weights, have been successful high-dimensional function approximators in countless applications. However, training DNNs (i.e., finding a good set of weights) is notoriously challenging, requiring significant time and computational resources. In the second half of this talk, we will describe two approaches for training separable DNNs, the commonly-used architecture where the weights of the final layer are applied linearly. We will leverage this linearity using partial optimization in a deterministic setting and iterative sampling in a stochastic setting. We will demonstrate empirically that both approaches yield faster convergence to more accurate DNN models and less tuning of hyperparameters. We will conclude with a discussion about new ideas to bring these two powerful data-based techniques together.
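One well-known tensor algebra with a matrix-consistent SVD (chosen here as an illustration; the talk may use a more general construction) is the t-product, whose tensor SVD amounts to an FFT along the third mode, a matrix SVD of each frontal slice in the transform domain, and an inverse FFT:

```python
import numpy as np

def tsvd_truncate(A, k):
    """Truncated tensor SVD under the t-product: FFT along mode 3,
    truncate each frontal slice's matrix SVD to rank k, inverse FFT."""
    Ahat = np.fft.fft(A, axis=2)
    out = np.empty_like(Ahat)
    for i in range(A.shape[2]):
        U, s, Vh = np.linalg.svd(Ahat[:, :, i], full_matrices=False)
        out[:, :, i] = (U[:, :k] * s[:k]) @ Vh[:k, :]
    return np.real(np.fft.ifft(out, axis=2))

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8, 5))
# Keeping all 8 terms reproduces A exactly; smaller k gives the
# Frobenius-optimal approximation of that tubal rank in this algebra.
err_full = np.linalg.norm(tsvd_truncate(A, 8) - A)
err_low = np.linalg.norm(tsvd_truncate(A, 3) - A)
print(err_full, err_low)
```

Because the truncation happens slice-by-slice in the transform domain, an Eckart–Young-type optimality result carries over from the matrix SVD, which is the sense in which the tensor notions stay "consistent with their matrix counterparts."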
Title: Novel Methods for Parameter Estimation and Inverse Problems: from Big Data to Surrogate Data
Seminar: Computational Math
Speaker: Matthias Chung of Virginia Tech
Contact: Lars Ruthotto, lruthotto@emory.edu
Date: 2022-01-14 at 10:00AM
Venue: MSC W201
Abstract: Emerging fields such as data analytics, machine learning, and uncertainty quantification heavily rely on efficient computational methods for solving inverse problems. With growing model complexities and ever-increasing data volumes, inference methods have exceeded their limits of applicability, and novel methods are urgently needed. In this talk, we discuss modern challenges in parameter estimation and inverse problems and examine novel approaches to overcome such challenges. We focus on massive least-squares problems, where the size of the forward process exceeds the storage capabilities of computer memory or the data is simply not available all at once, and inference for dynamical systems with noisy data, model uncertainties, and unknown mechanisms. We present sampled limited memory approaches, where an approximation of the global curvature of the underlying least-squares problem is used to speed up initial convergence while automatically addressing potential ill-posedness. This research is a fundamental building block for accelerating machine learning approaches. Then, we discuss a novel surrogate data approach that merges mathematical models and stochastic processes to ultimately provide stringent uncertainty estimates. We demonstrate the benefits of our proposed methods for a wide range of application areas, including medical imaging and systems biology.
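To illustrate the flavor of sampling-based least squares (a generic sketch-and-solve scheme, not the speaker's sampled limited-memory method), a tall problem whose rows cannot all be used at once can be compressed with a random sketch and solved at a fraction of the cost:

```python
import numpy as np

# Sketch-and-solve for min_x ||Ax - b||_2 when A has far too many rows:
# compress the rows with a random Gaussian sketch S and solve the small
# problem min_x ||S(Ax - b)||_2 instead.
rng = np.random.default_rng(0)
m, n, s = 10000, 50, 400              # tall problem, 400-row sketch
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

S = rng.standard_normal((s, m)) / np.sqrt(s)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)

rel_err = np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full)
print(f"sketched vs. full solution, relative error = {rel_err:.1e}")
```

The sketched normal equations involve only an s-by-n system, so the cost is driven by the sketch size rather than the full row count; the abstract's sampled limited-memory approach refines this basic idea with curvature approximations and built-in regularization.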