All Seminars
Title: Total curvature and the isoperimetric inequality: Proof of the Cartan-Hadamard conjecture |
---|
Seminar: Analysis and Differential Geometry |
Speaker: Mohammad Ghomi of Georgia Institute of Technology |
Contact: Vladimir Oliker, oliker@emory.edu |
Date: 2019-09-17 at 4:00PM |
Venue: PAIS 220 |
Abstract: The classical isoperimetric inequality states that in Euclidean space spheres provide unique enclosures of least perimeter for any given volume. In this talk we show that this inequality also holds in spaces of nonpositive curvature, known as Cartan-Hadamard manifolds, as conjectured by Aubin, Gromov, Burago, and Zalgaller. The proof is based on a comparison formula for total curvature of level sets in Riemannian manifolds, and estimates for the smooth approximation of the signed distance function, via inf-convolution and Reilly type formulas among other techniques. Immediate applications include sharp extensions of Sobolev and Faber-Krahn inequalities to spaces of nonpositive curvature. This is joint work with Joel Spruck. |
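For reference, the classical inequality invoked above admits the following standard formulation (notation chosen here for concreteness): for a bounded domain $\Omega \subset \mathbb{R}^n$ with perimeter $\operatorname{Per}(\Omega)$ and volume $|\Omega|$, \[ \operatorname{Per}(\Omega) \,\ge\, n\,\omega_n^{1/n}\,|\Omega|^{(n-1)/n}, \] where $\omega_n$ is the volume of the unit ball, with equality exactly when $\Omega$ is a ball. The conjecture discussed in the talk asserts that the same inequality holds for domains in an $n$-dimensional Cartan-Hadamard manifold.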
Title: Structured Matrix Approximation by Separation and Hierarchy |
---|
Seminar: Numerical Analysis and Scientific Computing |
Speaker: Difeng Cai of Emory University |
Contact: Yuanzhe Xi, yxi26@emory.edu |
Date: 2019-09-13 at 2:00PM |
Venue: MSC W303 |
Abstract: The past few years have seen the advent of big data, which brings unprecedented convenience to our daily life. Meanwhile, from a computational point of view, a central question arises amid the exploding amount of data: how to tame big data in an economical and efficient way. In the context of matrix computations, the question comes down to the ability to handle large dense matrices. In this talk, I will first introduce data-sparse hierarchical representations for dense matrices. Then I will present the recent development of a versatile algorithm called SMASH that operates on dense matrices with optimal complexity in the most general setting. Various applications will be presented to demonstrate the advantage of SMASH over traditional approaches. |
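To fix ideas, data-sparse hierarchical representations of the kind referred to above typically compress off-diagonal blocks by low-rank factors (an illustrative sketch, not the specific format used in SMASH): \[ A \,=\, \begin{pmatrix} A_{11} & U_1 V_2^{T}\\ U_2 V_1^{T} & A_{22} \end{pmatrix}, \qquad U_i,\, V_i \in \mathbb{R}^{(n/2)\times r},\quad r \ll n, \] with the diagonal blocks $A_{11}$ and $A_{22}$ partitioned recursively in the same way. Storing the factors instead of the full blocks reduces the memory and arithmetic cost from $O(n^2)$ toward (nearly) linear in $n$.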
Title: Analytic representations of large discrete structures |
---|
Seminar: Combinatorics |
Speaker: Daniel Kral of Masaryk University and the University of Warwick |
Contact: Dwight Duffus, dwightduffus@emory.edu |
Date: 2019-09-13 at 4:00PM |
Venue: MSC W301 |
Abstract: The theory of combinatorial limits aims to provide analytic models representing large graphs and other discrete structures. Such analytic models have found applications in various areas of computer science and mathematics, for example, in relation to the study of large networks in computer science. We will provide a brief introduction to this rapidly developing area of combinatorics and we will then focus on several questions motivated by problems from extremal combinatorics and computer science. The two topics that we will particularly discuss are quasirandomness of discrete structures and a counterexample to a conjecture of Lovász, which was one of the two most cited conjectures in the area and which informally says that optimal solutions to extremal graph theory problems can be made asymptotically unique by introducing finitely many additional constraints. |
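A convenient entry point to this theory is the homomorphism density (a standard notion in the area, recalled here for orientation): \[ t(H,G) \,=\, \frac{\hom(H,G)}{|V(G)|^{|V(H)|}}, \qquad t(H,W) \,=\, \int_{[0,1]^{V(H)}} \prod_{uv \in E(H)} W(x_u, x_v)\,\prod_{v \in V(H)} dx_v , \] where $\hom(H,G)$ counts homomorphisms from $H$ to $G$. A sequence of graphs $(G_n)$ is convergent when $t(H,G_n)$ converges for every fixed graph $H$, and its limit can be represented by a graphon, i.e. a symmetric measurable function $W\colon [0,1]^2 \to [0,1]$.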
Title: Computing unit groups |
---|
Seminar: Algebra |
Speaker: Justin Chen of Georgia Tech |
Contact: David Zureick-Brown, dzb@mathcs.emory.edu |
Date: 2019-09-10 at 4:00PM |
Venue: MSC W303 |
Abstract: The group of units of a ring is one of the most basic, yet mysterious, invariants of the ring. Little is known about the structure of the unit group in general, and even less about explicit algorithms for computing it, although the need for these does arise in applications such as tropical geometry. I will discuss some general questions about unit groups, and then specialize to the case of coordinate rings of classical algebraic varieties - in particular, describing explicit algorithms for computation in the case of smooth curves of low genus (rational and elliptic). This is based on joint work with Sameera Vemulapalli and Leon Zhang. |
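As a simple worked example of the kind of computation at stake (chosen here purely for illustration), consider the coordinate ring $k[x,x^{-1}]$ of the rational curve obtained by removing the points $0$ and $\infty$ from $\mathbb{P}^1$ over a field $k$. Its unit group is \[ k[x,x^{-1}]^{\times} \,=\, \{\, c\,x^{m} \,:\, c \in k^{\times},\ m \in \mathbb{Z} \,\} \,\cong\, k^{\times} \times \mathbb{Z}, \] so puncturing a rational curve enlarges the unit group by a free abelian factor; the algorithms mentioned above aim to make such descriptions explicit for smooth curves of low genus.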
Title: A Step in the Right Dimension: Tensor Algebra and Applications |
---|
Seminar: Numerical Analysis and Scientific Computing |
Speaker: Elizabeth Newman of Emory University |
Contact: Yuanzhe Xi, yxi26@emory.edu |
Date: 2019-09-06 at 2:00PM |
Venue: MSC W303 |
Abstract: As data have become more complex to reflect multi-way relationships in the real world, tensors have become essential to reveal latent content in multidimensional data. In this talk, we will focus on a tensor framework based on the M-product, a general class of tensor-tensor products which imposes algebraic structure in a high-dimensional space (Kilmer and Martin, 2011; Kernfeld et al., 2015). The induced M-product algebra inherits matrix-mimetic properties and offers provably optimal, compressed representations. To demonstrate the efficacy of working in an algebraic tensor framework, we will explore two applications: classifying data using tensor neural networks and forming sparse representations using tensor dictionaries. |
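For orientation, the M-product of third-order tensors can be summarized as follows (an informal sketch of the construction in the cited references): given an invertible matrix $M$ acting along the third mode, set \[ \widehat{\mathcal{A}} \,=\, \mathcal{A} \times_3 M, \qquad \bigl(\widehat{\mathcal{A} \star_M \mathcal{B}}\bigr)^{(i)} \,=\, \widehat{\mathcal{A}}^{(i)}\,\widehat{\mathcal{B}}^{(i)}, \qquad \mathcal{A} \star_M \mathcal{B} \,=\, \bigl(\widehat{\mathcal{A} \star_M \mathcal{B}}\bigr) \times_3 M^{-1}, \] where $\times_3$ applies $M$ along the third mode and the superscript $(i)$ denotes the $i$-th frontal slice. Choosing $M$ to be the discrete Fourier transform matrix recovers the t-product of Kilmer and Martin, which explains why the resulting algebra behaves so much like matrix algebra slice by slice.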
Title: Spanning subgraphs in uniformly dense and inseparable graphs |
---|
Seminar: Combinatorics |
Speaker: Mathias Schacht of The University of Hamburg and Yale University |
Contact: Dwight Duffus, dwightduffus@emory.edu |
Date: 2019-09-06 at 4:00PM |
Venue: MSC W301 |
Abstract: We consider sufficient conditions for the existence of k-th powers of Hamiltonian cycles in n-vertex graphs G with minimum degree cn for arbitrarily small c > 0. About 20 years ago Komlós, Sárközy, and Szemerédi resolved the conjectures of Pósa and Seymour and obtained the optimal minimum degree condition for this problem by showing that c = k/(k+1) suffices for large n. For smaller values of c the given graph G must satisfy additional assumptions. We show that inducing subgraphs of density d > 0 on linear subsets of vertices and being inseparable, in the sense that every cut has density at least c, are sufficient assumptions for this problem and, in fact, for a variant of the bandwidth theorem. This generalises recent results of Staden and Treglown. |
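In symbols, the result of Komlós, Sárközy, and Szemerédi referred to above states that for every $k$ and all sufficiently large $n$, \[ \delta(G) \,\ge\, \frac{k}{k+1}\,n \quad\Longrightarrow\quad G \text{ contains the } k\text{-th power of a Hamiltonian cycle}, \] and the constant $k/(k+1)$ cannot be lowered in general; the talk asks which additional structural assumptions (uniform density and inseparability) allow the minimum degree to drop to $cn$ for arbitrarily small $c > 0$.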
Title: Modular linear differential equations |
---|
Seminar: Algebra |
Speaker: Kiyokazu Nagatomo of Osaka University |
Contact: David Zureick-Brown, dzb@mathcs.emory.edu |
Date: 2019-09-03 at 4:00PM |
Venue: MSC W303 |
Abstract: The most naive definition of \textit{modular linear differential equations} (MLDEs) would be linear differential equations whose space of solutions is invariant under the weight $k$ slash action of $\Gamma_1=SL_2(\mathbb{Z})$, for some $k$. Then, under an analytic condition on the coefficient functions and the Wronskian of a basis of the space of solutions, we have (obvious) expressions of MLDEs as \[ L(f) \,=\,\mathfrak{d}_k^n(f)+\sum_{i=2}^nP_{2i}\mathfrak{d}_k^{n-i}(f), \] where $P_{2i}$ is a modular form of weight $2i$ on $SL_2(\mathbb{Z})$ and $\mathfrak{d}_k(f)$ is the \textit{Serre derivative}. (We could replace $\Gamma_1$ by a Fuchsian subgroup of $SL_2(\mathbb{R})$ and allow the modular forms $P_{2i}$ to be meromorphic.) However, the iterated Serre derivative $\mathfrak{d}_k^n(f)$ (called a ``higher Serre derivation'' because, as an operator, it preserves modularity) is very complicated since it involves the Eisenstein series $E_2$. MLDEs, of course, can also be given in the form \[ \mathsf{L}(f) \,=\, D^n(f)+\sum_{i=1}^nQ_iD^{n-i}(f), \quad\text{where}\quad D=\frac{1}{2\pi\sqrt{-1}}\frac{d}{d\tau}. \] Then it is not easy to tell whether the equation above is an MLDE, beyond the fact that the $Q_i$ are quasimodular forms. Very recently, Y.~Sakai and D.~Zagier (my collaborators) found formulas for $\mathsf{L}(f)$ in terms of Rankin--Cohen products between $f$ and forms $g_i$, where each $g_i$ is a modular form of weight $2i$ given as a linear function of the derivatives of the $Q_j$. Moreover, there are \textit{inversion formulas} which express $Q_i$ as a linear function of the derivatives of the $g_{j}$. The most important fact is that the order $n$ and $n-1$ parts are equal to the so-called higher Serre derivative in the sense of Kaneko and Koike, where the group is $\Gamma_1$. (This holds for any Fuchsian group.) Finally, the most important feature of my talk is that I will use a \textbf{blackboard} instead of \textbf{slides}. |
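For readers who have not seen it, the Serre derivative appearing in the first display is given by the standard formula \[ \mathfrak{d}_k(f) \,=\, D(f) - \frac{k}{12}\,E_2\,f, \qquad D=\frac{1}{2\pi\sqrt{-1}}\frac{d}{d\tau}, \] which maps modular forms of weight $k$ to modular forms of weight $k+2$; the iterate $\mathfrak{d}_k^n$ is understood as the composition $\mathfrak{d}_{k+2(n-1)}\circ\cdots\circ\mathfrak{d}_{k+2}\circ\mathfrak{d}_k$, which is why the Eisenstein series $E_2$ proliferates when the first display is expanded in terms of $D$.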
Title: Iterative regularization methods for large-scale linear inverse problems |
---|
Seminar: Numerical Analysis and Scientific Computing |
Speaker: Silvia Gazzola of University of Bath |
Contact: James Nagy, jnagy@emory.edu |
Date: 2019-08-27 at 2:00PM |
Venue: MSC W301 |
Abstract: Inverse problems are ubiquitous in many areas of Science and Engineering and, once discretized, they lead to ill-conditioned linear systems, often of huge dimensions: regularization consists in replacing the original system by a nearby problem with better numerical properties, in order to find a meaningful approximation of its solution. After briefly surveying some standard regularization methods, both iterative (such as many Krylov methods) and direct (such as the Tikhonov method), this talk will introduce a recent class of methods that merge an iterative and a direct approach to regularization. In particular, strategies for choosing the regularization parameter and the regularization matrix will be emphasized, eventually leading to the computation of approximate solutions of Tikhonov problems involving a regularization term expressed in p-norms. |
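For reference, the Tikhonov problems mentioned at the end have the generic form (written here with a generic regularization matrix $L$ and parameter $\lambda > 0$) \[ \min_{x}\; \|Ax - b\|_2^2 + \lambda\,\|Lx\|_p^p, \] where $A$ is the discretized forward operator and $b$ the measured data; choosing $\lambda$, $L$, and $p$ well is precisely the parameter- and matrix-selection issue emphasized in the abstract.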
Title: Learning from data through the lens of mathematical models: Bayesian Inverse Problems and Uncertainty Quantification |
---|
Seminar: Numerical Analysis and Scientific Computing |
Speaker: Umberto Villa, Ph.D. of Washington University in St Louis |
Contact: Alessandro Veneziani, ale@mathcs.emory.edu |
Date: 2019-06-24 at 2:00PM |
Venue: MSC W301 |
Abstract: Recent years have seen rapid growth in the volume of observational and experimental data acquired from physical, biological or engineering systems. A fundamental question in several areas of science, engineering, medicine, and beyond is how to extract insight and knowledge from all of those available data. This process of learning from data is at its core a mathematical inverse problem. That is, given (possibly noisy) data and a (possibly uncertain) forward model describing the map from parameters to data, we seek to reconstruct or infer the parameters that characterize the model. Inverse problems are often ill-posed, i.e. their solution may not exist or may not be unique or may be unstable to perturbation in the data. Simply put, there may not be enough information in the data to fully determine the model parameters. In these cases, uncertainty is a fundamental feature of the inverse problem. The goal then is to both reconstruct the model parameters and quantify the uncertainty in such reconstruction. The ability to quantify these uncertainties is crucial to reliably predict the future behavior of the physical, biological or engineering systems, and to make informed decisions under uncertainty. This talk will illustrate the mathematical concepts and computational tools necessary for the solution of inverse problems in a deterministic and probabilistic (Bayesian) framework. Examples of inverse problems arising in imaging, geoscience, material engineering, and other fields of science will be presented. https://engineering.wustl.edu/Profiles/Pages/Umberto-Villa.aspx |
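In the Bayesian framework mentioned above, the solution of the inverse problem is a posterior distribution given, schematically, by Bayes' rule: \[ \pi_{\mathrm{post}}(m \mid d) \,\propto\, \pi_{\mathrm{like}}(d \mid m)\,\pi_{\mathrm{prior}}(m), \] where $m$ denotes the model parameters and $d$ the observed data; the likelihood encodes the (possibly uncertain) forward model together with the noise, the prior encodes information available before the data are seen, and quantifying uncertainty amounts to exploring or summarizing this posterior.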
Title: Discretize-Optimize Methods for Residual Neural Networks |
---|
Seminar: Numerical Analysis and Scientific Computing |
Speaker: Derek Onken of Emory University |
Contact: Lars Ruthotto, lruthotto@emory.edu |
Date: 2019-04-26 at 2:00PM |
Venue: MSC W301 |
Abstract: Neural networks (discrete universal approximators) demonstrate impressive performance in myriad tasks. Specifically, Residual Neural Networks (ResNets) have won numerous image classification contests since they were introduced a few years ago. Deep learning centers on adding more and more layers (and thus parameters) to these networks in an effort to improve performance. In this talk, we interpret ResNets as a discretization of an ordinary differential equation (ODE). This viewpoint exposes the similarity between the learning problem and problems of optimal control of the ODE. We use a discretize-optimize approach for training the weights of the ResNet and study the impact of the particular discretization strategy on the network performance. Varying the discretization of the features and parameters allows us to determine whether the improved accuracy from deeper architectures stems from the larger number of parameters or more layers. |
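The ODE viewpoint described above can be summarized in one line (a generic residual layer is used here; the architectures in the talk may differ in detail): \[ x_{j+1} \,=\, x_j + h\,f(x_j,\theta_j) \quad\longleftrightarrow\quad \frac{dx}{dt} \,=\, f\bigl(x(t),\theta(t)\bigr), \] i.e. a ResNet with step size $h$ is a forward-Euler discretization of a parameter-dependent ODE, so training the weights $\theta$ becomes an optimal control problem; the discretize-optimize strategy fixes the discretization first and then optimizes the resulting finite-dimensional problem.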