All Seminars

Title: Importance Sampling for Approximating High-Dimensional Kernel Matrices
Seminar: Numerical Analysis and Scientific Computing
Speaker: Christopher Musco of New York University
Contact: Yuanzhe Xi, yxi26@emory.edu
Date: 2021-02-19 at 1:30PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
Randomized algorithms for compressing large matrices and accelerating linear algebraic computations have received significant attention in recent years. Unfortunately, popular methods based on Johnson-Lindenstrauss random projections and related techniques cannot be efficiently applied to dense implicit matrices that appear in many data applications -- specifically, they are difficult to apply to large kernel Gram matrices constructed from high-dimensional data. In this talk, I will discuss recent efforts to address this issue by instead compressing large kernel matrices using randomized importance sampling methods, specifically those based on leverage scores. I will describe a recursive algorithm for leverage score sampling that provably computes a near-optimal rank-$k$ approximation to any $n \times n$ kernel Gram matrix in roughly $O(nk^2)$ time, which is sublinear in the size of that matrix; random projection methods, on the other hand, require at least $O(n^2)$ time. I will also touch on connections between leverage score sampling and the practically popular "random Fourier features" method, including insights on how to improve this method using tools from Randomized Numerical Linear Algebra.
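As a rough illustration of the leverage-score approach (a toy sketch under my own naming, not the recursive algorithm from the talk: it computes exact ridge leverage scores in $O(n^3)$ time, whereas the talk's method is recursive and sublinear), one can sample landmark columns with probability proportional to ridge leverage scores and form a Nyström approximation:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_leverage(X, k, lam=1e-2, gamma=1.0, seed=None):
    """Toy Nystrom approximation via ridge-leverage-score sampling.

    Samples s ~ k log k landmark points with probability proportional to
    the ridge leverage scores tau_i = (K (K + lam I)^{-1})_{ii}, then
    returns the factors of the approximation K_hat = C W^+ C^T.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # For illustration we form K and compute exact leverage scores, an
    # O(n^3) step; the recursive algorithm avoids ever building K.
    K = rbf_kernel(X, X, gamma)
    tau = np.diag(K @ np.linalg.inv(K + lam * np.eye(n)))
    p = tau / tau.sum()
    s = min(n, k + int(np.ceil(2 * k * np.log(k + 1))))
    S = rng.choice(n, size=s, replace=False, p=p)
    C = K[:, S]                    # n x s block of sampled columns
    W = K[np.ix_(S, S)]            # s x s block on the sampled rows/cols
    return C, np.linalg.pinv(W)    # K_hat = C @ W_pinv @ C.T

# Usage: build the approximation for 500 points in 20 dimensions.
X = np.random.default_rng(0).normal(size=(500, 20))
C, Wp = nystrom_leverage(X, k=10, seed=1)
K_hat = C @ Wp @ C.T
```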

Bio: Christopher Musco is an Assistant Professor at New York University's Tandon School of Engineering. He received his Ph.D. from the theoretical computer science group at MIT in 2018. Christopher's research focuses on scalable algorithms for matrix problems that arise in machine learning and data science, with an emphasis on randomized approximation algorithms.
Title: Divisors on Non-Generic Curves
Seminar: Algebra and Number Theory
Speaker: Kaelin Cook-Powell of University of Kentucky
Contact: David Zureick-Brown, dzureic@emory.edu
Date: 2021-02-18 at 1:00PM
Venue: https://emory.zoom.us/j/92174650436
Abstract:
In algebraic geometry, the study of divisors on curves, known as Brill-Noether theory, has been a rich area of study for decades. When a curve $C$ is general in $M_g$, the moduli space parameterizing all curves of genus $g$, much is known about the spaces of divisors of prescribed rank $r$ and degree $d$, denoted $W^r_d(C)$. However, when $C$ is not general, the loci $W^r_d(C)$ can exhibit bizarre and pathological behavior. Divisors on a curve are intimately related to line bundles on that curve, so we will then introduce the idea of the splitting type of a line bundle, a more refined invariant than the rank and degree. The main goal of this talk will be to define and analyze the spaces of line bundles with a given splitting type and argue that these are a ``correct'' generalization of the spaces $W^r_d(C)$. All of this can be done from a purely combinatorial standpoint and involves an in-depth study of certain special families of Young tableaux that depend only on a given splitting type.
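For background (a classical fact of Brill-Noether theory, not part of the abstract): when $C$ is general in $M_g$ and the Brill-Noether number is nonnegative, $W^r_d(C)$ is nonempty of dimension

\[
  \rho(g,r,d) \;=\; g - (r+1)(g-d+r),
\]

and it is empty when $\rho < 0$; the pathologies mentioned above occur precisely when this dimension count fails.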
Title: Direct Sampling Algorithms in Inverse Scattering
Seminar: Mathematics Colloquium
Speaker: Isaac Harris of Purdue University
Contact: James Nagy, jnagy@emory.edu
Date: 2021-02-15 at 12:30PM
Venue: https://emory.zoom.us/j/93146817322?pwd=WTVrOEdnTzg3UVpYYlNOYWcrMHJGZz09
Abstract:
In this talk, we will discuss the transmission eigenvalue problem that arises in inverse acoustic scattering for a material with a coating on the boundary. In general, the transmission eigenvalue problem is a non-selfadjoint and nonlinear eigenvalue problem for a system of PDEs, which makes its investigation mathematically difficult and interesting. Some of the questions we will discuss are the existence and discreteness of the eigenvalues, as well as how they depend on the material parameters. Numerical examples will be given to show how the refractive index may be estimated from these eigenvalues.
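For reference, the uncoated interior transmission eigenvalue problem takes the following standard form (textbook background, not taken from the talk; the coating enters through modified boundary conditions): find $k$ and a nontrivial pair $(w,v)$ such that

\[
  \Delta w + k^2 n(x)\, w = 0 \quad \text{and} \quad \Delta v + k^2 v = 0 \quad \text{in } D,
\]
\[
  w = v \quad \text{and} \quad \partial_\nu w = \partial_\nu v \quad \text{on } \partial D,
\]

where $n(x)$ is the refractive index of the scatterer $D$.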
Title: Direct Sampling Algorithms in Inverse Scattering
Seminar: Numerical Analysis and Scientific Computing
Speaker: Isaac Harris of Purdue University
Contact: James Nagy, jnagy@emory.edu
Date: 2020-11-20 at 2:40PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
In this talk, we will discuss a recent qualitative imaging method referred to as the Direct Sampling Method for inverse scattering. This method allows one to recover a scattering object by evaluating an imaging functional that is the inner product of the far-field data and a known function. It can be shown that the imaging functional is strictly positive in the scatterer and decays as the sampling point moves away from the scatterer. The analysis uses the factorization of the far-field operator and the Funk-Hecke formula. This method can also be shown to be stable with respect to perturbations in the scattering data. We will discuss the inverse scattering problem for both acoustic and electromagnetic waves.
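One common form of such an imaging functional, taken from the direct sampling literature rather than from the talk itself, pairs the far-field data $u^\infty$ with a plane wave anchored at the sampling point $z$:

\[
  I(z) \;=\; \Bigl|\, \bigl( u^\infty ,\, e^{\mathrm{i} k \hat{x} \cdot z} \bigr)_{L^2(\mathbb{S}^{d-1})} \Bigr|
       \;=\; \Bigl|\, \int_{\mathbb{S}^{d-1}} u^\infty(\hat{x})\, e^{-\mathrm{i} k \hat{x} \cdot z}\, \mathrm{d}s(\hat{x}) \Bigr|,
\]

which is large when $z$ lies in the scatterer and decays as $z$ moves away.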
Title: Rayleigh Quotient Optimizations and Eigenvalue Problems
Seminar: Numerical Analysis and Scientific Computing
Speaker: Zhaojun Bai of UC Davis
Contact: Yuanzhe Xi, yxi26@emory.edu
Date: 2020-11-13 at 2:40PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
Many computational science and data analysis techniques lead to optimizing Rayleigh quotient (RQ) and RQ-type objective functions, such as computing excitation states (energies) of electronic structures, robust classification under uncertainty, and constrained data clustering that incorporates domain knowledge. We will discuss emerging RQ optimization problems, variational principles, and reformulations as algebraic linear and nonlinear eigenvalue problems. We will show how to exploit underlying properties of these eigenvalue problems to design fast eigensolvers, and illustrate the efficacy of these solvers in applications.
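In the simplest linear, symmetric case (a minimal sketch for orientation; the talk concerns more general RQ-type objectives and nonlinear eigenvalue problems), minimizing a generalized Rayleigh quotient reduces, by the variational principle, to a generalized eigenvalue problem:

```python
import numpy as np
from scipy.linalg import eigh

def min_rayleigh_quotient(A, B):
    """Minimize x^T A x / x^T B x over nonzero x.

    By the Courant-Fischer variational principle, the minimum is the
    smallest eigenvalue of the generalized problem A x = lambda B x
    (B symmetric positive definite), attained at its eigenvector.
    """
    vals, vecs = eigh(A, B)        # eigenvalues in ascending order
    return vals[0], vecs[:, 0]

# Usage on a random symmetric-definite pencil (A, B).
rng = np.random.default_rng(0)
M = rng.normal(size=(50, 50))
A = (M + M.T) / 2                  # symmetric A
N = rng.normal(size=(50, 50))
B = N @ N.T + 50 * np.eye(50)      # symmetric positive definite B
lam, x = min_rayleigh_quotient(A, B)
assert np.isclose(lam, (x @ A @ x) / (x @ B @ x))
```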
Title: Data-Driven Methods for Image Reconstruction
Seminar: Numerical Analysis and Scientific Computing
Speaker: Jeff Fessler of University of Michigan
Contact: James Nagy, jnagy@emory.edu
Date: 2020-11-06 at 2:40PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
Inverse problems are usually ill-conditioned or ill-posed, meaning that there are multiple candidate solutions that all fit the measured data equally or reasonably well. Modeling assumptions are needed to distinguish among candidate solutions. This talk will focus on contemporary adaptive signal models and their use as regularizers for solving inverse problems, including methods based on machine-learning tools. Applications illustrated will include MRI and CT.
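Schematically (a minimal sketch with hypothetical names, using a plain Tikhonov term where the talk would use a learned, adaptive model), such methods solve a regularized inverse problem in which the regularizer is the data-driven ingredient:

```python
import numpy as np

def reconstruct(A, b, reg_grad, lam=0.1, step=1e-3, iters=500):
    """Gradient descent for min_x ||A x - b||^2 + lam * R(x).

    reg_grad(x) returns the gradient of the regularizer R; plugging a
    model learned from training data in here is the data-driven idea.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x -= step * (2 * A.T @ (A @ x - b) + lam * reg_grad(x))
    return x

# Usage: an underdetermined (ill-posed) system with a Tikhonov
# regularizer R(x) = ||x||^2 standing in for a learned prior.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 200))
x_true = np.zeros(200)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=100)
x_rec = reconstruct(A, b, reg_grad=lambda x: 2 * x)
```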
Title: Recent Advances in Ptychography
Seminar: Numerical Analysis and Scientific Computing
Speaker: Wendy Di of Argonne National Lab
Contact: Yuanzhe Xi, yxi26@emory.edu
Date: 2020-10-30 at 2:40PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
Phase retrieval has been called an applied mathematician's dream problem: simple in form, yet interesting and challenging to solve efficiently. Ptychography, a special type of phase retrieval imaging technique, uses oversampling to remedy the non-uniqueness of the traditional phase retrieval problem. The technique consists of a coherent beam that is scanned across an object in a series of overlapping positions, leading to reliable and improved reconstructions. Ptychographic microscopes also allow large fields to be imaged at high resolution, albeit at additional computational expense. In this talk, I will discuss the mathematically interesting properties that distinguish ptychography from linear inverse problems, and propose remedies to numerically accelerate ptychographic reconstruction.
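A minimal sketch of the ptychographic forward model (my own toy version, not code from the talk): a small probe is scanned across the object in overlapping windows, and the detector records far-field intensities, so the overlap supplies the oversampling mentioned above:

```python
import numpy as np

def ptycho_forward(obj, probe, positions):
    """Simulate far-field ptychographic intensity measurements.

    At each scan position the probe illuminates a patch of the object
    and the detector records |FFT(probe * patch)|^2; the phase of the
    diffracted wave is lost, which is the phase retrieval problem.
    """
    m = probe.shape[0]
    data = []
    for (r, c) in positions:
        patch = obj[r:r + m, c:c + m]    # illuminated window
        exit_wave = probe * patch        # thin-object approximation
        data.append(np.abs(np.fft.fft2(exit_wave)) ** 2)
    return np.array(data)

# Usage: a complex-valued toy object scanned on a grid of positions
# with 50% overlap between neighboring probe windows.
rng = np.random.default_rng(0)
obj = np.exp(1j * rng.uniform(0, np.pi, size=(64, 64)))
probe = np.ones((16, 16), dtype=complex)
positions = [(r, c) for r in range(0, 48, 8) for c in range(0, 48, 8)]
intensities = ptycho_forward(obj, probe, positions)
```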
Title: Bayesian Sparse Learning With Preconditioned Stochastic Gradient MCMC and its Applications
Seminar: Numerical Analysis and Scientific Computing
Speaker: Guang Lin of Purdue University
Contact: Yuanzhe Xi, yxi26@emory.edu
Date: 2020-10-23 at 2:40PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
Deep neural networks have been successfully employed in an extensive variety of research areas, including solving partial differential equations. Despite this significant success, there are challenges in effectively training DNNs, such as avoiding over-fitting in over-parameterized networks and accelerating optimization in networks with pathological curvature. In this work, we propose a Bayesian sparse deep learning algorithm. The algorithm places a set of spike-and-slab priors on the parameters of the deep neural network, and the resulting hierarchical Bayesian mixture is trained with an adaptive empirical method: we alternately sample from the posterior using an appropriate stochastic gradient Markov chain Monte Carlo (SG-MCMC) method and optimize the latent variables using stochastic approximation. Sparsity of the network is achieved while optimizing the hyperparameters with adaptive searching and penalizing. A popular SG-MCMC approach is stochastic gradient Langevin dynamics (SGLD); however, given the complex geometry of the parameter space in non-convex learning, updating every component with a universal step size as in SGLD can cause slow mixing. To address this issue, we apply a computationally manageable preconditioner in the update rule, which yields step sizes adapted to local geometric properties. Moreover, by smoothly optimizing the hyperparameter in the preconditioning matrix, the proposed algorithm ensures a decreasing bias, which is introduced by ignoring the correction term in preconditioned SGLD. Within an existing theoretical framework, we show that the proposed method asymptotically converges to the correct distribution with a controllable bias under mild conditions. Numerical tests on both synthetic regression problems and learning the solutions of elliptic PDEs demonstrate the accuracy and efficiency of the present work.
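For concreteness, here is a minimal sketch of the preconditioned SGLD update underlying the method (an RMSProp-style diagonal preconditioner; the spike-and-slab prior and the adaptive empirical-Bayes steps of the abstract are omitted, and names are my own):

```python
import numpy as np

def psgld_step(theta, grad, V, rng, eps=1e-3, alpha=0.99, lam=1e-5):
    """One preconditioned-SGLD step with an RMSProp-style preconditioner.

    grad is a stochastic gradient of the negative log posterior. The
    correction term is dropped, which is the bias the abstract refers
    to controlling.
    """
    V = alpha * V + (1 - alpha) * grad ** 2       # running second moment
    G = 1.0 / (np.sqrt(V) + lam)                  # diagonal preconditioner
    noise = rng.normal(size=theta.shape) * np.sqrt(eps * G)
    theta = theta - 0.5 * eps * G * grad + noise  # drift + injected noise
    return theta, V

# Usage: sample from a toy Gaussian posterior, N(0, I), in 10 dimensions.
rng = np.random.default_rng(0)
theta = rng.normal(size=10)
V = np.zeros(10)
for _ in range(1000):
    grad = theta                                  # -d/dtheta log N(0, I)
    theta, V = psgld_step(theta, grad, V, rng)
```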
Title: Numerical Linear Algebra Methods in Recurrent Neural Networks
Seminar: Numerical Analysis and Scientific Computing
Speaker: Qiang Ye of University of Kentucky
Contact: Yuanzhe Xi, yxi26@emory.edu
Date: 2020-10-09 at 2:40PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
Deep neural networks have emerged as one of the most powerful machine learning methods. Recurrent neural networks (RNNs) are special architectures designed to efficiently model sequential data by exploiting temporal connections within a sequence and handling variable sequence lengths in a dataset. However, they suffer from the so-called vanishing or exploding gradient problems. Recent works address this issue by using a unitary/orthogonal recurrent matrix. In this talk, we will present some numerical linear algebra based methods to improve RNNs. We first introduce a novel, simpler RNN that maintains an orthogonal recurrent matrix using a scaled Cayley transform. We then develop a complex version with a unitary recurrent matrix that allows direct training of the scaling matrix in the Cayley transform. We further extend our architecture to use a block recurrent matrix with a spectral radius bounded by one to effectively model both long-term and short-term memory in RNNs. Our methods achieve superior results with fewer trainable parameters than other variants of RNNs in a variety of experiments.
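As a sketch of the scaled Cayley idea mentioned above (a minimal NumPy version under hypothetical names; the actual architecture trains $A$ and $D$ within the RNN), the recurrent matrix is parameterized as $W = (I+A)^{-1}(I-A)D$ with $A$ skew-symmetric and $D$ diagonal with $\pm 1$ entries, which keeps $W$ exactly orthogonal:

```python
import numpy as np

def scaled_cayley(a_params, d_signs):
    """Orthogonal recurrent matrix W = (I + A)^{-1} (I - A) D.

    A is skew-symmetric (parameterized by its strict upper triangle) and
    D = diag(+-1). The Cayley transform of a skew-symmetric matrix is
    orthogonal and D is orthogonal, so W is orthogonal for any a_params.
    """
    n = len(d_signs)
    A = np.zeros((n, n))
    A[np.triu_indices(n, k=1)] = a_params   # strict upper triangle
    A = A - A.T                             # enforce A^T = -A
    I = np.eye(n)
    return np.linalg.solve(I + A, (I - A) @ np.diag(d_signs))

# Usage: W is orthogonal to machine precision for random parameters.
rng = np.random.default_rng(0)
n = 8
W = scaled_cayley(rng.normal(size=n * (n - 1) // 2),
                  np.where(rng.random(n) < 0.5, -1.0, 1.0))
assert np.allclose(W.T @ W, np.eye(n))
```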
Title: The Extremal Number of Tight Cycles
Seminar: Combinatorics
Speaker: Istvan Tomon of ETH Zurich
Contact: Dr. Hao Huang, hao.huang@emory.edu
Date: 2020-10-02 at 10:00AM
Venue: https://emory.zoom.us/j/96323787117
Abstract:
A tight cycle in an $r$-uniform hypergraph $\mathcal{H}$ is a sequence of $\ell\geq r+1$ vertices $x_1,\dots,x_{\ell}$ such that all $r$-tuples $\{x_{i},x_{i+1},\dots,x_{i+r-1}\}$ (with subscripts modulo $\ell$) are edges of $\mathcal{H}$. An old problem of V. S\'os, also posed independently by J. Verstra\"ete, asks for the maximum number of edges in an $r$-uniform hypergraph on $n$ vertices which has no tight cycle. Although this is a very basic question, until recently, no good upper bounds were known for this problem for $r\geq 3$. In my talk, I will present a brief outline of the proof of the upper bound $n^{r-1+o(1)}$, which is tight up to the $o(1)$ error term. This is based on a joint work with Benny Sudakov.
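As a concrete illustration (my example, not from the abstract): for $r = 3$ and $\ell = 5$, a tight cycle on vertices $x_1,\dots,x_5$ consists of the five cyclically consecutive triples

\[
  \{x_1 x_2 x_3,\ x_2 x_3 x_4,\ x_3 x_4 x_5,\ x_4 x_5 x_1,\ x_5 x_1 x_2\}.
\]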