All Seminars

Title: Applications of Fractional Operators from Optimal Control to Machine Learning
Seminar: Numerical Analysis and Scientific Computing
Speaker: Harbir Antil of George Mason University
Contact: Lars Ruthotto, lruthotto@emory.edu
Date: 2021-03-19 at 1:30PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
Fractional calculus and its application to anomalous diffusion have recently received a tremendous amount of attention. In complex/heterogeneous media, long-range correlations or hereditary material properties are presumed to be the cause of such anomalous behavior. Owing to the revival of fractional calculus, these effects are now conveniently modeled by fractional-order differential operators, and the governing equations are reformulated accordingly.
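To make the notion of a fractional-order operator concrete, here is a minimal self-contained sketch (a generic illustration, not taken from the speaker's models) of the Gruenwald-Letnikov approximation of a fractional derivative in Python; the function name, grid size, and the test case f(x) = x are all illustrative choices:

```python
import math

def gl_fractional_derivative(f, x, alpha, h=1e-3):
    """Gruenwald-Letnikov approximation of the order-alpha derivative of f
    at x, using function values on the grid x, x - h, ..., down to 0."""
    n = int(round(x / h))
    w, acc = 1.0, f(x)                    # weight w_0 = 1 multiplies f(x)
    for k in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / k      # recurrence for (-1)^k * binom(alpha, k)
        acc += w * f(x - k * h)
    return acc / h ** alpha

# Sanity check against the known formula D^alpha x = x^(1-alpha) / Gamma(2-alpha).
alpha, x = 0.5, 1.0
approx = gl_fractional_derivative(lambda t: t, x, alpha)
exact = x ** (1 - alpha) / math.gamma(2 - alpha)
```

Note the nonlocality: unlike an integer-order derivative, the value at x depends on f over the whole interval down to 0, which is exactly the long-range memory the abstract refers to.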

In the first part of the talk, we plan to introduce both linear and nonlinear fractional-order differential equations. As applications, we will develop new physical models for geophysical electromagnetism and discuss a new notion of optimal control.

In the second part of the talk, we will focus on novel Deep Neural Networks (DNNs) based on fractional operators. We plan to discuss their approximation properties and apply them to image denoising and tomographic reconstruction problems. We will establish that these DNNs are also excellent surrogates for PDEs and inverse problems, with multiple advantages over traditional methods. If time permits, we will conclude the talk with some of our initial results on chemically reacting flows using DNNs, which clearly show the effectiveness of the proposed approach.
Title: Randomized Fast Subspace Descent Methods
Seminar: Numerical Analysis and Scientific Computing
Speaker: Long Chen of University of California at Irvine
Contact: Yuanzhe Xi, yxi26@emory.edu
Date: 2021-03-12 at 1:30PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
In this talk, we propose randomized fast subspace descent (rFASD) methods and derive their convergence analysis. An outline of rFASD is as follows: randomly choose a subspace according to some sampling distribution and find a search direction in that subspace; the update is then given by the subspace correction with this search direction and an appropriate step size. Convergence analyses for convex and strongly convex functions will be given. SGD, Coordinate Descent (CD), Block CD, and Block CD with a Newton solver on each block can all be viewed as examples within our framework.

This is a joint work with Xiaozhe Hu and Huiwen Wu.
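As a rough illustration of the framework outlined above (a sketch under assumed details, not the authors' implementation), the following Python snippet runs randomized block coordinate descent, one of the examples the abstract mentions, on a small convex quadratic; the step size, block structure, and iteration count are arbitrary choices:

```python
import random

def rfasd_quadratic(A, b, blocks, iters=2000, step=0.1, seed=0):
    """Randomized subspace descent on f(x) = 0.5 x^T A x - b^T x.
    Each step samples a coordinate block (a subspace), computes the
    gradient restricted to it, and takes a fixed-step correction."""
    rng = random.Random(seed)
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        blk = rng.choice(blocks)          # sample a subspace
        for i in blk:
            # restricted gradient component: (A x - b)_i
            g = sum(A[i][j] * x[j] for j in range(n)) - b[i]
            x[i] -= step * g              # subspace correction
    return x

# 2x2 SPD example: the minimizer solves A x = b, i.e. x = (2/7, 6/7).
A = [[2.0, 0.5], [0.5, 1.0]]
b = [1.0, 1.0]
x = rfasd_quadratic(A, b, blocks=[[0], [1]])
```

With singleton blocks this reduces to randomized coordinate descent; larger blocks (or a Newton solve per block) recover the other special cases named in the abstract.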
Title: ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning
Seminar: Numerical Analysis and Scientific Computing
Speaker: Michael W. Mahoney of ICSI and Department of Statistics, UC Berkeley
Contact: Yuanzhe Xi, yxi26@emory.edu
Date: 2021-03-05 at 1:30PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
Second order optimization algorithms have a long history in scientific computing, but they tend not to be used much in machine learning. This is in spite of the fact that they gracefully handle step size issues, poor conditioning problems, communication-computation tradeoffs, etc., all problems which are increasingly important in large-scale and high performance machine learning. A large part of the reason for this is that their implementation requires some care, e.g., a good implementation isn't possible in a few lines of Python after taking a data science boot camp, and a naive implementation typically performs worse than heavily parameterized/hyperparameterized stochastic first order methods. We describe ADAHESSIAN, a second order stochastic optimization algorithm which dynamically incorporates the curvature of the loss function via ADAptive estimates of the Hessian. ADAHESSIAN includes several novel performance-improving features, including: (i) a fast Hutchinson-based method to approximate the curvature matrix with low computational overhead; (ii) spatial averaging to reduce the variance of the second derivative; and (iii) a root-mean-square exponential moving average to smooth out variations of the second derivative across different iterations. Extensive tests on natural language processing, computer vision, and recommendation system tasks demonstrate that ADAHESSIAN achieves state-of-the-art results. The cost per iteration of ADAHESSIAN is comparable to that of first-order methods, and ADAHESSIAN exhibits improved robustness towards variations in hyperparameter values.
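Feature (i), the Hutchinson estimator, can be illustrated in isolation. The Python sketch below (an illustration of the general technique, not the ADAHESSIAN implementation) estimates the diagonal of a small symmetric matrix using only matrix-vector products with random Rademacher vectors, exactly the access pattern available via Hessian-vector products:

```python
import random

def hutchinson_diag(matvec, n, samples=2000, seed=0):
    """Hutchinson-style diagonal estimate: diag(H) = E[z * (H z)]
    for Rademacher vectors z with independent +/-1 entries."""
    rng = random.Random(seed)
    est = [0.0] * n
    for _ in range(samples):
        z = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        hz = matvec(z)                      # only H-vector products needed
        for i in range(n):
            est[i] += z[i] * hz[i]
    return [e / samples for e in est]

# Small symmetric test matrix with known diagonal (3, 1, 2).
H = [[3.0, 0.2, 0.0],
     [0.2, 1.0, 0.1],
     [0.0, 0.1, 2.0]]
matvec = lambda v: [sum(H[i][j] * v[j] for j in range(3)) for i in range(3)]
diag_est = hutchinson_diag(matvec, 3)
```

The off-diagonal terms average out because E[z_i z_j] = 0 for i != j, so the estimate concentrates around the true diagonal as the number of samples grows.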
Title: Moduli of Wildly Ramified Covers of Curves
Seminar: Algebra and Number Theory
Speaker: Andrew Kobin of University of California-Santa Cruz
Contact: David Zureick-Brown, dzureic@emory.edu
Date: 2021-02-22 at 1:00PM
Venue: https://emory.zoom.us/j/94062245183
Abstract:
One of many hazards in the jungle of characteristic p algebraic geometry is the presence of wild ramification, which is in a loose sense what happens when objects possess automorphisms of order divisible by p. In this talk, I will tell an incomplete story of wild ramification for algebraic curves in characteristic p, starting with the fundamental example of Artin-Schreier curves. I will also describe some work in progress towards a description of the moduli stack of Artin-Schreier covers of curves, part of which relies on results in a recent preprint (arXiv:1910.03146).
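For readers unfamiliar with the fundamental example mentioned above: an Artin-Schreier curve over a field $k$ of characteristic $p$ is given in standard form by an equation

```latex
y^p - y = f(x), \qquad f(x) \in k(x),
```

which defines a $\mathbb{Z}/p\mathbb{Z}$-cover of the projective line via $y \mapsto y + 1$; the cover is wildly ramified over the poles of $f$, since $p$ divides the order of the inertia groups there.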
Title: Importance Sampling for Approximating High-Dimensional Kernel Matrices
Seminar: Numerical Analysis and Scientific Computing
Speaker: Christopher Musco of New York University
Contact: Yuanzhe Xi, yxi26@emory.edu
Date: 2021-02-19 at 1:30PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
Randomized algorithms for compressing large matrices and accelerating linear algebraic computations have received significant attention in recent years. Unfortunately, popular methods based on Johnson-Lindenstrauss random projections and related techniques cannot be efficiently applied to dense implicit matrices that appear in many data applications -- specifically they are difficult to apply to large kernel Gram matrices constructed from high-dimensional data. In this talk, I will discuss recent efforts to address this issue by instead compressing large kernel matrices using randomized importance sampling methods, and specifically those based on leverage scores. I will describe a recursive algorithm for leverage score sampling that provably computes a near-optimal rank-k approximation to any n x n kernel Gram matrix in roughly O(nk^2) time, which is sublinear in the size of that matrix. Random projection methods on the other hand require at least O(n^2) time. I will also touch on connections between leverage score sampling and the practically popular "random Fourier features method", including insights on how to improve this method using tools from Randomized Numerical Linear Algebra.
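The "random Fourier features method" mentioned at the end can be sketched in a few lines. The following Python example (an illustration with arbitrary parameters, not code from the talk) approximates a unit-bandwidth Gaussian kernel by the inner product of randomized cosine features, with error decaying like O(1/sqrt(D)) in the number of features D:

```python
import math, random

def rff_features(x, omegas, offsets):
    """Random Fourier features for the Gaussian kernel:
    z(x)_j = sqrt(2/D) * cos(omega_j . x + b_j), so that
    z(x) . z(y) ~ exp(-||x - y||^2 / 2) for the unit-bandwidth RBF."""
    D = len(omegas)
    return [math.sqrt(2.0 / D) * math.cos(sum(wk * xk for wk, xk in zip(w, x)) + b)
            for w, b in zip(omegas, offsets)]

random.seed(0)
d, D = 2, 4000                                      # input dim, feature count
omegas = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(D)]
offsets = [random.uniform(0.0, 2.0 * math.pi) for _ in range(D)]

x, y = [0.3, -0.1], [0.5, 0.4]
zx, zy = rff_features(x, omegas, offsets), rff_features(y, omegas, offsets)
approx = sum(a * b for a, b in zip(zx, zy))
exact = math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / 2.0)
```

This is the uniform-sampling baseline; the leverage-score ideas in the talk concern how to choose such random features (or kernel-matrix columns) non-uniformly to get better approximations with fewer samples.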

Bio: Christopher Musco is an Assistant Professor at New York University's Tandon School of Engineering. He received his Ph.D. from the theoretical computer science group at MIT in 2018. Christopher's research focuses on scalable algorithms for matrix problems that arise in machine learning and data science, with a focus on randomized approximation algorithms.
Title: Divisors on Non-Generic Curves
Seminar: Algebra and Number Theory
Speaker: Kaelin Cook-Powell of University of Kentucky
Contact: David Zureick-Brown, dzureic@emory.edu
Date: 2021-02-18 at 1:00PM
Venue: https://emory.zoom.us/j/92174650436
Abstract:
In algebraic geometry, the study of divisors on curves, known as Brill-Noether theory, has been a rich field of study for decades. When a curve C is general in $M_g$, the moduli space parameterizing all curves of genus g, much is known about the spaces of divisors of prescribed rank r and degree d, denoted $W^r_d(C)$. However, when C is not general, the loci $W^r_d(C)$ can exhibit bizarre and pathological behavior. Divisors on a curve are intimately related to line bundles on that curve, so we will then introduce the idea of the splitting type of a line bundle, a more refined invariant than the rank and degree. The main goal of this talk will be to define and analyze the spaces of line bundles with a given splitting type and argue that these are a "correct" generalization of the spaces $W^r_d(C)$. All of this can be done from a purely combinatorial standpoint and involves an in-depth study of certain special families of Young tableaux that only depend on a given splitting type.
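For context, the expected dimension in classical Brill-Noether theory (a standard fact, not specific to this talk) is the Brill-Noether number

```latex
\rho(g, r, d) = g - (r+1)(g - d + r),
```

and for a general curve $C$ in $M_g$ the locus $W^r_d(C)$ is nonempty exactly when $\rho \geq 0$, in which case it has dimension $\rho$; the pathological behavior mentioned above occurs for non-general curves, where these statements can fail.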
Title: Direct Sampling Algorithms in Inverse Scattering
Seminar: Mathematics Colloquium
Speaker: Isaac Harris of Purdue University
Contact: James Nagy, jnagy@emory.edu
Date: 2021-02-15 at 12:30PM
Venue: https://emory.zoom.us/j/93146817322?pwd=WTVrOEdnTzg3UVpYYlNOYWcrMHJGZz09
Abstract:
In this talk, we will discuss the transmission eigenvalue problem that arises in inverse acoustic scattering for a material with a coating on the boundary. In general, the transmission eigenvalue problem is a non-selfadjoint, nonlinear eigenvalue problem for a system of PDEs, which makes its investigation mathematically difficult and interesting. Some of the questions we will discuss are the existence and discreteness of the eigenvalues, as well as how they depend on the material parameters. Numerical examples will be given to show how the refractive index may be estimated using these eigenvalues.
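For orientation, the classical interior transmission eigenvalue problem (stated here without the boundary coating, which modifies the boundary conditions) seeks wavenumbers $k$ admitting a nontrivial pair $(w, v)$ with

```latex
\Delta w + k^2 n(x)\, w = 0 \quad \text{in } D, \qquad
\Delta v + k^2 v = 0 \quad \text{in } D,
```

```latex
w = v, \qquad \frac{\partial w}{\partial \nu} = \frac{\partial v}{\partial \nu} \quad \text{on } \partial D,
```

where $D$ is the scatterer and $n(x)$ its refractive index; the coupling of the two fields only through the boundary conditions is what makes the problem non-selfadjoint.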
Title: Direct Sampling Algorithms in Inverse Scattering
Seminar: Numerical Analysis and Scientific Computing
Speaker: Isaac Harris of Purdue University
Contact: James Nagy, jnagy@emory.edu
Date: 2020-11-20 at 2:40PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
In this talk, we will discuss a recent qualitative imaging method referred to as the Direct Sampling Method for inverse scattering. This method allows one to recover a scattering object by evaluating an imaging functional that is the inner product of the far-field data and a known function. It can be shown that the imaging functional is strictly positive in the scatterer and decays as the sampling point moves away from the scatterer. The analysis uses the factorization of the far-field operator and the Funk-Hecke formula. This method can also be shown to be stable with respect to perturbations in the scattering data. We will discuss the inverse scattering problem for both acoustic and electromagnetic waves.
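The peak-and-decay behavior of such an indicator can be sketched numerically. In the toy Python example below (synthetic far-field data for a single point scatterer; all parameters are illustrative and this is not the speaker's code), the indicator is the discretized inner product of the far-field pattern with a plane-wave test function, and it is largest at the scatterer location:

```python
import math, cmath

def imaging_functional(far_field, z, k, directions):
    """Direct-sampling-style indicator: the discretized inner product of
    the far-field data with the test function exp(-i k xhat . z); the
    conjugation in the inner product yields the exp(+i ...) factor."""
    total = sum(u * cmath.exp(1j * k * (dx * z[0] + dy * z[1]))
                for u, (dx, dy) in zip(far_field, directions))
    return abs(total) / len(directions)

k, m = 5.0, 64                                       # wavenumber, directions
directions = [(math.cos(2 * math.pi * j / m), math.sin(2 * math.pi * j / m))
              for j in range(m)]

# Synthetic far-field data of a single point scatterer at z0.
z0 = (0.4, -0.2)
far_field = [cmath.exp(-1j * k * (dx * z0[0] + dy * z0[1])) for dx, dy in directions]

on_scatterer = imaging_functional(far_field, z0, k, directions)
off_scatterer = imaging_functional(far_field, (2.0, 2.0), k, directions)
```

At the scatterer the phases cancel and the indicator is 1; away from it the sum averages an oscillating exponential over the circle of directions and decays (like a Bessel function of the distance).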
Title: Rayleigh Quotient Optimizations and Eigenvalue Problems
Seminar: Numerical Analysis and Scientific Computing
Speaker: Zhaojun Bai of UC Davis
Contact: Yuanzhe Xi, yxi26@emory.edu
Date: 2020-11-13 at 2:40PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
Many computational science and data analysis techniques lead to optimizing Rayleigh quotient (RQ) and RQ-type objective functions, such as computing excitation states (energies) of electronic structures, robust classification to handle uncertainty and constrained data clustering to incorporate domain knowledge. We will discuss emerging RQ optimization problems, variational principles, and reformulations to algebraic linear and nonlinear eigenvalue problems. We will show how to exploit underlying properties of these eigenvalue problems for designing fast eigensolvers, and illustrate the efficacy of these solvers in applications.
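The basic link between RQ optimization and eigenvalue problems can be seen in a small self-contained Python sketch (a generic illustration, not one of the emerging RQ-type problems from the talk): by the variational principle, minimizing the Rayleigh quotient of a symmetric matrix yields its smallest eigenpair, computed here by power iteration on a shifted matrix:

```python
def rayleigh_quotient(A, x):
    """R(x) = (x^T A x) / (x^T x) for a symmetric matrix A."""
    Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
    return sum(xi * yi for xi, yi in zip(x, Ax)) / sum(xi * xi for xi in x)

def min_rq_power_iteration(A, shift, iters=200):
    """Minimizing R(x) over nonzero x gives the smallest eigenvalue of A.
    Power iteration on B = shift*I - A (shift above the largest eigenvalue)
    converges to the corresponding eigenvector."""
    n = len(A)
    B = [[(shift if i == j else 0.0) - A[i][j] for j in range(n)] for i in range(n)]
    x = [1.0] * n
    for _ in range(iters):
        x = [sum(b * xi for b, xi in zip(row, x)) for row in B]
        norm = sum(xi * xi for xi in x) ** 0.5
        x = [xi / norm for xi in x]
    return x

# Symmetric test matrix with eigenvalues 3 - sqrt(3), 3, 3 + sqrt(3).
A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 4.0]]
x_min = min_rq_power_iteration(A, shift=10.0)
lam_min = rayleigh_quotient(A, x_min)
```

The RQ-type problems in the talk generalize this picture: modified quotients and constraints lead to linear and nonlinear eigenvalue problems whose structure the solvers exploit.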
Title: Data-Driven Methods for Image Reconstruction
Seminar: Numerical Analysis and Scientific Computing
Speaker: Jeff Fessler of University of Michigan
Contact: James Nagy, jnagy@emory.edu
Date: 2020-11-06 at 2:40PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
Inverse problems are usually ill-conditioned or ill-posed, meaning that there are multiple candidate solutions that all fit the measured data equally or reasonably well. Modeling assumptions are needed to distinguish among candidate solutions. This talk will focus on contemporary adaptive signal models and their use as regularizers for solving inverse problems, including methods based on machine-learning tools. Applications illustrated will include MRI and CT.
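A minimal illustration of how a modeling assumption selects among candidate solutions (generic Tikhonov regularization in Python, not the data-driven regularizers of the talk): for an underdetermined system, where infinitely many solutions fit the data exactly, the quadratic regularizer picks out the minimum-norm fit:

```python
def tikhonov_gd(A, b, lam, iters=5000, step=0.05):
    """Gradient descent on the Tikhonov-regularized objective
    0.5 * ||A x - b||^2 + 0.5 * lam * ||x||^2.  The regularizer encodes
    the modeling assumption that distinguishes among candidate solutions."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) + lam * x[j] for j in range(n)]
        x = [xj - step * gj for xj, gj in zip(x, g)]
    return x

# Underdetermined problem: one equation, two unknowns (x1 + x2 = 2).
A = [[1.0, 1.0]]
b = [2.0]
x = tikhonov_gd(A, b, lam=1e-3)
```

Among all pairs with x1 + x2 = 2, the regularized solution is (nearly) the symmetric minimum-norm one; replacing the quadratic penalty with a learned regularizer is the theme of the talk.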