All Seminars

Title: Data-Driven Methods for Image Reconstruction
Seminar: Numerical Analysis and Scientific Computing
Speaker: Jeff Fessler of University of Michigan
Contact: James Nagy, jnagy@emory.edu
Date: 2020-11-06 at 2:40PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
Inverse problems are usually ill-conditioned or ill-posed, meaning that there are multiple candidate solutions that all fit the measured data equally or reasonably well. Modeling assumptions are needed to distinguish among candidate solutions. This talk will focus on contemporary adaptive signal models and their use as regularizers for solving inverse problems, including methods based on machine-learning tools. Applications illustrated will include MRI and CT.
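As a point of reference for the regularization framework described above, here is a minimal sketch (not the speaker's method) of solving a linear inverse problem y = Ax + noise by gradient descent on a regularized least-squares objective; the operator A, the data y, and the quadratic penalty are placeholders where an adaptive or learned regularizer would be substituted.

```python
import numpy as np

def reconstruct(A, y, lam=0.1, step=1e-3, iters=500):
    """Minimize ||A x - y||^2 + lam * ||x||^2 by gradient descent.

    The quadratic penalty is a placeholder for the adaptive / learned
    regularizers discussed in the talk."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + lam * x   # data-fit gradient + regularizer gradient
        x -= step * grad
    return x

# Toy example: an underdetermined system with many candidate solutions,
# where the regularizer selects one of them.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = rng.standard_normal(100)
y = A @ x_true
x_hat = reconstruct(A, y)
```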
Title: Recent Advances in Ptychography
Seminar: Numerical Analysis and Scientific Computing
Speaker: Wendy Di of Argonne National Lab
Contact: Yuanzhe Xi, yxi26@emory.edu
Date: 2020-10-30 at 2:40PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
Phase retrieval has been recognized as an applied mathematician's dream problem: it has a simple form, yet exhibits interesting and challenging properties that make it hard to solve efficiently. Ptychography, a special type of phase retrieval imaging technique, offers an oversampling justification that resolves the non-uniqueness of the traditional phase retrieval problem. The technique scans a coherent beam across an object in a series of overlapping positions, leading to reliable and improved reconstructions. Furthermore, ptychographic microscopes allow large fields to be imaged at high resolution, albeit at the cost of additional computational expense. In this talk, I will discuss the mathematically interesting properties of ptychography that set it apart from linear inverse problems, and pose potential remedies to numerically accelerate ptychographic reconstruction.
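For readers unfamiliar with the setup, the sketch below shows an assumed minimal one-dimensional ptychographic forward model: a known probe is shifted to overlapping positions, and only the magnitude of the Fourier transform of each illuminated patch is recorded. The probe, object, and scan positions are illustrative, not those of any particular experiment.

```python
import numpy as np

def forward(obj, probe, positions):
    """Ptychographic forward model (1-D sketch): for each scan position,
    multiply the shifted probe with the object and record the magnitude
    of the far-field (Fourier) diffraction pattern."""
    m = len(probe)
    return [np.abs(np.fft.fft(obj[p:p + m] * probe)) for p in positions]

# The overlap between scan positions is what restores uniqueness
# compared with a single-shot phase retrieval measurement.
n, m = 64, 16
rng = np.random.default_rng(1)
obj = np.exp(1j * rng.uniform(0, 2 * np.pi, n))   # complex-valued toy object
probe = np.hanning(m).astype(complex)             # localized illumination
positions = range(0, n - m, m // 2)               # 50% overlap between patches
data = forward(obj, probe, positions)
```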
Title: Bayesian Sparse Learning With Preconditioned Stochastic Gradient MCMC and its Applications
Seminar: Numerical Analysis and Scientific Computing
Speaker: Guang Lin of Purdue University
Contact: Yuanzhe Xi, yxi26@emory.edu
Date: 2020-10-23 at 2:40PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
Deep neural networks (DNNs) have been successfully employed in an extensive variety of research areas, including solving partial differential equations. Despite this significant success, there are challenges in effectively training DNNs, such as avoiding over-fitting in over-parameterized networks and accelerating optimization in networks with pathological curvature. In this work, we propose a Bayesian sparse deep learning algorithm. The algorithm places a set of spike-and-slab priors on the parameters of the deep neural network. The hierarchical Bayesian mixture is trained with an adaptive empirical method: we alternate between sampling from the posterior using an appropriate stochastic gradient Markov chain Monte Carlo (SG-MCMC) method and optimizing the latent variables using stochastic approximation. Sparsity of the network is achieved while optimizing the hyperparameters with adaptive searching and penalizing. A popular SG-MCMC approach is stochastic gradient Langevin dynamics (SGLD). However, given the complex geometry of the model parameter space in non-convex learning, updating every parameter component with a universal step size, as in SGLD, may cause slow mixing. To address this issue, we apply a computationally manageable preconditioner in the updating rule, which provides step sizes adapted to local geometric properties. Moreover, by smoothly optimizing the hyperparameter in the preconditioning matrix, the proposed algorithm ensures a decreasing bias, which is introduced by ignoring the correction term in preconditioned SGLD. Within the existing theoretical framework, we show that the proposed method asymptotically converges to the correct distribution with a controllable bias under mild conditions. Numerical tests on both synthetic regression problems and on learning the solutions of elliptic PDEs demonstrate the accuracy and efficiency of the present work.
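The kind of update rule referred to above can be pictured with the following sketch of an RMSProp-style preconditioned SGLD step; the step size and smoothing constants are illustrative placeholders, and the correction term mentioned in the abstract is omitted, which is exactly the source of the bias being discussed.

```python
import numpy as np

def psgld_step(theta, grad, v, step=1e-3, alpha=0.99, eps=1e-5,
               rng=np.random.default_rng()):
    """One preconditioned SGLD update (sketch).

    grad : stochastic gradient of the negative log-posterior at theta
    v    : running second-moment estimate used to build a diagonal
           preconditioner; it adapts the step size to local geometry,
           addressing the slow mixing of plain SGLD.
    """
    v = alpha * v + (1 - alpha) * grad**2
    G = 1.0 / (np.sqrt(v) + eps)                  # diagonal preconditioner
    noise = rng.standard_normal(theta.shape)
    theta = theta - 0.5 * step * G * grad + np.sqrt(step * G) * noise
    return theta, v
```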
Title: Numerical Linear Algebra Methods in Recurrent Neural Networks
Seminar: Numerical Analysis and Scientific Computing
Speaker: Qiang Ye of University of Kentucky
Contact: Yuanzhe Xi, yxi26@emory.edu
Date: 2020-10-09 at 2:40PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
Deep neural networks have emerged as one of the most powerful machine learning methods. Recurrent neural networks (RNNs) are special architectures designed to efficiently model sequential data by exploiting temporal connections within a sequence and handling variable sequence lengths in a dataset. However, they suffer from so-called vanishing or exploding gradient problems. Recent works address this issue by using a unitary/orthogonal recurrent matrix. In this talk, we will present some numerical linear algebra based methods to improve RNNs. We first introduce a simpler and novel RNN that maintains an orthogonal recurrent matrix using a scaled Cayley transform. We then develop a complex version with a unitary recurrent matrix that allows direct training of the scaling matrix in the Cayley transform. We further extend our architecture to use a block recurrent matrix with spectral radius bounded by one to effectively model both long-term and short-term memory in RNNs. Our methods achieve superior results with fewer trainable parameters than other RNN variants in a variety of experiments.
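To make the scaled Cayley construction concrete, here is a small sketch of how an orthogonal recurrent matrix can be parameterized by a skew-symmetric matrix A and a diagonal scaling with entries ±1, in the form W = (I + A)^{-1}(I - A) D; the dimension and the random A below are placeholders.

```python
import numpy as np

def scaled_cayley(A, d):
    """Map a skew-symmetric A and a ±1 vector d to an orthogonal matrix
    W = (I + A)^{-1} (I - A) diag(d)  (scaled Cayley transform sketch)."""
    n = A.shape[0]
    I = np.eye(n)
    return np.linalg.solve(I + A, I - A) @ np.diag(d)

n = 8
rng = np.random.default_rng(2)
M = rng.standard_normal((n, n))
A = M - M.T                               # skew-symmetric parameter matrix
d = np.where(rng.random(n) < 0.5, -1.0, 1.0)
W = scaled_cayley(A, d)
print(np.allclose(W.T @ W, np.eye(n)))    # True: W is orthogonal
```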
Title: The Extremal Number of Tight Cycles
Seminar: Combinatorics
Speaker: Istvan Tomon of ETH Zurich
Contact: Dr. Hao Huang, hao.huang@emory.edu
Date: 2020-10-02 at 10:00AM
Venue: https://emory.zoom.us/j/96323787117
Abstract:
A tight cycle in an $r$-uniform hypergraph $\mathcal{H}$ is a sequence of $\ell\geq r+1$ vertices $x_1,\dots,x_{\ell}$ such that all $r$-tuples $\{x_{i},x_{i+1},\dots,x_{i+r-1}\}$ (with subscripts modulo $\ell$) are edges of $\mathcal{H}$. An old problem of V. S\'os, also posed independently by J. Verstra\"ete, asks for the maximum number of edges in an $r$-uniform hypergraph on $n$ vertices which has no tight cycle. Although this is a very basic question, until recently, no good upper bounds were known for this problem for $r\geq 3$. In my talk, I will present a brief outline of the proof of the upper bound $n^{r-1+o(1)}$, which is tight up to the $o(1)$ error term. This is based on a joint work with Benny Sudakov.
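For concreteness, here is a small worked instance of the definition (my own illustration, not part of the abstract): for $r=3$ and $\ell=5$, the vertices $x_1,\dots,x_5$ form a tight cycle precisely when all five triples
$$\{x_1,x_2,x_3\},\ \{x_2,x_3,x_4\},\ \{x_3,x_4,x_5\},\ \{x_4,x_5,x_1\},\ \{x_5,x_1,x_2\}$$
are edges of $\mathcal{H}$.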
Title: Imputing Missing Data with the Gaussian Copula
Seminar: Numerical Analysis and Scientific Computing
Speaker: Madeleine Udell of Cornell University
Contact: James Nagy, jnagy@emory.edu
Date: 2020-10-02 at 2:40PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
Missing data imputation forms the first critical step of many data analysis pipelines. The challenge is greatest for mixed data sets, including real, Boolean, and ordinal data, where standard techniques for imputation fail basic sanity checks: for example, the imputed values may not follow the same distributions as the data. This talk introduces a new semiparametric algorithm to impute missing values, with no tuning parameters. The algorithm models mixed data as a Gaussian copula. This model can fit arbitrary marginals for continuous variables and can handle ordinal variables with many levels, including Boolean variables as a special case. We develop an efficient approximate EM algorithm to estimate copula parameters from incomplete mixed data, and low rank and online extensions of the method that can handle extremely large datasets. The resulting model reveals the statistical associations among variables. Experimental results on several synthetic and real datasets show the superiority of the proposed algorithm to state-of-the-art imputation algorithms for mixed data.
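As a rough picture of the copula idea (a minimal sketch under strong simplifying assumptions, not the authors' algorithm): each observed margin is mapped to normal scores, a latent Gaussian correlation is estimated from complete rows, a missing entry is imputed by its conditional mean in the latent space, and the result is mapped back through the column's empirical quantiles.

```python
import numpy as np
from scipy.stats import norm

def impute_entry(X, i, j):
    """Impute missing entry X[i, j] with a very simplified Gaussian
    copula step. A sketch only: no EM, no ordinal handling, and the
    correlation is estimated from complete rows."""
    n, p = X.shape
    # Normal scores for each column (ranks among observed values).
    Z = np.full_like(X, np.nan, dtype=float)
    for k in range(p):
        obs = ~np.isnan(X[:, k])
        ranks = X[obs, k].argsort().argsort() + 1
        Z[obs, k] = norm.ppf(ranks / (obs.sum() + 1))
    complete = ~np.isnan(Z).any(axis=1)
    C = np.corrcoef(Z[complete].T)                    # latent correlation estimate
    o = [k for k in range(p) if k != j and not np.isnan(Z[i, k])]
    # Conditional mean of Z[i, j] given the observed scores in row i.
    z_cond = C[j, o] @ np.linalg.solve(C[np.ix_(o, o)], Z[i, o])
    # Map back via the empirical quantile function of column j.
    col = np.sort(X[~np.isnan(X[:, j]), j])
    q = np.clip(norm.cdf(z_cond), 0, 1)
    return np.quantile(col, q)
```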
Title: Scientific Machine Learning: Learning from Small Data
Seminar: Numerical Analysis and Scientific Computing
Speaker: Dr. Lu Lu of Brown University
Contact: Yuanzhe Xi, yxi26@emory.edu
Date: 2020-04-24 at 2:00PM
Venue: https://emory.zoom.us/j/313230176
Abstract:
Deep learning has achieved remarkable success in diverse applications; however, its use in scientific applications has emerged only recently. I have developed multi-fidelity neural networks to extract mechanical properties of solid materials (including 3D printing materials) from instrumented indentation. I have improved the physics-informed neural networks (PINNs) and developed the library DeepXDE for solving forward and inverse problems for differential equations, including partial differential equations (PDEs), fractional PDEs, and stochastic PDEs. I have also developed the deep operator network (DeepONet) based on the universal approximation theorem of operators to learn nonlinear operators (e.g., dynamical systems) accurately and efficiently from a relatively small dataset. In addition, I will present my work on the deep learning theory of optimization and generalization.
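For orientation, here is a minimal sketch of the DeepONet idea in PyTorch (layer widths and sensor counts are placeholder choices): a branch network encodes the input function sampled at fixed sensor locations, a trunk network encodes the query location, and the operator output is their inner product.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Sketch of a DeepONet: G(u)(y) ~ dot(branch(u at sensors), trunk(y))."""
    def __init__(self, n_sensors=100, width=64, y_dim=1):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(n_sensors, width), nn.Tanh(),
                                    nn.Linear(width, width))
        self.trunk = nn.Sequential(nn.Linear(y_dim, width), nn.Tanh(),
                                   nn.Linear(width, width))

    def forward(self, u_sensors, y):
        # u_sensors: (batch, n_sensors) samples of the input function
        # y:         (batch, y_dim)     query locations
        return (self.branch(u_sensors) * self.trunk(y)).sum(dim=-1)

model = DeepONet()
u = torch.randn(8, 100)     # toy input functions sampled at 100 sensors
y = torch.rand(8, 1)        # toy query points
out = model(u, y)           # predicted operator values G(u)(y)
```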
Title: Recent Development of Multigrid Solvers in HYPRE on Modern Heterogeneous Computing Platforms
Seminar: Numerical Analysis and Scientific Computing
Speaker: Dr. Ruipeng Li of Lawrence Livermore National Lab
Contact: Yuanzhe Xi, yxi26@emory.edu
Date: 2020-04-17 at 2:00PM
Venue: https://emory.zoom.us/j/313230176
Abstract:
Modern many-core processors such as graphics processing units (GPUs) are becoming an integral part of many high performance computing systems. These processors yield enormous raw processing power in the form of massive SIMD parallelism. Accelerating multigrid methods on GPUs has drawn a lot of research attention in recent years. For instance, in recent releases of the HYPRE package, the structured multigrid solvers (SMG, PFMG) have full GPU support for both the setup and the solve phases, whereas for the algebraic multigrid (AMG) solver, namely BoomerAMG, only the solve phase has been ported and the setup can still be computed only on CPUs. In this talk, we will provide an overview of the available GPU acceleration in HYPRE and present our current work on algorithms in the AMG setup that are suitable for GPUs, including parallel coarsening algorithms, interpolation methods, and triple-matrix multiplications. Recent results as well as future work will also be presented.
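To illustrate why the setup phase is the hard part to port, here is a generic two-grid sketch (not HYPRE code, and the coarsening and interpolation choices are deliberately naive): the setup builds an interpolation operator P and the Galerkin coarse matrix via the triple product P^T A P, while the solve phase only applies smoothing, restriction, and interpolation.

```python
import numpy as np

def two_grid_setup(A):
    """Setup phase (sketch): pick coarse points, build interpolation P,
    and form the Galerkin coarse operator by a triple-matrix product."""
    n = A.shape[0]
    coarse = np.arange(0, n, 2)                 # naive every-other-point coarsening
    P = np.zeros((n, len(coarse)))
    for j, c in enumerate(coarse):
        P[c, j] = 1.0
        if c + 1 < n:
            P[c + 1, j] = 0.5                   # simple linear-interpolation weights
        if c - 1 >= 0:
            P[c - 1, j] += 0.5
    Ac = P.T @ A @ P                            # triple-matrix multiplication
    return P, Ac

def two_grid_solve(A, b, P, Ac, x, nu=2, omega=0.8):
    """Solve phase (sketch): weighted Jacobi smoothing + coarse correction."""
    Dinv = 1.0 / np.diag(A)
    for _ in range(nu):
        x = x + omega * Dinv * (b - A @ x)      # pre-smoothing
    r = b - A @ x
    e = np.linalg.solve(Ac, P.T @ r)            # exact coarse solve (toy)
    x = x + P @ e
    for _ in range(nu):
        x = x + omega * Dinv * (b - A @ x)      # post-smoothing
    return x
```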
Title: A discussion on the Log-Brunn-Minkowski Conjecture and Related Questions
Seminar: Analysis and Differential Geometry
Speaker: Professor Galyna Livshytz of Georgia Institute of Technology
Contact: Vladimir Oliker, oliker@emory.edu
Date: 2020-04-07 at 4:00PM
Venue: https://emory.zoom.us/j/352530072
Abstract:
We shall discuss the Log-Brunn-Minkowski conjecture, a conjectured strengthening of the Brunn-Minkowski inequality proposed by Boroczky, Lutwak, Yang and Zhang. The discussion will include an introduction and an explanation of how the local version of the conjecture arises naturally, a collection of "hands-on" examples and elementary geometric tricks leading to various related partial results, statements of related questions, and a discussion of more technically involved approaches and results. Based on joint work with Johannes Hosle and Alexander Kolesnikov, as well as on previous joint results with Colesanti, Marsiglietti, Nayar, and Zvavitch.
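For reference, the conjectured inequality (stated here in what I believe is its standard form, for context rather than taken from the abstract) says that for origin-symmetric convex bodies $K,L\subset\mathbb{R}^n$ and $\lambda\in[0,1]$,
$$\left|\lambda\cdot K+_0(1-\lambda)\cdot L\right|\;\ge\;|K|^{\lambda}|L|^{1-\lambda},\qquad \lambda\cdot K+_0(1-\lambda)\cdot L=\bigcap_{u\in S^{n-1}}\left\{x\in\mathbb{R}^n:\langle x,u\rangle\le h_K(u)^{\lambda}h_L(u)^{1-\lambda}\right\},$$
where $h_K$ denotes the support function. Since $h_K^{\lambda}h_L^{1-\lambda}\le\lambda h_K+(1-\lambda)h_L$, the $0$-sum is contained in the Minkowski combination, so this inequality strengthens the multiplicative form of the classical Brunn-Minkowski inequality.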
Title: Applied differential geometry and harmonic analysis in deep learning regularization
Seminar: Numerical Analysis and Scientific Computing
Speaker: Dr. Wei Zhu of Duke University
Contact: Yuanzhe Xi, yxi26@emory.edu
Date: 2020-04-03 at 2:00PM
Venue: https://emory.zoom.us/j/313230176
Abstract:
With the explosive production of digital data and information, data-driven methods, deep neural networks (DNNs) in particular, have revolutionized machine learning and scientific computing by gradually outperforming traditional hand-crafted, model-based algorithms. While DNNs have proved very successful when large training sets are available, they typically have two shortcomings: first, when the training data are scarce, DNNs tend to suffer from overfitting; second, the generalization ability of overparameterized DNNs remains a mystery despite many recent efforts. In this talk, I will discuss two recent works that “inject” a “modeling” flavor back into deep learning to improve the generalization performance and interpretability of DNNs. This is accomplished by regularizing deep learning through applied differential geometry and harmonic analysis. In the first part of the talk, I will explain how to improve the regularity of the DNN representation by imposing a “smoothness” inductive bias on the DNN model. This is achieved by solving a variational problem with a low-dimensionality constraint on the data-feature concatenation manifold. In the second part, I will discuss how to impose scale-equivariance in the network representation by conducting joint convolutions across space and the scaling group. The stability of the equivariant representation to nuisance input deformations is also proved under mild assumptions on the Fourier-Bessel norm of the filter expansion coefficients.
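As a cartoon of the joint convolution across space and the scaling group mentioned above (a toy sketch over a discrete set of scales, not the construction from the paper): the same filter is dilated to each scale level and convolved with the input, producing a feature map indexed by both position and scale.

```python
import numpy as np

def scale_conv1d(signal, filt, scales=(1, 2, 4)):
    """Toy joint space/scale convolution: convolve the signal with the
    filter dilated to each scale, producing an output indexed by
    (scale, position). Integer dilation stands in for a proper
    rescaling of the filter."""
    out = []
    for s in scales:
        dilated = np.zeros((len(filt) - 1) * s + 1)
        dilated[::s] = filt                       # dilate the filter by factor s
        out.append(np.convolve(signal, dilated, mode="same"))
    return np.stack(out)                          # shape: (num_scales, len(signal))

x = np.sin(np.linspace(0, 6 * np.pi, 128))
h = np.array([1.0, 0.0, -1.0])                    # simple derivative-like filter
features = scale_conv1d(x, h)
```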