Upcoming Seminars

Title: Large-Scale Parameter Estimation in Geophysics and Machine Learning
Defense: Dissertation
Speaker: Samy Wu Fung of Emory University
Contact: Samy Wu Fung, samy.wu@emory.edu
Date: 2019-03-20 at 1:00PM
Venue: W301
Abstract:
The ability to collect large amounts of data with relative ease has given rise to new opportunities for scientific discovery. It has led to a new class of large-scale parameter estimation problems in geophysics, machine learning, and numerous other applications. Traditionally, parameter estimation aims to infer parameters in a physical model from indirect measurements, where the model is often given by a partial differential equation (PDE). Here, we also associate parameter estimation with machine learning, where rather than having a PDE as the model, we have a hypothesis function, e.g., a neural network, and the parameters of interest correspond to the weights. A common thread in these problems is their massive computational expense. The underlying parameter space in both applications is typically very high-dimensional. This makes the optimization computationally demanding, and sometimes intractable, when large amounts of data are available.

In this thesis, we address two general approaches to reduce the computational burdens of big-data parameter estimation in geophysics and machine learning. The first approach is an adaptive model reduction scheme that reduces the computational complexity of the model while achieving highly accurate solutions. This approach is tailored to problems in geophysics, where PDEs must be solved numerous times throughout the optimization. The second approach consists of novel parallel/distributed methods that lower the time-to-solution by avoiding communication and latency, and can be used in both applications. We demonstrate the potential of our methods on several geophysics and image classification problems.
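For orientation (a generic formulation, not quoted from the thesis), both settings fit a single optimization template, where $F$ is either a PDE-based forward model or a neural-network hypothesis function with parameters $\theta$:
\[
\min_{\theta}\;\frac{1}{2}\sum_{j=1}^{N}\big\|F(\theta;x_j)-d_j\big\|^2 + R(\theta),
\]
with data pairs $(x_j,d_j)$ and an optional regularizer $R$. The cost of evaluating $F$ and its derivatives for large $N$ and high-dimensional $\theta$ is the computational burden the thesis addresses.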
Title: Reduced Models and Parallel Computing for Uncertainty Quantification in Cardiovascular Mathematics
Defense: Dissertation
Speaker: Sofia Guzzetti of Emory University
Contact: Sofia Guzzetti, sofia.guzzetti@emory.edu
Date: 2019-03-21 at 10:00AM
Venue: E308A
Abstract:
Computational fluid dynamics (CFD) has been progressively adopted in the last decade for studying the role of blood flow in the development of arterial diseases. While computational (\textit{in silico}) investigations - compared to more traditional \textit{in vitro} and \textit{in vivo} studies - are generally more flexible and cost-effective, the adoption of CFD for computer-aided clinical trials and surgical planning is still an open challenge. The computational time to accurately and reliably solve mathematical models can be too long for the fast-paced clinical environment - especially in emergency scenarios - and quantifying the reliability of the results comes at an even higher computational cost. Moreover, the \textit{in silico} analysis of large numbers of patients calls for significant computational resources. Hospitals and healthcare institutions are expected to outsource numerical simulations, which, however, raises concerns about privacy, data protection, and efficiency in terms of cost and performance. In such a complex, multi-faceted scenario, this work addresses the challenges described above by (i) introducing a novel reduced model that guarantees levels of accuracy comparable to those achieved by high-fidelity 3D models, at roughly the same computational cost as the inexpensive yet inaccurate 1D models, by combining the Finite Element Method to describe the main stream dynamics with Spectral Methods to retrieve the transverse components; (ii) designing a new method for uncertainty quantification in large-scale networks that greatly enhances parallelism by performing uncertainty quantification at the subsystem level and propagating uncertainty information, encoded as polynomial chaos coefficients, via overlapping domain decomposition techniques; and (iii) providing an objective criterion to measure the performance of different parallel architectures based on the user's priorities in terms of budget and tolerance to delay, reducing the execution time by choosing a task-worker mapping strategy ahead of simulation time, and optimizing the amount of overlap in the domain decomposition phase.
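As background (a generic form, not specific to this work), a polynomial chaos expansion of an uncertain field $u$ in terms of the random inputs $\boldsymbol{\xi}$ reads
\[
u(x,t,\boldsymbol{\xi})\;\approx\;\sum_{k=0}^{P} u_k(x,t)\,\Psi_k(\boldsymbol{\xi}),
\]
where the $\Psi_k$ are polynomials orthogonal with respect to the distribution of $\boldsymbol{\xi}$; in the approach of item (ii), it is the coefficients $u_k$ that encode the uncertainty information exchanged between overlapping subdomains.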
Title: Attacking neural networks with poison frogs: a theoretical look at adversarial examples in machine learning
Seminar: Numerical Analysis and Scientific Computing
Speaker: Thomas Goldstein of University of Maryland
Contact: Lars Ruthotto, lruthotto@emory.edu
Date: 2019-03-22 at 2:00PM
Venue: W301
Abstract:
Neural networks solve complex computer vision problems with human-like accuracy. However, it has recently been observed that neural nets are easily fooled and manipulated by "adversarial examples," in which an attacker manipulates the network by making tiny changes to its inputs. In this talk, I give a high-level overview of adversarial examples, and then discuss a newer type of attack called "data poisoning," in which a network is manipulated at train time rather than test time. Then, I explore adversarial examples from a theoretical viewpoint and try to answer a fundamental question: "Are adversarial examples inevitable?"

Bio: Tom is an Assistant Professor at the University of Maryland. His research lies at the intersection of optimization and distributed computing, and targets applications in machine learning and image processing. He designs optimization methods for a wide range of platforms, including powerful cluster/cloud computing environments for machine learning and computer vision, as well as resource-limited integrated circuits and FPGAs for real-time signal processing. Before joining the faculty at Maryland, he completed his PhD in Mathematics at UCLA and was a research scientist at Rice University and Stanford University. He has been the recipient of several awards, including SIAM’s DiPrima Prize, a DARPA Young Faculty Award, and a Sloan Fellowship.
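One standard construction of an adversarial example, not necessarily the one analyzed in the talk, is the fast gradient sign method, which perturbs an input in the direction that increases the loss. A minimal NumPy sketch for a hypothetical binary logistic classifier (all names and values illustrative):

    import numpy as np

    def fgsm_attack(x, y, w, b, eps):
        # Hypothetical binary logistic model p = sigmoid(w.x + b), label y in {0, 1}.
        p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
        grad_x = (p - y) * w              # gradient of the cross-entropy loss w.r.t. the input x
        return x + eps * np.sign(grad_x)  # tiny perturbation in the loss-increasing direction

    # usage sketch: x_adv = fgsm_attack(x, y, w, b, eps=0.05)

The point is only that the perturbation is bounded by eps in each coordinate yet chosen adversarially, which is what makes such examples hard to detect by eye.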
Title: TBA
Seminar: Algebra
Speaker: Sonny Arora of Emory
Contact: David Zureick-Brown, dzb@mathcs.emory.edu
Date: 2019-03-26 at 4:00PM
Venue: W201
Abstract:
Title: Matrix Computations and Optimization for Spectral Computed Tomography
Defense: Dissertation
Speaker: Yunyi Hu of Emory University
Contact: Yunyi Hu, yunyi.hu@emory.edu
Date: 2019-03-29 at 3:00PM
Venue: W201
Abstract:
In the area of image science, the emergence of spectral computed tomography (CT) detectors highlights the concept of quantitative imaging, in which not only reconstructed images but also the weights of the different materials that compose the object are provided. For distinct types of detectors and noise, various models and techniques are developed to capture different features. In this thesis, we focus on optimization, preconditioning, and model development for spectral CT. For simple energy-discriminating detectors, a nonlinear optimization framework is built on a Poisson likelihood estimator and bound constraints. A nonlinear interior-point trust region method is implemented to compute the solution. For energy-windowed spectral CT, a nonlinear least squares approach with bound constraints is proposed to describe the problem, and a two-step method combining a projected line search with a trust region approach, incorporated with a stepwise preconditioner, is used to solve it. In addition, a weighted least squares formulation is derived from the Gaussian noise assumption, and another preconditioner, based on a rank-1 approximation, is used to obtain robust reconstructions. The Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), along with a projection step, is used to calculate the solution iteratively. As an alternative to a direct solver, a two-step model is developed using an ancillary variable. With this two-step model, a row-wise computational method is proposed, which further reduces memory requirements and improves solution accuracy. Numerous numerical experiments are conducted to demonstrate the strengths of the methods, and real-life examples are presented to show possible applications.
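As background on the last ingredient mentioned above (a generic sketch, not the exact formulation in the thesis), FISTA with a projection step applied to a nonnegativity-constrained least-squares problem might look as follows, with A and b standing for a placeholder system matrix and data:

    import numpy as np

    def projected_fista(A, b, n_iter=200):
        # Generic sketch: min_{x >= 0} 0.5 * ||A x - b||^2, solved by FISTA where the
        # proximal step reduces to a projection onto the nonnegative orthant.
        L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient A^T(Ax - b)
        x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
        for _ in range(n_iter):
            grad = A.T @ (A @ z - b)
            x_new = np.maximum(z - grad / L, 0.0)   # gradient step followed by projection onto x >= 0
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov-type extrapolation
            x, t = x_new, t_new
        return x

In the thesis the objective, constraints, and preconditioning are more elaborate; the sketch only illustrates how the projection step fits into the FISTA iteration.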
Title: TBA
Seminar: Algebra
Speaker: Darren Glass of Gettysburg College
Contact: David Zureick-Brown, dzb@mathcs.emory.edu
Date: 2019-04-02 at 4:00PM
Venue: W201
Abstract:
Title: On an Eigenvector-Dependent Nonlinear Eigenvalue Problem
Seminar: Numerical Analysis and Scientific Computing
Speaker: Ren-Cang Li of University of Texas at Arlington
Contact: Yuanzhe Xi, yxi26@emory.edu
Date: 2019-04-05 at 2:00PM
Venue: W301
Abstract:
We first establish existence and uniqueness conditions for the solvability of an algebraic eigenvalue problem with eigenvector nonlinearity. We then present a local and global convergence analysis for a self-consistent field (SCF) iteration for solving the problem. The well-known $\sin\Theta$ theorem in the perturbation theory of Hermitian matrices plays a central role. The near-optimality of the local convergence rate of the SCF iteration is demonstrated by examples from the discrete Kohn-Sham eigenvalue problem in electronic structure calculations and the maximization of the trace ratio in linear discriminant analysis for dimension reduction. This is joint work with Yunfeng Cai (Peking University), Lei-Hong Zhang (Shanghai University of Finance and Economics), and Zhaojun Bai (University of California at Davis).
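For orientation, writing the problem in the usual notation $H(V)V = V\Lambda$ (notation not taken from the talk), an SCF iteration repeatedly freezes the eigenvector dependence and solves the resulting linear eigenproblem. A minimal NumPy sketch under these assumptions:

    import numpy as np

    def scf(H, V0, n_iter=100, tol=1e-10):
        # Generic sketch: H is a (hypothetical) callable mapping an orthonormal n-by-k
        # matrix V to a Hermitian n-by-n matrix; V0 is an initial orthonormal basis.
        V = V0
        for _ in range(n_iter):
            evals, evecs = np.linalg.eigh(H(V))     # freeze V, solve the linear eigenproblem
            V_new = evecs[:, :V.shape[1]]           # eigenvectors of the k smallest eigenvalues
            gap = np.linalg.norm(V_new @ V_new.conj().T - V @ V.conj().T)
            if gap < tol:                           # compare the computed subspaces
                return V_new, evals[:V.shape[1]]
            V = V_new
        return V, evals[:V.shape[1]]

The convergence analysis in the talk concerns exactly when, and how fast, such an iteration is guaranteed to converge.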
Title: Modular linear differential equations
Seminar: Algebra
Speaker: Kiyokazu Nagatomo of Osaka University
Contact: David Zureick-Brown, dzb@mathcs.emory.edu
Date: 2019-04-16 at 4:00PM
Venue: W201
Abstract:
The most naive definition of \textit{modular linear differential equations} (MLDEs) would be linear differential equations whose space of solutions is invariant under the weight $k$ slash action of $\Gamma_1=SL_2(\mathbb{Z})$, where $k$ is fixed. Then, under an analytic condition on the coefficient functions and the Wronskian of a basis of the space of solutions, we have (obvious) expressions of MLDEs as
\[
L(f) \,=\, \mathfrak{d}_k^n(f)+\sum_{i=2}^n P_{2i}\,\mathfrak{d}_k^{n-i}(f),
\]
where $P_{2i}$ is a modular form of weight $2i$ on $SL_2(\mathbb{Z})$ and $\mathfrak{d}_k(f)$ is the \textit{Serre derivative}. (Of course, we could replace $\Gamma_1$ by a Fuchsian subgroup of $SL_2(\mathbb{R})$ and allow the modular forms $P_{2i}$ to be meromorphic.) However, the iterated Serre derivative $\mathfrak{d}_k^n(f)$ (also called ``the higher Serre derivation'' because this operator preserves modularity) is very complicated, since it involves the Eisenstein series $E_2$. MLDEs can, of course, also be given in the form
\[
\mathsf{L}(f) \,=\, D^n(f)+\sum_{i=1}^n Q_i D^{n-i}(f),
\qquad\text{where } D=\frac{1}{2\pi\sqrt{-1}}\frac{d}{d\tau}.
\]
Then it is not easy to know whether the equation above is an MLDE, beyond the fact that the $Q_i$ are quasimodular forms. (It seems hopeless to verify whether $\mathsf{L}(f)=0$ is an MLDE.) Very recently, Y.~Sakai and D.~Zagier (my collaborators) found formulas for $\mathsf{L}(f)$ using the Rankin-Cohen products between $f$ and modular forms $g_i$ of weight $2i$, which are linear functions of the derivatives of the $Q_j$. Moreover, there are \textit{inversion formulas} which express the $Q_i$ as linear functions of the derivatives of the $g_j$. The most important fact is that the order $n$ and $n-1$ parts are equal to the so-called higher Serre derivative in the sense of Kaneko and Koike, where the group is $\Gamma_1$. (This can be proved for any Fuchsian group.)
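For reference (standard background, not part of the abstract), the Serre derivative of a modular form $f$ of weight $k$ is usually normalized as
\[
\mathfrak{d}_k(f)\,=\,D(f)-\frac{k}{12}\,E_2\,f,\qquad D=\frac{1}{2\pi\sqrt{-1}}\frac{d}{d\tau},
\]
and it maps modular forms of weight $k$ to modular forms of weight $k+2$, so the iterate $\mathfrak{d}_k^n$ in the first display above is naturally read as the composition $\mathfrak{d}_{k+2(n-1)}\circ\cdots\circ\mathfrak{d}_{k+2}\circ\mathfrak{d}_k$.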