All Seminars

Title: Harmonic measure, reduced extremal length and quasicircles
Defense: Dissertation
Speaker: Huiqiang Shi of Emory University
Contact: Huiqiang Shi, huiqiang.shi@emory.edu
Date: 2016-08-10 at 12:00PM
Venue: W302
Abstract:
This dissertation is devoted to the study of some fundamental properties of the sewing homeomorphism induced by a Jordan domain. In Chapter 2, we study two important conformal invariants, the extremal distance and the reduced extremal distance, give an estimate of the extremal distance in the unit disk, and compare the two invariants. In Chapters 3 and 4, we give several necessary and sufficient conditions for the sewing homeomorphism of a Jordan domain to be bi-Lipschitz or bi-Hölder, using harmonic measure, extremal distance and reduced extremal distance. Furthermore, in Chapter 5, we obtain some equivalent conditions for a Jordan curve to be a quasicircle. In Chapter 6, we use the Robin capacity to define a new index and use it to characterize the unit circle.
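For context, recall the classical definition of the extremal length of a curve family Γ in a domain Ω and of the extremal distance between two boundary sets E and F (the reduced extremal distance studied in the thesis is a related invariant):
\[
\lambda(\Gamma) \;=\; \sup_{\rho}\,
\frac{\big(\inf_{\gamma\in\Gamma}\int_{\gamma}\rho\,|dz|\big)^{2}}
{\iint_{\Omega}\rho^{2}\,dx\,dy},
\qquad
d_{\Omega}(E,F) \;=\; \lambda\big(\Gamma(E,F;\Omega)\big),
\]
where the supremum is taken over nonnegative Borel densities ρ on Ω and Γ(E, F; Ω) denotes the family of curves joining E to F in Ω.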
Title: Optimal Investment Strategies Based on Financial Crisis Indicators
Seminar: Quantitative Finance
Speaker: Antoine Kornprobst of University of Paris I, Pantheon Sorbonne
Contact: Michele Benzi, benzi@mathcs.emory.edu
Date: 2016-06-15 at 1:00PM
Venue: MSC W303
Abstract:
The main objective of this study is to build successful investment strategies and devise optimal portfolio structures by exploiting the forecasting power of our financial crisis indicators, which are based on random matrix theory. Using daily data consisting of the components of a major equity index, such as the Standard and Poor’s 500 or the Shanghai-Shenzhen CSI 300, we consider two kinds of financial crisis indicators. The first kind measures the Hellinger distance between the empirical distribution of the eigenvalues of the correlation matrix of the index components and a reference distribution built to reflect either a calm or an agitated market situation. The second kind studies the spectral radius of the correlation matrix of the index components, whose coefficients are weighted to give more importance to the stocks that satisfy a chosen characteristic related to the structure of the index, market conditions, or the nature of the companies in the index. For example, we give more weight in the computation of the indicators to the most traded stocks, to the companies with the highest market capitalization, or to the companies with an optimal debt-to-capital ratio (financial leverage). Our optimal investment strategies exploit the forecasting power of these indicators to produce a daily ‘buy’, ‘sell’ or ‘stay put’ signal that anticipates most market downturns while keeping the number of false positives at an acceptable level. Such tools are valuable both for investors, who can use them to anticipate market evolution in order to maximize profits and limit losses, and for market regulators, who can use them to anticipate systemic events and attempt to mitigate their effects.
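As context for the first family of indicators, here is a minimal Python sketch of the underlying idea: compare the eigenvalue distribution of a rolling correlation matrix with a reference density via the Hellinger distance. The rolling window, binning, and the choice of a Marchenko-Pastur density as the "calm market" reference are assumptions made here for illustration, not details taken from the paper.

import numpy as np

def marchenko_pastur_pdf(x, n_assets, n_obs):
    # Marchenko-Pastur density (unit variance), used here as the "calm market" reference.
    c = n_assets / n_obs
    lam_minus, lam_plus = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
    pdf = np.zeros_like(x)
    inside = (x > lam_minus) & (x < lam_plus)
    pdf[inside] = np.sqrt((lam_plus - x[inside]) * (x[inside] - lam_minus)) / (2 * np.pi * c * x[inside])
    return pdf

def hellinger(p, q):
    # Hellinger distance between two discrete probability vectors.
    p, q = p / p.sum(), q / q.sum()
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def crisis_indicator(returns, window=250, bins=50):
    # Daily indicator: distance between the eigenvalue histogram of the rolling
    # correlation matrix and the calm-market reference density.
    n_days, n_assets = returns.shape
    edges = np.linspace(0.0, 4.0, bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    ref = marchenko_pastur_pdf(centers, n_assets, window) + 1e-12
    values = []
    for t in range(window, n_days):
        corr = np.corrcoef(returns[t - window:t].T)
        eigvals = np.linalg.eigvalsh(corr)
        hist, _ = np.histogram(eigvals, bins=edges)
        values.append(hellinger(hist + 1e-12, ref))
    return np.array(values)

# Synthetic example: 100 correlated assets over 1000 trading days.
rng = np.random.default_rng(0)
sigma = 0.1 * np.ones((100, 100)) + 0.9 * np.eye(100)
returns = rng.standard_normal((1000, 100)) @ np.linalg.cholesky(sigma).T
print(crisis_indicator(returns)[:5])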
Title: Can Compressed Sensing Accelerate High-Resolution Photoacoustic Tomography?
Seminar: Numerical Analysis and Scientific Computing
Speaker: Dr. Felix Lucka of University College London
Contact: Lars Ruthotto, lruthotto@emory.edu
Date: 2016-05-20 at 1:00PM
Venue: W306
Abstract:
The acquisition time of current high-resolution 3D photoacoustic tomography (PAT) devices limits their ability to image dynamic processes in living tissue (4D PAT). In our work, we try to overcome this limitation by combining recent advances in spatio-temporal sub-sampling schemes, variational regularization and convex optimization with the development of tailored data acquisition systems. We first show that images with good spatial resolution can be obtained from suitably sub-sampled PAT data if sparsity-constrained image reconstruction techniques such as total variation regularization enhanced by Bregman iterations are used. A further increase of the dynamic frame rate can be achieved by exploiting the temporal redundancy of the data through the use of sparsity-constrained dynamic models. While simulated data from numerical phantoms will be used to illustrate the potential of the developed methods, we will also discuss the results of their application to different measured data sets. Furthermore, we will outline how to combine GPU computing and state-of-the-art optimization approaches to cope with the immense computational challenges imposed by 4D PAT. Joint work with Marta Betcke, Simon Arridge, Ben Cox, Nam Huynh, Edward Zhang and Paul Beard.
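For readers unfamiliar with sparsity-constrained reconstruction from sub-sampled data, the sketch below illustrates the principle on a toy problem: recovering a sparse signal from a small number of random measurements by iterative soft-thresholding (ISTA). It is a simplified stand-in for, not an implementation of, the total-variation/Bregman reconstruction described in the abstract; the sensing operator, the sparsity model and all parameters are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(1)
n, m, k = 400, 120, 10                           # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)     # random sub-sampling / sensing operator
y = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy sub-sampled data

def ista(A, y, lam=0.02, n_iter=500):
    # Minimize 0.5*||A x - y||^2 + lam*||x||_1 by proximal gradient descent.
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-thresholding
    return x

x_rec = ista(A, y)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))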
Title: Improving PDE approximation via anisotropic mesh adaptation
Seminar: Numerical Analysis and Scientific Computing
Speaker: Simona Perotto of MOX, Dept. Mathematics, Politecnico di Milano, Italy
Contact: Alessandro Veneziani, ale@mathcs.emory.edu
Date: 2016-05-06 at 1:00PM
Venue: W306
Abstract:
Anisotropic meshes have proved to be a powerful tool for improving the quality and the efficiency of numerical simulations in scientific computing, especially when dealing with phenomena characterized by directional features such as sharp fronts in aerospace applications or steep boundary or internal layers in viscous flows around bodies. In these contexts, standard isotropic meshes often turn out to be inadequate since they allow one to tune only the size of the mesh elements while completely missing the directional features of the phenomenon at hand. By contrast, anisotropic mesh adaptation makes it possible to control the size as well as the orientation and the shape of the mesh elements. In this presentation we focus on an anisotropic setting based on the concept of metric. In particular, to generate the adapted mesh, we derive a proper metric stemming from an error estimator. This procedure leads to optimal grids which minimize the number of elements for an assigned accuracy. After introducing the theoretical context, several test cases will be provided to emphasize the numerical benefits brought by an anisotropic approach. An overview of the ongoing research will complete the presentation.
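As a reminder of the metric framework mentioned above (the notation is generic, not taken from the talk): an anisotropic metric field encodes local sizes and directions through its spectral decomposition, and the adapted mesh is built so that every edge has approximately unit length in that metric,
\[
\mathcal{M}(\mathbf{x}) \;=\; R(\mathbf{x})^{T}\,\mathrm{diag}\big(h_1(\mathbf{x})^{-2},\,h_2(\mathbf{x})^{-2}\big)\,R(\mathbf{x}),
\qquad
\sqrt{\mathbf{e}^{T}\mathcal{M}\,\mathbf{e}} \;\approx\; 1 \ \text{ for each mesh edge } \mathbf{e},
\]
where the rows of R give the directions of stretching and h_1, h_2 the corresponding local mesh sizes; in the approach described in the abstract, the metric is derived from an a posteriori error estimator.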
Title: Geometrically unfitted finite elements for PDEs posed on surfaces and in the bulk
Seminar: Numerical Analysis and Scientific Computing
Speaker: Maxim Olshanskii of University of Houston
Contact: Michele Benzi, benzi@mathcs.emory.edu
Date: 2016-04-28 at 4:00PM
Venue: W306
Abstract:
Geometrically unfitted finite element methods are known in the literature under different names, e.g., XFEM, cut FEM, and trace FEM. These discretizations are mainly developed for the efficient numerical treatment of differential equations posed in domains of complex geometry and/or having propagating interfaces. Unlike immersed boundary methods, these discretizations typically treat interfaces in a 'sharp' way, but avoid fitting the mesh. The talk will discuss some recent analysis and developments of unfitted FEM.
Title: The Kolchin Irreducibility Theorem
Seminar: Algebra
Speaker: Taylor Dupuy of University of Vermont
Contact: David Zureick-Brown, dzb@mathcs.emory.edu
Date: 2016-04-26 at 4:00PM
Venue: W304
Abstract:
A jet bundle is a higher-order version of a tangent bundle (one for each positive integer), and its points correspond to truncated power series on your original variety. It turns out that if you have a singular variety these spaces get all messed up: they have extra irreducible components above the singular locus (and encode interesting singularity invariants). Magically, if we take the limit of these spaces, whose points correspond to full power series, the resulting space becomes irreducible again! This is Kolchin's irreducibility theorem. We will talk about this theorem and about what happens when power series are replaced by Witt vectors. This talk is based on joint work with Lance Edward Miller and James Freitag.
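Concretely, for a variety X over a field k, the finite-level jet spaces and the arc space admit the functor-of-points descriptions
\[
J_m X(k) \;=\; X\big(k[t]/(t^{m+1})\big),
\qquad
X_{\infty}(k) \;=\; X\big(k[[t]]\big) \;=\; \varprojlim_m J_m X(k),
\]
and Kolchin's irreducibility theorem asserts that, in characteristic zero, if X is irreducible then so is the arc space $X_{\infty}$, even though the individual jet spaces $J_m X$ may acquire extra components over the singular locus.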
Title: Life is Better with Liberal Arts
Evans Hall Awards Ceremony and Lecture: N/A
Speaker: Chris Schoettle of Enservio
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2016-04-25 at 4:00PM
Venue: MSC E208
Abstract:
Title: A parallel solver for inverse transport problems
Seminar: Numerical Analysis and Scientific Computing
Speaker: Dr. Andreas Mang of The University of Texas at Austin
Contact: Lars Ruthotto, lruthotto@emory.edu
Date: 2016-04-22 at 1:00PM
Venue: W306
Abstract:
In this talk we will discuss fast algorithms for large-scale inverse transport problems. These types of problems appear in numerous areas such as weather prediction, ocean physics, and the reconstruction of porous media flows. Here we consider the problem of diffeomorphic image registration, with applications in medical imaging. We use a PDE-constrained optimization formulation, where the constraints are the transport equations for a given scalar field, which in our case is a grayscale image. We will compare semi-Lagrangian and spectral collocation schemes, discuss a two-level Hessian preconditioner, and showcase a parallel implementation on distributed-memory architectures. We invert for a velocity field that governs the transport equation for the image deformation. The objective functional consists of an L^2 mismatch term and a regularization functional that penalizes the H^k-norm of the velocity field and its divergence. We discretize the optimality conditions using a pseudospectral discretization in space with a Fourier basis. We use a globalized, preconditioned, inexact, matrix-free, reduced-space Newton-Krylov method to solve for an optimal, diffeomorphic flow field. The reduced Hessian is an ill-conditioned, dense, and compact operator; efficiently solving this system represents a significant challenge. We introduce and analyze a semi-Lagrangian formulation that, together with a nested two-level Hessian preconditioner, yields a 20x speedup compared to the state of the art. We will showcase convergence results and assess strong and weak scalability of our solver for problem sizes of up to 25 billion unknowns on up to 1024 compute nodes of the Texas Advanced Computing Center's systems. As a highlight, we can invert for half a billion unknowns in 140 seconds on one node with 16 MPI tasks, and we can reduce the time to solution by 20x on 32 compute nodes with 512 MPI tasks. This is joint work with George Biros and Amir Gholami.
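Schematically, and consistent with the description above (the notation is generic rather than taken from the speaker's papers), the registration problem can be written as
\[
\min_{v}\;\frac{1}{2}\int_{\Omega}\big(m(x,1)-m_T(x)\big)^2\,dx
\;+\;\frac{\beta}{2}\,\|v\|_{H^k}^2
\;+\;\frac{\gamma}{2}\,\|\nabla\cdot v\|_{L^2}^2
\quad\text{subject to}\quad
\partial_t m + v\cdot\nabla m = 0,\;\; m(\cdot,0)=m_0,
\]
where m_0 is the image to be deformed, m_T the reference image, and v the velocity field being inverted for; the reduced-space Newton-Krylov solver mentioned in the abstract operates on the first-order optimality conditions of such a problem.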
Title: Computational Phenotyping using Tensor Factorization and Tensor Network
Seminar: Computer Science
Speaker: Dr. Jimeng Sun of Georgia Institute of Technology
Contact: Li Xiong, lxiong@emory.edu
Date: 2016-04-22 at 3:00PM
Venue: MSC W301
Abstract:
Computational phenotyping is the process of converting heterogeneous electronic health records (EHRs) into meaningful clinical concepts (phenotypes). Tensor factorization has been shown to be a successful unsupervised approach for discovering phenotypes. However, tensor methods have some major limitations for phenotyping: 1) they are unable to incorporate existing medical knowledge; and 2) they fail to handle high-order tensors (e.g., order > 5). We will talk about two of our recent developments addressing these challenges. First, we proposed Rubik, a constrained non-negative tensor factorization and completion method for phenotyping. Rubik incorporates 1) guidance constraints to align with existing medical knowledge, and 2) pairwise constraints for obtaining distinct, non-overlapping phenotypes. Rubik also has built-in tensor completion that can significantly alleviate the impact of noisy and missing data. We evaluate Rubik on two large EHR datasets. Our results show that Rubik can discover more meaningful and distinct phenotypes than the baselines. Second, we extended a theoretical framework called tensor networks for analyzing high-order tensors. We developed an efficient sparse hierarchical Tucker model (Sparse H-Tucker) for finding interpretable tree-structured factorizations of sparse high-order tensors. Sparse H-Tucker scales nearly linearly in the number of non-zero tensor elements. We applied Sparse H-Tucker to a real EHR dataset to learn a disease hierarchy. The resulting tree structure provides an interpretable disease hierarchy, which is confirmed by a clinical expert.

Bio: Jimeng Sun is an Associate Professor in the School of Computational Science and Engineering, College of Computing, at the Georgia Institute of Technology. Prior to joining Georgia Tech, he was a research staff member at the IBM T.J. Watson Research Center. His research focuses on health analytics using electronic health records and data mining, especially on designing novel tensor analysis and similarity learning methods and developing large-scale predictive modeling systems. He has published over 80 papers and filed over 20 patents (5 granted). He received the ICDM best research paper award in 2008, the SDM best research paper award in 2007, and the KDD dissertation runner-up award in 2008. Dr. Sun received his B.S. and M.Phil. in Computer Science from the Hong Kong University of Science and Technology in 2002 and 2003, respectively, and his PhD in Computer Science from Carnegie Mellon University in 2007.
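To make the tensor-factorization idea in the abstract above concrete, here is a minimal numpy sketch of plain non-negative CP factorization of a synthetic patient x diagnosis x medication count tensor, using multiplicative updates. It is not Rubik (which adds guidance and pairwise constraints plus completion) nor Sparse H-Tucker; the synthetic tensor, the rank and the update scheme are illustrative assumptions.

import numpy as np

def khatri_rao(U, V):
    # Column-wise Kronecker product of U (I x R) and V (J x R) -> (I*J x R).
    return np.einsum('ir,jr->ijr', U, V).reshape(U.shape[0] * V.shape[0], -1)

def nonneg_cp(X, rank, n_iter=200, eps=1e-9):
    # Factor X (I x J x K) as sum_r a_r o b_r o c_r with non-negative factors.
    I, J, K = X.shape
    rng = np.random.default_rng(0)
    A, B, C = rng.random((I, rank)), rng.random((J, rank)), rng.random((K, rank))
    X1 = X.reshape(I, J * K)                      # mode-1 unfolding
    X2 = X.transpose(1, 0, 2).reshape(J, I * K)   # mode-2 unfolding
    X3 = X.transpose(2, 0, 1).reshape(K, I * J)   # mode-3 unfolding
    for _ in range(n_iter):
        A *= (X1 @ khatri_rao(B, C)) / (A @ ((B.T @ B) * (C.T @ C)) + eps)
        B *= (X2 @ khatri_rao(A, C)) / (B @ ((A.T @ A) * (C.T @ C)) + eps)
        C *= (X3 @ khatri_rao(A, B)) / (C @ ((A.T @ A) * (B.T @ B)) + eps)
    return A, B, C   # columns of A, B, C are read off as candidate "phenotypes"

# Synthetic example: 200 patients, 30 diagnoses, 20 medications, 4 phenotypes.
rng = np.random.default_rng(1)
A0, B0, C0 = rng.random((200, 4)), rng.random((30, 4)), rng.random((20, 4))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0) + 0.01 * rng.random((200, 30, 20))
A, B, C = nonneg_cp(X, rank=4)
print("top diagnoses for phenotype 0:", np.argsort(B[:, 0])[::-1][:5])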
Title: Totaro's Question for Tori of Low Rank
Seminar: Algebra
Speaker: Reed Sarney of Emory University
Contact: David Zureick-Brown, dzb@mathcs.emory.edu
Date: 2016-04-19 at 4:00PM
Venue: W304
Abstract:
Let k be a field, let G/k be a smooth connected linear algebraic group, and let X be a G-torsor over k. Generalizing a question of Serre, Totaro asked whether the existence of a zero-cycle on X of degree d greater than or equal to 1 implies the existence of a closed étale point on X of degree dividing d. This question is entirely unexplored in the literature for algebraic tori. We settle Totaro's question affirmatively for algebraic tori of rank less than or equal to 2.