Seminars: Sparse Estimation via Approximated Information Criteria; Statistical Aggregation in Big Data

2015-06-30  Xiaodong Pan

Topic one: Sparse Estimation via Approximated Information Criteria

Speaker: Dr. Xiaogang Su

Abstract:

We propose a new method for sparse estimation that directly optimizes an approximated information criterion. The main idea is to approximate the l0 norm with a continuous or smooth unit dent function (as exemplified by the hyperbolic tangent function). The proposed method bridges best subset selection and regularization by borrowing strength from both: it mimics best subset selection via a penalized likelihood approach, yet requires no tuning parameter. We further reformulate the problem with a reparameterization step so that it reduces to a single unconstrained, nonconvex yet smooth programming problem, which can be solved as efficiently as computing the MLE. The reparameterization tactic has the additional advantage of circumventing post-selection inference. The asymptotic properties of the proposed method are explored for both fixed and diverging dimensions. Both simulated experiments and empirical examples are provided for assessment and illustration.
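For illustration only, one plausible form of such an approximated criterion, assuming the hyperbolic tangent surrogate mentioned in the abstract (the constants a and \lambda_n are notation introduced here, not taken from the talk):

\[
\hat{\boldsymbol{\beta}} \;=\; \arg\min_{\boldsymbol{\beta}} \; -2\,\ell(\boldsymbol{\beta}) \;+\; \lambda_n \sum_{j=1}^{p} \tanh\!\bigl(a\,\beta_j^{2}\bigr),
\]

where \ell is the log-likelihood, a > 0 is a fixed scale constant, and \lambda_n is dictated by the criterion itself (\lambda_n = 2 for AIC, \lambda_n = \log n for BIC), so no data-driven tuning is required. As a grows, \tanh(a\,\beta_j^{2}) approaches the indicator I(\beta_j \neq 0), recovering the exact l0 count. A reparameterization of the type \beta_j = \gamma_j \tanh(a\,\gamma_j^{2}) is one way such a problem could be turned into a smooth unconstrained one in \gamma, consistent with the abstract's description.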

Time: 9:00-10:00 am, July 3, 2015

Room: X2511

Profile: Dr. Xiaogang Su is an Associate Professor in the Department of Mathematical Sciences, University of Texas at El Paso (UTEP). He received his Ph.D. in Statistics from the University of California, Davis, in 2001. He has authored more than 60 scientific research articles in indexed journals, refereed conferences, and books. He has served as an associate editor for the Journal of Computational and Graphical Statistics (JCGS) (2010-present) and on the editorial board of Nursing Research (2012-present).


Topic two: Statistical Aggregation in Big Data

Speaker: Dr. Nan Lin

Abstract:

Big data problems present great challenges to statistical analysis, especially on the computational side. We consider a wide range of statistical inference problems in the big data setting. The statistical aggregation strategy is a divide-and-conquer approach that aims to achieve asymptotic equivalence to the full-data analysis. Besides resolving the memory and storage difficulties that arise with big data, it may also provide a computationally efficient strategy in non-big-data contexts. Through both theoretical proofs and simulations, we show that our method significantly reduces computational time while maintaining asymptotic efficiency.
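To make the divide-and-conquer idea concrete, here is a minimal sketch in Python/NumPy, assuming a simple linear model with equal-weight averaging of per-block estimates; the function aggregated_ols and the choice of ordinary least squares are illustrative only, not the speaker's specific aggregation estimator.

import numpy as np

def aggregated_ols(X, y, n_blocks):
    """Average per-block OLS estimates; under standard conditions the
    aggregate is asymptotically equivalent to full-data OLS."""
    block_estimates = []
    for Xb, yb in zip(np.array_split(X, n_blocks),
                      np.array_split(y, n_blocks)):
        # Each block's least-squares problem is solved independently,
        # so no step ever needs the full data set in memory.
        beta_b, *_ = np.linalg.lstsq(Xb, yb, rcond=None)
        block_estimates.append(beta_b)
    return np.mean(block_estimates, axis=0)

# Toy usage: 100,000 rows processed in 20 blocks of 5,000.
rng = np.random.default_rng(0)
X = rng.standard_normal((100_000, 5))
beta_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ beta_true + rng.standard_normal(100_000)
print(aggregated_ols(X, y, n_blocks=20))  # close to beta_true

Because only one block is touched at a time, the same code runs when the blocks live on separate machines or on disk, which is the memory/storage point made in the abstract.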

Time: 10:15-11:15 am, July 3, 2015

Room: X2511

Profile: Dr. Nan Lin is an Associate Professor in the Department of Mathematics, Washington University in St. Louis (USA). He received his Ph.D. from the Department of Statistics, University of Illinois at Urbana-Champaign, in 2003. His research interests lie in statistical computing for massive data, bioinformatics, Bayesian regularization, longitudinal and functional data analysis, and statistical applications in anesthesiology and cognition. He has served as an associate editor for Computational Statistics and Data Analysis (2011-present).

 
