Talk 1: High Dimensional Elliptical Sliced Inverse Regression in non-Gaussian Distributions
Talk 2: Sufficient Dimension Reduction for Classification
Time: Saturday, July 21, 2018, 8:30 a.m.–12:00 noon
Venue: Room 415, New Mathematics Building
Sliced inverse regression (SIR) is the most widely used sufficient dimension reduction method, owing to its simplicity, generality and computational efficiency. However, when the distribution of the covariates deviates from the multivariate normal, the estimation efficiency of SIR drops considerably. In this paper, we propose a robust alternative to SIR, called elliptical sliced inverse regression (ESIR), for analysing high-dimensional, elliptically distributed data. Elliptically distributed data arise widely, especially in finance and economics, where the data are often heavy-tailed. To handle heavy-tailed elliptical covariates, we make novel use of the multivariate Kendall's tau matrix within a generalized eigenvector framework for sufficient dimension reduction. Methodologically, we present a practical algorithm for our method. Theoretically, we investigate the asymptotic behavior of the ESIR estimator in the high-dimensional setting. Extensive simulation results show that ESIR significantly improves estimation efficiency in heavy-tailed scenarios, and analysis of two real data sets further demonstrates the effectiveness of our method. Moreover, ESIR extends readily to most other sufficient dimension reduction methods and applies to non-elliptical heavy-tailed distributions.
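To make the construction concrete, here is a minimal numerical sketch (not the authors' implementation) of the idea behind ESIR. Classical SIR finds directions as leading generalized eigenvectors of M v = λ Σ v, where M is the between-slice covariance of slice means and Σ is the sample covariance; the sketch substitutes the sample multivariate Kendall's tau matrix for Σ. All function names and the slicing scheme are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def multivariate_kendalls_tau(X):
    """Sample multivariate Kendall's tau matrix: the average outer
    product of normalized pairwise differences (X_i - X_j)/||X_i - X_j||."""
    n, p = X.shape
    K = np.zeros((p, p))
    count = 0
    for i in range(n):
        d = X[i] - X[i + 1:]                     # all pairs (i, j) with j > i
        norms = np.linalg.norm(d, axis=1)
        keep = norms > 0                         # skip exact ties
        u = d[keep] / norms[keep, None]
        K += u.T @ u
        count += keep.sum()
    return K / count

def sir_slice_matrix(X, y, H=5):
    """Between-slice covariance of slice means (standard SIR kernel M)."""
    n, p = X.shape
    order = np.argsort(y)
    xbar = X.mean(axis=0)
    M = np.zeros((p, p))
    for idx in np.array_split(order, H):         # H slices of roughly equal size
        m = X[idx].mean(axis=0) - xbar
        M += (len(idx) / n) * np.outer(m, m)
    return M

def esir_directions(X, y, d=1, H=5):
    """Hypothetical ESIR sketch: top-d generalized eigenvectors of
    M v = lambda K v, with Kendall's tau K in place of the covariance."""
    M = sir_slice_matrix(X, y, H)
    K = multivariate_kendalls_tau(X)
    vals, vecs = eigh(M, K)                      # generalized symmetric eigenproblem
    return vecs[:, np.argsort(vals)[::-1][:d]]
```

On heavy-tailed data (e.g. multivariate t covariates) the Kendall's tau matrix remains well behaved where the sample covariance becomes unstable, which is the motivation for the substitution.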
We propose a new sufficient dimension reduction approach tailored to high-dimensional classification. This novel method, named maximal mean variance (MMV), is motivated by the mean variance index first proposed by Cui, Li and Zhong (2015), which measures the dependence between a categorical random variable with multiple classes and a continuous random variable. Our method imposes quite mild restrictions on the predictor variables and retains the model-free advantage of requiring no estimate of the link function. Consistency of the MMV estimator is established under regularity conditions for both fixed and diverging dimension p, and the number of response classes is also allowed to diverge with the sample size n. We further establish asymptotic normality of the estimator when the dimension of the predictor vector is fixed. Moreover, although a definitive theoretical justification is still lacking, our method works quite well even when p >> n. Substantial gains in classification efficiency are verified by extensive simulation studies and real data analysis.
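As a rough illustration of the criterion, the sketch below computes the sample mean variance index of Cui, Li and Zhong (2015) for a projected predictor and then searches for a direction maximizing it. The crude random-direction search is a stand-in assumption, not the optimization actually used by MMV; function names are illustrative.

```python
import numpy as np

def mv_index(z, y):
    """Sample mean variance index MV(z|y) = sum_r p_r * mean_j (F_r(z_j) - F(z_j))^2,
    where F is the ECDF of z and F_r the ECDF within class r."""
    n = len(z)
    F = np.searchsorted(np.sort(z), z, side='right') / n   # pooled ECDF at each z_j
    mv = 0.0
    for r in np.unique(y):
        mask = y == r
        p_r = mask.mean()
        Fr = (z[mask][None, :] <= z[:, None]).mean(axis=1)  # class-r ECDF at each z_j
        mv += p_r * np.mean((Fr - F) ** 2)
    return mv

def mmv_direction(X, y, n_starts=200, seed=0):
    """Hypothetical MMV sketch: pick the unit direction, among random
    candidates, whose projection beta'X maximizes the MV index."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    best, best_mv = None, -np.inf
    for _ in range(n_starts):
        b = rng.standard_normal(p)
        b /= np.linalg.norm(b)
        mv = mv_index(X @ b, y)
        if mv > best_mv:
            best, best_mv = b, mv
    return best
```

The MV index is zero when the projection is independent of the class label and grows with the discrepancy between the conditional distributions, which is why maximizing it yields a discriminative direction without modeling any link function.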