Robust boosting for regression problems

Abstract

Gradient boosting algorithms construct a regression predictor using a linear combination of base learners. Boosting also offers an approach to obtaining robust non-parametric regression estimators that are scalable to applications with many explanatory variables. The robust boosting algorithm is based on a two-stage approach, similar to what is done for robust linear regression. It first minimizes a robust residual scale estimator, and then improves it by optimizing a bounded loss function. Unlike previous robust boosting proposals, this approach does not require computing an ad hoc residual scale estimator in each boosting iteration. Since the loss functions involved in this robust boosting algorithm are typically non-convex, a reliable initialization step is required, such as an L1 regression tree, which is also fast to compute. A robust variable importance measure can also be calculated via a permutation procedure. Thorough simulation studies and several data analyses show that, when no atypical observations are present, the robust boosting approach works as well as standard gradient boosting with a squared loss. Furthermore, when the data contain outliers, the robust boosting estimator outperforms the alternatives in terms of prediction error and variable selection accuracy.
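
The abstract describes the method only at a high level. As a rough illustration (not the authors' implementation), the sketch below shows what the second stage and the permutation importance measure might look like: boosting shallow trees on a bounded loss (Tukey's bisquare is used here as an example) with a fixed robust residual scale. All names (`robust_boost`, `tukey_psi`, `tukey_rho`, `permutation_importance`) are hypothetical; the stage-one robust scale is replaced by a simple MAD, the L1 regression tree initialization by the sample median, and the step size by a fixed shrinkage factor.

```python
# Minimal sketch of bounded-loss gradient boosting; NOT the paper's method,
# only an illustration under the simplifying assumptions stated above.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def tukey_psi(u, c=4.685):
    """Derivative of Tukey's bisquare rho: bounded influence, zero beyond c."""
    return np.where(np.abs(u) <= c, u * (1.0 - (u / c) ** 2) ** 2, 0.0)

def tukey_rho(u, c=4.685):
    """Tukey's bisquare loss: bounded at c**2 / 6 for large residuals."""
    inside = (c**2 / 6.0) * (1.0 - (1.0 - (u / c) ** 2) ** 3)
    return np.where(np.abs(u) <= c, inside, c**2 / 6.0)

def robust_boost(X, y, n_iter=200, lr=0.1, max_depth=3, c=4.685):
    """Gradient boosting on the bounded loss rho((y - f(x)) / sigma)."""
    f0 = np.median(y)                  # robust start; the paper instead
    f = np.full(len(y), f0)            # uses an L1 regression tree
    sigma = 1.4826 * np.median(np.abs(y - f))  # MAD stands in for the
                                               # stage-one robust scale
    trees = []
    for _ in range(n_iter):
        # Negative functional gradient of the bounded loss at the current fit
        neg_grad = tukey_psi((y - f) / sigma, c) / sigma
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, neg_grad)
        f += lr * tree.predict(X)      # fixed shrinkage, no line search
        trees.append(tree)

    def predict(X_new):
        return f0 + lr * sum(t.predict(X_new) for t in trees)

    return predict, sigma

def permutation_importance(predict, X, y, sigma, c=4.685, seed=0):
    """Increase in mean bounded loss when each column of X (a numpy
    array) is shuffled: a robust analogue of permutation importance."""
    rng = np.random.default_rng(seed)
    base = np.mean(tukey_rho((y - predict(X)) / sigma, c))
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores[j] = np.mean(tukey_rho((y - predict(Xp)) / sigma, c)) - base
    return scores
```

Because the bisquare loss is bounded, extreme residuals contribute at most c**2 / 6 to the fit, which is why the boosting updates are not dragged by outliers; this is also why the non-convexity mentioned in the abstract makes a robust initialization necessary.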

Publication
Computational Statistics & Data Analysis 153 (2021)
Xiaomeng Ju
PhD candidate in Statistics

I am a PhD student in Statistics at the University of British Columbia, advised by Professor Matias Salibian-Barrera. I received my BSc in Statistics from Renmin University of China and my MA in Statistics from the University of Michigan. My research centres on computational statistics, with a special focus on robust statistics and functional data. My ongoing thesis work develops gradient boosting methods for regression problems with complex data.