Chapter 18: Boosting

This chapter introduces boosting as a sequential ensemble method that creates powerful committees from different kinds of base learners.

§18.01: Introduction to Boosting / AdaBoost


AdaBoost fits base learners sequentially: after each iteration, the training observations are reweighted so that the next learner concentrates on the examples the current ensemble misclassifies, and all learners are combined in a weighted majority vote.
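To make the reweighting loop concrete, here is a minimal from-scratch sketch of discrete AdaBoost with decision stumps as base learners; the function names `adaboost_fit`/`adaboost_predict` and the choice of stumps are illustrative assumptions, not part of the chapter.

```python
# Minimal AdaBoost sketch for binary labels in {-1, +1}, with decision
# stumps as base learners (illustrative, not the chapter's notation).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, M=50):
    """Fit M stumps; returns a list of (weight, stump) pairs."""
    n = len(y)
    w = np.full(n, 1.0 / n)                  # uniform initial sample weights
    ensemble = []
    for _ in range(M):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y)) / np.sum(w)          # weighted error
        beta = 0.5 * np.log((1 - err) / max(err, 1e-10))   # learner weight
        w *= np.exp(-beta * y * pred)        # upweight misclassified points
        w /= w.sum()
        ensemble.append((beta, stump))
    return ensemble

def adaboost_predict(ensemble, X):
    """Weighted majority vote over the fitted stumps."""
    scores = sum(beta * stump.predict(X) for beta, stump in ensemble)
    return np.sign(scores)
```

With `M=1` this reduces to a single stump; increasing `M` lets the weighted committee successively correct its predecessors' mistakes.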


§18.02: Boosting Concept



§18.03: Boosting Illustration



§18.04: Boosting Regularization


Since gradient boosting (GB) can easily overfit, it is important to regularise it. There are three main options for regularisation, illustrated in the sketch after this list:

  1. Limit the number of iterations $M$ (fewer boosting steps mean a less complex additive model)
  2. Limit the depth of the individual trees
  3. Use a small learning rate $\alpha$, which shrinks each new base learner's contribution
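As a sketch, the three knobs map directly onto hyperparameters of scikit-learn's `GradientBoostingRegressor` (an assumed implementation choice; the chapter's $M$ and $\alpha$ correspond to `n_estimators` and `learning_rate`):

```python
# Sketch: the three GB regularisation options as sklearn hyperparameters.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gb = GradientBoostingRegressor(
    n_estimators=200,    # 1. limit the number of iterations M
    max_depth=2,         # 2. limit the depth of each tree
    learning_rate=0.05,  # 3. small learning rate alpha
    random_state=0,
)
gb.fit(X_tr, y_tr)
print("test R^2:", gb.score(X_te, y_te))
```

In practice the three options interact: a smaller learning rate typically needs more iterations, so they are usually tuned jointly.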

§18.05: Boosting for Classification



§18.06: Gradient Boosting with Trees 1



§18.07: Gradient Boosting with Trees 2



§18.08: XGBoost


In iteration $m$, XGBoost minimises the regularised risk

\[R_{\text{reg}}^{[m]} = \sum_{i=1}^n L\left(y^{(i)},\, f^{[m-1]}(x^{(i)}) + b^{[m]}(x^{(i)})\right) + \lambda_1 J_1(b^{[m]}) + \lambda_2 J_2(b^{[m]}) + \lambda_3 J_3(b^{[m]})\]

where $J_1(b^{[m]}) = T^{[m]}$ is the number of leaves, penalising tree size, $J_2(b^{[m]}) = \|c^{[m]}\|_2^2$ is an $L2$ penalty over the leaf values, and $J_3(b^{[m]}) = \|c^{[m]}\|_1$ is an $L1$ penalty over the leaf values.
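In the xgboost library these three penalties surface directly as hyperparameters: `gamma` multiplies the number of leaves $T^{[m]}$, `reg_lambda` is the $L2$ penalty and `reg_alpha` the $L1$ penalty on the leaf values. A minimal sketch using the sklearn-style wrapper (the specific values are arbitrary illustrations):

```python
# Sketch: lambda_1..lambda_3 as xgboost hyperparameters.
import xgboost as xgb
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)

model = xgb.XGBRegressor(
    n_estimators=100,
    gamma=1.0,       # lambda_1: penalty per leaf (J_1 = T)
    reg_lambda=1.0,  # lambda_2: L2 penalty on leaf values (J_2)
    reg_alpha=0.5,   # lambda_3: L1 penalty on leaf values (J_3)
)
model.fit(X, y)
```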


§18.09: Component Wise Boosting Basics 1



§18.10: Component Wise Boosting Basics 2



§18.11: CWB and GLMs



§18.12: Advanced CWB