Yes, stacking different gradient boosting algorithms works well in practice. One example is a recently finished Kaggle-like competition: http://mlbootcamp.ru/round/12/sandbox/ where mpershin stacked CatBoost with LightGBM and took 7th place.
The kernel for this solution can be found in one of our tutorials: https://github.com/catboost/catboost/blob/master/catboost/tu...
CatBoost team here.
CatBoost is currently single-host; a version supporting distributed training on a cluster will be open-sourced later.