Forests are ensembles of classification or regression trees. They can be constructed by different algorithms (random forests, extra trees, gradient boosting). They are robust across data types, insensitive to any monotonic rescaling of the features, and scale well with data size, from tiny data sets up to big data. They are the typical default choice for nonlinear classification or regression on heterogeneous tabular data. We list here challenges where forests turned out to be competitive.
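As an illustration, the following is a minimal sketch (assuming scikit-learn is installed; the synthetic data set and all parameter values are illustrative choices, not from the text) that fits a random forest to tabular classification data and checks that its predictions are unchanged under a strictly monotone rescaling of the features, here an affine map:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic tabular classification data (illustrative only).
X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)
acc = forest.score(X_test, y_test)  # test accuracy, well above chance here

# Trees split on per-feature thresholds, so a strictly monotone rescaling
# of the features (here an affine map) leaves the learned partition of the
# input space, and hence the predictions, unchanged.
rescale = lambda Z: 3.0 * Z + 10.0
forest_rescaled = RandomForestClassifier(n_estimators=200, random_state=0)
forest_rescaled.fit(rescale(X_train), y_train)
same = np.array_equal(forest.predict(X_test),
                      forest_rescaled.predict(rescale(X_test)))
print(acc, same)
```

The same invariance argument applies to any strictly monotone transform, which is why feature normalization is typically unnecessary for tree ensembles, in contrast to distance- or gradient-based learners.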