
Update: this blog post also explains it especially well.

Main text:

The scikit-learn author's answer:

There are indeed several ways to get feature “importances”. As is often the case, there is no strict consensus about what this word means.
In scikit-learn, we implement the importance as described in [1] (often cited, but unfortunately rarely read…). It is sometimes called “gini importance” or “mean decrease impurity” and is defined as the total decrease in node impurity, weighted by the probability of reaching that node (approximated by the proportion of samples reaching it), averaged over all trees of the ensemble.
In the literature or in some other packages, you can also find feature importances implemented as the “mean decrease accuracy”. Basically, the idea is to measure the decrease in accuracy on OOB data when you randomly permute the values for that feature. If the decrease is low, then the feature is not important, and vice versa.
(Note that both algorithms are available in the randomForest R package.)
[1]: Breiman, Friedman, Olshen, Stone, “Classification and Regression Trees”, 1984.
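To make the “mean decrease accuracy” idea above concrete, here is a minimal sketch. For simplicity it permutes each feature on a held-out validation set rather than on the OOB samples, and the dataset (load_diabetes) and model settings are stand-ins of my own choosing, not code from the quoted answer:

import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# fit once, then measure how much the validation score drops when
# a single feature's values are shuffled (its link to y destroyed)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
baseline = r2_score(y_val, rf.predict(X_val))

rng = np.random.default_rng(0)
for j in range(X_val.shape[1]):
    X_perm = X_val.copy()
    rng.shuffle(X_perm[:, j])  # permute one column in place
    drop = baseline - r2_score(y_val, rf.predict(X_perm))
    print(f"feature {j}: score drop = {drop:.4f}")  # large drop => important feature

scikit-learn also ships a ready-made version of this measurement, sklearn.inspection.permutation_importance, which repeats the shuffling several times and averages the score drops.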

So there are two relatively popular ways to assess feature importance.
This article has code for both methods: Selecting good features – Part III: random forests

  1. Mean decrease impurity
    The principle behind this method is exactly how tree models perform classification and regression: the more important a feature, the more it increases node purity at the splits where it is used. Purity can be measured by several criteria, such as Gini impurity, entropy, or information gain.
    This is also what scikit-learn's feature_importances_ means.
    Code:
from sklearn.datasets import load_boston  # removed in scikit-learn 1.2; a current dataset is used in the sketch below
from sklearn.ensemble import RandomForestRegressor
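Picking up where the truncated snippet leaves off, here is a minimal runnable sketch of mean decrease impurity via feature_importances_. Since load_boston is gone from recent scikit-learn releases, load_diabetes stands in for it; the structure follows the usual pattern rather than the original article's exact code:

from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(data.data, data.target)

# feature_importances_ holds each feature's total impurity decrease,
# weighted by the fraction of samples reaching each split and
# averaged over all trees in the forest
ranked = sorted(zip(data.feature_names, rf.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
for name, imp in ranked:
    print(f"{name}: {imp:.4f}")

Unlike mean decrease accuracy, this score falls out of training for free: no permutations or extra predictions are needed.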
