Individual Prediction Explanations

Note

Prediction Explanations has been renamed Individual Prediction Explanations in Workbench to better communicate the feature’s functionality as a local explanation method that calculates SHAP values for each individual row. Where DataRobot Classic supports both XEMP and SHAP explanations, Workbench supports only SHAP explanations because they provide more transparency due to their open source nature.

SHAP-based explanations estimate how much each feature contributes to a given prediction's difference from the average, helping you understand, row by row, what drives a prediction. They answer why a model made a certain prediction (what drives a customer's decision to buy: age, gender, buying habits?) and quantify each factor's impact on that decision. SHAP explanations are intuitive, unrestricted (computed for all features), fast, and, because SHAP is open source, transparent. Besides providing a deeper, faster understanding of model behavior, SHAP also makes it easy to verify whether a model complies with business rules.
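
The additivity that makes SHAP values easy to interpret can be illustrated with the open-source shap package: for each individual row, the base (average) value plus that row's per-feature SHAP values reconstructs the model's prediction. The sketch below is illustrative only; the model and data are placeholders, not DataRobot's implementation.

```python
# Minimal sketch using the open-source shap package (illustrative model and data,
# not DataRobot's implementation). For each individual row, the base (average)
# value plus the sum of that row's per-feature SHAP values reconstructs the
# model's prediction, which is what makes the explanations additive and local.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=6, noise=0.1, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])                      # one row of SHAP values per prediction
base_value = float(np.atleast_1d(explainer.expected_value)[0])  # the model's average prediction

for i in range(5):
    reconstructed = base_value + shap_values[i].sum()
    print(f"row {i}: prediction = {model.predict(X[i:i + 1])[0]:.3f}, "
          f"base + sum(SHAP) = {reconstructed:.3f}")
```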

Availability information

Support for the new Individual Prediction Explanations in Workbench is on by default.

Feature flag: Universal SHAP in NextGen

Hover over a bin to see the range of predictions it represents and the number of predictions it contains.

Insight filters

Use the controls in the insight to change the prediction distribution chart:

Option | Description
Data selection | Set the partition and source of data to compute explanations for.
Data slice | Select, or create (by selecting Manage slices), a data slice to view a subpopulation of a model's data based on feature value.
Prediction range | In the Predictions to sample table, view only predictions within a set range.
Export | Download individual prediction explanations, in CSV format, based on the settings in the export modal.

For more details about working with Individual Prediction Explanations, see the related considerations and the SHAP reference.

Set the data source

Change the data source from the Data selection dropdown when you want to use alternate data for computing explanations. The data selection consists of a dataset and, when using the current training set, a selected partition.

You can select either:

  • A partition in the current training dataset, either training, validation, or holdout. By default, the chart represents the validation partition of the training dataset.

  • An additional, perhaps external, dataset. Use this when you want to use the same model to see explanations for rows that were not in your experiment's training data. DataRobot lists all datasets associated with your Use Case (up to 100), but you can also upload external datasets. Select one of the following:

    • The same dataset again when you want to see a different random sample of rows.
    • A different dataset (be sure to choose a dataset that the model can predict on successfully).

Note that the prediction distribution chart is not available for the training dataset's training partition.

Download explanations

To download explanations in CSV format, click Export, set each limit, and click Download. You can change the settings and download each new version; click Done to dismiss the modal when you are finished.

Option | When checked | Otherwise
Limit the number of features per prediction | Only the specified number of top features are included in the CSV. Enter a value between 1 and the number of computed explanations, with a maximum of 100. | Download predictions for all rows.
Apply filters to limit the explanations downloaded | Only those explanations meeting the filters set in the prediction distribution chart controls are included in the CSV. | All explanations (up to 25,000) are included.

Predictions to sample

The sampled rows below the prediction distribution chart are chosen according to percentiles. The display for each sampled row includes a preview of the single most impactful feature for that row. Expand the row to see the top several most impactful features for that row.

Click the pencil icon to change the number of samples returned. By default, DataRobot returns five samples of predictions, uniformly sampled from across the range of predictions as defined by the filters.
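
Conceptually, percentile-based sampling works like the sketch below. This is an illustration of the idea only, not DataRobot's exact logic; the function name and defaults are assumptions.

```python
# Illustrative sketch of percentile-based sampling (an assumption, not DataRobot's code).
# Pick n rows whose predictions fall closest to evenly spaced percentiles of the
# (already filtered) prediction range, so samples span low, middle, and high predictions.
import numpy as np

def sample_by_percentile(predictions: np.ndarray, n_samples: int = 5) -> np.ndarray:
    """Return indices of the rows closest to evenly spaced prediction percentiles."""
    percentiles = np.linspace(0, 100, n_samples)
    targets = np.percentile(predictions, percentiles)
    return np.array([np.abs(predictions - t).argmin() for t in targets])

# Example: 1,000 simulated predicted probabilities
predictions = np.random.default_rng(0).uniform(0.0, 1.0, size=1000)
print(sample_by_percentile(predictions))  # indices of five rows spanning the prediction range
```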

Note

The table of predictions to sample is an on-demand feature; when you click Compute, DataRobot returns details of each individual explanation. Changes to any of the settings (data source, partition, or data slice) will require recomputing the table.

Simple table view

The summary entries provide:

  • A prediction ID (for example, Prediction #1117).
  • A prediction value with a colored dot corresponding to the coloring of that value in the prediction distribution chart.
  • The top contributing feature to that prediction result.

Expanded row view

Click any row in the simple table view to display additional information for its prediction. The expanded view lists, for each prediction, the features that were most impactful, ordered by SHAP score. DataRobot displays the top 10 contributing features by default, but you can click Load more explanations to load an additional 10 features with each click.

The expanded view display reports:

Field | Description
SHAP score | The SHAP value assigned to this feature with respect to the prediction for this row, with both a visual representation and numeric score.
Feature | The name of the contributing feature from the dataset.
Value | The value of the feature in this row.
Distribution | A histogram representation of a feature, showing the distribution of the feature's values. Hover over a bar in the histogram to see bin details.
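
The same ordering (features ranked by the magnitude of their SHAP scores for a single row) can be reproduced with the open-source shap package. The sketch below uses a placeholder model, data, and feature names; it mirrors the idea behind the expanded view rather than DataRobot's output.

```python
# Sketch: rank one row's features by absolute SHAP value, mirroring the
# "most impactful features" ordering of the expanded row view. The model,
# data, and feature names are illustrative placeholders, not DataRobot output.
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=400, n_features=12, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(12)])
model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
row = X.iloc[[0]]                                   # a single prediction to explain
shap_row = explainer.shap_values(row)[0]            # SHAP values for that row

top_features = (
    pd.DataFrame({"feature": X.columns, "value": row.iloc[0].to_numpy(), "shap_score": shap_row})
    .reindex(np.abs(shap_row).argsort()[::-1])      # order by absolute SHAP score, largest first
    .head(10)                                       # top 10, as in the default expanded view
)
print(top_features)
```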

Set prediction range

The prediction range control defines both the prediction distribution chart display and the predictions to sample output. Click the pencil icon to open a modal for setting the criteria, based on prediction value:

Changes to the displays update immediately.

SHAP considerations

Consider the following when working with SHAP Individual Prediction Explanations in Workbench:

  • Multiclass classification experiments are not supported; that is, they do not return SHAP Individual Prediction Explanations.

  • SHAP-based explanations for models trained into Validation and Holdout are in-sample, not stacked.

  • SHAP Individual Prediction Explanations are not supported for project types that are not supported in Workbench, nor for:

    • Time-aware (OTV and time series) experiments

  • SHAP does not fully support image feature types. You can use images as features and DataRobot returns SHAP values and SHAP impacts for them. However, the SHAP explanations chart will not show activation maps ("image explanations"); instead, it shows an image thumbnail.

  • When a link function is used, SHAP values are additive in the margin (link) space: sum(shap) = link(p) - link(p0). Recommendations (see the sketch after this list):

    • If you need the additive property of SHAP, use blueprints that do not apply a link function (for example, some tree-based blueprints).
    • When log is the link function, you can also explain predictions using exp(shap).
  • When the training partition is chosen as the data selection, the prediction distribution chart is not available. Once explanations are computed, however, the predictions table populates with explanations.
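
As a hedged illustration of the link-space additivity noted above, the sketch below uses a scikit-learn logistic regression (a stand-in for any blueprint with a logit link, not a DataRobot model) and the open-source shap package to show that sum(shap) equals link(p) - link(p0).

```python
# Sketch of SHAP additivity in link space (illustrative assumption: a scikit-learn
# logistic regression stands in for a blueprint that uses a logit link).
# For each row, sum(shap) = link(p) - link(p0), where p is the row's predicted
# probability and p0 is the base (expected) prediction in link space.
import numpy as np
import shap
from scipy.special import logit
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

explainer = shap.LinearExplainer(model, X)               # SHAP values in the margin (log-odds) space
shap_row = explainer.shap_values(X[:1])[0]               # explanations for a single row

p = model.predict_proba(X[:1])[0, 1]                     # this row's predicted probability
link_p = logit(p)                                        # link(p)
link_p0 = float(np.ravel(explainer.expected_value)[0])   # base value: link(p0)

print(f"sum(shap)          = {shap_row.sum():.4f}")
print(f"link(p) - link(p0) = {link_p - link_p0:.4f}")    # matches sum(shap)
```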


Updated May 2, 2024