shapiq

get_tabpfn_explainer

get_tabpfn_explainer(
    model: Union[TabPFNRegressor, TabPFNClassifier],
    data: Union[DataFrame, ndarray],
    labels: Union[DataFrame, ndarray],
    index: str = "k-SII",
    max_order: int = 2,
    class_index: Optional[int] = None,
    **kwargs
)

Get a TabPFNExplainer from shapiq.

This function returns the TabPFN explainer from the shapiq library [1]. The explainer uses the remove-and-recontextualize paradigm of model explanation [2][3] to explain the predictions of a TabPFN model. See the shapiq.TabPFNExplainer documentation for more information about the explainer object.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| model | TabPFNRegressor or TabPFNClassifier | The TabPFN model to explain. | required |
| data | DataFrame or ndarray | The background data to use for the explainer. | required |
| labels | DataFrame or ndarray | The labels for the background data. | required |
| index | str | The index to use for the explanation. See the shapiq documentation for an up-to-date list of available indices. Use "SV" (Shapley values, as in SHAP) together with max_order=1 for standard Shapley values. | 'k-SII' |
| max_order | int | The maximum order of interactions to consider. | 2 |
| class_index | int | The class index of the model to explain. If not provided, it defaults to 1 for classification models. This argument is ignored for regression models. | None |
| **kwargs | | Additional keyword arguments to pass to the explainer. | {} |

Returns:

shapiq.TabPFNExplainer: The TabPFN explainer.

References

.. [1] shapiq repository: https://github.com/mmschlk/shapiq
.. [2] Muschalik, M., Baniecki, H., Fumagalli, F., Kolpaczki, P., Hammer, B., Hüllermeier, E. (2024). shapiq: Shapley Interactions for Machine Learning. In: The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track. url: https://openreview.net/forum?id=knxGmi6SJi
.. [3] Rundel, D., Kobialka, J., von Crailsheim, C., Feurer, M., Nagler, T., Rügamer, D. (2024). Interpretable Machine Learning for TabPFN. In: Longo, L., Lapuschkin, S., Seifert, C. (eds) Explainable Artificial Intelligence. xAI 2024. Communications in Computer and Information Science, vol 2154. Springer, Cham. https://doi.org/10.1007/978-3-031-63797-1_23
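A minimal usage sketch for this function follows. Note the assumptions: the import path (tabpfn_extensions.interpretability) is not stated on this page and may differ in your installation, and the explain(x) method on the returned shapiq.TabPFNExplainer is taken from the general shapiq Explainer interface.

```python
"""Hedged usage sketch for get_tabpfn_explainer.

Assumptions (not confirmed by this page): the function lives in
tabpfn_extensions.interpretability, and the returned shapiq.TabPFNExplainer
exposes an explain(x) method that returns interaction values.
"""

try:
    from tabpfn import TabPFNClassifier
    from tabpfn_extensions.interpretability import get_tabpfn_explainer
    HAS_DEPS = True
except ImportError:  # tabpfn / tabpfn_extensions / shapiq not installed
    HAS_DEPS = False


def explain_sample(X_train, y_train, x_test, class_index=None):
    """Fit TabPFN on the background data and explain one sample with k-SII."""
    model = TabPFNClassifier()
    model.fit(X_train, y_train)
    explainer = get_tabpfn_explainer(
        model=model,
        data=X_train,      # background data
        labels=y_train,    # labels for the background data
        index="k-SII",     # or "SV" with max_order=1 for plain Shapley values
        max_order=2,
        class_index=class_index,
    )
    return explainer.explain(x_test)  # shapiq interaction values
```

The wrapper only constructs the explainer; any extra keyword arguments are forwarded to shapiq.TabPFNExplainer via **kwargs.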

get_tabpfn_imputation_explainer

get_tabpfn_imputation_explainer(
    model: Union[TabPFNRegressor, TabPFNClassifier],
    data: Union[DataFrame, ndarray],
    index: str = "k-SII",
    max_order: int = 2,
    imputer: str = "marginal",
    class_index: Optional[int] = None,
    **kwargs
)

Get a TabularExplainer from shapiq that uses imputation.

This function returns the TabularExplainer from the shapiq library [1][2]. The explainer uses an imputation-based paradigm of feature removal for the explanations, similar to SHAP [3]. See the shapiq.TabularExplainer documentation for more information about the explainer object.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| model | TabPFNRegressor or TabPFNClassifier | The TabPFN model to explain. | required |
| data | DataFrame or ndarray | The background data to use for the explainer. | required |
| index | str | The index to use for the explanation. See the shapiq documentation for an up-to-date list of available indices. Use "SV" (Shapley values, as in SHAP) together with max_order=1 for standard Shapley values. | 'k-SII' |
| max_order | int | The maximum order of interactions to consider. | 2 |
| imputer | str | The imputation method to use. See the shapiq.TabularExplainer documentation for an up-to-date list of available imputation methods. | 'marginal' |
| class_index | int | The class index of the model to explain. If not provided, it defaults to 1 for classification models. This argument is ignored for regression models. | None |
| **kwargs | | Additional keyword arguments to pass to the explainer. | {} |

Returns:

shapiq.TabularExplainer: The TabularExplainer.

References

.. [1] shapiq repository: https://github.com/mmschlk/shapiq
.. [2] Muschalik, M., Baniecki, H., Fumagalli, F., Kolpaczki, P., Hammer, B., Hüllermeier, E. (2024). shapiq: Shapley Interactions for Machine Learning. In: The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track. url: https://openreview.net/forum?id=knxGmi6SJi
.. [3] Lundberg, S. M., & Lee, S. I. (2017). A Unified Approach to Interpreting Model Predictions. Advances in Neural Information Processing Systems 30 (pp. 4765-4774).
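A comparable hedged sketch for the imputation-based variant follows. The same caveats apply: the import path is an assumption, and unlike get_tabpfn_explainer this function takes no labels argument, since the imputer works from the background data alone.

```python
"""Hedged usage sketch for get_tabpfn_imputation_explainer.

Assumptions (not confirmed by this page): the function lives in
tabpfn_extensions.interpretability, and the returned shapiq.TabularExplainer
exposes an explain(x) method that returns interaction values.
"""

try:
    from tabpfn import TabPFNRegressor
    from tabpfn_extensions.interpretability import (
        get_tabpfn_imputation_explainer,
    )
    HAS_DEPS = True
except ImportError:  # tabpfn / tabpfn_extensions / shapiq not installed
    HAS_DEPS = False


def explain_with_imputation(X_train, y_train, x_test):
    """Fit a TabPFN regressor and explain one sample via marginal imputation."""
    model = TabPFNRegressor()
    model.fit(X_train, y_train)
    explainer = get_tabpfn_imputation_explainer(
        model=model,
        data=X_train,          # background data used by the imputer
        index="k-SII",
        max_order=2,
        imputer="marginal",    # replace removed features with background draws
    )
    return explainer.explain(x_test)  # shapiq interaction values
```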