Explainability
Explainers
MLExplainer
- MLExplainer(model, X, y=None, task='classification', target_names=None, col_types=None, selected_features='auto', random_state=7, explain_preprocessed_features=False)
-
Return a TabularExplainer. It is implemented as a factory method.
- Parameters :
  - model (ModelObject) – The model to explain. Must implement the predict method (and the predict_proba method, if the task is classification).
  - X (pandas.DataFrame or List[str]) – A reference dataset drawn from the same distribution as the training dataset and the future test dataset. May be the training dataset. If the dataset type is tabular, it must be a pandas DataFrame. If the dataset type is text, it can be a list of str or a pandas DataFrame with a single column, and col_types must be ['text'].
  - y (pandas.Series or None, default=None) – Dataset targets for X, if available.
  - task (str or None, default='classification') – The type of machine learning task performed by the model: {'classification', 'regression', 'anomaly_detection'}.
  - target_names (List[str] or None, default=None) – A list of names for the targets. If None and dataset_type='tabular', they will be inferred from y, if provided. Otherwise, target_names is required.
  - col_types (List[str] or None, default=None) – List of length X.shape[1] with string values indicating the type of each feature. Supported types are: ['categorical', 'numerical', 'text']. If 'text', there should be only one column in X.
  - selected_features (str, List[str] or None, default='auto') – List of features/tokens that have been internally selected by the model. If the model is an AutoML pipeline and this is set to 'auto', the list will be automatically populated from the model. If None, all features/tokens will be used by the explainers.
  - random_state (int, default=7) – Random seed used by the explainers.
  - explain_preprocessed_features (bool or None, default=False) – If False (default), the features to be explained are the exact ones passed to the corresponding explainer method. If True and the model is an AutoML pipeline, the features to be explained are the ones at the output of the preprocessing stages.
-
- Raises :
  - AutoMLxTypeError – Raised when the passed dataset is not a DataFrame or a list.
  - AutoMLxNotImplementedError – Raised if no explainer is implemented for a valid dataset.
- Returns :
-
An instance of the appropriate BaseExplainer
- Return type :
-
automlx.mlx.interface.BaseExplainer
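As a usage illustration, the following minimal sketch builds an explainer for a fitted scikit-learn classifier. It assumes that MLExplainer is importable from the top-level automlx package (the exact import path may vary between releases); everything else follows the signature documented above:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    from automlx import MLExplainer  # assumed import path

    # A reference dataset and a model that implements predict/predict_proba.
    data = load_breast_cancer(as_frame=True)
    X, y = data.data, data.target
    model = RandomForestClassifier(random_state=7).fit(X, y)

    # The factory inspects the inputs and returns a TabularExplainer here.
    explainer = MLExplainer(
        model,
        X,
        y,
        task="classification",
        target_names=["malignant", "benign"],
    )

The explainer object created here is reused in the sketches that follow.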
TabularExplainer
- class TabularExplainer(model, X, y=None, task='classification', target_names=None, col_types=None, selected_features='auto', random_state=7, explain_preprocessed_features=False, **kwargs)
-
Automatic ML model explanation object. See MLExplainer for argument documentation.
- aggregate(explanations)
-
Aggregate the list of pre-computed explanations into a single explanation.
- Parameters :
-
explanations ( List [ automl.mlx.explanation.BaseLFIExplanation ] ) – List of at least two explanations to be summarized
- Raises :
-
-
AutoMLxValueError – If the aggregator is applied to a single explanation.
-
AutoMLxTypeError – If the aggregator is not applied to local feature importance explanations.
-
- Returns :
-
An Aggregate Local Feature Importance object (ALFI)
- Return type :
-
automl.mlx.explanation.BaseALFIExplanation
- configure_explain_counterfactual(strategy='auto', dice_posthoc_sparsity_algorithm='binary', dice_posthoc_sparsity_param=0.1, stopping_threshold=0.5, target_name='target', target_precision=None, **kwargs)
-
Configure the counterfactual explainer. If a parameter is not provided, the default value will be used.
- Parameters :
-
-
- strategy (str, default='auto') – Determines the strategy used to generate counterfactuals. Currently, AutoMLx supports two strategies:
  - If 'auto': ACE will be used for classification and anomaly detection, and DiCE for regression.
  - If 'ace': the AutoMLx Counterfactuals Explainer will be used. This explainer uses KDTree structures to find a set of nearby and diverse counterfactuals for each sample.
  - If 'dice_random': the Diverse Counterfactual Explanations tools will be used to find the counterfactuals ( https://github.com/interpretml/DiCE ). DiCE changes feature values randomly, one by one, until the target counterfactuals are found. The sparsity of the discovered counterfactuals is then enhanced by a binary/linear search over the changed features.
- dice_posthoc_sparsity_algorithm (str, default='binary') – Perform either binary or linear search {'binary', 'linear'}. Only used if strategy='dice_random'.
- stopping_threshold (float, default=0.5) – Minimum threshold for the counterfactuals' target class probability, between 0.5 and 1.
- dice_posthoc_sparsity_param (float, default=0.1) – Parameter for the post-hoc operation on continuous features to enhance sparsity, between 0 and 1. For each feature, the dice_posthoc_sparsity_param quantile of absolute deviations is computed; if the difference between the query instance and the counterfactual is less than this quantile, the post-hoc sparsity algorithm is executed for that feature. Only used if strategy='dice_random'.
- target_name (str, default='target') – The target's column name.
- target_precision (int or None, default=None) – The decimal precision of the target.
-
- kwargs (dict) – Optional parameters passed to the DiCE Data interface. Accepted parameters are as follows:
  - permitted_range: dict or None, default=None – Dictionary with feature names as keys and permitted ranges in lists as values. Defaults to the range inferred from training data.
  - continuous_features_precision: dict or None, default=None – Dictionary with feature names as keys and precisions as values.
  - data_name: str or None, default=None – Dataset name.
-
- Raises :
-
AutoMLxValueError – If the strategy is not supported by AutoMLx.
- configure_explain_feature_dependence(explanation_type=None, encoder_type=None)
-
Configure the feature dependence explainer. This configuration specifies which feature dependence (FD) explainer is selected and/or which encoder is used for ALE explanations.
- Parameters :
-
-
- explanation_type (str or None, default=None) – If 'pdp', explain_feature_dependence will use the partial dependence plot (PDP) explainer. If 'ale', the accumulated local effects (ALE) explainer will be used instead. If None, explain_feature_dependence will default to 'pdp' if the explainer has not been configured yet; otherwise it will use the most-recently provided value.
- encoder_type (str or None, default=None) – The encoder type used by ALE to sort categorical features. If no encoder is specified (None), the ALE explainer will use 'distance_similarity' if the explainer has not been configured yet; otherwise it will use the most-recently provided value.
-
- Raises :
-
AutoMLxValueError – If explanation_type is not in ['pdp', 'ale'], or if encoder_type is not None and not in ['jamesstein', 'distance_similarity'].
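For instance, to switch the feature dependence explainer from the default PDP to ALE with the distance-similarity encoder, a call along the following lines suffices (a sketch reusing the explainer created earlier; both argument values come from the options documented above):

    # Subsequent calls to explain_feature_dependence will compute ALE explanations.
    explainer.configure_explain_feature_dependence(
        explanation_type="ale",
        encoder_type="distance_similarity",
    )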
- configure_explain_model(**kwargs)
-
Update one or more parameters of the model explainer. If a parameter is not provided, the most-recently provided value from the last time configure_explain_model was called will be used. If no value has been previously provided, then the defaults will be used.
- Parameters :
-
**kwargs –
Additional arguments:
  - explanation_type: 'interventional' or 'observational'
    - If 'interventional' (default for classification and regression), then the explanation that is computed is as faithful to the model as possible. That is, features that are ignored by the model will not be considered important. This setting should be preferred if the primary goal is to learn about the machine learning model itself. Technically, this setting is called 'interventional', because the method will intervene on the data distribution when assessing the importance of features.
    - If 'observational' (default for anomaly_detection), then the explanation that is computed is more faithful to the dataset than the model. For example, a feature that is ignored by the model may have a non-zero feature importance if it could have been used by some model to predict the target. This setting should be preferred if the model is merely a means to learn more about the relationships that exist within the data. Technically, this setting is called 'observational', because it observes the relationships in the data without breaking the existing data distribution.
  - tabulator_type: 'permutation', 'kernel_shap', 'shapley' or 'shap_pi', default='permutation'
    - If 'permutation' (default), then the feature importance attributions will be calculated as in the classic feature permutation importance algorithm (assuming that explanation_type is set to 'interventional'). Technically, this measures the importance of each feature independently from all others, and therefore it runs in linear time with respect to the number of features in the dataset.
    - If 'kernel_shap', then the feature importance attributions will be calculated using an approximation of the Shapley value method. Until it reaches its budget, the algorithm will keep evaluating more coalitions. The downside is not having confidence intervals.
    - If 'shapley', then the feature importance attributions will be calculated using the popular game-theoretic Shapley value method. Technically, this measures the importance of each feature while including the effect of all feature interactions. As a result, it runs in exponential time with respect to the number of features in the dataset.
    - If 'shap_pi', then the feature importance attributions will be calculated using an approximation of the Shapley value method. It will run in linear time; however, because of this, it may miss the effect of the interactions between some features and may therefore produce lower-quality results. Most likely, you will notice that this method yields larger confidence intervals than the other three.
  - sampling: dict or None, default=None
    - If not None, the samples will be clustered or sampled according to the provided technique. sampling is a dictionary containing the information about which technique to use and the corresponding parameters. Format is described below.
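As a sketch of how these keyword arguments are passed (reusing the explainer created earlier; both values are among the options documented above):

    # Prefer a dataset-faithful explanation and Shapley-value approximations.
    explainer.configure_explain_model(
        explanation_type="observational",
        tabulator_type="kernel_shap",
    )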
- configure_explain_model_fairness(**kwargs)
-
Update one or more parameters of the model fairness explainer. If a parameter is not provided, the most-recently provided value from the last time configure_explain_model_fairness was called will be used. If no value has been previously provided, then the defaults will be used.
- Parameters :
-
**kwargs –
Additional arguments:
  - explanation_type: 'interventional' or 'observational', default='interventional'
    - If 'interventional' (default), then the explanation that is computed is as faithful to the model as possible. That is, features that are ignored by the model will not be considered important. This setting should be preferred if the primary goal is to learn about the machine learning model itself. Technically, this setting is called 'interventional', because the method will intervene on the data distribution when assessing the importance of features.
    - If 'observational', then the explanation that is computed is more faithful to the dataset than the model. For example, a feature that is ignored by the model may have a non-zero feature importance if it could have been used by some model to predict the target. This setting should be preferred if the model is merely a means to learn more about the relationships that exist within the data. Technically, this setting is called 'observational', because it observes the relationships in the data without breaking the existing data distribution.
  - tabulator_type: 'permutation'
    - If 'permutation' (only supported value currently), then the feature importance attributions will be calculated as in the classic feature permutation importance algorithm (assuming that explanation_type is set to 'interventional'). Technically, this measures the importance of each feature independently from all others, and therefore it runs in linear time with respect to the number of features in the dataset.
  - sampling: dict or None, default=None
    - If not None, the samples will be clustered or sampled according to the provided technique. sampling is a dictionary containing the information about which technique to use and the corresponding parameters. Format is described below.
- configure_explain_prediction(**kwargs)
-
Update one or more parameters of the prediction explainer. If a parameter is not provided, the most-recently provided value from the last time configure_explain_prediction was called will be used. If no value has been previously provided, then the defaults will be used.
- Parameters :
-
**kwargs –
Additional arguments:
  - explainer_type: 'perturbation' or 'surrogate', default='perturbation'
    - If 'perturbation' (default), then the explanation(s) will be computed by perturbing the features of the indicated data instance(s) and measuring the impact on the model predictions. Values for 'explanation_type', 'tabulator_type' and 'confidence_interval' will be ignored unless the 'perturbation' option is selected.
    - If 'surrogate', then the LIME-style explanation(s) will be computed by fitting a surrogate model to the predictions of the original model in a small region around the indicated data instance(s) and measuring the importance of the features to the interpretable surrogate model. Only recommended for advanced users.
  - explanation_type: 'interventional' or 'observational'
    Only used if explainer_type is 'perturbation'.
    - If 'interventional' (default for classification and regression), then the explanation that is computed is as faithful to the model as possible. That is, features that are ignored by the model will not be considered important. This setting should be preferred if the primary goal is to learn about the machine learning model itself. Technically, this setting is called 'interventional', because the method will intervene on the data distribution when assessing the importance of features.
    - If 'observational' (default for anomaly detection), then the explanation that is computed is more faithful to the dataset than the model. For example, a feature that is ignored by the model may have a non-zero feature importance if it could have been used by some model to predict the target. This setting should be preferred if the model is merely a means to learn more about the relationships that exist within the data. Technically, this setting is called 'observational', because it observes the relationships in the data without breaking the existing data distribution.
  - tabulator_type: 'permutation', 'kernel_shap', 'shapley', 'shap_pi' or 'tree_shap', default='permutation'
    Only used if explainer_type is 'perturbation'.
    - If 'permutation' (default), then the feature importance attributions will be calculated as in the classic feature permutation importance algorithm (assuming that explanation_type is set to 'interventional'). Technically, this measures the importance of each feature independently from all others, and therefore it runs in linear time with respect to the number of features in the dataset.
    - If 'kernel_shap', then the feature importance attributions will be calculated using an approximation of the Shapley value method. Until it reaches its budget, the algorithm will keep evaluating more coalitions. The downside is not having confidence intervals.
    - If 'shapley', then the feature importance attributions will be calculated using the popular game-theoretic Shapley value method. Technically, this measures the importance of each feature while including the effect of all feature interactions. As a result, it runs in exponential time with respect to the number of features in the dataset.
    - If 'shap_pi', then the feature importance attributions will be calculated using an approximation of the Shapley value method. It will run in linear time; however, because of this, it may miss the effect of the interactions between some features and may therefore produce lower-quality results. Most likely, you will notice that this method yields larger confidence intervals than the other three.
    - If 'tree_shap', then the feature importance attributions will be calculated using a tree-model-specific method that represents the background distribution based on the number of training examples that went down each leaf. In this case the explanation_type argument is ignored. This option is only available for (most) tree-based models. The upside is a much faster computation. The downside is not having confidence intervals.
  - method: 'systematic' or 'lime'
    Only used if explainer_type is 'surrogate'.
    - If 'systematic', uses an Oracle AutoMLx proprietary method that improves the quality of LIME explanations with a systematic sampling and a custom sample-weighting technique.
    - If 'lime', uses an in-house implementation of the traditional local model-agnostic explanations (LIME) algorithm.
  - sampling: dict or None, default=None
    If not None, the samples will be clustered or sampled according to the provided technique. sampling is a dictionary containing the information about which technique to use and the corresponding parameters. Format is described below.
-
- Raises :
-
AutoMLxValueError –
  - If the explanation method is 'tree_shap' and explain_preprocessed_features is False.
  - If the number of selected features is less than the total number of features.
  - If the explanation type is not allowed with the explainer type.
-
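A short sketch of configuring the prediction explainer (the explainer object is the one created earlier; both keyword arguments are documented above):

    # Use the surrogate (LIME-style) explainer with the systematic sampling method.
    explainer.configure_explain_prediction(
        explainer_type="surrogate",
        method="systematic",
    )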
- explain_counterfactual(X, n_counterfactuals=1, desired_pred='auto', permitted_range=None, features_to_fix=None, features_to_vary=None, random_seed=None, **kwargs)
-
Find counterfactuals by identifying a minimal set of changes that would flip the model's decision (that is, change the model's prediction). Currently, two strategies for creating counterfactual examples are supported. To switch between them, the explainer should be configured with the appropriate strategy:
-
If 'ace': the AutoMLx Counterfactuals Explainer will be used. This explainer uses KDTree structures to find a set of nearby and diverse counterfactuals for each example. This strategy only supports the 'classification' and 'anomaly_detection' tasks.
-
If 'dice_random': the Diverse Counterfactual Explanations tools will be used to find the counterfactuals ( https://github.com/interpretml/DiCE ).
- Parameters :
- X pandas.DataFrame or pandas.Series
-
Dataset for which counterfactuals are to be generated.
- n_counterfactuals int, default=1
-
Total number of counterfactuals required. It should be greater than 0.
- desired_pred str, list, int or tuple, default=’auto’
-
The desired outcome class or range based on the settings.
-
If classification / anomaly_detection:
-
If a list: list of desired counterfactual class for each row of query instances.
-
If an int: desired counterfactual class for every query instance.
-
If ‘auto’: Searches for counterfactuals with the opposite of the predicted outcome. However, the desired_pred value is necessary for multi-class classification.
-
-
If regression:
-
If a list: list of tuples, where each tuple is the outcome range to generate counterfactuals in.
-
If a tuple: the outcome range in which all counterfactuals should be.
-
-
- permitted_range dict or None, default=None
-
Dictionary with feature names as keys and permitted ranges in lists as values. For numeric features, a list of two values specifying the permitted range (inclusive); for categorical features, a list of permitted values (e.g. permitted_range={'age': [20, 30], 'education': ['Doctorate', 'Prof-school']}). Defaults to the range inferred from training data.
- features_to_fix list, default=[]
-
A list of feature names to fix. These are immutable features.
- features_to_vary list or None, default=None
-
A list of feature names to vary. If both features_to_vary and features_to_fix are None, or both are not None, an error is raised stating that exactly one of them should be None.
- random_seed int or None, default=None
-
Random seed for reproducibility.
- limit_steps_ls int, default=10000
-
Defines an upper limit for the linear search step in the dice_posthoc_sparsity_algorithm.
- Returns :
-
explanations – A list of CFExplanation objects that contains the explanation for each query_instance.
- Return type :
- Raises :
-
-
AutoMLxValueError – If both of the features_to_vary and features_to_fix parameters are None; if neither features_to_vary nor features_to_fix is None; if the length of X is zero; or if desired_pred is set to 'auto' or an int for regression tasks.
-
AutoMLxTypeError – If X is not a pandas.DataFrame or pandas.Series.
-
-
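The following hedged sketch shows the counterfactual workflow on the explainer built earlier; the column name passed to features_to_fix belongs to the breast-cancer example and is purely illustrative:

    # Use the ACE strategy and ask for three diverse counterfactuals per row.
    explainer.configure_explain_counterfactual(strategy="ace")
    cf_explanations = explainer.explain_counterfactual(
        X.head(2),                        # rows whose predictions we want to flip
        n_counterfactuals=3,
        features_to_fix=["mean radius"],  # treat this feature as immutable
    )

    # Each element is a CFExplanation; render the first one without the widgets.
    cf_explanations[0].show_in_notebook(interactive=False)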
- explain_feature_dependence(feature_names, feature_range='auto', n_feature_values='auto', X=None, y=None, sampling=None, confidence_interval='auto')
-
Compute a Feature Dependence explanation for how the model predictions depend on the specified features.
If the explainer is configured with explanation_type == 'pdp', or it is not configured yet, a partial dependence plot (PDP) explanation is computed, along with an individual conditional expectation (ICE) explanation if only one feature is provided. Otherwise, an accumulated local effects (ALE) explanation is computed.
- Parameters :
-
-
feature_names (list, str or None) – A list that contains the feature names for which the explanation is to be computed, or a single feature name. For PDP, if more than four features are provided, visualizations will not be available; however, a tabular form of the explanation will still be computed. For ALE, only two-feature explanations are supported.
-
feature_range (tuple[float, float] or str, default='auto') – Determines the range of feature values evaluated in the explanation. If 'auto' is provided, the default (0.05, 0.95) is used for PDP and (0, 1) for ALE. The feature_range should be a sorted tuple of two floats in [0, 1]. The range of feature values used is determined by taking these percentiles of the marginal distribution(s) of the feature(s).
-
n_feature_values ( int or str , default='auto' ) – Max number of values to include between the feature range. If a categorical feature, selects all possible categorical values. If not an int, then it must be ‘auto’, in which case the number of values will be automatically chosen based on the number of features in order to maximize the interpretability of the explanations when plotted by show_in_notebook.
-
X ( pandas.DataFrame or None , default=None ) – An alternate dataset for which the explanation should be computed. If not provided, the dataset used to initialize the explainer will be used instead. If provided, X must have similar columns to the original dataset that was provided.
-
y (pandas.Series or None, default=None) – Target labels associated with X.
-
sampling (dict or None, default=None) –
If not None, the dataset will be downsampled or clustered according to the provided technique. sampling is a dictionary containing the information about which technique to use and the corresponding parameters. The format is described below.
  - 'technique': Can be one of 'kdtree' (or 'kd-tree'), 'dbscan', 'kmeans', 'random' or 'auto'.
  - If 'kdtree', also requires:
    - 'n_clusters': The number of clusters (leaves) in the tree.
    - 'n_samples': The number of samples to return; samples are drawn uniformly from the leaves of the tree.
  - If 'dbscan', also requires:
    - 'eps': Maximum distance between two samples for them to be considered part of the same cluster.
    - 'min_samples': Minimum number of samples to include in each cluster.
    - 'fast': If True, the dataset is downsampled first and DBSCAN is then applied to it. Suitable for large datasets.
  - If 'kmeans', also requires:
    - 'n_clusters': The number of clusters to form.
    - 'return_centroid': A boolean flag indicating whether to return the cluster centroids.
    - 'n_samples': If return_centroid is False, n_samples samples are drawn uniformly from the clusters.
  - If 'random', also requires:
    - 'n_samples': The number of samples to return.
-
confidence_interval ( dict or str , default='auto' ) – If not ‘auto’, the confidence intervals will be determined according to the provided technique. confidence_interval is a dictionary containing the information about which technique to use and the corresponding parameters. If ‘auto’, use the default ‘student-t’ for PDP, and ‘none’ for ALE.
-
- Raises :
-
AutoMLxValueError – If y is not provided but the encoder_type is 'jamesstein' for ALE and X is not None.
- Returns :
-
The feature dependence explanation.
- Return type :
-
automl.mlx.explanation.FDExplanation
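For example, a two-feature PDP computed with the explainer built earlier (the feature names come from the breast-cancer sketch and are only illustrative):

    pdp = explainer.explain_feature_dependence(
        ["mean radius", "mean texture"],
        n_feature_values=10,
    )
    fig = pdp.show_in_notebook()   # heat map or colored line chart for two features
    table = pdp.to_dataframe()     # tabular form of the same explanation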
- explain_model(n_iter='auto', scoring_metric=None)
-
Compute a global feature importance explanation for the model and dataset.
To configure how the explanation is computed, see configure_explain_model.
- Parameters :
-
-
n_iter ( str or int , default='auto' ) – The number of iterations used to evaluate the global importance of the model. Increasing n_iter will require a linear increase in compute; however, it will provide more accurate importance estimates, thereby decreasing the variance in repeated calls to explain_model with identical inputs and decreasing the size of the confidence intervals. If ‘auto’, it will be determined later on based on the type of explainer.
-
scoring_metric (str, callable or None, default=None) –
The scoring metric to compute explanations on.
  - If str, it is the name of the scoring metric.
  - If callable, it has to have the scoring_metric(model, X, y) signature.
  - If 'auto', it will default to the metric used to train the model argument.
  - If None, it will default to the last scoring metric used, or 'auto' if no metric has been used yet.
-
-
- Raises :
-
AutoMLxValueError – If attempting to provide explanations for preprocessed features.
- Returns :
-
explanation – An explanation object that contains the global explanation.
- Return type :
-
automl.mlx.explanation.GFIExplanation
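A brief sketch of computing and inspecting a global explanation with the explainer built earlier:

    global_explanation = explainer.explain_model(n_iter=10)

    # Tabular view of the mean attributions with their confidence bounds,
    # and a box-plot view of the raw per-iteration attributions.
    print(global_explanation.to_dataframe(n_features=10))
    fig = global_explanation.show_in_notebook(n_features=10, mode="box_plot")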
- explain_model_fairness(scoring_metric=None, protected_attributes=None, supplementary_features='auto', n_iter=20, **fairness_kwargs)
-
Compute a global feature importance explanation for the model and dataset according to a fairness metric.
The first call made to explain_model_fairness() has to initialize a valid fairness metric. This can be done in one of two ways:
- Specifying a scoring_metric str and passing protected_attributes, with any other kwargs for the initialization passed as **fairness_kwargs.
- Passing a fairness scorer object as scoring_metric, without setting protected_attributes or fairness_kwargs.
Input values received are stored as attributes to be reused in subsequent calls. One can always override previously used values by passing newer values to explain_model_fairness.
To avoid any possible ambiguity, cases 1 and 2 above are mutually exclusive: one cannot have protected_attributes set and use a fairness scorer object as scoring_metric, and vice-versa.
- Parameters :
-
-
scoring_metric (str, callable or None, default=None) –
The scoring metric to compute explanations on.
  - If str, it is the name of a supported fairness scoring metric.
  - If callable, it has to be a model fairness metric.
  - If None, it will default to the last scoring metric used.
-
-
protected_attributes ( pandas.Series , numpy.ndarray , list or str ) – Array of attributes or single attribute that should be treated as protected. If an attribute is protected, then all of its unique values are considered as subgroups.
-
supplementary_features (pandas.DataFrame, str or None, default='auto') – Array of supplementary features for each instance. Used in case one attribute in protected_attributes is not contained in X (e.g., if the protected attribute is not used by the model). If 'auto', defaults to the last supplementary_features used, or None if no previous call was made.
-
n_iter ( str or int , default=20 ) – The number of iterations used to evaluate the global importance of the model. Increasing n_iter will require a linear increase in compute; however, it will provide more accurate importance estimates, thereby decreasing the variance in repeated calls to explain_model with identical inputs and decreasing the size of the confidence intervals. If ‘auto’, it will be determined later on based on the type of explainer.
-
**fairness_kwargs ( dict or None , default=None ) – Any kwarg accepted by the fairness scorer’s constructor (e.g. distance_measure ).
-
- Raises :
-
AutoMLxValueError –
  - If using fairness feature importance with a tabulator_type other than 'permutation' (the only supported value).
  - If the fairness scoring metric is unspecified.
-
- Returns :
-
explanation – An explanation object that contains the global fairness explanation.
- Return type :
-
automl.mlx.explanation.GFIExplanation
-
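A hedged sketch of a fairness explanation. The 'sex' column and the 'statistical_parity' metric name are assumptions used only for illustration; substitute a protected attribute that exists in your data and one of the supported scoring-metric strings:

    fairness_explanation = explainer.explain_model_fairness(
        scoring_metric="statistical_parity",  # assumed metric name
        protected_attributes="sex",           # hypothetical protected column
    )
    print(fairness_explanation.to_dataframe())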
- explain_prediction(X, y=None, labels='auto', n_iter='auto')
-
Report the local explanation for the given samples by providing a contribution score per feature.
- Parameters :
-
-
X ( pandas.DataFrame or pandas.Series ) – One or more dataset rows to be explained
-
y (pandas.Series) – Dataset target for the rows of X. (Deprecated: this attribute will be removed in future versions.)
-
labels ( str , tuple [ int ] or int ) – If ‘auto’, the explanation will be predicted for the label with the highest probability (for a classification model). Otherwise, the index or indices of specific labels can be passed to compute the explanation with respect to those labels instead.
-
n_iter ( str or int , default='auto' ) – The number of iterations used to evaluate the importance of each of the data instances. Increasing n_iter will require a linear increase in compute; however, it will provide more accurate importance estimates, thereby decreasing the variance in repeated calls to explain_prediction with identical inputs and decreasing the size of the confidence intervals. If ‘auto’, it will be determined later on based on the type of explainer.
-
- Raises :
-
AutoMLxValueError –
  - If the length of X is less than 1; X must contain at least one sample to explain the model prediction.
  - If the number of columns in the given samples does not match the number of columns in the reference data.
  - If the column types of the given samples do not match the column types of the reference data.
- Returns :
-
explanations – A list of objects that contain the local feature importance explanations, one for each instance in X.
- Return type :
-
List[automl.mlx.explanation.BaseLFIExplanation]
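A sketch of local explanations and their aggregation, using the explainer built earlier:

    # One local (per-feature) explanation is returned for each row passed in.
    local_explanations = explainer.explain_prediction(X.head(5), n_iter=10)
    fig = local_explanations[0].show_in_notebook()
    table = local_explanations[0].to_dataframe()

    # Summarize several local explanations into one aggregate (ALFI) explanation.
    aggregate_explanation = explainer.aggregate(local_explanations)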
- explore_whatif(X, y, train=None, target_title=None, row_index=None, features=None, max_features=32, x_axis=None, y_axis=None, label=None, plot_type='scatter', discretization=None)
-
Provides a UI/API to explore how a sample's prediction changes when its feature values are modified, and to explore the relationship between feature values and the model predictions.
- Parameters :
-
-
X ( pandas.DataFrame ) – Data to explore
-
y (list, pandas.DataFrame or numpy.ndarray) – Ground truth labels for X.
-
train ( pandas.DataFrame or None , default=None ) – Data used to train the ML model.
-
target_title ( str or None , default=None ) – Title of the dataset’s target.
-
row_index (int or None, default=None) – Index of the sample to explore. If None, the first sample in X is selected.
-
features ( List [ str , int ] or None , default=None ) – Feature columns to explore.
-
max_features ( int , default=32 ) – Maximum number of features to make available for modification.
-
x_axis ( str or None , default=None ) – Feature column on x-axis. If None is provided, then the first column of X is selected.
-
y_axis ( str or None , default=None ) – Feature column or model prediction column on the y-axis. If None, model prediction column is selected.
-
label ( str or None , default=None ) – Target label to explore.
-
discretization (str or None, default=None) – Discretization method to apply to the x-axis if continuous. Can be chosen from ['quartile', 'decile', 'percentile']. If the axis's cardinality > 100 and none of these methods is selected, 'decile' is used.
-
plot_type (str or None, default='scatter') – Visualization plot type. Can be one of ['scatter', 'box', 'bar'] for classification or ['scatter', 'box'] for regression.
-
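A minimal sketch of launching the what-if exploration for the first row of the reference data (the x_axis column name comes from the breast-cancer sketch and is only illustrative):

    explainer.explore_whatif(
        X,
        y.to_numpy(),        # documented types are list, DataFrame or ndarray
        row_index=0,
        x_axis="mean radius",
        plot_type="scatter",
    )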
TextExplainer
- class TextExplainer(model, X, y=None, task='classification', target_names=None, selected_tokens=None)
-
Automatic NLP ML model explanation object. See MLExplainer for argument documentation.
- aggregate(explanations)
-
Return an explanation that aggregates the list of pre-computed explanations.
- Parameters :
-
explanations ( List [ automl.mlx.explanation.BaseLFIExplanation ] ) – List of at least two explanations to be summarized
- Raises :
-
-
AutoMLxValueError – If the aggregator is applied to a single explanation.
-
AutoMLxTypeError – If the aggregator is not applied to local feature importance explanations.
-
- Returns :
-
An Aggregate Local Feature Importance object (ALFI)
- Return type :
-
automl.mlx.explanation.BaseALFIExplanation
- configure_explain_model(**kwargs)
-
Configure the model-level explainer’s parameters to values other than their defaults.
- Parameters :
-
**kwargs ( dict ) –
- Additional arguments:
-
- replacement str or None, default=None
-
The token to use when replacing tokens in the dataset. Usually this is a token that has never been seen before by the model, e.g., ‘UNTK’. If None, un-important tokens from the dataset will be automatically determined.
- n_tokens int, default=60
-
The number of tokens to evaluate in depth. A heuristic procedure is used to identify, predict and rank which tokens are most likely to be important. From those, only the top n_tokens tokens will be evaluated in depth.
- n_iter int, default=3
-
The number of iterations used to evaluate the importance of each of the tokens. Increasing n_iter will require a linear increase in compute; however, it will provide more accurate importance estimates, thereby decreasing the variance in repeated calls to explain_model with identical inputs and decreasing the size of the confidence intervals.
- configure_explain_prediction(explainer_type=None, **kwargs)
-
Update one or more parameters of the prediction explainer.
- Parameters :
-
-
explainer_type (str or None, default=None) –
-
If None, does not modify the currently configured explainer.
-
If ‘surrogate’, then the LIME-style explanation(s) will be computed by fitting a surrogate model to the predictions of the original model in a small region around the indicated data instance(s) and measuring the importance of the features to the interpretable surrogate model.
-
-
**kwargs ( dict ) – Additional arguments
-
- explain_model(X=None, y=None, target=None)
-
Identify what tokens are most important to the model using the Text Perturbation Importance explainer.
- Parameters :
-
-
X ( List [ str ] , pandas.DataFrame or None , default=None ) – The dataset for which the explanations are computed.
-
y (numpy.ndarray or None, default=None) – Ground truth target values for X. If None, unsupervised TextPI will be activated. If X is None, this input will be ignored and the computation will be based on the dataset provided when the explainer was initialized.
-
target ( int or None , default=None ) – Indicates the target index to be explained.
-
- explain_prediction(X, y=None, labels='auto')
-
Report the local explanation for the given samples by providing a contribution score per token.
- Parameters :
-
-
X ( pandas.DataFrame or pandas.Series ) – One or more dataset rows to be explained
-
y (pandas.Series) – Dataset target for the rows of X.
-
labels ( str , Tuple [ int ] or int , default='auto' ) – If ‘auto’, the explanation will be predicted for the label with the highest probability (for a classification model). Otherwise, the index or indices of specific labels can be passed to compute the explanation with respect to those labels instead.
-
- Raises :
-
AutoMLxValueError – If X has more than one sample.
- Returns :
-
explanations – A list of objects that contain the local feature importance explanations, one for each instance in X.
- Return type :
-
List[automl.mlx.explanation.BaseLFIExplanation]
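As a usage illustration for text data, the following hedged sketch builds a text explainer through the MLExplainer factory (assumed to be importable from the top-level automlx package) around a small scikit-learn text pipeline; per the factory documentation above, X is a list of str and col_types must be ['text']:

    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    from automlx import MLExplainer  # assumed import path

    docs = ["good movie", "terrible plot", "great acting", "boring and slow"]
    labels = [1, 0, 1, 0]
    text_model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(docs, labels)

    text_explainer = MLExplainer(
        text_model,
        docs,                      # list of str, so col_types must be ['text']
        pd.Series(labels),
        task="classification",
        target_names=["negative", "positive"],
        col_types=["text"],
    )

    # Model-level token importance (Text Perturbation Importance).
    token_explanation = text_explainer.explain_model()
    fig = token_explanation.show_in_notebook(n_tokens=10)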
Explanation objects
BaseLFIExplanation
- class BaseLFIExplanation(explanation, sample, inference_time=None, task='classification', explanation_type='tabular', target_names=None, y_prediction=None)
-
Wrapper class for local feature importance based explanations (e.g., LIME, Systematic local explainer, or SHAP).
- show_in_notebook(label='auto')
-
Return the explanation as a plotly graph object figure for the specified target label.
- Parameters :
-
label ( int , str or None , default='auto' ) – Label index to visualize. If ‘auto’ and the model is either classification or anomaly detection, then the label that was predicted by the model is used. If None, label 0 is used.
- Raises :
-
AutoMLxValueError – If the provided target label name is not available, either because it was not selected during the explanation computation or because the label name is invalid.
- Returns :
-
A bar plot displaying the local importance of the features for the given data sample.
- Return type :
-
plotly.graph_objs._figure.Figure
- to_dataframe(labels=None)
-
Return the explanation in dataframe format for the specified labels. There will be at least three columns, "Feature", "Attribution", and "Target", but there may also be "Upper Bound" and "Lower Bound" columns, which correspond to the confidence intervals for the attributions, if they are calculated by the given method.
FDExplanation
- class FDExplanation(total_preds, mean_preds, std_preds, mean_preds_lower, mean_preds_upper, partial_feature_values, samples_per_feature, feature_names, target_names, is_categorical_feature, mode='classification', explanation_type='pdp', feature_distribution=None, feature_distribution_percentage_column=None, feature_priority=None, feature_correlations=None)
-
Feature Dependence Explanation, that is, PDP, ALE and ICE. PDP supports N-feature partial dependence, ALE supports two-feature explanations, and ICE supports only a single feature. ICE is optional, and it can be computed only alongside PDP for a single feature.
- show_in_notebook(ice=False, labels=(0,), prefer_widescreen=True, show_distribution=True, clip_distribution=True, force_heatmap=False, centered=False, show_median=True, sampling=None)
-
Create accumulated local effects (ALE), partial dependence plot (PDP) and individual conditional expectation (ICE) visualizations for the explanation.
- Parameters :
-
-
ice (bool, default=False) –
Determines whether an ICE plot or a PD plot is returned.
if ice == False:
Plots a partial dependence explanation for up to four feature explanations.
-
1 feature: Generates a line graph or bar chart over the distribution of feature values evaluated.
-
2 features: Depending on the cardinality of the feature grid, either generates a heat map or a colored line/bar chart over the distribution of both feature values evaluated.
-
3 features: Uses the same encoding strategy as with 2 features; however, the third feature is encoded by plotting multiple small versions of the plot that are horizontally or vertically aligned (using row- or column-facetting) – one plot for each value of the third feature in the grid.
-
4 features: Uses the same encoding strategy as with 2 features; however, the third and fourth features are encoded by plotting multiple small versions of the plot that are both horizontally and vertically aligned in a grid (using row- and column-facetting) – one plot for each unique combination of values for the third and fourth features.
elif ice == True:
Plots an ICE plot:
-
Numerical features: line chart
-
Categorical features: violin plot
-
-
labels (tuple[int, ...], list[int] or int, default=(0,)) – Index(es) of the labels (targets) to visualize.
-
prefer_widescreen ( bool , default = True ) – If True, the shape of the returned figure will tend to be wider than it is tall when using row-or column-facetting for 3- and 4-feature PDPs.
-
show_distribution ( bool , default = True ) – If True, a histogram of the features’ value distributions (from the provided dataset) will be shown along the corresponding axis. When plotting heatmaps, the marginal distributions of the features on the x-and y-axes will be shown (conditioned on the values of any features encoded using row-or column-facets). In all other cases joint feature distributions will be displayed.
-
clip_distribution ( bool , default = True ) – If True, then portions of the feature distributions that extend beyond the domain of the feature value grid will be clipped (although they may still be viewed by zooming out).
-
force_heatmap ( bool , default = False ) – If True, and a PDP with more than 1 feature is computed, this will force a heatmap plot. If False, a heatmap is only chosen for high-cardinality data.
-
centered ( bool , default = False ) – If true, ICE plots will be centered based on the first value of each sample (i.e., all values are subtracted from the first value).
-
show_median ( bool , default = True ) – If true, a median line is included in the ICE explanation plot.
-
sampling (dict, default={'technique': 'kdtree', 'n_samples': 50}) – For all datasets of non-trivial size, plotting all of the data instances in an ICE plot produces an un-interpretable explanation due to overcrowding. Plotting too many lines at once also requires substantial time to render the figure. Therefore, by default we use a space-filling sampling technique to sample 50 random instances to plot. However, this can be configured to use any valid sampling strategy. See automl.mlx.sample_cluster.create_down_sampler for valid options.
-
- Raises :
-
AutoMLxRuntimeError – If trying to visualize a PDP with more than four features, or an ICE explanation with more than one feature, or an ALE with two categorical features or with more than 2 features.
- Returns :
-
Plotly figure object containing a line chart, bar chart, heat map, or violin plot for this feature dependence explanation.
- Return type :
-
plotly.graph_objects.Figure
- to_dataframe(ice=False, show_median=True, centered=True, sampling=None)
-
Return a pandas DataFrame representation of the PDP, ALE or ICE explanation from this FDExplanation object. ICE is optional, and it can be set to True only if a single-feature PDP was computed.
- Parameters :
-
-
ice ( bool ) –
If ice == False, the columns contain:
-
Each of the feature names evaluated in this PDP/ALE.
-
The corresponding mean prediction from the ML model for the given feature values for each target (“Mean”).
-
The corresponding standard deviation of the predictions from the ML model for the given feature values (“Standard Deviation”) – only applies for PDPs.
-
The 95% confidence interval lower bound for the prediction from the ML model for the given feature values (“Lower Bound”).
-
The 95% confidence interval upper bound for the prediction from the ML model for the given feature values for each target (“Upper Bound”).
-
The name of the current target (“Target”)
If ice == True, the columns contain:
-
Each of the feature names evaluated in this ICE explanation.
-
A column containing the names of the targets being predicted.
-
The index of the data instance explained in the original dataset.
-
The corresponding prediction from the ML model for the given feature values, target and data index.
-
-
show_median (bool) – Determines whether or not the median is included in the dataframe. Only used if ice == True. Adds an additional column to the dataframe that specifies whether the current row corresponds to a data instance ("datum") or the median ("median"). The "Index" column will also contain -1 for all occurrences of the median.
-
centered ( bool ) – Determines whether or not the ICE explanation is centered. Only used if ice == True.
-
sampling ( dict or None , default=None ) – For all datasets of non-trivial size, plotting all of the data instances in an ICE plot produces an un-interpretable explanation due to overcrowding. Plotting too many lines at once also requires substantial time to render the figure. Therefore, by default we use a space-filling sampling technique to sample 100 random instances to plot. However, this can be configured to use any valid sampling strategy. See automl.mlx.sample_cluster.create_down_sampler for valid options.
-
- Returns :
-
Pandas DataFrame representation of the PDP/ICE or ALE explanation.
- Return type :
-
pandas.DataFrame
GFIExplanation
- class GFIExplanation(feature_names=None, explanations=None, scoring_metric_name=None)
-
Generic wrapper class for global explanation models (e.g., PermutationImportance).
- show_in_notebook(n_features=None, mode='bar', box_points='suspectedoutliers')
-
Generate a visualization for this global perturbation-based feature attribution explanation object.
- Parameters :
-
-
n_features ( int or None , default=None ) – If n_features is not None, only show the top n_features most important features.
-
mode ( str , default='bar' ) –
Visualization mode. Can be one of:
-
’bar’: Returns a plotly figure object that visualizes the feature attributions using a bar chart. Each bar represents the average feature attribution from each iteration. If available, confidence intervals for the mean will be included in the plot. (default)
-
’box_plot’: Returns a plotly figure object that visualizes the feature attributions using a box plot of the raw feature attributions from each iteration of the algorithm. The box_points parameter can be used to further configure the visualization of the data.
-
-
box_points ( str , default='suspectedoutliers' ) –
Sets the type of BoxPlot.
-
’suspectedoutliers’: Only shows the suspected anomalies. (default)
-
’outliers’: Shows all the outliers.
-
’all’: Shows all the points.
Only used if mode = ‘box_plot’.
-
-
- Raises :
-
NotImplementedError – If the detailed method is not yet supported in AutoMLx.
- Returns :
-
A figure containing a visualization of the explanation.
- Return type :
-
plotly.graph_objects.Figure
- to_dataframe(n_features=None, mode='normal')
-
Return a representation of the explanation as a dataframe. If n_features is not None, return only the top n_features most important features.
- Parameters :
-
-
n_features ( int or None , default=None ) – If n_features is not None, returns only the top n_features most important features.
-
mode ( {'normal' , 'detailed'} , default='normal' ) – If mode == ‘detailed’ then the dataframe contains the feature attributions from each individual iteration. Otherwise it contains the mean feature attributions and their corresponding upper and lower bounds.
-
- Returns :
-
explanation – The explanation as a dataframe.
- Return type :
-
pandas.DataFrame
GTIExplanation
- class GTIExplanation(tokens=None, explanations=None, target=None, target_names=None)
-
Generic wrapper class for text global explanation models (e.g., TextPerturbationImportance).
- Parameters :
-
-
tokens ( List [ str ] or None ) – List of token names (e.g., [‘A..’, ‘B..’, ‘C..’]).
-
explanations (List[dict] or None) –
List (tokens within an explanation) of dictionaries (token explanations). Explanations are in the form:
    {'token': <token_value>,
     'attribution': <token_importance>,
     'std': <importance_standard_deviation>,
     'flat_attributions': <token_importance_per_iteration>,
     'target_impact_rate': <target_based_importance>,
     'target_coverage_rate': <target_coverage_rate_of_token>,
     'coverage_rate': <coverage_rate_of_token>}
For example:
    [{'token': 'B', 'attribution': 0.6, 'std': 0.4, 'flat_attributions': [0.5, 0.7, 0.6],
      'target_impact_rate': -0.2, 'target_coverage_rate': 0.08, 'coverage_rate': 0.1},
     {'token': 'A', 'attribution': 0.4, 'std': 0.2, 'flat_attributions': [0.6, 0.35, 0.35],
      'target_impact_rate': 0.3, 'target_coverage_rate': 0.1, 'coverage_rate': 0.25}]
-
target ( int or None , default=None ) – Indicates the target index to be explained.
-
target_names ( List [ str , int ] or None , default=None ) – List of target names corresponding to the output targets of the black box model.
-
- show_in_notebook(n_tokens=None)
-
Generate a model-level explanation visualization for this explanation object.
- Parameters :
-
n_tokens ( int or None , default=None ) – Top-k tokens to show.
- Returns :
-
A figure that shows the relative importance of the top n_tokens tokens. If the explanation was computed with more than one iteration, the figure will include 95% confidence intervals for the estimates of the tokens' importance.
- Return type :
-
plotly.graph_objs.Figure
- to_dataframe(n_tokens=None)
-
Return a representation of the explanation as a dataframe. If n_tokens is not None, returns only the top n_tokens most important tokens.
- Parameters :
-
n_tokens (int or None, default=None) – If n_tokens is not None, returns only the top n_tokens most important tokens.
- Returns :
-
explanation – The explanation as a dataframe.
- Return type :
-
pandas.DataFrame
CFExplanation
- class CFExplanation(explanation, target_name, X_train, y_train, m_wrapper, method, desired_pred, category_map, task='classification')
-
Wrapper class for counterfactual explanations.
- show_in_notebook(interactive=True)
-
Generate a visualization to show a counterfactual based on the What-If explainer. Note that show_in_notebook cannot show diverse counterfactuals; only the first counterfactual will be shown.
- Parameters :
-
interactive (bool, default=True) – If True, returns a what-if explainer to explore and change the counterfactual feature values. Otherwise, returns tables and model predictions related to the counterfactual.
- Returns :
-
result – A widget instance containing select boxes, tables, and plots to explore the sample counterfactual. If not interactive, it does not show the select boxes that allow the user to change the counterfactual feature values.
- Return type :
-
ipywidgets.widgets.VBox
- to_dataframe()
-
Return a representation of the explanation as a dataframe.
- Returns :
-
The explanation as a dataframe.
- Return type :
-
pandas.DataFrame