```mermaid
graph LR
    ExplainerBase["ExplainerBase"]
    DiceKD["DiceKD"]
    DiceGenetic["DiceGenetic"]
    DiceRandom["DiceRandom"]
    DicePyTorch["DicePyTorch"]
    DiceTensorFlow1["DiceTensorFlow1"]
    DiceTensorFlow2["DiceTensorFlow2"]
    DiceXGBoost["DiceXGBoost"]
    DiceKD -- "inherits from" --> ExplainerBase
    DiceGenetic -- "inherits from" --> ExplainerBase
    DiceRandom -- "inherits from" --> ExplainerBase
    DicePyTorch -- "inherits from" --> ExplainerBase
    DiceTensorFlow1 -- "inherits from" --> ExplainerBase
    DiceTensorFlow2 -- "inherits from" --> ExplainerBase
    DiceXGBoost -- "inherits from" --> ExplainerBase
```
The dice_ml.explainer_interfaces subsystem is central to the DiCE project, providing a unified framework for generating diverse counterfactual explanations. At its core is the abstract ExplainerBase component, which defines the common interface for all counterfactual generation methods, embodying the Strategy design pattern. The concrete explainers DiceKD, DiceGenetic, DiceRandom, DicePyTorch, DiceTensorFlow1, DiceTensorFlow2, and DiceXGBoost all inherit directly from ExplainerBase. Each implements counterfactual generation logic tailored to a specific model type or algorithmic approach: KD-tree lookups, genetic search, random sampling, gradient-based optimization for differentiable deep-learning models, or tree-specific handling for boosted-tree models. This inheritance structure keeps the API consistent and extensible across techniques, with helper functionality encapsulated within each explainer's implementation.
ExplainerBase is the abstract base component that defines the common public entry point (generate_counterfactuals) for all counterfactual generation methods. It orchestrates the overall process, delegates the method-specific search to its subclasses, and guarantees a consistent API, serving as the Strategy pattern's interface.
Related Classes/Methods:
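The Strategy-style split between the orchestrating base class and the concrete search methods can be sketched as follows. This is a minimal illustration, not the library's real code: the `_generate_counterfactuals` hook, the constructor arguments, and the placeholder `DiceRandom` body are assumptions made for the sketch.

```python
from abc import ABC, abstractmethod


class ExplainerBase(ABC):
    """Sketch of the common interface; names mirror the description above."""

    def __init__(self, data_interface, model_interface=None):
        self.data_interface = data_interface
        self.model_interface = model_interface

    def generate_counterfactuals(self, query_instance, total_CFs,
                                 desired_class="opposite", **kwargs):
        # Shared orchestration: validate inputs, then dispatch to the
        # method-specific search implemented by each concrete explainer.
        if total_CFs < 1:
            raise ValueError("total_CFs must be at least 1")
        return self._generate_counterfactuals(query_instance, total_CFs,
                                              desired_class, **kwargs)

    @abstractmethod
    def _generate_counterfactuals(self, query_instance, total_CFs,
                                  desired_class, **kwargs):
        ...


class DiceRandom(ExplainerBase):
    def _generate_counterfactuals(self, query_instance, total_CFs,
                                  desired_class, **kwargs):
        # Placeholder: a real explainer would sample and filter candidates.
        return [query_instance] * total_CFs


cfs = DiceRandom(data_interface=None).generate_counterfactuals(
    {"age": 30}, total_CFs=2)
```

Callers depend only on `generate_counterfactuals`, so swapping one search strategy for another requires no changes at the call site.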
DiceKD is a concrete implementation of ExplainerBase that uses a KD-tree over training instances to retrieve diverse counterfactual explanations, suitable for tabular data. Its internal logic encapsulates KD-tree construction and nearest-neighbor queries.
Related Classes/Methods:
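The core retrieval idea can be sketched without the library: collect training instances already predicted as the desired class, then return the ones nearest the query. The helper below is hypothetical and uses a brute-force scan where the real explainer would query a KD-tree for efficiency.

```python
import math


def kd_style_counterfactuals(query, candidates, total_cfs):
    """Return the total_cfs candidate points closest to the query.

    `candidates` are training instances already predicted as the desired
    class; a KD-tree would answer this nearest-neighbor query in
    O(log n) per lookup, while this sketch scans all points for clarity.
    """
    return sorted(candidates, key=lambda p: math.dist(query, p))[:total_cfs]


# Toy example: 2-D feature vectors from the desired class.
desired_class_points = [(0.0, 0.0), (1.0, 1.0), (0.2, 0.1), (5.0, 5.0)]
print(kd_style_counterfactuals((0.0, 0.0), desired_class_points, 2))
# → [(0.0, 0.0), (0.2, 0.1)]
```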
DiceGenetic is a concrete implementation of ExplainerBase that employs a genetic algorithm to evolve diverse counterfactual explanations, offering a heuristic, model-agnostic search. Its internal logic handles population initialization, crossover, mutation, and selection.
Related Classes/Methods:
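A minimal sketch of the genetic search, assuming a binary classifier exposed as a `predict` callable. The function name, fitness weighting, and hyperparameters are illustrative choices for the sketch, not the library's.

```python
import random


def genetic_counterfactual(query, predict, lower, upper,
                           pop_size=30, generations=40, seed=0):
    """Evolve a point that flips `predict` while staying near `query`.

    Fitness rewards reaching the opposite class and penalizes L1
    distance from the query instance.
    """
    rng = random.Random(seed)
    dim = len(query)
    target = 1 - predict(query)  # the "opposite" class

    def fitness(x):
        dist = sum(abs(a - b) for a, b in zip(x, query))
        return (10.0 if predict(x) == target else 0.0) - dist

    pop = [[rng.uniform(lower[i], upper[i]) for i in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(dim)             # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(dim)               # point mutation, clamped
            child[i] = min(upper[i], max(lower[i], child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)


# Toy model: class 1 iff x0 + x1 > 1.
predict = lambda x: int(x[0] + x[1] > 1)
cf = genetic_counterfactual([0.2, 0.2], predict, [0, 0], [1, 1])
```

Because the search only calls `predict`, the same loop works for any black-box model, which is what makes the genetic approach model-agnostic.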
DiceRandom is a concrete implementation of ExplainerBase that generates counterfactual explanations by randomly sampling the feature space, serving as a baseline or for simple cases. Its internal logic manages the sampling utilities and validity filtering.
Related Classes/Methods:
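The baseline idea reduces to rejection sampling: draw candidates uniformly within per-feature bounds and keep those the model assigns to the opposite class. All names below are hypothetical, and the model is a toy stand-in.

```python
import random


def random_counterfactuals(query, predict, lower, upper, total_cfs,
                           max_tries=10000, seed=0):
    """Rejection-sample feature vectors until `total_cfs` flip the model."""
    rng = random.Random(seed)
    target = 1 - predict(query)
    found = []
    for _ in range(max_tries):
        candidate = [rng.uniform(lo, hi) for lo, hi in zip(lower, upper)]
        if predict(candidate) == target:
            found.append(candidate)
            if len(found) == total_cfs:
                break
    return found


predict = lambda x: int(x[0] + x[1] > 1)
cfs = random_counterfactuals([0.2, 0.2], predict, [0, 0], [1, 1], total_cfs=3)
```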
DicePyTorch is a concrete implementation of ExplainerBase designed for PyTorch models. It generates counterfactuals by gradient-based optimization, using PyTorch's autograd for the required gradient calculations.
Related Classes/Methods:
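The gradient-based idea can be illustrated dependency-free by writing out the gradient of a logistic model by hand; DicePyTorch obtains equivalent gradients through autograd. The loss below, a squared prediction-gap term plus a proximity penalty, is a simplified stand-in for DiCE's yloss-plus-proximity objective (the diversity term is omitted), and its exact form and weights here are illustrative.

```python
import math


def gradient_counterfactual(query, w, b, target=1.0,
                            lr=0.5, steps=200, lam=0.1):
    """Gradient-descent counterfactual for a logistic model sigmoid(w.x + b).

    Loss = (p(x) - target)^2 + lam * ||x - query||^2. The per-feature
    gradient is derived manually; autograd would produce the same values.
    """
    x = list(query)
    for _ in range(steps):
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1.0 / (1.0 + math.exp(-z))
        dz = 2.0 * (p - target) * p * (1.0 - p)   # d(pred loss)/dz
        for i in range(len(x)):
            grad = dz * w[i] + 2.0 * lam * (x[i] - query[i])
            x[i] -= lr * grad
        # (omitted: clipping to valid feature ranges, categorical handling)
    return x


w, b = [2.0, 2.0], -2.0                 # class 1 iff x0 + x1 > 1
cf = gradient_counterfactual([0.2, 0.2], w, b, target=1.0)
```

The proximity weight `lam` trades off how far the counterfactual may drift from the query against how confidently it reaches the target class.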
DiceTensorFlow1 is a concrete implementation of ExplainerBase tailored to TensorFlow 1.x models, employing the same gradient-based approach within TensorFlow 1.x's graph-and-session execution model. Its internal logic handles TensorFlow 1.x-specific gradient calculations.
Related Classes/Methods:
DiceTensorFlow2 is a concrete implementation of ExplainerBase for TensorFlow 2.x models, using gradient-based optimization under TensorFlow 2.x's eager execution. Its internal logic handles TensorFlow 2.x-specific gradient calculations.
Related Classes/Methods:
DiceXGBoost is a concrete implementation of ExplainerBase for XGBoost models, likely using a tree-compatible, non-gradient search, since boosted-tree ensembles are not differentiable. Its internal logic handles XGBoost-specific counterfactual generation.
Related Classes/Methods: