Hi @GISH123,
Sorry for the late response. If you are using a LightGBM model, its `feature_importances_` attribute gives you a global ordering of the features in your trained model.
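As a minimal sketch of reading that attribute (using sklearn's `RandomForestClassifier` as a stand-in here, since `lightgbm.LGBMClassifier` follows the same scikit-learn estimator convention and exposes the identical `feature_importances_` attribute):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data; with lightgbm installed you would train lgb.LGBMClassifier
# instead -- it exposes the same sklearn-style `feature_importances_`.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Indices of features ranked from most to least important.
ranking = np.argsort(model.feature_importances_)[::-1]
print(ranking)
```

Note this gives a single global ranking for the whole model, not per-sample importances.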
However, the MimicExplainer also gives you local feature importance per sample of your evaluation data. So you could train your LightGBM model and explain it with the MimicExplainer, using LightGBM as the surrogate model, to get both local and global feature importance. This is a perfectly valid scenario. A surrogate model can be lossy, but for the scenario you described, the explanations a LightGBM surrogate produces for your LightGBM model should be very close to the original.
You can read more about black-box explainers and surrogate models at https://christophm.github.io/interpretable-ml-book/global.html.
Let us know if you have more questions.
Regards,
from interpret-community.
@GISH123, I hope my response above helped with your understanding of black-box explainers. Since there has been no response on this issue for some time, I am closing it. If you have further questions, please feel free to open another issue.
Regards,
from interpret-community.