Interpretability Callbacks for TensorFlow 2.0
tf-explain
tf-explain implements interpretability methods as TensorFlow 2.0 callbacks to ease the understanding of neural networks.
Installation
tf-explain is available on PyPI as an alpha release. To install it:

virtualenv venv -p python3.6
source venv/bin/activate
pip install tf-explain
tf-explain is compatible with TensorFlow 2. TensorFlow is not declared as a dependency so that you can choose between the CPU and GPU versions. In addition to the previous install, run:
# For CPU version
pip install tensorflow==2.0.0-beta1
# For GPU version
pip install tensorflow-gpu==2.0.0-beta1
Available Methods
Activations Visualization
Visualize how a given input comes out of a specific activation layer
from tf_explain.callbacks.activations_visualization import ActivationsVisualizationCallback

model = [...]

callbacks = [
    ActivationsVisualizationCallback(
        validation_data=(x_val, y_val),
        layers_name=["activation_1"],
        output_dir=output_dir,
    ),
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
Occlusion Sensitivity
Visualize how parts of the image affect the neural network's confidence by iteratively occluding them
from tf_explain.callbacks.occlusion_sensitivity import OcclusionSensitivityCallback

model = [...]

callbacks = [
    OcclusionSensitivityCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        patch_size=4,
        output_dir=output_dir,
    ),
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
Occlusion Sensitivity for Tabby class (stripes differentiate tabby cat from other ImageNet cat classes)
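To make the idea concrete, here is a minimal numpy sketch of occlusion sensitivity (illustrative only, not tf-explain's internal implementation): slide a patch over the image, re-score each occluded copy, and record how much the prediction drops. The `occlusion_map` helper and the toy `toy_predict` "model" below are hypothetical names introduced for this example.

```python
import numpy as np

def occlusion_map(image, predict_fn, patch_size=4, fill_value=0.5):
    """Return a map of confidence drops; higher means more important region."""
    h, w = image.shape[:2]
    base_score = predict_fn(image)
    heatmap = np.zeros((h // patch_size, w // patch_size))
    for i in range(0, h, patch_size):
        for j in range(0, w, patch_size):
            occluded = image.copy()
            # Cover one patch and measure the drop in confidence
            occluded[i:i + patch_size, j:j + patch_size] = fill_value
            heatmap[i // patch_size, j // patch_size] = base_score - predict_fn(occluded)
    return heatmap

# Toy "model": confidence is the mean intensity of the top-left quadrant,
# so occluding that quadrant produces the largest drop.
def toy_predict(img):
    return img[:4, :4].mean()

image = np.ones((8, 8))
heatmap = occlusion_map(image, toy_predict, patch_size=4, fill_value=0.0)
```

Regions whose occlusion causes the largest drop stand out in the heatmap, which is exactly what the callback renders over the input image.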
Grad CAM
Visualize how parts of the image affect the neural network's output by looking into the activation maps
From Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
from tf_explain.callbacks.grad_cam import GradCAMCallback

model = [...]

callbacks = [
    GradCAMCallback(
        validation_data=(x_val, y_val),
        layer_name="activation_1",
        class_index=0,
        output_dir=output_dir,
    )
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
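The core Grad-CAM combination step can be sketched in a few lines of numpy (illustrative only, not tf-explain's code): weight each feature map of the chosen layer by its spatially averaged gradient, sum the weighted maps, and apply a ReLU so that only features with a positive influence on the target class remain. The `grad_cam` helper below is a hypothetical name for this sketch.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: arrays of shape (H, W, C)."""
    weights = gradients.mean(axis=(0, 1))                       # (C,) channel importances
    cam = np.tensordot(feature_maps, weights, axes=([2], [0]))  # (H, W) weighted sum
    return np.maximum(cam, 0)                                   # ReLU keeps positive influence

# Toy input: channel 0 pushes the class score up, channel 1 pushes it down.
fmaps = np.zeros((2, 2, 2))
fmaps[0, 0, 0] = 1.0   # activation in the "positive" channel at (0, 0)
fmaps[1, 1, 1] = 2.0   # activation in the "negative" channel at (1, 1)
grads = np.stack([np.ones((2, 2)), -np.ones((2, 2))], axis=-1)

cam = grad_cam(fmaps, grads)
```

The resulting map is then upsampled to the input resolution and overlaid on the image, highlighting the regions the network relied on for the chosen class.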
SmoothGrad
Visualize stabilized gradients on the inputs towards the decision
From SmoothGrad: removing noise by adding noise
from tf_explain.callbacks.smoothgrad import SmoothGradCallback

model = [...]

callbacks = [
    SmoothGradCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        num_samples=20,
        noise=1.,
        output_dir=output_dir,
    )
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
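The SmoothGrad idea itself is simple enough to sketch in numpy (illustrative only, not tf-explain's implementation): average the gradient over several noisy copies of the input, which smooths out local fluctuations in the raw gradient map. The `smoothgrad` helper below is a hypothetical name for this sketch; `num_samples` and `noise` mirror the callback's parameters above.

```python
import numpy as np

def smoothgrad(x, grad_fn, num_samples=20, noise=1.0, seed=0):
    """Average grad_fn over noisy copies of x (Gaussian noise, std=noise)."""
    rng = np.random.default_rng(seed)
    grads = [grad_fn(x + rng.normal(0.0, noise, size=x.shape))
             for _ in range(num_samples)]
    return np.mean(grads, axis=0)

# Toy example: for f(x) = sum(x ** 2) the gradient is 2 * x, so the
# smoothed gradient should stay close to 2 * x as samples accumulate.
x = np.array([1.0, 2.0, 3.0])
smoothed = smoothgrad(x, lambda v: 2.0 * v, num_samples=500, noise=1.0)
```

In practice the gradient comes from backpropagating the target class score to the input, and the averaged map is rendered as a saliency image.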
Visualizing the results
When you use the callbacks, the output files are created in the logs directory.
You can view them in TensorBoard with the following command: tensorboard --logdir logs
Roadmap
- Subclassing API Support
- Additional Methods
- Auto-generated API Documentation & Documentation Testing