# jupyter-spark
[![Build Status](https://travis-ci.org/mozilla/jupyter-spark.svg?branch=master)](https://travis-ci.org/mozilla/jupyter-spark)
Jupyter Notebook extension for Apache Spark integration.
Includes a progress indicator for the current Notebook cell if it invokes a Spark job. Queries the Spark UI service on the backend to get the required Spark job information.
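Under the hood this means polling Spark's monitoring REST API for per-job task counts. A minimal sketch of how a progress fraction could be derived from such a response (the payload below is illustrative sample data, not real Spark output):

```python
import json

# Illustrative payload shaped like the JSON returned by Spark's
# /api/v1/applications/<app-id>/jobs REST endpoint (field names follow
# the Spark REST API; the values here are made up).
sample_jobs = json.loads("""
[
  {"jobId": 0, "status": "RUNNING",   "numTasks": 200, "numCompletedTasks": 150},
  {"jobId": 1, "status": "SUCCEEDED", "numTasks": 100, "numCompletedTasks": 100}
]
""")

def job_progress(job):
    """Fraction of a job's tasks that have completed, from 0.0 to 1.0."""
    total = job.get("numTasks", 0)
    if total == 0:
        return 0.0
    return job["numCompletedTasks"] / total

for job in sample_jobs:
    print("job %d: %.0f%% (%s)"
          % (job["jobId"], 100 * job_progress(job), job["status"]))
```

The extension's actual implementation may differ; this only illustrates the kind of per-job arithmetic behind the progress bar.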
![Alt text](/screenshots/ProgressBar.png?raw=true "Spark progress bar")
To view all currently running jobs, click the “show running Spark jobs” button, or press `Alt+S`.
![Alt text](/screenshots/SparkButton.png?raw=true "show running Spark jobs button")

![Alt text](/screenshots/Dialog.png?raw=true "Spark dialog")
A proxied version of the Spark UI can be accessed at http://localhost:8888/spark.
## Installation
To install, simply run:
```
pip install jupyter-spark
jupyter serverextension enable --py jupyter_spark
jupyter nbextension install --py jupyter_spark
jupyter nbextension enable --py jupyter_spark
```
You may also have to enable the `widgetsnbextension` extension if it hasn't been enabled before (check by running `jupyter nbextension list`):
```
jupyter nbextension enable --py widgetsnbextension
```
To double-check that the extension was installed correctly, run:
```
jupyter nbextension list
jupyter serverextension list
```
Please feel free to also install [lxml](http://lxml.de/) to improve the performance of the server-side communication with Spark, using your favorite package manager, e.g.:
```
pip install lxml
```
For development and testing, clone the project and run from a shell in the project’s root directory:
```
pip install -e .
jupyter serverextension enable --py jupyter_spark
jupyter nbextension install --py jupyter_spark
jupyter nbextension enable --py jupyter_spark
```
To uninstall the extension run:
```
jupyter serverextension disable --py jupyter_spark
jupyter nbextension disable --py jupyter_spark
jupyter nbextension uninstall --py jupyter_spark
pip uninstall jupyter-spark
```
## Configuration
To change the URL of the Spark API from which the job metadata is fetched, override the `Spark.url` config value, e.g. on the command line:
```
jupyter notebook --Spark.url="http://localhost:4040"
```
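Since `Spark.url` is an ordinary traitlets config value, it should also be settable in a Jupyter config file instead of on the command line. A sketch, assuming the default `~/.jupyter/jupyter_notebook_config.py` location:

```python
# ~/.jupyter/jupyter_notebook_config.py
# Point jupyter-spark at a non-default Spark UI address.
c.Spark.url = "http://localhost:4040"
```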
## Changelog
### 0.2.0 (2016-06-30)
- Refactored to fix a bunch of Python packaging and code quality issues
- Added a test suite for the Python code
- Set up continuous integration: https://travis-ci.org/mozilla/jupyter-spark
- Set up code coverage reports: https://codecov.io/gh/mozilla/jupyter-spark
- Added the ability to override the Spark API URL via a command line option
- **IMPORTANT** Requires a manual step to enable after running `pip install` (see installation docs)!
To update:

1. Run `pip uninstall jupyter-spark`
2. Delete `spark.js` from your `nbextensions` folder
3. Delete any references to `jupyter_spark.spark` in `jupyter_notebook_config.json` (in your `.jupyter` directory)
4. Delete any references to `spark` in `notebook.json` (in `.jupyter/nbconfig`)
5. Follow the installation instructions to reinstall
### 0.1.1 (2016-05-03)
- Initial release with a working prototype
Hashes for `jupyter_spark-0.2.0-py2.py3-none-any.whl`:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `0c853640bfc33533a7dc230fc8ff95e8f721b32ec78ae451166fb2a4d24b70c0` |
| MD5 | `c999f28429bd2e0b09c4d2bc89760fc0` |
| BLAKE2b-256 | `9e35e8cbd26831237ef3e8c2f2cafd9e285e7b2754d154cd55890c62efefca83` |