Elastic-net regularized generalized linear models.
A Python implementation of elastic-net regularized generalized linear models.
Generalized linear models (GLMs) are well-established tools for regression and classification and are widely applied across the sciences, economics, business, and finance. Their convex loss makes the fitted parameters uniquely identifiable, and their point-wise nonlinearities and well-defined noise models make them easy to interpret.
In the era of exploratory data analyses with a large number of predictor variables, it is important to regularize. Regularization prevents overfitting by adding a penalty term to the negative log likelihood, and it can be used to articulate prior knowledge about the parameters in a structured form.
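For concreteness, the elastic-net penalized objective in the style of Friedman, Hastie & Tibshirani (2010) can be written as follows (generic notation shown as a sketch, not pyglmnet's exact internals):

\[
\min_{\beta_0,\,\beta}\; -\frac{1}{N}\sum_{i=1}^{N}\log P\!\left(y_i \mid x_i;\, \beta_0, \beta\right)
\;+\; \lambda\left[\alpha\,\lVert\beta\rVert_1 + \tfrac{1-\alpha}{2}\,\lVert\beta\rVert_2^2\right]
\]

where \(\lambda\) sets the overall strength of the penalty and \(\alpha\) trades off the lasso (\(\ell_1\)) and ridge (\(\ell_2\)) terms.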
Despite the attractiveness of regularized GLMs, the available tools in the Python data science ecosystem are highly fragmented. More specifically,
statsmodels provides a wide range of link functions but no regularization.
scikit-learn provides elastic net regularization but only for linear models.
lightning provides elastic net and group lasso regularization, but only for linear and logistic regression.
Pyglmnet is a response to this fragmentation. Here are some highlights.
Pyglmnet provides a wide range of noise models (and paired canonical link functions): 'gaussian', 'binomial', 'multinomial', 'poisson', and 'softplus'.
It supports a wide range of regularizers: ridge, lasso, elastic net, group lasso, and Tikhonov regularization.
Pyglmnet's API is designed to be compatible with scikit-learn, so you can use tools such as GridSearchCV() and cross_val_score() (see the sketch after this list of highlights).
We follow the same approach and notation as in Friedman, J., Hastie, T., & Tibshirani, R. (2010) and the widely popular R package glmnet that accompanies it.
We have implemented a cyclical coordinate descent optimizer with Newton updates, active sets, update caching, and warm restarts. This optimization approach is identical to the one used in the glmnet R package.
A number of Python wrappers exist for the R glmnet package (e.g. here and here), but in contrast to these, Pyglmnet is a pure Python implementation. Therefore, it is easy to modify and to introduce additional noise models and regularizers in the future.
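To illustrate the scikit-learn compatibility mentioned above, here is a minimal sketch that wires a GLM estimator into GridSearchCV. The parameter names reg_lambda and alpha and the grid values are illustrative assumptions, not prescribed settings.

import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from pyglmnet import GLM

# toy regression data with a Gaussian noise model
X = np.random.normal(0.0, 1.0, [1000, 100])
y = X[:, 0] + 0.1 * np.random.normal(0.0, 1.0, 1000)

# grid-search over the regularization strength and the l1/l2 mixing weight
# (reg_lambda and alpha are assumed here to be the estimator's parameter names)
param_grid = {'reg_lambda': [0.01, 0.1, 1.0], 'alpha': [0.0, 0.5, 1.0]}
search = GridSearchCV(GLM(distr='gaussian'), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)

# cross_val_score works the same way with a fixed estimator
print(cross_val_score(GLM(distr='gaussian', reg_lambda=0.1), X, y, cv=3))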
Benchmarks
Here is a table comparing pyglmnet against scikit-learn's linear_model, statsmodels, and R.
The numbers below are run times (in milliseconds) to fit a 1000 samples x 100 predictors sparse matrix (density 0.05). This was done on a c. 2011 MacBook Pro, so your numbers may vary.
distr | pyglmnet | scikit-learn | statsmodels | R
---|---|---|---|---
gaussian | 6.8 | 1.2 | 29.8 | 10.3
binomial | 16.3 | 4.5 | 89.3 | –
poisson | 5.8 | – | 117.2 | 156.1
We provide a function called BenchMarkGLM() in pyglmnet.datasets if you would like to run these benchmarks yourself, but you will need to install the dependencies (scikit-learn, rpy2, and statsmodels) on your own.
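If you just want a starting point, here is a minimal sketch of calling that helper; the exact signature of BenchMarkGLM() is not documented here, so the no-argument call below is an assumption.

# hypothetical invocation: BenchMarkGLM's actual arguments may differ
from pyglmnet.datasets import BenchMarkGLM
BenchMarkGLM()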
Installation
Now pip installable!
$ pip install pyglmnet
Manual installation instructions below:
Clone the repository.
$ git clone http://github.com/glm-tools/pyglmnet
Install pyglmnet using setup.py as follows:
$ python setup.py develop install
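To check that either installation worked, you can try importing the package (this assumes pyglmnet exposes a __version__ attribute):

$ python -c "import pyglmnet; print(pyglmnet.__version__)"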
Getting Started
Here is an example of how to use the GLM estimator.
import numpy as np
import scipy.sparse as sps
from sklearn.preprocessing import StandardScaler
from pyglmnet import GLM
# create an instance of the GLM class
glm = GLM(distr='poisson')
n_samples, n_features = 10000, 100
# sample random coefficients
beta0 = np.random.normal(0.0, 1.0, 1)
beta = sps.rand(n_features, 1, 0.1)
beta = np.array(beta.todense())
# simulate training data
X_train = np.random.normal(0.0, 1.0, [n_samples, n_features])
y_train = glm.simulate(beta0, beta, X_train)
# simulate testing data
X_test = np.random.normal(0.0, 1.0, [n_samples, n_features])
y_test = glm.simulate(beta0, beta, X_test)
# fit the model on the training data
scaler = StandardScaler().fit(X_train)
glm.fit(scaler.transform(X_train), y_train)
# predict using fitted model on the test data
yhat_test = glm.predict(scaler.transform(X_test))
# score the fitted model on the (scaled) test data
deviance = glm.score(scaler.transform(X_test), y_test)
Tutorial
Here is an extensive tutorial on GLMs, optimization and pseudo-code.
Here are slides from a talk at PyData Chicago 2016, along with the corresponding tutorial notebooks and a video.
How to contribute?
We welcome pull requests. Please see our developer documentation page for more details.
Contributors
Acknowledgments
Konrad Kording for funding and support
Sara Solla for masterful GLM lectures
License
MIT License Copyright (c) 2016 Pavan Ramkumar