Multiple-target machine learning
Project description
Himalaya: Multiple-target linear models
Himalaya [1] implements machine learning linear models in Python, focusing on computational efficiency for large numbers of targets.
Use himalaya if you need a library that:
- estimates linear models on large numbers of targets,
- runs on CPU and GPU hardware,
- provides estimators compatible with scikit-learn’s API.
Himalaya is stable (with particular care for backward compatibility) and open for public use (give it a star!).
Example
import numpy as np

# generate random data, with 4 targets
n_samples, n_features, n_targets = 10, 5, 4
np.random.seed(0)
X = np.random.randn(n_samples, n_features)
Y = np.random.randn(n_samples, n_targets)

# fit a ridge model, with cross-validation over the grid of alphas
from himalaya.ridge import RidgeCV
model = RidgeCV(alphas=[1, 10, 100])
model.fit(X, Y)
print(model.best_alphas_)  # [ 10. 100. 10. 100.]
The RidgeCV model uses the same API as scikit-learn estimators, with methods such as fit, predict, and score. It efficiently fits large numbers of targets (himalaya is routinely used with 100,000 targets), and it selects the best hyperparameter alpha for each target independently.
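Because the estimator follows the scikit-learn API, prediction and scoring work as expected. Here is a minimal sketch continuing the example above; it assumes that score returns one R^2 value per target, as himalaya’s multiple-target estimators do.
Y_pred = model.predict(X)   # predictions, shape (n_samples, n_targets)
scores = model.score(X, Y)  # coefficient of determination, one value per target
print(Y_pred.shape, scores.shape)  # (10, 4) (4,)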
More examples
More examples of how to use himalaya are available in the gallery of examples.
Tutorials using himalaya for fMRI
Himalaya was designed primarily for functional magnetic resonance imaging (fMRI) encoding models. In-depth tutorials on using himalaya for fMRI encoding models can be found at gallantlab/voxelwise_tutorials.
Models
Himalaya implements the following models:
Ridge, RidgeCV
KernelRidge, KernelRidgeCV
GroupRidgeCV, MultipleKernelRidgeCV, WeightedKernelRidge
SparseGroupLassoCV
See the model descriptions in the documentation website.
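As an illustration, the kernel ridge variants follow the same pattern as RidgeCV above. Here is a minimal sketch reusing the random X and Y from the first example; it assumes that KernelRidgeCV exposes one best_alphas_ value per target, as RidgeCV does.
from himalaya.kernel_ridge import KernelRidgeCV

model = KernelRidgeCV(alphas=[1, 10, 100])  # cross-validated alpha, linear kernel by default
model.fit(X, Y)
print(model.best_alphas_.shape)  # one alpha per target: (4,)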
Himalaya backends
Himalaya can be used seamlessly with different backends. The available backends are numpy (default), cupy, torch, and torch_cuda. To change the backend, call:
from himalaya.backend import set_backend
backend = set_backend("torch")
and pass torch tensors as inputs to the himalaya solvers. For convenience, estimators implementing scikit-learn’s API can cast arrays to the correct input type.
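For example, here is a minimal sketch of that convenience, assuming the same numpy arrays X and Y as in the first example: after switching to the torch backend, the scikit-learn-style estimator accepts the numpy inputs and casts them internally, and the fitted attributes follow the selected backend.
from himalaya.backend import set_backend
from himalaya.ridge import RidgeCV

backend = set_backend("torch")

model = RidgeCV(alphas=[1, 10, 100])
model.fit(X, Y)                  # numpy inputs are cast by the estimator
print(type(model.best_alphas_))  # inspect the array type of the fitted attribute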
GPU acceleration
To run himalaya on a graphics processing unit (GPU), you can use either the cupy or the torch_cuda backend:
from himalaya.backend import set_backend
backend = set_backend("cupy") # or "torch_cuda"
data = backend.asarray(data)
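After fitting on the GPU, results can be moved back to the CPU for further analysis. Here is a minimal sketch continuing the snippet above; X_gpu and Y_gpu are hypothetical GPU copies of the arrays from the first example, and to_numpy is assumed to be available on the backend, as used in himalaya’s examples.
from himalaya.ridge import RidgeCV

X_gpu = backend.asarray(X)  # hypothetical GPU copies of the numpy arrays above
Y_gpu = backend.asarray(Y)

model = RidgeCV(alphas=[1, 10, 100])
model.fit(X_gpu, Y_gpu)
best_alphas = backend.to_numpy(model.best_alphas_)  # back to numpy, on the CPU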
Installation
Dependencies
- Python 3
- Numpy
- Scikit-learn
Optional (GPU backends):
- PyTorch (1.9+ preferred)
- Cupy
Standard installation
You may install the latest version of himalaya using the package manager pip, which will automatically download himalaya from the Python Package Index (PyPI):
pip install himalaya
Installation from source
To install himalaya from the latest source (main branch), you may call:
pip install git+https://github.com/gallantlab/himalaya.git
Developers can also install himalaya in editable mode via:
git clone https://github.com/gallantlab/himalaya
cd himalaya
pip install --editable .
Cite this package
If you use himalaya in your work, please give it a star and cite our publication.
Download files
File details
Details for the file himalaya-0.4.4.tar.gz.
File metadata
- Download URL: himalaya-0.4.4.tar.gz
- Upload date:
- Size: 70.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.10.12
File hashes
Algorithm | Hash digest
---|---
SHA256 | 50663d55781b97fba2b5eb71e60889f61b7339165a1b0047d6d64d40715d16c2
MD5 | de66c2402cae108ad3506b869f0e7174
BLAKE2b-256 | a8350e5c7507e339706aec84ff46387f7f5d27a1ec04eeb112d0bb19f2ad0a64
File details
Details for the file himalaya-0.4.4-py3-none-any.whl.
File metadata
- Download URL: himalaya-0.4.4-py3-none-any.whl
- Upload date:
- Size: 83.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.10.12
File hashes
Algorithm | Hash digest
---|---
SHA256 | e415c518273395194cb6aec0b7e848ddc695e175a2582c514217e8da3357200b
MD5 | fd41b80d2b34344ea7308732d008c0b5
BLAKE2b-256 | 9c60361a23e9c4191ec54222d98c1e6c0d37cb894565000f974d7d2853ef5948