A Python toolbox for performing gradient-free optimization
Project description
Nevergrad - A gradient-free optimization platform
nevergrad is a Python 3.8+ library. It can be installed with:
pip install nevergrad
More installation options, including Windows installation, and complete instructions are available in the "Getting started" section of the documentation.
You can join the Nevergrad users' Facebook group here.
Minimizing a function using an optimizer (here NGOpt) is straightforward:
import nevergrad as ng

def square(x):
    return sum((x - .5)**2)

optimizer = ng.optimizers.NGOpt(parametrization=2, budget=100)
recommendation = optimizer.minimize(square)
print(recommendation.value)  # recommended value
>>> [0.49971112 0.5002944]
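The same problem can also be driven step by step through the optimizer's ask-and-tell interface, which is convenient when the evaluation loop lives elsewhere. The following is a minimal sketch based on the square example above; the explicit loop over optimizer.budget is just one way to drive it.

import nevergrad as ng

def square(x):
    return sum((x - .5)**2)

optimizer = ng.optimizers.NGOpt(parametrization=2, budget=100)
for _ in range(optimizer.budget):
    candidate = optimizer.ask()                         # request a point to evaluate
    loss = square(*candidate.args, **candidate.kwargs)  # evaluate it ourselves
    optimizer.tell(candidate, loss)                     # report the observed loss
recommendation = optimizer.provide_recommendation()
print(recommendation.value)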
nevergrad also supports bounded continuous variables as well as discrete variables, and mixtures of the two. To do this, one can specify the input space:
import nevergrad as ng

def fake_training(learning_rate: float, batch_size: int, architecture: str) -> float:
    # optimal for learning_rate=0.2, batch_size=4, architecture="conv"
    return (learning_rate - 0.2)**2 + (batch_size - 4)**2 + (0 if architecture == "conv" else 10)

# Instrumentation class is used for functions with multiple inputs
# (positional and/or keywords)
parametrization = ng.p.Instrumentation(
    # a log-distributed scalar between 0.001 and 1.0
    learning_rate=ng.p.Log(lower=0.001, upper=1.0),
    # an integer from 1 to 12
    batch_size=ng.p.Scalar(lower=1, upper=12).set_integer_casting(),
    # either "conv" or "fc"
    architecture=ng.p.Choice(["conv", "fc"])
)

optimizer = ng.optimizers.NGOpt(parametrization=parametrization, budget=100)
recommendation = optimizer.minimize(fake_training)

# show the recommended keyword arguments of the function
print(recommendation.kwargs)
>>> {'learning_rate': 0.1998, 'batch_size': 4, 'architecture': 'conv'}
Learn more about parametrization in the documentation!
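When each evaluation is expensive, the optimization can be parallelized with a standard concurrent.futures executor. The sketch below reuses the fake_training example above; the num_workers value and the choice of ThreadPoolExecutor are illustrative assumptions, not prescriptions.

from concurrent import futures
import nevergrad as ng

def fake_training(learning_rate: float, batch_size: int, architecture: str) -> float:
    return (learning_rate - 0.2)**2 + (batch_size - 4)**2 + (0 if architecture == "conv" else 10)

parametrization = ng.p.Instrumentation(
    learning_rate=ng.p.Log(lower=0.001, upper=1.0),
    batch_size=ng.p.Scalar(lower=1, upper=12).set_integer_casting(),
    architecture=ng.p.Choice(["conv", "fc"]),
)

# num_workers tells the optimizer how many candidates may be evaluated concurrently
optimizer = ng.optimizers.NGOpt(parametrization=parametrization, budget=100, num_workers=4)

# the executor runs evaluations in parallel; batch_mode=False submits a new
# candidate as soon as a worker becomes free
with futures.ThreadPoolExecutor(max_workers=optimizer.num_workers) as executor:
    recommendation = optimizer.minimize(fake_training, executor=executor, batch_mode=False)

print(recommendation.kwargs)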
(Animation: convergence of a population of points to the minimum with two-points DE.)
Documentation
Check out our documentation! It's still a work in progress, so don't hesitate to submit issues and/or pull requests (PRs) to update it and make it clearer! The latest version of our data and the latest version of our PDF report are also available.
Citing
@misc{nevergrad,
    author = {J. Rapin and O. Teytaud},
    title = {{Nevergrad - A gradient-free optimization platform}},
    year = {2018},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://GitHub.com/FacebookResearch/Nevergrad}},
}
License
nevergrad is released under the MIT license. See the LICENSE file for additional details.
See also our Terms of Use and Privacy Policy.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution: nevergrad-1.0.5.tar.gz
Built Distribution: nevergrad-1.0.5-py3-none-any.whl
File details
Details for the file nevergrad-1.0.5.tar.gz.
File metadata
- Download URL: nevergrad-1.0.5.tar.gz
- Upload date:
- Size: 403.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.9.20
File hashes
Algorithm | Hash digest
---|---
SHA256 | c341c767067543ada280669118cfc1d2db7eb2610bedb4b7cc3d6ae7ea98955d
MD5 | c9e06f0e723e61c27ac0c3d801803507
BLAKE2b-256 | 00f35f804311ba9c4e617ad419660c7597690123335daf90a0e5eac71fd1c9b5
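To check the integrity of a downloaded archive, you can recompute its SHA256 digest and compare it against the value in the table above. The snippet below is a minimal sketch using Python's standard hashlib module; the file name and expected digest are taken from this page.

import hashlib

# expected SHA256 digest for nevergrad-1.0.5.tar.gz (from the table above)
EXPECTED_SHA256 = "c341c767067543ada280669118cfc1d2db7eb2610bedb4b7cc3d6ae7ea98955d"

# hash the downloaded file in chunks to keep memory usage low
digest = hashlib.sha256()
with open("nevergrad-1.0.5.tar.gz", "rb") as f:
    for chunk in iter(lambda: f.read(8192), b""):
        digest.update(chunk)

print("OK" if digest.hexdigest() == EXPECTED_SHA256 else "MISMATCH")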
File details
Details for the file nevergrad-1.0.5-py3-none-any.whl.
File metadata
- Download URL: nevergrad-1.0.5-py3-none-any.whl
- Upload date:
- Size: 495.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.9.20
File hashes
Algorithm | Hash digest
---|---
SHA256 | bfabc7a45cf172aef551c13892329a2ecd30f18cc5493aef4dc7cb84180140bb
MD5 | a7fa4ecc71afc36b8bc158365db98a84
BLAKE2b-256 | a94e3d02e74c06ce2eeb6d39b7654e0b527ae4620debf3f85205522c20b9d2ab