A Python toolbox for performing gradient-free optimization
Nevergrad - A gradient-free optimization platform
nevergrad is a Python 3.6+ library. It can be installed with:
pip install nevergrad
More installation options and complete instructions are available in the "Getting started" section of the documentation.
You can join the Nevergrad users Facebook group.
Minimizing a function using an optimizer (here OnePlusOne) is straightforward:
import nevergrad as ng
def square(x):
    return sum((x - .5)**2)
optimizer = ng.optimizers.OnePlusOne(parametrization=2, budget=100)
recommendation = optimizer.minimize(square)
print(recommendation.value) # recommended value
>>> [0.49971112 0.5002944]
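The same optimization can also be driven step by step with the ask-and-tell interface. The snippet below is a minimal sketch of that loop, reusing the optimizer set up above:

import nevergrad as ng

def square(x):
    return sum((x - .5)**2)

optimizer = ng.optimizers.OnePlusOne(parametrization=2, budget=100)
for _ in range(optimizer.budget):
    candidate = optimizer.ask()      # draw a candidate point to evaluate
    loss = square(candidate.value)   # candidate.value is the underlying numpy array
    optimizer.tell(candidate, loss)  # report the observed loss back to the optimizer
recommendation = optimizer.provide_recommendation()
print(recommendation.value)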
nevergrad also supports bounded continuous variables, discrete variables, and mixtures of those. To do this, one can specify the input space:
import nevergrad as ng
def fake_training(learning_rate: float, batch_size: int, architecture: str) -> float:
    # optimal for learning_rate=0.2, batch_size=4, architecture="conv"
    return (learning_rate - 0.2)**2 + (batch_size - 4)**2 + (0 if architecture == "conv" else 10)
# Instrumentation class is used for functions with multiple inputs
# (positional and/or keywords)
parametrization = ng.p.Instrumentation(
    # a log-distributed scalar between 0.001 and 1.0
    learning_rate=ng.p.Log(lower=0.001, upper=1.0),
    # an integer from 1 to 12
    batch_size=ng.p.Scalar(lower=1, upper=12).set_integer_casting(),
    # either "conv" or "fc"
    architecture=ng.p.Choice(["conv", "fc"])
)
optimizer = ng.optimizers.OnePlusOne(parametrization=parametrization, budget=100)
recommendation = optimizer.minimize(fake_training)
# show the recommended keyword arguments of the function
print(recommendation.kwargs)
>>> {'learning_rate': 0.1998, 'batch_size': 4, 'architecture': 'conv'}
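For purely continuous bounded inputs, a bounded array parametrization can be used instead of Instrumentation. The following is a minimal sketch using ng.p.Array with set_bounds; the shape and bounds here are illustrative:

import nevergrad as ng

def square(x):
    return sum((x - .5)**2)

# a 2-dimensional continuous array, each coordinate constrained to [-1, 1]
param = ng.p.Array(shape=(2,)).set_bounds(lower=-1.0, upper=1.0)
optimizer = ng.optimizers.OnePlusOne(parametrization=param, budget=100)
recommendation = optimizer.minimize(square)
print(recommendation.value)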
Learn more about parametrization in the documentation!
Figure: convergence of a population of points to the minimum with two-points DE.
Documentation
Check out our documentation! It's still a work in progress, so don't hesitate to submit issues and/or PRs to update it and make it clearer!
Citing
@misc{nevergrad,
author = {J. Rapin and O. Teytaud},
title = {{Nevergrad - A gradient-free optimization platform}},
year = {2018},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://GitHub.com/FacebookResearch/Nevergrad}},
}
License
nevergrad is released under the MIT license. See LICENSE for additional details about it.