TF-Agents: A Reinforcement Learning Library for TensorFlow

Project description

# TF-Agents: A library for Reinforcement Learning in TensorFlow

*NOTE:* The current TF-Agents pre-release is under active development, and
interfaces may change at any time. Feel free to provide feedback and comments.

The documentation, examples and tutorials will grow over the next few weeks.

## Table of contents

<a href="#Agents">Agents</a><br>
<a href="#Tutorials">Tutorials</a><br>
<a href='#Examples'>Examples</a><br>
<a href="#Installation">Installation</a><br>
<a href='#Contributing'>Contributing</a><br>
<a href='#Principles'>Principles</a><br>
<a href='#Citation'>Citation</a><br>
<a href='#Disclaimer'>Disclaimer</a><br>


<a id='Agents'></a>
## Agents


In TF-Agents, the core elements of RL algorithms are implemented as `Agents`.
An agent has two main responsibilities: defining a Policy to interact with the
Environment, and learning/training that Policy from collected experience (see
the sketch after the list below).

Currently, the following algorithms are available in TF-Agents:

* DQN: __Human level control through deep reinforcement learning__ Mnih et al., 2015 https://deepmind.com/research/dqn/
* DDQN: __Deep Reinforcement Learning with Double Q-learning__ van Hasselt et al., 2015 https://arxiv.org/abs/1509.06461
* DDPG: __Continuous control with deep reinforcement learning__ Lillicrap et al., 2015 https://arxiv.org/abs/1509.02971
* TD3: __Addressing Function Approximation Error in Actor-Critic Methods__ Fujimoto et al., 2018 https://arxiv.org/abs/1802.09477
* REINFORCE: __Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning__ Williams http://www-anw.cs.umass.edu/~barto/courses/cs687/williams92simple.pdf
* PPO: __Proximal Policy Optimization Algorithms__ Schulman et al. http://arxiv.org/abs/1707.06347
* SAC: __Soft Actor Critic__ Haarnoja et al. https://arxiv.org/abs/1812.05905
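
To make the Policy/training split concrete, here is a minimal sketch that
builds a DQN agent on a Gym environment and retrieves its two policies. It
follows the DQN tutorial colab; since TF-Agents is still in pre-release, the
exact module paths and constructor arguments may change, so treat this as an
illustration rather than a definitive recipe.

```python
# Minimal sketch: construct a DQN agent and access its policies.
# Based on the DQN tutorial colab; APIs may shift during the pre-release.
import tensorflow as tf

from tf_agents.agents.dqn import dqn_agent
from tf_agents.environments import suite_gym, tf_py_environment
from tf_agents.networks import q_network

# Wrap a Gym environment so it produces TensorFlow tensors.
env = tf_py_environment.TFPyEnvironment(suite_gym.load('CartPole-v0'))

# A small Q-network mapping observations to per-action Q-values.
q_net = q_network.QNetwork(
    env.observation_spec(), env.action_spec(), fc_layer_params=(100,))

agent = dqn_agent.DqnAgent(
    env.time_step_spec(),
    env.action_spec(),
    q_network=q_net,
    optimizer=tf.compat.v1.train.AdamOptimizer(learning_rate=1e-3))
agent.initialize()

# The agent exposes a greedy policy for evaluation and a collect policy
# (with exploration) for gathering experience; agent.train(experience)
# updates the Q-network from batches of collected trajectories.
eval_policy = agent.policy
collect_policy = agent.collect_policy
```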

<a id='Tutorials'></a>
## Tutorials

See [`tf_agents/colabs/`](https://github.com/tensorflow/agents/tree/master/tf_agents/colabs/)
for tutorials on the major components provided.

<a id='Examples'></a>
## Examples
End-to-end examples that train agents can be found under each agent's
directory, e.g.:

* DQN: [`tf_agents/agents/dqn/examples/v1/train_eval_gym.py`](https://github.com/tensorflow/agents/tree/master/tf_agents/agents/dqn/examples/v1/train_eval_gym.py)

<a id='Installation'></a>
## Installation

To install the latest version, use the nightly builds of TF-Agents from the pip
package `tf-agents-nightly`, which requires one of `tf-nightly` or
`tf-nightly-gpu` as well as `tensorflow-probability-nightly`.
Nightly builds include newer features, but may be less stable than the versioned releases.

To install the nightly build version, run the following:

```shell
# Installing with the `--upgrade` flag ensures you'll get the latest version.
pip install --user --upgrade tf-agents-nightly # depends on tf-nightly
```
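
After installing, a quick way to confirm the nightly stack is wired up is to
import the three packages and print their versions. This is only a sanity-check
sketch; the `__version__` attributes are assumed to be present on these nightly
builds.

```python
# Sanity check: the nightly stack should import together.
import tensorflow as tf
import tensorflow_probability as tfp
import tf_agents

print('TensorFlow:', tf.__version__)        # expect a tf-nightly (or tf-nightly-gpu) build
print('TFP:', tfp.__version__)               # expect tensorflow-probability-nightly
print('TF-Agents:', tf_agents.__version__)   # expect tf-agents-nightly (attribute assumed)
```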

<a id='Contributing'></a>
## Contributing

We're eager to collaborate with you! See [`CONTRIBUTING.md`](CONTRIBUTING.md)
for a guide on how to contribute. This project adheres to TensorFlow's
[code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to
uphold this code.

<a id='Principles'></a>
## Principles

This project adheres to [Google's AI principles](PRINCIPLES.md).
By participating, using or contributing to this project you are expected to
adhere to these principles.

<a id='Citation'></a>
## Citation

If you use this code, please cite it as:

```
@misc{TFAgents,
  title = {{TF-Agents}: A library for Reinforcement Learning in TensorFlow},
  author = {Sergio Guadarrama and Anoop Korattikara and Oscar Ramirez and
            Pablo Castro and Ethan Holly and Sam Fishman and Ke Wang and
            Ekaterina Gonina and Chris Harris and Vincent Vanhoucke and
            Eugene Brevdo},
  howpublished = {\url{https://github.com/tensorflow/agents}},
  url = {https://github.com/tensorflow/agents},
  year = {2018},
  note = {[Online; accessed 30-November-2018]}
}
```

<a id='Disclaimer'></a>
## Disclaimer

This is not an official Google product.



Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distribution

tf_agents_nightly-0.2.0.dev20190305-py2.py3-none-any.whl (453.1 kB)

Uploaded: Python 2, Python 3

File details

Details for the file tf_agents_nightly-0.2.0.dev20190305-py2.py3-none-any.whl.

File metadata

  • Download URL: tf_agents_nightly-0.2.0.dev20190305-py2.py3-none-any.whl
  • Upload date:
  • Size: 453.1 kB
  • Tags: Python 2, Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.12.1 pkginfo/1.5.0.1 requests/2.21.0 setuptools/40.6.3 requests-toolbelt/0.8.0 tqdm/4.29.1 CPython/3.5.4rc1

File hashes

Hashes for tf_agents_nightly-0.2.0.dev20190305-py2.py3-none-any.whl

| Algorithm   | Hash digest |
| ----------- | ----------- |
| SHA256      | 7a605a8e1208d41e172fb61c1f87744c80f22abc4defc67de62a33e00d5ac8ec |
| MD5         | 9c8e6bafb3a78fde083bc133a1ea08bf |
| BLAKE2b-256 | c46969b85d104c6a088276f6293477a132930ef96aba42bab5cbe0477abd0c2c |

These digests can be used to verify the integrity of the downloaded file.
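
For example, if you download the wheel by hand, you can check it against the
published SHA256 digest before installing. The following is a minimal sketch
using only the Python standard library; the local filename is assumed to match
the wheel listed above.

```python
# Verify a downloaded wheel against the published SHA256 digest.
import hashlib

WHEEL = 'tf_agents_nightly-0.2.0.dev20190305-py2.py3-none-any.whl'
EXPECTED_SHA256 = '7a605a8e1208d41e172fb61c1f87744c80f22abc4defc67de62a33e00d5ac8ec'

sha256 = hashlib.sha256()
with open(WHEEL, 'rb') as f:
    # Read in chunks so large files do not need to fit in memory.
    for chunk in iter(lambda: f.read(8192), b''):
        sha256.update(chunk)

if sha256.hexdigest() != EXPECTED_SHA256:
    raise SystemExit('SHA256 mismatch -- do not install this file')
print('SHA256 OK')
```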
