
An environment for simulated highway driving tasks.

Project description

highway-env


A collection of environments for autonomous driving and tactical decision-making tasks


An episode of one of the environments available in highway-env.

Try it on Google Colab!

The environments

Highway

env = gym.make("highway-v0")

In this task, the ego-vehicle is driving on a multilane highway populated with other vehicles. The agent's objective is to reach a high speed while avoiding collisions with neighbouring vehicles. Driving on the right side of the road is also rewarded.


The highway-v0 environment.

A faster variant, highway-fast-v0, is also available, with degraded simulation accuracy to improve speed for large-scale training.
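
Each environment can also be customised through its configuration dictionary. The sketch below overrides a few entries before resetting; the keys shown (lanes_count, vehicles_count, duration) follow highway-env's documented defaults and should be checked against the installed version.

import gym
import highway_env

env = gym.make("highway-v0")
# Override a few configuration entries before resetting (keys assumed from the documentation).
env.configure({
    "lanes_count": 3,      # number of highway lanes
    "vehicles_count": 30,  # number of other vehicles spawned
    "duration": 40,        # episode length, in steps
})
obs = env.reset()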

Merge

env = gym.make("merge-v0")

In this task, the ego-vehicle starts on a main highway but soon approaches a road junction with incoming vehicles on the access ramp. The agent's objective is now to maintain a high speed while making room for the incoming vehicles so that they can safely merge into the traffic.


The merge-v0 environment.

Roundabout

env = gym.make("roundabout-v0")

In this task, the ego-vehicle is approaching a roundabout with flowing traffic. It follows its planned route automatically, but has to handle lane changes and longitudinal control to pass the roundabout as fast as possible while avoiding collisions.


The roundabout-v0 environment.
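
By default, the agent acts through a small set of discrete meta-actions (lane changes and speed changes) while low-level steering along the planned route is handled automatically. Below is a minimal sketch driving with a fixed meta-action; the action indices follow highway-env's DiscreteMetaAction convention and are assumptions to verify against the installed version.

import gym
import highway_env

env = gym.make("roundabout-v0")
obs = env.reset()

# Assumed DiscreteMetaAction indices: 0 LANE_LEFT, 1 IDLE, 2 LANE_RIGHT, 3 FASTER, 4 SLOWER
FASTER = 3

done = False
while not done:
    obs, reward, done, info = env.step(FASTER)  # naive policy: always try to speed up
    env.render()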

Parking

env = gym.make("parking-v0")

A goal-conditioned continuous control task in which the ego-vehicle must park in a given space with the appropriate heading.


The parking-v0 environment.
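
Since the task is goal-conditioned, the observation separates the current state from the target. The sketch below assumes the Gym GoalEnv convention of observation / achieved_goal / desired_goal keys and an env.compute_reward method; check both against the installed version.

import gym
import highway_env

env = gym.make("parking-v0")
obs = env.reset()

# Goal-conditioned observation (GoalEnv convention, assumed):
#   obs["observation"]   -> current kinematic state of the ego-vehicle
#   obs["achieved_goal"] -> pose currently achieved
#   obs["desired_goal"]  -> target parking pose
reward = env.compute_reward(obs["achieved_goal"], obs["desired_goal"], {})
print(obs["desired_goal"], reward)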

Intersection

env = gym.make("intersection-v0")

An intersection negotiation task with dense traffic.


The intersection-v0 environment.

Examples of agents

Agents solving the highway-env environments are available in the rl-agents and stable-baselines repositories.

pip install --user git+https://github.com/eleurent/rl-agents

Deep Q-Network


The DQN agent solving highway-v0.

This model-free value-based reinforcement learning agent performs Q-learning with function approximation, using a neural network to represent the state-action value function Q.
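
As a rough illustration of Q-learning with function approximation (a generic sketch, not the rl-agents implementation), a single temporal-difference update could look like the following; the observation size (25), number of actions (5) and hyperparameters are placeholder assumptions.

import torch
import torch.nn as nn

# Small Q-network mapping a flattened observation to one value per discrete action.
q_net = nn.Sequential(nn.Linear(25, 64), nn.ReLU(), nn.Linear(64, 5))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99  # discount factor (placeholder)

def dqn_update(obs, action, reward, next_obs, done):
    # TD target: r + gamma * max_a' Q(s', a'), with no bootstrap on terminal states.
    with torch.no_grad():
        target = reward + gamma * (1.0 - done) * q_net(next_obs).max(dim=-1).values
    q_sa = q_net(obs).gather(-1, action.unsqueeze(-1)).squeeze(-1)
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()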

Deep Deterministic Policy Gradient


The DDPG agent solving parking-v0.

This model-free policy-based reinforcement learning agent is optimized directly by gradient ascent. It uses Hindsight Experience Replay to efficiently learn how to solve a goal-conditioned task.
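
The core idea of Hindsight Experience Replay is to relabel failed episodes with a goal that was actually reached, turning them into useful training signal. A generic sketch of the relabelling step (not the exact implementation used by the agents above); compute_reward is assumed to be provided by the goal-conditioned environment.

def relabel_with_hindsight(episode, compute_reward):
    # episode: list of dicts with keys "achieved_goal", "desired_goal", ...
    # Pretend the goal was the pose actually reached at the end of the episode.
    final_goal = episode[-1]["achieved_goal"]
    relabelled = []
    for transition in episode:
        new = dict(transition)
        new["desired_goal"] = final_goal
        new["reward"] = compute_reward(transition["achieved_goal"], final_goal)
        relabelled.append(new)
    return relabelled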

Value Iteration


The Value Iteration agent solving highway-v0.

Value Iteration is only compatible with finite, discrete MDPs, so the environment is first approximated by a finite-mdp environment using env.to_finite_mdp(). This simplified state representation describes the nearby traffic in terms of the predicted Time-To-Collision (TTC) on each lane of the road. The transition model is simplistic and assumes that each vehicle will keep driving at a constant speed without changing lanes. This model bias can be a source of mistakes.

The agent then performs Value Iteration to compute the corresponding optimal state-value function.
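
For reference, the textbook algorithm on a finite MDP described by a transition array T[s, a, s'] and a reward array R[s, a] (a generic sketch, not the exact interface of the finite-mdp package):

import numpy as np

def value_iteration(T, R, gamma=0.9, iterations=100):
    # T: (S, A, S) transition probabilities, R: (S, A) rewards.
    n_states = T.shape[0]
    V = np.zeros(n_states)
    for _ in range(iterations):
        # Bellman optimality backup: Q(s, a) = R(s, a) + gamma * sum_s' T(s, a, s') V(s')
        Q = R + gamma * (T @ V)
        V = Q.max(axis=1)
    return V, Q.argmax(axis=1)  # optimal values and the corresponding greedy policy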

Monte-Carlo Tree Search

This agent leverages transition and reward models to perform a stochastic tree search (Coulom, 2006) for the optimal trajectory. No particular assumption is required on the state representation or the transition model.


The MCTS agent solving highway-v0.
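
For intuition, the selection step of such a tree search typically scores children with an upper confidence bound that trades off the observed return against an exploration bonus. A minimal sketch of a UCT-style score (the constant and node statistics are placeholder assumptions, not the rl-agents implementation):

import math

def uct_score(total_return, visits, parent_visits, c=1.4):
    # Mean return plus an exploration bonus that shrinks as the node is visited more.
    if visits == 0:
        return float("inf")
    return total_return / visits + c * math.sqrt(math.log(parent_visits) / visits)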

Installation

pip install highway-env

Usage

import gym
import highway_env

env = gym.make("highway-v0")
obs = env.reset()

done = False
while not done:
    action = ... # Your agent code here
    obs, reward, done, info = env.step(action)
    env.render()

Documentation

Read the documentation online.

Citing

If you use the project in your work, please consider citing it with:

@misc{highway-env,
  author = {Leurent, Edouard},
  title = {An Environment for Autonomous Driving Decision-Making},
  year = {2018},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/eleurent/highway-env}},
}

A list of publications and preprints using highway-env is maintained in the project repository (please open a pull request to add missing entries).

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

highway-env-1.3.tar.gz (70.0 kB)

Uploaded Source

Built Distribution

highway_env-1.3-py3-none-any.whl (93.7 kB)

Uploaded Python 3

File details

Details for the file highway-env-1.3.tar.gz.

File metadata

  • Download URL: highway-env-1.3.tar.gz
  • Upload date:
  • Size: 70.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.1 setuptools/47.3.1.post20200622 requests-toolbelt/0.9.1 tqdm/4.59.0 CPython/3.6.8

File hashes

Hashes for highway-env-1.3.tar.gz

  • SHA256: 19bb39defedf2fe3d354476da4d39464179343162544583ebee97edab2966dec
  • MD5: 495d068f790858093d1327ce004c27d0
  • BLAKE2b-256: 1605380523a1cbaf5fdef9421c5c0bd576d0c5ec9e676868284762aadf858d98

See more details on using hashes in the PyPI documentation.


File details

Details for the file highway_env-1.3-py3-none-any.whl.

File metadata

  • Download URL: highway_env-1.3-py3-none-any.whl
  • Upload date:
  • Size: 93.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.1 setuptools/47.3.1.post20200622 requests-toolbelt/0.9.1 tqdm/4.59.0 CPython/3.6.8

File hashes

Hashes for highway_env-1.3-py3-none-any.whl

  • SHA256: 0972a2a16d988af77e2c91e5c533f500679bcaefcb05c0acfae1f789cd6abbd3
  • MD5: 2a0a39413f89b083210efdd8bd5b4413
  • BLAKE2b-256: 237afda7ce4ed68ff914550ce753dae48a871ef77f5dae011a2e6a8f0585b208

See more details on using hashes in the PyPI documentation.

