Optimizing compiler for evaluating mathematical expressions on CPUs and GPUs.

Project description

Theano is a Python library that allows you to define, optimize, and efficiently evaluate mathematical expressions involving multi-dimensional arrays. It is built on top of NumPy. Theano features:

  • tight integration with NumPy: an interface similar to NumPy's, with numpy.ndarray used internally in Theano-compiled functions.

  • transparent use of a GPU: perform data-intensive computations up to 140x faster than on a CPU (support for float32 only).

  • efficient symbolic differentiation: Theano can compute derivatives for functions of one or many inputs.

  • speed and stability optimizations: avoid nasty bugs when computing expressions such as log(1 + exp(x)) for large values of x (illustrated in the sketch after this list).

  • dynamic C code generation: evaluate expressions faster.

  • extensive unit-testing and self-verification: includes tools for detecting and diagnosing bugs and/or potential problems.
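
To make the workflow above concrete, here is a minimal usage sketch (variable names and input values are illustrative): a symbolic scalar is declared, an expression and its derivative are built, and both are compiled into a NumPy-backed function. The log(1 + exp(x)) stability optimization mentioned above keeps the result finite for large inputs.

    import theano
    import theano.tensor as T

    # Declare a symbolic scalar and build an expression on it.
    x = T.dscalar('x')
    y = T.log(1 + T.exp(x))        # softplus; Theano rewrites this into a stable form

    # Symbolic differentiation: dy/dx is itself a symbolic expression.
    dy_dx = theano.grad(y, x)

    # Compile both expressions into a callable backed by NumPy (and generated C code).
    f = theano.function([x], [y, dy_dx])

    print(f(1000.0))               # stays finite thanks to the stability optimization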

Theano has been powering large-scale computationally intensive scientific research since 2007, but it is also approachable enough to be used in the classroom (IFT6266 at the University of Montreal).

Release Notes

Theano 0.9.0beta1 (24th of January, 2017)

This release contains many bug fixes, improvements, and new features in preparation for the upcoming release candidate.

Highlights:
  • Many computation and compilation speed-ups

  • Better numerical stability by default for some graphs

  • Jenkins CI (GPU tests run on pull requests in addition to the daily buildbot)

  • Better handling of corner cases for Theano functions and graph optimizations

  • More graph optimizations (faster execution and smaller, more readable graphs)

  • Less C code compilation

  • Better Python 3.5 support

  • Better NumPy 1.12 support

  • Support for newer Mac and Windows versions

  • Conda packages for Mac, Linux and Windows

  • Theano scripts now work on Windows

  • Scan with checkpoints (trades speed for lower memory usage, useful for long sequences)

  • Added a bool dtype

  • New GPU back-end: float16 storage; better mapping between Theano device numbers and nvidia-smi numbers, using the PCI bus ID of the graphics cards; more pooling support on the GPU when cuDNN is not available; ignore_border=False now implemented for pooling. (A configuration sketch follows this list.)
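
A minimal configuration sketch for selecting the new GPU back-end, assuming a single CUDA-capable device; the flag values are illustrative, and device numbers follow the PCI bus ID of the cards, so they match nvidia-smi.

    import os

    # Request the new gpuarray back-end before Theano is imported; 'cuda0' is the
    # first device in PCI bus ID order. floatX=float16 can be tried for the new
    # (experimental) float16 storage.
    os.environ.setdefault("THEANO_FLAGS", "device=cuda0,floatX=float32")

    import theano
    print(theano.config.device)   # 'cuda0' (the requested device)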

A total of 112 people contributed to this release; see the list at the bottom.

Interface changes:
  • New pooling interface

  • Pooling parameters can change at run time

  • Empty lists/tuples are now converted using the floatX dtype

  • The MRG random generator now tries to infer the broadcast pattern of its output

  • Moved softsign out of the sandbox to theano.tensor.nnet.softsign

  • roll now takes the shift modulo the size of the axis being rolled

  • Merged CumsumOp/CumprodOp into CumOp

  • round() now defaults to the same mode as NumPy: half_to_even (see the example after this list).
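
As referenced above, a short sketch of what half_to_even ("banker's") rounding means, using NumPy's behaviour as the reference, and of Theano's round() matching it by default; the input values are illustrative.

    import numpy as np
    import theano
    import theano.tensor as T

    # NumPy rounds ties to the nearest even integer.
    print(np.round([0.5, 1.5, 2.5]))          # [0. 2. 2.]

    # Theano's round() now uses the same half_to_even default.
    x = T.dvector('x')
    f = theano.function([x], T.round(x))
    print(f([0.5, 1.5, 2.5]))                 # [0. 2. 2.]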

Convolution updates:
  • Multi-core convolution and pooling on the CPU

  • New abstract 3D convolution interface, similar to the 2D convolution interface

  • Dilated convolution (see the sketch after this list)
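
A hedged sketch of dilated convolution through the abstract conv2d interface; the shapes and the dilation factor are illustrative, and filter_dilation spreads the filter taps apart.

    import numpy as np
    import theano
    import theano.tensor as T
    from theano.tensor.nnet import conv2d

    # Symbolic 4D inputs: (batch, channels, rows, cols) and (n_filters, channels, kh, kw).
    images = T.tensor4('images')
    filters = T.tensor4('filters')

    # Dilated convolution: a 3x3 filter with dilation 2 covers a 5x5 receptive field.
    out = conv2d(images, filters, border_mode='valid', filter_dilation=(2, 2))

    f = theano.function([images, filters], out)
    result = f(np.ones((1, 1, 8, 8), dtype=theano.config.floatX),
               np.ones((1, 1, 3, 3), dtype=theano.config.floatX))
    print(result.shape)                       # (1, 1, 4, 4)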

GPU:
  • cuDNN: support for version 5.1; wrap batch normalization (2D and 3D) and RNN functions

  • Multi-GPU, synchronous updates (via platoon, using NCCL)

  • GpuAdvancedSubtensor in the new back-end

  • Gemv (matrix-vector product) speed-up for special shapes

  • Support for MaxAndArgMax for some axis combinations

  • Support for solve (using cuSOLVER), erfinv and erfcinv (see the sketch after this list)

  • cuBLAS gemv workaround when reducing on an axis with a dimension of size 0

  • Warn the user that some cuDNN algorithms may produce unexpected results in certain environments for the convolution backward-filter operation.
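
A small sketch of the newly supported operations at the symbolic level, assuming SciPy is available for the CPU fallback; on the GPU back-end, solve is dispatched to cuSOLVER. Input values are illustrative.

    import numpy as np
    import theano
    import theano.tensor as T
    from theano.tensor import slinalg

    A = T.dmatrix('A')
    b = T.dvector('b')
    p = T.dvector('p')

    # solve(A, b) returns x such that A x = b; erfinv is the inverse error function.
    f = theano.function([A, b, p], [slinalg.solve(A, b), T.erfinv(p)])

    x, e = f(np.array([[3.0, 1.0], [1.0, 2.0]]),
             np.array([9.0, 8.0]),
             np.array([0.0, 0.5]))
    print(x)   # [2. 3.]
    print(e)   # [0.        0.4769363]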

New features:
  • Added gradients for solve, tensorinv (CPU), tensorsolve (CPU) and searchsorted (CPU)

  • Added multinomial sampling without replacement

  • conv3d2d supports full and half modes

  • Added DownsampleFactorMaxGradGrad.grad

  • Allow partial evaluation of compiled functions

  • More Rop support

  • Indexing supports ellipsis: a[..., 3], a[1, ..., 3] (see the example after this list)

  • Added theano.tensor.{tensor5, dtensor5, ...}

  • compiledir_format supports device

  • Added the new Theano flag cmodule.age_thresh_use
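
A quick sketch of the ellipsis indexing listed above; the tensor shape is illustrative.

    import numpy as np
    import theano
    import theano.tensor as T

    x = T.tensor4('x')                       # symbolic 4D tensor
    y = x[..., 3]                            # same as x[:, :, :, 3]
    z = x[1, ..., 3]                         # same as x[1, :, :, 3]

    f = theano.function([x], [y, z])
    data = np.zeros((2, 3, 4, 5), dtype=theano.config.floatX)
    y_val, z_val = f(data)
    print(y_val.shape, z_val.shape)          # (2, 3, 4) (3, 4)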

Others:
  • Sped up argmax-only on the GPU (when the max is not also needed)

  • A few infrequent bug fixes

  • More stack traces in error messages

  • Sped up the Cholesky gradient

  • log(sum(exp(...))) is now stability-optimized (see the sketch after this list)
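
For the last item, a plain-NumPy sketch of the standard rewrite behind the optimization: log(sum(exp(x))) equals max(x) + log(sum(exp(x - max(x)))), which keeps the intermediate exponentials finite. The input values are illustrative.

    import numpy as np

    x = np.array([1000.0, 1001.0, 1002.0])

    naive = np.log(np.sum(np.exp(x)))            # overflows: exp(1000) is inf
    m = np.max(x)
    stable = m + np.log(np.sum(np.exp(x - m)))   # finite: exponents are <= 0

    print(naive)    # inf (with an overflow warning)
    print(stable)   # ~1002.4076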

Other more detailed changes:
  • Allow more than one output to be a destructive in-place operation

  • Added the flag profiling.ignore_first_call, useful for profiling the new GPU back-end

  • Documentation and error message fixes and updates

  • More support for negative axes

  • Added the keepdims parameter to the norm function

  • Crash fixes

  • Made the scan gradient more deterministic

  • Added support for spaces in paths on Windows

  • Removed ProfileMode (use the Theano flag profile=True instead)

Committers since 0.8.0:
  • Frederic Bastien

  • Arnaud Bergeron

  • Pascal Lamblin

  • Ramana Subramanyam

  • Simon Lefrancois

  • Steven Bocco

  • Gijs van Tulder

  • Cesar Laurent

  • Chiheb Trabelsi

  • Chinnadhurai Sankar

  • Mohammad Pezeshki

  • Reyhane Askari

  • Alexander Matyasko

  • Alexandre de Brebisson

  • Nan Rosemary Ke

  • Pierre Luc Carrier

  • Mathieu Germain

  • Olivier Mastropietro

  • khaotik

  • Saizheng Zhang

  • Thomas George

  • Iulian Vlad Serban

  • Benjamin Scellier

  • Francesco Visin

  • Caglar

  • Harm de Vries

  • Samira Shabanian

  • Jakub Sygnowski

  • Samira Ebrahimi Kahou

  • Mikhail Korobov

  • Faruk Ahmed

  • Fei Wang

  • Jan Schlüter

  • Kv Manohar

  • Jesse Livezey

  • Kelvin Xu

  • Matt Graham

  • Ruslana Makovetsky

  • Sina Honari

  • Bryn Keller

  • Ciyong Chen

  • Nicolas Ballas

  • Vitaliy Kurlin

  • Zhouhan LIN

  • Gokula Krishnan

  • Kumar Krishna Agrawal

  • Ozan Çağlayan

  • Vincent Michalski

  • Ray Donnelly

  • Tim Cooijmans

  • Vincent Dumoulin

  • happygds

  • mockingjamie

  • Amjad Almahairi

  • Christos Tsirigotis

  • Ilya Kulikov

  • RadhikaG

  • Taesup (TS) Kim

  • Ying Zhang

  • Karthik Karanth

  • Kirill Bobyrev

  • Yang Zhang

  • Yaroslav Ganin

  • theano-bot

  • Liwei Cai

  • Morgan Stuart

  • Tim Gasper

  • Xavier Bouthillier

  • p

  • texot

  • Andrés Gottlieb

  • Ben Poole

  • Bhavishya Pohani

  • Carl Thomé

  • Evelyn Mitchell

  • Fei Zhan

  • Fábio Perez

  • Gennadiy Tupitsin

  • Gilles Louppe

  • Greg Ciccarelli

  • He

  • Huan Zhang

  • Jonas Degrave

  • Kaixhin

  • Kevin Keraudren

  • Maltimore

  • Marc-Alexandre Cote

  • Marco

  • Marius F. Killinger

  • Maxim Kochurov

  • Neil

  • Nizar Assaf

  • Rithesh Kumar

  • Rizky Luthfianto

  • Robin Millette

  • Roman Ring

  • Sander Dieleman

  • Sebastin Santy

  • Shawn Tan

  • Wazeer Zulfikar

  • Wojciech Głogowski

  • Yann N. Dauphin

  • gw0 [http://gw.tnode.com/]

  • hexahedria

  • hsintone

  • jakirkham

  • joncrall

  • root

  • superantichrist

  • tillahoffmann

  • wazeerzulfikar

  • you-n-g


Download files

Download the file for your platform.

Source Distribution

Theano-0.9.0b1.zip (3.4 MB)


File details

Details for the file Theano-0.9.0b1.zip.

File metadata

  • Download URL: Theano-0.9.0b1.zip
  • Upload date:
  • Size: 3.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No

File hashes

Hashes for Theano-0.9.0b1.zip:
  • SHA256: 008a7d4eada3e6287d2f55ff0684778be190e5538022a3e9ffaf490b3fd28497
  • MD5: cb9760f3cdb30e9d30292fac5b3066c5
  • BLAKE2b-256: 4626d7aeb4bc383e3a1184cc5e5b75f140bdc783282eedf8f53140d7c36fe70c

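As a practical aside, a minimal sketch of checking the SHA256 digest above against a downloaded copy of the archive (the local file path is illustrative):

    import hashlib

    EXPECTED_SHA256 = "008a7d4eada3e6287d2f55ff0684778be190e5538022a3e9ffaf490b3fd28497"

    # Hash the archive in chunks so large files need not fit in memory.
    sha256 = hashlib.sha256()
    with open("Theano-0.9.0b1.zip", "rb") as fh:          # path is illustrative
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            sha256.update(chunk)

    print(sha256.hexdigest() == EXPECTED_SHA256)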

