
Optimizing compiler for evaluating mathematical expressions on CPUs and GPUs.

Project description

Theano is a Python library that allows you to define, optimize, and efficiently evaluate mathematical expressions involving multi-dimensional arrays. It is built on top of NumPy_. Theano features:

* **tight integration with NumPy:** a similar interface to NumPy's. numpy.ndarrays are also used internally in Theano-compiled functions.
* **transparent use of a GPU:** perform data-intensive computations up to 140x faster than on a CPU (support for float32 only).
* **efficient symbolic differentiation:** Theano can compute derivatives for functions of one or many inputs.
* **speed and stability optimizations:** avoid nasty bugs when computing expressions such as log(1 + exp(x)) for large values of x.
* **dynamic C code generation:** evaluate expressions faster.
* **extensive unit-testing and self-verification:** includes tools for detecting and diagnosing bugs and/or potential problems.
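A minimal sketch of the workflow these features support: define a symbolic expression, let Theano differentiate and optimize it, then evaluate it on NumPy data. The example itself is illustrative, not taken from this release::

    import theano
    import theano.tensor as T

    x = T.dmatrix('x')                 # symbolic 2-d array of float64
    y = T.sum(x ** 2)                  # symbolic scalar expression
    gy = T.grad(y, x)                  # symbolic differentiation

    f = theano.function([x], [y, gy])  # compiled and optimized function
    print(f([[1, 2], [3, 4]]))         # evaluated on plain NumPy data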

Theano has been powering large-scale computationally intensive scientific
research since 2007, but it is also approachable enough to be used in the
classroom (IFT6266 at the University of Montreal).

.. _NumPy: http://numpy.scipy.org/


Theano 0.5rc1

TODO for final 0.5 release:
- test Python 2.4
- test theano-cache with "pip install Theano": issue 101
- rewrite this NEWS.txt file!

If time permits, check issue 98.

Modifications in the trunk since the 0.4.1 release (12 August 2011) up to 2 Dec 2011


Everybody is recommended to update to Theano 0.5, when released, after
checking that their code does not emit deprecation warnings. Otherwise,
in one case the results can change. In other cases, the warnings are
transformed into errors. See below.


Important changes:
 * Moved to GitHub: https://github.com/Theano/Theano/
 * Old Trac tickets moved to Assembla tickets: https://www.assembla.com/spaces/theano/tickets
 * Theano vision: https://deeplearning.net/software/theano/introduction.html#theano-vision (many people)


Interface Behavior Change (was deprecated and generated a warning since Theano 0.3, released 23 Nov 2010):
 * The default value of the axis parameter of
   theano.{max,min,argmax,argmin,max_and_argmax} is now the same as in
   NumPy: None, i.e. operate on all dimensions of the tensor. (See the
   sketch below.)
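A short sketch of the new default, which now matches NumPy (the array values are illustrative)::

    import numpy
    import theano
    import theano.tensor as T

    x = T.dmatrix('x')
    f = theano.function([x], T.max(x))          # axis=None: max over all elements
    g = theano.function([x], T.max(x, axis=0))  # per-column max, as before

    a = numpy.array([[1., 5.], [3., 2.]])
    print(f(a))  # 5.0, the same as numpy.max(a)
    print(g(a))  # [ 3.  5.]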


Interface Features Removed (were deprecated):
 * The string modes FAST_RUN_NOGC and STABILIZE are no longer accepted. They were accepted only by theano.function(). Use Mode(linker='c|py_nogc') or Mode(optimizer='stabilize') instead.
 * tensor.grad(cost, wrt) now returns an object of the "same type" as wrt
   (list/tuple/TensorVariable).
 * A few leftover uses of tag.shape and Join.vec_length were removed.

 * scan interface changes (RP):
   * The use of `return_steps` for specifying how many entries of the
     output scan returns has been deprecated. The same thing can be done
     by applying a subtensor to the output returned by scan to select a
     certain slice.
   * The inner function (that scan receives) should return its outputs and
     updates in this order: [outputs], [updates], [condition]. Any of the
     three can be skipped if not used, but the order must stay unchanged.
     (A minimal sketch follows this list.)
 * shared.value is removed; use shared.set_value() or shared.get_value() instead.
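A minimal sketch of the scan calling convention described above: the inner function returns its outputs first (here [updates] and [condition] are skipped, which is allowed as long as the order is preserved). The doubling computation is illustrative::

    import theano
    import theano.tensor as T

    k = T.iscalar('k')
    a = T.dscalar('a')

    def step(prior):
        # [outputs] only; [updates] and [condition] are omitted.
        return prior * 2

    results, updates = theano.scan(fn=step,
                                   outputs_info=a,  # initial value
                                   n_steps=k)

    f = theano.function([a, k], results[-1], updates=updates)
    print(f(1.0, 10))  # 1.0 doubled 10 times: 1024.0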


New Deprecation (will be removed in Theano 0.6; a warning is generated if you use it):
 * tensor.shared() renamed to tensor._shared (Olivier D.)
   * You probably want to call theano.shared() instead! (See the sketch below.)
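A small sketch of the intended usage: call theano.shared() and use the get_value()/set_value() accessors instead of the removed .value attribute::

    import numpy
    import theano

    state = theano.shared(numpy.zeros(3), name='state')
    state.set_value(numpy.ones(3))  # replaces the removed .value assignment
    print(state.get_value())        # [ 1.  1.  1.]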


Interface Bug Fixes:
 * Rop should, in some cases, have returned a list of one Theano variable, but returned that variable directly.
 * The Theano flag "home" is not used anymore, as it was a duplicate. If you use it, Theano raises an error.

New features:
 * Added 1d advanced indexing support to inc_subtensor and set_subtensor (James); see the sketch after this list.
 * tensor.{zeros,ones}_like now support the dtype parameter, as in NumPy (Fred)
 * Config flag "exception_verbosity" to control the verbosity of exceptions (Ian)
 * theano-cache list: lists the content of the Theano cache (Fred)
 * tensor.ceil_int_div (FB)
 * MaxAndArgMax.grad now works with any axis (the op supports only one axis) (FB)
   * Used by tensor.{max,min,max_and_argmax}
 * tensor.{all,any} (RP)
 * tensor.roll, as in NumPy (Matthew Rocklin, DWF)
 * Theano on Windows now works. Still experimental. (Sebastian Urban)
 * IfElse now allows a list/tuple as the result of the if/else branches
   (they must have the same length and corresponding types) (RP)
 * argmax dtype is now int64 (OD)
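The sketch below illustrates the new 1d advanced indexing support for set_subtensor and inc_subtensor (the values are illustrative)::

    import numpy
    import theano
    import theano.tensor as T

    x = T.dvector('x')
    idx = [0, 2]                     # a 1d advanced index
    y = T.set_subtensor(x[idx], 0)   # x with positions 0 and 2 set to 0
    z = T.inc_subtensor(x[idx], 10)  # x with 10 added at those positions

    f = theano.function([x], [y, z])
    print(f(numpy.arange(4.)))  # [ 0.  1.  0.  3.] and [ 10.  1.  12.  3.]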



New Optimizations:
 * AdvancedSubtensor1 reuses preallocated memory if available (scan, c|py_nogc linker) (Fred)
 * tensor_variable.size (as in NumPy): the product of the shape elements (OD)
 * sparse_variable.size (as in SciPy): the number of stored values (OD)
 * dot22 and dot22scalar work with complex numbers (Fred)
 * Documented how to wrap an existing Python function (from NumPy, SciPy, ...) in Theano (Fred)
 * Added arccos (IG)
 * Sparse dot with full output (Yann Dauphin)
   * Optimized to Usmm and UsmmCscDense in some cases (YD)
   * Note: theano.dot and sparse.dot return a structured_dot grad
 * Generate Gemv/Gemm more often (JB)
 * scan moves computation outside the inner loop when possible (RP)
 * scan optimizations are done earlier; this allows other optimizations to be applied (FB, RP, GD)
 * exp(x) * sigmoid(-x) is now correctly optimized to a more stable form (see the sketch after this list)
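A hedged illustration of the last optimization above: exp(x) * sigmoid(-x) is mathematically equal to sigmoid(x), and the rewritten form no longer overflows for large x, while a naive evaluation would give inf * 0 = nan::

    import theano
    import theano.tensor as T

    x = T.dscalar('x')
    expr = T.exp(x) * T.nnet.sigmoid(-x)  # rewritten to a stable form
    f = theano.function([x], expr)

    print(f(800.0))  # close to 1.0; the naive form would return nan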


GPU:
* GpuAdvancedSubtensor1 supports broadcasted dimensions


Bugs fixed:
 * On CPU, if the convolution had received explicit shape information, it was not checked at run time. This caused wrong results if the input shape was not the one expected. (Fred, reported by Sander Dieleman)
 * Scan grad when the inputs of scan are sequences of different lengths. (RP, reported by Michael Forbes)
 * Scan.infer_shape now works correctly when working with a condition for the number of loops. In the past, it returned n_steps as the shape, which is not always true. (RP)
 * Theoretical bug: in some cases GPUSum could have returned a bad value. We were not able to produce the error.
   * Patterns affected ({0,1} per dimension: 0 = no reduction on this dim, 1 = reduction on this dim):
     01, 011, 0111, 010, 10, 001, 0011, 0101 (FB)
 * Division by zero in verify_grad. This hid a bug in the grad of Images2Neibs. (JB)
 * theano.sandbox.neighbors.Images2Neibs grad was returning a wrong value. The grad is now disabled and raises an error. (FB)



Crashes fixed:
 * T.mean crash at graph-building time (Ian G.)
 * "Interactive debugger" crash fix (Ian, Fred)
   * "Interactive Debugger" has been renamed to "Using Test Values"
 * Do not call gemm with strides of 0; some BLAS implementations refuse it (PL)
 * Optimization crash with gemm and complex numbers (Fred)
 * GPU crash with elemwise (Fred)
 * Compilation crash with amdlibm and the GPU (Fred)
 * IfElse crash (Fred)
 * Execution crash fix in AdvancedSubtensor1 on 32-bit computers (PL)
 * GPU compilation crash on Mac OS X (OD)
 * GPU compilation crash on Mac OS X (Fred)
 * Support for OS X Enthought Python Distribution 7.x (Graham Taylor, OD)
 * Crash when the subtensor inputs had 0 dimensions and the outputs had 0 dimensions
 * Crash when the step of a subtensor was not 1, in conjunction with some optimizations


Optimization:
 * Added the Subtensor(Rebroadcast(x)) => Rebroadcast(Subtensor(x)) optimization (GD)
 * Scan optimizations are executed earlier. This lets other optimizations (such as BLAS and GPU optimizations) be applied. (GD, Fred, RP)
 * Made the optimization process faster (JB)
 * Allow fusion of elemwise ops when the scalar op needs support code (JB)

Known bugs:
 * CAReduce with NaN in the inputs does not return the correct output
   (`Ticket <http://trac-hg.assembla.com/theano/ticket/763>`_); see the
   repro sketch after this list.
   * This is used in tensor.{max,mean,prod,sum} and in the grad of PermuteRowElements.
 * If you take the grad of the grad of scan, the result can be wrong in some cases.
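A hedged repro sketch of the CAReduce issue above (the exact wrong output may depend on the platform and mode)::

    import numpy
    import theano
    import theano.tensor as T

    x = T.dvector('x')
    f = theano.function([x], T.max(x))

    a = numpy.array([1.0, numpy.nan, 3.0])
    print(f(a))          # may differ from the expected result below
    print(numpy.max(a))  # nan, as NumPy propagates the NaN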


Sandbox:
 * cvm: interface more consistent with the current linker (James)
 * The vm linker has a callback parameter (JB)
 * Review/finish/doc: diag/extract_diag (AB, FB, GD)
 * Review/finish/doc: AllocDiag/diag (AB, FB, GD)
 * Review/finish/doc: MatrixInverse, matrix_inverse (RP)
 * Review/finish/doc: matrix_dot (RP)
 * Review/finish/doc: det (determinant) op (PH)
 * Review/finish/doc: Cholesky op (David)
 * Review/finish/doc: ensure_sorted_indices (Li Yao)
 * Review/finish/doc: spectral_radius_bound (Xavier Glorot)
 * Review/finish/doc: sparse sum (Valentin Bisson)


Sandbox New features (not enabled by default):
 * CURAND_RandomStreams for uniform and normal distributions (not picklable, GPU only) (James)


Documentation:
 * Many updates by many people: Olivier Delalleau, Fred, RP, David, ...
 * Updates to the install doc for Mac OS X (OD)
 * Updates to the install doc for Windows (DWF, OD)
 * Documented how to use scan to loop with a condition on the number of iterations (RP); a sketch follows this list.
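A minimal sketch of looping with a condition, assuming the scan_module.until helper documented for this release (the doubling computation is illustrative)::

    import theano
    import theano.tensor as T

    a = T.dscalar('a')

    def step(prior):
        new = prior * 2
        # [outputs], then [condition]: stop once the value exceeds 100.
        return new, theano.scan_module.until(new > 100)

    results, updates = theano.scan(fn=step, outputs_info=a, n_steps=1000)
    f = theano.function([a], results[-1], updates=updates)
    print(f(1.0))  # 128.0: the first power of 2 above 100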


Others:
 * Better error messages in many places (David Warde-Farley, Ian, Fred, Olivier D.)
 * pep8 fixes (James, ...)
 * min_informative_str to print graphs (Ian G.)
 * Fixed the catching of exceptions; sometimes we caught interrupts (Fred, David, Ian, OD)
 * Better support for UTF-8 strings (David WF)
 * Fixed pydotprint with a function compiled with a ProfileMode (Fred)
   * It was broken by a change to the profiler.
 * Warning when people have old cache entries (OD)
 * More tests for join on the GPU and CPU
 * Do not request loading the GPU module by default in the scan module (RP)
 * Better optimization that lifts transpose around dot (JB)
 * Fixed some import problems
 * Filtering update (JB)


Reviewers:
James, David, Ian, Fred, Razvan, delallea


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

Theano-0.5.0rc1.zip (1.4 MB)

Theano-0.5.0rc1.tar.gz (1.2 MB)

File details

Details for the file Theano-0.5.0rc1.zip.

File metadata

  • Download URL: Theano-0.5.0rc1.zip
  • Upload date:
  • Size: 1.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No

File hashes

Hashes for Theano-0.5.0rc1.zip
Algorithm Hash digest
SHA256 d7859b49c9eda6e55404574f0b6c9ee68644bb656e545dfb9d63b0ce577c3570
MD5 746b32cdf2b309eb77f128b12b5fc4bf
BLAKE2b-256 cc2bb3ebb8d2b9d14f1ee4e56ffa49cabf9952e47f587a1ae9454ef575fad26b

See more details on using hashes here.
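A small sketch for checking a downloaded file against the published SHA256 digest above (the file name and digest are taken from the table; the script itself is illustrative)::

    import hashlib

    expected = 'd7859b49c9eda6e55404574f0b6c9ee68644bb656e545dfb9d63b0ce577c3570'
    data = open('Theano-0.5.0rc1.zip', 'rb').read()
    assert hashlib.sha256(data).hexdigest() == expected, 'hash mismatch'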


File details

Details for the file Theano-0.5.0rc1.tar.gz.

File metadata

  • Download URL: Theano-0.5.0rc1.tar.gz
  • Upload date:
  • Size: 1.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No

File hashes

Hashes for Theano-0.5.0rc1.tar.gz
Algorithm Hash digest
SHA256 3580b30ee997b530c0c87258036d8dafb6c6b5fdf15ccfe11a306c4e77499b88
MD5 093e666635e126173c53c355376e3455
BLAKE2b-256 42eeb28868babfe01822a755c9c98098766b06c03fc02783b16027ef1c03fcb7

See more details on using hashes here.

