Optimizing compiler for evaluating mathematical expressions on CPUs and GPUs.

Project description

Theano is a Python library that allows you to define, optimize, and efficiently evaluate mathematical expressions involving multi-dimensional arrays. It is built on top of NumPy. Theano features:

  • tight integration with NumPy: an interface similar to NumPy’s, with numpy.ndarray used internally in Theano-compiled functions.

  • transparent use of a GPU: perform data-intensive computations up to 140x faster than on a CPU (support for float32 only).

  • efficient symbolic differentiation: Theano can compute derivatives for functions of one or many inputs.

  • speed and stability optimizations: avoid nasty bugs when computing expressions such as log(1 + exp(x)) for large values of x.

  • dynamic C code generation: evaluate expressions faster.

  • extensive unit-testing and self-verification: includes tools for detecting and diagnosing bugs and/or potential problems.

Theano has been powering large-scale computationally intensive scientific research since 2007, but it is also approachable enough to be used in the classroom (IFT6266 at the University of Montreal).
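
As a minimal sketch of the workflow described above (symbolic definition, compilation, and automatic differentiation; the variable names are illustrative):

    import theano
    import theano.tensor as T

    # Define a symbolic scalar and an expression over it.
    x = T.dscalar('x')
    y = x ** 2 + T.log(1 + T.exp(x))  # Theano stabilizes log(1 + exp(x)) at compile time

    # Symbolic differentiation: dy/dx.
    gy = T.grad(y, x)

    # Compile the graph to optimized native code (CPU, or GPU if configured).
    f = theano.function([x], [y, gy])
    print(f(2.0))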

Release Notes

Theano 0.10.0beta1 (9th of August, 2017)

This release contains many bug fixes, improvements and new features in preparation for the upcoming release candidate.

We recommend that all developers update to this version.

Highlights:
  • Raised the minimum supported Python 3 version from 3.3 to 3.4

  • Replaced the deprecated nose-parameterized package with the up-to-date parameterized package in Theano’s requirements

  • Theano now uses SHA256 instead of MD5 internally, to work on systems that forbid MD5 for security reasons

  • Removed the old GPU backend theano.sandbox.cuda; the new backend theano.gpuarray is now the official GPU backend

  • Support more debuggers for PdbBreakpoint

  • Scan improvements (a minimal scan example follows these highlights)

    • Speed up Theano scan compilation and gradient computation

    • Added a meaningful error message when inputs to scan are missing

  • Speed up graph toposort algorithm

  • Faster C compilation through extensive use of a new interface for op params

  • Faster optimization step

  • Documentation updated and more complete

  • Many bug fixes, crash fixes and warning improvements
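
For reference, a minimal use of scan, the construct these improvements target (the classic elementwise-power example; names are illustrative):

    import theano
    import theano.tensor as T

    k = T.iscalar('k')
    A = T.vector('A')

    # Repeatedly multiply by A, k times: result[-1] == A ** k elementwise.
    result, updates = theano.scan(
        fn=lambda prior, A: prior * A,
        outputs_info=T.ones_like(A),
        non_sequences=A,
        n_steps=k)

    power = theano.function([A, k], result[-1], updates=updates)
    print(power([1., 2., 3.], 3))  # [ 1.  8.  27.]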

A total of 65 people contributed to this release since 0.9.0; see the list below.

Interface changes:
  • Merged duplicated diagonal functions into two ops: ExtractDiag (extract a diagonal to a vector) and AllocDiag (set a vector as the diagonal of an empty array)

  • Renamed MultinomialWOReplacementFromUniform to ChoiceFromUniform

  • Removed or deprecated Theano flags:

    • cublas.lib

    • cuda.enabled

    • enable_initial_driver_test

    • gpuarray.sync

    • home

    • lib.cnmem

    • nvcc.* flags

    • pycuda.init

  • Changed the grad() method to L_op() in ops that need their outputs to compute the gradient (sketched below)
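
A minimal sketch of the new interface, using a hypothetical toy op (Exp here is illustrative, not an op from the codebase): L_op() receives the op’s symbolic outputs in addition to grad()’s arguments, so the gradient can reuse them instead of recomputing them.

    import numpy as np
    import theano.tensor as T
    from theano.gof import Op, Apply

    class Exp(Op):
        """Toy elementwise exp op, to illustrate the L_op() interface."""
        __props__ = ()

        def make_node(self, x):
            x = T.as_tensor_variable(x)
            return Apply(self, [x], [x.type()])

        def perform(self, node, inputs, output_storage):
            (x,) = inputs
            output_storage[0][0] = np.exp(x)

        # Previously: def grad(self, inputs, output_grads), which had to
        # rebuild exp(x) symbolically. L_op() also receives the outputs,
        # so the gradient can reuse them.
        def L_op(self, inputs, outputs, output_grads):
            (y,) = outputs          # y = exp(x), already in the graph
            (gz,) = output_grads
            return [gz * y]         # d exp(x) / dx = exp(x)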

Convolution updates:
  • Extended Theano flag dnn.enabled with new option no_check to help speed up cuDNN import

  • Implemented separable convolutions

  • Implemented grouped convolutions

GPU:
  • Prevent GPU initialization when not required

  • Added disk caching option for kernels

  • Added method my_theano_function.sync_shared() to help synchronize GPU Theano functions

  • Added useful stats for GPU in profile mode

  • Added Cholesky op based on cusolver backend

  • Added GPU ops based on magma library: SVD, matrix inverse, QR, cholesky and eigh

  • Added GpuCublasTriangularSolve

  • Added atomic addition and exchange for long long values in GpuAdvancedIncSubtensor1_dev20

  • Support log gamma function for all non-complex types

  • Support GPU SoftMax in both OpenCL and CUDA

  • Support offset parameter k for GpuEye

  • CrossentropyCategorical1Hot and its gradient are now lifted to GPU

  • Better cuDNN support

    • Official support for v5.* and v6.*

    • Better support and loading on Windows and Mac

    • Support cuDNN v6 dilated convolutions

    • Support cuDNN v6 reductions

    • Added new Theano flags cuda.include_path, dnn.base_path and dnn.bin_path to help configure Theano when CUDA and cuDNN cannot be found automatically (a sample configuration follows this section).

  • Updated float16 support

    • Added documentation for GPU float16 ops

    • Support float16 for GpuGemmBatch

    • Started to use float32 precision for computations that don’t support float16 on GPU
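
As a sketch of how these GPU and cuDNN flags can be set (the paths below are placeholders for your own installation; the same options can also live in a .theanorc file):

    import os

    # THEANO_FLAGS must be set before the first `import theano`.
    os.environ["THEANO_FLAGS"] = ",".join([
        "device=cuda",
        "floatX=float32",
        "cuda.include_path=/usr/local/cuda/include",  # placeholder path
        "dnn.base_path=/usr/local/cuda",              # placeholder path
        "dnn.bin_path=/usr/local/cuda/bin",           # placeholder path
    ])
    import theano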

New features:
  • Added a wrapper for Baidu’s CTC cost and gradient functions

  • Added scalar and elemwise CPU ops for modified Bessel function of order 0 and 1 from scipy.special.

  • Added Scaled Exponential Linear Unit (SELU) activation (a hand-rolled equivalent is sketched after this list)

  • Added sigmoid_binary_crossentropy function

  • Added trigamma function

  • Added modes half and full for Images2Neibs ops

  • Implemented gradient for AbstractBatchNormTrainGrad

  • Implemented gradient for matrix pseudoinverse op

  • Added new prop replace for ChoiceFromUniform op

  • Added new prop on_error for CPU Cholesky op

  • Added new Theano flag deterministic to help control how Theano optimizes certain ops that have deterministic versions. Currently used for subtensor Ops only.

  • Added new Theano flag cycle_detection to speed up the optimization step by reducing time spent in inplace optimizations

  • Added new Theano flag check_stack_trace to help check the stack trace during the optimization process

  • Added new Theano flag cmodule.debug to allow a debug mode for Theano C code. Currently used for cuDNN convolutions only.
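
For reference, SELU can be written by hand with basic ops; a minimal sketch (the constants are the standard ones from the SELU paper; the built-in version added in this release should be preferred):

    import theano
    import theano.tensor as T

    ALPHA = 1.6732632423543772   # standard SELU constants
    SCALE = 1.0507009873554805

    def selu(x):
        # scale * (x if x > 0 else alpha * (exp(x) - 1)), elementwise
        return SCALE * T.switch(T.gt(x, 0), x, ALPHA * (T.exp(x) - 1))

    x = T.vector('x')
    f = theano.function([x], selu(x))
    print(f([-1.0, 0.0, 1.0]))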

Others:
  • Added deprecation warning for the softmax and logsoftmax vector case

  • Added a warning to announce that a C++ compiler will become mandatory in the next Theano release (0.11)

Other more detailed changes:
  • Removed useless warning when profile is manually disabled

  • Added tests for abstract conv

  • Added options for disconnected_outputs to Rop

  • Removed theano/compat/six.py

  • Removed COp.get_op_params()

  • Support returning a list of strings from Op.c_support_code(), to help avoid duplicating support code

  • Macro names provided for array properties are now standardized in both CPU and GPU C code

  • Started to move C code files into a separate c_code folder in every Theano module

  • Many improvements to Travis CI tests (with better splitting for faster testing)

  • Many improvements to Jenkins CI tests: daily testing on Mac and Windows in addition to Linux

Committers since 0.9.0:
  • Frederic Bastien

  • Arnaud Bergeron

  • amrithasuresh

  • João Victor Tozatti Risso

  • Steven Bocco

  • Pascal Lamblin

  • Mohammed Affan

  • Reyhane Askari

  • Alexander Matyasko

  • Simon Lefrancois

  • Shawn Tan

  • Thomas George

  • Faruk Ahmed

  • Zhouhan LIN

  • Aleksandar Botev

  • jhelie

  • xiaoqie

  • Tegan Maharaj

  • Matt Graham

  • Cesar Laurent

  • Gabe Schwartz

  • Juan Camilo Gamboa Higuera

  • AndroidCloud

  • Saizheng Zhang

  • vipulraheja

  • Florian Bordes

  • Sina Honari

  • Vikram

  • erakra

  • Chiheb Trabelsi

  • Shubh Vachher

  • Daren Eiri

  • Gijs van Tulder

  • Laurent Dinh

  • Mohamed Ishmael Diwan Belghazi

  • mila

  • Jeff Donahue

  • Ramana Subramanyam

  • Bogdan Budescu

  • Ghislain Antony Vaillant

  • Jan Schlüter

  • Xavier Bouthillier

  • fo40225

  • Aarni Koskela

  • Adam Becker

  • Adam Geitgey

  • Adrian Keet

  • Adrian Seyboldt

  • Andrei Costinescu

  • Anmol Sahoo

  • Chong Wu

  • Holger Kohr

  • Jayanth Koushik

  • Jenkins

  • Lilian Besson

  • Lv Tao

  • Michael Manukyan

  • Murugesh Marvel

  • NALEPA

  • Ubuntu

  • Zotov Yuriy

  • dareneiri

  • lrast

  • morrme

  • yikang

Download files

Download the file for your platform.

Source Distribution

Theano-0.10.0b1.tar.gz (2.8 MB)

File details

Details for the file Theano-0.10.0b1.tar.gz.

File metadata

  • Download URL: Theano-0.10.0b1.tar.gz
  • Size: 2.8 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No

File hashes

Hashes for Theano-0.10.0b1.tar.gz:

  • SHA256: bdc20f53635a4b9b9c76d2330c4da22d1c72b021418c7f9cfd8057bfa1a2cbff

  • MD5: 6a5875d92252970349ec0456be4bef4c

  • BLAKE2b-256: f58991bd31b97d4dded9009cc7f3d1bc9f74849c4162a056585f22e157085398
