

======================================================
Numexpr: Fast numerical expression evaluator for NumPy
======================================================

:Author: David M. Cooke, Francesc Alted and others
:Contact: faltet@gmail.com
:URL: https://github.com/pydata/numexpr
:Travis CI: |travis|
:Appveyor: |appveyor|
:PyPI: |version| |pypi|

.. |travis| image:: https://travis-ci.org/pydata/numexpr.png?branch=master
        :target: https://travis-ci.org/pydata/numexpr
.. |appveyor| image:: https://ci.appveyor.com/api/projects/status/github/FrancescAlted/numexpr
        :target: https://ci.appveyor.com/project/FrancescAlted/numexpr
.. |pypi| image:: https://pypip.in/d/numexpr/badge.png
        :target: https://pypi-hypernode.com/pypi/numexpr


What is Numexpr?
================

Numexpr is a fast numerical expression evaluator for NumPy. With it,
expressions that operate on arrays (like "3*a+4*b") are accelerated
and use less memory than doing the same calculation in Python.

In addition, its multi-threaded capabilities can make use of all your
cores -- which may accelerate computations, especially if they are
not memory-bound (e.g. those using transcendental functions).

Last but not least, numexpr can make use of Intel's VML (Vector Math
Library, normally integrated in its Math Kernel Library, or MKL).
This allows further acceleration of transcendental expressions.


How Numexpr achieves high performance
================================================

The main reason why Numexpr achieves better performance than NumPy
is that it avoids allocating memory for intermediate results. This
results in better cache utilization and reduces memory access in
general. Due to this, Numexpr works best with large arrays.

Numexpr parses expressions into its own op-codes that are then used by
an integrated computing virtual machine. The array operands are split
into small chunks that easily fit in the cache of the CPU and passed to
the virtual machine. The virtual machine then applies the operations
on each chunk. It's worth noting that all temporaries and constants
in the expression are also chunked.

The result is that Numexpr can make the most of your machine's
computing capabilities for array-wise computations. Common speed-ups
with respect to NumPy usually range from 0.95x (for very simple
expressions like 'a + 1') to 4x (for relatively complex ones like
'a*b-4.1*a > 2.5*b'), although much higher speed-ups can be achieved
(up to 15x in some cases).

Numexpr performs best on arrays that are too large to fit in the CPU
cache. To get a better idea of the speed-ups that can be achieved on
your platform, run the provided benchmarks.
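
To get a rough feeling for the speed-up on your own machine, a quick
(and admittedly unscientific) timing sketch along the following lines
can be used; the exact numbers depend heavily on your hardware, array
sizes and whether VML is in use::

    import numpy as np
    import numexpr as ne
    from timeit import timeit

    a = np.random.rand(10_000_000)
    b = np.random.rand(10_000_000)

    # The same "relatively complex" expression mentioned above; Numexpr
    # evaluates it chunk by chunk, avoiding large temporaries, and can
    # spread the work over several threads.
    t_np = timeit(lambda: a*b - 4.1*a > 2.5*b, number=10)
    t_ne = timeit(lambda: ne.evaluate("a*b - 4.1*a > 2.5*b"), number=10)
    print("NumPy:  ", t_np)
    print("Numexpr:", t_ne)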

See more info about how Numexpr works in the `wiki <https://github.com/pydata/numexpr/wiki>`_.


Examples of use
===============

::

    >>> import numpy as np
    >>> import numexpr as ne

    >>> a = np.arange(1e6)   # Choose large arrays for better speedups
    >>> b = np.arange(1e6)

    >>> ne.evaluate("a + 1")   # a simple expression
    array([  1.00000000e+00,   2.00000000e+00,   3.00000000e+00, ...,
             9.99998000e+05,   9.99999000e+05,   1.00000000e+06])

    >>> ne.evaluate('a*b-4.1*a > 2.5*b')   # a more complex one
    array([False, False, False, ...,  True,  True,  True], dtype=bool)

    >>> ne.evaluate("sin(a) + arcsinh(a/b)")   # you can also use functions
    array([        NaN,  1.72284457,  1.79067101, ...,  1.09567006,
            0.17523598, -0.09597844])

    >>> s = np.array(['abba', 'abbb', 'abbcdef'])
    >>> ne.evaluate("'abba' == s")   # string arrays are supported too
    array([ True, False, False], dtype=bool)


Datatypes supported internally
==============================

Numexpr operates internally only with the following types::

    * 8-bit boolean (bool)
    * 32-bit signed integer (int or int32)
    * 64-bit signed integer (long or int64)
    * 32-bit, single-precision floating point number (float or float32)
    * 64-bit, double-precision floating point number (double or float64)
    * 2x64-bit, double-precision complex number (complex or complex128)
    * Raw string of bytes (str)

If the arrays in the expression do not match any of these types, they
will be upcast to one of the above types (following the usual
type-inference rules; see below). Keep this in mind when estimating
the memory consumed during the computation of your expressions.
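
For instance, a minimal sketch of this upcasting in practice (the
resulting dtype follows the casting rules described below)::

    import numpy as np
    import numexpr as ne

    x = np.arange(10, dtype=np.int16)   # int16 is not a native Numexpr type
    ne.evaluate("x + 1").dtype          # dtype('int32'): int16 operands are upcast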

Also, the types in Numexpr conditions are somewhat more restrictive
than those of Python. For instance, the only valid constants for booleans
are `True` and `False`, and they are never automatically cast to integers.


Casting rules
=============

Casting rules in Numexpr closely follow those of NumPy. However, for
implementation reasons, there are some known exceptions, namely::

    * When an array with type `int8`, `uint8`, `int16` or `uint16` is
      used inside Numexpr, it is internally upcast to an `int` (or
      `int32` in NumPy notation).

    * When an array with type `uint32` is used inside Numexpr, it is
      internally upcast to a `long` (or `int64` in NumPy notation).

    * A floating point function (e.g. `sin`) acting on `int8` or
      `int16` types returns a `float64` type, instead of the `float32`
      that is returned by NumPy functions. This is mainly due to the
      absence of native `int8` or `int16` types in Numexpr.

    * In operations involving a scalar and an array, the normal casting
      rules are used in Numexpr, in contrast with NumPy, where array
      types take priority. For example, if 'a' is an array of type
      `float32` and 'b' is a scalar of type `float64` (or the Python
      `float` type, which is equivalent), then 'a*b' returns a
      `float64` in Numexpr, but a `float32` in NumPy (i.e. array
      operands take priority in determining the result type). If you
      need to keep the result a `float32`, be sure to use a `float32`
      scalar too (see the sketch after this list).
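
A minimal sketch of this last point, showing how the scalar's type
determines the result type in Numexpr::

    import numpy as np
    import numexpr as ne

    a = np.ones(5, dtype=np.float32)
    b = 0.5                             # Python float, i.e. a float64 scalar

    (a * b).dtype                       # dtype('float32'): NumPy lets the array type win
    ne.evaluate("a * b").dtype          # dtype('float64'): normal casting rules apply

    b32 = np.float32(0.5)
    ne.evaluate("a * b32").dtype        # dtype('float32'): use a float32 scalar to stay in float32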


Supported operators
===================

Numexpr supports the set of operators listed below::

    * Logical operators: &, |, ~
    * Comparison operators: <, <=, ==, !=, >=, >
    * Unary arithmetic operators: -
    * Binary arithmetic operators: +, -, *, /, **, %, <<, >>
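
As in NumPy, the logical operators act element-wise on boolean
sub-expressions; a short sketch::

    import numpy as np
    import numexpr as ne

    a = np.array([0, 1, 2, 3, 4])
    b = np.array([4, 3, 2, 1, 0])

    # Use parentheses around comparisons, just as with NumPy's & and |
    ne.evaluate("(a > 1) & (b < 3)")    # array([False, False,  True,  True,  True])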


Supported functions
===================

Supported functions are listed below::

    * where(bool, number1, number2): number
        Number1 if the bool condition is true, number2 otherwise.
    * {sin,cos,tan}(float|complex): float|complex
        Trigonometric sine, cosine or tangent.
    * {arcsin,arccos,arctan}(float|complex): float|complex
        Trigonometric inverse sine, cosine or tangent.
    * arctan2(float1, float2): float
        Trigonometric inverse tangent of float1/float2.
    * {sinh,cosh,tanh}(float|complex): float|complex
        Hyperbolic sine, cosine or tangent.
    * {arcsinh,arccosh,arctanh}(float|complex): float|complex
        Hyperbolic inverse sine, cosine or tangent.
    * {log,log10,log1p}(float|complex): float|complex
        Natural, base-10 and log(1+x) logarithms.
    * {exp,expm1}(float|complex): float|complex
        Exponential and exponential minus one.
    * sqrt(float|complex): float|complex
        Square root.
    * abs(float|complex): float|complex
        Absolute value.
    * conj(complex): complex
        Conjugate value.
    * {real,imag}(complex): float
        Real or imaginary part of complex.
    * complex(float, float): complex
        Complex from real and imaginary parts.
    * contains(str, str): bool
        Returns True for every string in `op1` that contains `op2`.

.. Notes:

+ `abs()` for complex inputs returns a ``complex`` output too. This
  is a departure from NumPy, where a ``float`` is returned instead.
  However, Numexpr is not yet flexible enough to allow this.
  Meanwhile, if you want to mimic NumPy behaviour, you may select the
  real part via the ``real`` function (e.g. "real(abs(cplx))") or via
  the ``real`` selector (e.g. "abs(cplx).real").

+ `contains()` only works with bytes strings, not unicode strings.

You may add additional functions as needed.
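
As an illustration, `where()` and `contains()` can be used like this
(a minimal sketch; the output values are just what the inputs below
produce)::

    import numpy as np
    import numexpr as ne

    a = np.array([1.0, -2.0, 3.0])
    b = np.array([10.0, 20.0, 30.0])

    # where(cond, x, y) selects x where cond is true and y elsewhere
    ne.evaluate("where(a > 0, b, -b)")            # array([ 10., -20.,  30.])

    # contains() works on bytes strings only
    s = np.array([b'abba', b'abbb', b'abbcdef'])
    ne.evaluate("contains(s, 'bcd')")             # array([False, False,  True])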


Supported reduction operations
==============================

The following reduction operations are currently supported::

    * sum(number, axis=None): Sum of array elements over a given axis.
      Negative axes are not supported.

    * prod(number, axis=None): Product of array elements over a given
      axis. Negative axes are not supported.

    * min(number, axis=None): Minimum of array elements over a given
      axis. Negative axes are not supported.

    * max(number, axis=None): Maximum of array elements over a given
      axis. Negative axes are not supported.
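
Reductions are written directly inside the expression; a short sketch
(the reduction must be the outermost operation)::

    import numpy as np
    import numexpr as ne

    a = np.arange(1e6)
    b = np.arange(1e6)

    # Reduce a whole expression down to a scalar
    ne.evaluate("sum(a**2 + 2*b)")

    # Reduce along a given (non-negative) axis of a 2-D operand
    m = np.arange(12).reshape(3, 4)
    ne.evaluate("sum(m, axis=1)")       # array([ 6, 22, 38])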


General routines
================

::

    * evaluate(expression, local_dict=None, global_dict=None,
      out=None, order='K', casting='safe', **kwargs): Evaluate a
      simple array expression element-wise. See the docstrings for
      more info on parameters. Also, see the examples above.

    * test(): Run all the tests in the test suite.

    * print_versions(): Print the versions of software that numexpr
      relies on.

    * set_num_threads(nthreads): Sets the number of threads to be
      used in operations. Returns the previous setting for the number
      of threads. At initialization time Numexpr sets this number to
      the number of detected cores in the system (see
      `detect_number_of_cores()`).

      If you are using Intel's VML, you may want to use
      `set_vml_num_threads(nthreads)` to perform the parallel job with
      VML instead. However, you should get very similar performance
      with VML-optimized functions, and VML's parallelizer cannot deal
      with common expressions like `(x+1)*(x-2)`, while Numexpr's can.

    * detect_number_of_cores(): Detects the number of cores in the
      system.
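
A short sketch of how these routines are typically used together::

    import numexpr as ne

    ne.print_versions()                     # versions of Numexpr, NumPy, etc.

    ncores = ne.detect_number_of_cores()
    previous = ne.set_num_threads(ncores)   # returns the previous thread setting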


Intel's VML specific support routines
=====================================

When compiled with Intel's VML (Vector Math Library), you will be able
to use some additional functions for controlling its use. These are outlined below::

    * set_vml_accuracy_mode(mode): Set the accuracy mode for VML
      operations. The `mode` parameter can take these values:

      - 'low': Equivalent to VML_LA - low accuracy VML functions are called
      - 'high': Equivalent to VML_HA - high accuracy VML functions are called
      - 'fast': Equivalent to VML_EP - enhanced performance VML functions are called

      It returns the previous mode.

      This call is equivalent to `vmlSetMode()` in the VML library.

    * set_vml_num_threads(nthreads): Suggests a maximum number of
      threads to be used in VML operations.

      This function is equivalent to the call
      `mkl_domain_set_num_threads(nthreads, MKL_DOMAIN_VML)` in the MKL
      library. See the Intel documentation on `VM Service Functions
      <https://software.intel.com/en-us/node/521831>`_ for more
      information.

    * get_vml_version(): Get the VML/MKL library version.
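
A minimal sketch of using these routines; they are only meaningful
when Numexpr was compiled against VML/MKL (the `use_vml` flag reports
whether that is the case)::

    import numexpr as ne

    if ne.use_vml:                          # True only for VML/MKL-enabled builds
        print(ne.get_vml_version())
        prev = ne.set_vml_accuracy_mode('fast')   # returns the previous mode
        ne.set_vml_num_threads(2)           # cap the threads VML itself may use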


Authors
=======

See AUTHORS.txt


License
=======

Numexpr is distributed under the MIT license.



.. Local Variables:
.. mode: text
.. coding: utf-8
.. fill-column: 70
.. End:

