
Functional algorithms and implementations

Project description


Functional algorithms

Implementing a math function in software is a non-trivial task when the function is required to return correct (or very nearly correct) results for all possible inputs, including complex infinities and extremely small or extremely large floating-point values. Such algorithms typically use different approximations of the function depending on where the inputs lie on the real line or in the complex plane.

This project provides a tool for defining functional algorithms for math functions and generating implementations of the algorithms to various programming languages and math libraries. The aim is to provide algorithms that are guaranteed to produce correct values on the whole complex plane or real line.

The motivation for this project arises from the need to implement sophisticated algorithms for various math libraries, which can otherwise be a tedious and error-prone task when done manually in LISP-like languages. For instance, the definition of the algorithm for computing arcus sine for the Python or NumPy target is about 45 lines of code, but for the StableHLO target it is 186 lines. Implementing the arcus sine algorithm for StableHLO by hand would be very hard.

Supported algorithms

Currently, the definitions of algorithms are provided for the following math functions:

  • acos(z: complex | float), using a modified Hull et al. algorithm for complex acos,
  • asin(z: complex | float), using a modified Hull et al. algorithm for complex asin,
  • asinh(z: complex | float), using the relation asinh(z) = -I * asin(z * I),
  • hypot(x: float, y: float) and absolute(z: complex),
  • square(z: complex | float).
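The asinh relation listed above can be spot-checked numerically with Python's cmath module (a quick sanity check of the identity, not part of the project's code):

```python
import cmath

z = 0.5 + 0.25j
lhs = cmath.asinh(z)
rhs = -1j * cmath.asin(1j * z)  # asinh(z) = -I * asin(z * I)
assert abs(lhs - rhs) < 1e-12
```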

Supported targets

Currently, the implementations of supported algorithms are provided for the following target libraries and languages: Python, NumPy, and StableHLO, among others.

The Python and NumPy targets are provided only for debugging and testing functional algorithms.

Results

See the results for the generated implementations of all supported algorithms for all supported targets. Feel free to copy the generated source code into your program under the same licensing conditions as this project.

Testing algorithms and generated implementations

To ensure both the correctness and the accuracy of the provided algorithms, we'll use MPMath as a reference library of math functions. We assume that mpmath implementations produce correct results with arbitrary precision; this is the prerequisite for ensuring accuracy. To ensure correctness, we'll verify this assumption for each function separately, to eliminate false positives due to possible bugs in MPMath.

The algorithms are typically validated with 32- and 64-bit floating-point numbers and their complex compositions using NumPy. The evaluation of the numpy target implementation is performed on logarithmically uniform samples (the ULP distance between neighboring samples is constant) that cover the whole complex plane or real line, including complex infinities and extremely small and large values.
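A minimal sketch of such constant-ULP-stride sampling for positive float32 values (an illustration of the idea, not the project's actual sampler): stepping uniformly through the integer bit patterns of IEEE floats yields samples whose neighbors are a fixed number of ULPs apart.

```python
import numpy as np

def ulp_uniform(start, stop, num):
    # Interpret positive float32 values as their int32 bit patterns;
    # uniform steps in the bit-pattern domain are uniform steps in ULPs,
    # which gives a logarithmic-uniform spread of the float values.
    a = np.float32(start).view(np.int32)
    b = np.float32(stop).view(np.int32)
    bits = np.linspace(a, b, num).round().astype(np.int32)
    return bits.view(np.float32)

ulp_uniform(1.0, 16.0, 5)  # → array([ 1.,  2.,  4.,  8., 16.], dtype=float32)
```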

To characterize the correctness of the algorithms, we'll use the ULP (unit in the last place) distance between a function result and its reference value.
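For small distances, the ULP distance between two floats can be computed by counting the representable values between them (a simple stdlib sketch; a production tool would use a faster bit-level method):

```python
import math

def ulp_distance(a: float, b: float) -> int:
    # Count the steps between a and b through adjacent representable
    # float64 values using math.nextafter; fine for small distances.
    lo, hi = sorted((a, b))
    n = 0
    while lo < hi:
        lo = math.nextafter(lo, hi)
        n += 1
    return n

ulp_distance(1.0, 1.0 + 2**-52)  # → 1 (neighboring doubles)
```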

Using over a million samples that are log-uniformly distributed over the complex plane or real line, the probability that the function return value differs from the reference value by a given number of ULPs is displayed in the following table for the supported algorithms:

Function  dtype      ULP=0 (exact)  ULP=1     ULP=2    ULP=3    ULP>3  errors
absolute  float32    100.000 %      -         -        -        -      -
absolute  complex64   96.590 %      3.360 %   0.050 %  -        -      -
asin      float32     97.712 %      2.193 %   0.093 %  0.002 %  -      -
asin      complex64   79.382 %     20.368 %   0.244 %  0.006 %  -      -
asinh     float32     92.279 %      7.714 %   0.006 %  -        -      -
asinh     complex64   76.625 %     23.350 %   0.024 %  -        -      -
square    float32    100.000 %      -         -        -        -      -
square    complex64   99.578 %      0.422 %   -        -        -      -

See a complete table of ULP differences for all provided algorithms.

A case study: square

A naive implementation of the square function can be defined as

def square(z):
    return z * z

which produces correct results on the real line. However, for complex inputs there exist regions of the complex plane where plain complex multiplication produces incorrect values. For example:

>>> def square(z):
...   return z * z
... 
>>> z = complex(1e170, 1e170)
>>> square(z)
(nan+infj)

Here the imaginary part being inf is expected, due to overflow in 1e170 * 1e170, but the real part ought to be zero; the nan real part originates from the computation 1e170 * 1e170 - 1e170 * 1e170 -> inf - inf -> nan.

Note that we cannot rely on the NumPy square function as a reference either, because it also produces an incorrect value (likely in a platform-dependent way):

>>> numpy.square(z)
(-inf+infj)

In this project, the square function uses the following algorithm:

def square(ctx, z):
    if z.is_complex:
        real = ctx.select(abs(z.real) == abs(z.imag), 0, (z.real - z.imag) * (z.real + z.imag))
        imag = 2 * (z.real * z.imag)
        return ctx.complex(real, imag)
    return z * z

from which implementations for different libraries and programming languages can be generated. For example, to generate a square function for Python, we'll use

>>> import functional_algorithms as fa
>>> ctx = fa.Context()
>>> square_graph = ctx.trace(square, complex)
>>> py_square = fa.targets.python.as_function(square_graph)
>>> py_square(z)
infj

In general, py_square produces correct results on the whole complex plane.
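The effect of the (z.real - z.imag) * (z.real + z.imag) rewrite can also be verified in a standalone sketch (a hypothetical helper written here for illustration, independent of the project's context API):

```python
import math

def safe_square(z: complex) -> complex:
    re, im = z.real, z.imag
    # (re - im) * (re + im) avoids the inf - inf cancellation that
    # re*re - im*im hits when both products overflow to infinity
    real = 0.0 if abs(re) == abs(im) else (re - im) * (re + im)
    return complex(real, 2.0 * (re * im))

z = complex(1e170, 1e170)
(z * z).real     # → nan (naive multiplication)
safe_square(z)   # → infj, i.e. complex(0.0, inf)
```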

Digging into details

Let us look into some of the details of the above example. First, square_graph is an Expr instance that represents the traced function using a pure functional form:

>>> print(square_graph)
(def square, (z: complex),
  (complex
    (select
      (eq
        (abs (real z)),
        (abs (imag z))),
      (constant 0, (real z)),
      (multiply
        (subtract
          (real z),
          (imag z)),
        (add
          (real z),
          (imag z)))),
    (multiply
      (constant 2, (real z)),
      (multiply
        (real z),
        (imag z)))))

The module object fa.targets.python defines the so-called Python target implementation. There exist other targets such as fa.targets.numpy, fa.targets.stablehlo, etc.

To visualize the implementation for a given target, say, fa.targets.python, we'll use the tostring(<target>) method:

>>> print(square_graph.tostring(fa.targets.python))
def square(z: complex) -> complex:
    real_z: float = (z).real
    imag_z: float = (z).imag
    return complex(
        (0) if ((abs(real_z)) == (abs(imag_z))) else (((real_z) - (imag_z)) * ((real_z) + (imag_z))),
        (2) * ((real_z) * (imag_z)),
    )

which is actually the definition of the Python function used above when evaluating py_square(z).

Similarly, we can generate implementations for other targets, for instance:

>>> print(square_graph.tostring(fa.targets.stablehlo))
def : Pat<(CHLO_Square ComplexElementType:$z),
  (StableHLO_ComplexOp
    (StableHLO_SelectOp
      (StableHLO_CompareOp
       (StableHLO_AbsOp
         (StableHLO_RealOp:$real_z $z)),
       (StableHLO_AbsOp
         (StableHLO_ImagOp:$imag_z $z)),
        StableHLO_ComparisonDirectionValue<"EQ">,
        (STABLEHLO_DEFAULT_COMPARISON_TYPE)),
      (StableHLO_ConstantLike<"0"> $real_z),
      (StableHLO_MulOp
        (StableHLO_SubtractOp $real_z, $imag_z),
        (StableHLO_AddOp $real_z, $imag_z))),
    (StableHLO_MulOp
      (StableHLO_ConstantLike<"2"> $real_z),
      (StableHLO_MulOp $real_z, $imag_z)))>;

In the case of the NumPy target, the argument types must include bit-width information:

>>> np_square_graph = ctx.trace(square, numpy.complex64)
>>> print(np_square_graph.tostring(fa.targets.numpy))
def square(z: numpy.complex64) -> numpy.complex64:
    with warnings.catch_warnings(action="ignore"):
        z = numpy.complex64(z)
        real_z: numpy.float32 = (z).real
        imag_z: numpy.float32 = (z).imag
        result = make_complex(
            (
                (numpy.float32(0))
                if (numpy.equal(numpy.abs(real_z), numpy.abs(imag_z), dtype=numpy.bool_))
                else (((real_z) - (imag_z)) * ((real_z) + (imag_z)))
            ),
            (numpy.float32(2)) * ((real_z) * (imag_z)),
        )
        return result

>>> fa.targets.numpy.as_function(np_square_graph)(z)
infj
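The generated NumPy code above relies on a make_complex helper supplied by the target's prologue; a minimal stand-in (an assumption for illustration, not the project's actual helper) could look like:

```python
import numpy as np

def make_complex(real, imag):
    # Pair two float32 parts into a single complex64 value.
    return np.complex64(complex(float(real), float(imag)))

make_complex(np.float32(3.0), np.float32(4.0))  # → (3+4j) as numpy.complex64
```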

Useful tips

Debugging NumPy target implementations

A useful feature of the tostring method is the debug keyword argument. When it is greater than 0, type-checking statements are inserted into the function implementation:

>>> print(np_square_graph.tostring(fa.targets.numpy, debug=1))
def square(z: numpy.complex64) -> numpy.complex64:
    with warnings.catch_warnings(action="ignore"):
        z = numpy.complex64(z)
        real_z: numpy.float32 = (z).real
        assert real_z.dtype == numpy.float32, (real_z.dtype, numpy.float32)
        imag_z: numpy.float32 = (z).imag
        assert imag_z.dtype == numpy.float32, (imag_z.dtype, numpy.float32)
        result = make_complex(
            (
                (numpy.float32(0))
                if (numpy.equal(numpy.abs(real_z), numpy.abs(imag_z), dtype=numpy.bool_))
                else (((real_z) - (imag_z)) * ((real_z) + (imag_z)))
            ),
            (numpy.float32(2)) * ((real_z) * (imag_z)),
        )
        assert result.dtype == numpy.complex64, (result.dtype,)
        return result

When debug=2, the function implementation source code and the values of all variables are printed out when calling the function:

>>> fa.targets.numpy.as_function(np_square_graph, debug=2)(3 + 4j)
def square(z: numpy.complex64) -> numpy.complex64:
    with warnings.catch_warnings(action="ignore"):
        z = numpy.complex64(z)
        print("z=", z)
        real_z: numpy.float32 = (z).real
        print("real_z=", real_z)
        assert real_z.dtype == numpy.float32, (real_z.dtype, numpy.float32)
        imag_z: numpy.float32 = (z).imag
        print("imag_z=", imag_z)
        assert imag_z.dtype == numpy.float32, (imag_z.dtype, numpy.float32)
        result = make_complex(
            (
                (numpy.float32(0))
                if (numpy.equal(numpy.abs(real_z), numpy.abs(imag_z), dtype=numpy.bool_))
                else (((real_z) - (imag_z)) * ((real_z) + (imag_z)))
            ),
            (numpy.float32(2)) * ((real_z) * (imag_z)),
        )
        print("result=", result)
        assert result.dtype == numpy.complex64, (result.dtype,)
        return result

z= (3+4j)
real_z= 3.0
imag_z= 4.0
result= (-7+24j)
(-7+24j)

Intermediate variables in Python and NumPy target implementations

When generating implementations, one can control the naming of intermediate variables as well as their appearance. By default, intermediate variables are generated only for expressions that are used multiple times as subexpressions. However, one can also force the creation of intermediate variables for better visualization of the implementations. For that, we'll redefine the square algorithm as follows:

def square(ctx, z):
    if z.is_complex:
        x = abs(z.real)
        y = abs(z.imag)
        real = ctx.select(x == y, 0, ((x - y) * (x + y)).reference("real_part"))
        imag = 2 * (x * y)
        r = ctx.complex(real.reference(), imag.reference())
        return ctx(r)
    return z * z

Notice the usage of the reference method, which forces the expression to be defined as a variable. Also notice wrapping the return value in a ctx(...) call, which assigns variable names in the function from the reference values of expressions.

The generated implementation for the Python target of the above definition is

>>> square_graph = ctx.trace(square, complex)
>>> print(square_graph.tostring(fa.targets.python))
def square(z: complex) -> complex:
    real_z: float = (z).real
    x: float = abs(real_z)
    y: float = abs((z).imag)
    real_part: float = ((x) - (y)) * ((x) + (y))
    real: float = (0) if ((x) == (y)) else (real_part)
    imag: float = (2) * ((x) * (y))
    return complex(real, imag)

which is more expressive than the one shown above.
