Functional algorithms and implementations
Functional algorithms
Implementing a math function in software is a non-trivial task when the function is required to return correct (or very close to correct) results for all possible inputs, including complex infinities and extremely small or extremely large floating point values. Such algorithms typically use different approximations of the function depending on the location of the inputs in the real line or complex plane.
This project provides a tool for defining functional algorithms for math functions and for generating implementations of these algorithms for various programming languages and math libraries. The aim is to provide algorithms that are guaranteed to produce correct values on the whole complex plane or real line.
The motivation for this project arises from the need to implement sophisticated algorithms for various math libraries, which otherwise can be a tedious and error-prone task when done manually using LISP-like languages. For instance, the definition of the algorithm for computing arcus sine is about 45 lines of code for the Python or NumPy target, but 186 lines for the StableHLO target. Implementing the arcus sine algorithm for StableHLO by hand would be very hard.
Supported algorithms
Currently, the definitions of algorithms are provided for the following math functions:
- acos(z: complex | float), using a modified Hull et al. algorithm for complex acos,
- acosh(z: complex | float), using the relation acosh(z) = sign(z.imag) * I * acos(z),
- asin(z: complex | float), using a modified Hull et al. algorithm for complex asin,
- asinh(z: complex | float), using the relation asinh(z) = -I * asin(z * I),
- atan(z: complex | float), using the relation atan(z) = -I * atanh(z * I),
- atanh(z: complex | float), using a custom algorithm,
- hypot(x: float, y: float) and absolute(z: complex),
- square(z: complex | float), using a custom algorithm,
- log1p(z: complex), using a custom algorithm that employs Dekker's product and 2Sum algorithms.
All of the above algorithms are designed to be accurate to within a maximal difference of 3 ULPs between the computed and reference values.
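For instance, the relation listed above for asinh can be checked numerically with Python's cmath module (a standalone illustration using the standard library, not part of this project):

import cmath

# Check the relation asinh(z) = -I * asin(z * I) at a sample point
# away from the branch cuts.
z = 1.5 + 2.5j
lhs = cmath.asinh(z)
rhs = -1j * cmath.asin(z * 1j)
assert abs(lhs - rhs) <= 1e-12 * abs(lhs)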
Supported targets
Currently, the implementations of supported algorithms are provided for the following target libraries and languages:
- Python, using math functions on real inputs,
- NumPy, using numpy functions on real inputs,
- StableHLO, using existing decompositions and operations,
- XLA Client, using existing decompositions and operations,
- C++, using C++ Standard Library.
The Python and NumPy targets are provided only for debugging and testing functional algorithms.
Results
See results for the generated implementations of all supported algorithms for all supported targets. Feel free to copy the generated source code into your program under the same licensing conditions as this project.
Testing algorithms and generated implementations
To ensure the correctness as well as the accuracy of the provided algorithms, we'll use MPMath as a reference library of math functions. We assume that mpmath implementations produce correct results for math functions at arbitrary precision - this is the prerequisite for ensuring accuracy. To ensure correctness, we'll verify this assumption for each function separately to eliminate the possibility of false positives due to possible bugs in MPMath.
The algorithms are typically validated with 32- and 64-bit floating point numbers and their complex compositions using NumPy. The evaluation of the numpy target implementation is performed on logarithmic-uniform samples (the ULP distance between neighboring samples is constant) that represent the whole complex plane or real line, including complex infinities and extremely small and large values.
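For illustration only, a minimal sketch of such a sampler for positive float32 values (a hypothetical helper, not necessarily how this project generates its samples) could look as follows:

import numpy as np

def positive_float32_samples(num=1000000):
    # Positive finite float32 values whose integer bit patterns are uniformly
    # spaced, so that the ULP distance between neighbouring samples is
    # (approximately) constant.
    lo = np.array([np.finfo(np.float32).tiny], dtype=np.float32).view(np.uint32)[0]
    hi = np.array([np.finfo(np.float32).max], dtype=np.float32).view(np.uint32)[0]
    bits = np.linspace(int(lo), int(hi), num, dtype=np.uint32)
    return bits.view(np.float32)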
To characterize the correctness of the algorithms, we'll use the ULP distance between a function's result and its reference value.
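As a minimal sketch of this measurement (not the project's actual test harness), assuming numpy and mpmath are available, the ULP distance of a float32 result from a high-precision mpmath reference can be counted by comparing bit patterns:

import numpy as np
import mpmath

def ulp_distance(value, reference):
    # Count the representable float32 values between two finite,
    # same-signed float32 numbers by comparing their bit patterns.
    a = int(np.array([value], dtype=np.float32).view(np.int32)[0])
    b = int(np.array([reference], dtype=np.float32).view(np.int32)[0])
    return abs(a - b)

x = np.float32(0.3)
with mpmath.workprec(200):  # 200 bits of working precision
    reference = np.float32(float(mpmath.asinh(mpmath.mpf(float(x)))))
print(ulp_distance(np.arcsinh(x), reference))  # typically 0 or 1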
When using over a million samples that are log-uniformly distributed on the complex plane or real line, the probability that the function return value differs from the reference value by a given number of ULPs is displayed in the following table for the supported algorithms:
Function | dtype | ULP=0 (exact) | ULP=1 | ULP=2 | ULP=3 | ULP>3 | errors |
---|---|---|---|---|---|---|---|
absolute | float32 | 100.000 % | - | - | - | - | - |
absolute | complex64 | 96.590 % | 3.360 % | 0.050 % | - | - | - |
asin | float32 | 97.712 % | 2.193 % | 0.093 % | 0.002 % | - | - |
asin | complex64 | 79.382 % | 20.368 % | 0.244 % | 0.006 % | - | - |
asinh | float32 | 92.279 % | 7.714 % | 0.006 % | - | - | - |
asinh | complex64 | 76.625 % | 23.350 % | 0.024 % | - | - | - |
square | float32 | 100.000 % | - | - | - | - | - |
square | complex64 | 99.578 % | 0.422 % | - | - | - | - |
See the complete table of ULP differences for all provided algorithms.
A case study: square
A naive implementation of the square function can be defined as
def square(z):
    return z * z
which produces correct results on the real line; however, for complex inputs there exist regions in the complex plane where plain complex multiplication produces incorrect values. For example:
>>> def square(z):
...     return z * z
...
>>> z = complex(1e170, 1e170)
>>> square(z)
(nan+infj)
where the imaginary part being inf is expected due to the overflow of 1e170 * 1e170, but the real part ought to be zero. The nan real part originates from the computation 1e170 * 1e170 - 1e170 * 1e170 -> inf - inf -> nan.
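The same effect can be reproduced with plain floats:
>>> 1e170 * 1e170
inf
>>> 1e170 * 1e170 - 1e170 * 1e170
nan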
By the way, we cannot rely on the NumPy square function as a reference because it produces an incorrect value as well (likely in a platform-dependent way):
>>> numpy.square(z)
(-inf+infj)
In this project, the square function uses the following algorithm:
def square(ctx, z):
    if z.is_complex:
        real = ctx.select(abs(z.real) == abs(z.imag), 0, (z.real - z.imag) * (z.real + z.imag))
        imag = 2 * (z.real * z.imag)
        return ctx.complex(real, imag)
    return z * z
Here the select guarantees that the real part is exactly zero when abs(z.real) == abs(z.imag); in particular, this avoids the inf - inf -> nan case illustrated above when both parts are infinite. From this definition, implementations for different libraries and programming languages can be generated. For example, to generate a square function for Python, we'll use
>>> import functional_algorithms as fa
>>> ctx = fa.Context()
>>> square_graph = ctx.trace(square, complex)
>>> py_square = fa.targets.python.as_function(square_graph)
>>> py_square(z)
infj
In general, py_square produces correct results on the whole complex plane.
Digging into details
Let us look into some of the details of the above example. First, square_graph is an Expr instance that represents the traced function using a pure functional form:
>>> print(square_graph)
(def square, (z: complex),
  (complex
    (select
      (eq
        (abs (real z)),
        (abs (imag z))),
      (constant 0, (real z)),
      (multiply
        (subtract
          (real z),
          (imag z)),
        (add
          (real z),
          (imag z)))),
    (multiply
      (constant 2, (real z)),
      (multiply
        (real z),
        (imag z)))))
The module object fa.targets.python defines the so-called Python target implementation. There exist other targets such as fa.targets.numpy, fa.targets.stablehlo, etc.
To visualize the implementation for a given target, say, fa.targets.python, we'll use the tostring(<target>) method:
>>> print(square_graph.tostring(fa.targets.python))
def square(z: complex) -> complex:
    real_z: float = (z).real
    imag_z: float = (z).imag
    return complex(
        (0) if ((abs(real_z)) == (abs(imag_z))) else (((real_z) - (imag_z)) * ((real_z) + (imag_z))),
        (2) * ((real_z) * (imag_z)),
    )
which is actually the definition of the Python function used above when evaluating py_square(z).
Similarly, we can generate implementations for other targets, for instance:
>>> print(square_graph.tostring(fa.targets.stablehlo))
def : Pat<(CHLO_Square ComplexElementType:$z),
  (StableHLO_ComplexOp
    (StableHLO_SelectOp
      (StableHLO_CompareOp
        (StableHLO_AbsOp
          (StableHLO_RealOp:$real_z $z)),
        (StableHLO_AbsOp
          (StableHLO_ImagOp:$imag_z $z)),
        StableHLO_ComparisonDirectionValue<"EQ">,
        (STABLEHLO_DEFAULT_COMPARISON_TYPE)),
      (StableHLO_ConstantLike<"0"> $real_z),
      (StableHLO_MulOp
        (StableHLO_SubtractOp $real_z, $imag_z),
        (StableHLO_AddOp $real_z, $imag_z))),
    (StableHLO_MulOp
      (StableHLO_ConstantLike<"2"> $real_z),
      (StableHLO_MulOp $real_z, $imag_z)))>;
In the case of the NumPy target, the argument types must include bit-width information:
>>> np_square_graph = ctx.trace(square, numpy.complex64)
>>> print(np_square_graph.tostring(fa.targets.numpy))
def square(z: numpy.complex64) -> numpy.complex64:
    with warnings.catch_warnings(action="ignore"):
        z = numpy.complex64(z)
        real_z: numpy.float32 = (z).real
        imag_z: numpy.float32 = (z).imag
        result = make_complex(
            (
                (numpy.float32(0))
                if (numpy.equal(numpy.abs(real_z), numpy.abs(imag_z), dtype=numpy.bool_))
                else (((real_z) - (imag_z)) * ((real_z) + (imag_z)))
            ),
            (numpy.float32(2)) * ((real_z) * (imag_z)),
        )
        return result
>>> fa.targets.numpy.as_function(np_square_graph)(z)
infj
Useful tips
Debugging NumPy target implementations
A useful feature in the tostring method is the debug kw-argument. When it is greater than 0, type checking statements are inserted into the function implementation:
>>> print(np_square_graph.tostring(fa.targets.numpy, debug=1))
def square(z: numpy.complex64) -> numpy.complex64:
    with warnings.catch_warnings(action="ignore"):
        z = numpy.complex64(z)
        real_z: numpy.float32 = (z).real
        assert real_z.dtype == numpy.float32, (real_z.dtype, numpy.float32)
        imag_z: numpy.float32 = (z).imag
        assert imag_z.dtype == numpy.float32, (imag_z.dtype, numpy.float32)
        result = make_complex(
            (
                (numpy.float32(0))
                if (numpy.equal(numpy.abs(real_z), numpy.abs(imag_z), dtype=numpy.bool_))
                else (((real_z) - (imag_z)) * ((real_z) + (imag_z)))
            ),
            (numpy.float32(2)) * ((real_z) * (imag_z)),
        )
        assert result.dtype == numpy.complex64, (result.dtype,)
        return result
When debug=2, the function implementation source code and the values of all variables are printed out when calling the function:
>>> fa.targets.numpy.as_function(np_square_graph, debug=2)(3 + 4j)
def square(z: numpy.complex64) -> numpy.complex64:
    with warnings.catch_warnings(action="ignore"):
        z = numpy.complex64(z)
        print("z=", z)
        real_z: numpy.float32 = (z).real
        print("real_z=", real_z)
        assert real_z.dtype == numpy.float32, (real_z.dtype, numpy.float32)
        imag_z: numpy.float32 = (z).imag
        print("imag_z=", imag_z)
        assert imag_z.dtype == numpy.float32, (imag_z.dtype, numpy.float32)
        result = make_complex(
            (
                (numpy.float32(0))
                if (numpy.equal(numpy.abs(real_z), numpy.abs(imag_z), dtype=numpy.bool_))
                else (((real_z) - (imag_z)) * ((real_z) + (imag_z)))
            ),
            (numpy.float32(2)) * ((real_z) * (imag_z)),
        )
        print("result=", result)
        assert result.dtype == numpy.complex64, (result.dtype,)
        return result
z= (3+4j)
real_z= 3.0
imag_z= 4.0
result= (-7+24j)
(-7+24j)
Intermediate variables in Python and NumPy target implementations
When generating implementations, one can control the naming of intermediate variables as well as their appearance. By default, intermediate variables are generated only for expressions that are used multiple times as subexpressions. However, one can also force the creation of intermediate variables for better visualization of the implementations. For that, we'll redefine the square algorithm as follows:
def square(ctx, z):
    if z.is_complex:
        x = abs(z.real)
        y = abs(z.imag)
        real = ctx.select(x == y, 0, ((x - y) * (y + y)).reference("real_part"))
        imag = 2 * (x * y)
        r = ctx.complex(real.reference(), imag.reference())
        return ctx(r)
    return z * z
Notice the usage of the reference method, which forces the expression to be defined as a variable. Also, notice that the return value is wrapped in a ctx(...) call, which will assign variable names in the function according to the reference values of the expressions.
The generated implementation of the above definition for the Python target is
>>> square_graph = ctx.trace(square, complex)
>>> print(square_graph.tostring(fa.targets.python))
def square(z: complex) -> complex:
    real_z: float = (z).real
    x: float = abs(real_z)
    y: float = abs((z).imag)
    real_part: float = ((x) - (y)) * ((y) + (y))
    real: float = (0) if ((x) == (y)) else (real_part)
    imag: float = (2) * ((x) * (y))
    return complex(real, imag)
which is more expressive than the one shown above.