nifty-ls

A fast Lomb-Scargle periodogram. It's nifty, and uses a NUFFT!


Overview

The Lomb-Scargle periodogram, used for identifying periodicity in irregularly-spaced observations, is useful but computationally expensive. However, it can be phrased mathematically as a pair of non-uniform FFTs (NUFFTs). This allows us to leverage Flatiron Institute's finufft package, which is really fast! It also enables GPU (CUDA) support and is several orders of magnitude more accurate than Astropy's Lomb Scargle with default settings.

Background

The Press & Rybicki (1989) method for Lomb-Scargle poses the computation as four weighted trigonometric sums that are solved with a pair of FFTs by "extirpolation" to an equi-spaced grid. Specifically, the sums are of the form:

\begin{align}
S_k &= \sum_{j=1}^M h_j \sin(2 \pi f_k t_j), \\
C_k &= \sum_{j=1}^M h_j \cos(2 \pi f_k t_j),
\end{align}

where the $k$ subscript runs from 0 to $N$, the number of frequency bins, $f_k$ is the cyclic frequency of bin $k$, $t_j$ are the observation times (of which there are $M$), and $h_j$ are the weights.

The key observation for our purposes is that this is exactly what a non-uniform FFT computes! Specifically, a "type-1" (non-uniform to uniform) complex NUFFT in the finufft convention computes:

$$
g_k = \sum_{j=1}^M h_j e^{i k t_j}.
$$

The imaginary and real parts of this transform are Press & Rybicki's $S_k$ and $C_k$, with some adjustment for cyclic/angular frequencies, the domain of $k$, real vs. complex transforms, etc. finufft has a particularly fast and accurate spreading kernel ("exponential of semicircle") that it uses instead of Press & Rybicki's extirpolation.
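As a quick numerical check of this identity (plain NumPy here, not finufft itself), the real and imaginary parts of the exponential sum reproduce the cosine and sine sums directly:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 50, 8
t = rng.uniform(0, 2 * np.pi, M)   # non-uniform "observation times" on [0, 2*pi)
h = rng.normal(size=M)             # weights

# Direct evaluation of Press & Rybicki's trigonometric sums
k = np.arange(N)
C = np.array([np.sum(h * np.cos(kk * t)) for kk in k])
S = np.array([np.sum(h * np.sin(kk * t)) for kk in k])

# The equivalent "type-1" NUFFT sum: g_k = sum_j h_j e^{i k t_j}
g = np.array([np.sum(h * np.exp(1j * kk * t)) for kk in k])

assert np.allclose(g.real, C)
assert np.allclose(g.imag, S)
```

A NUFFT library evaluates the same sums, but in nearly FFT time rather than the $\mathcal{O}(NM)$ of this brute-force loop.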

There is some pre- and post-processing of $S_k$ and $C_k$ to compute the periodogram, which can become the bottleneck because finufft is so fast. This package also optimizes and parallelizes those computations.

Installation

From PyPI

For CPU support:

$ pip install nifty-ls

For GPU (CUDA) support:

$ pip install nifty-ls[cuda]

The default is to install with CUDA 12 support; one can use nifty-ls[cuda11] instead for CUDA 11 support (installs cupy-cuda11x).

From source

First, clone the repo and cd to the repo root:

$ git clone https://www.github.com/flatironinstitute/nifty-ls
$ cd nifty-ls

Then, to install with CPU support:

$ pip install .

To install with GPU (CUDA) support:

$ pip install .[cuda]

or .[cuda11] for CUDA 11.

For development (with automatic rebuilds enabled by default in pyproject.toml):

$ pip install nanobind scikit-build-core
$ pip install -e .[test] --no-build-isolation

Developers may also be interested in setting these keys in pyproject.toml:

[tool.scikit-build]
cmake.build-type = "Debug"
cmake.verbose = true
install.strip = false

For best performance

You may wish to compile and install finufft and cufinufft yourself so they will be built with optimizations for your hardware. To do so, first install nifty-ls, then follow the Python installation instructions for finufft and cufinufft, configuring the libraries as desired.

nifty-ls can likewise be built from source following the instructions above for best performance, but most of the heavy computations are offloaded to (cu)finufft, so the performance benefit is minimal.

Usage

From Astropy

Importing nifty_ls makes nifty-ls available via method="fastnifty" in Astropy's LombScargle module. The name is prefixed with "fast" as it's part of the fast family of methods that assume a regularly-spaced frequency grid.

import nifty_ls
from astropy.timeseries import LombScargle
frequency, power = LombScargle(t, y).autopower(method="fastnifty")
Full example
import matplotlib.pyplot as plt
import nifty_ls
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(seed=123)
N = 1000
t = rng.uniform(0, 100, size=N)
y = np.sin(50 * t) + 1 + rng.poisson(size=N)

frequency, power = LombScargle(t, y).autopower(method='fastnifty')
plt.plot(frequency, power)
plt.xlabel('Frequency (cycles per unit time)')
plt.ylabel('Power')

To use the CUDA (cufinufft) backend, pass the appropriate argument via method_kws:

frequency, power = LombScargle(t, y).autopower(method="fastnifty", method_kws=dict(backend="cufinufft"))

In many cases, accelerating your periodogram is as simple as setting the method in your Astropy Lomb Scargle code! More advanced usage, such as computing multiple periodograms in parallel, should go directly through the nifty-ls interface.

From nifty-ls (native interface)

nifty-ls has its own interface that offers more flexibility than the Astropy interface for batched periodograms.

Single periodograms

A single periodogram can be computed through nifty-ls as:

import nifty_ls
# with automatic frequency grid:
nifty_res = nifty_ls.lombscargle(t, y, dy)

# with user-specified frequency grid:
nifty_res = nifty_ls.lombscargle(t, y, dy, fmin=0.1, fmax=10, Nf=10**6)
Full example
import matplotlib.pyplot as plt
import nifty_ls
import numpy as np

rng = np.random.default_rng(seed=123)
N = 1000
t = np.sort(rng.uniform(0, 100, size=N))
y = np.sin(50 * t) + 1 + rng.poisson(size=N)

# with automatic frequency grid:
nifty_res = nifty_ls.lombscargle(t, y)

# with user-specified frequency grid:
nifty_res = nifty_ls.lombscargle(t, y, fmin=0.1, fmax=10, Nf=10**6)

plt.plot(nifty_res.freq(), nifty_res.power)
plt.xlabel('Frequency (cycles per unit time)')
plt.ylabel('Power')

Batched Periodograms

Batched periodograms (multiple objects with the same observation times) can be computed as:

import nifty_ls
import numpy as np

N_t = 100
N_obj = 10
Nf = 200

rng = np.random.default_rng()
t = np.sort(rng.random(N_t))
obj_freqs = rng.random(N_obj).reshape(-1,1)
y_batch = np.sin(obj_freqs * t)
dy_batch = rng.random(y_batch.shape)

batched = nifty_ls.lombscargle(t, y_batch, dy_batch, Nf=Nf)
print(batched.power.shape)  # (10, 200)

Note that this computes multiple periodograms simultaneously on a set of time series with the same observation times. This approach is particularly efficient for short time series, and/or when using the GPU.

Support for batching multiple time series with distinct observation times is not currently implemented, but is planned.

Limitations

The code only supports frequency grids with fixed spacing; however, finufft does support type 3 NUFFTs (non-uniform to non-uniform), which would enable arbitrary frequency grids. It's not clear how useful this is, so it hasn't been implemented, but please open a GitHub issue if this is of interest to you.
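For concreteness, a fixed-spacing grid of the kind nifty-ls supports can be constructed as below (illustrative values for `fmin`, `df`, and `Nf`; nifty-ls chooses these automatically when not specified):

```python
import numpy as np

fmin, df, Nf = 0.1, 1e-3, 1000
freq = fmin + df * np.arange(Nf)  # the only grid shape currently supported

assert np.allclose(np.diff(freq), df)  # fixed spacing throughout
```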

Performance

Using 16 cores of an Intel Icelake CPU and an NVIDIA A100 GPU, we obtain the following performance. First, we'll look at results from a single periodogram (i.e. unbatched):

[Figure: single-periodogram benchmarks]

In this case, finufft is 5x faster (11x with threads) than Astropy for large transforms, and 2x faster for (very) small transforms. Small transforms improve further relative to Astropy with more frequency bins. (Dynamic multi-threaded dispatch of transforms is planned as a future feature, which will especially benefit small $N$.)

cufinufft is 200x faster than Astropy for large $N$! The performance plateaus towards small $N$, mostly due to the overhead of sending data to the GPU and fetching the result. (Concurrent job execution on the GPU is another planned feature, which will especially help small $N$.)

The following demonstrates "batch mode", in which 10 periodograms are computed from 10 different time series with the same observation times:

[Figure: batched benchmarks]

Here, the finufft single-threaded advantage is consistently 6x across problem sizes, while the multi-threaded advantage is up to 30x for large transforms.

The 200x advantage of the GPU extends to even smaller $N$ in this case, since we're sending and receiving more data at once.

We see that both multi-threaded finufft and cufinufft particularly benefit from batched transforms, as this exposes more parallelism and amortizes fixed latencies.

We use FFTW_MEASURE for finufft in these benchmarks, which improves performance by a few tens of percent.

Multi-threading hurts the performance of small problem sizes; the default behavior of nifty-ls is to use fewer threads in such cases. The "multi-threaded" line uses between 1 and 16 threads.

On the CPU, nifty-ls gets its performance not only through its use of finufft, but also by offloading the pre- and post-processing steps to compiled extensions. The extensions enable us to do much more processing element-wise, rather than array-wise. In other words, they enable "kernel fusion" (to borrow a term from GPU computing), increasing the compute density.
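The benefit of fusing the element-wise work can be sketched in pure Python (this is an illustration of the idea, not nifty-ls's actual kernels or its actual power formula): the array-wise version makes several passes over memory with temporaries, while the fused loop touches each element once, which is what the compiled extensions do in C++.

```python
import numpy as np

def power_arraywise(S, C, SS, CC):
    # Several full passes over the arrays, with intermediate temporaries
    return S**2 / SS + C**2 / CC

def power_fused(S, C, SS, CC):
    # One pass, all work per element done together ("kernel fusion");
    # a compiled extension runs this loop at native speed
    out = np.empty_like(S)
    for i in range(len(S)):
        out[i] = S[i] * S[i] / SS[i] + C[i] * C[i] / CC[i]
    return out

rng = np.random.default_rng(1)
S, C = rng.normal(size=100), rng.normal(size=100)
SS, CC = rng.uniform(1, 2, 100), rng.uniform(1, 2, 100)
assert np.allclose(power_arraywise(S, C, SS, CC), power_fused(S, C, SS, CC))
```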

Accuracy

While we compared performance with Astropy's fast method, this isn't quite fair. nifty-ls is much more accurate than Astropy fast! Astropy fast uses Press & Rybicki's extirpolation approximation, trading accuracy for speed, but thanks to finufft, nifty-ls can have both.

In the figure below, we plot the median periodogram error in circles and the 99th percentile error in triangles for astropy, finufft, and cufinufft for a range of $N$ (and default $N_F \approx 12N$).

The astropy result is presented for two cases: a nominal case and a "worst case". Internally, astropy uses an FFT grid whose size is the next power of 2 above the target oversampling rate. Each jump to a new power of 2 typically yields an increase in accuracy. The "worst case", therefore, is the highest frequency that does not yield such a jump.
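The grid-size rounding described above can be sketched as follows (an illustration of the power-of-2 rounding behavior, not astropy's internal code):

```python
def next_pow2(n):
    """Smallest power of 2 that is >= n."""
    return 1 << (int(n) - 1).bit_length()

# Accuracy typically jumps each time the grid size crosses a power of 2:
assert next_pow2(1000) == 1024
assert next_pow2(1025) == 2048
assert next_pow2(2048) == 2048
```

Frequencies just below such a jump therefore see the coarsest effective grid, giving the "worst case" curve.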

Errors of $\mathcal{O}(10\%)$ or greater are common with worst-case evaluations. Errors of $\mathcal{O}(1\%)$ or greater are common in typical evaluations. nifty-ls is conservatively 6 orders of magnitude more accurate.

The reference result in the above figure comes from the "phase winding" method, which uses trigonometric identities to avoid expensive sin and cos evaluations. One can also use astropy's fast method as a reference with exact evaluation enabled via use_fft=False. One finds the same result, but the phase winding is a few orders of magnitude faster (but still not competitive with finufft).

In summary, nifty-ls is highly accurate while also giving high performance.

float32 vs float64

While 32-bit floats provide a substantial speedup for finufft and cufinufft, we generally don't recommend their use for Lomb-Scargle. The reason is the challenging condition number of the problem. The condition number is the response in the output to a small perturbation in the input—in other words, the derivative. It can easily be shown that the derivative of a NUFFT with respect to the non-uniform points is proportional to $N$, the transform length (i.e. the number of modes). In other words, errors in the observation times are amplified by $\mathcal{O}(N)$. Since float32 has a relative error of $\mathcal{O}(10^{-7})$, transforms of length $10^5$ already suffer $\mathcal{O}(1\%)$ error. Therefore, we focus on float64 in nifty-ls, but float32 is also natively supported by all backends for adventurous users.

The condition number is also a likely contributor to the mild upward trend in error versus $N$ in the above figure, at least for finufft/cufinufft. With a relative error of $\mathcal{O}(10^{-16})$ for float64 and a transform length of $\mathcal{O}(10^{6})$, the minimum error is $\mathcal{O}(10^{-10})$.
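The $\mathcal{O}(N)$ amplification is easy to demonstrate with plain NumPy: round the observation times to float32 and watch the error in a trigonometric sum grow with the mode number (a sketch of the conditioning argument, not a nifty-ls benchmark):

```python
import numpy as np

rng = np.random.default_rng(2)
M = 1000
t = rng.uniform(0, 2 * np.pi, M)   # "observation times"
h = rng.normal(size=M)             # weights

t32 = t.astype(np.float32).astype(np.float64)  # ~1e-7 relative perturbation

def cos_sum(k, times):
    # One of Press & Rybicki's trigonometric sums at mode k
    return np.sum(h * np.cos(k * times))

err_small = abs(cos_sum(10, t) - cos_sum(10, t32))
err_large = abs(cos_sum(100_000, t) - cos_sum(100_000, t32))
assert err_large > err_small  # error grows roughly in proportion to k
```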

Testing

First, install from source (pip install .[test]). Then, from the repo root, run:

$ pytest

The tests are defined in the tests/ directory, and include a mini-benchmark of nifty-ls and Astropy, shown below:

$ pytest
======================================================== test session starts =========================================================
platform linux -- Python 3.10.13, pytest-8.1.1, pluggy-1.4.0
benchmark: 4.0.0 (defaults: timer=time.perf_counter disable_gc=True min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /mnt/home/lgarrison/nifty-ls
configfile: pyproject.toml
plugins: benchmark-4.0.0, asdf-2.15.0, anyio-3.6.2, hypothesis-6.23.1
collected 36 items                                                                                                                   

tests/test_ls.py ......................                                                                                        [ 61%]
tests/test_perf.py ..............                                                                                              [100%]


----------------------------------------- benchmark 'Nf=1000': 5 tests ----------------------------------------
Name (time in ms)                       Min                Mean            StdDev            Rounds  Iterations
---------------------------------------------------------------------------------------------------------------
test_batched[finufft-1000]           6.8418 (1.0)        7.1821 (1.0)      0.1831 (1.32)         43           1
test_batched[cufinufft-1000]         7.7027 (1.13)       8.6634 (1.21)     0.9555 (6.89)         74           1
test_unbatched[finufft-1000]       110.7541 (16.19)    111.0603 (15.46)    0.1387 (1.0)          10           1
test_unbatched[astropy-1000]       441.2313 (64.49)    441.9655 (61.54)    1.0732 (7.74)          5           1
test_unbatched[cufinufft-1000]     488.2630 (71.36)    496.0788 (69.07)    6.1908 (44.63)         5           1
---------------------------------------------------------------------------------------------------------------

--------------------------------- benchmark 'Nf=10000': 3 tests ----------------------------------
Name (time in ms)            Min              Mean            StdDev            Rounds  Iterations
--------------------------------------------------------------------------------------------------
test[finufft-10000]       1.8481 (1.0)      1.8709 (1.0)      0.0347 (1.75)        507           1
test[cufinufft-10000]     5.1269 (2.77)     5.2052 (2.78)     0.3313 (16.72)       117           1
test[astropy-10000]       8.1725 (4.42)     8.2176 (4.39)     0.0198 (1.0)         113           1
--------------------------------------------------------------------------------------------------

----------------------------------- benchmark 'Nf=100000': 3 tests ----------------------------------
Name (time in ms)              Min               Mean            StdDev            Rounds  Iterations
-----------------------------------------------------------------------------------------------------
test[cufinufft-100000]      5.8566 (1.0)       6.0411 (1.0)      0.7407 (10.61)       159           1
test[finufft-100000]        6.9766 (1.19)      7.1816 (1.19)     0.0748 (1.07)        132           1
test[astropy-100000]       47.9246 (8.18)     48.0828 (7.96)     0.0698 (1.0)          19           1
-----------------------------------------------------------------------------------------------------

------------------------------------- benchmark 'Nf=1000000': 3 tests --------------------------------------
Name (time in ms)                  Min                  Mean            StdDev            Rounds  Iterations
------------------------------------------------------------------------------------------------------------
test[cufinufft-1000000]         8.0038 (1.0)          8.5193 (1.0)      1.3245 (1.62)         84           1
test[finufft-1000000]          74.9239 (9.36)        76.5690 (8.99)     0.8196 (1.0)          10           1
test[astropy-1000000]       1,430.4282 (178.72)   1,434.7986 (168.42)   5.5234 (6.74)          5           1
------------------------------------------------------------------------------------------------------------

Legend:
  Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
  OPS: Operations Per Second, computed as 1 / Mean
======================================================== 36 passed in 30.81s =========================================================

The results were obtained using 16 cores of an Intel Icelake CPU and 1 NVIDIA A100 GPU. The ratios of the runtimes relative to the fastest are shown in parentheses. You may obtain very different performance on your platform! The slowest Astropy results in particular may depend on the NumPy distribution you have installed and its trig function performance.

Authors

nifty-ls was originally implemented by Lehman Garrison based on work done by Dan Foreman-Mackey in the dfm/nufft-ls repo, with consulting from Alex Barnett.

Acknowledgements

nifty-ls builds directly on top of the excellent finufft package by Alex Barnett and others (see the finufft Acknowledgements).

Many parts of this package are an adaptation of Astropy LombScargle, in particular the Press & Rybicki (1989) method.
