nifty-ls

A fast Lomb-Scargle periodogram. It's nifty, and uses a NUFFT!

Overview

The Lomb-Scargle periodogram, used for identifying periodicity in irregularly-spaced observations, is useful but computationally expensive. However, it can be phrased mathematically as a pair of non-uniform FFTs (NUFFTs). This allows us to leverage Flatiron Institute's finufft package, which is really fast! It also enables GPU (CUDA) support and is several orders of magnitude more accurate than Astropy's Lomb Scargle with default settings.

Background

The Press & Rybicki (1989) method for Lomb-Scargle poses the computation as four weighted trigonometric sums, which are evaluated with a pair of FFTs after "extirpolation" of the data onto an equi-spaced grid. Specifically, the sums are of the form:

\begin{align}
S_k &= \sum_{j=1}^M h_j \sin(2 \pi f_k t_j), \\
C_k &= \sum_{j=1}^M h_j \cos(2 \pi f_k t_j),
\end{align}

where $k$ runs over the $N$ frequency bins, $f_k$ is the cyclic frequency of bin $k$, $t_j$ are the observation times (of which there are $M$), and $h_j$ are the weights.

The key observation for our purposes is that this is exactly what a non-uniform FFT computes! Specifically, a "type-1" (non-uniform to uniform) complex NUFFT in the finufft convention computes:

\begin{align}
g_k = \sum_{j=1}^M h_j e^{i k t_j}.
\end{align}

The imaginary and real parts of this transform are Press & Rybicki's $S_k$ and $C_k$, with some adjustment for cyclic/angular frequencies, domain of $k$, real vs. complex transform, etc. finufft has a particularly fast and accurate spreading kernel ("exponential of semicircle") that it uses instead of Press & Rybicki's extirpolation.
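
As a concrete (toy) check of this equivalence, the following sketch compares a direct evaluation of $S_k$ and $C_k$ against a type-1 transform from the finufft Python package. The variable names and problem setup are illustrative, not nifty-ls internals:

import numpy as np
import finufft

# Toy problem: M non-uniform points (already scaled to [0, 2*pi)) and weights
M, N = 1000, 64
rng = np.random.default_rng(42)
t = rng.uniform(0, 2 * np.pi, M)
h = rng.standard_normal(M)

# Type-1 NUFFT: g_k = sum_j h_j exp(i*k*t_j), for modes k = -N/2 .. N/2 - 1
g = finufft.nufft1d1(t, h.astype(np.complex128), N, eps=1e-12)

# Direct evaluation of the trig sums for one mode, e.g. k = 3
k = 3
C_k = np.sum(h * np.cos(k * t))  # real part
S_k = np.sum(h * np.sin(k * t))  # imaginary part
assert np.allclose([g[N // 2 + k].real, g[N // 2 + k].imag], [C_k, S_k], atol=1e-8)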

There is some pre- and post-processing of $S_k$ and $C_k$ to compute the periodogram, which can become the bottleneck because finufft is so fast. This package also optimizes and parallelizes those computations.
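For reference, the classic Lomb-Scargle power assembled from these sums is (up to normalization conventions, and ignoring floating-mean refinements):

\begin{align}
P(f) = \frac{1}{2} \left[ \frac{\left( \sum_{j=1}^M h_j \cos(2 \pi f (t_j - \tau)) \right)^2}{\sum_{j=1}^M \cos^2(2 \pi f (t_j - \tau))} + \frac{\left( \sum_{j=1}^M h_j \sin(2 \pi f (t_j - \tau)) \right)^2}{\sum_{j=1}^M \sin^2(2 \pi f (t_j - \tau))} \right],
\end{align}

where the per-frequency offset $\tau$ satisfies $\tan(4 \pi f \tau) = \sum_j \sin(4 \pi f t_j) / \sum_j \cos(4 \pi f t_j)$. Expanding the shifted sines and cosines shows that every term reduces to sums of the $S_k$/$C_k$ form at frequencies $f_k$ and $2 f_k$, which is exactly the pre- and post-processing around the NUFFTs.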

Installation

From PyPI

For CPU support:

$ pip install nifty-ls

For GPU (CUDA) support:

$ pip install nifty-ls[cuda]

The default is to install with CUDA 12 support; one can use nifty-ls[cuda11] instead for CUDA 11 support (installs cupy-cuda11x).

From source

First, clone the repo and cd to the repo root:

$ git clone https://www.github.com/flatironinstitute/nifty-ls
$ cd nifty-ls

Then, to install with CPU support:

$ pip install .

To install with GPU (CUDA) support:

$ pip install .[cuda]

or .[cuda11] for CUDA 11.

For development (with automatic rebuilds enabled by default in pyproject.toml):

$ pip install nanobind scikit-build-core
$ pip install -e .[test] --no-build-isolation

Developers may also be interested in setting these keys in pyproject.toml:

[tool.scikit-build]
cmake.build-type = "Debug"
cmake.verbose = true
install.strip = false

For best performance

You may wish to compile and install finufft and cufinufft yourself so they will be built with optimizations for your hardware. To do so, first install nifty-ls, then follow the Python installation instructions for finufft and cufinufft, configuring the libraries as desired.

nifty-ls can likewise be built from source following the instructions above for best performance, but most of the heavy computations are offloaded to (cu)finufft, so the performance benefit is minimal.

⚠️ macOS ARM users (M1/M2/etc.): due to an OpenMP library incompatibility, the nifty-ls "C++ helpers" are not parallelized in the Mac ARM builds on PyPI. This is not expected to have a big impact on performance, as the core finufft computation will still be parallelized. Building both finufft and nifty-ls from source is a possible workaround.

Usage

From Astropy

Importing nifty_ls makes nifty-ls available via method="fastnifty" in Astropy's LombScargle module. The name is prefixed with "fast" as it's part of the fast family of methods that assume a regularly-spaced frequency grid.

import nifty_ls
from astropy.timeseries import LombScargle
frequency, power = LombScargle(t, y, method="fastnifty").autopower()

To use the CUDA (cufinufft) backend, pass the appropriate argument via method_kws:

frequency, power = LombScargle(t, y, method="fastnifty", method_kws=dict(backend="cufinufft")).autopower()

In many cases, accelerating your periodogram is as simple as setting the method in your Astropy Lomb Scargle code! More advanced usage, such as computing multiple periodograms in parallel, should go directly through the nifty-ls interface.

From nifty-ls (native interface)

nifty-ls has its own interface that offers more flexibility than the Astropy interface for batched periodograms.

Single periodograms

A single periodogram can be computed through nifty-ls as:

import nifty_ls
# with automatic frequency grid:
nifty_res = nifty_ls.lombscargle(t, y, dy)

# with user-specified frequency grid:
nifty_res = nifty_ls.lombscargle(t, y, dy, fmin=0.1, fmax=10, Nf=10**6)

Batched Periodograms

Batched periodograms (multiple objects with the same observation times) can be computed as:

import nifty_ls
import numpy as np

N_t = 100
N_obj = 10
Nf = 200

rng = np.random.default_rng()
t = np.sort(rng.random(N_t))
freqs = rng.random(N_obj).reshape(-1,1)
y_batch = np.sin(freqs * t)
dy_batch = rng.random(y_batch.shape)

batched = nifty_ls.lombscargle(t, y_batch, dy_batch, Nf=Nf)
print(batched.power.shape)  # (10, 200)

Note that this computes multiple periodograms simultaneously on a set of time series with the same observation times. This approach is particularly efficient for short time series, and/or when using the GPU.

Support for batching multiple time series with distinct observation times is not currently implemented, but is planned.
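
In the meantime, a simple (if less efficient) workaround is to loop over objects. In this sketch, t_list and y_list are hypothetical per-object lists of time and value arrays, which may have different lengths:

import nifty_ls

# Hypothetical inputs: t_list[i] and y_list[i] are the observation times and
# values of object i (dy omitted here for brevity)
results = [nifty_ls.lombscargle(t_i, y_i, Nf=200) for t_i, y_i in zip(t_list, y_list)]
powers = [res.power for res in results]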

Limitations

The code only supports frequency grids with fixed spacing; however, finufft does support type 3 NUFFTs (non-uniform to non-uniform), which would enable arbitrary frequency grids. It's not clear how useful this is, so it hasn't been implemented, but please open a GitHub issue if this is of interest to you.
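
For the curious, an arbitrary-grid evaluation via a type-3 transform would look something like the following sketch using finufft directly (illustrative only; nifty-ls does not currently expose this):

import numpy as np
import finufft

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100.0, 500))             # observation times
h = rng.standard_normal(500).astype(np.complex128)  # weights
f = np.sort(rng.uniform(0.1, 10.0, 1000))           # arbitrary (non-uniform) cyclic frequencies

# Type-3 NUFFT: g_k = sum_j h_j exp(i * s_k * t_j), with targets s_k = 2*pi*f_k
g = finufft.nufft1d3(t, h, 2 * np.pi * f, isign=1)
C, S = g.real, g.imag                               # the trig sums on the arbitrary grid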

Performance

Using 16 cores of an Intel Icelake CPU and an NVIDIA A100 GPU, we obtain the following performance. First, we'll look at results from a single periodogram (i.e. unbatched):

[Figure: single-periodogram benchmark results]

In this case, finufft is 5x faster (11x with threads) than Astropy for large transforms, and 2x faster for (very) small transforms; the advantage for small transforms grows further as the number of frequency bins increases. (Dynamic multi-threaded dispatch of transforms is planned as a future feature, which will especially benefit small $N$.)

cufinufft is 200x faster than Astropy for large $N$! The performance plateaus towards small $N$, mostly due to the overhead of sending data to the GPU and fetching the result. (Concurrent job execution on the GPU is another planned feature, which will especially help small $N$.)

The following demonstrates "batch mode", in which 10 periodograms are computed from 10 different time series with the same observation times:

[Figure: batched benchmark results]

Here, the finufft single-threaded advantage is consistently 6x across problem sizes, while the multi-threaded advantage is up to 30x for large transforms.

The 200x advantage of the GPU extends to even smaller $N$ in this case, since we're sending and receiving more data at once.

We see that both multi-threaded finufft and cufinufft particularly benefit from batched transforms, as this exposes more parallelism and amortizes fixed latencies.

We use FFTW_MEASURE for finufft in these benchmarks, which improves performance by a few tens of percent.

Multi-threading hurts performance at small problem sizes, so by default nifty-ls uses fewer threads in such cases. The "multi-threaded" line uses between 1 and 16 threads.

On the CPU, nifty-ls gets its performance not only through its use of finufft, but also by offloading the pre- and post-processing steps to compiled extensions. The extensions enable us to do much more processing element-wise, rather than array-wise. In other words, they enable "kernel fusion" (to borrow a term from GPU computing), increasing the compute density.
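
The idea is easiest to see in a toy example. Array-wise NumPy code makes one pass over memory per operation and allocates temporaries, while an element-wise loop does all the work per element in a single pass. (The Python loop below is only a stand-in for compiled code; it is not nifty-ls's actual helper.)

import numpy as np

def array_wise(x):
    # Three passes over memory, two temporary arrays:
    a = np.sin(x)
    b = a * a
    return b + 1.0

def fused(x):
    # One pass, all operations applied per element; fast when compiled,
    # slow as pure Python. nifty-ls does this work in its C++ extensions.
    out = np.empty_like(x)
    for i in range(len(x)):
        s = np.sin(x[i])
        out[i] = s * s + 1.0
    return out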

Accuracy

While we compared performance against Astropy's fast method above, the comparison isn't quite fair: nifty-ls is much more accurate than Astropy fast! Astropy fast uses Press & Rybicki's extirpolation approximation, trading accuracy for speed, but thanks to finufft, nifty-ls can have both.

In the figure below, we plot the median periodogram error (circles) and the 99th percentile error (triangles) for astropy, finufft, and cufinufft over a range of $N$ (with the default $N_F \approx 12N$).

The astropy result is presented for two cases: a nominal case and a "worst case". Internally, astropy uses an FFT grid whose size is the next power of 2 above the target oversampling rate. Each jump to a new power of 2 typically yields an increase in accuracy. The "worst case", therefore, is the highest frequency that does not yield such a jump.

Errors of $\mathcal{O}(10\%)$ or greater are common with worst-case evaluations. Errors of $\mathcal{O}(1\%)$ or greater are common in typical evaluations. nifty-ls is conservatively 6 orders of magnitude more accurate.

The reference result in the above figure comes from the "phase winding" method, which uses trigonometric identities to avoid expensive sin and cos evaluations. One can also use astropy's fast method as a reference, with exact evaluation enabled via use_fft=False; this gives the same result, but phase winding is a few orders of magnitude faster (though still not competitive with finufft).
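
For concreteness, here is a minimal sketch of the phase-winding idea (not nifty-ls's exact reference code): on a uniform frequency grid, each sample's complex phase at the next frequency is obtained from the previous one by a single complex multiply, so sin/cos is only evaluated once per observation up front.

import numpy as np

def phase_winding(t, h, fmin, df, Nf):
    # Evaluates g_k = sum_j h_j exp(2*pi*i*f_k*t_j) on f_k = fmin + k*df
    z = np.exp(2j * np.pi * fmin * t)  # phase of each sample at the first frequency
    w = np.exp(2j * np.pi * df * t)    # per-sample winding factor between frequencies
    out = np.empty(Nf, dtype=np.complex128)
    for k in range(Nf):
        out[k] = np.sum(h * z)         # C_k + i*S_k at frequency fmin + k*df
        z *= w                         # advance every sample's phase by one grid step
    return out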

In summary, nifty-ls is highly accurate while also giving high performance.

float32 vs float64

While 32-bit floats provide a substantial speedup for finufft and cufinufft, we generally don't recommend their use for Lomb-Scargle. The reason is the challenging condition number of the problem. The condition number is the response in the output to a small perturbation in the input—in other words, the derivative. It can easily be shown that the derivative of a NUFFT with respect to the non-uniform points is proportional to $N$, the transform length (i.e. the number of modes). In other words, errors in the observation times are amplified by $\mathcal{O}(N)$. Since float32 has a relative error of $\mathcal{O}(10^{-7})$, transforms of length $10^5$ already suffer $\mathcal{O}(1\%)$ error. Therefore, we focus on float64 in nifty-ls, but float32 is also natively supported by all backends for adventurous users.
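
A quick numerical experiment (a sketch using finufft directly, not nifty-ls) illustrates the amplification: perturb the sample times at the float32 rounding level and watch the transform error grow roughly linearly with $N$.

import numpy as np
import finufft

rng = np.random.default_rng(0)
M = 10_000
t = rng.uniform(0, 2 * np.pi, M)
h = rng.standard_normal(M).astype(np.complex128)

for N in (10**3, 10**4, 10**5):
    g = finufft.nufft1d1(t, h, N, eps=1e-13)
    t32 = t.astype(np.float32).astype(np.float64)  # ~1e-7 relative rounding of the times
    g32 = finufft.nufft1d1(t32, h, N, eps=1e-13)
    rel_err = np.linalg.norm(g - g32) / np.linalg.norm(g)
    print(N, rel_err)  # grows roughly in proportion to N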

The condition number is also a likely contributor to the mild upward trend in error versus $N$ in the above figure, at least for finufft/cufinufft. With a relative error of $\mathcal{O}(10^{-16})$ for float64 and a transform length of $\mathcal{O}(10^{6})$, the minimum error is $\mathcal{O}(10^{-10})$.

Testing

First, install from source (pip install .[test]). Then, from the repo root, run:

$ pytest

The tests are defined in the tests/ directory, and include a mini-benchmark of nifty-ls and Astropy, shown below:

$ pytest
======================================================== test session starts =========================================================
platform linux -- Python 3.10.13, pytest-8.1.1, pluggy-1.4.0
benchmark: 4.0.0 (defaults: timer=time.perf_counter disable_gc=True min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /mnt/home/lgarrison/nifty-ls
configfile: pyproject.toml
plugins: benchmark-4.0.0, asdf-2.15.0, anyio-3.6.2, hypothesis-6.23.1
collected 36 items                                                                                                                   

tests/test_ls.py ......................                                                                                        [ 61%]
tests/test_perf.py ..............                                                                                              [100%]


----------------------------------------- benchmark 'Nf=1000': 5 tests ----------------------------------------
Name (time in ms)                       Min                Mean            StdDev            Rounds  Iterations
---------------------------------------------------------------------------------------------------------------
test_batched[finufft-1000]           6.8418 (1.0)        7.1821 (1.0)      0.1831 (1.32)         43           1
test_batched[cufinufft-1000]         7.7027 (1.13)       8.6634 (1.21)     0.9555 (6.89)         74           1
test_unbatched[finufft-1000]       110.7541 (16.19)    111.0603 (15.46)    0.1387 (1.0)          10           1
test_unbatched[astropy-1000]       441.2313 (64.49)    441.9655 (61.54)    1.0732 (7.74)          5           1
test_unbatched[cufinufft-1000]     488.2630 (71.36)    496.0788 (69.07)    6.1908 (44.63)         5           1
---------------------------------------------------------------------------------------------------------------

--------------------------------- benchmark 'Nf=10000': 3 tests ----------------------------------
Name (time in ms)            Min              Mean            StdDev            Rounds  Iterations
--------------------------------------------------------------------------------------------------
test[finufft-10000]       1.8481 (1.0)      1.8709 (1.0)      0.0347 (1.75)        507           1
test[cufinufft-10000]     5.1269 (2.77)     5.2052 (2.78)     0.3313 (16.72)       117           1
test[astropy-10000]       8.1725 (4.42)     8.2176 (4.39)     0.0198 (1.0)         113           1
--------------------------------------------------------------------------------------------------

----------------------------------- benchmark 'Nf=100000': 3 tests ----------------------------------
Name (time in ms)              Min               Mean            StdDev            Rounds  Iterations
-----------------------------------------------------------------------------------------------------
test[cufinufft-100000]      5.8566 (1.0)       6.0411 (1.0)      0.7407 (10.61)       159           1
test[finufft-100000]        6.9766 (1.19)      7.1816 (1.19)     0.0748 (1.07)        132           1
test[astropy-100000]       47.9246 (8.18)     48.0828 (7.96)     0.0698 (1.0)          19           1
-----------------------------------------------------------------------------------------------------

------------------------------------- benchmark 'Nf=1000000': 3 tests --------------------------------------
Name (time in ms)                  Min                  Mean            StdDev            Rounds  Iterations
------------------------------------------------------------------------------------------------------------
test[cufinufft-1000000]         8.0038 (1.0)          8.5193 (1.0)      1.3245 (1.62)         84           1
test[finufft-1000000]          74.9239 (9.36)        76.5690 (8.99)     0.8196 (1.0)          10           1
test[astropy-1000000]       1,430.4282 (178.72)   1,434.7986 (168.42)   5.5234 (6.74)          5           1
------------------------------------------------------------------------------------------------------------

Legend:
  Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
  OPS: Operations Per Second, computed as 1 / Mean
======================================================== 36 passed in 30.81s =========================================================

The results were obtained using 16 cores of an Intel Icelake CPU and 1 NVIDIA A100 GPU. The ratio of each runtime to the fastest is shown in parentheses. You may obtain very different performance on your platform! The slowest Astropy results in particular may depend on the Numpy distribution you have installed and its trig function performance.

Authors

nifty-ls was originally implemented by Lehman Garrison based on work done by Dan Foreman-Mackey in the dfm/nufft-ls repo, with consulting from Alex Barnett.

Acknowledgements

nifty-ls builds directly on top of the excellent finufft package by Alex Barnett and others (see the finufft Acknowledgements).

Many parts of this package are an adaptation of Astropy LombScargle, in particular the Press & Rybicki (1989) method.
