nifty-ls

A fast Lomb-Scargle periodogram. It's nifty, and uses a NUFFT!

Overview

The Lomb-Scargle periodogram, used for identifying periodicity in irregularly-spaced observations, is useful but computationally expensive. However, it can be phrased mathematically as a pair of non-uniform FFTs (NUFFTs). This allows us to leverage Flatiron Institute's finufft package, which is really fast! It also enables GPU (CUDA) support and is several orders of magnitude more accurate than Astropy's Lomb Scargle with default settings.

Background

The Press & Rybicki (1989) method for Lomb-Scargle poses the computation as four weighted trigonometric sums that are solved with a pair of FFTs by "extirpolation" to an equi-spaced grid. Specifically, the sums are of the form:

\begin{align}
S_k &= \sum_{j=1}^M h_j \sin(2 \pi f_k t_j), \\
C_k &= \sum_{j=1}^M h_j \cos(2 \pi f_k t_j),
\end{align}

where the $k$ subscript runs from 0 to $N$, the number of frequency bins, $f_k$ is the cyclic frequency of bin $k$, $t_j$ are the observation times (of which there are $M$), and $h_j$ are the weights.

The key observation for our purposes is that this is exactly what a non-uniform FFT computes! Specifically, a "type-1" (non-uniform to uniform) complex NUFFT in the finufft convention computes:

\begin{align}
g_k = \sum_{j=1}^M h_j e^{i k t_j}.
\end{align}

The imaginary and real parts of this transform are Press & Rybicki's $S_k$ and $C_k$, respectively, with some adjustment for cyclic/angular frequencies, the domain of $k$, real vs. complex transforms, etc. finufft has a particularly fast and accurate spreading kernel ("exponential of semicircle") that it uses instead of Press & Rybicki's extirpolation.
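
As a concreteness check, here is a minimal sketch (not part of nifty-ls; all variable names are illustrative) comparing the naive trigonometric sums against a type-1 transform from the finufft Python package, assuming an equi-spaced frequency grid $f_k = k \, \Delta f$:

# Sketch: verify that a type-1 NUFFT reproduces the Press & Rybicki sums
# S_k and C_k on an equi-spaced frequency grid (illustrative, not nifty-ls code).
import numpy as np
import finufft

rng = np.random.default_rng(0)
M, Nf, df = 500, 64, 0.01            # observations, frequency bins, bin spacing
t = np.sort(rng.uniform(0, 100, size=M))
h = rng.normal(size=M)               # weights

# Naive evaluation for f_k = k * df, k = 0 .. Nf-1
k = np.arange(Nf)
phase = 2 * np.pi * df * np.outer(k, t)           # shape (Nf, M)
S_naive = (h * np.sin(phase)).sum(axis=1)
C_naive = (h * np.cos(phase)).sum(axis=1)

# Same sums via a type-1 NUFFT: g_k = sum_j h_j exp(i k x_j), x_j = 2*pi*df*t_j.
# finufft returns modes k = -n_modes/2 .. n_modes/2 - 1, so request 2*Nf modes
# and keep the non-negative half.
x = (2 * np.pi * df * t) % (2 * np.pi)            # fold phases into [0, 2*pi)
g = finufft.nufft1d1(x, h.astype(np.complex128), 2 * Nf, eps=1e-12, isign=1)
g = g[Nf:]                                        # modes k = 0 .. Nf-1

print(np.allclose(g.real, C_naive), np.allclose(g.imag, S_naive))  # True True

nifty-ls handles the real/complex bookkeeping (and the additional sums at $2\omega$) internally; the sketch above only illustrates the core identity.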

There is some pre- and post-processing of $S_k$ and $C_k$ to compute the periodogram, which can become the bottleneck because finufft is so fast. This package also optimizes and parallelizes those computations.
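
For orientation, a sketch of how the sums enter the final statistic: in the classic (unweighted, non-floating-mean) form, the periodogram at $\omega_k = 2 \pi f_k$ is

\begin{align}
P(f_k) &= \frac{1}{2} \left[
  \frac{\left( \sum_j h_j \cos \omega_k (t_j - \tau_k) \right)^2}{\sum_j \cos^2 \omega_k (t_j - \tau_k)}
  + \frac{\left( \sum_j h_j \sin \omega_k (t_j - \tau_k) \right)^2}{\sum_j \sin^2 \omega_k (t_j - \tau_k)}
\right], \\
\tan 2 \omega_k \tau_k &= \frac{\sum_j \sin 2 \omega_k t_j}{\sum_j \cos 2 \omega_k t_j},
\end{align}

where the offset $\tau_k$ and the denominators are built from sums of the same form evaluated at $2 \omega_k$. The floating-mean ("generalized") variant used by default adds further terms but has the same structure, which is why the trigonometric sums dominate the cost.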

Installation

From PyPI

For CPU support:

$ pip install nifty-ls

For GPU (CUDA) support:

$ pip install nifty-ls[cuda]

The default is to install with CUDA 12 support; one can use nifty-ls[cuda11] instead for CUDA 11 support (installs cupy-cuda11x).
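
For example:

$ pip install nifty-ls[cuda11]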

From source

First, clone the repo and cd to the repo root:

$ git clone https://www.github.com/flatironinstitute/nifty-ls
$ cd nifty-ls

Then, to install with CPU support:

$ pip install .

To install with GPU (CUDA) support:

$ pip install .[cuda]

or .[cuda11] for CUDA 11.

For development (with automatic rebuilds enabled by default in pyproject.toml):

$ pip install nanobind scikit-build-core
$ pip install -e .[test] --no-build-isolation

Developers may also be interested in setting these keys in pyproject.toml:

[tool.scikit-build]
cmake.build-type = "Debug"
cmake.verbose = true
install.strip = false

For best performance

You may wish to compile and install finufft and cufinufft yourself so they will be built with optimizations for your hardware. To do so, first install nifty-ls, then follow the Python installation instructions for finufft and cufinufft, configuring the libraries as desired.

nifty-ls can likewise be built from source following the instructions above for best performance, but most of the heavy computations are offloaded to (cu)finufft, so the performance benefit is minimal.

Usage

From Astropy

Importing nifty_ls makes nifty-ls available via method="fastnifty" in Astropy's LombScargle module. The name is prefixed with "fast" as it's part of the fast family of methods that assume a regularly-spaced frequency grid.

import nifty_ls
from astropy.timeseries import LombScargle
frequency, power = LombScargle(t, y).autopower(method="fastnifty")

Full example:

import matplotlib.pyplot as plt
import nifty_ls
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(seed=123)
N = 1000
t = rng.uniform(0, 100, size=N)
y = np.sin(50 * t) + 1 + rng.poisson(size=N)

frequency, power = LombScargle(t, y).autopower(method='fastnifty')
plt.plot(frequency, power)
plt.xlabel('Frequency (cycles per unit time)')
plt.ylabel('Power')

To use the CUDA (cufinufft) backend, pass the appropriate argument via method_kws:

frequency, power = LombScargle(t, y).autopower(method="fastnifty", method_kws=dict(backend="cufinufft"))

In many cases, accelerating your periodogram is as simple as setting the method in your Astropy Lomb Scargle code! More advanced usage, such as computing multiple periodograms in parallel, should go directly through the nifty-ls interface.

From nifty-ls (native interface)

nifty-ls has its own interface that offers more flexibility than the Astropy interface for batched periodograms.

Single periodograms

A single periodogram can be computed through nifty-ls as:

import nifty_ls
# with automatic frequency grid:
nifty_res = nifty_ls.lombscargle(t, y, dy)

# with user-specified frequency grid:
nifty_res = nifty_ls.lombscargle(t, y, dy, fmin=0.1, fmax=10, Nf=10**6)

Full example:

import matplotlib.pyplot as plt
import nifty_ls
import numpy as np

rng = np.random.default_rng(seed=123)
N = 1000
t = np.sort(rng.uniform(0, 100, size=N))
y = np.sin(50 * t) + 1 + rng.poisson(size=N)

# with automatic frequency grid:
nifty_res = nifty_ls.lombscargle(t, y)

# with user-specified frequency grid:
nifty_res = nifty_ls.lombscargle(t, y, fmin=0.1, fmax=10, Nf=10**6)

plt.plot(nifty_res.freq(), nifty_res.power)
plt.xlabel('Frequency (cycles per unit time)')
plt.ylabel('Power')

Batched Periodograms

Batched periodograms (multiple objects with the same observation times) can be computed as:

import nifty_ls
import numpy as np

N_t = 100
N_obj = 10
Nf = 200

rng = np.random.default_rng()
t = np.sort(rng.random(N_t))
obj_freqs = rng.random(N_obj).reshape(-1,1)
y_batch = np.sin(obj_freqs * t)
dy_batch = rng.random(y_batch.shape)

batched = nifty_ls.lombscargle(t, y_batch, dy_batch, Nf=Nf)
print(batched.power.shape)  # (10, 200)

Note that this computes multiple periodograms simultaneously on a set of time series with the same observation times. This approach is particularly efficient for short time series, and/or when using the GPU.

Support for batching multiple time series with distinct observation times is not currently implemented, but is planned.
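
In the meantime, a simple workaround is to loop over objects and compute one periodogram per time series. A minimal sketch (the object data here is synthetic and illustrative):

# Sketch: per-object loop for time series with *different* observation times.
import nifty_ls
import numpy as np

rng = np.random.default_rng(1)
objects = [  # (t, y) pairs with distinct, irregular sampling
    (np.sort(rng.uniform(0, 100, size=n)), rng.normal(size=n))
    for n in (300, 500, 800)
]

# A shared, user-specified frequency grid keeps the outputs stackable
results = [nifty_ls.lombscargle(t, y, fmin=0.1, fmax=10, Nf=10**5)
           for t, y in objects]
powers = np.stack([res.power for res in results])   # shape (3, 10**5)

This does not benefit from the batched fast path, but it works today and keeps the per-object frequency grids consistent.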

Limitations

The code only supports frequency grids with fixed spacing; however, finufft does support type 3 NUFFTs (non-uniform to non-uniform), which would enable arbitrary frequency grids. It's not clear how useful this is, so it hasn't been implemented, but please open a GitHub issue if this is of interest to you.

Performance

Using 16 cores of an Intel Icelake CPU and an NVIDIA A100 GPU, we obtain the following performance. First, we'll look at results from a single periodogram (i.e. unbatched):

[Figure: single-periodogram benchmarks for finufft, cufinufft, and Astropy]

In this case, finufft is 5x faster (11x with threads) than Astropy for large transforms, and 2x faster for (very) small transforms. Small transforms improve further relative to Astropy as the number of frequency bins grows. (Dynamic multi-threaded dispatch of transforms is planned as a future feature, which will especially benefit small $N$.)

cufinufft is 200x faster than Astropy for large $N$! The performance plateaus towards small $N$, mostly due to the overhead of sending data to the GPU and fetching the result. (Concurrent job execution on the GPU is another planned feature, which will especially help small $N$.)

The following demonstrates "batch mode", in which 10 periodograms are computed from 10 different time series with the same observation times:

[Figure: batched-periodogram benchmarks for finufft, cufinufft, and Astropy]

Here, the finufft single-threaded advantage is consistently 6x across problem sizes, while the multi-threaded advantage is up to 30x for large transforms.

The 200x advantage of the GPU extends to even smaller $N$ in this case, since we're sending and receiving more data at once.

We see that both multi-threaded finufft and cufinufft particularly benefit from batched transforms, as this exposes more parallelism and amortizes fixed latencies.

We use FFTW_MEASURE for finufft in these benchmarks, which improves performance by a few tens of percent.

Multi-threading hurts the performance of small problem sizes; the default behavior of nifty-ls is to use fewer threads in such cases. The "multi-threaded" line uses between 1 and 16 threads.

On the CPU, nifty-ls gets its performance not only through its use of finufft, but also by offloading the pre- and post-processing steps to compiled extensions. The extensions enable us to do much more processing element-wise, rather than array-wise. In other words, they enable "kernel fusion" (to borrow a term from GPU computing), increasing the compute density.

Accuracy

While we compared performance with Astropy's fast method, this isn't quite fair. nifty-ls is much more accurate than Astropy fast! Astropy fast uses Press & Rybicki's extirpolation approximation, trading accuracy for speed, but thanks to finufft, nifty-ls can have both.

In the figure below, we plot the median periodogram error in circles and the 99th percentile error in triangles for astropy, finufft, and cufinufft for a range of $N$ (and default $N_F \approx 12N$).

The astropy result is presented for two cases: a nominal case and a "worst case". Internally, astropy uses an FFT grid whose size is the next power of 2 above the target oversampling rate. Each jump to a new power of 2 typically yields an increase in accuracy. The "worst case", therefore, is the highest frequency that does not yield such a jump.

Errors of $\mathcal{O}(10\%)$ or greater are common with worst-case evaluations. Errors of $\mathcal{O}(1\%)$ or greater are common in typical evaluations. nifty-ls is conservatively 6 orders of magnitude more accurate.

The reference result in the above figure comes from the "phase winding" method, which uses trigonometric identities to avoid expensive sin and cos evaluations. One can also use astropy's fast method as a reference with exact evaluation enabled via use_fft=False. One finds the same result, but the phase winding is a few orders of magnitude faster (but still not competitive with finufft).
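
To illustrate the phase-winding idea, here is a minimal sketch (not the reference implementation used for the figure) that evaluates the trigonometric sums exactly on an equi-spaced frequency grid, calling sin and cos only once per observation and advancing each phase with the angle-addition identities:

# Phase-winding sketch: exact evaluation of
#   C_k = sum_j h_j cos(2*pi*f_k*t_j),  S_k = sum_j h_j sin(2*pi*f_k*t_j)
# on f_k = fmin + k*df, without calling sin/cos inside the frequency loop.
import numpy as np

def trig_sums_phase_winding(t, h, fmin, df, Nf):
    # Starting phases and per-step rotation for each observation
    cos_w = np.cos(2 * np.pi * fmin * t)
    sin_w = np.sin(2 * np.pi * fmin * t)
    cos_d = np.cos(2 * np.pi * df * t)
    sin_d = np.sin(2 * np.pi * df * t)

    C = np.empty(Nf)
    S = np.empty(Nf)
    for k in range(Nf):
        C[k] = np.dot(h, cos_w)
        S[k] = np.dot(h, sin_w)
        # Rotate every phase by 2*pi*df*t_j via the angle-addition identities
        cos_w, sin_w = (cos_w * cos_d - sin_w * sin_d,
                        sin_w * cos_d + cos_w * sin_d)
    return C, S

# Quick check against direct evaluation
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 100, size=200))
h = rng.normal(size=200)
C, S = trig_sums_phase_winding(t, h, fmin=0.01, df=0.01, Nf=50)
f = 0.01 + 0.01 * np.arange(50)
assert np.allclose(C, (h * np.cos(2 * np.pi * np.outer(f, t))).sum(axis=1))
assert np.allclose(S, (h * np.sin(2 * np.pi * np.outer(f, t))).sum(axis=1))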

In summary, nifty-ls is highly accurate while also giving high performance.

float32 vs float64

While 32-bit floats provide a substantial speedup for finufft and cufinufft, we generally don't recommend their use for Lomb-Scargle. The reason is the challenging condition number of the problem. The condition number is the response in the output to a small perturbation in the input—in other words, the derivative. It can easily be shown that the derivative of a NUFFT with respect to the non-uniform points is proportional to $N$, the transform length (i.e. the number of modes). In other words, errors in the observation times are amplified by $\mathcal{O}(N)$. Since float32 has a relative error of $\mathcal{O}(10^{-7})$, transforms of length $10^5$ already suffer $\mathcal{O}(1\%)$ error. Therefore, we focus on float64 in nifty-ls, but float32 is also natively supported by all backends for adventurous users.
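
As a rough empirical check of this effect, one can compute the same periodogram in both precisions and compare. A sketch, assuming (as for the backends described above) that the working precision follows the dtype of the inputs:

# Sketch: compare a float32 periodogram against a float64 reference.
# The relative error should grow roughly with the transform length.
import nifty_ls
import numpy as np

rng = np.random.default_rng(3)
N = 10**5
t = np.sort(rng.uniform(0, 100, size=N))
y = np.sin(50 * t) + rng.normal(size=N)

ref = nifty_ls.lombscargle(t, y)                                   # float64
lo  = nifty_ls.lombscargle(t.astype(np.float32), y.astype(np.float32))

rel_err = np.abs(lo.power - ref.power) / np.abs(ref.power).max()
print(f"max relative error (float32 vs float64): {rel_err.max():.2e}")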

The condition number is also a likely contributor to the mild upward trend in error versus $N$ in the above figure, at least for finufft/cufinufft. With a relative error of $\mathcal{O}(10^{-16})$ for float64 and a transform length of $\mathcal{O}(10^{6})$, the minimum error is $\mathcal{O}(10^{-10})$.

Testing

First, install from source (pip install .[test]). Then, from the repo root, run:

$ pytest

The tests are defined in the tests/ directory, and include a mini-benchmark of nifty-ls and Astropy, shown below:

$ pytest
======================================================== test session starts =========================================================
platform linux -- Python 3.10.13, pytest-8.1.1, pluggy-1.4.0
benchmark: 4.0.0 (defaults: timer=time.perf_counter disable_gc=True min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /mnt/home/lgarrison/nifty-ls
configfile: pyproject.toml
plugins: benchmark-4.0.0, asdf-2.15.0, anyio-3.6.2, hypothesis-6.23.1
collected 36 items                                                                                                                   

tests/test_ls.py ......................                                                                                        [ 61%]
tests/test_perf.py ..............                                                                                              [100%]


----------------------------------------- benchmark 'Nf=1000': 5 tests ----------------------------------------
Name (time in ms)                       Min                Mean            StdDev            Rounds  Iterations
---------------------------------------------------------------------------------------------------------------
test_batched[finufft-1000]           6.8418 (1.0)        7.1821 (1.0)      0.1831 (1.32)         43           1
test_batched[cufinufft-1000]         7.7027 (1.13)       8.6634 (1.21)     0.9555 (6.89)         74           1
test_unbatched[finufft-1000]       110.7541 (16.19)    111.0603 (15.46)    0.1387 (1.0)          10           1
test_unbatched[astropy-1000]       441.2313 (64.49)    441.9655 (61.54)    1.0732 (7.74)          5           1
test_unbatched[cufinufft-1000]     488.2630 (71.36)    496.0788 (69.07)    6.1908 (44.63)         5           1
---------------------------------------------------------------------------------------------------------------

--------------------------------- benchmark 'Nf=10000': 3 tests ----------------------------------
Name (time in ms)            Min              Mean            StdDev            Rounds  Iterations
--------------------------------------------------------------------------------------------------
test[finufft-10000]       1.8481 (1.0)      1.8709 (1.0)      0.0347 (1.75)        507           1
test[cufinufft-10000]     5.1269 (2.77)     5.2052 (2.78)     0.3313 (16.72)       117           1
test[astropy-10000]       8.1725 (4.42)     8.2176 (4.39)     0.0198 (1.0)         113           1
--------------------------------------------------------------------------------------------------

----------------------------------- benchmark 'Nf=100000': 3 tests ----------------------------------
Name (time in ms)              Min               Mean            StdDev            Rounds  Iterations
-----------------------------------------------------------------------------------------------------
test[cufinufft-100000]      5.8566 (1.0)       6.0411 (1.0)      0.7407 (10.61)       159           1
test[finufft-100000]        6.9766 (1.19)      7.1816 (1.19)     0.0748 (1.07)        132           1
test[astropy-100000]       47.9246 (8.18)     48.0828 (7.96)     0.0698 (1.0)          19           1
-----------------------------------------------------------------------------------------------------

------------------------------------- benchmark 'Nf=1000000': 3 tests --------------------------------------
Name (time in ms)                  Min                  Mean            StdDev            Rounds  Iterations
------------------------------------------------------------------------------------------------------------
test[cufinufft-1000000]         8.0038 (1.0)          8.5193 (1.0)      1.3245 (1.62)         84           1
test[finufft-1000000]          74.9239 (9.36)        76.5690 (8.99)     0.8196 (1.0)          10           1
test[astropy-1000000]       1,430.4282 (178.72)   1,434.7986 (168.42)   5.5234 (6.74)          5           1
------------------------------------------------------------------------------------------------------------

Legend:
  Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
  OPS: Operations Per Second, computed as 1 / Mean
======================================================== 36 passed in 30.81s =========================================================

The results were obtained using 16 cores of an Intel Icelake CPU and 1 NVIDIA A100 GPU. The ratios of the runtimes relative to the fastest are shown in parentheses. You may obtain very different performance on your platform! The slowest Astropy results in particular may depend on the NumPy distribution you have installed and its trig function performance.

Authors

nifty-ls was originally implemented by Lehman Garrison based on work done by Dan Foreman-Mackey in the dfm/nufft-ls repo, with consulting from Alex Barnett.

Acknowledgements

nifty-ls builds directly on top of the excellent finufft package by Alex Barnett and others (see the finufft Acknowledgements).

Many parts of this package are an adaptation of Astropy LombScargle, in particular the Press & Rybicki (1989) method.
