A robust, easy-to-deploy non-uniform Fast Fourier Transform in PyTorch.

Project description

Torch KB-NUFFT

API | GitHub | Notebook Examples

Simple installation from PyPI:

pip install torchkbnufft

About

Torch KB-NUFFT implements a non-uniform Fast Fourier Transform with Kaiser-Bessel gridding in PyTorch. The implementation is completely in Python, facilitating robustness and flexible deployment in human-readable code. NUFFT functions are each wrapped as a torch.autograd.Function, allowing backpropagation through NUFFT operators for training neural networks.
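
The autograd wrapping can be exercised with a short gradient check. The following is a minimal sketch (not taken from the package documentation); it assumes the (batch, coil, real/imaginary, ny, nx) tensor layout used in the examples below:

import numpy as np
import torch
from torchkbnufft import KbNufft

im_size = (400, 400)
nufft_ob = KbNufft(im_size=im_size)

# image with batch and coil dimensions of size 1; real/imaginary parts in dim 2
x = torch.randn(1, 1, 2, *im_size, dtype=torch.double, requires_grad=True)
# one radial spoke of 64 k-space locations (in radians/voxel)
ktraj = torch.tensor(
    np.stack((np.zeros(64), np.linspace(-np.pi, np.pi, 64)))
).unsqueeze(0)

kdata = nufft_ob(x, ktraj)   # (1, 1, 2, 64) k-space data
loss = kdata.pow(2).sum()    # toy scalar loss
loss.backward()              # gradient w.r.t. the image accumulates in x.grad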

Documentation

Most files are accompanied by docstrings that can be read with help() in Python or IPython. Example:

from torchkbnufft import KbNufft

help(KbNufft)

Behavior can also be inferred by inspecting the source code here. An HTML-based API reference is here.

Examples

Simple Forward NUFFT

Jupyter notebook examples are in the notebooks/ folder. The following minimalist code loads a Shepp-Logan phantom and computes a single radial spoke of k-space data:

import torch
import numpy as np
from torchkbnufft import KbNufft
from skimage.data import shepp_logan_phantom

x = shepp_logan_phantom().astype(complex)
im_size = x.shape
x = np.stack((np.real(x), np.imag(x)))
# convert to tensor, unsqueeze batch and coil dimension
# output size: (1, 1, 2, ny, nx)
x = torch.tensor(x).unsqueeze(0).unsqueeze(0)

klength = 64
ktraj = np.stack(
    (np.zeros(klength), np.linspace(-np.pi, np.pi, klength))
)
# convert to tensor, unsqueeze batch dimension
# output size: (1, 2, klength)
ktraj = torch.tensor(ktraj).unsqueeze(0)

nufft_ob = KbNufft(im_size=im_size)
# outputs a (1, 1, 2, klength) tensor of k-space data
kdata = nufft_ob(x, ktraj)

A detailed example of basic NUFFT usage is included in notebooks/Basic Example.ipynb.
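
The corresponding adjoint object maps k-space data back onto the image grid. A minimal sketch reusing kdata and ktraj from above (no density compensation is applied here, so the result is a blurred image):

from torchkbnufft import AdjKbNufft

adjnufft_ob = AdjKbNufft(im_size=im_size)
# grid the radial k-space data back onto the image grid
# output size: (1, 1, 2, ny, nx)
image = adjnufft_ob(kdata, ktraj)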

SENSE-NUFFT

The package also includes utilities for working with SENSE-NUFFT operators. The above code can be modified to replace the nufft_ob with the following sensenufft_ob:

from torchkbnufft import MriSenseNufft

smap = torch.rand(1, 8, 2, 400, 400)
sensenufft_ob = MriSenseNufft(im_size=im_size, smap=smap)
sense_data = sensenufft_ob(x, ktraj)

Application of the object in place of nufft_ob above would first multiply the image by the coil sensitivity maps in smap, then compute a 64-sample radial spoke of k-space data for each coil. All operations are broadcast across coils, which minimizes interaction with the Python interpreter and maximizes computation speed.
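
For the reverse direction, a coil-combined image can be formed with the adjoint SENSE-NUFFT object. A minimal sketch, assuming AdjMriSenseNufft mirrors the forward constructor:

from torchkbnufft import AdjMriSenseNufft

adjsensenufft_ob = AdjMriSenseNufft(im_size=im_size, smap=smap)
# grids each coil's k-space data, applies the conjugate sensitivities,
# and sums over coils; output size: (1, 1, 2, 400, 400)
sense_image = adjsensenufft_ob(sense_data, ktraj)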

A detailed example of SENSE-NUFFT usage is included in notebooks/SENSE Example.ipynb.

Sparse Matrix Precomputation

Previously, sparse matrix-based interpolation was the fastest operation mode. As of v0.2.0, this is no longer the case on the GPU. Nonetheless, sparse matrices can be faster than the normal operation mode in certain situations (e.g., when using many coils and slices). The following code calculates sparse interpolation matrices and uses them to apply the adjoint NUFFT to the k-space data from the example above:

from torchkbnufft import AdjKbNufft
from torchkbnufft.nufft.sparse_interp_mat import precomp_sparse_mats

adjnufft_ob = AdjKbNufft(im_size=im_size)

# precompute the sparse interpolation matrices
real_mat, imag_mat = precomp_sparse_mats(ktraj, adjnufft_ob)
interp_mats = {
    'real_interp_mats': real_mat,
    'imag_interp_mats': imag_mat
}

# use sparse matrices in adjoint
image = adjnufft_ob(kdata, ktraj, interp_mats)
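
The same precomputed matrices can also be passed to the forward object from the first example. This is a sketch that assumes the forward call accepts interp_mats in the same way as the adjoint:

# reuse the interpolation matrices in the forward NUFFT
kdata_sparse = nufft_ob(x, ktraj, interp_mats)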

A detailed example of sparse matrix precomputation usage is included in notebooks/Sparse Matrix Example.ipynb.

Running on the GPU

All of the examples included in this repository can be run on the GPU by sending the NUFFT object and data to the GPU prior to the function call, e.g.:

adjnufft_ob = adjnufft_ob.to(torch.device('cuda'))
kdata = kdata.to(torch.device('cuda'))
ktraj = ktraj.to(torch.device('cuda'))

image = adjnufft_ob(kdata, ktraj)

As with low-level code, PyTorch will throw errors if the dtypes and devices of the objects involved do not match. Make sure your data and NUFFT objects are on the same device and use the same dtype to avoid these errors.
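
One pattern for keeping everything consistent, assuming the NUFFT objects accept the usual torch.nn.Module.to arguments for both device and dtype:

device = torch.device('cuda')
dtype = torch.float32  # must match across the operator, data, and trajectory

adjnufft_ob = adjnufft_ob.to(device=device, dtype=dtype)
kdata = kdata.to(device=device, dtype=dtype)
ktraj = ktraj.to(device=device, dtype=dtype)

image = adjnufft_ob(kdata, ktraj)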

Computation Speed

TorchKbNufft is first and foremost designed to be lightweight with minimal dependencies outside of PyTorch. The following computation times in seconds were observed on a workstation with a Xeon E5-1620 CPU and an Nvidia GTX 1080 GPU for a 15-coil, 405-spoke 2D radial problem.

Operation      CPU (normal)  CPU (sparse matrix)  GPU (normal)  GPU (sparse matrix)
Forward NUFFT  3.23          3.38                 9.34e-02      9.34e-02
Adjoint NUFFT  4.48          1.04                 1.03e-01      1.15e-01

Profiling for your machine can be done by running:

python profile_torchkbnufft.py
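
For a quick spot check of a single operation, wall-clock timing can also be done by hand. A rough sketch, assuming nufft_ob, x, and ktraj have been moved to the GPU as shown above (the synchronize calls are only needed on the GPU):

import time

torch.cuda.synchronize()
start = time.perf_counter()
kdata = nufft_ob(x, ktraj)
torch.cuda.synchronize()
print('forward NUFFT: {:.3e} s'.format(time.perf_counter() - start))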

References

Fessler, J. A., & Sutton, B. P. (2003). Nonuniform fast Fourier transforms using min-max interpolation. IEEE Transactions on Signal Processing, 51(2), 560-574.

Beatty, P. J., Nishimura, D. G., & Pauly, J. M. (2005). Rapid gridding reconstruction with a minimal oversampling ratio. IEEE Transactions on Medical Imaging, 24(6), 799-808.

Citation

If you want to cite the package, you can use the following:

@misc{Muckley2019,
  author = {Muckley, M.J. et al.},
  title = {Torch KB-NUFFT},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/mmuckley/torchkbnufft}}
}

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

torchkbnufft-0.2.1.tar.gz (21.2 kB)

Uploaded Source

Built Distribution

torchkbnufft-0.2.1-py3-none-any.whl (31.5 kB)

Uploaded Python 3

File details

Details for the file torchkbnufft-0.2.1.tar.gz.

File metadata

  • Download URL: torchkbnufft-0.2.1.tar.gz
  • Upload date:
  • Size: 21.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/2.0.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.6.0.post20191030 requests-toolbelt/0.9.1 tqdm/4.36.1 CPython/3.7.5

File hashes

Hashes for torchkbnufft-0.2.1.tar.gz

  • SHA256: e964379f2eebae2a436529019ba39d40bbe9596573faf45bf501f7f01249eec4
  • MD5: 3fa3335618be26fe72d13f3f39e69e6c
  • BLAKE2b-256: 4ede4f696b52c825858d1f73cbc1371fcb3ff385900abef20dd806d9f5023703

See more details on using hashes here.

File details

Details for the file torchkbnufft-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: torchkbnufft-0.2.1-py3-none-any.whl
  • Upload date:
  • Size: 31.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/2.0.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.6.0.post20191030 requests-toolbelt/0.9.1 tqdm/4.36.1 CPython/3.7.5

File hashes

Hashes for torchkbnufft-0.2.1-py3-none-any.whl

  • SHA256: 65b5174d26ee5bbcbb4402a4fe9c42129c34f4fe33c00b06dfe507183945f693
  • MD5: a35656f5d0a03794482fceb9fd3da597
  • BLAKE2b-256: 8d9b81f6158b31c4038ff27935b40312e1f7294b7a33a66bbcbd4584f92c1a53

See more details on using hashes here.
