
PyDicer: PYthon Dicom Image ConvertER

Welcome to PyDicer, a tool to ease the process of converting radiotherapy DICOM data objects into a format typically used for research purposes. In addition to data conversion, functionality is provided to help analyse the data, including computing radiomic features, radiotherapy dose metrics and auto-segmentation metrics. PyDicer uses the NIfTI format to store data in a well-defined file system structure. Tracking of these data objects in CSV files, also stored on the file system, provides an easy and flexible way to work with the converted data in your research.

The PyDicer documentation provides several examples and guides to help you get started with the tool. Here are a few PyDicer principles to keep in mind as you get started:

  • The working directory structure is standardised and generalisable for use with any DICOM dataset.
  • Use Pandas DataFrames to work with converted data objects (see the sketch after this list).
  • SimpleITK and PlatiPy are used under the hood for the image conversion, visualisation and analysis tasks.
  • Always inspect visualisations, plots and metrics produced by PyDicer in your working directory. Remember, PyDicer is a research tool so only use it for research purposes and expect the unexpected!
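Because the tracking files are plain CSVs, you can load them directly with Pandas. Here is a minimal sketch, assuming each patient directory under [working]/data tracks its converted objects in a converted.csv file with a modality column (the file name and column name are assumptions for illustration):

import pandas as pd
from pathlib import Path

working = Path("./testdata")

# Gather the per-patient tracking CSVs (path layout assumed, see above)
csv_files = sorted(working.joinpath("data").glob("*/converted.csv"))

# Combine them into a single DataFrame and filter, e.g. keep only CT images
df = pd.concat((pd.read_csv(f, index_col=0) for f in csv_files), ignore_index=True)
print(df[df["modality"] == "CT"])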

Installation

PyDicer currently supports Python versions 3.8, 3.9 and 3.10. Install PyDicer in your Python environment using pip:

pip install pydicer

Supported Modalities

PyDicer currently supports converting and analysing the following DICOM modalities:

  • CT
  • MR
  • PT (Experimental)
  • RTSTRUCT
  • RTPLAN (Not converted since this only consists of metadata)
  • RTDOSE

Directory Structure

PyDicer will place converted and intermediate files into a specific directory structure. Visualisations, computed metrics and plots are also stored alongside the converted data objects. Within the configured working directory [working], the following directories will be generated:

  • [working]/data: Directory in which converted data will be placed
  • [working]/quarantine: Files which couldn't be preprocessed or converted will be placed in here for you to investigate further (see the sketch after this list)
  • [working]/.pydicer: Intermediate files as well as log output will be stored in here
  • [working]/[dataset_name]: Clean datasets prepared using the Dataset Preparation Module will be stored in a directory with their name and will symbolically link to the converted data objects in the [working]/data directory
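For example, after a run you can quickly check whether anything was quarantined using only the directory layout described above (a sketch using just the standard library):

from pathlib import Path

working = Path("./testdata")

# Files which couldn't be preprocessed or converted land in
# [working]/quarantine; list them so they can be investigated further.
for quarantined_file in working.joinpath("quarantine").rglob("*"):
    if quarantined_file.is_file():
        print(quarantined_file)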

Pipeline

The pipeline handles everything from fetching the DICOM data through to conversion and preparation of your research dataset. Here are the key steps of the pipeline (a sketch of running the steps individually follows the list):

  1. Input: Various classes are provided to fetch DICOM files from the file system, a DICOM PACS, TCIA or Orthanc. A TestInput class is also provided to supply test data for development and testing.

  2. Preprocess: The DICOM files are sorted and linked. Error checking is performed, and issues are resolved where possible.

  3. Conversion: The DICOM files are converted to the target format (NIfTI).

  4. Visualisation: Visualisations of the converted data are prepared to assist with data selection.

  5. Dataset Preparation: The appropriate files from the converted data are selected to prepare a clean dataset ready for use in your research project!

  6. Analysis: Radiomic features and dose metrics are computed on the converted data.
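If you prefer to run the steps individually rather than calling run_pipeline(), the sketch below outlines one way to do so, assuming a PyDicer object configured as in the Getting Started section that follows. The method and attribute names are taken from the PyDicer documentation but should be treated as assumptions; check the API reference for the version you install.

# A sketch of driving the pipeline step by step. The names below
# (preprocess, convert, visualise, dataset, rt_latest_dose) are
# assumptions based on the PyDicer documentation.
pydicer.preprocess()        # sort, link and error-check the DICOM files
pydicer.convert.convert()   # convert the DICOM files to NIfTI
pydicer.visualise.visualise()  # prepare visualisations for data selection
pydicer.dataset.prepare("clean", "rt_latest_dose")  # prepare a clean dataset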

Getting Started

Running the pipeline is easy. The following script will get you started:

from pathlib import Path

from pydicer.input.test import TestInput
from pydicer import PyDicer

# Configure working directory
directory = Path("./testdata")
directory.mkdir(exist_ok=True, parents=True)

# Fetch some test DICOM data to convert
dicom_directory = directory.joinpath("dicom")
dicom_directory.mkdir(exist_ok=True, parents=True)
test_input = TestInput(dicom_directory)
test_input.fetch_data()

# Create the PyDicer tool object and add the dicom directory as an input location
pydicer = PyDicer(directory)
pydicer.add_input(dicom_directory)

# Run the pipeline
pydicer.run_pipeline()
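With the pipeline complete, the analysis module can compute radiomic features and dose metrics on the converted data. A minimal sketch follows; the method names are stated here as assumptions based on the PyDicer documentation, so confirm them against the API reference:

# Compute analysis outputs on the converted data (method names assumed)
pydicer.analyse.compute_radiomics()  # radiomic features for each structure
pydicer.analyse.compute_dvh()        # dose-volume histograms for dose metrics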

Contributing

PyDicer is an open-source tool and contributions are welcome! Here are some ways you might consider contributing to the project:

  • Reporting issues on GitHub.
  • Correcting/extending the documentation.
  • Contributing a bug fix or extending some functionality.
  • Providing functionality to support additional DICOM modalities.
  • Giving the PyDicer project a star on GitHub.

For more information, see the Contributing documentation.

Authors

PyDicer was developed by the Ingham Medical Physics team in South-Western Sydney. It was developed as part of the Australian Cancer Data Network supported by the Australian Research Data Commons.

Download files


Source Distribution

pydicer-0.2.0.tar.gz (66.2 kB)


Built Distribution

pydicer-0.2.0-py3-none-any.whl (81.7 kB)


File details

Details for the file pydicer-0.2.0.tar.gz.

File metadata

  • Download URL: pydicer-0.2.0.tar.gz
  • Upload date:
  • Size: 66.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.7.1 CPython/3.9.16 Darwin/22.6.0

File hashes

Hashes for pydicer-0.2.0.tar.gz:

  • SHA256: 9f0dd3b53931ca9100db9eb52f6725142570213eb184e26405c77dec709163be
  • MD5: 5cca380555203ba299486111086e010e
  • BLAKE2b-256: 1c1761776171aa1d701b9d94dcac78ec34178dd0ab2db01a1da5404379d2bb25


File details

Details for the file pydicer-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: pydicer-0.2.0-py3-none-any.whl
  • Upload date:
  • Size: 81.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.7.1 CPython/3.9.16 Darwin/22.6.0

File hashes

Hashes for pydicer-0.2.0-py3-none-any.whl:

  • SHA256: a2306de1d6f54bb5f81424639eb4837eec152fed51a00194d918e08c29ee4c47
  • MD5: e4d6c2c56db5c958a40ba3027792677f
  • BLAKE2b-256: 2fe79c02ee7ae53945a0f8eb86c228267745af4df9ade448de7f2e9a93946851

