
Microsoft Health Intelligence package to elevate and monitor scripts to an AzureML workspace

Project description

Microsoft Health Intelligence Machine Learning Toolbox

Overview

This toolbox aims to provide low-level and high-level building blocks for Machine Learning / AI researchers and practitioners. It helps simplify and streamline work on deep learning models for healthcare and life sciences by providing tested components (data loaders, pre-processing), deep learning models, and cloud integration tools.

This toolbox is still at a very early stage and presently offers only the cloud integration components. ML components will be added in the next few weeks.

Getting started

  • Install from PyPI via pip, by running pip install hi-ml (a quick import check is shown below)
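
After installing, one quick sanity check is to import the Azure helper that the examples on this page use (the module path is the one shown in the examples below; it may change in later versions of the package):

# Confirms that hi-ml is installed and that the Azure submission helper can be imported.
from health.azure.himl import submit_to_azure_if_needed

print(submit_to_azure_if_needed.__module__)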

Documentation

The detailed package documentation, with examples and API reference, is on readthedocs.

Quick start: Using the Azure layer

Use case: you have a Python script that does something, such as training a model or pre-processing some data. The hi-ml package can help you run that script easily on Azure Machine Learning (AML) services.

Here is an example script that reads images from a folder, resizes and saves them to an output folder:

from pathlib import Path
if __name__ == '__main__':
    input_folder = Path("/tmp/my_dataset")
    output_folder = Path("/tmp/my_output")
    for file in input_folder.glob("*.jpg"):
        # read_image, resize and write_image are placeholders for the script's own image handling.
        contents = read_image(file)
        resized = contents.resize(0.5)
        write_image(resized, output_folder / file.name)

Doing that at scale can take a long time. We'd like to run that script in AzureML, consume the data from a folder in blob storage, and write the results back to blob storage.

With the hi-ml package, you can turn that script into one that runs on the cloud by adding one function call:

from pathlib import Path
from health.azure.himl import submit_to_azure_if_needed
if __name__ == '__main__':
    current_file = Path(__file__)
    run_info = submit_to_azure_if_needed(compute_cluster_name="preprocess-ds12",
                                         input_datasets=["images123"],
                                         # Omit this line if you don't create an output dataset (for example, in
                                         # model training scripts)
                                         output_datasets=["images123_resized"],
                                         default_datastore="my_datastore")
    # When running in AzureML, run_info.input_datasets and run_info.output_datasets will be populated,
    # and point to the data coming from blob storage. For runs outside AML, the paths will be None.
    # Replace the None with a meaningful path, so that we can still run the script easily outside AML.
    input_dataset = run_info.input_datasets[0] or Path("/tmp/my_dataset")
    output_dataset = run_info.output_datasets[0] or Path("/tmp/my_output")
    files_processed = []
    for file in input_dataset.glob("*.jpg"):
        contents = read_image(file)
        resized = contents.resize(0.5)
        write_image(resized, output_dataset / file.name)
        files_processed.append(file.name)
    # Any other files that you would not consider an "output dataset", like metrics, etc, should be written to
    # a folder "./outputs". Any files written into that folder will later be visible in the AzureML UI.
    # run_info.output_folder already points to the correct folder.
    stats_file = run_info.output_folder / "processed_files.txt"
    stats_file.write_text("\n".join(files_processed))

Once these changes are in place, you can submit the script to AzureML by supplying an additional --azureml flag on the command line, for example python myscript.py --azureml.
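
If your script parses its own command-line arguments, one simple pattern (purely illustrative, not part of the hi-ml API) is to strip the --azureml flag before handing the remaining arguments to the script's own parser, so that the same parser works both for local runs and for submitted runs:

import argparse
import sys

def parse_script_args() -> argparse.Namespace:
    # --azureml is the flag hi-ml looks for (see above); drop it here so the
    # script's own parser does not reject it as an unknown argument.
    args = [arg for arg in sys.argv[1:] if arg != "--azureml"]
    parser = argparse.ArgumentParser()
    # --scale is a hypothetical script argument, shown only for illustration.
    parser.add_argument("--scale", type=float, default=0.5,
                        help="Resize factor applied to each image.")
    return parser.parse_args(args)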

That's it!

For details, please refer to the onboarding page.

For more examples, please see examples.md.

Issues

If you've found a bug in the code, please check the issues page. If no matching issue exists, please open a new one. Be sure to include

  • A descriptive title
  • Expected behavior (including a code sample if possible)
  • Actual behavior

Contributing

We welcome all contributions that help us achieve our aim of speeding up ML/AI research in health and life sciences. Examples of contributions are

  • Data loaders for specific health & life sciences data
  • Network architectures and components for deep learning models
  • Tools to analyze and/or visualize data
  • ...

Please check the detailed page about contributions.

Licensing

MIT License

You are responsible for the performance, the necessary testing, and, if needed, any regulatory clearance for any of the models produced by this toolbox.

Contact

If you have any feature requests, or find issues in the code, please create an issue on GitHub.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

hi-ml-0.1.4.tar.gz (27.4 kB)

Uploaded Source

Built Distribution

hi_ml-0.1.4-py3-none-any.whl (29.2 kB)

Uploaded Python 3

File details

Details for the file hi-ml-0.1.4.tar.gz.

File metadata

  • Download URL: hi-ml-0.1.4.tar.gz
  • Upload date:
  • Size: 27.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.2 CPython/3.9.7

File hashes

Hashes for hi-ml-0.1.4.tar.gz

  • SHA256: a0cfae5f4d5da07e41690725bb7e1e13068d131a2bf833f9b7e96c2c7cd86d8d
  • MD5: 4f1ac08a36f34d74202bd9ae18a3383a
  • BLAKE2b-256: ff9888a75a3820616e30165ae57228cbc0eed14ebbc331e0c6e7c4af1a72f68a

See more details on using hashes here.
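
As an illustration, a minimal sketch for verifying a downloaded archive against the SHA256 digest listed above (the local file path is an assumption):

import hashlib
from pathlib import Path

# Hypothetical local path of the downloaded source distribution.
archive = Path("hi-ml-0.1.4.tar.gz")

# Compute the SHA256 digest and compare it with the value published above.
digest = hashlib.sha256(archive.read_bytes()).hexdigest()
expected = "a0cfae5f4d5da07e41690725bb7e1e13068d131a2bf833f9b7e96c2c7cd86d8d"
print("OK" if digest == expected else "Hash mismatch!")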

File details

Details for the file hi_ml-0.1.4-py3-none-any.whl.

File metadata

  • Download URL: hi_ml-0.1.4-py3-none-any.whl
  • Upload date:
  • Size: 29.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.2 CPython/3.9.7

File hashes

Hashes for hi_ml-0.1.4-py3-none-any.whl

  • SHA256: dbcbbbe49686ac75e6a3790ba748e8e767d9bdea0f820e2ed1e4fab141f397d1
  • MD5: 22265c9729811a6265f7e4bbb93ca292
  • BLAKE2b-256: d48298cf1d686c8e32ef48346fbb90be99b395c52e8d87286373b963429b3fef

See more details on using hashes here.
