NVIDIA Modulus

A deep learning framework for AI-driven multi-physics systems

Getting Started | Install guide | Contributing Guidelines | Resources | Communication
What is Modulus?
NVIDIA Modulus is an open-source deep learning framework for building, training, and fine-tuning models using state-of-the-art SciML methods for AI4Science and engineering.
Modulus provides utilities and optimized pipelines to develop AI models that combine physics knowledge with data, enabling real-time predictions.
Whether you are exploring the use of neural operators, GNNs, or transformers, or are interested in physics-informed neural networks or a hybrid approach in between, Modulus provides an optimized stack that enables you to train your models at scale.
- More About Modulus
- Who is contributing to Modulus
- Why use Modulus
- Getting Started
- Resources
- Installation
- Contributing
- Communication
- License
More About Modulus
At a granular level, Modulus provides a library of a few key components:
Component | Description |
---|---|
modulus.models | A collection of optimized, customizable, and easy-to-use models such as Fourier Neural Operators, Graph Neural Networks, and many more |
modulus.datapipes | A data pipeline and data loader library, including benchmark datapipes, weather datapipes, and graph datapipes |
modulus.distributed | A distributed computing library built on top of torch.distributed to enable parallel training with just a few steps |
modulus.sym.geometry | A library to handle geometry for DL training using constructive solid geometry (CSG) modeling and CAD files in STL format |
modulus.sym.eq | A library to use PDEs in your DL training, with several implementations of commonly used equations and easy ways to customize them |
For a complete list, refer to the Modulus API documentation for Modulus Core and Modulus Sym.
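As an illustration of the modulus.sym.eq component, the sketch below defines a custom PDE residual symbolically. It is a minimal sketch that assumes the Modulus Sym package (`nvidia-modulus.sym`) is installed and that the `PDE` base class is importable from `modulus.sym.eq.pde`, as in recent releases; exact module paths may differ between versions.

```python
# Minimal sketch: a custom PDE residual with Modulus Sym
# (assumes nvidia-modulus.sym is installed; import paths may vary by version).
from sympy import Function, Symbol

from modulus.sym.eq.pde import PDE


class Diffusion1D(PDE):
    """Steady 1D diffusion, d/dx(D * du/dx) = 0, for a constant diffusivity D."""

    def __init__(self, D=1.0):
        x = Symbol("x")
        u = Function("u")(x)

        # Residuals are stored as named sympy expressions, which Modulus Sym
        # can turn into training constraints for a physics-informed model.
        self.equations = {}
        self.equations["diffusion_u"] = (D * u.diff(x)).diff(x)
```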
Usually, Modulus is used either as:
- A complementary tool to PyTorch when exploring AI for SciML and AI4Science applications.
- A deep learning research platform that provides scale and optimal performance on NVIDIA GPUs.
Elaborating Further:
Scalable GPU-Optimized Training Library
Modulus provides a highly optimized and scalable training library for maximizing the power of NVIDIA GPUs. Distributed computing utilities allow efficient scaling from a single GPU to multi-node GPU clusters with a few lines of code, ensuring that large-scale physics-informed machine learning (ML) models can be trained quickly and effectively. The framework includes support for advanced optimization utilities, tailor-made datapipes, and validation utilities to enhance end-to-end training speed.
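For example, the modulus.distributed utilities can be paired with standard PyTorch DistributedDataParallel. The sketch below assumes the script is launched with `torchrun`; attribute names such as `local_rank` follow recent Modulus releases but may differ between versions.

```python
# Minimal sketch: multi-GPU training with modulus.distributed
# (launch with torchrun; attribute names are assumptions based on recent releases).
import torch
from torch.nn.parallel import DistributedDataParallel

from modulus.distributed import DistributedManager
from modulus.models.mlp.fully_connected import FullyConnected

DistributedManager.initialize()  # reads rank/world size set up by torchrun
dist = DistributedManager()

model = FullyConnected(in_features=32, out_features=64).to(dist.device)
if dist.distributed:
    model = DistributedDataParallel(model, device_ids=[dist.local_rank])

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):
    x = torch.randn(128, 32, device=dist.device)
    loss = model(x).pow(2).mean()  # placeholder loss for illustration
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```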
A Suite of Physics-Informed ML Models
Modulus offers a comprehensive library of state-of-the-art models specifically designed for physics-ML applications. The Model Zoo includes generalizable model architectures such as Fourier Neural Operators (FNOs), DeepONet, Physics-Informed Neural Networks (PINNs), Graph Neural Networks (GNNs), and generative AI models like Diffusion Models as well as domain-specific models such as Deep Learning Weather Prediction (DLWP) and Super Resolution Network (SrNN) among others. These models are optimized for various physics domains, such as computational fluid dynamics, structural mechanics, and electromagnetics. Users can download, customize, and build upon these models to suit their specific needs, significantly reducing the time required to develop high-fidelity simulations.
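For instance, a Fourier Neural Operator from the Model Zoo can be instantiated and called like any other PyTorch module. The keyword arguments below follow the `FNO` constructor of recent Modulus releases but should be treated as assumptions; consult the API documentation for your version.

```python
# Minimal sketch: using a Model Zoo architecture (FNO) on a 2D grid
# (constructor arguments are assumptions based on recent Modulus releases).
import torch

from modulus.models.fno import FNO

model = FNO(
    in_channels=1,      # e.g. a permeability field
    out_channels=1,     # e.g. the predicted pressure field
    dimension=2,        # 2D spatial problem
    latent_channels=32,
    num_fno_layers=4,
    num_fno_modes=12,
    padding=9,
)

x = torch.randn(4, 1, 64, 64)  # (batch, channels, height, width)
y = model(x)                   # output keeps the input's spatial shape
print(y.shape)                 # torch.Size([4, 1, 64, 64])
```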
Seamless PyTorch Integration
Modulus is built on top of PyTorch, providing a familiar and user-friendly experience for those already proficient with PyTorch. This includes a simple Python interface and modular design, making it easy to use Modulus with existing PyTorch workflows. Users can leverage the extensive PyTorch ecosystem, including its libraries and tools while benefiting from Modulus's specialized capabilities for physics-ML. This seamless integration ensures users can quickly adopt Modulus without a steep learning curve.
For more information, refer to Converting PyTorch Models to Modulus Models.
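A minimal sketch of that conversion is shown below; it assumes the `Module.from_torch` helper available in recent Modulus releases, whose exact signature (for example, an optional metadata argument) may vary.

```python
# Minimal sketch: wrapping a plain PyTorch model as a Modulus model
# (Module.from_torch is assumed from recent releases; its signature may vary).
import torch

import modulus


class TorchMLP(torch.nn.Module):
    """A plain PyTorch model written without any Modulus dependency."""

    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(32, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, 64),
        )

    def forward(self, x):
        return self.net(x)


# Generate a Modulus Module class from the PyTorch class, gaining Modulus
# features such as standardized checkpointing.
ModulusMLP = modulus.Module.from_torch(TorchMLP)

model = ModulusMLP()
out = model(torch.randn(8, 32))
print(out.shape)  # torch.Size([8, 64])
```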
Easy Customization and Extension
Modulus is designed to be highly extensible, allowing users to add new functionality with minimal effort. The framework provides Pythonic APIs for defining new physics models, geometries, and constraints, making it easy to extend its capabilities to new use cases. The adaptability of Modulus is further enhanced by key features such as ONNX support for flexible model deployment, robust logging utilities for streamlined error handling, and efficient checkpointing to simplify model loading and saving.
This extensibility ensures that Modulus can adapt to the evolving needs of researchers and engineers, facilitating the development of innovative solutions in the field of physics-ML.
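As one example of these utilities, the sketch below saves and restores training state with the checkpointing helpers in modulus.launch.utils; the function names and keyword arguments are assumptions based on recent releases, so check the API documentation for your version.

```python
# Minimal sketch: checkpointing with modulus.launch.utils
# (function names and keyword arguments are assumptions from recent releases).
import torch

from modulus.launch.utils import load_checkpoint, save_checkpoint
from modulus.models.mlp.fully_connected import FullyConnected

model = FullyConnected(in_features=32, out_features=64)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Save model and optimizer state after the first epoch
save_checkpoint("./checkpoints", models=model, optimizer=optimizer, epoch=1)

# Later, restore the state and get the epoch to resume from
resume_epoch = load_checkpoint(
    "./checkpoints", models=model, optimizer=optimizer, device="cpu"
)
print(resume_epoch)  # 1
```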
Detailed information on features and capabilities can be found in the Modulus documentation.
Reference samples cover a broad spectrum of physics-constrained and data-driven workflows to suit the diversity of use cases in the science and engineering disciplines.
[!TIP] Have questions about how Modulus can assist you? Try our [Experimental] chatbot, Modulus Guide, for answers.
Hello world
You can start using Modulus in your PyTorch code as simply as shown here:
```python
>>> import torch
>>> from modulus.models.mlp.fully_connected import FullyConnected
>>> model = FullyConnected(in_features=32, out_features=64)
>>> input = torch.randn(128, 32)
>>> output = model(input)
>>> output.shape
torch.Size([128, 64])
```
AI4Science Library
- Modulus Symbolic: This repository of algorithms and utilities allows SciML researchers and developers to physics-inform model training and model validation. It also provides a higher-level abstraction for domain experts that is native to science and engineering.
Domain Specific Packages
The following packages are dedicated to domain experts in specific communities, catering to their unique exploration needs.
- Earth-2 Studio: Open source project to enable climate researchers and scientists to explore and experiment with AI models for weather and climate.
Research packages
The following are research packages that get packaged into Modulus once they are stable.
- Modulus Makani: Experimental library designed to enable the research and development of machine-learning based weather and climate models.
- Earth2 Grid: Experimental library with utilities for working with geographic data defined on various grids.
- Earth-2 MIP: Experimental library with utilities for model intercomparison for weather and climate models.
Who is using and contributing to Modulus
Modulus is an open-source project and receives contributions from researchers in the SciML and AI4Science fields. While the Modulus team works on optimizing the underlying software stack, the community collaborates and contributes model architectures, datasets, and reference applications so we can innovate in the pursuit of developing generalizable model architectures and algorithms.
Recent examples of community contributors include the HP Labs 3D Printing team, the Stanford Cardiovascular Research team, and teams at UIUC and CMU, among others.
Recent examples of research teams using Modulus include the ORNL team and the TU Munich CFD team, among others.
Please navigate to this page for a complete list of research work leveraging Modulus. For a list of enterprises using Modulus, refer here.
Using Modulus and interested in showcasing your work on NVIDIA Blogs? Fill out this proposal form and we will get back to you!
Why are they using Modulus
Here are some of the key benefits of Modulus for SciML model development:
SciML benchmarking and validation | Ease of using generalized SciML recipes with heterogeneous datasets | Out-of-the-box performance and scalability |
---|---|---|
Modulus enables researchers to benchmark their AI models against proven architectures for standard benchmark problems, with detailed domain-specific validation criteria. | Modulus enables researchers to pick from SOTA SciML architectures and use built-in data pipelines for their use case. | Modulus provides performant training pipelines out of the box, including optimized ETL pipelines for heterogeneous engineering and scientific datasets and built-in scaling across multi-GPU and multi-node systems. |
See what your peer SciML researchers are saying about Modulus (Coming soon).
Getting started
The following resources will help you learn how to use Modulus. The best way is to start with a reference sample and then update it for your own use case.
- Using Modulus with your PyTorch model
- Using Modulus built-in models
- Getting started Guide
- Reference Samples
- User guide Documentation
Resources
- Getting started Webinar
- AI4Science Modulus Bootcamp
- Modulus Pretrained models
- Modulus Datasets and Supplementary materials
- Self-paced Modulus DLI training
- Deep Learning for Science and Engineering Lecture Series with Modulus
- Video Tutorials
Installation
PyPI
The recommended method for installing the latest version of Modulus is using pip from PyPI:
```bash
pip install nvidia-modulus
```
The installation can be verified by running the hello world example as demonstrated here.
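As a further quick sanity check, you can confirm the package imports and print its version (recent releases expose a `__version__` attribute):

```python
# Quick sanity check that the Modulus package is importable
# (assumes the __version__ attribute exposed by recent releases).
import modulus

print(modulus.__version__)
```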
Optional dependencies
Modulus has many optional dependencies that are used in specific components.
When using pip, all dependencies used in Modulus can be installed with `pip install nvidia-modulus[all]`. If you are developing Modulus, developer dependencies can be installed using `pip install nvidia-modulus[dev]`. Otherwise, additional dependencies can be installed on a case-by-case basis. Detailed information on installing the optional dependencies can be found in the Getting Started Guide.
NVCR Container
The recommended Modulus docker image can be pulled from the NVIDIA Container Registry:
```bash
docker pull nvcr.io/nvidia/modulus/modulus:24.04
```
Inside the container, you can clone the Modulus git repositories and get started with the examples. The commands below launch the Modulus container and run an example from this repo.
```bash
docker run --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 --runtime nvidia \
    --rm -it nvcr.io/nvidia/modulus/modulus:24.04 bash
git clone https://github.com/NVIDIA/modulus.git
cd modulus/examples/cfd/darcy_fno/
pip install warp-lang  # install NVIDIA Warp to run the darcy example
python train_fno_darcy.py
```
For the enterprise-supported NVAIE container, refer to Modulus Secured Feature Branch.
From Source
Package
For a local build of the Modulus Python package from source use:
```bash
git clone git@github.com:NVIDIA/modulus.git && cd modulus
pip install --upgrade pip
pip install .
```
Source Container
To build the Modulus docker image:
```bash
docker build -t modulus:deploy \
    --build-arg TARGETPLATFORM=linux/amd64 --target deploy -f Dockerfile .
```
Alternatively, you can run `make container-deploy`.
To build the CI image:
```bash
docker build -t modulus:ci \
    --build-arg TARGETPLATFORM=linux/amd64 --target ci -f Dockerfile .
```
Alternatively, you can run `make container-ci`.
Currently, only `linux/amd64` and `linux/arm64` platforms are supported. If using `linux/arm64`, some dependencies like `warp-lang` might not install correctly.
Contributing to Modulus
Modulus is an open source collaboration and its success is rooted in community contribution to further the field of Physics-ML. Thank you for contributing to the project so others can build on top of your contribution.
For guidance on contributing to Modulus, please refer to the contributing guidelines.
Cite Modulus
If Modulus helped your research and you would like to cite it, please refer to the citation guidelines.
Communication
- GitHub Discussions: Discuss new architectures, implementations, Physics-ML research, etc.
- GitHub Issues: Bug reports, feature requests, install issues, etc.
- Modulus Forum: The Modulus Forum hosts an audience of new to moderate-level users and developers for general chat, online discussions, collaboration, etc.
Feedback
Want to suggest some improvements to Modulus? Use our feedback form here.
License
Modulus is provided under the Apache License 2.0, please see LICENSE.txt for full license text.