
Project description

NVIDIA Federated Learning Application Runtime Environment

NVIDIA FLARE enables researchers to collaborate and build AI models without sharing private data.

NVIDIA FLARE is a standalone Python library designed to enable federated learning among different parties. Each client trains on its own local, securely protected data, while the framework coordinates and exchanges intermediate results across all sites to build a better global model without compromising data privacy. The participating clients can be anywhere in the world.

NVIDIA FLARE is built on a flexible, modular architecture abstracted through APIs, allowing developers and researchers to customize their implementations of functional learning components in a federated learning paradigm.
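To make the coordination pattern above concrete, here is a minimal, self-contained sketch of federated averaging in plain Python. This is a conceptual illustration, not the NVFLARE API: the function names `local_train` and `fed_avg` are hypothetical, and the "training" is a toy update. The key property it demonstrates is that only model weights leave each site; the raw data never does.

```python
# Conceptual sketch (NOT the NVFLARE API): one round of federated averaging.
# Each client updates the global weights using only its private data, and the
# server aggregates the resulting weights without ever seeing the data.

def local_train(weights, local_data):
    """Toy local update: nudge each weight toward the local data mean."""
    mean = sum(local_data) / len(local_data)
    return [w + 0.1 * (mean - w) for w in weights]

def fed_avg(client_updates):
    """Server-side aggregation: element-wise mean of client weight vectors."""
    n = len(client_updates)
    return [sum(ws) / n for ws in zip(*client_updates)]

# One federated round with two clients; each site's raw data stays local.
global_weights = [0.0, 0.0]
site_a_data = [1.0, 2.0, 3.0]
site_b_data = [5.0, 6.0, 7.0]

updates = [local_train(global_weights, site_a_data),
           local_train(global_weights, site_b_data)]
global_weights = fed_avg(updates)
```

In a real deployment, `local_train` would be a full training loop (e.g., epochs of SGD) and the server would run many such rounds, but the data-stays-local aggregation shape is the same.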

Learn more - NVIDIA FLARE.

Installation

To install the current release, you can simply run:

pip install nvflare

Quick Start

Release Highlights

Release 2.2.1

  • FL Simulator -- A lightweight simulator of a running NVFLARE FL deployment. It allows researchers to test and debug their application without provisioning a real project. The FL jobs run on a server and multiple clients in the same process but in a similar way to how it would run in a real deployment. Researchers can quickly build out new components and jobs that can then be directly used in a real production deployment.

  • FLARE Dashboard -- NVFLARE's web UI. In its initial incarnation, the FLARE Dashboard helps with project setup, user registration, startup-kit distribution, and dynamic provisioning. Dashboard setup and APIs can be found here.

  • Site-policy management -- Prior to NVFLARE 2.2, all policies (resource management, authorization and privacy protection, logging configuration) could only be defined by the Project Admin at provisioning time, and authorization policies were centrally enforced by the FL Server. NVFLARE 2.2 makes it possible for each site to define its own policies in the following areas:

    • Resource Management: the configuration of system resources, which is solely the decision of local IT.
    • Authorization Policy: a local authorization policy that determines what a user can or cannot do on the local site. See the related Federated Authorization documentation.
    • Privacy Policy: local policy that specifies what types of studies are allowed and how to add privacy protection to the learning results produced by the FL client on the local site.
    • Logging Configuration: each site can now define its own logging configuration for system generated log messages.
  • Federated XGBoost -- We developed federated XGBoost so that data scientists can perform machine learning on tabular data with this popular tree-based method. In this release, we provide several approaches for horizontal federated XGBoost.

    • Histogram-based Collaboration -- leverages the recently released (XGBoost 1.7.0) federated versions of the open-source XGBoost histogram-based distributed training algorithms, achieving results identical to centralized training (trees trained on global data information).
    • Tree-based Collaboration -- individual trees are independently trained on each client's local data without aggregating the global sample gradient histogram information. Trained trees are collected and passed to the server / other clients for aggregation and further boosting rounds.
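The histogram-based approach above rests on a simple aggregation property: per-bin counts summed across clients equal the counts computed on the pooled data. The sketch below illustrates that idea in plain Python; it is not the NVFLARE or XGBoost API, and `local_histogram` and `merge_histograms` are hypothetical names standing in for the real gradient-histogram machinery.

```python
# Illustrative sketch (plain Python, NOT the NVFLARE/XGBoost API) of
# histogram-based collaboration: each client bins its local values, the
# server sums the per-bin histograms, and the merged histogram matches the
# one computed on the pooled data -- the basis for the "identical results
# as centralized training" property.

def local_histogram(values, bin_edges):
    """Count how many local values fall into each half-open bin."""
    counts = [0] * (len(bin_edges) - 1)
    for v in values:
        for i in range(len(counts)):
            if bin_edges[i] <= v < bin_edges[i + 1]:
                counts[i] += 1
                break
    return counts

def merge_histograms(histograms):
    """Server-side aggregation: element-wise sum across clients."""
    return [sum(bins) for bins in zip(*histograms)]

edges = [0, 2, 4, 6]
client_1 = [0.5, 1.5, 3.0]
client_2 = [2.5, 5.0, 5.5]

merged = merge_histograms([local_histogram(client_1, edges),
                           local_histogram(client_2, edges)])
centralized = local_histogram(client_1 + client_2, edges)
```

In real federated XGBoost the histograms hold gradient and Hessian sums rather than raw counts, but the same sum-across-clients aggregation lets the server pick split points as if it had seen all the data.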
  • Federated Statistics -- built-in federated statistics operators that can generate global statistics based on local client side statistics. The results, for all features of all datasets at all sites as well as global aggregates, can be visualized via the visualization utility in the notebook.
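The same local-summary-to-global-aggregate pattern underlies federated statistics. As a hedged sketch (not the NVFLARE operators; `local_summary` and `global_stats` are hypothetical names), each site can report only a count, a sum, and a sum of squares for a feature, and the server can recover the exact global mean and standard deviation from those summaries:

```python
# Conceptual sketch of federated statistics (NOT the NVFLARE operators):
# sites share only sufficient statistics, never raw values, yet the server
# computes exact global aggregates.
import math

def local_summary(values):
    """Per-site summary for one feature; raw values stay local."""
    return {"count": len(values),
            "sum": sum(values),
            "sum_sq": sum(v * v for v in values)}

def global_stats(summaries):
    """Combine site summaries into the global mean and population std dev."""
    n = sum(s["count"] for s in summaries)
    total = sum(s["sum"] for s in summaries)
    total_sq = sum(s["sum_sq"] for s in summaries)
    mean = total / n
    variance = total_sq / n - mean * mean
    return mean, math.sqrt(max(variance, 0.0))

site_a = [1.0, 2.0, 3.0]
site_b = [4.0, 5.0]
mean, std = global_stats([local_summary(site_a), local_summary(site_b)])
# mean matches the mean of the pooled data [1, 2, 3, 4, 5].
```

Statistics such as histograms and quantile approximations need richer summaries, but the shape is the same: local computation, then privacy-preserving aggregation at the server.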

  • MONAI Integration -- In the 2.2 release, we provided two implementations leveraging the MONAI Bundle.

    • MONAI ClientAlgo Integration -- enables running MONAI bundles directly in a federated setting using NVFLARE.
    • MONAI ClientAlgoStats Integration -- through NVFLARE Federated Statistics, we can generate, compare, and visualize all clients' data statistics derived from MONAI summary statistics.
  • Tools and Production Support

Migration tips

To migrate from releases prior to 2.2.1, here are a few notes that might help you migrate to 2.2.1.

Related talks and publications

For a list of talks, blogs, and publications related to NVIDIA FLARE, see here.

Third party license

See the 3rdParty folder for their license files.

Project details


Release history

This version

2.2.3

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distribution

nvflare-2.2.3-py3-none-any.whl (670.0 kB)

Uploaded Python 3

File details

Details for the file nvflare-2.2.3-py3-none-any.whl.

File metadata

  • Download URL: nvflare-2.2.3-py3-none-any.whl
  • Upload date:
  • Size: 670.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.7.1 importlib_metadata/4.12.0 pkginfo/1.8.2 requests/2.28.1 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.8.10

File hashes

Hashes for nvflare-2.2.3-py3-none-any.whl
Algorithm Hash digest
SHA256 b29e9faba164e81fc874580553f73a79a73a89ef36040e00ed3c6aeaca1aa4cb
MD5 5a5bb10788caad7a922fd622e1ff6929
BLAKE2b-256 9d53b2c1cf77b3a1c62794d421806cd9253b26cef9fc2101e7dd39844cca167c

See more details on using hashes here.
