
# What is Neurosynth?

Neurosynth is a Python package for large-scale synthesis of functional neuroimaging data.

## Installation

Dependencies:

  • NumPy/SciPy

  • pandas

  • NiBabel

  • [ply](http://www.dabeaz.com/ply/) (optional, for complex structured queries)

  • scikit-learn (optional, used in some classification functions)

Assuming you have those packages in working order, the easiest way to install Neurosynth is from the command line with pip:

> pip install neurosynth

Alternatively (for the latest development version), download or clone the package from GitHub, then install it from source:

> python setup.py install

Depending on your operating system, you may need superuser privileges (prefix the line above with `sudo`).

That’s it! You should now be ready to roll.

## Documentation

Documentation, including a [full API reference](http://neurosynth.readthedocs.org/en/latest/reference.html), is available [here](http://neurosynth.readthedocs.org/en/latest/) (caution: work in progress).

## Usage

Running analyses in Neurosynth is pretty straightforward. We’re working on a user manual; in the meantime, you can take a look at the code in the /examples directory for an illustration of some common use cases. Some of the examples are in IPython Notebook format; you can view these online by entering the URL of the raw example on GitHub into the online [IPython Notebook Viewer](http://nbviewer.ipython.org). For example, [this tutorial](http://nbviewer.ipython.org/urls/raw.github.com/neurosynth/neurosynth/master/examples/neurosynth_demo.ipynb) provides a nice overview. The rest of this Quickstart guide covers just the bare minimum.

The Neurosynth dataset resides in a separate repository located [here](http://github.com/neurosynth/neurosynth-data). The easiest way to get the most recent data, though, is from within the Neurosynth package itself:

> import neurosynth as ns
> ns.dataset.download(path='.', unpack=True)

…which should download the latest database files and save them to the current directory. Alternatively, you can manually download the data files from the [neurosynth-data repository](http://github.com/neurosynth/neurosynth-data). The latest dataset is always stored in current_data.tar.gz in the root folder. Older datasets are also available in the archive folder.

The dataset archive (current_data.tar.gz) contains 2 files: database.txt and features.txt. These contain the activations and meta-analysis tags for Neurosynth, respectively.
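If you want to inspect the raw data yourself, features.txt is a tab-delimited table with one row per study and one column per feature term, where each cell holds that term's frequency in the corresponding article. Here is a minimal, standard-library sketch of reading a table in that shape; the small inline sample and its column names are made up for illustration, so check the actual file in your download before relying on the exact layout:

```python
import csv
import io

# A tiny stand-in for features.txt: tab-delimited, first column = study ID
# (PMID), remaining columns = per-study frequencies of each feature term.
# The sample values below are invented for illustration.
sample = (
    "pmid\temotion\tmemory\n"
    "10000001\t0.012\t0.0\n"
    "10000002\t0.0\t0.034\n"
)

features = {}
reader = csv.reader(io.StringIO(sample), delimiter="\t")
header = next(reader)
for row in reader:
    pmid, values = row[0], [float(v) for v in row[1:]]
    features[pmid] = dict(zip(header[1:], values))

print(features["10000001"]["emotion"])  # 0.012
```

To read the real file, replace `io.StringIO(sample)` with `open('data/features.txt')`.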

Once you have the data in place, you can generate a new Dataset instance from the database.txt file:

> from neurosynth.base.dataset import Dataset
> dataset = Dataset('data/database.txt')

This should take several minutes to process. Note that this is a memory-intensive operation, and may be very slow on machines with less than 8 GB of RAM.
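Because parsing is slow, it's worth caching the parsed Dataset so later sessions can skip the multi-minute initialization. Dataset objects are ordinary Python objects, so the standard `pickle` module works for this; the `FakeDataset` class below is just a stand-in so the sketch runs without Neurosynth installed:

```python
import os
import pickle
import tempfile

class FakeDataset:
    """Stand-in for neurosynth.base.dataset.Dataset, for illustration only."""
    def __init__(self):
        self.n_studies = 3

# The expensive parse happens once...
dataset = FakeDataset()

# ...then we cache the result to disk so future sessions load it instantly.
path = os.path.join(tempfile.gettempdir(), "dataset.pkl")
with open(path, "wb") as f:
    pickle.dump(dataset, f)

# In a later session:
with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored.n_studies)  # 3
```

Neurosynth versions from this era also ship `dataset.save(path)` and `Dataset.load(path)` convenience methods that wrap the same idea; check the API reference for your installed version before relying on those names.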

Once initialized, the Dataset instance contains activation data from nearly 10,000 published neuroimaging articles. But it doesn’t yet have any features attached to those data, so let’s add some:

> dataset.add_features('data/features.txt')

Now our Dataset has both activation data and some features we can use to manipulate the data. In this case, the features are just term-based tags, i.e., words that occur in the abstracts of the articles from which the dataset is drawn (for details, see the Nature Methods paper describing Neurosynth, or the Neurosynth website).

We can now do various kinds of analyses with the data. For example, we can use the features we just added to perform automated large-scale meta-analyses. Let’s see what features we have:

> dataset.get_feature_names()
['phonetic', 'associative', 'cues', 'visually', ... ]

We can use these features, either in isolation or in combination, to select articles for inclusion in a meta-analysis. For example, suppose we want to run a meta-analysis of emotion studies. We could operationally define a study of emotion as one in which the authors used words starting with 'emo' with high frequency:

> ids = dataset.get_studies(features='emo*', frequency_threshold=0.001)

Here we’re asking for a list of IDs of all studies that use words starting with 'emo' (e.g., 'emotion', 'emotional', 'emotionally', etc.) at a frequency of 1 in 1,000 words or greater (in other words, if an article contains 5,000 words of text, it will only be included in our set if it uses words starting with 'emo' at least 5 times).
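The selection rule above can be mimicked in a few lines of plain Python: match feature names against a shell-style wildcard and keep any study where a matching feature meets the frequency threshold. This is an illustrative reimplementation under those assumptions, not Neurosynth's actual `get_studies` code, and the toy table below is invented:

```python
from fnmatch import fnmatch

def select_studies(feature_table, pattern, frequency_threshold=0.001):
    """Return IDs of studies where any feature matching `pattern`
    (shell-style wildcard, e.g. 'emo*') occurs at or above the
    frequency threshold. Illustrative only."""
    ids = []
    for study_id, freqs in feature_table.items():
        if any(fnmatch(name, pattern) and freq >= frequency_threshold
               for name, freq in freqs.items()):
            ids.append(study_id)
    return ids

# Toy feature table: study ID -> {feature: frequency}.
table = {
    9001: {"emotion": 0.004, "memory": 0.0},
    9002: {"emotion": 0.0004, "memory": 0.02},  # below the 1-in-1,000 cutoff
    9003: {"emotional": 0.002, "cues": 0.001},
}
print(select_studies(table, "emo*"))  # [9001, 9003]
```

Study 9002 is excluded because its only 'emo'-matching feature falls below the 1-in-1,000 threshold, exactly the logic described above.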

> len(ids)
639

The resulting set includes 639 studies.

Once we’ve got a set of studies we’re happy with, we can run a simple meta-analysis, prefixing all output files with the string ‘emotion’ to distinguish them from other analyses we might run:

> from neurosynth.analysis import meta
> ma = meta.MetaAnalysis(dataset, ids)
> ma.save_results('some_directory/emotion')

You should now have a set of NIfTI-format brain images on your drive that display various meta-analytic results. The image names are somewhat cryptic; see the Documentation for details. It’s important to note that the meta-analysis routines currently implemented in Neurosynth aren’t very sophisticated; they’re designed primarily for efficiency (most analyses should take just a few seconds), and take multiple shortcuts compared to other packages like ALE or MKDA. But with that caveat in mind (and one that will hopefully be remedied in the near future), Neurosynth gives you a streamlined and quick way of running large-scale meta-analyses of fMRI data.

## Getting help

For a more comprehensive set of examples, see [this tutorial](http://nbviewer.ipython.org/urls/raw.github.com/neurosynth/neurosynth/master/examples/neurosynth_demo.ipynb), also included in IPython Notebook form in the examples/ folder (along with several other simpler examples).

For bugs or feature requests, please [create a new issue](https://github.com/neurosynth/neurosynth/issues/new). If you run into problems installing or using the software, try posting to the [Neurosynth Google group](https://groups.google.com/forum/#!forum/neurosynthlist) or email [Tal Yarkoni](mailto:tyarkoni@gmail.com).
