
Decentralized database of biomedical synonyms


Biosynonyms


A decentralized database of synonyms for biomedical entities and concepts. This resource is meant to complement ontologies, databases, and other controlled vocabularies that provide synonyms. It's released under a permissive license (CC0), so its contents can be easily adopted by or contributed back to upstream resources.

Here's how to get the data:

import biosynonyms

# Uses an internal data structure
positive_synonyms = biosynonyms.get_positive_synonyms()
negative_synonyms = biosynonyms.get_negative_synonyms()

# Get ready for use in NER with Gilda, only using positive synonyms
gilda_terms = biosynonyms.get_gilda_terms()
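
Building on that, here's a minimal sketch (not from this README) of plugging those terms into Gilda for named entity recognition and grounding. It assumes a recent Gilda version that exposes make_grounder for building a custom grounder from a list of terms:

import gilda

import biosynonyms

# Build a custom grounder from the biosynonyms terms (assumes gilda.make_grounder
# accepts a list of Term objects; wrap in list() in case a generator is returned)
grounder = gilda.make_grounder(list(biosynonyms.get_gilda_terms()))

# Ground a free-text string and inspect the scored matches
for match in grounder.ground("PI(3,4,5)P3"):
    print(match.term.db, match.term.id, match.score)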

Synonyms

The data are also available directly as TSV files so that anyone can consume them from any programming language.

The positives.tsv has the following columns:

  1. text the synonym text itself
  2. curie the compact uniform resource identifier (CURIE) for a biomedical entity or concept, standardized using the Bioregistry
  3. name the standard name for the concept
  4. scope the predicate which encodes the synonym scope, written as a CURIE from the OBO in OWL (oio) controlled vocabulary, i.e., one of:
    • oboInOwl:hasExactSynonym
    • oboInOwl:hasNarrowSynonym (i.e., the synonym represents a narrower term)
    • oboInOwl:hasBroadSynonym (i.e., the synonym represents a broader term)
    • oboInOwl:hasRelatedSynonym
    • oboInOwl:hasSynonym (use this if the scope is unknown)
  5. type the (optional) synonym property type, written as a CURIE from the OBO Metadata Ontology (omo) controlled vocabulary, e.g., one of:
    • OMO:0003000 (abbreviation)
    • OMO:0003001 (ambiguous synonym)
    • OMO:0003002 (dubious synonym)
    • OMO:0003003 (layperson synonym)
    • OMO:0003004 (plural form)
    • ...
  6. provenance a comma-delimited list of CURIEs corresponding to publications that use the given synonym (ideally using highly actionable identifiers from semantic spaces like pubmed, pmc, doi)
  7. contributor the ORCID identifier of the contributor
  8. date the optional date when the row was curated in YYYY-MM-DD format
  9. language the (optional) ISO 2-letter language code. If missing, assumed to be American English.
  10. comment an optional comment
  11. source the source of the synonyms, usually biosynonyms unless imported from elsewhere

Here's an example of some rows in the synonyms table (with linkified CURIEs):

text | curie | scope | provenance | contributor | language
PI(3,4,5)P3 | CHEBI:16618 | oio:hasExactSynonym | pubmed:29623928, pubmed:20817957 | 0000-0003-4423-4370 | en
phosphatidylinositol (3,4,5) P3 | CHEBI:16618 | oio:hasExactSynonym | pubmed:29695532 | 0000-0003-4423-4370 | en
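
For working with the TSV directly, here's a minimal sketch of loading the positives table with pandas. The URL below is an assumption about where the raw file lives in the repository, so adjust it to the actual location:

# A minimal sketch (not from this README): load positives.tsv with pandas.
import pandas as pd

# Hypothetical raw-file URL; the real path inside the repository may differ
url = "https://raw.githubusercontent.com/biopragmatics/biosynonyms/main/positives.tsv"
df = pd.read_csv(url, sep="\t")

# Keep only exact synonyms (the prefix may appear as oio: or oboInOwl:)
exact = df[df["scope"].str.endswith("hasExactSynonym")]
print(exact[["text", "curie", "name"]].head())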

Incorrect Synonyms

The negatives.tsv has the following columns for non-trivial examples of text strings that are not synonyms of a given concept. This resource doesn't address the same issues as context-based disambiguation, but rather helps describe issues like incorrect substring matching:

  1. text the non-synonym text itself
  2. curie the compact uniform resource identifier (CURIE) for a biomedical entity or concept that the text does not refer to, standardized using the Bioregistry
  3. references same as for positives.tsv, illustrating documents where this string appears
  4. contributor the ORCID identifier of the contributor
  5. language the (optional) ISO 2-letter language code. If missing, assumed to be American English.

Here's an example of some rows in the negative synonyms table (with linkified CURIEs):

text | curie | provenance | contributor | language
PI(3,4,5)P3 | hgnc:22979 | pubmed:29623928, pubmed:20817957 | 0000-0003-4423-4370 | en
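
Here's a minimal sketch (not from this README) of how the negative synonyms could be used to filter out known false-positive matches, assuming negatives.tsv has been downloaded locally and using only the columns documented above:

# A minimal sketch: build a set of curated (text, CURIE) non-synonym pairs
import csv

with open("negatives.tsv") as file:
    reader = csv.DictReader(file, delimiter="\t")
    known_false = {(row["text"], row["curie"]) for row in reader}

def is_false_positive(text: str, curie: str) -> bool:
    """Return True if this (text, CURIE) pair is a curated non-synonym."""
    return (text, curie) in known_false

print(is_false_positive("PI(3,4,5)P3", "hgnc:22979"))  # True, per the example row above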

Known Limitations

It's hard to know which exact matches between different vocabularies could be used to deduplicate synonyms. Right now, this isn't covered but some partial solutions already exist that could be adopted.

🚀 Installation

The most recent release can be installed from PyPI with:

pip install biosynonyms

The most recent code and data can be installed directly from GitHub with:

pip install git+https://github.com/biopragmatics/biosynonyms.git

👐 Contributing

Contributions, whether filing an issue, making a pull request, or forking, are appreciated. See CONTRIBUTING.md for more information on getting involved.

👋 Attribution

⚖️ License

The code in this package is licensed under the MIT License. The data is licensed under CC0.

🍪 Cookiecutter

This package was created with @audreyfeldroy's cookiecutter package using @cthoyt's cookiecutter-snekpack template.

🛠️ For Developers

See developer instructions

The final section of the README is for developers who want to get involved by making a code contribution.

Development Installation

To install in development mode, use the following:

git clone https://github.com/biopragmatics/biosynonyms.git
cd biosynonyms
pip install -e .

Updating Package Boilerplate

This project uses cruft to keep boilerplate (i.e., configuration, contribution guidelines, documentation configuration) up-to-date with the upstream cookiecutter package. Update with the following:

pip install cruft
cruft update

More info on Cruft's update command is available here.

🥼 Testing

After cloning the repository and installing tox with pip install tox tox-uv, the unit tests in the tests/ folder can be run reproducibly with:

tox -e py

Additionally, these tests are automatically re-run with each commit in a GitHub Action.

📖 Building the Documentation

The documentation can be built locally using the following:

git clone https://github.com/biopragmatics/biosynonyms.git
cd biosynonyms
tox -e docs
open docs/build/html/index.html

Building the documentation automatically installs the package as well as the docs extra specified in pyproject.toml. Sphinx plugins like texext can be added there. Additionally, they need to be added to the extensions list in docs/source/conf.py.
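
For example, here's a hypothetical excerpt of what the extensions list in docs/source/conf.py might look like after adding such a plugin (the generated conf.py will list different default extensions):

# docs/source/conf.py (illustrative excerpt, not the generated file)
extensions = [
    "sphinx.ext.autodoc",
    "sphinx.ext.intersphinx",
    "texext",  # newly added plugin, also listed in the docs extra of pyproject.toml
]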

The documentation can be deployed to ReadTheDocs using this guide. The .readthedocs.yml YAML file contains all the configuration you'll need. You can also set up continuous integration on GitHub to check not only that Sphinx can build the documentation in an isolated environment (i.e., with tox -e docs-test) but also that ReadTheDocs can build it too.

Configuring ReadTheDocs

  1. Log in to ReadTheDocs with your GitHub account to install the integration at https://readthedocs.org/accounts/login/?next=/dashboard/
  2. Import your project by navigating to https://readthedocs.org/dashboard/import then clicking the plus icon next to your repository
  3. You can rename the repository on the next screen using a more stylized name (i.e., with spaces and capital letters)
  4. Click next, and you're good to go!

📦 Making a Release

Configuring Zenodo

Zenodo is a long-term archival system that assigns a DOI to each release of your package.

  1. Log in to Zenodo via GitHub with this link: https://zenodo.org/oauth/login/github/?next=%2F. This brings you to a page that lists all of your organizations and asks you to approve installing the Zenodo app on GitHub. Click "grant" next to any organizations you want to enable the integration for, then click the big green "approve" button. This step only needs to be done once.
  2. Navigate to https://zenodo.org/account/settings/github/, which lists all of your GitHub repositories (both in your username and any organizations you enabled). Click the on/off toggle for any relevant repositories. When you make a new repository, you'll have to come back to this page

After these steps, you're ready to go! After you make a release on GitHub (steps for this are below), you can navigate to https://zenodo.org/account/settings/github/repository/biopragmatics/biosynonyms to see the DOI for the release and a link to the Zenodo record for it.

Registering with the Python Package Index (PyPI)

You only have to do the following steps once.

  1. Register for an account on the Python Package Index (PyPI)
  2. Navigate to https://pypi-hypernode.com/manage/account and make sure you have verified your email address. A verification email might not have been sent by default, so you might have to click the "options" dropdown next to your address to get to the "re-send verification email" button
  3. Two-factor authentication has been required for PyPI since the end of 2023 (see this blog post from PyPI). This means you have to first issue account recovery codes, then set up two-factor authentication
  4. Issue an API token from https://pypi-hypernode.com/manage/account/token

Configuring your machine's connection to PyPI

You have to do the following steps once per machine. Create a file in your home directory called .pypirc and include the following:

[distutils]
index-servers =
    pypi
    testpypi

[pypi]
username = __token__
password = <the API token you just got>

# This block is optional in case you want to be able to make test releases to the Test PyPI server
[testpypi]
repository = https://test.pypi.org/legacy/
username = __token__
password = <an API token from test PyPI>

Note that since PyPI requires token-based authentication, we use __token__ as the username, verbatim. If you already have a .pypirc file with a [distutils] section, just make sure that there is an index-servers key and that pypi is in its associated list. More information on configuring the .pypirc file can be found here.

Uploading to PyPI

After installing the package in development mode and installing tox with pip install tox tox-uv, run the following from the shell:

tox -e finish

This script does the following:

  1. Uses bump-my-version to switch the version number in pyproject.toml, CITATION.cff, src/biosynonyms/version.py, and docs/source/conf.py to not have the -dev suffix
  2. Packages the code in both a tar archive and a wheel using uv build
  3. Uploads to PyPI using twine
  4. Pushes to GitHub. You'll need to make a release based on the commit where the version was bumped
  5. Bumps the version to the next patch. If you made big changes and want to bump the version by minor, you can use tox -e bumpversion -- minor after.

Releasing on GitHub

  1. Navigate to https://github.com/biopragmatics/biosynonyms/releases/new to draft a new release
  2. Click the "Choose a Tag" dropdown and select the tag corresponding to the release you just made
  3. Click the "Generate Release Notes" button to get a quick outline of recent changes. Modify the title and description as you see fit
  4. Click the big green "Publish Release" button

This will trigger Zenodo to assign a DOI to your release as well.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

biosynonyms-0.1.0.tar.gz (35.7 kB)

Uploaded Source

Built Distribution

biosynonyms-0.1.0-py3-none-any.whl (28.2 kB)

Uploaded Python 3

File details

Details for the file biosynonyms-0.1.0.tar.gz.

File metadata

  • Download URL: biosynonyms-0.1.0.tar.gz
  • Upload date:
  • Size: 35.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for biosynonyms-0.1.0.tar.gz
Algorithm | Hash digest
SHA256 | f1c6a1c2f2c751f1d8d368dc3d9b10ed5949a058991ea93098985e659dd110df
MD5 | 9b77e182edf26d9073e2b6376d0e3738
BLAKE2b-256 | 4fb399e5ea02f83f3c27a51806c9fd3611ad07791bc4936b28820c2a63d7b450

See more details on using hashes here.
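
For example, here's a minimal sketch (not part of the project) of checking a downloaded source distribution against the published SHA256 value:

# A minimal sketch: verify the SHA256 checksum of the downloaded sdist
import hashlib
from pathlib import Path

expected = "f1c6a1c2f2c751f1d8d368dc3d9b10ed5949a058991ea93098985e659dd110df"
digest = hashlib.sha256(Path("biosynonyms-0.1.0.tar.gz").read_bytes()).hexdigest()
print(digest == expected)  # True if the download is intact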

File details

Details for the file biosynonyms-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: biosynonyms-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 28.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for biosynonyms-0.1.0-py3-none-any.whl
Algorithm | Hash digest
SHA256 | 0d5607d05d98ff89ded939fdfddd023bdef62b7e774afd9fd0f26c120510390d
MD5 | 657ae373ab13a1715483c73847821392
BLAKE2b-256 | aba9df1643bda5bafcfb66f7ae5bf8ebc64bf657d1c058e49b72e59f60b870ab

See more details on using hashes here.
