VICC normalization routine for variations


Variation Normalization

Services and guidelines for normalizing variation terms into VRS (v1.2.0) and VRSATILE (latest) compatible representations.

Public OpenAPI endpoint: https://normalize.cancervariants.org/variation

Installing with pip:

pip install variation-normalizer

About

Variation Normalization works in four main steps: tokenization, classification, validation, and translation. During tokenization, we split the input string on whitespace and parse each token to determine its type. During classification, we check whether the sequence of tokens matches a known classification pattern. During validation, we run checks such as confirming that the reference nucleotide or amino acid matches the expected value and that the given position exists on the given transcript. During translation, we return a VRS Allele object.
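As a rough, runnable illustration of that flow (these function names are hypothetical and greatly simplified, not the package's actual API):

import re

# Toy sketch of the four-step flow; the real tokenizers, classifiers,
# and validators in this package are far richer.
def tokenize(q):
    return q.split()  # tokenization: split the query on whitespace

def classify(tokens):
    # classification: a gene symbol followed by a token like "V600E"
    # resembles a protein substitution
    if len(tokens) == 2 and re.fullmatch(r"[A-Z]\d+[A-Z]", tokens[1]):
        return ("protein substitution", tokens)
    return None

def validate(classification):
    # validation: the real service checks reference residues and positions
    # against the transcript; here we only require a classification
    return classification is not None

classification = classify(tokenize("BRAF V600E"))
if validate(classification):
    print(classification)  # translation would then build a VRS Allele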

Variation Normalization is limited to the following types of variants:

  • HGVS expressions and text representations (ex: BRAF V600E):
    • protein (p.): substitution, deletion, insertion, deletion-insertion
    • coding DNA (c.): substitution, deletion, insertion, deletion-insertion
    • genomic (g.): substitution, deletion, ambiguous deletion, insertion, deletion-insertion, duplication
  • gnomAD-style VCF (chr-pos-ref-alt, ex: 7-140753336-A-T)
    • genomic (g.): substitution, deletion, insertion

We are working towards adding more types of variations, coordinates, and representations.

Endpoints

/toVRS

The /toVRS endpoint returns a list of validated VRS Variations.
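For example, with Python's requests (the q query parameter is an assumption here; confirm against the OpenAPI docs linked above):

import requests

resp = requests.get(
    "https://normalize.cancervariants.org/variation/toVRS",
    params={"q": "BRAF V600E"},  # "q" is assumed; see the OpenAPI docs
)
resp.raise_for_status()
print(resp.json())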

/normalize

The /normalize endpoint returns a Variation Descriptor containing the MANE Transcript, if one is found. If a genomic query does not include a gene, normalize returns its GRCh38 representation.
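The same pattern works for /normalize (again assuming a q parameter):

import requests

resp = requests.get(
    "https://normalize.cancervariants.org/variation/normalize",
    params={"q": "7-140753336-A-T"},  # "q" is assumed; see the OpenAPI docs
)
resp.raise_for_status()
print(resp.json())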

The steps for retrieving MANE Transcript data are as follows:

  1. Map starting annotation layer to genomic
  2. Lift over to GRCh38, if necessary
    We only support lifting over from GRCh37.
  3. Select the preferred compatible annotation, in priority order (a minimal sketch follows this list):
    1. MANE Select
    2. MANE Plus Clinical
    3. Longest Compatible Remaining Transcript
  4. Map back to starting annotation layer
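A minimal sketch of the step 3 preference cascade (illustrative names only; the real selection also tracks annotation layers and liftover):

def select_annotation(mane_select, mane_plus_clinical, longest_compatible):
    # Return the first available transcript in preference order.
    for candidate in (mane_select, mane_plus_clinical, longest_compatible):
        if candidate is not None:
            return candidate
    return None

# e.g. no MANE Select record, so MANE Plus Clinical wins
print(select_annotation(None, "MANE Plus Clinical transcript", None))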

Backend Services

Variation Normalization relies on several local data caches, which you will need to set up. The project manages its environment with pipenv, which you will also need to install.

Once pipenv is installed:

pipenv shell
pipenv lock
pipenv sync

Gene Normalizer

Variation Normalization relies on data from Gene Normalization. You must load all sources and merged concepts.
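At the time of writing, loading all sources and merged concepts looked something like this (check the gene-normalizer README for the current invocation):

python3 -m gene.cli --update_all --update_merged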

You must also have Gene Normalization's DynamoDB running for the application to work.

For more information about the gene-normalizer, visit the README.

SeqRepo

Variation Normalization uses SeqRepo to retrieve sequences at given positions on a transcript. You must download the SeqRepo data yourself.

From the root directory:

pip install seqrepo
sudo mkdir /usr/local/share/seqrepo
sudo chown $USER /usr/local/share/seqrepo
seqrepo pull -i 2021-01-29
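Once the snapshot is pulled, sequences can be fetched with the biocommons.seqrepo package; the accession and coordinates below are illustrative:

from biocommons.seqrepo import SeqRepo

# Open the snapshot downloaded above and fetch a one-base slice
# (0-based, half-open coordinates).
sr = SeqRepo("/usr/local/share/seqrepo/2021-01-29")
print(sr["refseq:NC_000007.14"][140753335:140753336])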

UTA

Variation Normalizer uses UTA to retrieve MANE Transcript data, so you will need a local UTA database.

The following commands will likely need modification appropriate for the installation environment.

  1. Install PostgreSQL

  2. Create user and database.

    $ createuser -U postgres uta_admin
    $ createuser -U postgres anonymous
    $ createdb -U postgres -O uta_admin uta
    
  3. To install locally, from the variation/data directory:

export UTA_VERSION=uta_20210129.pgd.gz
curl -O http://dl.biocommons.org/uta/$UTA_VERSION
gzip -cdq ${UTA_VERSION} | grep -v "^REFRESH MATERIALIZED VIEW" | psql -h localhost -U uta_admin --echo-errors --single-transaction -v ON_ERROR_STOP=1 -d uta -p 5433

To connect to the UTA database, you can use the default URL (postgresql://uta_admin@localhost:5433/uta/uta_20210129). If you use the default URL, you must either set the password via the UTA_PASSWORD environment variable or set the db_pwd parameter in the UTA class.

If you do not wish to use the default, you must set the environment variable UTA_DB_URL, which has the format driver://user:pass@host/database/schema.
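For example, to point at the local installation above (substitute your own credentials):

export UTA_DB_URL=postgresql://uta_admin:<password>@localhost:5433/uta/uta_20210129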

PyLiftover

Variation Normalizer uses PyLiftover to convert GRCh37 coordinates to GRCh38 coordinates.
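PyLiftover's API is small; a typical conversion looks like this (coordinates are illustrative and 0-based, and the chain file is downloaded on first use):

from pyliftover import LiftOver

lo = LiftOver("hg19", "hg38")  # GRCh37 -> GRCh38
print(lo.convert_coordinate("chr7", 140453135))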

Starting the Variation Normalization Service Locally

The gene-normalizer's DynamoDB instance and the UTA database must both be running.

To start the service, run the following:

uvicorn variation.main:app --reload

Next, view the OpenAPI docs on your local machine: http://127.0.0.1:8000/variation

Code style checks

Code style is managed by flake8 and checked prior to commit.

We use pre-commit to run conformance checks.

These hooks:

  • check code style
  • check for added large files
  • detect AWS credentials
  • detect private keys

Before your first commit, run:

pre-commit install

Testing

From the root directory of the repository:

pytest tests/

Download files

Download the file for your platform.

Source Distribution

variation-normalizer-0.3.0.tar.gz (118.2 kB, Source)

Built Distribution

variation_normalizer-0.3.0-py3-none-any.whl (4.2 MB, Python 3)

File details

Details for the file variation-normalizer-0.3.0.tar.gz.

File metadata

  • Download URL: variation-normalizer-0.3.0.tar.gz
  • Size: 118.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.0 CPython/3.10.4

File hashes

Hashes for variation-normalizer-0.3.0.tar.gz:

  • SHA256: 9d82bf9077923aa0c0523e71c91d727abc2ea0961cbc7ec58c57381971ae11dc
  • MD5: 450674ad32353110d8fc7db4243dbba1
  • BLAKE2b-256: b7fb5108b0d5e8e9eca51f5dc51c8d3635295fe03c04f94d0dd4ecc492d1a873

File details

Details for the file variation_normalizer-0.3.0-py3-none-any.whl.

File hashes

Hashes for variation_normalizer-0.3.0-py3-none-any.whl:

  • SHA256: b17f434a63d749167332a007119568169e8f916bc0394f80bdf8d3fc7d1730ae
  • MD5: b95a4462392db392d62b85a6e61c8142
  • BLAKE2b-256: aa199e8653fe0819a163dfe2ccc06c105ccda58cd3a42b95acbc3d901582ac7a
