VICC normalization routine for variations

Variation Normalization

Services and guidelines for normalizing variation terms into VRS and VRSATILE compatible representations.

Public OpenAPI endpoint: https://normalize.cancervariants.org/variation

Installing with pip:

pip install variation-normalizer

About

Variation Normalization works in four main steps: tokenization, classification, validation, and translation. During tokenization, we split strings on whitespace and parse each token to determine its type. During classification, we specify the order of tokens a classification can have. We then run validation checks, such as ensuring that the reference nucleotide or amino acid matches the expected value and that the given position exists on the given transcript. During translation, we return a VRS Allele object.
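A minimal conceptual sketch of the tokenize-and-classify idea (illustrative only; these names are not the package's actual API):

import re

# Illustrative only: split on whitespace, then type each token.
PROTEIN_SUB = re.compile(r"^(?P<ref>[A-Z])(?P<pos>\d+)(?P<alt>[A-Z*])$")

def tokenize(query: str):
    tokens = []
    for part in query.split():
        match = PROTEIN_SUB.match(part)
        if match:
            tokens.append(("ProteinSubstitution", match.groupdict()))
        else:
            tokens.append(("GeneSymbol", part))
    return tokens

print(tokenize("BRAF V600E"))
# [('GeneSymbol', 'BRAF'), ('ProteinSubstitution', {'ref': 'V', 'pos': '600', 'alt': 'E'})]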

Variation Normalization is limited to the following types of variants:

  • HGVS expressions and text representations (ex: BRAF V600E):
    • protein (p.): substitution, deletion, insertion, deletion-insertion
    • coding DNA (c.): substitution, deletion, insertion, deletion-insertion
    • genomic (g.): substitution, deletion, ambiguous deletion, insertion, deletion-insertion, duplication
  • gnomAD-style VCF (chr-pos-ref-alt, ex: 7-140753336-A-T)
    • genomic (g.): substitution, deletion, insertion

We are working towards adding more types of variations, coordinates, and representations.

Endpoints

/to_vrs

The /to_vrs endpoint returns a list of validated VRS Variations.

/normalize

The /normalize endpoint returns a Variation Descriptor containing the MANE Transcript, if one is found. If a genomic query is not given a gene, normalize will return its GRCh38 representation.
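For example, the public endpoints can be queried over HTTP; a small sketch (the q query parameter name is an assumption here, so check the OpenAPI docs for the exact request schema):

import requests

# Query the public instance; the endpoint paths come from this README,
# but the "q" parameter name is an assumption (consult the OpenAPI docs).
BASE = "https://normalize.cancervariants.org/variation"

resp = requests.get(f"{BASE}/to_vrs", params={"q": "BRAF V600E"})
resp.raise_for_status()
print(resp.json())

A query against /normalize follows the same pattern.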

The steps for retrieving MANE Transcript data are as follows (a sketch of the step-3 preference order appears after the list):

  1. Map starting annotation layer to genomic
  2. Liftover to preferred GRCh38
    We only support lifting over from GRCh37.
  3. Select preferred compatible annotation
    1. MANE Select
    2. MANE Plus Clinical
    3. Longest Compatible Remaining Transcript
  4. Map back to starting annotation layer
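A minimal sketch of the step-3 preference order, assuming a hypothetical mapping of annotation category to transcript accession:

# Hypothetical illustration of the step-3 transcript preference order.
PRIORITY = ["MANE Select", "MANE Plus Clinical", "Longest Compatible Remaining"]

def pick_transcript(candidates: dict):
    """Return the transcript for the most preferred category present."""
    for category in PRIORITY:
        if category in candidates:
            return candidates[category]
    return None

# Placeholder accession for illustration.
print(pick_transcript({"MANE Plus Clinical": "NM_012345.6"}))  # NM_012345.6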

Backend Services

Variation Normalization relies on some local data caches that you will need to set up. It uses pipenv to manage its environment; you will also need to install pipenv.

Once pipenv is installed:

pipenv shell
pipenv lock
pipenv sync

Gene Normalizer

Variation Normalization relies on data from Gene Normalization. You must load all sources and merged concepts.

You must also have Gene Normalization's DynamoDB running for the application to work.

For more information about the gene-normalizer, visit the README.

SeqRepo

Variation Normalization relies on seqrepo, which you must download yourself.

Variation Normalizer uses seqrepo to retrieve sequences at given positions on a transcript.

From the root directory:

pip install seqrepo
sudo mkdir /usr/local/share/seqrepo
sudo chown $USER /usr/local/share/seqrepo
seqrepo pull -i 2021-01-29
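Once the snapshot is pulled, sequences can be read with the biocommons.seqrepo Python API; a small sketch (the accession and coordinates below are only illustrative, taken from the gnomAD-style example above):

from biocommons.seqrepo import SeqRepo

# Path assumes the default snapshot location created above.
sr = SeqRepo("/usr/local/share/seqrepo/2021-01-29")

# Fetch one base on chromosome 7 (GRCh38) using interbase coordinates;
# per the gnomAD-style example above (7-140753336-A-T), this should print "A".
print(sr["refseq:NC_000007.14"][140753335:140753336])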

UTA

Variation Normalizer also uses UTA to retrieve MANE Transcript data.

The following commands will likely need modification appropriate for the installation environment.

  1. Install PostgreSQL

  2. Create user and database.

    $ createuser -U postgres uta_admin
    $ createuser -U postgres anonymous
    $ createdb -U postgres -O uta_admin uta
    
  3. To install locally, from the variation/data directory:

export UTA_VERSION=uta_20210129.pgd.gz
curl -O http://dl.biocommons.org/uta/$UTA_VERSION
gzip -cdq ${UTA_VERSION} | grep -v "^REFRESH MATERIALIZED VIEW" | psql -h localhost -U uta_admin --echo-errors --single-transaction -v ON_ERROR_STOP=1 -d uta -p 5433

To connect to the UTA database, you can use the default URL (postgresql://uta_admin@localhost:5433/uta/uta_20210129). If you use the default URL, you must either set the password using the environment variable UTA_PASSWORD or set the db_pwd parameter in the UTA class.

If you do not wish to use the default, you must set the environment variable UTA_DB_URL, which has the format driver://user:pass@host/database/schema.
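For example, if running the application programmatically, the variable could be set before startup (the credentials below are placeholders):

import os

# Placeholder credentials following driver://user:pass@host/database/schema.
os.environ["UTA_DB_URL"] = "postgresql://uta_admin:uta_pass@localhost:5433/uta/uta_20210129"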

PyLiftover

Variation Normalizer uses PyLiftover to convert GRCh37 coordinates to GRCh38 coordinates.
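A small sketch of PyLiftover usage (the coordinate shown is only illustrative; PyLiftover positions are 0-based):

from pyliftover import LiftOver

# The GRCh37-to-GRCh38 chain file is downloaded automatically on first use.
lo = LiftOver("hg19", "hg38")

# convert_coordinate takes a chromosome name and a 0-based position and
# returns a list of (chromosome, position, strand, score) tuples.
print(lo.convert_coordinate("chr7", 140453135))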

Starting the Variation Normalization Service Locally

The gene-normalizer's DynamoDB instance and the UTA database must be running.

To start the service, run the following:

uvicorn variation.main:app --reload

Next, view the OpenAPI docs on your local machine: http://127.0.0.1:8000/variation

Init coding style tests

Code style is managed by flake8 and checked prior to commit.

We use pre-commit to run conformance tests.

This ensures the following checks are run:

  • Check code style
  • Check for added large files
  • Detect AWS credentials
  • Detect private keys

Before your first commit, run:

pre-commit install

Testing

From the root directory of the repository:

pytest tests/

