
cgp-dss-data-loader

Simple data loader for CGP HCA Data Store

Common Setup

  1. (optional) We recommend using a Python 3 virtual environment.

  2. Run:

    pip3 install cgp-dss-data-loader
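Put together, the two steps above look like the following shell session. The virtual environment directory name `.venv` is just a convention, not something the project requires:

```shell
# Create and activate a Python 3 virtual environment (directory name is arbitrary).
python3 -m venv .venv
source .venv/bin/activate

# Install the loader from PyPI into the virtual environment.
pip3 install cgp-dss-data-loader
```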

Setup for Development

  1. Clone the repo:

    git clone https://github.com/DataBiosphere/cgp-dss-data-loader.git

  2. Go to the root directory of the cloned project:

    cd cgp-dss-data-loader

  3. Make sure you are on the develop branch:

    git checkout develop

  4. Run (ideally in a new virtual environment):

    make develop

Cloud Credentials Setup

Because this program uses both Amazon Web Services (AWS) and Google Cloud Platform (GCP), you will need to set up credentials for both before you can run it.

AWS credentials

  1. If you haven't already, you will need to create an IAM user and generate a new access key. Instructions are here.

  2. Next, you will need to store your credentials so that Boto can access them. Instructions are here.
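By default, Boto reads stored credentials from `~/.aws/credentials`. A minimal file looks like this (the key values below are AWS's documented placeholder examples, not real credentials):

```ini
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFXEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```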

GCP credentials

  1. Follow the steps here to set up your Google Credentials.
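One common way to make Google credentials visible to client libraries is to point the standard `GOOGLE_APPLICATION_CREDENTIALS` environment variable at a service-account key file; the path below is a placeholder:

```shell
# Tell Google client libraries where to find the service-account key file.
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/gcp-service-account-key.json"
```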

Running Tests

Run:

make test

Getting Data from Gen3 and Loading it

  1. The first step is to extract the Gen3 data you want using the sheepdog exporter. The TopMed public data extracted from sheepdog is available on the release page under Assets. Assuming you use this data, you will now have a file called topmed-public.json

  2. Make sure you are running the virtual environment you set up in the Setup instructions.

  3. Now we need to transform the data, either to the new standard format or to the outdated Gen3 format.

    • For the standard format, follow instructions at newt-transformer.

    • For the old Gen3 format, run this from the root of the project:

      python transformer/gen3_transformer.py /path/to/topmed-public.json --output-json transformed-topmed-public.json
      
  4. Now that we have our transformed output, we can run it through the loader.

    If you used the standard transformer, use this command:

    dssload --no-dry-run --dss-endpoint MY_DSS_ENDPOINT --staging-bucket NAME_OF_MY_S3_BUCKET standard --json-input-file transformed-topmed-public.json
    

    Otherwise, for the outdated Gen3 format, run:

    dssload --no-dry-run --dss-endpoint MY_DSS_ENDPOINT --staging-bucket NAME_OF_MY_S3_BUCKET gen3 --json-input-file transformed-topmed-public.json
    
  5. You did it!
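The transform-and-load steps above can be condensed into one script. This is a sketch of the outdated Gen3 path; `MY_DSS_ENDPOINT` and `NAME_OF_MY_S3_BUCKET` are placeholders you must replace with your own DSS endpoint and S3 staging bucket:

```shell
# Transform the sheepdog export into the loader's Gen3 input format.
python transformer/gen3_transformer.py topmed-public.json \
    --output-json transformed-topmed-public.json

# Load the transformed records into the DSS (replace the placeholders first).
dssload --no-dry-run \
    --dss-endpoint MY_DSS_ENDPOINT \
    --staging-bucket NAME_OF_MY_S3_BUCKET \
    gen3 --json-input-file transformed-topmed-public.json
```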
