
DBnomics data model

Define, validate and transform DBnomics data.

For a quick schematic look at the data model, please read the cheat_sheet.md file. If you are a developer working on fetchers, you can print it!

See also these sample directories.

Note: The ✓ symbol means that a constraint is validated by the validation script.

Entities and relationships

provider -> dataset -> time series -> observations
  • Each provider contains datasets
  • Each dataset contains time series
  • Each time series contains observations
  • Each observation is a tuple like (period, value, attribute1, attribute2, ..., attributeN), where attributes are optional

Note: the singular and plural forms of "time series" are identical (cf. Wiktionary).

Storage

DBnomics data is stored in regular directories of the file-system.

Each storage directory contains the data of one provider, as converted by a fetcher.

  • ✓ The directory name MUST be {provider_code}-json-data.
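
For illustration, a hypothetical layout of such a directory for a provider whose code is provider1 (the individual files are described in the sections below; dataset and series codes are made up):

provider1-json-data/
    provider.json
    category_tree.json
    dataset1/
        dataset.json
        A.DE.tsv
        A.FR.tsv
    dataset2/
        dataset.json
        series.jsonl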

Revisions

Each storage directory is versioned using Git in order to track revisions.

General constraints

Minimal data

Data MUST NOT be stored if it adds no value or if it can be computed from any other data.

As a consequence:

  • series names MUST NOT be generated when they are not provided by source data; DBnomics can generate a name from the dimensions' values codes.

Data stability

Any commit in the storage directory of a provider MUST reflect a change from the side of the provider.

Data conversions MUST be stable: running a conversion script on the same source-data MUST NOT change converted data.

As a consequence:

  • when series codes are generated from a dimensions dict, always use the same order;
  • properties of JSON objects MUST be sorted alphabetically;
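
For example, a converted JSON fragment would be written with its properties sorted alphabetically (a minimal, made-up excerpt):

{
  "code": "dataset1",
  "dimensions_codes_order": ["FREQ", "COUNTRY"],
  "name": "Dataset 1"
}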

/provider.json

This JSON file contains meta-data about the provider.

See its JSON schema.
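
For illustration, a hypothetical provider.json (the authoritative list of properties is given by the schema; values are made up):

{
  "code": "provider1",
  "name": "Provider 1",
  "region": "World",
  "terms_of_use": "https://example.org/terms",
  "website": "https://example.org"
}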

/category_tree.json

This JSON file contains a tree of categories whose leaves are datasets and nodes are categories.

This file is optional:

  • if categories are provided by source data, it SHOULD exist;
  • if it's missing, DBnomics will generate the tree as a list of datasets ordered lexicographically;
  • it MUST NOT be written if it is identical to the generated list mentioned above (due to the general constraint about minimal data).

See its JSON schema.
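
As an illustration, a hypothetical category_tree.json with a single category containing two datasets (leaves reference datasets by their code; the exact set of allowed properties is defined by the schema):

[
  {
    "children": [
      {"code": "dataset1", "name": "Dataset 1"},
      {"code": "dataset2", "name": "Dataset 2"}
    ],
    "code": "c1",
    "name": "Category 1"
  }
]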

/{dataset_code}/

This directory contains data about a dataset of the provider.

  • The directory name MUST be equal to the dataset code.

/{dataset_code}/dataset.json

This JSON file contains meta-data about a dataset of the provider.

See its JSON schema.

The series property is optional: see the storing time series section.
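
For illustration, a hypothetical dataset.json describing its dimensions and embedding its series meta-data under the series property; this is an illustrative sketch with made-up values, with properties sorted alphabetically as required above (the authoritative property names and constraints are given by the JSON schema):

{
  "code": "dataset1",
  "dimensions_codes_order": ["FREQ", "COUNTRY"],
  "dimensions_labels": {
    "COUNTRY": "Country",
    "FREQ": "Frequency"
  },
  "dimensions_values_labels": {
    "COUNTRY": {"DE": "Germany", "FR": "France"},
    "FREQ": {"A": "Annual"}
  },
  "name": "Dataset 1",
  "series": [
    {
      "code": "A.FR",
      "dimensions": {"COUNTRY": "FR", "FREQ": "A"},
      "name": "France, annual"
    }
  ]
}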

/{dataset_code}/series.jsonl

This JSON-lines file contains meta-data about time series of a dataset of a provider.

Each line is a JSON object validated against this JSON schema.

This file is optional: see storing time series section.
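
For illustration, two lines of a hypothetical series.jsonl, each a standalone, non-indented JSON object (values are made up):

{"code": "A.DE", "dimensions": {"COUNTRY": "DE", "FREQ": "A"}, "name": "Germany, annual"}
{"code": "A.FR", "dimensions": {"COUNTRY": "FR", "FREQ": "A"}, "name": "France, annual"}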

/{dataset_code}/{series_code}.tsv

This TSV file contains observations of a time series of a dataset of a provider.

These files are optional: see storing time series section.

Constraints on time series

  • With providers using series codes composed of dimensions values codes:
    • The separator MUST be '.' to be compatible with series codes masks. It is allowed to change the separator used originally by the provider. Example: this commit on BIS.
    • The parts of the series code MUST follow the order defined by dimensions_codes_order. Example: if dimensions_codes_order = ["FREQ", "COUNTRY"], the series code MUST be A.FR and not FR.A.
    • When dimensions codes order is not defined by the provider, the lexicographic order of the dimensions codes SHOULD be used, and the dimensions_codes_order key MUST NOT be written. Example: if dimensions are FREQ and COUNTRY, the series code is FR.A because dimensions codes are sorted alphabetically: ["COUNTRY", "FREQ"].

Constraints on TSV files

Note: The ✓ symbol means that a constraint is validated by the validation script.

  • TSV files MUST be encoded in UTF-8.
  • ✓ The two first columns of the header MUST be named PERIOD and VALUE.
  • ✓ Each row MUST have the same number of columns as the header.
  • The values of the PERIOD column:
    • ✓ MUST respect a specific format:
      • YYYY for years
      • YYYY-MM for months (MUST be padded for MM)
      • YYYY-MM-DD for days (MUST be padded for MM and DD)
      • YYYY-Q[1-4] for year quarters
        • example: 2018-Q1 represents jan to mar 2018, and 2018-Q4 represents oct to dec 2018
      • YYYY-S[1-2] for year semesters (aka bi-annual, semi-annual)
        • example: 2018-S1 represents jan to jun 2018, and 2018-S2 represents jul to dec 2018
      • YYYY-B[1-6] for pairs of months (aka bi-monthly)
        • example: 2018-B1 represents jan + feb 2018, and 2018-B6 represents nov + dec 2018
      • YYYY-W[01-53] for year weeks (MUST be padded)
    • ✓ MUST all have the same format
    • ✓ MUST NOT include average values, like M13 or Q5 periods (some providers do this)
    • MUST be consistent with the frequency (i.e. use YYYY-Q[1-4] for quarterly observations, not YYYY-MM-DD, even if those daily periods are 3 months apart)
  • ✓ The PERIOD column MUST be sorted in an ascending order.
  • ✓ The values of the VALUE column MUST either:
    • follow the lexical representation of decimal in XML Schema: a non-empty finite-length sequence of decimal digits separated by a period as a decimal indicator. An optional leading sign is allowed. If the sign is omitted, "+" is assumed. Leading and trailing zeroes are optional. If the fractional part is zero, the period and following zero(es) can be omitted. For example: '-1.23', '12678967.543233', '+100000.00', '210'.
    • OR be NA meaning "not available".
  • TSV files CAN have supplementary columns in order to tag some observation values.
    • The values of these columns are free; an empty string "" means no tag.
    • Reuse values defined by the provider if possible; otherwise, define values with the DBnomics team.
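
To illustrate these constraints, here is a minimal, made-up TSV file; columns are separated by tab characters, and OBS_STATUS is a hypothetical supplementary column used to tag observations:

PERIOD	VALUE	OBS_STATUS
2016	1.2	
2017	NA	M
2018	1.5	E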

Storing time series

Meta-data

Time series meta-data can be stored either:

  • in {dataset_code}/dataset.json under the series property as a JSON array of objects
  • in {dataset_code}/series.jsonl, a JSON-lines file, each line being a (non-indented) JSON object

When a dataset contains a huge number of time series, the dataset.json file grows drastically. In that case, the series.jsonl format is recommended, because parsing a JSON-lines file line by line consumes less memory than loading a whole JSON file. A maximum of 1000 time series in dataset.json is recommended; beyond that limit, the series key of the dataset.json file should be {"path": "series.jsonl"}.

Whatever format you choose, the JSON objects are validated against this JSON schema.

Constraints additional to the schema:

  • ✓ The code properties of the series list MUST be unique

Examples:

  • this dataset stores time series meta-data in dataset.json under the series property
  • this dataset stores time series meta-data in series.jsonl

Dimensions values order

Sometimes the dimensions values order is different from the lexicographic one.

Example: for the dimension "country", we have "All countries [ALL]", "Afghanistan [AF]", "France [FR]", "Germany [DE]", "Other countries [OTHER]". In this case it seems more natural to display "All countries" first and "Other countries" last. We don't want "Afghanistan" to come before "All countries" just because of lexicographic order.

It is possible to encode this order in dataset.json like this:

{
  "dimensions_values_labels": {
    "country": [
      ["ALL", "All countries"],
      ["AF", "Afghanistan"],
      ["FR", "France"],
      ["DE", "Germany"],
      ["OTHER", "Other countries"]
    ]
  }
}

Another case is when the dimensions values talk about units, and we want to order units from the smallest to the largest. For example, "millimeter", "centimeter", "meter", "kilometer".

Series attributes

In conjunction with dimensions, series can have attributes. They behave like dimensions: they have codes and labels.

Example (from provider1-json-data/dataset2/dataset.json):

  • in dataset.json:

"attributes_labels": {
    "UNIT_MULT": "Unit of multiplier"
},
"attributes_values_labels": {
    "UNIT_MULT": {
        "9": "× 10^9"
    }
},

  • then, for each series (in dataset.json or in the series.jsonl file):

"attributes": {
    "UNIT_MULT": "9"
},

Observations

Time-series observations can be stored either:

  • in {dataset_code}/{series_code}.tsv TSV files
  • in {dataset_code}/series.jsonl, a JSON-lines file, each line being a (non-indented) JSON object, under the observations property of each object.

When a dataset contains a huge number of time series, the number of TSV files grows drastically. In this case, the series.jsonl format is recommended, because a single file consumes less disk space than thousands of small files (each one taking some kilobytes in the file-system's table of contents), and because Git becomes slower as the number of committed files increases. A maximum of 1000 TSV files is recommended.

Whatever format you choose, the JSON objects are validated against this JSON schema.
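
For illustration, a hypothetical series.jsonl line storing its observations inline under the observations property; this sketch assumes observations are given as rows mirroring the TSV columns, with a header row first (the authoritative layout is defined by the JSON schema):

{"code": "A.FR", "dimensions": {"COUNTRY": "FR", "FREQ": "A"}, "name": "France, annual", "observations": [["PERIOD", "VALUE"], ["2016", 1.2], ["2017", 1.5]]}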


Adding documentation to data (description and notes fields)

Datasets and series can be documented using description and notes fields.

  • description presents the meaning of the data;
  • notes presents remarks about the data. Example: "Before March 2002, exposures were netted across the banking and trading books. This has necessitated a break in the series."

=> see this example
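
For instance, in dataset.json (or in a series object), made-up values could look like:

"description": "Consolidated banking statistics of reporting banks.",
"notes": "Before March 2002, exposures were netted across the banking and trading books. This has necessitated a break in the series."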

Data validation

dbnomics-data-model comes with a validation script. Validate a JSON data directory:

dbnomics-validate <storage_dir>

# for example:
dbnomics-validate wto-json-data

Note that some of the constraints expressed above are not yet checked by the validation script.

Some errors are warnings and are not displayed by default. Use the --developer-mode option to display all errors.
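
For example, to display all errors including warnings:

dbnomics-validate --developer-mode wto-json-data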

Testing

Run unit tests:

python setup.py test

Code quality:

pylint --rcfile ../code-style/pylintrc *.py dbnomics_data_model

See also: https://git.nomics.world/dbnomics-fetchers/documentation/wikis/code-style

Run validation script against dummy providers:

dbnomics-validate tests/fixtures/provider1-json-data
dbnomics-validate tests/fixtures/provider2-json-data

Changelog

See CHANGELOG.md. It contains an upgrade guide explaining how to modify the source code of your fetcher, if the data model changes in unexpected ways.

Publish a new version

For package maintainers:

git tag x.y.z
git push
git push --tags

GitLab CI will publish the package to https://pypi-hypernode.com/project/dbnomics-data-model/ (see .gitlab-ci.yml).
