
ERDDAP plugin for Intake


forked from https://github.com/jmunroe/intake-erddap.

Intake-ERDDAP

Copyright 2022 Axiom Data Science

See LICENSE

Copyright 2022 James Munroe

For changes prior to 2022-10-19, all contributions are Copyright James Munroe, see PREV-LICENSE.


Read The Docs

Check out our Read The Docs page for additional documentation.

Intake is a lightweight set of tools for loading and sharing data in data science projects. Intake ERDDAP provides a set of integrations for ERDDAP.

  • Quickly identify all datasets from an ERDDAP service in a geographic region, or containing certain variables.
  • Produce a pandas DataFrame for a given dataset or query.
  • Get an xarray Dataset for gridded datasets.

The key features are:

  • Pandas DataFrames for any TableDAP dataset.
  • xarray Datasets for any GridDAP datasets.
  • Query by any or all:
    • bounding box
    • time
    • CF standard_name
    • variable name
    • plaintext search term
  • Save catalogs locally for future use.

User Installation

In the very near future, we will be offering the project on conda. Currently the project is available on PyPI, so it can be installed using pip:

  pip install intake-erddap

Developer Installation

Prerequisites

The following are prerequisites for a developer environment for this project:

  • conda
  • (optional but highly recommended) mamba. Hint: conda install -c conda-forge mamba

Note: if mamba isn't installed, replace all instances of mamba in the following instructions with conda.

  1. Create the project environment with:

    mamba env update -f environment.yml
    
  2. Install the development environment dependencies:

    mamba env update -f dev-environment.yml
    
  3. Activate the new virtual environment:

    conda activate intake-erddap
    
  4. Install the project to the virtual environment:

    pip install -e .
    

Note that you also need to install the package with pip install . at least once so that the entry points are registered correctly.

Examples

To create an intake catalog for all of an ERDDAP server's TableDAP offerings, use:

import intake_erddap
catalog = intake_erddap.ERDDAPCatalogReader(
    server="https://erddap.sensors.ioos.us/erddap"
).read()

The catalog object behaves like a dictionary, with keys representing each dataset's unique identifier within ERDDAP and values being the corresponding TableDAPReader objects. To access a Reader object for a single dataset (here, dataset_id "aoos_204"):

dataset = catalog["aoos_204"]

From the reader object, a pandas DataFrame can be retrieved:

df = dataset.read()

List the other available dataset_ids with:

list(catalog)

Consider a case where you need to find all wind data near Florida:

import intake_erddap
from datetime import datetime
bbox = (-87.84, 24.05, -77.11, 31.27)
catalog = intake_erddap.ERDDAPCatalogReader(
    server="https://erddap.sensors.ioos.us/erddap",
    bbox=bbox,
    intersection="union",
    start_time=datetime(2022, 1, 1),
    end_time=datetime(2023, 1, 1),
    standard_names=["wind_speed", "wind_from_direction"],
    variables=["wind_speed", "wind_from_direction"],
).read()

dataset_id = list(catalog)[0]
print(dataset_id)
df = catalog[dataset_id].read()

Using the standard_names input with intersection="union" searches for datasets that have both "wind_speed" and "wind_from_direction". Using the variables input subsequently narrows the dataset to only those columns, plus "time", "latitude", "longitude", and "z".

                 time (UTC)  latitude (degrees_north)  ...  wind_speed (m.s-1)  wind_from_direction (degrees)
0      2022-01-01T00:00:00Z                    28.508  ...                 3.6                          126.0
1      2022-01-01T00:10:00Z                    28.508  ...                 3.8                          126.0
2      2022-01-01T00:20:00Z                    28.508  ...                 3.6                          124.0
3      2022-01-01T00:30:00Z                    28.508  ...                 3.4                          125.0
4      2022-01-01T00:40:00Z                    28.508  ...                 3.5                          124.0
...                     ...                       ...  ...                 ...                            ...
52524  2022-12-31T23:20:00Z                    28.508  ...                 5.9                          176.0
52525  2022-12-31T23:30:00Z                    28.508  ...                 6.8                          177.0
52526  2022-12-31T23:40:00Z                    28.508  ...                 7.2                          175.0
52527  2022-12-31T23:50:00Z                    28.508  ...                 7.4                          169.0
52528  2023-01-01T00:00:00Z                    28.508  ...                 8.1                          171.0

[52529 rows x 6 columns]
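Since the result is a plain pandas DataFrame, standard pandas tooling applies from there, for example parsing the time column and aggregating. A minimal sketch, using a few synthetic rows in place of the live ERDDAP response (the column names follow the "name (units)" convention shown in the output above):

```python
import pandas as pd

# Synthetic stand-in for the DataFrame returned by catalog[dataset_id].read().
df = pd.DataFrame(
    {
        "time (UTC)": [
            "2022-01-01T00:00:00Z",
            "2022-01-15T00:00:00Z",
            "2022-02-01T00:00:00Z",
            "2022-02-15T00:00:00Z",
        ],
        "wind_speed (m.s-1)": [3.6, 3.8, 5.9, 6.8],
    }
)

# Parse the ISO-8601 timestamps and index the frame by time.
df["time (UTC)"] = pd.to_datetime(df["time (UTC)"])
df = df.set_index("time (UTC)")

# Monthly mean wind speed (resample to month-start bins).
monthly = df["wind_speed (m.s-1)"].resample("MS").mean()
print(monthly)
```

On a real query result the same three lines (parse, index, resample) work unchanged; only the column names need to match the dataset being read.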
