# AWS Extensions for datapackage-pipelines
## Install
```
# clone the repo and install it with pip
git clone https://github.com/frictionlessdata/datapackage-pipelines-aws.git
cd datapackage-pipelines-aws
pip install -e .
```
## Usage
You can use datapackage-pipelines-aws as a plugin for [dpp](https://github.com/frictionlessdata/datapackage-pipelines#datapackage-pipelines). In your `pipeline-spec.yaml` it looks like this:
```yaml
...
- run: aws.to_s3
```
You will need AWS credentials to be set up; see [the guide to setting up credentials](http://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html). Typically this means exporting `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, or filling in `~/.aws/credentials`.
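The plugin presumably relies on boto3, the standard AWS SDK for Python, and its usual credential chain. A minimal sketch (not part of the plugin) to verify that credentials are discoverable before running a pipeline:
```python
# Standalone check that boto3 can discover AWS credentials from the
# environment (env vars, ~/.aws/credentials, instance role, etc.).
import boto3

credentials = boto3.Session().get_credentials()
print("AWS credentials found" if credentials else "no AWS credentials found")
```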
### to_s3
Saves the DataPackage to AWS S3.
_Parameters:_
* `bucket` - name of the bucket where the DataPackage will be stored (it must already exist!)
* `path` - path (key prefix) to the DataPackage. May contain format-string placeholders that are filled in from `datapackage.json` properties, e.g. `my/example/path/{owner}/{name}/{version}` (see the sketch after this list)
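The placeholder expansion presumably works like a plain Python format call over properties of the package descriptor. A minimal sketch of the idea, with a hypothetical descriptor (an illustration, not the plugin's actual code):
```python
# Hypothetical descriptor values, matching the example output below.
descriptor = {"owner": "my-name", "name": "py-package-name", "version": "latest"}
path_template = "my/example/path/{owner}/{name}/{version}"
# Fill the {owner}/{name}/{version} placeholders from the descriptor.
print(path_template.format(**descriptor))
# -> my/example/path/my-name/py-package-name/latest
```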
_Example:_
```yaml
datahub:
  title: datahub-to-s3
  pipeline:
    -
      run: load_metadata
      parameters:
        url: http://example.com/my-datapackage/datapackage.json
    -
      run: load_resource
      parameters:
        url: http://example.com/my-datapackage/datapackage.json
        resource: my-resource
    -
      run: aws.to_s3
      parameters:
        bucket: my.bucket.name
        path: path/{owner}/{name}/{version}
    -
      run: aws.to_s3
      parameters:
        bucket: my.another.bucket
        path: another/path/{version}
```
Executing the pipeline above will save the DataPackage under the following prefixes on S3:
* my.bucket.name/path/my-name/py-package-name/latest/...
* my.another.bucket/another/path/latest/...
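For intuition, the effect is roughly that of uploading every file of the DataPackage under the configured bucket and the expanded `path` prefix. A hedged boto3 sketch (bucket, prefix, and file list are assumed values; the real processor derives them from the descriptor and parameters):
```python
# Rough sketch of the upload step; illustration only, not the plugin's
# implementation.
import boto3

s3 = boto3.client("s3")
bucket = "my.bucket.name"
prefix = "path/my-name/py-package-name/latest"
for local_path in ("datapackage.json", "data/my-resource.csv"):
    # upload_file(Filename, Bucket, Key)
    s3.upload_file(local_path, bucket, f"{prefix}/{local_path}")
```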