# AWS Extensions for datapackage-pipelines
## Install
```bash
# clone the repo and install it with pip
git clone https://github.com/frictionlessdata/datapackage-pipelines-aws.git
cd datapackage-pipelines-aws
pip install -e .
```
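Once installed, the plugin's processors are picked up by dpp automatically. As a quick sanity check (a sketch assuming the `dpp` CLI from datapackage-pipelines is on your PATH), running it with no arguments should list the pipelines found in the current directory:
```bash
# run from a directory containing a pipeline-spec.yaml;
# dpp with no arguments lists the available pipelines
dpp
```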
## Usage
You can use datapackage-pipelines-aws as a plugin for [dpp](https://github.com/frictionlessdata/datapackage-pipelines#datapackage-pipelines). In your `pipeline-spec.yaml` it will look like this:
```yaml
...
- run: aws.dump.to_s3
```
You will need AWS credentials set up. See [the guide to setting up credentials](http://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html).
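For example, one common way to provide credentials is via environment variables (a sketch with placeholder values; boto3 will also pick up `~/.aws/credentials` or an attached IAM role):
```bash
# example credentials via environment variables (values are placeholders)
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_DEFAULT_REGION=us-east-1
```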
### dump.to_s3
Saves the DataPackage to AWS S3.
_Parameters:_
* `bucket` - Name of the bucket where the DataPackage will be stored (the bucket must already exist!)
* `acl` - ACL to apply to the uploaded files. Defaults to `public-read` (see the [boto3 docs](http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.put_object) for more info).
* `path` - Path (key/prefix) to the DataPackage. May contain format-string placeholders that are resolved from properties of `datapackage.json`, e.g. `my/example/path/{owner}/{name}/{version}`
_Example:_
```yaml
datahub:
  title: datahub-to-s3
  pipeline:
    -
      run: load_metadata
      parameters:
        url: http://example.com/my-datapackage/datapackage.json
    -
      run: load_resource
      parameters:
        url: http://example.com/my-datapackage/datapackage.json
        resource: my-resource
    -
      run: aws.dump.to_s3
      parameters:
        bucket: my.bucket.name
        path: path/{owner}/{name}/{version}
    -
      run: aws.dump.to_s3
      parameters:
        bucket: my.another.bucket
        path: another/path/{version}
        acl: private
```
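To execute this pipeline, run the `dpp` CLI from the directory containing the spec (a sketch; the pipeline id `./datahub` is derived from the top-level key above):
```bash
# run the pipeline defined under the `datahub` key in pipeline-spec.yaml
dpp run ./datahub
```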
Executing the pipeline above will save the DataPackage under the following prefixes on S3 (here `{owner}` resolved to `my-name`, `{name}` to `py-package-name`, and `{version}` to `latest`):
* my.bucket.name/path/my-name/py-package-name/latest/...
* my.another.bucket/another/path/latest/...
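To confirm the upload, you can list the resulting objects (a sketch assuming the AWS CLI is installed and configured with the same credentials):
```bash
# list everything dumped under the first prefix
aws s3 ls s3://my.bucket.name/path/ --recursive
```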