ElasticSearch Extensions for datapackage-pipelines
==================================================
Install
-------

::

    # install from PyPI
    pip install datapackage-pipelines-elasticsearch

    # OR clone the repo and install it with pip
    git clone https://github.com/frictionlessdata/datapackage-pipelines-elasticsearch.git
    cd datapackage-pipelines-elasticsearch
    pip install -e .
Usage
-----

You can use datapackage-pipelines-elasticsearch as a plugin for
`dpp <https://github.com/frictionlessdata/datapackage-pipelines#datapackage-pipelines>`__.
In your ``pipeline-spec.yaml`` it will look like this:

.. code:: yaml

    ...
    - run: elasticsearch.dump.to_index
``dump.to_index``
~~~~~~~~~~~~~~~~~
Saves the datapackage to an ElasticSearch instance.

*Parameters* (see the example below):

- ``engine`` - Connection string for connecting to the ElasticSearch
  instance (URL syntax).

  Also supports ``env://<environment-variable>``, which indicates that
  the connection string should be fetched from the specified environment
  variable. If not specified, defaults to ``env://DPP_ELASTICSEARCH``.

  The environment variable should take the form of ``host:port`` or a
  fully-qualified URL (e.g. ``https://user:pass@host:port`` or
  ``https://host:port``).

- ``indexes`` - Mapping between resources and indexes. Keys are index
  names; each value is a list of objects with the following attributes:

  - ``resource-name`` - Name of the resource that should be dumped to
    the index.
  - ``doc-type`` - The document type to use when indexing documents.
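For example, a complete step definition in ``pipeline-spec.yaml`` might look
like the sketch below. The pipeline name, index name, resource name and
document type are placeholder values; only the ``engine`` and ``indexes``
parameter shapes come from the list above.

.. code:: yaml

    my-pipeline:
      pipeline:
        ...
        - run: elasticsearch.dump.to_index
          parameters:
            engine: env://DPP_ELASTICSEARCH
            indexes:
              my-index:
                - resource-name: my-resource
                  doc-type: my-doc-type

With ``engine`` left at its default of ``env://DPP_ELASTICSEARCH``, the
connection string is read from the ``DPP_ELASTICSEARCH`` environment
variable at run time (e.g. ``localhost:9200``).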